Why is there no unsigned double?
  • 4
    [IEEE 754][1] has no concept of an unsigned real number, so C# has none either. [1]: en.wikipedia.org/wiki/IEEE_754-2008 - Costantino Rupert
  • 1
    Because in real-world problems, nobody needs an unsigned double. - avp
  • 1
    @Asen, why can't comments simply be downvoted? The question is about double, and you bring up 32-bit registers. Not to mention that far from everything in a language (or platform) has to be supported directly by hardware. - avp
  • 3
    @Asen - What do registers have to do with it? IEEE 754 specifies a set of real-number types (binary16 through binary128, and the decimal variants) whose information is encoded in the corresponding number of bits. Which registers carry that information is each platform's own business, down to the ABI. IEEE 754 has no concept of an unsigned real number. You can write and ratify your own standard that introduces such a concept and describes how to encode it, but it won't be IEEE 754, which is what platforms like .NET implement. - Costantino Rupert
  • 1
    People should be shot without mercy for questions like this! It's about the same as asking why the natural numbers don't include negative ones! - Barmaley

2 Answers

Because the double type stores a floating-point number: its significand (the significant digits) and its exponent are stored separately. This is unlike ordinary integers, where the order of magnitude is expressed by the position of the digits themselves.

Because of this complicated storage format, fast shift operations cannot be applied to such a number: they would break everything, since shifts manipulate binary digits while the exponent expresses a power of ten. That rules out half of the tricks for which an unsigned type would be useful. The other half is ruled out by the fact that modulo operations cannot be performed on floating-point numbers. As a result, an unsigned type is almost completely pointless.
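The point about shifts can be illustrated with a small sketch (in Python rather than C#, purely for demonstration): reinterpret a double's 64 raw bits as an integer and shift them. For an integer, a left shift by one doubles the value; for a float, the shift smears significand bits into the exponent field and wrecks the value.

```python
import struct

def bits(x: float) -> int:
    """Reinterpret a binary64 double's raw bits as a 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def from_bits(b: int) -> float:
    """Reinterpret a 64-bit integer as a binary64 double."""
    return struct.unpack("<d", struct.pack("<Q", b))[0]

x = 6.0
# For integers, (n << 1) == n * 2. For a double, shifting the raw bit
# pattern pushes significand bits into the exponent (and the old
# exponent's top bit into the sign), producing garbage.
shifted = from_bits((bits(x) << 1) & (2**64 - 1))
print(x * 2)    # 12.0 — the arithmetic result
print(shifted)  # not 12.0: a tiny negative denormal-range value instead
```

This is exactly why the "cheap shift tricks" that make unsigned integer types attractive have no counterpart for floating-point formats.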

Of the benefits, only the single sign bit remains. Evidently that was not enough for processor designers to fold the sign bit into the number itself; they preferred to simplify their lives considerably and keep it separate. Language designers, at their own level, naturally cannot put that spare bit to any effective use for storing an extra significant digit.

  • I slipped up a little there. In the machine representation the exponent is, of course, a power of sixteen. Though a power of ten is also possible, but only at the level of a language, or of a library implementing BCD arithmetic. - Shamov
  • Actually, en.wikipedia.org/wiki/IEEE_754-2008 mentions several architectures with decimal floating-point arithmetic. It's just that I've never come across them (and had never even heard of them before). - alexlz
  • I don't know those either. I've only ever seen a decimal exponent in implementations of BCD numbers. - Shamov

The IEEE 754-2008 standard, which describes the floating-point number format, prescribes that the sign, the significand, and the exponent (scale factor) be stored separately:

(image: IEEE 754 binary layout — sign bit, exponent field, significand field)

For integers, the sign is stored and recovered implicitly, via the wrap-around (two's-complement) behavior of overflow. For fractional numbers the sign is written out explicitly, which incidentally also makes it possible to store a negative zero, something that is genuinely used in mathematical analysis.
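The negative zero mentioned above is easy to observe (sketched here in Python rather than C#; the same behavior holds for C#'s double, since both are IEEE 754 binary64):

```python
import math

neg_zero = -0.0

# +0.0 and -0.0 compare equal under IEEE 754 rules...
print(neg_zero == 0.0)               # True

# ...but the explicit sign bit is still there, as copysign reveals.
print(math.copysign(1.0, neg_zero))  # -1.0
print(math.copysign(1.0, 0.0))       # 1.0
```

A two's-complement integer has no such value: there is exactly one integer zero, precisely because the integer sign is folded into the encoding rather than stored as a separate bit.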

Unfortunately for you, dropping the sign bit is not an option: you would break the standard, and the part of the processor responsible for fractional numbers simply wouldn't understand you. Because of that, you would have to implement all floating-point handling from scratch.
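To make the three-field layout concrete, here is a sketch (in Python, not C#; the field widths are those of IEEE 754 binary64, which C#'s double also uses) that pulls a double apart by hand: 1 sign bit, 11 exponent bits, 52 significand bits.

```python
import struct

def fields(x: float) -> tuple[int, int, int]:
    """Split a binary64 double into its (sign, exponent, significand) fields."""
    b = struct.unpack("<Q", struct.pack("<d", x))[0]
    sign = b >> 63                      # 1 bit
    exponent = (b >> 52) & 0x7FF        # 11 bits, biased by 1023
    significand = b & ((1 << 52) - 1)   # 52 bits, fraction of the implicit 1.x
    return sign, exponent, significand

# -2.5 = -1.25 * 2^1: sign = 1, biased exponent = 1023 + 1 = 1024,
# fraction 0.25 stored as 0.25 * 2^52 = 2^50.
print(fields(-2.5))  # (1, 1024, 1125899906842624)
print(fields(1.0))   # (0, 1023, 0)
```

Any "unsigned double" would have to redefine what that top bit means, which is exactly the standard-breaking change the answer warns about: hardware decodes these fields at fixed positions.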