# Displaying the number (10^18) + 1 as a double

After running this code

```cpp
#include <iostream>
#include <cstdio>
using namespace std;

int main() {
    double d = 1000000000000000001;
    cout.setf(ios::fixed);
    cout.precision(0); // 0 - the number of digits after the decimal point
    cout << d << endl;
    printf("%.0lf\n", d);
    return 0;
}
```

I unexpectedly got the following result (http://ideone.com/1SEHpO):

```
1000000000000000000
1000000000000000000
```

Where did the 1 go? Why does this happen? Is this a peculiarity of how floating-point types are stored?

The `double` type has limited precision, so it cannot distinguish between 1000000000000000000 and 1000000000000000001. If you declare

```cpp
double d1 = 1000000000000000001;
double d2 = 1000000000000000000;
```

then `d1` and `d2` will hold exactly the same number. Proof: http://ideone.com/WqQSAd

The problem is that floating-point numbers have finite precision. In the `double` type, for example, 52 bits are allocated to the significand (and 11 more bits to the exponent); internally the number is stored roughly as `CCCCCCC * 2^PPPP` (`C` are the significand bits, `P` is the exponent). This means that the numbers representable as a `double` are spaced with a certain step (which depends on the magnitude of the exponent): numbers lying “between” representable values cannot be expressed exactly as a `double`, and they are automatically rounded to the nearest representable number. An example of such a number is 0.1: it cannot be written as a finite binary fraction, so the constant 0.1 is internally stored as approximately

```
0.1000000000000000055511151231257827021181583404541015625
```

The largest value that can be added to one without changing it is called the machine epsilon. For the `double` type the machine epsilon is obviously 2⁻⁵³, that is, around `1.11e-16`.

Why obvious? Because for the number one, the significand bits are 1000...000, so the next larger number representable as a `double` must have significand bits 1000...001; the gap at 1 is therefore 2⁻⁵². Any addend smaller than half of that gap, i.e. 2⁻⁵³, rounds back down to 1. (In fact it is a little more subtle: the leading 1 bit is not stored but implied.)

From this it follows that `1 + 1e-16` is indistinguishable from 1 in the `double` type. For larger numbers the exponent grows, so the spacing between representable values, and with it the effective epsilon, grows roughly in proportion to the first term. Accordingly, `1e16 + 1` is equal to `1e16`. In your case you are two orders of magnitude beyond that limit: you add 1 to `1e18`.

Let's experiment further: http://ideone.com/4XJxFj

```
1 + 1.110223024625156e-16 == 1
1 + 1.110223024625157e-16 != 1
```

This is because 2⁻⁵³ ≈ `1.1102230246251565e-16` lies between these two constants: the first is just below it, the second just above.

If you need to represent numbers with higher precision, even the `long double` type may not be enough. In that case you may have to use arbitrary-precision numbers. They are built into some languages (for example, Java and C#), and for C and C++ there are good libraries that provide them. I would recommend GMP.