0.1 + 0.2 == 0.3  -> false
0.1 + 0.2         -> 0.30000000000000004

What's happening?

  • Another “canonical” question, useful for closing other questions as duplicates. This is a translated compilation of the Q&A Is floating point math broken? That question has an answer with a detailed review of the hardware side of the problem, but it is too tough for me to translate. If you want to translate it, go ahead. :) - Athari
  • Here's more on the topic. - VladD
  • Since this is a translation of a community-wiki question and answer from English SO, can it be made community wiki on Russian SO as well? - PashaPash
  • @PashaPash If all translations were community wiki, no one would translate. :) Translation is work too. Besides, a “canonical” Q&A is not always community wiki; historically that was mostly a side effect of automatic conversion. - Athari
  • @Athari: a comment is not part of the question; for example, the author of a question should not post information necessary for an answer in the comments. The reference manual recommends specifying both the link and the author, otherwise it looks like plagiarism. - jfs

2 Answers

These are features of calculations on binary floating-point numbers. In most programming languages they follow the IEEE 754 standard. Numbers in JavaScript and double in C++, C#, and Java use a 64-bit representation. The root of the problem is that these numbers are expressed in powers of two. As a result, rational numbers (such as 0.1, that is, 1/10) whose denominator is not a power of two cannot be represented exactly.
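This is easy to see from any IEEE 754 language. A minimal Python sketch (Python is used here purely for illustration; `decimal.Decimal`, given a float, prints the exact value the 64-bit double actually stores):

```python
from decimal import Decimal

# The literal 0.1 is silently rounded to the nearest representable
# binary fraction; Decimal(float) exposes that stored value exactly:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```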

The number 0.1 in binary 64-bit format is stored as the nearest representable value, 0x1.999999999999Ap-4 (the repeating binary fraction 0.000110011001100...₂ rounded to 53 significant bits).

The rational number itself, that is, 1/10, can be written exactly as:

  • 0.1 as a number in decimal notation, or
  • 0x1.99999999999999...p-4 in hexadecimal notation, where ... is an infinite sequence of nines.

The constants 0.2 and 0.3 are likewise represented approximately. The binary floating-point number closest to 0.2 is slightly larger than the rational number 0.2, while the one closest to 0.3 is slightly smaller. As a result, the sum of 0.1 and 0.2 comes out greater than 0.3, and the equality fails.
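The direction of each rounding can be checked directly. A small Python illustration (comparing each stored double against the exact decimal it came from):

```python
from decimal import Decimal

# The stored doubles for 0.1 and 0.2 are slightly ABOVE the true decimals,
# while the stored double for 0.3 is slightly BELOW:
assert Decimal(0.1) > Decimal("0.1")
assert Decimal(0.2) > Decimal("0.2")
assert Decimal(0.3) < Decimal("0.3")

# Hence the computed sum overshoots the double chosen for the literal 0.3:
assert 0.1 + 0.2 > 0.3
```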

To compare floating-point numbers, one usually chooses some small number epsilon and compares the absolute value of the difference against it: abs(a - b) < epsilon. If the inequality holds, the numbers a and b are considered approximately equal.

With successive calculations, the error accumulates. Often the accuracy of the result depends on the order of calculations. There is no single universal epsilon that is suitable for all cases.
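As a sketch in Python (the epsilon value 1e-9 here is an arbitrary example, not a universal constant; `math.isclose` is the standard-library variant):

```python
import math

def approx_equal(a: float, b: float, epsilon: float = 1e-9) -> bool:
    # Absolute-tolerance comparison: simple, but the right epsilon
    # depends on the magnitude of the values being compared.
    return abs(a - b) < epsilon

print(0.1 + 0.2 == 0.3)               # False
print(approx_equal(0.1 + 0.2, 0.3))   # True

# The standard library's math.isclose uses a RELATIVE tolerance by
# default, which scales with the magnitude of the operands:
print(math.isclose(0.1 + 0.2, 0.3))   # True
```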

For calculations with money, you should use special decimal-based numeric types where available, such as Decimal in C# or BigDecimal in Java. They use a decimal internal representation, which lets you work with numbers like 29.99 without rounding. Calculations on them are, however, much slower.
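Python's equivalent is `decimal.Decimal`; a short sketch of the difference for money-style sums:

```python
from decimal import Decimal

# Summing ten cents three times with binary floats drifts off the exact answer:
total_float = 0.10 + 0.10 + 0.10
print(total_float)                     # 0.30000000000000004
print(total_float == 0.30)             # False

# Decimal keeps an exact decimal representation. Note that values are built
# from STRINGS: Decimal(0.10) would inherit the float's binary rounding error.
total_money = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(total_money == Decimal("0.30"))  # True
```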

Recommended reading:

  • Addition: details in Russian can be read here. - Dmi7ry
  • The basics of the floating-point format are discussed in more detail on this site. The material will be updated. - Zealint

So far, all the answers here address the question in dry technical terms. I'd like to offer an explanation that makes sense not only to techies.

Imagine slicing a pizza. You have a robotic knife that can cut pizza slices exactly in half. It can halve the whole pizza, or it can halve an existing slice, but either way the halving is always exact.

If you start with a whole pizza and keep halving, you can make 53 cuts before the slice becomes too small. At that point, you can no longer halve that slice and must either include it or exclude it as is.
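The 53 halvings correspond directly to the 53 bits of a double's significand, which Python exposes (again, Python is just the illustration language here):

```python
import sys

# The significand of a 64-bit double holds 53 bits of precision --
# the "53 halvings" of the pizza analogy:
print(sys.float_info.mant_dig)    # 53

# Relative to 1.0, a slice of size 2**-53 is too small to register:
print(1.0 + 2**-53 == 1.0)        # True
print(1.0 + 2**-52 == 1.0)        # False
```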

How would you combine some of the resulting slices to make up one-tenth (0.1) or one-fifth (0.2) of a pizza? Really think about it and try to work it out. You can even try with a real pizza. :)

Most experienced programmers, of course, know the real answer: there is no way to combine slices into exactly one-tenth or one-fifth of a pizza using these cuts, no matter how finely you slice. You can get a fairly accurate approximation, and if you add an approximation of 0.1 to an approximation of 0.2, you get quite close to 0.3, but it is still only an approximation. More on that below.

For double-precision numbers (the precision that lets you halve the pizza 53 times), the doubles closest to 0.1 are 0.09999999999999999167332731531132594682276248931884765625 and 0.1000000000000000055511151231257827021181583404541015625. The latter is slightly closer to 0.1 than the former, so a numeric parser, given the input 0.1, picks the latter.
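Both neighbours can be inspected in Python 3.9+ (`math.nextafter` steps to the adjacent representable double; this is an illustrative sketch, not part of the original answer):

```python
import math
from decimal import Decimal

chosen = 0.1                        # the double the parser picked for 0.1
lower = math.nextafter(0.1, 0.0)    # its nearest representable neighbour below

print(Decimal(lower))   # 0.09999999999999999167332731531132594682276248931884765625
print(Decimal(chosen))  # 0.1000000000000000055511151231257827021181583404541015625

# The chosen value sits closer to the true decimal 0.1 than the lower neighbour:
assert Decimal(chosen) - Decimal("0.1") < Decimal("0.1") - Decimal(lower)
```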

(The difference between these two numbers is the “smallest slice” that we must either include, introducing an upward bias, or exclude, introducing a downward bias. The technical term for that smallest slice is ULP, unit in the last place.)
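Python 3.9+ can report that smallest slice directly via `math.ulp` (an illustrative aside, not from the original answer):

```python
import math

# math.ulp(x) gives the gap between x and the next representable double --
# the "smallest slice" at that magnitude. For numbers in [2**-4, 2**-3),
# such as 0.1, the ULP is exactly 2**-56:
print(math.ulp(0.1))
assert math.ulp(0.1) == 2**-56
```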

In the case of 0.2 everything is the same, just scaled up by a factor of 2. Again, the chosen value is slightly above 0.2.

Note that in both cases the approximations of 0.1 and 0.2 have a slight upward bias. Accumulate enough of these biases and they push the result further and further from what we want; in the case of 0.1 + 0.2, the bias is large enough that the resulting number is no longer the closest double to 0.3.

Specifically, 0.1 + 0.2 is really 0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125 = 0.3000000000000000444089209850062616169452667236328125 (after rounding to 53 bits), while the double closest to 0.3 is actually 0.299999999999999988897769753748434595763683319091796875.
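These exact decimal expansions can be reproduced in any IEEE 754 language; a Python check:

```python
from decimal import Decimal

# Exact decimal expansions of the doubles involved in 0.1 + 0.2:
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875

# The computed sum and the literal 0.3 round to DIFFERENT doubles:
assert 0.1 + 0.2 != 0.3
```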

In addition: you can consider scaling your values to avoid problems with floating-point arithmetic (example).
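A minimal sketch of the scaling idea (the cents representation here is a common convention, not something prescribed by the original answer):

```python
# Keep money as an integer number of cents, do all arithmetic on exact
# integers, and only format as a decimal string at the edges.
a_cents = 10           # $0.10
b_cents = 20           # $0.20
total = a_cents + b_cents

assert total == 30     # integer arithmetic, no rounding error possible
print(f"${total // 100}.{total % 100:02d}")   # $0.30
```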

P.S. Some programming languages also provide “pizza cutters” that can split slices into exact tenths.
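In Python, two such "pizza cutters" are exact rationals and string-built decimals (again, an illustrative sketch):

```python
from fractions import Fraction
from decimal import Decimal

# fractions.Fraction stores exact rationals, so tenths and fifths add exactly:
assert Fraction(1, 10) + Fraction(1, 5) == Fraction(3, 10)

# decimal.Decimal is exact too, as long as values are built from strings:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```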

  • @Kromster did my best. Let me know if there are still gross inaccuracies. - Alex M
  • Thanks for the response. +1, and I corrected a couple of small things. - Kromster