Found the answer:
The thing is that JavaScript numbers follow the IEEE 754 double-precision format: exactly 8 bytes (= 64 bits) are allocated to a number, no more and no less.
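You can even check the size in any environment that supports typed arrays: a double-precision slot is exactly 8 bytes wide.

alert( Float64Array.BYTES_PER_ELEMENT ); // 8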
The number 0.1 (one tenth) looks simple in decimal. But in the binary number system it is an infinite repeating fraction, because a tenth cannot be divided out exactly using powers of two. The same goes for 0.2 (= 2/10).
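You can see the infinite fraction for yourself by printing the binary form of 0.1 (the 0011 pattern would repeat forever, but it is cut off where the 64 bits run out):

alert( (0.1).toString(2) ); // 0.0001100110011001100110011... (0011 repeats until precision runs out)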
The binary value of such an infinite fraction is stored only up to a certain digit, so an inaccuracy appears. It can even be seen:
alert( 0.1.toFixed(20) ); // 0.10000000000000000555
When we add 0.1 and 0.2, the two inaccuracies add up, and we get a small but real error in the calculation.
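The error is easy to observe directly:

alert( 0.1 + 0.2 ); // 0.30000000000000004
alert( 0.1 + 0.2 == 0.3 ); // false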
There are two ways to add 0.1 and 0.2 without this error:
- Make them whole, add, and then divide back:
alert( (0.1 * 10 + 0.2 * 10) / 10 ); // 0.3
- Add, and then round to a reasonable precision. Rounding to the 10th decimal digit is usually enough to cut off the calculation error:
var result = 0.1 + 0.2;
alert( +result.toFixed(10) ); // 0.3
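If this comes up often, the second approach can be wrapped in a small helper (just a sketch; roundTo is a made-up name, not a built-in):

function roundTo(num, digits) {
  // toFixed returns a string, the unary + turns it back into a number
  return +num.toFixed(digits);
}
alert( roundTo(0.1 + 0.2, 10) ); // 0.3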