
I add two decimal numbers and get: 0.30000000000000004

console.log(0.1 + 0.2); 

I understand that this is how JavaScript works: it comes from the internal floating-point representation, which has only a fixed number of bits. My question is this: what reliable techniques, formal or informal, can be used to avoid such problems?

Marked as a duplicate by Grundy, aleksandr barakin, user194374, Bald, VenZell on July 7 '16 at 8:54.

    2 Answers

    Found the answer:

    The thing is that the IEEE 754 standard allocates exactly 8 bytes (= 64 bits) per number, no more and no less.

    The number 0.1 (one tenth) is easy to write in decimal. But in the binary system it is an infinite fraction, because one tenth cannot be divided out evenly in base 2. The same is true of 0.2 (= 2/10).

    The binary value of such infinite fractions is stored only up to a certain number of digits, so an inaccuracy appears. You can even see it:

     alert( 0.1.toFixed(20) ); // 0.10000000000000000555 

    When we add 0.1 and 0.2, the two inaccuracies add up, and we get a small but real error in the calculation.
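
    The same trick makes the error visible in both operands and in their sum (an illustrative snippet in the spirit of the example above):

     alert( 0.2.toFixed(20) );         // 0.20000000000000001110
     alert( (0.1 + 0.2).toFixed(20) ); // 0.30000000000000004441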

    There are two ways to add 0.1 and 0.2:

    1. Make them whole numbers, add them, and then divide back:

    alert( (0.1 * 10 + 0.2 * 10) / 10 ); // 0.3

    2. Add first, then round to a reasonable precision. Rounding to the 10th decimal place is usually enough to cut off the calculation error (a combined sketch of both approaches follows the example below):

    var result = 0.1 + 0.2; alert( +result.toFixed(10) ); // 0.3
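
    Both approaches can be wrapped into small helper functions. A minimal sketch (the names addFixed and roundTo are made up here, not part of the original answer):

    // Approach 1: scale to integers, add, then scale back down.
    // Math.round() guards against the scaling itself being inexact
    // (e.g. 1.005 * 100 === 100.49999999999999).
    function addFixed(a, b, decimals) {
      var factor = Math.pow(10, decimals);
      return (Math.round(a * factor) + Math.round(b * factor)) / factor;
    }

    // Approach 2: add first, then round the accumulated error away;
    // the unary plus turns the string from toFixed() back into a number.
    function roundTo(value, decimals) {
      return +value.toFixed(decimals);
    }

    alert( addFixed(0.1, 0.2, 1) );  // 0.3
    alert( roundTo(0.1 + 0.2, 10) ); // 0.3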

       parseFloat(0.1 + 0.2).toFixed(2); // "0.30" (a string)
      • You still need to cast the result back to a number, because toFixed() returns a string. - Alexey Ukolov
      • @AlekseyUkolov - I agree. - C.Raf.T
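
      Following the comment above, a minimal illustrative way to get a number back from toFixed():

        var sum = +(0.1 + 0.2).toFixed(2); // toFixed() gives "0.30", the unary plus converts it back
        console.log(sum);        // 0.3
        console.log(typeof sum); // "number"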