Which Numbers Will or Will Not Be Rounded
A finite number can be represented in the common IEEE-754 double-precision format if and only if it equals M•2^e for some integers M and e such that -2^53 < M < 2^53 and -1074 ≤ e ≤ 971.
Every other finite number converted from decimal or resulting from another operation will be rounded.
(This is the format JavaScript uses because it conforms to ECMA-262, which says that the IEEE-754 64-bit binary floating-point format is used. The significand, M in the above, is often expressed as a value between 1 and 2 with a certain number of bits after a radix point, but I scaled it to an integer for easier analysis, and the exponent bounds are adjusted to match.)
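A minimal sketch, using only standard `Number` properties, that illustrates these bounds from inside JavaScript (runnable in Node or a browser console):

```js
// 2^53 - 1 is the largest integer below which every integer is exactly representable.
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991, i.e. 2^53 - 1

// Beyond 2^53 consecutive integers can no longer all be represented,
// so 2^53 and 2^53 + 1 round to the same Number.
console.log(2 ** 53 === 2 ** 53 + 1);   // true

// The smallest positive Number is 1 * 2^-1074; halving it rounds to 0.
console.log(Number.MIN_VALUE / 2 === 0); // true
```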
All Numbers in the Question Are Rounded
This means all of the numbers in your example will be rounded:
- There is no way to scale 0.1 by a power of 2 to make an integer for M. As we multiply 0.1 by 2 repeatedly, we get 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, and we can see the fraction part forever repeats .2, .4, .8, .6,… So it never reaches .0. Since 0.1 cannot be represented as M•2^e, it must be rounded.
- Similarly, 0.2, 0.3, and 0.4 also cannot be scaled by any power of 2 to make an integer for M.
- When these numbers 0.1, 0.2, 0.3, and 0.4 are converted to JavaScript’s `Number` format, the results are:
- 0.1000000000000000055511151231257827021181583404541015625.
- 0.200000000000000011102230246251565404236316680908203125.
- 0.299999999999999988897769753748434595763683319091796875.
- 0.40000000000000002220446049250313080847263336181640625.
- Considering the mathematics a bit more formally, 0.1 is 1/10. It can never equal M•2^e, because then we would have M•2^e = 1/10, and therefore 2•5•M•2^e = 1. Since M is an integer, the factor of 5 on the left could only be cancelled by 2^e, but no power of 2, even with a negative exponent, has any prime factor other than 2, so nothing cancels the 5 and the equation is impossible.
In contrast, the numbers 0.25 and 0.375 are representable. When we multiply 0.25 by 2, we get 0.5 and then 1, so 0.25 = 1•2^-2, which matches the format above. And 0.375 produces 0.75, 1.5, and then 3, so 0.375 = 3•2^-3, which also matches the format.
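As a small check of this analysis (a sketch; `toFixed` with more than 20 digits assumes an engine that follows ES2018 or later):

```js
// 0.25 and 0.375 reach an integer after a few doublings, so they are stored exactly.
console.log(0.25  === 1 * 2 ** -2);  // true
console.log(0.375 === 3 * 2 ** -3);  // true

// 0.1 never reaches an integer by doubling; what is stored is the nearest double.
console.log((0.1).toFixed(55));
// "0.1000000000000000055511151231257827021181583404541015625"
```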
Why It Appears Some Numbers Are Not Rounded
Two confounding issues create the illusion that some operations are exact:
- JavaScript’s default display of a value uses just enough decimal digits to uniquely distinguish the `Number` value. This comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification.
- Thus, JavaScript displays 0.1000000000000000055511151231257827021181583404541015625 as “0.1”, for example, because that is enough: converting “0.1” to floating-point results in that same value, so there is no need for more digits.
- This hides the rounding because, for any decimal numeral of up to 15 significant digits, converting it to `Number` and then displaying it produces the same numeral (a snippet below demonstrates this round trip). For example, we have 0.12345 → 0.123450000000000004174438572590588591992855072021484375 → “0.12345”. The default formatting rule causes any numeral of up to 15 significant digits to be the one produced by displaying the `Number` value that results from that numeral.
- Sometimes when evaluating `a + b == c` for decimal numerals `a`, `b`, and `c`, the rounding of `a + b` happens to coincide with the rounding that occurs for `c`. Sometimes it does not.
- In `0.1 + 0.3 == 0.4`, 0.1000000000000000055511151231257827021181583404541015625 and 0.299999999999999988897769753748434595763683319091796875 are added, and the rounded result is 0.40000000000000002220446049250313080847263336181640625. That is the same as the `Number` that results from 0.4, so the evaluation reports true even though there were rounding errors.
- In `0.1 + 0.2 == 0.3`, 0.1000000000000000055511151231257827021181583404541015625 and 0.200000000000000011102230246251565404236316680908203125 are added, and the rounded result is 0.3000000000000000444089209850062616169452667236328125. That differs from the `Number` that results from 0.3, which is 0.299999999999999988897769753748434595763683319091796875. So the evaluation reports false.
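Both comparisons can be checked directly (a minimal sketch, runnable in Node or a browser console):

```js
// The sum of 0.1's and 0.3's stored values rounds exactly to 0.4's stored value.
console.log(0.1 + 0.3 === 0.4); // true

// The sum of 0.1's and 0.2's stored values does not round to 0.3's stored value.
console.log(0.1 + 0.2 === 0.3); // false
```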
The latter result, for `0.1 + 0.2 == 0.3`, shows us why displaying the result of `0.1 + 0.2` produces “0.30000000000000004”. The sum is close to 0.3, but the Number 0.299999999999999988897769753748434595763683319091796875 is closer to 0.3, so “0.3” would read back as that other value. To uniquely distinguish 0.3000000000000000444089209850062616169452667236328125 from it, JavaScript has to use more digits: it produces zeros until it gets to the first non-zero digit, resulting in “0.30000000000000004”.
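The display behavior can be observed the same way (a sketch; as above, the large `toFixed` argument assumes ES2018 or later):

```js
// Default formatting uses just enough digits to identify the stored value uniquely.
console.log(String(0.1 + 0.2));   // "0.30000000000000004"
console.log(String(0.3));         // "0.3"
console.log(String(0.12345));     // "0.12345" - numerals of up to 15 significant
                                  // digits survive the round trip unchanged

// The full decimal expansions show the two stored values really differ
// (trailing zeros aside).
console.log((0.1 + 0.2).toFixed(54));
// "0.300000000000000044408920985006261616945266723632812500"
console.log((0.3).toFixed(54));
// "0.299999999999999988897769753748434595763683319091796875"
```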
We could ask: when will `a + b == c` evaluate to true? The mathematics completely determines this: `a`, `b`, and `c` are each converted to the nearest representable value, the addition is performed and its result is rounded to the nearest representable value, and the expression is true if the left and right results are equal. But there is no simple pattern for when that happens; it depends on the bit patterns the decimal numerals form in binary. You can find various patterns here and there, but, by and large, they are effectively random.
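As a rough illustration of that unpredictability, here is a small sketch (the choice of tenths is arbitrary) that checks which sums of tenths compare equal to the `Number` converted from the mathematically exact result:

```js
// For each pair of tenths a = i/10 and b = j/10, check whether the floating-point
// sum equals the Number obtained from the exact decimal result (i+j)/10.
// Note that i/10 yields the same Number as the literal 0.i, since division
// of exact integers is correctly rounded.
for (let i = 1; i <= 4; i++) {
  for (let j = i; j <= 4; j++) {
    const a = i / 10;
    const b = j / 10;
    const c = (i + j) / 10;
    console.log(`${i}/10 + ${j}/10 == ${i + j}/10 : ${a + b === c}`);
  }
}
// The results mix true and false with no simple pattern; for example,
// 1/10 + 2/10 == 3/10 is false while 1/10 + 3/10 == 4/10 is true.
```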