4
votes

[Image: console output showing the binary expansion of 0.57, i.e. the result of (0.57).toString(2)]

If I understand correctly, JavaScript numbers are always stored as double-precision floating-point numbers, following the international IEEE 754 standard. Which means it uses 52 bits for the fraction significand. But in the picture above, it seems like 0.57 in binary uses 54 bits.

Another thing is (if I understand correctly) 0.55 in binary is also a repeating fraction. But why does 0.55 + 1 = 1.55 (no loss) while 0.57 + 1 = 1.5699999999999998?
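A quick snippet showing what I mean:

console.log(0.55 + 1) // 1.55
console.log(0.57 + 1) // 1.5699999999999998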

4"

Possible duplicate of Is floating point math broken? – phuzi
I guess .toString(2) does rounding during the conversion ... – Jonas Wilms
You're fighting the vagaries of JS Math library implementations in different browsers. Additionally, machine (hardware) architecture may enforce different standards for internal representations. Add the wackiness of floating-point math implementations on different OSs, and you have almost no hope of answering your question in general. Which processor (CPU), which OS, which browser... having all of those, you MAY be able to answer your question for that specific combination. – Richard Uie
@RichardUie: JavaScript implements ECMA-262, and ECMA-262 specifies the Number format and its arithmetic. The operations in this question do not differ between different correct implementations of JavaScript. – Eric Postpischil
@phuzi: No, it is not a duplicate. The display behavior asked about here arises due to the ECMA-262 specification, not due to floating-point generally. – Eric Postpischil

3 Answers

6
votes

Which means it uses 52 bits for the fraction significand. But in the picture above, it seems like 0.57 in binary uses 54 bits.

JavaScript’s Number type, which is essentially IEEE 754 basic 64-bit binary floating-point, has 53-bit significands. 52 bits are encoded in the “trailing significand” field. The leading bit is encoded via the exponent field (an exponent field of 1-2046 means the leading bit is one, an exponent field of 0 means the leading bit is zero, and an exponent field of 2047 is used for infinity or NaN).
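A small sketch showing those fields from JavaScript itself (fields is a throwaway helper name; the DataView and BigInt operations it uses are standard):

// Reinterpret the 8 bytes of a Number as a 64-bit unsigned integer and split it
// into sign (1 bit), exponent (11 bits), and trailing significand (52 bits).
function fields(x) {
  const view = new DataView(new ArrayBuffer(8))
  view.setFloat64(0, x)
  const bits = view.getBigUint64(0)
  return {
    sign: bits >> 63n,
    exponent: (bits >> 52n) & 0x7FFn,             // biased by 1023
    trailingSignificand: bits & 0xFFFFFFFFFFFFFn  // leading 1 is implicit, not stored
  }
}

console.log(fields(0.57)) // exponent 1022n, i.e. 2 ** -1, plus the 52 explicitly stored bits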

The value you see for .57 has 53 significant bits. The leading “0.” is produced by the toString operation; it is not part of the encoding of the number.
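You can count them yourself (for this particular value the last stored bit happens to be 1, so toString trims nothing):

const bits = (0.57).toString(2) // "0.10010001111010111000010100..."
console.log(bits.length - 2)    // 53: everything after the "0." is significand bits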

But why does 0.55 + 1 = 1.55 (no loss) while 0.57 + 1 = 1.5699999999999998?

When JavaScript is formatting some Number x for display with its default rules, those rules say to produce the shortest decimal numeral (in its significant digits, not counting decorations like a leading “0.”) that, when converted back to the Number format, produces x. Purposes of this rule include (a) always ensuring the display uniquely identifies which exact Number value was the source value and (b) not using more digits than necessary to accomplish (a).

Thus, if you start with a decimal numeral such as .57 and convert it to a Number, you get some value x that is a result of the conversion having to round to a number representable in the Number format. Then, when x is formatted for display, you get the original number, because the rule that says to produce the shortest number that converts back to x naturally produces the number you started with.
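In code, the round-trip is easy to see:

const x = Number("0.57")  // conversion rounds to the nearest double
console.log(x.toString()) // "0.57": the shortest digits that convert back to exactly x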

(But that x does not exactly represent 0.57. The nearest double to 0.57 is slightly below it; see the decimal and binary64 representations of it on an IEEE double calculator).
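You can display that stored value directly: toFixed with enough digits prints the exact (terminating) decimal expansion rather than the shortest round-tripping one:

console.log((0.57).toFixed(20)) // "0.56999999999999995115": the stored double is below 0.57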

On the other hand, when you perform some operation such as .57 + 1, you are doing some arithmetic that produces a number y that did not start as a simple decimal numeral. So, when formatting such a number for display, the rule may require more digits be used for it. In other words, when you add .57 and 1, the result in the Number format is not the same number as you get from 1.57. So, to format the result of .57 + 1, JavaScript has to use more digits to distinguish that number from the number you get from 1.57: they are different and must be displayed differently.
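A two-line demonstration:

console.log(1 + 0.57 === 1.57) // false: two distinct, adjacent doubles
console.log((1 + 0.57) - 1.57) // about -2.22e-16, one unit in the last place at this magnitude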


If 0.57 were exactly representable as a double, the pre-rounding result of the sum would be exactly 1.57, so 1 + 0.57 would round to the same double as 1.57.

But that's not the case: it's actually 1 + nearest_double(0.57) =
1.569999999999999951150186916493 (pre-rounding, not a double), which rounds down to
1.56999999999999984012788445398 (an exact halfway case, which round-to-nearest-even resolves downward here). These decimal representations of numbers have many more digits than we need to distinguish 1 ulp (unit in the last place) of the significand, or even the 0.5 ulp maximum rounding error.

1.57 rounds to ~1.57000000000000006217248937901, so that's not an option for printing the result of 1 + 0.57. The decimal string needs to distinguish the number from adjacent binary64 values.
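Those values are easy to check from the console (toFixed accepts up to 100 digits in modern engines; the digits below match the expansions quoted above):

console.log((1 + 0.57).toFixed(29)) // "1.56999999999999984012788445398"
console.log((1.57).toFixed(29))     // "1.57000000000000006217248937901"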


It just so happens that the rounding that occurs in .55 + 1 yields the same number one gets from converting 1.55 to Number, so displaying the result of .55 + 1 produces “1.55”.
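In code:

console.log(1 + 0.55 === 1.55) // true: the rounded sum is exactly the double for 1.55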

3
votes

toString(2) prints the binary expansion up to the last non-zero digit.

1.57 has a different bit representation than 1 + 0.57 (though it is not impossible to get exactly 1.57 as the result of a sum: 1.32 + 0.25 lands on it),
but 1 + 0.55 in binary equals 1.55, as you can see in the snippet below:

console.log(1.57)                          // 1.57
console.log(1.57.toString(2))              // bits of the double nearest 1.57
console.log((1 + .57).toString(2))         // differs from the line above in the last bits
console.log("1.32 + 0.25 = ", 1.32 + .25)  // prints 1.57: this sum lands exactly on 1.57's double
console.log((1.32 + .25).toString(2))      // same bits as 1.57.toString(2)
console.log(1.55)                          // 1.55
console.log(1.55.toString(2))              // bits of the double nearest 1.55
console.log((1 + .55).toString(2))         // identical to the line above: no extra digits needed

Remember that the computer performs operations on binary numbers; 1.57 or 1.55 is just human-readable output.

1
votes

Number.prototype.toString roughly implements the following section of the ECMA-262 spec:

7.1.12.1 NumberToString(m)

let n, k, and s be integers such that k ≥ 1, 10 ** (k - 1) ≤ s < 10 ** k, the Number value for s × 10 ** (n - k) is m, and k is as small as possible.

Therefore toString only approximates the stored value: it returns the fewest digits that identify that value uniquely, not the exact bytes stored.
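A rough sketch of that smallest-k search in plain JavaScript (shortestDecimal is a made-up helper, not the spec's actual algorithm, which derives n, k, and s exactly):

// Find the shortest decimal string that converts back to exactly x.
// 17 significant digits always suffice to round-trip a binary64 value.
function shortestDecimal(x) {
  for (let k = 1; k <= 17; k++) {
    const s = x.toPrecision(k)
    if (Number(s) === x) return s // the smallest k that round-trips is the one used
  }
}

console.log(shortestDecimal(1 + 0.57)) // "1.5699999999999998"
console.log(shortestDecimal(1 + 0.55)) // "1.55"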

What you see in the console is not an exact representation either.