5 votes
#include <stdio.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}

In the code above I'm initializing x with a literal that is too large to be represented by an IEEE 754 double. On my PC with gcc 4.9.2 it works well: the literal is rounded to the nearest value that fits into a double. I'm wondering what happens behind the scenes (at the compiler level) in this case. Does this behaviour depend on the platform? Is it legal?
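For reference, here is a minimal sketch of how to inspect what the compiler actually stored (assuming an IEEE 754 double, a C99 printf that supports %a, and a C11 <float.h> that defines DBL_DECIMAL_DIG):

#include <stdio.h>
#include <float.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    /* DBL_DECIMAL_DIG (17 for an IEEE 754 double) significant digits
       are enough to round-trip the stored value, so this shows what
       the compiler actually kept of the literal. */
    printf("%.*g\n", DBL_DECIMAL_DIG, x);
    /* %a prints the stored bits as a hexadecimal floating constant. */
    printf("%a\n", x);
    return 0;
}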

C11 draft standard n1570, 6.4.4.2 Floating constants: "For decimal floating constants, and also for hexadecimal floating constants when FLT_RADIX is not a power of 2, the result is either the nearest representable value, or the larger or smaller representable value immediately adjacent to the nearest representable value, chosen in an implementation-defined manner. For hexadecimal floating constants when FLT_RADIX is a power of 2, the result is correctly rounded." tl;dr: The value of x does not have to be the exactly rounded literal. – EOF
@EOF Why not answer? – Eugene Sh.
The floating-point literal 0.12345678901234567890123456789 is not large; it is just very precise. 1e123456789 is large. – chux - Reinstate Monica
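To illustrate the distinction the quoted standard text draws, here is a small sketch (assuming an IEEE 754 double, so FLT_RADIX is 2): a hexadecimal floating constant spells out exact binary digits, while a decimal constant such as 0.1 has to be rounded to a nearby representable value.

#include <stdio.h>

int main(void) {
    /* 0x1.8p-1 is exactly 1.5 * 2^-1 = 0.75, so no rounding is needed. */
    double exact = 0x1.8p-1;
    /* 0.1 has no finite binary expansion; on a typical IEEE 754
       implementation it is rounded to 0x1.999999999999ap-4. */
    double rounded = 0.1;
    printf("%a\n", exact);
    printf("%a\n", rounded);
    printf("%d\n", rounded == 0x1.999999999999ap-4); /* typically prints 1 */
    return 0;
}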

1 Answer

6 votes

When you write double x = 0.1;, the decimal number you have written is rounded to the nearest double. So what happens when you write 0.12345678901234567890123456789 is not fundamentally different.
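As a rough sketch (assuming IEEE 754 doubles), printing both values with 17 significant digits makes the rounding visible in each case:

#include <stdio.h>

int main(void) {
    double a = 0.1;
    double b = 0.12345678901234567890123456789;
    /* 17 significant digits are enough to round-trip an IEEE 754
       double; a typically prints as 0.10000000000000001, and b prints
       as the double nearest the long literal, showing that both
       constants were rounded. */
    printf("%.17g\n", a);
    printf("%.17g\n", b);
    return 0;
}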

The behavior is essentially implementation-defined, but most compilers will use the double nearest the written constant. The C standard only requires the result to be the nearest representable value or one of the two representable values immediately adjacent to it.
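To see how tight that requirement is, here is a sketch using nextafter() from <math.h> (assuming IEEE 754 doubles; link with -lm on typical Unix systems). It prints the representable doubles adjacent to whatever your compiler stored, which is the immediate neighbourhood the standard allows the implementation to choose from:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    /* The representable doubles immediately below and above the value
       the compiler chose for the constant. */
    double below = nextafter(x, -INFINITY);
    double above = nextafter(x, +INFINITY);
    printf("below: %.17g\n", below);
    printf("x:     %.17g\n", x);
    printf("above: %.17g\n", above);
    return 0;
}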