I have a 64-bit Linux system. I compile and run my Fortran code with gfortran, and it prints a number at double precision, i.e. roughly 16 significant decimal digits. For example:
gfortran some_code.f -o executable1
./executable1
10.1234567898765432
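(The exact contents of some_code.f don't matter here; a minimal fixed-form sketch with a made-up value, producing this kind of output, would be:)

      program demo
      double precision x
      x = 10.1234567898765432d0
      print *, x
      end program demo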
If I instead compile with the flag -fdefault-real-8, gfortran promotes the Fortran double precision type from 8 bytes to 16 bytes (128 bits), and the same number is printed to a higher precision of roughly 33 significant decimal digits. For example:
gfortran -fdefault-real-8 some_code.f -o executable2
./executable2
10.12345678987654321234567898765432
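One way to confirm the promotion is to print the kind and precision of a double precision variable (a sketch using standard intrinsics; storage_size needs Fortran 2008 support, which recent gfortran versions have):

      program checkkind
      double precision x
      print *, 'kind           =', kind(x)
      print *, 'decimal digits =', precision(x)
      print *, 'storage bits   =', storage_size(x)
      end program checkkind

Compiled without the flag this should report kind 8, 15 decimal digits, and 64 bits; compiled with -fdefault-real-8 it should report kind 16, 33 decimal digits, and 128 bits.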
My question is: how can this computation be carried out to such high precision if my CPU is only 64-bit?