So I'm trying to return a quad precision (128-bit) value from Fortran and manipulate it in Python, but the return value I get does not match what I expect.
Fortran code:
module testquad
  implicit none
  integer, parameter :: qp = selected_real_kind(p=30)
contains
  real(qp) function func_quad()
    real(qp) :: x
    x = 5.0_qp   ! real literal of kind qp (5_qp would be an integer literal)
    write(*,*) x, sizeof(x)
    write(*,'(B128)') x
    func_quad = x
  end function func_quad
end module testquad
Compiled with:
gfortran -fpic -shared -fdump-tree-all -o libtestquad.so testQuad.f90
On the Python side I use a c_ubyte*16 array to hold the return value (since ctypes doesn't expose a 128-bit quad type):
import ctypes
lib = ctypes.CDLL('./libtestquad.so')
f = getattr(lib, '__testquad_MOD_func_quad')
f.argtypes = []
f.restype = ctypes.c_ubyte*16
result = f()
bb = []
for i in bytearray(result)[::-1]:
    bb.append(bin(i)[2:].rjust(8, '0'))
bb = ''.join(bb)
print(bb)
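As an aside, the byte-reversal loop can be cross-checked by treating the 16 bytes as one little-endian integer; this should produce the same 128-character string (a sketch; bits_of is just an illustrative helper, not part of the original code):

```python
def bits_of(raw: bytes) -> str:
    # read the 16 bytes as a little-endian integer and render it
    # as a fixed-width 128-bit binary string (MSB first)
    return format(int.from_bytes(raw, 'little'), '0128b')

# sanity check against the IEEE binary128 encoding of 5.0
sample = (0x4001_4000_0000_0000_0000_0000_0000_0000).to_bytes(16, 'little')
print(bits_of(sample))
```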
Running this, the output from the Fortran function is:
5.00000000000000000000000000000000000 16
1000000000000010100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
So we get the 5 and the size is 16 bytes, so I am indeed getting a quad number.
It seems gfortran's B edit descriptor suppresses the leading 0 for the sign bit. Still, we can see the pattern has three 1s and the rest 0s.
Following the Wikipedia page on quadruple precision, we have:
sign = 0 (the leading bit gfortran suppressed)
exponent = 100000000000001
significand = 0100....
Converting that out:
a = 2**(int('100000000000001',base=2)-16383) # 4
b = 1 + 0/2**1 + 1/2**2 + 0/2**3 .... # 1.25
print(a * b)
5
So gfortran is, as expected, printing the right bit pattern for the number 5.
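For reference, the hand conversion above can be wrapped into a small decoder (a sketch that handles only normal numbers, ignoring zeros, subnormals, infinities, and NaNs):

```python
def decode_binary128(bits: str) -> float:
    # bits: 128-char string laid out as sign | 15-bit exponent | 112-bit fraction
    assert len(bits) == 128
    sign = -1.0 if bits[0] == '1' else 1.0
    exponent = int(bits[1:16], 2) - 16383       # remove the IEEE bias
    fraction = int(bits[16:], 2) / 2**112       # fractional part of the significand
    return sign * (1 + fraction) * 2**exponent  # normal numbers only

# gfortran's printed pattern for 5.0, with the suppressed sign bit restored:
pattern = '0' + '100000000000001' + '01' + '0' * 110
print(decode_binary128(pattern))  # 5.0
```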
However, printing the bit pattern in Python for the returned result gives:
00000000000000000111111110110001001111101010010111111001001010000000000000000000000000000000000000000000000000000000000000000000
Which is clearly different.
Have I screwed up printing the bit pattern?
Is there something special in how gfortran returns a __float128 that means I can't just interpret it as 16 bytes?
Is there something special in how ctypes handles returning an array of c_ubyte that breaks things?
Something else?
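One way to narrow these possibilities down (a sketch; raw stands in for bytes(bytearray(result)) from the code above) is to reinterpret the returned buffer a few different ways and see which, if any, looks sane:

```python
import struct

def probe(raw: bytes) -> None:
    # raw: the 16 returned bytes in memory (little-endian) order
    assert len(raw) == 16
    lo, hi = struct.unpack('<dd', raw)  # reinterpret as two C doubles
    print('as two doubles:', lo, hi)
    print('as big-endian hex:', raw[::-1].hex())

# e.g. probing a buffer that really holds binary128 5.0:
probe((0x4001_4000_0000_0000_0000_0000_0000_0000).to_bytes(16, 'little'))
```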
Extras:
Using -fdump-tree-all and looking at the .original dump file, we see:
func_quad ()
{
real(kind=16) x;
real(kind=16) __result_func_quad;
x = 5.0e+0;
// snip write statement code
__result_func_quad = x;
return __result_func_quad;
}
So it seems there is nothing "special" going on: the result is not being passed back through a struct, pointer, or array to handle the quad type.