31 votes

What precision does numpy.float128 map to internally? Is it __float128 or long double? Or something else entirely?

A potential follow-on question, if anybody knows: is it safe in C to cast a __float128 to a (16 byte) long double, with just a loss of precision? (This is for interfacing with a C lib that operates on long doubles.)

Edit: In response to the comment, the platform is 'Linux-3.0.0-14-generic-x86_64-with-Ubuntu-11.10-oneiric'. Now, if numpy.float128 has varying precision depending on the platform, that is also useful knowledge for me!

Just to be clear, it is the precision I am interested in, not the size of an element.
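For reference, a quick way to probe this on a given machine (assuming the build exposes float128 at all) seems to be numpy.finfo, which reports the mantissa size separately from the storage size:

    import platform
    import numpy as np

    print(platform.platform())

    # finfo describes the precision of the type itself; itemsize is the
    # storage size per element, which may include padding.
    info = np.finfo(np.float128)
    print(info.nmant)                       # bits in the mantissa
    print(info.precision)                   # approximate decimal digits
    print(np.dtype(np.float128).itemsize)   # bytes per element, padding included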

"The versions with a number following correspond to whatever words are available on the specific platform you are using which have at least that many bits in them" seems clear. 128 bits. What was confusing about that? It is platform specific and you didn't list a platform, making it impossible to answer your question as asked. Please update the question with the exact Python platform information. Hint: there's a platform package. - S.Lott
"seems clear" -- assuming it also says what happens when no such type is available on the specific platform. - Steve Jessop
I've been assuming that the numpy precision is platform independent, so information to the contrary is certainly useful. I would assume that float128 maps to something like __float128 internally, but long double is also 128 bits on my system, so it could reasonably be that. - Henry Gomersall
"assuming that the numpy precision is platform independent"? Why? The documentation is quite clear that it's not platform independent. The precision depends on the size of the element. 64 bits is one precision. 128 bits is a different precision. Both are documented in the IEEE floating-point specifications. The question you need to ask is "how do I figure out which size my particular numpy is using?" - S.Lott
What? The question is referring to numpy.float128. Does the precision of that change across platforms? I do appreciate that not all platforms offer that dtype, but it's not so silly to assume that those that do, define it the same way. Would you be so good as to point me to docs that might contradict that? This page doesn't even refer to float128 (but does nicely define those types it does document). I find it reasonable that the type maps to IEEE 754 quadruple type, and that's what I'm trying to confirm (or not). - Henry Gomersall

2 Answers

12 votes

It's strongly recommended to use longdouble instead of float128, since float128 is quite a mess at the moment. A value passed in from Python gets cast to float64 during initialisation.

Inside numpy, it can be a double or a long double. It's defined in npy_common.h and depends on your platform. I don't know if you can include that header out-of-the-box in your own source code.
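If in doubt, you can check at runtime whether longdouble actually carries more precision than double on your build (it won't on, say, MSVC):

    import numpy as np

    # Compare mantissa sizes rather than itemsizes, since the itemsize of
    # longdouble can include alignment padding.
    wider = np.finfo(np.longdouble).nmant > np.finfo(np.float64).nmant
    print("longdouble is wider than double:", wider)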

If you don't need performance in this part of your algorithm, a safer way could be to export the value to a string and use strtold afterwards.
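As a sketch of the string route on the Python side (the C side would parse the same text with strtold), assuming a platform where longdouble is wider than double:

    import numpy as np

    # Going through a Python float truncates to 64 bits; parsing a string
    # keeps the full long double precision.
    via_float = np.longdouble(0.1)     # already rounded to float64
    via_str   = np.longdouble("0.1")   # parsed at long double precision
    print(via_float == via_str)        # False where long double > double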

49 votes

numpy.longdouble refers to whatever type your C compiler calls long double. Currently, this is the only extended precision floating point type that numpy supports.

On x86-32 and x86-64, this is an 80-bit floating point type. On more exotic systems it may be something else (IIRC on Sparc it's an actual 128-bit IEEE float, and on PPC it's double-double). (It also may depend on what OS and compiler you're using -- e.g. MSVC on Windows doesn't support any kind of extended precision at all.)

Numpy will also export some name like numpy.float96 or numpy.float128. Which of these names is exported depends on your platform/compiler, but whatever you get always refers to the same underlying type as longdouble.

Also, these names are highly misleading. They do not indicate a 96- or 128-bit IEEE floating point format. Instead, they indicate the number of bits of alignment used by the underlying long double type. So e.g. on x86-32, long double is 80 bits, but gets padded up to 96 bits to maintain 32-bit alignment, and numpy calls this float96. On x86-64, long double is again the identical 80-bit type, but now it gets padded up to 128 bits to maintain 64-bit alignment, and numpy calls this float128. There's no extra precision, just extra padding.
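You can see the padding for yourself with finfo (the values in the comments assume a typical x86-64 Linux build and will differ elsewhere):

    import numpy as np

    ld = np.finfo(np.longdouble)
    print(np.longdouble)                     # <class 'numpy.float128'> here
    print(np.dtype(np.longdouble).itemsize)  # 16 bytes of storage...
    print(ld.nmant)                          # ...but only 63 mantissa bits
    print(ld.eps)                            # ~1.08e-19: 80-bit precision,
                                             # nowhere near IEEE quad (~1.9e-34)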

Recommendation: ignore the float96/float128 names and just use numpy.longdouble. Or better yet, stick to plain doubles unless you have a truly compelling reason; they'll be faster, more portable, and so on.