0
votes

I have a fairly basic question about the IEEE-754 standard, which defines how floating-point numbers are encoded and stored on a computer.

At university (in exams) I have come across the following definition for the 16-bit IEEE-754 format (half precision): 1 sign bit, 6 exponent bits, and 9 mantissa bits.

An internet search (and books) reveals another definition: 1 sign bit, 5 exponent bits, and 10 mantissa bits.

I'm asking because I cannot believe the university would make such a simple mistake, so: are there multiple definitions for numbers in the 16-bit IEEE-754 format?

1
The standards committee had a rational reason for 8 and 11 bits of exponent size for single and double. Their intent was to prescribe a formula for extending upward to quad (and beyond) and downward into half. Alas, I don't remember the rationale, but I suspect there is a reason why 5 (or 6) would be "right" and 6 (or 5) would be "not quite as good". – Rick James

1 Answer

1
votes

Conforming to an IEEE standard is voluntary. People are free to use other formats. The IEEE-754 standard specifies a binary16 format that uses 1 bit for the sign, 5 bits for the exponent, and 10 bits for the primary significand encoding.
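As an illustration, here is a minimal Python sketch of how a 16-bit pattern could be decoded under that layout; the decode_binary16 helper is made up for this example, and it assumes the standard conventions: an exponent bias of 15, an implicit leading 1 for normal numbers, and the all-ones exponent reserved for infinities and NaNs.

    def decode_binary16(bits):
        # Layout: 1 sign bit | 5 exponent bits | 10 significand bits
        sign = (bits >> 15) & 0x1
        exponent = (bits >> 10) & 0x1F   # 5-bit exponent field, bias 15
        fraction = bits & 0x3FF          # 10-bit trailing significand

        if exponent == 0x1F:             # all ones: infinity or NaN
            return float('nan') if fraction else (-1.0) ** sign * float('inf')
        if exponent == 0:                # subnormal: no implicit leading 1
            value = (fraction / 2 ** 10) * 2 ** -14
        else:                            # normal: implicit leading 1
            value = (1 + fraction / 2 ** 10) * 2 ** (exponent - 15)
        return (-1.0) ** sign * value

    print(decode_binary16(0x3C00))   # 1.0
    print(decode_binary16(0x7BFF))   # 65504.0, the largest finite binary16 value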

People may use other formats because they want more precision in the significand or more range in the exponent; with only 16 bits, one comes at the cost of the other.
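For a rough sense of that trade-off, the sketch below compares the standard 5/10 split with the 6/9 split from the exam, assuming IEEE-like conventions (implicit leading bit, bias of 2^(e−1) − 1, all-ones exponent reserved for infinities/NaNs); the function name is hypothetical.

    def format_properties(exp_bits, frac_bits):
        # Largest finite value and precision (in bits) of an IEEE-754-style format
        bias = 2 ** (exp_bits - 1) - 1
        max_exp = (2 ** exp_bits - 2) - bias            # largest usable exponent
        max_val = (2 - 2 ** -frac_bits) * 2 ** max_exp
        return max_val, frac_bits + 1                    # +1 for the implicit bit

    print(format_properties(5, 10))   # standard binary16: (65504.0, 11)
    print(format_properties(6, 9))    # 6/9 layout: (~4.29e9, 10) -- more range, less precision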

Textbooks and academic exercises often use non-standard formats to make students reason about the encoding themselves rather than look up answers or learn existing formats by rote.

If the hardware you are using supports a 16-bit floating-point format, the binding specification for that format is in the hardware documentation, not in the IEEE-754 standard.