I'm currently trying to convert the IEEE 754 single-precision hex number 0x805c00f0 to its decimal equivalent, which according to online converters is about -8.44920195816662938E-39.
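As a sanity check, reinterpreting the raw bit pattern as a 32-bit float (a quick Python sketch using the standard struct module) reproduces the converters' value:

```python
import struct

# Pack the 32-bit pattern as a big-endian unsigned int, then
# reinterpret the same four bytes as a big-endian binary32 float.
bits = 0x805C00F0
value = struct.unpack('>f', struct.pack('>I', bits))[0]
print(value)  # -8.449201958166629e-39
```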
Working it out on paper step by step, I get the following:

805c00f0 = 1000 0000 0101 1100 0000 0000 1111 0000

The leftmost 1 means the number is negative. The next eight bits, 0000 0000, give an exponent of -127 after subtracting the bias of 127. That leaves me with bits 101 1100 0000 0000 1111 0000, the mantissa.
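In code, that field split looks like this (a minimal sketch; the shifts and masks just follow the 1/8/23-bit layout of single precision):

```python
bits = 0x805C00F0

sign     = bits >> 31             # 1 -> negative
exponent = (bits >> 23) & 0xFF    # stored exponent field: 0
mantissa = bits & 0x7FFFFF        # the 23 fraction bits

print(sign, exponent, format(mantissa, '023b'))
# 1 0 10111000000000011110000
```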
After restoring the implicit 1, I have -1.101 1100 0000 0000 1111 0000 * 2^-127. Moving the binary point 127 places to the left, I have -0.00(...)1101 1100 0000 0000 1111 0000. Summing the set bits, I get -(2^(-127) + 2^(-128) + 2^(-130) + 2^(-131) + 2^(-132) + 2^(-143) + 2^(-144) + 2^(-145) + 2^(-146)) = -1.01020727331947522E-38. This is not equal to what the converters give me, and I cannot understand why. What am I getting wrong?
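For completeness, here is my hand computation as code, i.e. -(1 + fraction) * 2^-127; it reproduces my figure exactly, so I don't think I've made an arithmetic slip:

```python
bits = 0x805C00F0
mantissa = bits & 0x7FFFFF

# My working: implicit leading 1, exponent = stored 0 minus bias 127.
mine = -(1 + mantissa / 2**23) * 2.0**-127
print(mine)  # -1.0102072733194752e-38
```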