I was reading Kip Irvine's book on x86 assembly programming when, in the explanation of signed integers, it was mentioned that the MSB (Most Significant Bit) is used as the sign bit: 0 for positive and 1 for negative, if I am not wrong. However, I cannot understand why the MSB is used to denote the sign. If the LSB were used instead, IMHO, a larger number could be stored in the same number of bits. Is it because the LSB, that is the first bit (the bit at the zeroth position), is necessary for representing odd numbers?
1 Answer
Having the MSB represent the sign allows unsigned and signed addition to be performed with exactly the same set of hardware (transistors).
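Here is a minimal C sketch (my own illustration, not from the book) of why that works: adding the raw 16-bit patterns with a single unsigned adder gives a result that is correct under both the unsigned and the two's-complement signed reading.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t a = 0xFFFF;                  /* 65535 unsigned, -1 signed   */
    uint16_t b = 0x0003;                  /* 3 in both interpretations   */

    uint16_t raw_sum = (uint16_t)(a + b); /* one adder, one bit pattern  */

    printf("unsigned view: %u + %u = %u\n",
           (unsigned)a, (unsigned)b, (unsigned)raw_sum);
    /* Reinterpreting as int16_t is implementation-defined in C, but on
     * ordinary two's-complement targets this prints -1 + 3 = 2.        */
    printf("signed view  : %d + %d = %d\n",
           (int16_t)a, (int16_t)b, (int16_t)raw_sum);
    return 0;
}
```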
This is related to the fact that signed integers use the same bit pattern as unsigned integers for non-negative values, which wouldn't be the case if you put the sign bit anywhere else. It means operations like C's (unsigned)my_intvar are free, instead of needing a shift or rotate instruction, or some special conversion instruction.
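A short sketch of that point (again my own example, not from the answer): casting a non-negative signed value to unsigned leaves the bit pattern untouched, so the compiler emits no instruction for it.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    int32_t  s = 12345;            /* non-negative signed value            */
    uint32_t u = (uint32_t)s;      /* the "free" cast: same bits, no shift */

    printf("signed   : %d (0x%08X)\n", s, (unsigned)s);
    printf("unsigned : %u (0x%08X)\n", u, u);
    printf("same bits: %s\n", memcmp(&s, &u, sizeof s) == 0 ? "yes" : "no");
    return 0;
}
```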
AFAIK, this is the most logical and least surprising position for the sign bit. Indeed, if one wanted a 16-bit signed format in which the LSB is the sign bit and the value 65534 can still be encoded (the remaining 15 bits keeping their usual weights 2 through 32768), one would have to give up the odd numbers.
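To make that concrete, here is a hypothetical decoder for the scheme the question proposes (the function name lsb_sign_decode is made up for illustration): with bit 0 as the sign and bits 1..15 at their usual weights 2..32768, the largest magnitude is indeed 65534, but every representable value is even.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical "LSB is the sign" format: bit 0 = sign, bits 1..15 keep
 * their usual weights 2,4,...,32768.                                    */
static int32_t lsb_sign_decode(uint16_t bits) {
    int32_t magnitude = bits & 0xFFFE;        /* only even magnitudes     */
    return (bits & 1) ? -magnitude : magnitude;
}

int main(void) {
    printf("%d\n", lsb_sign_decode(0xFFFE));  /* +65534, the maximum      */
    printf("%d\n", lsb_sign_decode(0xFFFF));  /* -65534                   */
    printf("%d\n", lsb_sign_decode(0x0003));  /* -2: no pattern yields 1  */
    return 0;
}
```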
From a mathematical point of view it also makes sense to think of each bit as having a weight: 1, 2, 4, ..., 16384, 32768 for unsigned 16-bit values, but 1, 2, 4, ..., 16384, -32768 for signed values. This simplifies the analysis of hardware algorithms, allowing e.g. signed and unsigned multiplication to share the vast majority of logic.
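A small sketch of that weight view (my own, not from the answer): summing each set bit with weight 2^i, but giving the MSB the weight -32768, reproduces the value of an int16_t.

```c
#include <stdint.h>
#include <stdio.h>

/* Weighted-sum view of a 16-bit two's-complement value: bit i contributes
 * 2^i, except the MSB, which contributes -32768.                         */
static int32_t weighted_value(uint16_t bits) {
    int32_t value = 0;
    for (int i = 0; i < 15; i++)
        if (bits & (1u << i))
            value += (int32_t)1 << i;         /* weights 1,2,...,16384 */
    if (bits & 0x8000)
        value -= 32768;                       /* MSB weight is -32768  */
    return value;
}

int main(void) {
    uint16_t bits = 0x8001;                   /* MSB set plus bit 0    */
    printf("weighted sum : %d\n", weighted_value(bits));  /* -32767    */
    /* Implementation-defined in C, but -32767 on two's-complement targets. */
    printf("int16_t view : %d\n", (int16_t)bits);
    return 0;
}
```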