2
votes

I was reading Kip Irvine's book on x86 assembly programming, and in its explanation of signed integers it says that the MSB (Most Significant Bit) is used as the sign bit: 0 for positive and 1 for negative, if I am not wrong. However, I cannot understand why the MSB is used to denote the sign. If the LSB were used instead, IMHO, a larger number could be stored in the same number of bits. Is it because the LSB, i.e. the bit at position zero, is necessary for representing odd numbers?

Does it matter which bit it is? The thing is that you have 31 bits for the number regardless of where they are, which means the value <= 2^31-1. – Victor
"If LSB was used instead, IMHO, a larger number could have been stored in the same number of bits" - are you giving up the ability to store odd numbers then, to free up the 2^0 place-value position for use instead as part of a sign/magnitude representation? – Peter Cordes
The LSB can be used as the sign bit, but that is typically done to allow variable-length coding of negative values by "folding" all negative values in between the positive ones (sketched below). However, this does not increase the absolute value of the largest representable signed number, which is still 128 for 8 bits and 32768 for 16 bits. The downside is that arithmetic becomes (much) more complicated in that format. – Aki Suihkonen
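
For the curious, here is a minimal sketch of that folding, in the style of the ZigZag encoding used by Protocol Buffers (the function names are mine, for illustration only):

```c
#include <stdint.h>
#include <stdio.h>

/* "Fold" negatives between positives: 0 -> 0, -1 -> 1, 1 -> 2,
   -2 -> 3, 2 -> 4, ...  The sign ends up in the LSB of the code. */
static uint32_t zigzag_encode(int32_t n) {
    /* n >> 31 relies on arithmetic right shift for negatives,
       which every mainstream compiler provides */
    return ((uint32_t)n << 1) ^ (uint32_t)(n >> 31);
}

static int32_t zigzag_decode(uint32_t z) {
    return (int32_t)(z >> 1) ^ -(int32_t)(z & 1);
}

int main(void) {
    for (int32_t n = -3; n <= 3; n++)
        printf("%3d -> %u -> %d\n",
               (int)n, (unsigned)zigzag_encode(n), (int)zigzag_decode(n));
    return 0;
}
```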

1 Answer

5
votes

Having the MSB represent the sign allows unsigned and signed addition to be performed with exactly the same hardware (the same transistors).
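
A small demonstration of this, assuming a two's-complement target (which every mainstream platform is): the adder produces a single bit pattern, and only the interpretation of the result differs.

```c
#include <stdint.h>
#include <stdio.h>

/* One 8-bit adder serves both interpretations: the sum's bit
   pattern is identical whether the inputs are read as signed
   or unsigned. */
int main(void) {
    uint8_t a = 0xF0, b = 0x05;       /* unsigned: 240 and 5 */
    uint8_t sum = (uint8_t)(a + b);   /* same bits either way */

    printf("unsigned: %u + %u = %u\n",
           (unsigned)a, (unsigned)b, (unsigned)sum);   /* 240 + 5 = 245 */
    /* reinterpreting the same bits as int8_t wraps on every
       mainstream compiler (mandated since C23) */
    printf("signed:   %d + %d = %d\n",
           (int8_t)a, (int8_t)b, (int8_t)sum);         /* -16 + 5 = -11 */
    return 0;
}
```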

This is related to the fact that signed integers use the same bit pattern as unsigned for non-negative values, which wouldn't be the case if you put the sign bit anywhere else. It means operations like C's `(unsigned)my_intvar` cast are free, instead of needing a shift or rotate instruction, or some special conversion instruction.
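
To illustrate (again assuming a two's-complement target): the cast changes the interpretation, not the bits, so the compiler emits no instructions for it.

```c
#include <stdio.h>
#include <string.h>

/* The (unsigned) cast costs nothing: a negative int and its
   unsigned counterpart share the same bit pattern on a
   two's-complement machine. */
int main(void) {
    int i = -1;
    unsigned u = (unsigned)i;        /* value becomes UINT_MAX */

    unsigned bits;
    memcpy(&bits, &i, sizeof bits);  /* reinterpret the raw bits */
    printf("u = %u, raw bits = %u, identical: %d\n", u, bits, u == bits);
    return 0;
}
```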


AFAIK, this is the most logical and least surprising position for the sign bit. Indeed, if one wanted a 16-bit signed format to be able to encode the value 65534, one would have to give up the odd numbers (sketched below).
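
To make that concrete, here is a hypothetical encoder/decoder for such a sign-in-the-LSB format, entirely made up for illustration: bit 0 holds the sign, and bits 15..1 keep their usual weights 2^15 .. 2^1, so only even magnitudes 0, 2, ..., 65534 are representable.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 16-bit format: bit 0 = sign, bits 15..1 = magnitude
   with their usual place values.  v must be even, |v| <= 65534. */
static uint16_t lsb_sign_encode(int32_t v) {
    uint16_t sign = (v < 0);
    uint32_t mag  = (uint32_t)(v < 0 ? -v : v);  /* even: low bit is free */
    return (uint16_t)(mag | sign);
}

static int32_t lsb_sign_decode(uint16_t x) {
    int32_t mag = x & 0xFFFE;                    /* strip the sign bit */
    return (x & 1) ? -mag : mag;
}

int main(void) {
    printf("%d\n", lsb_sign_decode(lsb_sign_encode(65534)));  /* 65534 fits */
    printf("%d\n", lsb_sign_decode(lsb_sign_encode(-2)));     /* but 3 would not */
    return 0;
}
```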

From a mathematical point of view it also makes more sense to think of each bit as having a different weight: 1, 2, 4, ..., 16384, 32768 for unsigned 16-bit values, but 1, 2, 4, ..., 16384, -32768 for signed values. This simplifies the analysis of hardware algorithms, allowing e.g. signed and unsigned multiplication to share the vast majority of their logic.
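
A toy evaluator makes the weight interpretation concrete (assuming two's complement for the signed case):

```c
#include <stdint.h>
#include <stdio.h>

/* Evaluate a 16-bit pattern from per-bit weights: bits 0..14 weigh
   2^0 .. 2^14 in both cases; bit 15 weighs +32768 when unsigned
   but -32768 when signed (two's complement). */
static int32_t weighted_value(uint16_t x, int is_signed) {
    int32_t v = 0;
    for (int i = 0; i < 15; i++)
        if (x & (1u << i)) v += (int32_t)1 << i;
    if (x & 0x8000) v += is_signed ? -32768 : 32768;
    return v;
}

int main(void) {
    uint16_t x = 0xFFFE;
    printf("unsigned: %d\n", weighted_value(x, 0));  /* 65534 */
    printf("signed:   %d\n", weighted_value(x, 1));  /* -2    */
    return 0;
}
```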