In section 7.18.1.1 paragraph 1 of the C99 standard:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation.
According to the C99 standard, exact-width signed integer types are required to have a two's complement representation. This means, for example, int8_t has a minimum value of -128 as opposed to the one's complement minimum value of -127.
Section 6.2.6.2 paragraph 2 allows the implementation to decide whether to interpret a sign bit as sign and magnitude, two's complement, or one's complement:
If the sign bit is one, the value shall be modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value -(2^N) (two's complement);
— the sign bit has the value -(2^N - 1) (ones' complement).
The distinction between the methods is important because the minimum value of an integer in two's complement (-128 for 8 bits) lies outside the range of values representable in ones' complement (-127 to 127).
Suppose an implementation defines the plain int type as having a ones' complement representation, while int16_t has the two's complement representation guaranteed by the C99 standard.
int16_t foo = -32768;
int bar = foo;
In this case, would the conversion from int16_t to int cause implementation-defined behavior, since the value held by foo is outside the range of values representable by bar?
An implementation cannot provide both int16_t and an int with a ones' complement signed representation. That is the rationale for C marking these exact-width integer types as optional. - ouah