I cannot find anything in the C standard that justifies the following:

int n = -0x80000000;  // set n to -2^31
Assume an implementation where int is 32 bits. The apparent problem is that the integer constant has type unsigned int: 0x80000000 does not fit in int, and for hexadecimal constants the next candidate type is unsigned int, per the table in the committee draft standard at 6.4.4.1 paragraph 5. The negation is then calculated according to 6.5.3.3 paragraph 3:
The result of the unary - operator is the negative of its (promoted) operand. The integer promotions are performed on the operand, and the result has the promoted type.
Performing the integer promotions does not change the type (unsigned int stays unsigned int). Then the negative is taken. Since the result retains the promoted type, it is reduced modulo 2^32, producing 2^31 (so the negation has no effect).
Assigning an out-of-range value to type int is covered by the following:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. 60)
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
So, in the end, we get implementation-defined behavior even though the value we are trying to give the int object, -2^31, is perfectly representable in int (assuming 2's complement with no trap representation): the initializer expression actually evaluates to 2^31, which is not.
The following, by contrast, would be guaranteed by the standard to have the expected result:

int n = -(long long)0x80000000;  // set n to -2^31
So, do you really need to cast up to validly make an in range assignment, or am I missing something?
Comments:

int can be 16-bit. – chux - Reinstate Monica

int n = -0x80000000 makes no sense. If you write really weird code, you will often trigger really weird C standard behavior as well. – Lundin