I'm trying to make sense of the arithmetic conversion rules in the C11 standard. The standard specifies the standard integer types, in strictly decreasing order of rank, as:
long long int
long int
int
short int
char
All of these types are signed by default, except for char, which is signed or unsigned depending on the implementation, and each of them has both a signed and an unsigned version. The standard also defines the real floating types as:
long double
double
float
The way I'm reading the standard is that if we are adding, e.g.,
a + b
where a is a real floating type and b is any integer type, then b is converted to the type of a. In other words, if a has type float and b has type long long int, we first convert b to a float and then do the addition. Or am I reading this incorrectly? That is, it doesn't matter what the rank of the integer type is; it's the real floating type that determines which type the integer operand is converted to.
Next, I have trouble following the purely integer case. Assume that a is unsigned and of higher rank than b, which is signed; what happens to b? The standard says that we convert b to the type of the unsigned operand a. How is this done? I see two logical options for this conversion. Say that a is an unsigned long and b is a signed int; then we can do either:
signed int -> unsigned int -> unsigned long
signed int -> signed long -> unsigned long
These could produce different values: if b is negative, the first path adds UINT_MAX+1 to b while the second adds ULONG_MAX+1.
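To make the difference between the two options concrete, this is the kind of experiment I've been trying, spelling out each candidate path by hand with casts (the values in the comments assume a typical LP64 setup with a 32-bit int and a 64-bit unsigned long):

```c
#include <stdio.h>

int main(void)
{
    signed int b = -1;
    unsigned long a = 0;

    /* Option 1: signed int -> unsigned int -> unsigned long
     * (-1 wraps to UINT_MAX first, then that value is widened unchanged). */
    unsigned long opt1 = (unsigned long)(unsigned int)b;

    /* Option 2: signed int -> signed long -> unsigned long
     * (-1 is widened unchanged first, then wraps to ULONG_MAX). */
    unsigned long opt2 = (unsigned long)(signed long)b;

    printf("option 1: %lu\n", opt1);  /* 4294967295 on LP64 */
    printf("option 2: %lu\n", opt2);  /* 18446744073709551615 on LP64 */

    /* What the usual arithmetic conversions actually do for a + b: */
    printf("a + b   : %lu\n", a + b);
    return 0;
}
```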
Finally, what should happen when, say, a is of a signed type of higher rank than b, which is unsigned, yet the value range of b doesn't fit within the type of a? This seems to be the last case the standard covers. I assume this is what you get on, e.g., a 32-bit architecture where int and long have the same representation, so that a signed long can't necessarily accommodate every unsigned int.
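Since this is C11, I've also been using _Generic as a quick way to see which type the compiler actually picks for the result; of course this only probes my own platform, and the TYPE_NAME macro below is just something I wrote for the test:

```c
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),          \
        int:           "int",               \
        unsigned int:  "unsigned int",      \
        long:          "long",              \
        unsigned long: "unsigned long",     \
        default:       "something else")

int main(void)
{
    long a = 0;          /* signed, higher rank  */
    unsigned int b = 0;  /* unsigned, lower rank */

    /* If long can represent every unsigned int (e.g. 64-bit long, 32-bit int),
     * I'd expect "long" here; if it can't (e.g. both are 32 bits), I'd expect
     * both operands to end up as unsigned long. */
    printf("type of a + b: %s\n", TYPE_NAME(a + b));
    return 0;
}
```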
Am I getting this right, or are there parts that I'm interpreting in the wrong way?