Signed integers in C represent numbers. If `a` and `b` are variables of signed integer types, the standard will never require that a compiler make the expression `a+=b` store into `a` anything other than the arithmetic sum of their respective values. To be sure, if the arithmetic sum would not fit into `a`, the processor might not be able to put it there, but the standard would not require the compiler to truncate or wrap the value, or do anything else for that matter, with values that exceed the limits for their types. Note that while the standard does not require it, C implementations are allowed to trap arithmetic overflows with signed values.
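To make that concrete, here's a minimal sketch; the pre-check idiom is my own suggestion rather than anything the standard prescribes, and GCC/Clang's `-ftrapv` or `-fsanitize=signed-integer-overflow` options can make the permitted trapping visible:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int a = INT_MAX;
    int b = 1;

    /* "a += b" here would overflow: the standard places no requirement
       on the stored result, and an implementation may legally trap
       (compile with -ftrapv or -fsanitize=signed-integer-overflow on
       GCC/Clang to see that in action). */

    /* Defensive pre-check: refuse the addition rather than rely on
       any particular overflow behavior. */
    if (b > 0 ? a > INT_MAX - b : a < INT_MIN - b) {
        puts("a += b would overflow; skipping it");
    } else {
        a += b;
        printf("a = %d\n", a);
    }
    return 0;
}
```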
Unsigned integers in C behave as abstract algebraic rings of integers which are congruent modulo some power of two, except in scenarios involving conversions to, or operations with, larger types. Converting an integer of any size to a 32-bit unsigned type will yield the member of that ring which is congruent to that integer mod 4,294,967,296. The reason subtracting 3 from 2 yields 4,294,967,295 is that adding something congruent to 3 to something congruent to 4,294,967,295 will yield something congruent to 2.
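A small illustration of that ring behavior (assuming a typical implementation where `int` is 32 bits, so no surprise promotions enter the picture):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 2, b = 3;

    /* 2 - 3 cannot be -1 in this ring; the result is the 32-bit
       member congruent to -1 mod 2^32, i.e. 4,294,967,295. */
    uint32_t diff = a - b;

    /* Adding something congruent to 3 back lands on something
       congruent to 2, as promised. */
    uint32_t sum = diff + b;

    printf("2 - 3 = %" PRIu32 "\n", diff);      /* 4294967295 */
    printf("+ 3 again = %" PRIu32 "\n", sum);   /* 2 */
    return 0;
}
```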
Abstract algebraic ring types are often handy things to have; unfortunately, C uses signedness as the deciding factor for whether a type should behave as a ring. Worse, unsigned values are treated as numbers rather than ring members when converted to larger types, and unsigned values smaller than `int` get converted to numbers when any arithmetic is performed upon them. If `v` is a `uint32_t` which equals 4,294,967,294, then `v*=v;` should make `v=4`. Unfortunately, if `int` is 64 bits, then there's no telling what `v*=v;` could do.
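Here's a sketch of both the hazard and one defensive idiom (the `uint64_t` cast is my own workaround, not something the standard singles out):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t v = 4294967294u;   /* 2^32 - 2 */

    /* If int is wider than 32 bits, v is promoted to signed int and
       (2^32 - 2)^2 overflows it: undefined behavior.  If int is 32
       bits or narrower, the multiply stays unsigned and wraps to 4. */
    /* v *= v;  -- portable code cannot rely on this */

    /* Forcing the arithmetic into uint64_t keeps it well-defined
       either way; converting the product back to uint32_t then
       reduces it mod 2^32, yielding 4. */
    v = (uint32_t)((uint64_t)v * v);
    printf("v = %" PRIu32 "\n", v);     /* 4 */
    return 0;
}
```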
Given the standard as it is, I would suggest using unsigned types in situations where one wants the behavior associated with algebraic rings, and signed types when one wants to represent numbers. It's unfortunate that C drew the distinctions the way it did, but they are what they are.