From the standard (4.7 [conv.integral]) it looks like the conversion from int to unsigned int, when both types use the same number of bits, is purely conceptual:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
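For instance, a minimal sketch of this direction (assuming int and unsigned int are both 32 bits):

```cpp
#include <iostream>

int main() {
    int i = -1;
    // Well-defined: the result is the least unsigned integer congruent
    // to -1 modulo 2^32, i.e. 4294967295 (assuming 32-bit unsigned int).
    unsigned int u = static_cast<unsigned int>(i);
    std::cout << u << '\n';  // prints 4294967295
}
```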
So in this direction the conversion preserves the bit pattern. I am not sure the standard guarantees the same for the conversion from unsigned int to int (again, assuming both types use the same number of bits). Here the standard says:
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
What exactly does "can be represented in the destination type" mean here? For instance, 2^32-1 cannot be represented by a 32-bit int. Does that mean the value cannot be represented in the destination type, and therefore that the bit pattern cannot be assumed to stay the same?
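For concreteness, this is the case I am asking about (again assuming 32-bit types):

```cpp
#include <iostream>

int main() {
    unsigned int u = 4294967295u;  // 2^32 - 1, not representable in a 32-bit int
    // Per the quoted wording this is implementation-defined: on typical
    // two's complement implementations it yields -1 with the same bit
    // pattern, but the standard does not seem to guarantee that.
    int i = static_cast<int>(u);
    std::cout << i << '\n';  // prints -1 on common platforms
}
```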