2 votes

I'm a beginner to C.

This code does what SCHAR_MIN in <limits.h> does:

#include <stdio.h>

int main(void)
{
    printf("Minimum Signed Char %d\n",-(char)((unsigned char) ~0 >> 1) - 1);
    return 0;
}

This is what I understood: (unsigned char) takes the bits of an unsigned char, which are "0000 0000". ~0 gives the complement of that, which is "1111 1111", and >> 1 turns the leftmost "1" of "1111 1111" into 0, so it gives "0111 1111". Converting "0111 1111" to an integer gives 127, which is the maximum signed char. To get the minimum, we need to negate 127, so we apply - to get -127, and - 1 gives us -128, which is the minimum. Tell me if I misunderstood something.
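A minimal sketch to print each intermediate value (assuming 8-bit char and 32-bit int):

#include <stdio.h>

int main(void)
{
    printf("~0                      = %d\n", ~0);                      /* -1, all bits set */
    printf("(unsigned char) ~0      = %d\n", (unsigned char) ~0);      /* 255, i.e. 0xFF */
    printf("(unsigned char) ~0 >> 1 = %d\n", (unsigned char) ~0 >> 1); /* 127 */
    printf("final value             = %d\n",
           -(char)((unsigned char) ~0 >> 1) - 1);                      /* -128 */
    return 0;
}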

Question:

What's the role of (char) here, right before ((unsigned char) ~0 >> 1)? What does it represent?

The role of (char) is to convert the result of the expression to its right to a char. This is a cast. By the way, do not assume that char is always 8 bits! – fpiette
@fpiette Why would we convert the result to char? If I remove it, it works just fine. What difference does it make? Is it to make the bit size smaller? – user11329352
Remove the - 1 and then try with and without (char). – Eraklon
@Eraklon: The results are the same. – Eric Postpischil
@Eraklon Nothing obvious changed... – user11329352

2 Answers

1 vote
  • The integer constant 0 is of type int, so ~0 results in an int value such as 0xFFFFFFFF (assuming 32-bit int), which is actually a negative value corresponding to decimal -1 in 2's complement.
  • In case we're only interested in the least significant byte, we can mask out that one by casting to unsigned char, ending up with 0xFF. That way we also drop the signed format, for now.
  • Then the >> operator implicitly promotes our temporary unsigned char operand back up to int, but since the value 0xFF (255) fits inside an int, that is the value we get. We end up bit-shifting a signed type, but one that holds a positive value.
  • 0xFF >> 1 gives 0x7F = 127. In fact the whole ((unsigned char) ~0 >> 1) is just a complicated way of typing 127. Because on all normal systems with 8 bit bytes, that's what we get.
  • Now this is explicitly converted to char with a cast. It's still the same value 127.
  • - gives -127. The unary - operator implicitly promotes its char operand to int before negating, so the result is an int.
  • -127 - 1 = -128.
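A minimal sketch tracing these steps and the promotions back to int (assuming 8-bit char and 32-bit int):

#include <stdio.h>

int main(void)
{
    /* ~0 is an int with all bits set, i.e. -1 in two's complement. */
    printf("~0 = %d (sizeof %zu)\n", ~0, sizeof(~0));

    /* The cast keeps only the low byte (0xFF = 255); the shift then
       promotes the unsigned char operand back to int, so 255 >> 1 = 127. */
    printf("(unsigned char) ~0 >> 1 = %d (sizeof %zu)\n",
           (unsigned char) ~0 >> 1, sizeof((unsigned char) ~0 >> 1));

    /* (char) keeps the value 127; unary - promotes the char operand to
       int, giving -127, and -127 - 1 = -128. */
    printf("-(char)((unsigned char) ~0 >> 1) - 1 = %d\n",
           -(char)((unsigned char) ~0 >> 1) - 1);
    return 0;
}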

Note that the signedness of char is implementation-defined. It might as well be unsigned, in which case the cast doesn't make much sense. The implicit promotion by the - gives us a signed int anyway, so the cast to char achieves nothing.
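A quick sketch showing that dropping the (char) cast gives the same result here (assuming 8-bit char):

#include <stdio.h>

int main(void)
{
    /* Both print -128: the operand of unary - is promoted to int either
       way, so the (char) cast does not change the result. */
    printf("%d\n", -(char)((unsigned char) ~0 >> 1) - 1);
    printf("%d\n", -((unsigned char) ~0 >> 1) - 1);
    return 0;
}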

See Implicit type promotion rules for details about integer promotion.

1 vote

This code may have arisen as a pattern that works for all signed integer types, as the cast is necessary for int and wider types, although it is not needed for char in ordinary C implementations.

Consider -(Type)((unsigned Type) ~0 >> 1) - 1. With the (Type) cast, we get the desired negative result. Without it, we would get a wrong positive result: When Type is int, and two's complement is used, (unsigned Type) ~0 >> 1 is an unsigned int with its high bit off and the others on. Applying unary - to this would produce an unsigned int, and so would the subtraction. So the result would be positive, not the desired negative value. With the cast (Type), the value is converted to the signed type before the unary -, so a negative value is produced.
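A sketch of the difference for Type = int (assuming 32-bit int and two's complement):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* With the cast, 0x7FFFFFFF is converted to int before negation,
       so the arithmetic is signed and the result is INT_MIN. */
    int with_cast = -(int)((unsigned int) ~0 >> 1) - 1;
    printf("with cast:    %d\n", with_cast);      /* -2147483648 */

    /* Without the cast, the operand stays unsigned int, so unary - and
       the subtraction wrap modulo 2^32, and the result is a large
       positive unsigned value rather than INT_MIN. */
    unsigned int without_cast = -((unsigned int) ~0 >> 1) - 1;
    printf("without cast: %u\n", without_cast);   /* 2147483648 */

    printf("INT_MIN:      %d\n", INT_MIN);
    return 0;
}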

Thus, this code likely arose by using the pattern -(Type)((unsigned Type) ~0 >> 1) - 1 for all the signed integer types. Even though the cast is not necessary for char or short in ordinary C implementations, it remains there because simple substitutions were made when drafting the code from a pattern.

This also explains why the cast is to char instead of signed char: It was a simple substitution of long long int, long int, int, short, and char for Type, neglecting the special treatment of char versus signed char. (This is not a problem in ordinary C implementations where char is narrower than int, but using signed char would make the code work in exotic C implementations where char is the same width as int.)

Inspecting the limits.h file you found this in would likely show the same pattern for the other types.
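For illustration, a hypothetical version of that pattern for several signed types (not the actual contents of any particular limits.h, and assuming two's complement) might look like this:

#include <stdio.h>
#include <limits.h>

#define MY_SCHAR_MIN (-(char)((unsigned char) ~0 >> 1) - 1)
#define MY_SHRT_MIN  (-(short)((unsigned short) ~0 >> 1) - 1)
#define MY_INT_MIN   (-(int)((unsigned int) ~0 >> 1) - 1)
#define MY_LONG_MIN  (-(long)((unsigned long) ~0 >> 1) - 1)

int main(void)
{
    /* Compare the pattern's results with the real <limits.h> values. */
    printf("%d %d\n",   MY_SCHAR_MIN, SCHAR_MIN);
    printf("%d %d\n",   MY_SHRT_MIN,  SHRT_MIN);
    printf("%d %d\n",   MY_INT_MIN,   INT_MIN);
    printf("%ld %ld\n", MY_LONG_MIN,  LONG_MIN);
    return 0;
}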