> how can I assign a value > 127
The result of converting an out-of-range integer value to a signed integer type is either an implementation-defined result or an implementation-defined signal (6.3.1.3/3). So your code is legal C; it just doesn't have the same behavior on all implementations.
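For what it's worth, here's a minimal sketch (mine, not from the question) of what that looks like in practice. gcc documents its choice: an out-of-range value is reduced modulo 2^N for a type of width N, so on a typical two's-complement machine you just get wraparound, but another implementation is allowed to give a different value or raise a signal:

    #include <stdio.h>

    int main(void)
    {
        signed char c = 255;   /* out of range when SCHAR_MAX is 127 */
        /* Implementation-defined result: gcc reduces the value modulo 2^8,
           so this typically prints -1; other compilers may give a different
           value or raise an implementation-defined signal. */
        printf("%d\n", (int)c);
        return 0;
    }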
> without getting an overflow warning
It's entirely up to GCC to decide whether to warn or not about valid code. I'm not quite sure what its rules are, but I get a warning for initializing a `signed char` with `256`, but not with `255`. I guess that's because a warning for code like `char a = 0xFF` would normally not be wanted by the programmer, even when `char` is signed. There is a portability issue, in that the same code on another compiler might raise a signal or result in the value `0` or `23`.
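A quick sketch of that default behavior (the warning text and exact thresholds vary between gcc versions, so treat the comments as approximate):

    /* Compiled with plain gcc, no extra flags, on a target where char is signed. */
    signed char a = 255;    /* out of range, but silent by default     */
    signed char b = 0xFF;   /* same value written in hex, also silent  */
    signed char c = 256;    /* warns, e.g. "overflow in implicit constant conversion" */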
`-pedantic` enables a warning for this (thanks, pmg), which makes sense since `-pedantic` is intended to help write portable code. Or arguably it doesn't make sense, since as R.. points out it's beyond the scope of merely putting the compiler into standard-conformance mode. However, the man page for gcc says that `-pedantic` enables diagnostics required by the standard. This one isn't, but the man page also says:
    Some users try to use -pedantic to check programs for strict ISO C
    conformance. They soon find that it does not do quite what they want:
    it finds some non-ISO practices, but not all---only those for which
    ISO C requires a diagnostic, and some others for which diagnostics
    have been added.
This leaves me wondering what a "non-ISO practice" is, and suspecting that `char a = 255` is one of the ones for which a diagnostic has been specifically added. Certainly "non-ISO" means more than just things for which the standard demands a diagnostic, but gcc obviously is not going so far as to diagnose all non-strictly-conforming code of this kind.
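So, going by the behavior described above, something like the following is accepted silently by default but picks up a diagnostic once `-pedantic` is added (the file name is just for illustration, and the exact message depends on your gcc version):

    /* gcc -c demo.c             -> no warning
       gcc -pedantic -c demo.c   -> warning about the out-of-range initializer */
    char a = 255;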
I also get a warning for initializing an `int` with `((long long)UINT_MAX) + 1`, but not with `UINT_MAX`. It looks as if by default gcc gives you one extra power of 2 for free (values that still fit in the corresponding unsigned type), but after that it thinks you've made a mistake.
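In code, that's something like this (assuming the usual 32-bit `int` and `unsigned int`):

    #include <limits.h>

    int x = UINT_MAX;                   /* out of range for int, but silent by default   */
    int y = ((long long)UINT_MAX) + 1;  /* one power of 2 further out: warns by default  */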
Use `-Wconversion` to get a warning about all of those initializations, including `char a = 255`. Beware that it will give you a boatload of other warnings that you may or may not want.
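For example, the "free" cases above stop being free under `-Wconversion` (again assuming a signed `char`):

    #include <limits.h>

    /* gcc -Wconversion -c demo.c */
    char a = 255;       /* now warned about: the value changes when converted to char */
    int  x = UINT_MAX;  /* likewise warned about */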
> all this implicitness wouldn't make it easier to understand
You'll have to take that up with Dennis Ritchie. C is weakly typed as far as arithmetic types are concerned. They all implicitly convert to each other, with various levels of bad behavior when the value is out of range, depending on the types involved. Again, `-Wconversion` warns about the dangerous ones.
There are other design decisions in C that mean the weakness is quite important for avoiding unwieldy code. For example, the fact that arithmetic is always done in at least an `int` means that `char a = 1, b = 2; a = a + b;` involves an implicit conversion from `int` to `char` when the result of the addition is assigned to `a`. If you use `-Wconversion`, or if C didn't have the implicit conversion at all, you'd have to write `a = (char)(a+b)`, which wouldn't be too popular. For that matter, `char a = 1` and even `char a = 'a'` are both implicit conversions from `int` to `char`, since C has no literals of type `char`. So if it weren't for all those implicit conversions, either various other parts of the language would have to be different, or else you'd have to absolutely litter your code with casts. Some programmers want strong typing, which is fair enough, but you don't get it in C.
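To make that last point concrete, here's a small sketch; with `-Wconversion` the plain assignment draws a warning, and the way to silence it is to write the cast yourself:

    char a = 1, b = 2;

    a = a + b;            /* a + b is computed as int; assigning it back to char is an
                             implicit narrowing conversion, which -Wconversion flags   */
    a = (char)(a + b);    /* the explicit cast does the same thing, but keeps
                             -Wconversion quiet, at the cost of cluttering the code    */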