4
votes

Why is the output fffffffa rather than 0000000a for this code?

    char c = 0xaa;
    int b = (int)c;
    b = b >> 4;
    printf("%x", b);

What I thought was that char c = 0xaa would hold aa, and that when it is typecast to int it would become 000000aa.

Can anyone tell me what is happening when the char is typecast to an integer?

3
Already your assignment of a value larger than 127 to a char may lead to implementation-defined behavior. Don't do that; assign character constants to plain char. To put a hexadecimal value in there you may use '\xaa' as in your example, and then there is a better chance that the compiler will warn you when you exceed the bounds. – Jens Gustedt

3 Answers

4
votes

int is signed, so the conversion is sign-extending. Consider the binary representations.

0xAA = 10101010

char is often signed, so when you cast to the (signed by default) int, the leading 1 bit means it's interpreted as a negative two's-complement number:

((int) ((signed char)0xAA) ) = 11111111111111111111111110101010

To avoid this, use an unsigned char or an unsigned int.

1
votes

The char type of your compiler is signed, so when it's converted to int it is sign-extended since the highest bit is set.

Then the right-shift operator preserves the negativeness on most compilers, shifting in new ones at the top (an arithmetic shift). Strictly speaking, right-shifting a negative value is implementation-defined in C, so don't rely on it.

0
votes

This is dependent on the architecture (CPU) as well as the compiler.

In your case, my guess is that the value of variable c is placed in a CPU register where only the lower 8 bits are defined. When it is cast to int and copied to another variable, the bits with "undefined" values get copied too, and since they are now part of an integer value, they are treated as valid.

To overcome this, you may want to copy the c value like this:

int b = (int)c & 0xff;