I'm seeing behaviour I don't expect when compiling this code with different optimization levels in gcc.
The function test should fill a 64-bit unsigned integer with ones, shift it left by shift_size bits, and return the low 32 bits as a 32-bit unsigned integer.
When I compile with -O0 I get the results I expect.
When I compile with -O2 I do not, if I shift by 32 bits or more.
In fact, I get exactly the results I'd expect if I were shifting a 32-bit integer on x86, where a shift count greater than or equal to the bit width is reduced to just its 5 low bits.
But I'm shifting a 64-bit number, so any shift < 64 should be legal, right?
I assume it's a bug in my understanding and not in the compiler, but I haven't been able to figure it out.
My machine: gcc (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5 i686-linux-gnu
    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    uint32_t test(unsigned int shift_size) {
        uint64_t res = 0;
        res = ~res;
        res = res << shift_size; /* shift_size < uint64_t width, so this should work */
        return res;              /* implicit conversion to uint32_t */
    }

    int main(int argc, char *argv[])
    {
        int dst;
        sscanf(argv[1], "%d", &dst); /* get arg from outside so the optimizer doesn't eat everything */
        printf("%" PRIu32 "l\n", test(dst));
        return 0;
    }
Usage:
$ gcc -Wall -O0 test.c
$ ./a.out 32
0l
$ gcc -Wall -O2 test.c
$ ./a.out 32
4294967295l
Comments:
You can examine the generated assembly (-S for gcc). – Greg Hewgill
I get the expected result (0l) on all optimisation levels from -O0 to -O3 using gcc 4.2.1, so I suspect you might have found a gcc bug. – Paul R