I recently read that signed integer overflow in C and C++ causes undefined behavior:
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
I am currently trying to understand the reason for the undefined behavior here. I thought it occurs because the integer starts corrupting the memory around itself once the value gets too big to fit into the underlying type.
So I decided to write a little test program in Visual Studio 2015 to test that theory with the following code:
#include <stdio.h>
#include <limits.h>
#include <string.h> // memset lives here; the snippet originally missed this include

struct TestStruct
{
    char pad1[50];
    int testVal;
    char pad2[50];
};

int main()
{
    TestStruct test;
    memset(&test, 0, sizeof(test));

    for (test.testVal = 0; ; test.testVal++)
    {
        if (test.testVal == INT_MAX)
            printf("Overflowing\r\n");
    }

    return 0;
}
I used a structure here to defeat any protective measures of Visual Studio in debug builds, such as guard padding around stack variables.
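To double-check that pad1 and pad2 really end up directly around testVal (and that the compiler did not insert unexpected extra padding between the members), the member offsets can be printed with offsetof. A minimal sketch, separate from the program above:

#include <stdio.h>
#include <stddef.h>

struct TestStruct
{
    char pad1[50];
    int testVal;
    char pad2[50];
};

int main()
{
    // offsetof reveals where each member sits inside the struct;
    // with a char[50] before an int, testVal typically lands at
    // offset 52 due to alignment, still snugly between the two pads.
    printf("pad1 at %zu\n", offsetof(TestStruct, pad1));
    printf("testVal at %zu\n", offsetof(TestStruct, testVal));
    printf("pad2 at %zu\n", offsetof(TestStruct, pad2));
    return 0;
}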
The endless loop should cause several overflows of test.testVal, and indeed it does, though without any consequences other than the overflow itself.
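For reference, the overflow itself can be isolated without the struct. A minimal sketch: what it prints is not guaranteed by the standard, but on typical two's-complement hardware the value simply wraps around:

#include <stdio.h>
#include <limits.h>

int main()
{
    int x = INT_MAX;
    x++;               // signed overflow: undefined behavior
    printf("%d\n", x); // commonly prints INT_MIN on two's-complement
                       // machines, but the standard promises nothing
    return 0;
}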
I took a look at the memory dump while running the overflow tests, with the following result (test.testVal had the memory address 0x001CFAFC):
0x001CFAE5 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x001CFAFC 94 53 ca d8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
As you can see, the memory around the continuously overflowing int remained "undamaged". I ran this several times with similar output; the memory around the overflowing int was never damaged.
What happens here? Why is no damage done to the memory around the variable test.testVal? How can this cause undefined behavior?
I am trying to understand my mistake and why there is no memory corruption during an integer overflow.
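For completeness, here is the kind of code where the undefined behavior can become visible even without any memory corruption: because the compiler may assume signed overflow never happens, an optimizer is allowed to fold the comparison below to a constant. This is only a sketch; alwaysBigger is my own name, and whether the fold actually happens depends on the compiler and optimization level:

#include <stdio.h>
#include <limits.h>

// The compiler may assume x + 1 never overflows, so it is allowed
// to optimize this whole expression to "true", even for x == INT_MAX.
int alwaysBigger(int x)
{
    return x + 1 > x;
}

int main()
{
    // May print 1 under optimization even though INT_MAX + 1 "wraps";
    // without optimization it may print 0 instead.
    printf("%d\n", alwaysBigger(INT_MAX));
    return 0;
}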