Let's say we have a bunch of numbers that increase by small amounts from a large offset,
e.g. offset = 123456789.
Our numbers are: 123456790, 123456791, 123456793, 123456796, 123456799, 123456804.
If we subtract the offset from these numbers we get 1, 2, 4, 7, 10, 15.
Each number will be stored alongside 8 bytes of other data, making 12 bytes per record, and then a group of 10000 of these records will be compressed in one chunk.
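For concreteness, a single record might look something like this (little-endian byte order and a zero payload are just assumptions for illustration, not the real format):

    import struct

    payload = b"\x00" * 8  # stand-in for the 8 bytes of other data

    # one record: 4-byte (32-bit) number + 8-byte payload = 12 bytes
    record = struct.pack("<I", 123456790) + payload
    assert len(record) == 12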
So if we are storing these numbers as 32-bit integers and compressing them, will the second set compress any better? Or, because they contain the same amount of entropy, will they compress exactly the same?
My workmate's immediate response was that the second set will compress better, as the 32-bit representations in the second set will contain a lot of zero bytes. However, the entropy is the same (I think), so will a typical compression algorithm not figure this out anyway and produce a similar compression ratio?
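One quick way to reason about it is to dump the actual bytes the compressor would see (again assuming little-endian storage on my part):

    import struct

    OFFSET = 123456789
    nums = [123456790, 123456791, 123456793, 123456796, 123456799, 123456804]

    for n in nums:
        # raw 32-bit bytes vs. offset-subtracted 32-bit bytes
        print(struct.pack("<I", n).hex(), struct.pack("<I", n - OFFSET).hex())

In the raw set the three high bytes are a nearly constant non-zero pattern (cd 5b 07), while in the subtracted set those positions are literal zeros; either way, three of the four bytes barely vary between records, which is the kind of redundancy a dictionary coder like DEFLATE can exploit.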
Ultimately I think I'll have to trial this to see what the results are, but I'm curious to try to figure it out beforehand.
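For reference, this is roughly the trial I have in mind: a minimal sketch using zlib as a stand-in for "a typical compression algorithm", with zero-filled placeholder bytes for the 8 bytes of other data (both are assumptions; real payload bytes could easily change the ratios):

    import random
    import struct
    import zlib

    OFFSET = 123456789
    N = 10000  # records per compressed chunk

    # Simulate N numbers that increase by small random steps from the offset.
    random.seed(0)
    values = []
    v = OFFSET
    for _ in range(N):
        v += random.randint(1, 5)
        values.append(v)

    def pack_chunk(nums):
        """Pack each number as a 32-bit little-endian integer followed by
        8 placeholder bytes of other data: 12 bytes per record."""
        payload = b"\x00" * 8  # stand-in for the real per-record data
        return b"".join(struct.pack("<I", n) + payload for n in nums)

    raw_chunk = pack_chunk(values)                          # original numbers
    delta_chunk = pack_chunk([n - OFFSET for n in values])  # offset subtracted

    print("raw:  ", len(zlib.compress(raw_chunk, 9)), "compressed bytes")
    print("delta:", len(zlib.compress(delta_chunk, 9)), "compressed bytes")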