Let's say we have a series of numbers that increase in small steps from a large offset,

e.g. offset = 123456789

Our numbers are: 123456790, 123456791, 123456793, 123456796, 123456799, 123456804

If we subtract the offset from these numbers, we get: 1, 2, 4, 7, 10, 15

Each number will be stored alongside 8 bytes of other data, making a total of 12 bytes per record, and a group of 10000 of these records will then be compressed as one chunk.

So if we are storing these numbers as 32-bit integers and compressing them, will the second set compress any better? Or, because both sets contain the same amount of entropy, will they compress exactly the same?

My workmate's immediate response was that the second set will compress better, since the 32-bit numbers in the second set will contain a lot of zero bits. However, the entropy is the same (I think), so will a typical compression algorithm figure this out anyway and produce a similar compression ratio?

Ultimately I think I will have to trial this to see what the results are, but I am curious about trying to figure it out beforehand.
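For reference, a minimal sketch of such a trial in Python, assuming zlib as the compressor and randomly generated small increments; the 8 bytes of "other data" are simulated as zeros here, so real results will vary with the actual payload and algorithm:

```python
import random
import struct
import zlib

OFFSET = 123_456_789
N = 10_000  # records per compressed chunk, as described above

# Simulated data: values increase in small random steps from the offset.
values = []
v = OFFSET
for _ in range(N):
    v += random.randint(1, 5)
    values.append(v)

def pack_chunk(nums):
    # Each record: a little-endian uint32 plus 8 bytes of other data
    # (stand-in zeros here; the real payload will change the results).
    other = bytes(8)
    return b"".join(struct.pack("<I", n) + other for n in nums)

raw = pack_chunk(values)                            # original numbers
shifted = pack_chunk([n - OFFSET for n in values])  # offset subtracted

print("raw chunk compressed to:    ", len(zlib.compress(raw)))
print("shifted chunk compressed to:", len(zlib.compress(shifted)))
```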

1 Answer


This is known as delta encoding. Depending on the specifics of your data, this may well give you better compression. It may also be possible to achieve a more direct saving: for example, if you know for sure that the difference between adjacent elements will never fall outside the range 0–255, you could store the deltas as single bytes rather than as 32-bit ints.
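To make that concrete, here is a minimal sketch of the single-byte delta idea, assuming every adjacent difference fits in 0–255 (the names delta_encode and delta_decode are just illustrative):

```python
import struct

def delta_encode(nums):
    # First value as a little-endian uint32, then each adjacent
    # difference as a single byte (assumes 0 <= delta <= 255).
    out = bytearray(struct.pack("<I", nums[0]))
    for prev, cur in zip(nums, nums[1:]):
        delta = cur - prev
        if not 0 <= delta <= 255:
            raise ValueError("delta out of single-byte range")
        out.append(delta)
    return bytes(out)

def delta_decode(data):
    # Invert delta_encode: read the base value, then accumulate deltas.
    nums = [struct.unpack_from("<I", data)[0]]
    for b in data[4:]:
        nums.append(nums[-1] + b)
    return nums

nums = [123456790, 123456791, 123456793, 123456796, 123456799, 123456804]
assert delta_decode(delta_encode(nums)) == nums
```

With the numbers from the question, this stores 6 values in 4 + 5 = 9 bytes instead of 24, before any general-purpose compression is applied at all.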