I'm struggling with understanding the CRC algorithm. I've been reading this tutorial and, if I got it correctly, a CRC value is just the remainder of a division in which the message serves as the dividend and the divisor is a predefined value, carried out in a special kind of polynomial arithmetic. It looked quite simple, so I tried implementing CRC-32:

public static uint Crc32Naive(byte[] bytes)
{
    uint poly = 0x04c11db7; // (Poly)
    uint crc = 0xffffffff; // (Init)

    foreach (var it in bytes)
    {
        var b = (uint)it;
        for (var i = 0; i < 8; ++i)
        {
            var prevcrc = crc;

            // load LSB from current byte into LSB of crc (RefIn)
            crc = (crc << 1) | (b & 1); 
            b >>= 1;

            // subtract polynomial if we've just popped out 1 
            if ((prevcrc & 0x80000000) != 0) 
                crc ^= poly;
        }
    }

    return Reverse(crc ^ 0xffffffff); // (XorOut) (RefOut)
}

(where the Reverse function reverses the bit order)

It is supposed to be analogous to the following method of division (with some additional adjustments):

            1100001010
       _______________
10011 ) 11010110110000
        10011,,.,,....
        -----,,.,,....
         10011,.,,....
         10011,.,,....
         -----,.,,....
          00001.,,....
          00000.,,....
          -----.,,....
           00010,,....
           00000,,....
           -----,,....
            00101,....
            00000,....
            -----,....
             01011....
             00000....
             -----....
              10110...
              10011...
              -----...
               01010..
               00000..
               -----..
                10100.
                10011.
                -----.
                 01110
                 00000
                 -----
                  1110 = Remainder

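Just as a sanity check on the arithmetic itself, here is a small sketch (the PolyMod and Degree helpers are just names I made up for this illustration) that performs the same bit-by-bit GF(2) division as the diagram above and does reproduce the 1110 remainder:

using System;

class LongDivisionCheck
{
    // Degree (position of the highest set bit) of x, or -1 for x == 0.
    static int Degree(ulong x)
    {
        int d = -1;
        while (x != 0) { x >>= 1; ++d; }
        return d;
    }

    // GF(2) polynomial remainder: XOR the divisor, aligned under the
    // dividend's leading 1, until the dividend is shorter than the divisor.
    static ulong PolyMod(ulong dividend, ulong divisor)
    {
        int divisorDegree = Degree(divisor);
        for (int deg = Degree(dividend); deg >= divisorDegree; deg = Degree(dividend))
            dividend ^= divisor << (deg - divisorDegree);
        return dividend;
    }

    static void Main()
    {
        // Same values as in the worked example: 11010110110000 / 10011.
        var remainder = PolyMod(0b11010110110000, 0b10011);
        Console.WriteLine(Convert.ToString((long)remainder, 2)); // prints 1110
    }
}
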
For 0x00 Crc32Naive returns 0xd202ef8d, which is correct, but for 0x01 it returns 0xd302ef8d instead of 0xa505df1b (I've been using this page to verify my results).

The result from my implementation makes more sense to me: incrementing the dividend by 1 should only change the remainder by 1, right? But it turns out the correct result looks completely different, so apparently I am missing something obvious. What is it? How can changing the least significant bit of the dividend influence the result this much?

1 Answer


This is an example of a left-shifting CRC that emulates division, with the CRC initialized to 0 and no complementing or reversing of the CRC. The example code emulates a division in which 4 bytes of zeroes are appended to bytes[] ({bytes[], 0, 0, 0, 0} is the dividend, the divisor is 0x104c11db7, the quotient is not used, and the remainder is the CRC); a spelled-out version of that division is sketched after the code.

public static uint Crc32Naive(byte[] bytes)
{
    uint poly = 0x04c11db7; // (Poly is actually 0x104c11db7)
    uint crc = 0;           // (Init)

    foreach (var it in bytes)
    {
        crc ^= (uint)it << 24;      // xor next byte into the upper 8 bits of crc
        for (var i = 0; i < 8; ++i) // cycle the crc 8 times
        {
            var prevcrc = crc;
            crc = (crc << 1); 
            // subtract polynomial if we've just popped out 1 
            if ((prevcrc & 0x80000000) != 0) 
                crc ^= poly;
        }
    }
    return crc;
}

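To make the appended-zeroes view concrete, here is a sketch of the same division done literally (the Crc32ByLongDivision name and the BigInteger-based helper are only for illustration): it appends four zero bytes, reduces modulo the full 33-bit polynomial 0x104c11db7, and should return the same value as Crc32Naive above for any input (for a single 0x00 byte both return 0).

using System.Numerics;

public static uint Crc32ByLongDivision(byte[] bytes)
{
    // Build the dividend: the message bits followed by 32 zero bits
    // ({bytes[], 0, 0, 0, 0}), most significant bit first.
    BigInteger dividend = 0;
    foreach (var b in bytes)
        dividend = (dividend << 8) | b;
    dividend <<= 32;

    // Divide by the full 33-bit polynomial; only the remainder is kept.
    BigInteger divisor = 0x104c11db7;
    for (int deg = BitLength(dividend); deg >= 33; deg = BitLength(dividend))
        dividend ^= divisor << (deg - 33); // cancel the dividend's leading 1

    return (uint)dividend; // the 32-bit remainder is the CRC
}

// Number of bits needed to represent x (0 for x == 0).
private static int BitLength(BigInteger x)
{
    int n = 0;
    while (x != 0) { x >>= 1; ++n; }
    return n;
}
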
It's common to initialize the CRC to something other than zero, but it's not that common to post-complement the CRC, and I'm not aware of any CRC that does a post bit reversal of the CRC.

Another variation of CRC is one that right-shifts, normally used to emulate hardware where the data bytes are sent least significant bit first.
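
For reference, here is a minimal sketch of such a right-shifting CRC (the Crc32RightShift name is just for the example). It uses the bit-reversed form of the polynomial, 0xedb88320, together with the 0xffffffff init and final xor from the question; with these parameters it should reproduce the values from the verification page mentioned in the question (e.g. 0xd202ef8d for a single 0x00 byte), with no separate bit reversal needed at the end.

public static uint Crc32RightShift(byte[] bytes)
{
    uint poly = 0xedb88320; // 0x04c11db7 with its bits reversed
    uint crc = 0xffffffff;  // (Init)

    foreach (var it in bytes)
    {
        crc ^= it;                      // xor next byte into the low 8 bits
        for (var i = 0; i < 8; ++i)     // cycle the crc 8 times
        {
            var lsb = crc & 1;
            crc >>= 1;                  // shift right instead of left
            if (lsb != 0)               // subtract polynomial if we just popped out a 1
                crc ^= poly;
        }
    }
    return crc ^ 0xffffffff;            // (XorOut)
}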