I have been stuck on this for too long...
I've read «A Painless Guide to CRC Error Detection Algorithms» several times. Maybe I don't completely understand the theory, but the practice seems as clear as day, and yet something is wrong.
My question is not about code or a particular implementation, but about the concept (the plain bit-at-a-time method).
I do this (a C sketch of it follows the list):
1. Take a single byte.
2. Take a uint (32-bit working register) and fill it with 0xffffffff.
3. Check whether the highest bit is 1.
4. Shift the register one bit to the left.
5. Shift in the next bit from the source byte (lowest bit first).
6. If the check in step 3 was true, XOR the register with 0x04C11DB7.
7. After the data ends, reverse (reflect) the working register.
8. XOR it with 0xffffffff.
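To make that concrete, here is a minimal C sketch of exactly the procedure above (not my real code; my_crc32 and reflect32 are just names I made up for this post):

```c
#include <stddef.h>
#include <stdint.h>

/* Helper for step 7: reverse the bit order of a 32-bit value. */
static uint32_t reflect32(uint32_t v)
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++) {
        r = (r << 1) | (v & 1);
        v >>= 1;
    }
    return r;
}

/* The method exactly as described in the list above. */
static uint32_t my_crc32(const uint8_t *data, size_t len)
{
    uint32_t reg = 0xFFFFFFFFu;                  /* step 2 */
    for (size_t i = 0; i < len; i++) {
        for (int bit = 0; bit < 8; bit++) {
            uint32_t top = reg >> 31;            /* step 3 */
            reg <<= 1;                           /* step 4 */
            reg |= (data[i] >> bit) & 1u;        /* step 5, lowest bit first */
            if (top)
                reg ^= 0x04C11DB7u;              /* step 6 */
        }
    }
    return reflect32(reg) ^ 0xFFFFFFFFu;         /* steps 7 and 8 */
}
```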
And it works... but only with zeros (I've checked 1, 2, 3, and 4 bytes of zeros). But when I take the byte 0x01 it fails (online calculators show a different result). I just can't see what I'm doing wrong.
Step by step (my version, lowest bit first; see the driver after the trace):
01.Initialization 0xffffffff
02.Shift<< 0xfffffffe
03.Place that single 1 0xffffffff
04.XOR 0xfb3ee248
05.Shift<< 0xf67dc490
06.XOR 0xf2bcd927
07.Shift<< 0xe579b24e
08.XOR 0xe1b8aff9
09.Shift<< 0xc3715ff2
10.XOR 0xc7b04245
11.Shift<< 0x8f60848a
12.XOR 0x8ba1993d
13.Shift<< 0x1743327a
14.XOR 0x13822fcd
15.Shift<< 0x27045f9a
16.Shift<< 0x4e08bf34
17.Reflect 0x2cfd1072
18.XOR (0xffffffff) 0xd302ef8d (the result)
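A tiny driver appended to the sketch above reproduces exactly this result:

```c
#include <stdio.h>

int main(void)
{
    uint8_t msg[] = { 0x01 };
    /* Prints d302ef8d, the same value as step 18 above,
     * while online CRC-32 calculators report a different value. */
    printf("%08x\n", my_crc32(msg, sizeof msg));
    return 0;
}
```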
Please help! What is wrong with it?