I'm working with a microcontroller that has native HW functions to calculate CRC32 checksums over chunks of memory, where the polynomial can be freely defined. The system, however, has different data links with different CRC widths, like 16 and 8 bit, and I intend to use the hardware engine for those as well.
In simple tests with online tools I've concluded that it is possible to find a 32-bit polynomial that produces the same result as an 8-bit CRC. For example:
- hashing "a sample string" with the 8-bit engine and poly 0xb7 yields 0x97
- hashing "a sample string" with the 16-bit engine and poly 0xb700 yields 0x9700
- hashing "a sample string" with the 32-bit engine and poly 0xb7000000 yields 0x97000000 (zero initial value, zero final XOR, no reflections)
So, padding the polynomial with zeros and right-shifting the result seems to work. But is it *always* possible to find a set of parameters (polynomial, initial value, final XOR, and reflections) that makes a 32-bit engine behave as a 16- or 8-bit one?
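As a software cross-check of the pattern above (a sketch of a plain bitwise MSB-first CRC, not the vendor's hardware engine; the `crc` helper below is my own illustration): with zero init, zero final XOR, and no reflections, the low bits of the wider shift register start at zero and can never become non-zero, because both the left shift and the left-aligned polynomial leave them at zero. The wider result is therefore exactly the narrow result shifted left:

```python
def crc(data: bytes, poly: int, width: int, init: int = 0, xor_out: int = 0) -> int:
    """Bitwise MSB-first CRC, no input/output reflection."""
    mask = (1 << width) - 1
    top = 1 << (width - 1)
    reg = init
    for byte in data:
        # Feed the next message byte into the top of the register.
        reg ^= byte << (width - 8)
        for _ in range(8):
            if reg & top:
                reg = ((reg << 1) ^ poly) & mask
            else:
                reg = (reg << 1) & mask
    return reg ^ xor_out

msg = b"a sample string"
c8 = crc(msg, 0xB7, 8)
c16 = crc(msg, 0xB700, 16)
c32 = crc(msg, 0xB7000000, 32)

# Left-aligning the polynomial left-aligns the result.
assert c16 == c8 << 8
assert c32 == c8 << 24
```

This only demonstrates the zero-init, zero-XOR, unreflected case; whether the equivalence extends to arbitrary init values, final XORs, and reflection settings is exactly the open part of the question.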
To provide more context and head off 'bypass answers' like "don't use the native engine": this is for a safety-critical system where it's necessary to prevent a common design error from propagating to redundant processing nodes. One mitigation is to use a software-based CRC calculation on one node and the hardware-based engine on its redundant pair.