I have encountered what seems to be a difficult question to answer.
I have implemented functionality that checks specific ranges of ROM for modified data (mainly modifications by an unauthorized user). This is in addition to the already active ECC/EDC engines, which protect the data from random HW faults using different algorithms.
Basically, at build time a CRC32 is calculated over the whole Data ROM (defined explicitly in the linker file) and stored at a specific location. Then, during startup of the device, the function checks the calculated Data ROM CRC against the one stored at the known location.
The check is performed by calculating the CRC32 over 4 bytes at a time using a microcontroller-specific instruction and keeping the intermediate result, which is carried forward until the last 4 bytes are processed and the final CRC value is obtained.
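For context, here is a minimal sketch of what the startup check does (plain C; `crc32_word()` is a software stand-in for the microcontroller-specific instruction, and the pointer arguments, initial value and final XOR are assumptions based on the usual 802.3 conventions, not the exact implementation):

```c
#include <stdint.h>

/* Software stand-in for the microcontroller-specific CRC instruction:
 * feeds one 32-bit word (MSB first, non-reflected, polynomial 0x04C11DB7)
 * into the running CRC. Bit order/reflection must match the build-time tool. */
static uint32_t crc32_word(uint32_t crc, uint32_t word)
{
    crc ^= word;
    for (int i = 0; i < 32; ++i)
        crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : (crc << 1);
    return crc;
}

/* Startup check over [rom_start, rom_end): returns 1 if the calculated CRC
 * matches the value stored at build time (e.g. provided via linker symbols). */
static int check_data_rom_crc(const uint32_t *rom_start,
                              const uint32_t *rom_end,
                              uint32_t stored_crc)
{
    uint32_t crc = 0xFFFFFFFFu;                  /* assumed initial value     */

    for (const uint32_t *p = rom_start; p < rom_end; ++p)
        crc = crc32_word(crc, *p);               /* carry intermediate result */

    return (crc ^ 0xFFFFFFFFu) == stored_crc;    /* final XOR, then compare   */
}
```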
I need to justify whether this functionality detects enough errors.
How can I evaluate the error detection capability of CRC-32 IEEE 802.3 (with the 0x04C11DB7 polynomial, for example) in this case, given that the memory block size can change with each build, which means the error detection capability will be different each time the size changes?
Everything I have found on this subject so far takes into account the size of the Ethernet message, which makes sense in that context (a received message can be marked as valid or invalid once it has been received). But when calculating the CRC over a large memory block (say 1 MB or 1 GB), I only know whether the data is corrupted at the very end, since the value to check against is obtained only after processing the complete block.
I am thinking of a scenario in which several bits are corrupted in one part of the block and further corruptions in other areas cancel them out, leading to a "correct" CRC value at the end and thus to undetected errors.