How does ECC for burst error correction work?
By "burst error detection", I mean a technique that can detect (for example) any combination of bit errors within any one [or two] sequences of 64 consecutive bits.
I need a conceptual explanation, not math.
I studied several descriptions of the technique that are formulated in terms of endless math symbols, but I do not understand what they are saying (because I am not fluent in those advanced math formulations).
I ask this question because I dreamed up a technique to detect one 64-bit burst in a 4096-byte (32768-bit) data stream (disk sector, transmission, etc.), and want someone to explain the following:
#1: whether my approach is different from or equivalent to "cyclic error codes".
#2: how much less efficient my technique is (640 bits corrects any 64-bit burst in a 32768-bit stream); for comparison, a conventional CRC sketch follows this list.
#3: whether anyone can see a way to make my approach detect two bursts instead of only one.
#4: whether my approach is substantially simpler for software implementation.
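For concreteness, here is my rough understanding of what a conventional cyclic-code (CRC) check looks like in software. This is only a sketch for comparison, not my technique, and the particular generator polynomial (CRC-64/ECMA-182) is just one assumed choice; as I understand it, any degree-64 generator with a nonzero constant term is guaranteed to detect any single error burst confined to 64 consecutive bits, using only 64 stored check bits per sector.

```python
# Sketch of a conventional cyclic-code (CRC-64) check -- NOT my technique.
# Polynomial choice (CRC-64/ECMA-182) is an assumption; any degree-64
# generator with a nonzero constant term detects any single burst <= 64 bits.

import os
import random

POLY = 0x42F0E1EBA9EA3693   # CRC-64/ECMA-182 generator (x^64 term implicit)
MASK = 0xFFFFFFFFFFFFFFFF

def crc64(data: bytes) -> int:
    """Bit-at-a-time CRC-64: MSB first, initial value 0, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            if crc & (1 << 63):
                crc = ((crc << 1) ^ POLY) & MASK
            else:
                crc = (crc << 1) & MASK
    return crc

# Demo: one 4096-byte (32768-bit) sector, 64 stored check bits.
sector = os.urandom(4096)
check = crc64(sector)

# Inject an arbitrary error pattern confined to one 64-bit burst.
corrupted = bytearray(sector)
start = random.randrange(32768 - 64)          # first bit of the burst
for i in range(start, start + 64):
    # always flip the burst endpoints so the error pattern is nonempty
    if i in (start, start + 63) or random.random() < 0.5:
        corrupted[i // 8] ^= 1 << (7 - (i % 8))

print("burst detected:", crc64(bytes(corrupted)) != check)   # always True
```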
I posted a conceptual explanation of my technique recently, but several folks were annoyed by my detailed explanation and closed the question. However, it demonstrates that [at least] my technique can be described in conceptual terms, as hopefully someone can do for the conventional techniques (cyclic codes). You will also need to read my explanation (to compare it with conventional techniques) at this page: