I am implementing Reed-Solomon decoding for QR codes in C++. I have implemented the main part of decoding and error detection so far, following the ISO/IEC 18004:2006 manual. As shown in Annex B (error correction decoding steps), the syndromes S(i) are calculated as S(i) = R(a^i).

Let's assume we have the High error correction level, so we have 9 data codewords and 17 error correction codewords, which gives a total of 26 codewords for QR code version 1. I assume that the polynomial R(x) shown on page 76 of the manual is the sequence of data codewords followed by error correction codewords, each with the correct power of x. So S(i) = R(a^i), where i = 0...15, with R(x) having powers of x from j = 0...25, for the High error correction level.

But when I run my code on a QR code matrix with no errors, where I expect all syndromes to be zero, I get non-zero syndromes. Have I misunderstood something about syndrome calculation under Galois field arithmetic in Reed-Solomon decoding?
1 Answer
After looking at QR code references: for version 1, level H, with 9 data bytes and 17 error correction bytes, the generator polynomial is g(x) = (x-1)(x-a)(x-a^2)...(x-a^16), so you should be using syndromes S(i) = R(a^i) for i = 0 to 16. In the no-error case, all 17 syndromes should be zero.
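To make the check concrete, here is a minimal sketch of the syndrome computation, assuming the field polynomial hex 11d and primitive element 2 (both confirmed further below), and assuming R[0] holds the first data codeword, i.e. the coefficient of x^25:

```cpp
#include <cstdint>
#include <cstdio>

// multiply in GF(2^8), reducing by x^8 + x^4 + x^3 + x^2 + 1 (hex 11d)
static uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

// evaluate R(x) at x = e by Horner's rule; R[0] = highest power of x
static uint8_t poly_eval(const uint8_t R[], int n, uint8_t e) {
    uint8_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = gf_mul(acc, e) ^ R[i];
    return acc;
}

int main() {
    // 9 data codewords followed by 17 ecc codewords (version 1, level H)
    uint8_t R[26] = {0};   // fill in a real received codeword here
    uint8_t x = 1;                      // a^0
    for (int i = 0; i <= 16; i++) {     // S(0) .. S(16)
        printf("S(%2d) = %3d\n", i, poly_eval(R, 26, x));
        x = gf_mul(x, 2);               // advance to the next power of a
    }
    return 0;
}
```

If all 17 values print as zero for an error-free symbol but your code produces non-zero syndromes, the usual culprits are byte order (the first data codeword is the coefficient of the highest power of x) or a mismatched field polynomial.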
There's a decent wiki article for Reed Solomon error correction:
http://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
The wiki article contains a link to a Nasa tech brief RSECC tutorial:
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19900019023.pdf
Below is a link to C source code for a console program that demonstrates RSECC methods for an 8 bit field (the user chooses from 29 possible fields). I use Microsoft compilers or Visual Studio to compile it and Windows to run it, but it should work on most systems.
Note - I updated the ecc demo program to handle erasures in addition to errors, in case that could be useful. I also added code to calculate the error value polynomial Omega, in case the Euclid method is not used. The link is the same as before:
http://rcgldr.net/misc/eccdemo8.zip
Update based on the questions in comments:
My question about which GF(2^8) is used:

GF(2^8) is based on the 9 bit polynomial

x^8 + x^4 + x^3 + x^2 + 1 = hex 11d

and the primitive element is x (hex 2).
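As a sketch, exp/log tables for this field can be built as follows (assuming hex 11d and primitive 2 as above), which makes multiplication a pair of table lookups:

```cpp
#include <cstdint>

static uint8_t gf_exp[512];   // doubled so exponent sums need no mod 255
static uint8_t gf_log[256];

static void gf_init() {
    uint16_t x = 1;                    // a^0
    for (int i = 0; i < 255; i++) {
        gf_exp[i] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;                       // multiply by the primitive, x (hex 2)
        if (x & 0x100) x ^= 0x11d;     // reduce by the field polynomial
    }
    for (int i = 255; i < 512; i++)
        gf_exp[i] = gf_exp[i - 255];   // a^255 = 1, so the cycle repeats
}

// multiply via table lookups: a * b = exp(log(a) + log(b))
static uint8_t gf_mul(uint8_t a, uint8_t b) {
    if (a == 0 || b == 0) return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}
```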
Looking at QR code references, generator polynomials of different degrees are used depending on the correction level: L (low), M (medium), Q (quality), H (high).
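As a sketch, the generator polynomial for a given number of ecc bytes (7, 10, 13, 17 for version 1 levels L, M, Q, H) can be built by multiplying out the factors:

```cpp
#include <cstdint>
#include <vector>

// same GF(2^8) multiply as in the syndrome sketch (hex 11d field)
static uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

// build g(x) = (x - 1)(x - a)...(x - a^(n-1)); in GF(2^8), - is xor
static std::vector<uint8_t> rs_generator(int n) {
    std::vector<uint8_t> g{1};              // g(x) = 1, highest power first
    uint8_t root = 1;                       // a^0
    for (int i = 0; i < n; i++) {
        std::vector<uint8_t> next(g.size() + 1, 0);
        for (size_t j = 0; j < g.size(); j++) {
            next[j]     ^= g[j];                  // the x * g(x) part
            next[j + 1] ^= gf_mul(root, g[j]);    // the root * g(x) part
        }
        g = next;
        root = gf_mul(root, 2);             // advance to a^(i+1)
    }
    return g;
}
```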
Question about decoding using matrices: the Sklar paper shows decoding using linear equations and matrix inversion. This procedure has to assume a maximum error count t = floor(e / 2), where e is the number of error correction bytes (also called parity bytes or redundant bytes). If the determinant of the t by t syndrome matrix is zero, try t-1 errors; if that determinant is zero, try t-2 errors, and so on, until the determinant is non-zero or t is reduced to zero.
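A hedged sketch of that determinant-trial loop, using the syndrome matrix whose entry at row r, column c is S(r+c) (valid here because the generator roots start at a^0):

```cpp
#include <cstdint>

// same GF(2^8) multiply as in the syndrome sketch (hex 11d field)
static uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

// determinant over GF(2^8) by cofactor expansion; fine for small t,
// and the +/- cofactor signs vanish because addition is xor
static uint8_t gf_det(const uint8_t* m, int n) {
    if (n == 1) return m[0];
    uint8_t det = 0;
    uint8_t sub[8 * 8];
    for (int k = 0; k < n; k++) {
        for (int r = 1; r < n; r++) {
            int sc = 0;
            for (int c = 0; c < n; c++)
                if (c != k) sub[(r - 1) * (n - 1) + sc++] = m[r * n + c];
        }
        det ^= gf_mul(m[k], gf_det(sub, n - 1));
    }
    return det;
}

// assume t = tmax errors; decrement until the determinant is non-zero;
// S needs at least 2*tmax - 1 entries
static int find_error_count(const uint8_t S[], int tmax) {
    for (int t = tmax; t > 0; t--) {
        uint8_t m[8 * 8];
        for (int r = 0; r < t; r++)
            for (int c = 0; c < t; c++)
                m[r * t + c] = S[r + c];   // matrix entry is S(r + c)
        if (gf_det(m, t) != 0) return t;   // the t by t system is solvable
    }
    return 0;                              // no locatable errors
}
```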
The Euclid or Berlekamp Massey decoding methods will automatically determine the number of errors.
In all cases, if there are more than t errors, there's some chance that a mis-correction will occur, depending on the odds that all t locations produced by the decoder happen to fall within the valid range. If any of the t locations found during error correction is out of range, then an uncorrectable error has been detected.
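A sketch of that range check via a Chien-style root search (assuming the error locator polynomial Lambda, as produced by Euclid or Berlekamp Massey, is stored highest power first; a root at x = a^i corresponds to error location (255 - i) mod 255, taken as a power of x in R(x), and for version 1 only locations 0 to 25 are valid):

```cpp
#include <cstdint>
#include <vector>

// same GF(2^8) multiply as in the syndrome sketch (hex 11d field)
static uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

// evaluate a polynomial (highest power first) at x = e
static uint8_t poly_eval(const std::vector<uint8_t>& p, uint8_t e) {
    uint8_t acc = 0;
    for (uint8_t coef : p) acc = gf_mul(acc, e) ^ coef;
    return acc;
}

// find error locations as powers of x in R(x); returns false on an
// uncorrectable error (a location >= n, or root count != deg(Lambda))
static bool find_locations(const std::vector<uint8_t>& lambda, int n,
                           std::vector<int>& locs) {
    uint8_t e = 1;                           // a^0
    for (int i = 0; i < 255; i++) {          // test every nonzero element
        if (poly_eval(lambda, e) == 0) {
            int loc = (255 - i) % 255;       // root a^i -> location 255 - i
            if (loc >= n) return false;      // outside the valid range
            locs.push_back(loc);
        }
        e = gf_mul(e, 2);
    }
    return locs.size() == lambda.size() - 1; // all deg(Lambda) roots found
}
```

For version 1, n would be 26.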
Update #2
I did a quick overview of the ISO document.
The generator polynomial is (x - 1)(x - 2)(x - 2^2)..., so the syndromes to check are S(0) to S(n-1), where n is the number of error correction bytes, as mentioned before, and in the case of zero errors, all syndromes S(0) to S(n-1) should be zero.
The ISO document uses the term codewords to refer to bytes (or symbols), but in most ecc articles, the term codeword refers to an array of bytes including data and error correction bytes, and the error correction bytes are often called parity bytes, redundant bytes or remainder bytes. So keep this in mind if reading other ecc articles.
Page 37 of the ISO document mentions "erasures" and "errors", which is RSECC terminology. "Erasures" refer to bad (or potentially bad) data bytes at known locations, detected outside of RSECC. "Errors" refer to bad bytes not detected outside of RSECC, and only determined during RSECC decoding. The document then notes that there are no invalid data bit patterns, which would imply that there is no "erasure" detection. It then adds to the confusion by showing an equation that includes erasure and error counts.
If you're curious, the Nasa pdf file on RSECC explains erasure handling starting at page 86, but I don't think this applies to QR codes.
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19900019023.pdf
Getting back to the ISO document: it uses p to denote the number of error correction bytes used for misdecode protection, as opposed to being used for correction. This is shown in table 9 on page 38. For version 1, which seems to be what you're using, reinterpreting:
level   data bytes   ecc bytes used    ecc bytes used for         correction
                     for correction    misdecode protection (p)   capability
  L         19             4                      3               2/26 ~  7.69%
  M         16             8                      2               4/26 ~ 15.38%
  Q         13            12                      1               6/26 ~ 23.08%
  H          9            16                      1               8/26 ~ 30.77%
Since this table shows that the expected correction capability is met without the use of erasures, erasures are not needed even if they could be detected.
With GF(2^8), there are 255 (not 256) possible error locations that can be generated by RSECC decoding, but in version 1 there are only 26 valid locations. Any generated location outside the 26 valid locations is a detection of an uncorrectable error. So for level L, the 3 p bytes reduce the odds of a miscorrection by a factor of 1/(2^24), and the location range multiplies this by (26/255)^2, for a probability of ~6.20E-10. For level H, the 1 p byte contributes a factor of 1/(2^8), and the location range contributes (26/255)^8, for a probability of ~4.56E-11.
Note that for version 2, p = 0 for levels M, Q, H, relying on the location range alone: (44/255)^8, (44/255)^11, or (44/255)^14, for miscorrection probabilities of about 7.87E-7, 4.04E-9, and 2.07E-11 respectively.
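For reference, these estimates can be reproduced with a few lines (just a sanity check of the arithmetic, not part of a decoder):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // version 1: p parity bytes plus t locations limited to 26 of 255
    printf("v1 L: %.3g\n", pow(2.0, -24) * pow(26.0 / 255.0, 2));  // ~6.20e-10
    printf("v1 H: %.3g\n", pow(2.0, -8)  * pow(26.0 / 255.0, 8));  // ~4.56e-11
    // version 2, p = 0: location range only, 44 of 255 locations valid
    printf("v2 M: %.3g\n", pow(44.0 / 255.0, 8));    // ~7.87e-7
    printf("v2 Q: %.3g\n", pow(44.0 / 255.0, 11));   // ~4.04e-9
    printf("v2 H: %.3g\n", pow(44.0 / 255.0, 14));   // ~2.07e-11
    return 0;
}
```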