CD-ROM data uses a third layer of error protection: Reed-Solomon error correction plus an EDC (error detection code) based on a 32-bit CRC polynomial.
The ECMA-130 standard defines the EDC CRC polynomial as follows (page 16, section 14.3):
P(x) = (x^16 + x^15 + x^2 + 1) . (x^16 + x^2 + x + 1)
and
The least significant bit of a data byte is used first.
Usually, translating the polynomial into its integer value form is pretty straightforward. Expanding the product and reducing the coefficients modulo 2 (terms that appear an even number of times cancel), I get P(x) = x^32 + x^31 + x^16 + x^15 + x^4 + x^3 + x + 1, which, after dropping the implicit x^32 term, gives the value 0x8001801B.
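To double-check that expansion, here is a minimal standalone sketch I wrote (not taken from cdrtools or ECMA-130) that multiplies the two factors carry-less over GF(2) and prints the truncated 32-bit constant:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* The two factors from ECMA-130, encoded as bit masks (bit i = coefficient of x^i). */
        uint64_t f1 = (1ULL << 16) | (1ULL << 15) | (1ULL << 2) | 1ULL; /* x^16 + x^15 + x^2 + 1 */
        uint64_t f2 = (1ULL << 16) | (1ULL << 2) | (1ULL << 1) | 1ULL;  /* x^16 + x^2  + x   + 1 */

        /* Carry-less multiplication over GF(2): XOR shifted copies instead of adding them. */
        uint64_t p = 0;
        for (int i = 0; i <= 16; i++)
            if (f2 & (1ULL << i))
                p ^= f1 << i;

        /* p is x^32 + x^31 + x^16 + x^15 + x^4 + x^3 + x + 1; masking off the implicit
         * x^32 term leaves the usual 32-bit CRC constant. */
        printf("full product : 0x%llx\n", (unsigned long long)p);                     /* 0x18001801b */
        printf("32-bit value : 0x%08llx\n", (unsigned long long)(p & 0xFFFFFFFFULL)); /* 0x8001801b  */
        return 0;
    }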
If I understand it correctly, the last sentence means the CRC is computed LSB-first, i.e. with the polynomial in reversed (reflected) bit order.
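To make "reversed" concrete, this is how I read it (again just my own sketch, not code from cdrtools): bit-reversing the 32-bit constant above should give the reflected form:

    #include <stdio.h>
    #include <stdint.h>

    /* Reverse the bit order of a 32-bit value (bit 0 <-> bit 31, bit 1 <-> bit 30, ...). */
    static uint32_t reverse32(uint32_t v)
    {
        uint32_t r = 0;
        for (int i = 0; i < 32; i++) {
            r = (r << 1) | (v & 1);
            v >>= 1;
        }
        return r;
    }

    int main(void)
    {
        uint32_t poly = 0x8001801B; /* MSB-first form of the expanded EDC polynomial */
        printf("reflected form: 0x%08X\n", reverse32(poly)); /* prints 0xD8018001 */
        return 0;
    }

That gives me 0xD8018001 for the reflected form.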
But I haven't managed to get the right value so far. The cdrtools source code uses 0x08001801 as the polynomial value. Can someone explain how they arrived at that value?