Following the official DFT tutorial (using the Java bindings), https://docs.opencv.org/3.4/d8/d01/tutorial_discrete_fourier_transform.html, I do the following:
Mat complexI = new Mat();
Core.merge(planes, complexI); // add a second plane of zeros for the imaginary part
Core.dft(complexI, complexI); // this way the result may fit in the source matrix
// compute the magnitude
Core.split(complexI, planes);
Mat magI = new Mat();
Core.magnitude(planes.get(0), planes.get(1), magI);
Mat phasI = new Mat();
Core.phase(planes.get(0), planes.get(1), phasI);
Mat newComplexI = new Mat();
Core.merge(Arrays.asList(magI, phasI), newComplexI);
boolean same = complexI.get(0,0)[0] == newComplexI.get(0,0)[0]; // false
What is going on? Just splitting complexI into magnitude and phase matrices and recomposing them produces a different matrix. If I do this test before the Core.dft call, it works fine. Is it because the Mat post-dft contains floating point numbers and we lose precision? The differences between complexI and newComplexI, however, are much larger than a few decimal places, sometimes in the thousands.
How can I properly reconstruct the image from the magnitude and phase matrices using the inverse DFT?
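For context, my current guess is that magnitude and phase are the polar form of each complex sample, so recombining them would require converting back to real/imaginary parts first (something like Core.polarToCart) rather than merging them directly. A plain-Java sketch of that round trip for a single sample, using only java.lang.Math (no OpenCV; class and method names are my own):

```java
public class PolarRoundTrip {
    // Convert one (magnitude, phase) pair back to (re, im),
    // which is what I assume Core.polarToCart does per element.
    static double[] polarToCart(double mag, double phase) {
        return new double[] { mag * Math.cos(phase), mag * Math.sin(phase) };
    }

    public static void main(String[] args) {
        double re = 3.0, im = 4.0;           // one complex DFT sample
        double mag = Math.hypot(re, im);     // like Core.magnitude -> 5.0
        double phase = Math.atan2(im, re);   // like Core.phase (radians)

        // The round trip recovers re/im exactly; merging mag and phase
        // directly would instead store 5.0 where 3.0 belongs, which would
        // explain differences far larger than rounding error.
        double[] cart = polarToCart(mag, phase);
        System.out.printf("re=%.6f im=%.6f%n", cart[0], cart[1]);
    }
}
```

If that is right, the reconstruction path would be polarToCart into two planes, Core.merge, then Core.idft, but I have not confirmed the required flags.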