
I am trying to perform deconvolution on an image I, which is n x m. The kernel used to convolve it is K, which is also n x m. Now I want to recover the original image, O, by performing a deconvolution. I know that I can retrieve O by taking the Fourier transform of I and of K and dividing them: F(I) / F(K), since in the Fourier domain the convolution becomes a pointwise product. (I got this information from here.)

I saw another post on how to use Eigen FFT to perform a forward transform, here.

My code for forward Fourier transform is:

I = Input Image (time domain)

O = Output Image (frequency domain)

tempFreq = temporary matrix for calculations (frequency domain)

timevec1 = float vector

freqvec1, freqvec2, freqvec3 = complex vectors

for (Int32 i = 0; i < I->uRowsCount; ++i)
{
    // Copy row i into a contiguous vector and transform it
    for (Int32 j = 0; j < I->uColumnsCount; ++j)
    {
        timevec1.push_back((*I)(i, j));
    }

    fft.fwd(freqvec1, timevec1);

    // Store the transformed row in the intermediate matrix
    for (Int32 j = 0; j < I->uColumnsCount; ++j)
    {
        (tempFreq)(i, j) = freqvec1[j];
    }

    freqvec1.clear();
    timevec1.clear();
}

freqvec1.clear();
timevec1.clear();


for (Int32 j = 0; j < I->uColumnsCount; ++j)
{
    // Gather column j of the row-transformed data
    for (Int32 i = 0; i < I->uRowsCount; ++i)
    {
        freqvec2.push_back((tempFreq)(i, j));
    }

    fft.fwd(freqvec1, freqvec2);   // complex-to-complex forward transform

    // Store the fully transformed column in the output
    for (Int32 i = 0; i < I->uRowsCount; ++i)
    {
        (O)(i, j) = freqvec1[i];
    }

    freqvec2.clear();
    freqvec1.clear();
}
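For reference, the row-then-column decomposition used above can be sketched self-contained, with a naive O(n²) DFT standing in for Eigen's fft.fwd (the structure is the same; all names here are illustrative, not Eigen's API):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cd  = std::complex<double>;
using Mat = std::vector<std::vector<cd>>;

static const double PI = std::acos(-1.0);

// Naive 1D forward DFT, standing in for Eigen's fft.fwd (fine for a demo).
std::vector<cd> dft(const std::vector<cd>& x) {
    const std::size_t n = x.size();
    std::vector<cd> X(n, cd(0, 0));
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, -2.0 * PI * double(k * t) / double(n));
    return X;
}

// 2D forward DFT: transform every row, then every column of the intermediate.
// Both the intermediate and the result must stay complex.
Mat dft2(const Mat& img) {
    const std::size_t rows = img.size(), cols = img[0].size();
    Mat tmp(rows), out(rows, std::vector<cd>(cols));
    for (std::size_t i = 0; i < rows; ++i)
        tmp[i] = dft(img[i]);                                // row pass
    for (std::size_t j = 0; j < cols; ++j) {
        std::vector<cd> col(rows);
        for (std::size_t i = 0; i < rows; ++i) col[i] = tmp[i][j];
        col = dft(col);                                      // column pass
        for (std::size_t i = 0; i < rows; ++i) out[i][j] = col[i];
    }
    return out;
}
```

A quick sanity check: the DC component out[0][0] equals the sum of all pixels.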

My code for inverse Fourier transform is:

I = Input Image (frequency domain)

O = Output Image (time domain)

tempTime = temporary matrix for calculations (time domain)

timevec1, timevec2 = float vector

freqvec1, freqvec2 = complex vector

for (Int32 j = 0; j < O->uColumnsCount; ++j)
{
    // Gather column j of the frequency-domain image and inverse-transform it
    for (Int32 i = 0; i < O->uRowsCount; ++i)
    {
        freqvec1.push_back((I)(i, j));
    }

    fft.inv(timevec1, freqvec1);

    for (Int32 i = 0; i < O->uRowsCount; ++i)
    {
        (*tempTime)(i, j) = timevec1[i];
    }

    freqvec1.clear();
    timevec1.clear();
}

freqvec1.clear();
timevec1.clear();

for (Int32 i = 0; i < O->uRowsCount; ++i)
{
    // Inverse-transform row i of the intermediate to get the final image
    for (Int32 j = 0; j < O->uColumnsCount; ++j)
    {
        freqvec2.push_back((*tempTime)(i, j));
    }

    fft.inv(timevec2, freqvec2);

    for (Int32 j = 0; j < O->uColumnsCount; ++j)
    {
        (*O)(i, j) = timevec2[j];
    }

    freqvec2.clear();
    timevec2.clear();
}

For deconvolution, I divide the input image by the kernel, both in the frequency domain:

freqDomainOutputImage = freqDomainInputImage.cwiseQuotient(freqDomainKernel);

And to get the original image, I perform the inverse Fourier transform on the freqDomainOutputImage.

The result that I get is: [output image]

I believe the FFT is mirroring the top-left corner to the other sides, but I don't know why. I am not using halfSpectrum. Second, why is the image shifted? If I move the output image to the center by replacing the last loop with this:

(*O)((i + O->uRowsCount/2)%O->uRowsCount, (j + O->uColumnsCount/2)%O->uColumnsCount) = timevec2[j];

Then my output is: [output image]

(You can see that the image is mirrored from the top left quadrant).

And last, why does the result seem to contain noise, even though I applied the blur myself without adding any noise?

I might know why the FFT has shifted the image. I read on this site: homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm that the Fourier display usually shifts the DC value (the value at index 0,0) to the center. So we have to re-shift the values for our original image. If I am incorrect here, please do correct me. – Saleem

Yes, this is the reason why the image is shifted if the convolution kernel is centered on (n/2, n/2). Indeed, in this case, the convolution also performs a translation by the vector (n/2, n/2). The high-frequency noise could be due to the highest frequency, named the Nyquist frequency. If the size of the image is even (say 4), the frequency n/2 could be the origin of the issue. Index 0 is the average, index 1 is frequency 1/4, index 3 is frequency -1/4, but index 2 could correspond to either frequency 2/4 or -2/4. As a result, zeroing these high-frequency components prior to the backward DFT could help. – francis

I didn't quite understand why we need to zero the high frequencies. If I am getting high frequencies from my image, zeroing them would mean a loss of data for that image, right? – Saleem

Deconvolution generally is an ill-posed inverse problem. Even noise added by numerical rounding can lead to artifacts. You need to do some kind of regularization. – chtz

And why is the top-left part of the image mirrored to the top right, and the bottom right mirrored to the bottom left? Can I somehow prevent this from happening? – Saleem

1 Answer


tempTime needs to be complex. You seem to be throwing away that imaginary component, hence you get the mirroring effect.

Starting from a complex frequency-domain image with complex-conjugate symmetry, you apply a 1D DFT (e.g. along the rows). The result of this transformation is a complex image with complex-conjugate symmetry only along the columns. The next 1D DFT takes these columns and creates real-valued signals out of them, yielding a real-valued image.

If after the first step you discard the imaginary component, then you remove the odd component of the spatial-domain image along that dimension, leaving an even (symmetric) image. This is why you see this symmetry in your result.

Think of it this way: the inverse DFT reverses the process of the DFT. If the DFT goes real->complex->complex, then the inverse must be complex->complex->real. The intermediate image has a spatial and a frequency dimension. The frequency components always need to be complex.
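This effect can be checked numerically. Below is a self-contained sketch (naive O(n²) DFTs instead of Eigen; names are illustrative): the inverse that keeps the intermediate complex recovers the image exactly, while zeroing the intermediate's imaginary part returns only the even-symmetric part along that dimension, which is the mirroring seen in the question.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cd  = std::complex<double>;
using Mat = std::vector<std::vector<cd>>;

static const double PI = std::acos(-1.0);

// Naive 1D DFT; sign = -1 forward, sign = +1 inverse (normalized by n).
std::vector<cd> dft1(const std::vector<cd>& x, int sign) {
    const std::size_t n = x.size();
    std::vector<cd> X(n, cd(0, 0));
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, sign * 2.0 * PI * double(k * t) / double(n));
    if (sign > 0)
        for (auto& v : X) v /= double(n);
    return X;
}

// Forward 2D DFT: rows, then columns.
Mat dft2(const Mat& img) {
    const std::size_t R = img.size(), C = img[0].size();
    Mat tmp(R), out(R, std::vector<cd>(C));
    for (std::size_t i = 0; i < R; ++i) tmp[i] = dft1(img[i], -1);
    for (std::size_t j = 0; j < C; ++j) {
        std::vector<cd> col(R);
        for (std::size_t i = 0; i < R; ++i) col[i] = tmp[i][j];
        col = dft1(col, -1);
        for (std::size_t i = 0; i < R; ++i) out[i][j] = col[i];
    }
    return out;
}

// Inverse 2D DFT: columns, then rows. `keepImag` selects whether the
// intermediate stays complex (correct) or has its imaginary part zeroed
// (the bug: the final image becomes even-symmetric along each row).
Mat idft2(const Mat& F, bool keepImag) {
    const std::size_t R = F.size(), C = F[0].size();
    Mat tmp(R, std::vector<cd>(C)), out(R);
    for (std::size_t j = 0; j < C; ++j) {
        std::vector<cd> col(R);
        for (std::size_t i = 0; i < R; ++i) col[i] = F[i][j];
        col = dft1(col, +1);
        for (std::size_t i = 0; i < R; ++i)
            tmp[i][j] = keepImag ? col[i] : cd(col[i].real(), 0.0);
    }
    for (std::size_t i = 0; i < R; ++i) out[i] = dft1(tmp[i], +1);
    return out;
}
```

For the 1x4 image [1 2 3 4], idft2(dft2(img), true) returns [1 2 3 4], while idft2(dft2(img), false) returns [1 3 3 3]: the even part (x[j] + x[(C-j) mod C]) / 2.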


You already figured out that you need to swap quadrants. This is because the DFT uses the top-left pixel as the origin, both in the spatial domain and in the frequency domain. Thus, your spatial-domain convolution kernel must have its origin in the top-left corner for all of this to work correctly.
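The quadrant swap applied by hand in the question is the usual "fftshift". A generic sketch (the helper name is illustrative, not part of Eigen):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Circular shift by (rows/2, cols/2) -- the classic "fftshift" quadrant swap.
// Applied to a centered kernel, it moves the kernel's origin to pixel (0, 0),
// where the DFT expects it; for even dimensions, applying it twice is a no-op.
template <typename T>
std::vector<std::vector<T>> fftshift(const std::vector<std::vector<T>>& m) {
    const std::size_t R = m.size(), C = m[0].size();
    std::vector<std::vector<T>> out(R, std::vector<T>(C));
    for (std::size_t i = 0; i < R; ++i)
        for (std::size_t j = 0; j < C; ++j)
            out[(i + R / 2) % R][(j + C / 2) % C] = m[i][j];
    return out;
}
```

Shifting the kernel once, before its forward transform, avoids having to re-shift the output image afterwards.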


Finally, you wonder about the noise. This is caused by numerical instability. Some frequency components of your image simply have very small values. This is especially true for the low-pass filtered image. There you divide very small values by other very small values, yielding nonsense.

Deconvolution always needs regularization in practice. The Wiener filter is the simplest way to add regularization to the deconvolution. See this other answer for details.
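As a minimal sketch of such regularization (a Wiener-style filter with a hand-tuned constant eps, not a full noise model): instead of dividing by K directly, multiply by conj(K) / (|K|² + eps). Where |K| is large this approximates 1/K; where |K| is tiny it decays to zero instead of blowing up:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

using cd = std::complex<double>;

// Regularized division of one frequency component: behaves like I / K when
// |K|^2 >> eps, but stays bounded (goes to 0) as K -> 0.
cd wienerDivide(cd I, cd K, double eps) {
    return I * std::conj(K) / (std::norm(K) + eps);  // std::norm(K) = |K|^2
}
```

This would replace the cwiseQuotient step, applied element-wise over the two spectra, e.g. freqOut(i, j) = wienerDivide(freqIn(i, j), freqKernel(i, j), eps).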