3 votes

I'm trying to create a high-pass filter in MATLAB. I generate the Gaussian kernel using

function kernel = compute_kernel(sigma,size)
[x,y] = meshgrid(-size/2:size/2,-size/2:size/2);
constant = 1/(2*pi*sigma*sigma);
kernel = constant*exp( -(y.^2 + x.^2 )/(2 * sigma * sigma));
kernel = (kernel - min(kernel(:)))./(max(kernel(:)) - min(kernel(:)));
end

Then, after creating the kernel, I use it to create a low-pass filter for the image (variable im2):

g = compute_kernel(9,101);
im2_low = conv2(im2,g,'same');

As I understand it, I can then subtract the filtered image from the original image (in the frequency domain) to extract the high frequencies, making the whole process equivalent to a high-pass filter.

F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;

figure; fftshow(IM2_high);
im2_high = ifft2(IM2_high);
figure; imshow(im2_high,[]);

There seems to be something wrong with this, though. When I view the high-pass-filtered image, it looks like a color-inverted, blurred image, not one with the edges defined as I've seen online. I'm not sure if my process is wrong or whether I'm just using the wrong values for my Gaussian kernel.


2 Answers

5 votes

For any kernel you want to use while keeping the image recognizable (i.e. you don't want the magnitude of something, but a result that still looks like an image), you need to do one thing to the kernel: normalize it.

You seem to have tried that, but you misinterpreted what normalization means for kernels: the values don't need to lie in [0, 1], their sum needs to be 1.
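To see the difference numerically, here is a small sketch in NumPy (Python standing in for the MATLAB above, assuming NumPy is available; the numbers are illustrative): a min-max-scaled Gaussian sums to far more than 1, so filtering with it rescales the image intensities, while a sum-normalized one does not.

```python
import numpy as np

sigma, half = 9.0, 50
ax = np.arange(-half, half + 1)
x, y = np.meshgrid(ax, ax)
g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# Min-max scaling (what the question's code does): values span [0, 1],
# but the weights sum to far more than 1, so filtering brightens the image
g_minmax = (g - g.min()) / (g.max() - g.min())
print(g_minmax.sum())   # several hundred, not 1

# Sum normalization (what a smoothing kernel needs): the weights form a
# weighted average, so a constant image stays unchanged after filtering
g_sum = g / g.sum()
print(g_sum.sum())      # 1.0
```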

So, taking your code:

im2=imread('https://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png');
im2=double(rgb2gray(im2));

sigma=9;
sizei=101;
[x,y] = meshgrid(-sizei/2:sizei/2,-sizei/2:sizei/2);
constant = 1/(2*pi*sigma*sigma);
kernel = constant*exp( -(y.^2 + x.^2 )/(2 * sigma * sigma));
%%%%%% NORMALIZATION
kernel=kernel/sum(kernel(:));
%%%%%%

im2_low = conv2(im2,kernel,'same');

F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;


im2_high = ifft2(IM2_high);
figure; imshow(im2_high,[]);


But, as CrisLuengo mentions, subtraction works the same in the Fourier domain as in the spatial domain, so the answer is simply:

im2_high = im2 - im2_low;
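As a sanity check of the normalization point (a NumPy/SciPy sketch rather than the OP's MATLAB, assuming SciPy is available): with a sum-normalized kernel, low-pass filtering a constant image returns the same constant, so the high-pass residual im2 - im2_low is essentially zero for an image with no edges.

```python
import numpy as np
from scipy.signal import convolve2d

sigma, half = 9.0, 27
ax = np.arange(-half, half + 1)
x, y = np.meshgrid(ax, ax)
g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
g /= g.sum()                      # normalize: weights sum to 1

im = np.full((64, 64), 100.0)     # flat image: no high frequencies
im_low = convolve2d(im, g, mode='same', boundary='symm')
im_high = im - im_low             # high-pass residual
print(np.abs(im_high).max())      # ~0: a flat image has no edges
```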
4 votes

This is a long answer to a short question. Read it if you want to learn something.

A low-pass filter and a high-pass filter are both linear filters. A linear filter can be applied in the spatial domain through a convolution or in the frequency domain (a.k.a. Fourier domain) as a multiplication.

It is true that, in the Fourier domain, the difference between an identity filter (all-pass filter) and a low-pass filter kernel is a high-pass filter:

high_pass_filter = identity_filter - low_pass_filter

The identity filter would be a kernel where every element is 1. The filter is applied by multiplication, so

IM2 * high_pass_filter = IM2 * ( identity_filter - low_pass_filter )

which is the same as

IM2 * high_pass_filter = IM2 - IM2 * low_pass_filter

(here, as in the question, IM2 is the Fourier-domain representation of the image im2; the expressions above are pseudo-code equations, with the * symbol used for multiplication).

Thus, the OP wants to apply a low-pass filter and subtract its output from the input image in the Fourier domain to obtain a high-pass-filtered image.

However, one of the properties of the Fourier transform is that it is a linear transform. This means that

F(a*f + b*g) == a * F(f) + b * F(g)

(with F(.) the Fourier transform, a and b constants, and f and g functions). Setting a = 1 and b = -1, with f the input image and g the low-pass-filtered image, we get

F(im2 - im2_low) == F(im2) - F(im2_low)

That is, subtraction in the spatial domain and in the Fourier domain are equivalent. Thus, if one computes im2_low in the spatial domain, there is no need to go to the Fourier domain for the subtraction. These two bits of code produce an identical result (up to numerical precision):

F = fft2(im2_low);
IM2 = fft2(im2);
IM2_high = IM2 - F;
im2_high = ifft2(IM2_high);

im2_high = im2 - im2_low;
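That equivalence is easy to verify numerically. Here is a sketch in NumPy/SciPy (np.fft.fft2 and scipy.signal.convolve2d standing in for MATLAB's fft2 and conv2; the random image and kernel sizes are just for illustration):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
im2 = rng.random((64, 64))

# Sum-normalized Gaussian low-pass kernel
sigma, half = 3.0, 9
ax = np.arange(-half, half + 1)
x, y = np.meshgrid(ax, ax)
g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
g /= g.sum()

im2_low = convolve2d(im2, g, mode='same')

# Route 1: subtract in the Fourier domain, then transform back
high_fourier = np.real(np.fft.ifft2(np.fft.fft2(im2) - np.fft.fft2(im2_low)))

# Route 2: subtract directly in the spatial domain
high_spatial = im2 - im2_low

print(np.abs(high_fourier - high_spatial).max())  # numerical noise only
```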

Furthermore, convolution is also linear. This means that the linearity equation above still holds if you replace F(.) with a convolution against a fixed kernel. You can do manipulations like this:

conv(f, h) - f == conv(f, h) - conv(f, d) == conv(f, h - d)

(here d is the identity kernel for convolution: a single 1 at the origin with zeros elsewhere, so that conv(f, d) == f).

This directly leads to a definition of a high-pass filter in the spatial domain:

g = - compute_kernel(9,101);   % negated low-pass kernel
g(51,51) = g(51,51) + 1;       % add the identity (delta) at the center
im2_high2 = conv2(im2,g,'same');

You will see that max(max(abs(im2_high-im2_high2))) yields a value very close to 0.
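The same construction can be sketched in NumPy/SciPy (an illustration on a random image, not the original MATLAB): negate the low-pass kernel, add 1 at the center sample to represent the identity, and convolve once.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
im2 = rng.random((64, 64))

sigma, half = 3.0, 9
ax = np.arange(-half, half + 1)
x, y = np.meshgrid(ax, ax)
g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
g /= g.sum()

# High-pass kernel: identity (delta at the center) minus the low-pass kernel
h = -g.copy()
h[half, half] += 1.0

im2_high  = im2 - convolve2d(im2, g, mode='same')  # subtract the low-pass
im2_high2 = convolve2d(im2, h, mode='same')        # single high-pass convolution
print(np.abs(im2_high - im2_high2).max())          # numerical noise only
```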


A note regarding computing the Gaussian filter:

The compute_kernel function posted in the question computes a 2D filter kernel by directly evaluating a 2D Gaussian. The resulting filter kernel is 101x101 pixels, meaning that computing the convolution requires 101 * 101 * N multiplications and additions (MADs), with N the number of pixels in the filtered image. However, the Gaussian filter is separable, meaning that the same result can be obtained in only 101 * 2 * N MADs (50x fewer!). Additionally, for sigma = 9 one can get away with a smaller kernel too.

  1. Gaussian kernel size:

    The Gaussian function never reaches zero, but it reaches very close to zero quite quickly. When cutting it off at 3*sigma, very little of it is lost. I find 3 sigma to be a good balance. In the case of sigma = 9, the 3 sigma cutoff leads to a kernel with 55 pixels (3*sigma * 2 + 1).

  2. Gaussian separability:

    The multi-dimensional Gaussian can be obtained by multiplying 1D Gaussians together:

    exp(-(y.^2+x.^2)/(2*sigma*sigma)) == exp(-(x.^2)/(2*sigma*sigma)) * exp(-(y.^2)/(2*sigma*sigma))
    

    This leads to a much more efficient implementation of the convolution:

    conv(f,h1*h2) == conv( conv(f,h1), h2 )
    

    That is, convolving an image with a column filter h1 and then convolving the result with a row filter h2 is the same as convolving the image with a 2D filter h1*h2. In code:

    sigma = 9;
    sizei = ceil(3*sigma); % 3 sigma cutoff
    g = exp(-(-sizei:sizei).^2/(2*sigma.^2)); % 1D Gaussian kernel
    g = g/sum(g(:)); % normalize kernel
    im2_low = conv2(g,g,im2,'same');
    
    g2d = g' * g;
    im2_low2 = conv2(im2,g2d,'same');
    

    The difference is numerical imprecision:

    max(max(abs(im2_low-im2_low2)))
    
    ans =
       1.3927e-12
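
The separability check translates directly to NumPy/SciPy as well (two 1-D passes with scipy.signal.convolve2d standing in for conv2(g, g, im2, 'same'); the random image is just for illustration):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
im2 = rng.random((64, 64))

sigma = 9.0
half = int(np.ceil(3 * sigma))            # 3 sigma cutoff
g = np.exp(-np.arange(-half, half + 1)**2 / (2 * sigma**2))
g /= g.sum()                              # normalized 1-D Gaussian, 55 taps

# Separable: one pass down the columns, then one pass along the rows
tmp = convolve2d(im2, g[:, None], mode='same')
im2_low = convolve2d(tmp, g[None, :], mode='same')

# Direct 2-D convolution with the outer-product kernel
g2d = np.outer(g, g)
im2_low2 = convolve2d(im2, g2d, mode='same')

print(np.abs(im2_low - im2_low2).max())   # numerical imprecision only
```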
    

You'll find a more detailed description of Gaussian filtering on my blog, as well as some issues you can run into when using MATLAB's Image Processing Toolbox.