0 votes

I want to implement a two-dimensional matched filter for blood vessel extraction according to the paper "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters" by Chaudhuri et al., IEEE Trans. on Medical Imaging, 1989 (there is a PDF on the author's web site).

A brief description: a blood vessel's cross-section has a Gaussian intensity profile, so I want to use a Gaussian matched filter to increase the SNR. Such a kernel may be mathematically expressed as:

K(x,y) = -exp(-x^2 / (2*sigma^2))   for |x| < 3*sigma,  |y| < L/2

Here L is the length of the vessel segment assumed to have a fixed orientation. Experimentally, sigma = 1.5 and L = 7.

My MATLAB code for this part is:

s = 1.5; %sigma
t = -3*s:3*s;
theta=0:15:165; %different rotations
%one dimensional kernel
x = 1/sqrt(6*s)*exp(-t.^2/(2*s.^2));

L=7;
%two dimensional gaussian kernel
x2 = repmat(x,L,1);

Consider the response of this filter for a pixel belonging to the background retina. Assuming the background to have constant intensity with zero mean additive Gaussian white noise, the expected value of the filter output should ideally be zero. The convolution kernel is, therefore, modified by subtracting the mean value of s(t) from the function itself. The mean value of the kernel is determined as: m = Sum(K(x,y))/(number of points).

Thus, the convolutional mask used in this algorithm is given by: K(x, y) = K(x,y) - m.

My MATLAB code:

m = sum(x2(:))/(size(x2,1)*size(x2,2));
x2 = x2-m; 

A vessel may be oriented at any angle 0 < theta < 180 degrees, and the matched filter response is maximum when the kernel is aligned at theta ± 90 degrees (the cross-section intensity distribution is Gaussian, not the vessel itself).

Thus we need to rotate the matched filter 12 times, in 15-degree increments.

My MATLAB code is attached below, but I don't get a desirable result. Any help is appreciated.

%apply rotated matched filter on image
r = {};
for k = 1:12
    x3=imrotate(x2,theta(k),'crop');%figure;imagesc(x3);colormap gray;   
    r{k}=conv2(img,x3);
end
w=[];h = zeros(584,565);
for i = 1:565
    for j = 1:584
        for k = 1:12 % one response per rotated kernel
            w= [w ,r{k}(j,i)];
        end
        h(j,i)=max(abs(w));
        w = [];
    end
end
%show result
figure('Name','after matched filter');imagesc(h);colormap gray

For rotation I used imrotate, which seems more sensible to me, but in the paper it is done differently: suppose p = [x, y] is a discrete point in the kernel. To compute the coefficients of the rotated kernel we have [u, v] = p * Rotation_Matrix.

Rotation_Matrix=[cos(theta),sin(theta);-sin(theta),cos(theta)]

And the kernel is:

K(x,y) = -exp(-u^2 / (2*s^2))

But the new kernel doesn't have a Gaussian shape anymore, whereas using imrotate preserves the Gaussian shape. So what is the benefit of using the rotation matrix?

Input image:

[figure: input image before applying the matched filter]

Output:

[figure: result after applying the matched filter]

Matched filtering helps increase the SNR, but the background noise is amplified too. Am I right to use imrotate to rotate the kernel? My main problem is with the rotation matrix: why is it used, and what is the right code to implement it?

Can you add some inputs / outputs and explain why your result is not desired? - NateTheGrate

My reason for saying the code is not correct is that I am implementing the paper "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters" with Chaudhuri's algorithm and my final result is not right. - F.smith

"It is wrong because it is not right" is not an explanation! - Ander Biguri

How should I determine whether it is right? - F.smith

@F.smith You are the one asking here because it's wrong, no? What is your question? - Ander Biguri

1 Answer

1 vote

The reason to build the filter from its analytic expression for each rotation, rather than using imrotate, is that the filter's extent is not circular, so rotating brings "new" pixel values into the kernel and pushes some other pixels out of it. Furthermore, rotating a kernel constructed as done here (a smooth transition along one dimension, a step edge along the other) requires different interpolation methods along each dimension, which imrotate cannot do. The resulting rotated kernel will always be wrong.

Both these issues can be easily seen when displaying the kernel you make together with two rotated versions:

[figure: the kernel as constructed in the question, together with two imrotate-rotated versions]

This display brings an additional issue to the front: the kernel is not centered on a pixel, causing it to shift the output by half a pixel.
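
To make the half-pixel shift concrete, here is a minimal sketch (assuming the question's value s = 1.5) of the sample positions produced by t = -3*s:3*s, compared with an integer-centred support obtained with ceil (the same idea used by the code further below):

s = 1.5;
t = -3*s:3*s              % [-4.5 -3.5 ... 3.5 4.5]: 10 samples, 0 is not among them
t = -ceil(3*s):ceil(3*s)  % [-5 -4 ... 4 5]: 11 samples, centred on 0, so the kernel is centred on a pixel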

Note also that, when subtracting the mean, it is important that this mean be computed only over the original domain of the filter, and that any zeros used to pad this domain to a rectangular shape remain zero (these should not become negative).

The rotated kernels can be constructed as follows:

m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
t = pi/6;                    % angle in radian
u = cos(t)*x - sin(t)*y;     % rotated coordinate system
v = sin(t)*x + cos(t)*y;     % rotated coordinate system
N = (abs(u) <= 3*s) & (abs(v) <= L/2);   % domain
k = exp(-u.^2/(2*s.^2));     % kernel
k = k - mean(k(N));
k(~N) = 0;                   % set kernel outside of domain to 0

This is the result for the three rotations used in the example above (the grey around the edges of the kernel corresponds to the value 0, the black pixels have a negative value):

[figure: the three rotated kernels constructed from the analytic expression]

Another issue is that you use conv2 with the default 'full' output shape; you should be using 'same' here, so that the output of the filter has the same size as the input.
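
As a quick illustration of the difference (using a hypothetical image and kernel, not the actual data):

A = rand(100,120);        % hypothetical image
k = ones(7,9);            % hypothetical kernel, roughly the matched filter support size
size(conv2(A,k))          % 106 x 128: the default 'full' shape grows by (kernel size - 1)
size(conv2(A,k,'same'))   % 100 x 120: matches the input size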

Note that, instead of computing all filter responses, and computing the max afterwards, it is much easier to compute the max as you compute each filter response. All of the above leads to the following code:

img = im2double(rgb2gray(img));

s = 1.5; %sigma
L = 7;
theta = 0:15:165; %different rotations

out = zeros(size(img));

m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
for t = theta
   t = t / 180 * pi;        % angle in radian
   u = cos(t)*x - sin(t)*y; % rotated coordinate system
   v = sin(t)*x + cos(t)*y; % rotated coordinate system
   N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
   k = exp(-u.^2/(2*s.^2)); % kernel
   k = k - mean(k(N));
   k(~N) = 0;               % set kernel outside of domain to 0

   res = conv2(img,k,'same');
   out = max(out,res);
end

out = out/max(out(:)); % force output to be in [0,1] interval that MATLAB likes
imwrite(out,'so_result.png')

I get the following output:

[figure: output of the algorithm]