
I have to import a greyscale image into MATLAB, convert the pixels it is composed of from unsigned 8-bit ints into doubles, plot a histogram, and finally improve the image quality using the transformation:

v'(x,y) = a*v(x,y) + b

where v(x,y) is the value of the pixel at point x,y and v'(x,y) is the new value of the pixel.

My primary issue is the value of the two constants, a and b that we have to choose for the transformation. My understanding is that an equalized histogram is desirable for a good image. Other code found on the Internet deals either with using the built-in MATLAB histeq function or calculating probability densities. I've found no reference anywhere to choosing constants for the transformation given above.

I'm wondering if anyone has any tips or ideas on how to go about choosing these constants. I think the rest of my code does what it's supposed to:

image = imread('image.png');
image_of_doubles = double(image);

for i = 1:1024
    for j = 1:806
        pixel = image_of_doubles(i,j);
        pixel = 0.95*pixel + 5;
        image_of_doubles(i,j) = pixel;
    end
end

[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);

xlim([0 255]);

Edit: I also played a bit with different values of the constants. Changing the a constant seems to be what stretches the histogram; however, this only works for values of a between 0.8 and 1.2 (and it doesn't stretch it enough - it spreads the histogram only over the range 150 - 290). If you apply an a of, let's say, 0.5, the histogram is split into two blobs, with a lot of pixels at about 4 or 5 different intensities; again, not equalized in the least.

Comment from rayryeng: The duplicate that was marked is not the same question. The duplicate talked about implementing the calculation of an image histogram. This question talks about contrast stretching in order to spread out the intensities seen in a histogram. Two fundamentally different questions. I've reopened the question.

1 Answer


The operation that you are interested in is known as linear contrast stretching. Basically, you want to multiply each of the intensities by some gain and then shift the intensities by some number so you can manipulate the dynamic range of the histogram. This is not the same as histeq or using the probability density function of the image (a precursor to histogram equalization) to enhance the image. Histogram equalization seeks to flatten the image histogram so that the intensities are more or less equiprobable in being encountered in the image. If you'd like a more authoritative explanation on the topic, please see my answer on how histogram equalization works:

Explanation of the Histogram Equalization function in MATLAB
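For a rough sense of what histogram equalization does, in contrast to the linear stretch discussed in this answer, here is a minimal sketch. It is written in NumPy rather than MATLAB purely for illustration, and the helper name `equalize_hist` is my own, not a library function; it assumes an 8-bit greyscale image.

```python
import numpy as np

def equalize_hist(img):
    """Minimal histogram-equalization sketch for a uint8 greyscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size            # cumulative distribution in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # map each intensity through the CDF
    return lut[img]

# A low-contrast image: intensities clustered in a narrow band around 120
img = np.clip(np.random.default_rng(0).normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
out = equalize_hist(img)
print(img.min(), img.max())  # narrow input range
print(out.min(), out.max())  # output spreads toward the full 0-255 range
```

The key difference from a linear stretch is the lookup table built from the CDF: it is a monotone but generally *nonlinear* mapping, which is why it can flatten the histogram rather than merely widen it.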


In any case, choosing the values of a and b is highly image dependent. However, one thing I can suggest if you want this to be adaptive is to use min-max normalization. Basically, you take your histogram and linearly map the intensities so that the lowest input intensity gets mapped to 0, and the highest input intensity gets mapped to the highest value of the associated data type for the image. For example, if your image was uint8, this means that the highest value gets mapped to 255.

Performing min/max normalization is very easy. It is simply the following transformation:

out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in));

in and out are the input and output images. min(in) and max(in) are the overall minimum and maximum of the input image. max_type is the maximum value associated for the input image type.

For each location (x,y) of the input image, you substitute an input image intensity and run through the above equation to get your output. You can convince yourself that substituting min(in) and max(in) for in(x,y) will give you 0 and max_type respectively, and everything else is linearly scaled in between.

Now, with some algebraic manipulation, you can get this to be of the form out(x,y) = a*in(x,y) + b as mentioned in your problem statement. Concretely:

out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in));
out(x,y) = max_type*in(x,y)/(max(in) - min(in)) - max_type*min(in)/(max(in) - min(in)) // Distributing max_type over the subtraction
out(x,y) = (max_type/(max(in) - min(in)))*in(x,y) - (max_type*min(in))/(max(in) - min(in)) // Re-arranging the first term

out(x,y) = a*in(x,y) + b

a is simply max_type/(max(in) - min(in)) and b is -(max_type*min(in))/(max(in) - min(in)).
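As a quick sanity check on the algebra (a NumPy sketch rather than MATLAB, purely to verify the derivation), you can confirm that a*in + b with these constants reproduces the min-max formula exactly, and that it maps the minimum and maximum input intensities to 0 and max_type:

```python
import numpy as np

# Assumed 8-bit image, so max_type = 255
rng = np.random.default_rng(1)
img = rng.integers(30, 200, size=(4, 5)).astype(np.float64)

max_type = 255.0
lo, hi = img.min(), img.max()

# Direct min-max normalization
out_direct = max_type * (img - lo) / (hi - lo)

# The same mapping in the a*in + b form derived above
a = max_type / (hi - lo)
b = -(max_type * lo) / (hi - lo)
out_affine = a * img + b

print(np.allclose(out_direct, out_affine))  # True
print(out_affine.min(), out_affine.max())   # ~0.0 and ~255.0 (up to float rounding)
```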

Those become your a and b, which you then plug into your code. BTW, if I may suggest something, please consider vectorizing your code. You can very easily apply the equation to the entire image at once, rather than looping over each value in the image.

Simply put, your code would now look like this:

image = imread('image.png');
image_of_doubles = 0.95*double(image) + 5; %// New

[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);

.... isn't it much simpler?

Now, modify your code so that the new constants a and b are calculated in the way we discussed:

image = imread('image.png');

%// New - get maximum value for image type
max_type = double(intmax(class(image)));

%// Calculate a and b
min_val = double(min(image(:)));
max_val = double(max(image(:)));
a = max_type / (max_val - min_val);
b = -(max_type*min_val) / (max_val - min_val);

%// Compute transformation
image_of_doubles = a*double(image) + b;

%// Plot the histogram - before and after
figure;
subplot(2,1,1);
[n_elements1,centers1] = hist(double(image(:)),20);
bar(centers1,n_elements1);
%// Change
xlim([0 max_type]);

subplot(2,1,2);
[n_elements2,centers2] = hist(image_of_doubles(:),20);
bar(centers2,n_elements2);
%// Change
xlim([0 max_type]);

%// New - show both images
figure; subplot(1,2,1);
imshow(image);
subplot(1,2,2);
imshow(image_of_doubles,[]);

It's very important that you cast the result of intmax to double, because intmax returns the maximum value in the image's own integer class, not a double. I've also taken the liberty to change your code so that we can display the histogram before and after the transformation, as well as what the image looks like before and after you transform it.


As an example of seeing this work, let's use the pout.tif image that's part of the image processing toolbox. It looks like this:

[Figure: the original pout.tif image]

You can definitely see that this image calls for some contrast enhancement, because its dynamic range of intensities is quite limited. It looks washed out.

Using this as the image, this is what the histogram looks like before and after.

[Figure: histogram before (top) and after (bottom) the min-max stretch]

We can certainly see the histogram being stretched. Now this is what the images look like:

[Figure: original image (left) and contrast-stretched result (right)]

You can definitely see more detail, even though the result is darker overall. This tells you that a simple linear scaling may not be enough. You may want to use histogram equalization instead, or a gamma (power-law) transformation.
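If you want to try the power-law route, here is a minimal sketch of gamma correction (again in NumPy rather than MATLAB, and the gamma value 0.6 is just an illustrative choice, not a recommendation):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law transform: normalize to [0, 1], raise to gamma, rescale to [0, 255].
    gamma < 1 brightens mid-tones; gamma > 1 darkens them. Assumes 8-bit input."""
    norm = img.astype(np.float64) / 255.0
    return np.round(255.0 * norm ** gamma).astype(np.uint8)

img = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)  # simple intensity ramp
bright = gamma_correct(img, 0.6)                         # lifts the dark end

print(img[0, 8], bright[0, 8])  # a dark pixel is mapped noticeably brighter
```

Unlike the linear stretch, this curve redistributes intensities unevenly, which lets you brighten shadow detail (gamma < 1) without pushing the already-bright pixels past the top of the range: 0 still maps to 0 and 255 still maps to 255.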


Hopefully this will be enough to get you started though. Good luck!