I'm doing a school assignment where we are supposed to do Sobel edge detection on an image. We should convolve the image with the Sobel kernels and then calculate the gradient magnitude for each pixel. After that, we should threshold: give each pixel the value 255 (white) or 0 (black), depending on whether its gradient magnitude exceeds the threshold value. The output image from the edge detection must be of the type BufferedImage.TYPE_BYTE_BINARY. I use a grayscale image as input, but the end result looks very strange. It definitely does not detect the edges.
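For reference, these are the two standard 3×3 Sobel kernels the assignment refers to, together with the gradient-magnitude computation for a single pixel (the sample 3×3 patch values below are made up for illustration):

```java
public class SobelKernels {
    // Standard 3x3 Sobel kernels: GX responds to horizontal intensity
    // change, GY to vertical change. Indexed as kernel[row][col].
    static final int[][] GX = {
        {-1, 0, 1},
        {-2, 0, 2},
        {-1, 0, 1}
    };
    static final int[][] GY = {
        {-1, -2, -1},
        { 0,  0,  0},
        { 1,  2,  1}
    };

    public static void main(String[] args) {
        // Gradient magnitude for one pixel, given a 3x3 patch p of
        // gray values (here: a horizontal edge, dark above bright).
        int[][] p = {{10, 10, 10}, {10, 10, 10}, {200, 200, 200}};
        int gx = 0, gy = 0;
        for (int r = 0; r < 3; r++) {
            for (int c = 0; c < 3; c++) {
                gx += GX[r][c] * p[r][c];
                gy += GY[r][c] * p[r][c];
            }
        }
        System.out.println((int) Math.sqrt(gx * gx + gy * gy)); // prints 760
    }
}
```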
I googled around and managed to find working code (here, see the accepted answer); however, the output image there is of the type BufferedImage.TYPE_INT_RGB, which is not allowed. In that question, they also use a BufferedImage.TYPE_INT_RGB as input to the edge detection.
Help on resolving this matter is much appreciated!
Result when I execute the program. The edge detection result is on the far right.
What the edge detection result should look like.
My code:
/**
 * Turns an image into a grayscale version of the image.
 */
public void alterImageGrayScale() throws IOException {
    imageGrayScale = new BufferedImage(imageOriginal.getWidth(), imageOriginal.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    for (int i = 0; i < imageOriginal.getWidth(); i++) {
        for (int j = 0; j < imageOriginal.getHeight(); j++) {
            Color c = new Color(imageOriginal.getRGB(i, j));
            int red = c.getRed();
            int green = c.getGreen();
            int blue = c.getBlue();
            // Rec. 709 luma weights
            int gray = (int) (0.2126 * red + 0.7152 * green + 0.0722 * blue);
            imageGrayScale.setRGB(i, j, new Color(gray, gray, gray).getRGB());
        }
    }
}
/**
* edge detection
* @throws IOException
*/
public void alterEdgeDetection() throws IOException {
    imageBlackAndWhite = new BufferedImage(imageGrayScale.getWidth(), imageGrayScale.getHeight(), BufferedImage.TYPE_INT_RGB);
    int x = imageGrayScale.getWidth();
    int y = imageGrayScale.getHeight();
    int threshold = 250;
    for (int i = 1; i < x - 1; i++) {
        for (int j = 1; j < y - 1; j++) {
            int val00 = imageGrayScale.getRGB(i - 1, j - 1);
            int val01 = imageGrayScale.getRGB(i - 1, j);
            int val02 = imageGrayScale.getRGB(i - 1, j + 1);
            int val10 = imageGrayScale.getRGB(i, j - 1);
            int val11 = imageGrayScale.getRGB(i, j);
            int val12 = imageGrayScale.getRGB(i, j + 1);
            int val20 = imageGrayScale.getRGB(i + 1, j - 1);
            int val21 = imageGrayScale.getRGB(i + 1, j);
            int val22 = imageGrayScale.getRGB(i + 1, j + 1);
            int gradientX = ((-1 * val00) + (0 * val01) + (1 * val02))
                    + ((-2 * val10) + (0 * val11) + (2 * val12))
                    + ((-1 * val20) + (0 * val21) + (1 * val22));
            int gradientY = ((-1 * val00) + (-2 * val01) + (-1 * val02))
                    + ((0 * val10) + (0 * val11) + (0 * val12))
                    + ((1 * val20) + (2 * val21) + (1 * val22));
            int gradientValue = (int) Math.sqrt(Math.pow(gradientX, 2) + Math.pow(gradientY, 2));
            // ???? feel like something should be done here, but don't know what
            if (threshold > gradientValue) {
                imageBlackAndWhite.setRGB(i, j, new Color(0, 0, 0).getRGB());
            } else {
                imageBlackAndWhite.setRGB(i, j, new Color(255, 255, 255).getRGB());
            }
        }
    }
}
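One detail worth noting when comparing this with the working code: getRGB returns the whole packed ARGB int (alpha in the high byte), not a 0–255 intensity, even for a TYPE_BYTE_GRAY image, so convolving the raw return values mixes the alpha byte and all three channels into the arithmetic. A minimal sketch of masking out a single channel first, assuming the input really is grayscale so all three channels are equal:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class GraySample {
    // getRGB always returns a packed ARGB int, so mask out one channel
    // before doing arithmetic on pixel values. The low byte is the blue
    // channel; for a grayscale image all three channels are equal.
    static int gray(BufferedImage img, int x, int y) {
        return img.getRGB(x, y) & 0xFF;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, new Color(128, 128, 128).getRGB());
        System.out.println(img.getRGB(0, 0)); // -8355712, i.e. 0xFF808080 as a signed int
        System.out.println(gray(img, 0, 0)); // 128
    }
}
```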
TYPE_BYTE_BINARY? Then you could just use the existing edge detection code to generate the grayscale image, and only do a "grayscale to binary" conversion as a last step in a dedicated method. – Marco13

TYPE_INT_RGB. I don't understand how I can change the code to make the end result a TYPE_BYTE_BINARY. I don't think I need to have a grayscale as input as I've written in the question, if that causes trouble. I thought the grayscaling happened "outside" the edge detection, and that I had to write code for that explicitly. @Marco13 – Isus
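Marco13's suggestion above (keep the existing edge detection, then convert to binary as a last step) could be sketched roughly like this; the helper name toBinary is made up for illustration. Since the thresholded edge image only ever contains pure black and pure white, setRGB maps each pixel cleanly onto the 1-bit raster:

```java
import java.awt.image.BufferedImage;

public class BinaryConversion {
    // Copy a black-and-white edge image into a 1-bit TYPE_BYTE_BINARY
    // image. Pixels brighter than mid-gray become white, the rest black.
    static BufferedImage toBinary(BufferedImage src) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_BYTE_BINARY);
        for (int x = 0; x < src.getWidth(); x++) {
            for (int y = 0; y < src.getHeight(); y++) {
                int gray = src.getRGB(x, y) & 0xFF; // blue channel, 0-255
                out.setRGB(x, y, gray > 127 ? 0xFFFFFF : 0x000000);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFFFFFF); // white pixel
        src.setRGB(1, 0, 0x000000); // black pixel
        BufferedImage bin = toBinary(src);
        System.out.println(bin.getType() == BufferedImage.TYPE_BYTE_BINARY);
        System.out.println((bin.getRGB(0, 0) & 0xFFFFFF) == 0xFFFFFF);
        System.out.println((bin.getRGB(1, 0) & 0xFFFFFF) == 0x000000);
    }
}
```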