I'm trying to implement a Gaussian blur from scratch in C++. In the code below I've hard-coded the 1D Gaussian kernel I'm using. I only kept one dimension because I'm trying to use the separability optimization I've read about: instead of convolving with the full 2D kernel, you do a 1D convolution pass along one axis and then a second 1D pass over that result along the other axis, which is more efficient. Unfortunately, I'm running into some issues. Here is my code, followed by a short sketch of how I understand the two-pass idea:
// 5-tap 1D Gaussian kernel (sigma = 1), normalized so the weights sum to 1
float gKern[5] = {0.05448868, 0.24420134, 0.40261995, 0.24420134, 0.05448868};

int** gaussianBlur(int** image, int height, int width) {
    int** ret = new int*[height];
    for (int i = 0; i < height; i++) {
        ret[i] = new int[width];
    }

    // Vertical pass: blur each column of image into ret,
    // mirroring rows across the top and bottom edges.
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (i == 0) {
                ret[i][j] = (gKern[0] * image[2][j]) + (gKern[1] * image[1][j]) + (gKern[2] * image[0][j]) + (gKern[3] * image[1][j]) + (gKern[4] * image[2][j]);
            } else if (i == 1) {
                ret[i][j] = (gKern[0] * image[1][j]) + (gKern[1] * image[0][j]) + (gKern[2] * image[1][j]) + (gKern[3] * image[2][j]) + (gKern[4] * image[3][j]);
            } else if (i == (height - 2)) {
                ret[i][j] = (gKern[0] * image[i - 2][j]) + (gKern[1] * image[i - 1][j]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i + 1][j]) + (gKern[4] * image[i][j]);
            } else if (i == (height - 1)) {
                ret[i][j] = (gKern[0] * image[i - 2][j]) + (gKern[1] * image[i - 1][j]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i - 1][j]) + (gKern[4] * image[i - 2][j]);
            } else {
                ret[i][j] = (gKern[0] * image[i - 2][j]) + (gKern[1] * image[i - 1][j]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i + 1][j]) + (gKern[4] * image[i + 2][j]);
            }
        }
    }

    // Point image at the vertically blurred result so the second pass reads from it.
    int** temp = image;
    image = ret;

    // Horizontal pass: blur each row, mirroring columns across the left and right edges.
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (j == 0) {
                ret[i][j] = (gKern[0] * image[i][2]) + (gKern[1] * image[i][1]) + (gKern[2] * image[i][0]) + (gKern[3] * image[i][1]) + (gKern[4] * image[i][2]);
            } else if (j == 1) {
                ret[i][j] = (gKern[0] * image[i][1]) + (gKern[1] * image[i][0]) + (gKern[2] * image[i][1]) + (gKern[3] * image[i][2]) + (gKern[4] * image[i][3]);
            } else if (j == (width - 2)) {
                ret[i][j] = (gKern[0] * image[i][j - 2]) + (gKern[1] * image[i][j - 1]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i][j + 1]) + (gKern[4] * image[i][j]);
            } else if (j == (width - 1)) {
                ret[i][j] = (gKern[0] * image[i][j - 2]) + (gKern[1] * image[i][j - 1]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i][j - 1]) + (gKern[4] * image[i][j - 2]);
            } else {
                ret[i][j] = (gKern[0] * image[i][j - 2]) + (gKern[1] * image[i][j - 1]) + (gKern[2] * image[i][j]) + (gKern[3] * image[i][j + 1]) + (gKern[4] * image[i][j + 2]);
            }
        }
    }

    // Restore the original image pointer and return the blurred result.
    image = temp;
    return ret;
}
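For reference, this is how I understand the separable two-pass approach is supposed to be structured. It's only a sketch of my mental model, not code I'm actually running: the Image alias, the reflect helper, the separableBlur name, and the use of std::vector and float are just there to keep the example short.

#include <vector>

using Image = std::vector<std::vector<float>>; // placeholder type for this sketch

const float kKern[5] = {0.05448868f, 0.24420134f, 0.40261995f, 0.24420134f, 0.05448868f};

// Mirror an out-of-range index back into [0, n - 1] (reflect about the edge pixel),
// which is the same edge handling as the if/else branches in my code above.
int reflect(int idx, int n) {
    if (idx < 0)  return -idx;              // -1 -> 1, -2 -> 2
    if (idx >= n) return 2 * n - 2 - idx;   // n -> n - 2, n + 1 -> n - 3
    return idx;
}

Image separableBlur(const Image& src, int height, int width) {
    Image tmp(height, std::vector<float>(width, 0.0f)); // result of the vertical pass
    Image out(height, std::vector<float>(width, 0.0f)); // final result

    // Vertical pass: read from src, write to tmp.
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            float acc = 0.0f;
            for (int k = -2; k <= 2; k++) {
                acc += kKern[k + 2] * src[reflect(i + k, height)][j];
            }
            tmp[i][j] = acc;
        }
    }

    // Horizontal pass: read from tmp, write to out.
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            float acc = 0.0f;
            for (int k = -2; k <= 2; k++) {
                acc += kKern[k + 2] * tmp[i][reflect(j + k, width)];
            }
            out[i][j] = acc;
        }
    }

    return out;
}

As far as I understand, doing the vertical pass first and then the horizontal one (as my code does) should give the same result as the opposite order.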
The first pass (the first for block) seems to work fine: when I comment out the second block I do get a slightly blurred image. But when I run both passes I get a choppy, "weird" image, as shown below (the first image is my grayscale input, the second is the choppy output):