
I have a little problem. I want to implement Gaussian convolution in C#, but (for my purposes) it has to produce the same result as the Gaussian blur in EmguCV (which also means the same as OpenCV). Computing the kernel values should be the same for any Gaussian convolution; the only thing I need to know is how the OpenCV implementation computes the kernel size. I have had a closer look at the OpenCV source code, but I have only a little experience with C++, which is why I haven't found it. Perhaps someone else can help me with this. And are there any other differences between the OpenCV implementation and the "original"?

The question should be clarified. Are you asking 1. how the standard deviation is derived from the blur function parameter (I assume it's the same), or 2. given the standard deviation, what kernel size is chosen? (I assume the smallest that won't affect the outcome.) – morishuz

1 Answer


Gaussian blur is a convolution of the image with a Gaussian kernel of a specific size (the kernel size is a parameter).

So the steps for convolving are:

make a Gaussian kernel
convolve the image with it
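Since the original question is specifically how OpenCV picks the kernel, here is a sketch in Python/NumPy of my reading of the OpenCV source (`createGaussianFilter` / `cv::getGaussianKernel`). The helper names are my own, and the constants are assumptions to verify against your OpenCV version; note also that, as far as I can tell, OpenCV substitutes fixed hard-coded coefficient tables for very small kernels (size ≤ 7) when sigma is not positive, so the formula below may not match exactly in that case.

```python
import numpy as np

def auto_kernel_size(sigma, eight_bit=True):
    """Kernel size from sigma, as I read it in OpenCV's createGaussianFilter:
    size = round(sigma * (3 for 8-bit images, else 4) * 2 + 1), forced odd."""
    return int(round(sigma * (3 if eight_bit else 4) * 2 + 1)) | 1

def gaussian_kernel_1d(ksize, sigma=-1.0):
    """1-D Gaussian coefficients. If sigma <= 0, it is derived from ksize
    using the formula documented for cv::getGaussianKernel:
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8."""
    if sigma <= 0:
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    x = np.arange(ksize) - (ksize - 1) * 0.5   # centred sample positions
    kernel = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return kernel / kernel.sum()               # normalise: coefficients sum to 1
```

Because the Gaussian is separable, the 2-D kernel is just the outer product `np.outer(k, k)`, and the blur can be applied as two cheap 1-D passes (one horizontal, one vertical) instead of one 2-D convolution.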

As far as I know, OpenCV uses an optimisation when the kernel size is larger than a predefined threshold (I think it is 7): the convolution is then performed in the frequency domain rather than in the spatial domain.

The steps are:

calculate the result image size (typically rounded up to a power of two for a fast FFT)
transform the image to the frequency domain (e.g. FFT)
transform the kernel to the frequency domain (e.g. FFT)
multiply the two matrices element by element
transform the result back to the spatial domain (e.g. inverse FFT)
clip the matrix (convolution in the frequency domain is cyclic convolution)
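The frequency-domain steps above can be sketched with NumPy. `fft_convolve2d` is my own illustrative helper, not OpenCV code; NumPy's FFT accepts any size, so this pads to the full linear-convolution size rather than a power of two, which avoids the wrap-around of cyclic convolution.

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """2-D convolution via the frequency domain, cropped to 'same' size."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    # Full linear-convolution size; padding to at least this size
    # prevents the cyclic convolution from wrapping around.
    fh, fw = ih + kh - 1, iw + kw - 1
    # Transform image and kernel to the frequency domain (zero-padded).
    F = np.fft.rfft2(image, s=(fh, fw))
    G = np.fft.rfft2(kernel, s=(fh, fw))
    # Element-wise multiplication == convolution in the spatial domain.
    out = np.fft.irfft2(F * G, s=(fh, fw))
    # Clip the full result back to the input image size (centred).
    top, left = kh // 2, kw // 2
    return out[top:top + ih, left:left + iw]
```

For a Gaussian kernel this produces (up to floating-point roundoff) the same result as direct spatial convolution with the border handled as zeros, which is why the FFT route only pays off for large kernels.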

If you just want a basic version, look at the GaussianBlur.cs class in the AForge.NET library (first try using the class itself to see whether it fits your needs). Docs:

http://www.aforgenet.com/framework/docs/html/f074e0dd-865c-fd5f-ba0a-80e336a0eaea.htm