I want to make a sound simulation of a virtual scene. To do so, I want to convolve the impulse response my system calculates with an input signal from a simple .wav file in a frequency-dependent manner. As far as I understand DSP, the best way is to use the FFT to convert the input signal into its frequency spectrum, apply the impulse response to it there, and transform the result back with the iFFT.
My problem is that after performing the FFT on my signal and then the iFFT, the result differs from the original input signal. The original sound is still somewhat recognizable in the new signal, but it is very "blurred", apparently by wrong values produced somewhere in the FFT/iFFT round trip. I took the "first" (in-place, breadth-first, decimation-in-frequency) implementation example of the FFT in C++ from http://rosettacode.org/wiki/Fast_Fourier_transform#C.2B.2B.
Here is how my code uses the FFT implementation:
CArray signal = CArray(output_size);
for (int i = 0; i < format.FrameCount; ++i) {
    signal[i] = Complex((double)(is_8_bit ? sample_data_8[i] : sample_data_16[i]), 0);
}
fft(signal);
ifft(signal);
The following typedefs exist:
typedef std::complex<double> Complex;
typedef std::valarray<Complex> CArray;
Since I took the code from the website above, I assume the mistake is not in the FFT implementation itself. I suspect it has something to do with the data types of my input and/or the complex numbers.
Since my system does not model phase, and I have read that phase can be neglected while still producing a useful result, I initialise the complex numbers with an imaginary part of 0.
Have I made a fundamental mistake, or is the fault in something like the data types or in rounding where it shouldn't be?