I'm trying to compute the autocorrelation of sample windows in a time series using the code below. Following the Wiener-Khinchin approach, I apply an FFT to the window, replace each complex bin with its squared magnitude (Re^2 + Im^2, setting the imaginary part to zero), and finally take the inverse transform of the result to obtain the autocorrelation:
// JTransforms' DoubleFFT_1D lives in org.jtransforms.fft in recent releases
// (edu.emory.mathcs.jtransforms.fft in older ones).
DoubleFFT_1D fft = new DoubleFFT_1D(magCnt);
fft.realForward(magFFT);

// Turn realForward's half-complex output into a power spectrum:
// each bin becomes Re^2 + Im^2 and its imaginary part is zeroed.
magFFT[0] = magFFT[0] * magFFT[0];  // DC bin, purely real
for (int i = 1; i < (magCnt - (magCnt % 2)) / 2; i++) {
    magFFT[2*i] = magFFT[2*i] * magFFT[2*i] + magFFT[2*i + 1] * magFFT[2*i + 1];
    magFFT[2*i + 1] = 0.0;
}
if (magCnt % 2 == 0) {
    // Even length: magFFT[1] holds the purely real Nyquist bin Re[magCnt/2].
    magFFT[1] = magFFT[1] * magFFT[1];
} else {
    // Odd length: the last bin k = (magCnt-1)/2 is packed as
    // Re[k] = magFFT[magCnt-1] and Im[k] = magFFT[1].
    magFFT[magCnt - 1] = magFFT[magCnt - 1] * magFFT[magCnt - 1]
                       + magFFT[1] * magFFT[1];
    magFFT[1] = 0.0;
}

// The inverse transform of the power spectrum is the (circular) autocorrelation.
autocorr = new double[magCnt];
System.arraycopy(magFFT, 0, autocorr, 0, magCnt);
DoubleFFT_1D ifft = new DoubleFFT_1D(magCnt);
ifft.realInverse(autocorr, false);

// Normalize so the zero-lag value is 1.
for (int i = 1; i < autocorr.length; i++) {
    autocorr[i] /= autocorr[0];
}
autocorr[0] = 1.0;
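For context, this is roughly how the windows are taken from the series before the code above runs; the window length, hop size, and the series array are placeholders for illustration, not my actual settings:

int windowSize = 1024;       // placeholder values
int hop = windowSize / 2;    // 50% overlap, also a placeholder
for (int start = 0; start + windowSize <= series.length; start += hop) {
    // series is the full input time series
    int magCnt = windowSize;
    double[] magFFT = java.util.Arrays.copyOfRange(series, start, start + windowSize);
    // ... apply the FFT / power-spectrum / inverse-FFT steps above to magFFT ...
}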
The first question: this code seems to map the autocorrelation values into the [0, 1] range, although correlation is supposed to lie between -1 and 1. Of course it would be easy to rescale the results into [-1, 1], but I'm not sure whether such a mapping is correct. How should the values in the resulting autocorr array be interpreted?
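To be concrete, the rescaling I have in mind is just a linear stretch of [0, 1] onto [-1, 1], and it is exactly this mapping I'm unsure about:

// Linear stretch: 0 -> -1, 0.5 -> 0, 1 -> 1.
for (int i = 0; i < autocorr.length; i++) {
    autocorr[i] = 2.0 * autocorr[i] - 1.0;
}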
Secondly, I'm getting good results with this code for some periodic series, i.e. higher values at the autocorrelation indices that match the period of the signal. However, the results go weird when I apply it to non-periodic signals: all the values in the autocorr array appear to be very close to 1. What is the reason for that?
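For reproduction, inputs along these lines show the contrast (the particular signals, window length, and period are made-up examples, not my real data):

int magCnt = 512;  // made-up window length
double[] periodic = new double[magCnt];
double[] nonPeriodic = new double[magCnt];
java.util.Random rnd = new java.util.Random(42);
for (int i = 0; i < magCnt; i++) {
    periodic[i] = Math.sin(2.0 * Math.PI * i / 64.0);  // period of 64 samples: clear peaks at lags 64, 128, ...
    nonPeriodic[i] = rnd.nextDouble();                 // noise: autocorr comes out near 1 everywhere
}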