I have written a little script (MATLAB) that uses the fast Fourier transform to filter messy time series.
Given a signal S(t), my function keeps the n largest-amplitude components of the FFT of S(t) and returns the filtered signal via inverse FFT.
The problem is that with a test signal the filtered signal is shifted by approximately -1 in amplitude. I have tried changing the windowing, but it doesn't appear to help (the shift just moves toward +1).
Here is a sample of my script:
% test Signal
t=0:0.1:1000;
f1=1/50; f2=1/300; f3=1/20;
Signal=100*sin(f1*2*pi*t)+8*sin(f2*2*pi*t)+65*sin(f3*2*pi*t);
[FiltredSignal]=Filter(Signal,5); % call Filter function
figure
plot(t,real(FiltredSignal)*2,t,Signal)
figure
plot(t,real(FiltredSignal)*2-Signal) % ???? shift of -1 in ifft ????
function [FiltredSignal]=Filter(Signal,n)
% n= nbr of "most significative spectra components"
timelen=length(Signal);
FT=fft(Signal); % fast Fourier transform
FTcopy=FT/timelen; % Copy for spectral analysis
FTcopy(floor(timelen/2+1):end)=[]; % cut at nyquist point
FTcopy(1,1)=0; % zero the DC bin in the copy
% sort FTcopy in descending amplitude order
sortedFT=sort(abs(FTcopy),'descend');
% spectral selection --> n biggest amplitudes
FiltredFT=zeros(1,timelen);
for i=1:n
freq=find(abs(FTcopy)==sortedFT(i));
FiltredFT(freq)=FT(1,freq);
%FiltredFT(freq-1)=FT(1,freq-1); % windowing
%FiltredFT(freq+1)=FT(1,freq+1); % windowing
end
FiltredSignal=ifft(FiltredFT); % return filtered signal time series
% END
Could someone explain what is happening?
I am surely missing some theoretical facts about the FFT (I am self-taught, so not everything is clear to me yet).
A comment from Paul R asks whether the DC component is being zeroed out (`FTcopy(1,1)=0;`?), "which will of course result in an overall shift in the Y axis in the time domain."
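Paul R's comment points at the likely cause: `FiltredFT` starts as zeros and the loop never copies bin 1 (since `FTcopy(1,1)=0` removes it from the sort), so the signal's mean (DC) is discarded before the `ifft`, which shifts the whole output along the Y axis. A minimal sketch of a fix, keeping the question's variable names (the function name `FilterFixed` is mine): it preserves the DC bin and also restores the conjugate (negative-frequency) bins, so the output is real and no `real(...)*2` workaround is needed.

```matlab
function FiltredSignal = FilterFixed(Signal, n)
% Keep the n largest positive-frequency components, plus DC.
timelen = length(Signal);
FT = fft(Signal);
half = abs(FT(2:floor(timelen/2)));       % positive-frequency bins, DC excluded
[~, order] = sort(half, 'descend');
keep = order(1:n) + 1;                    % shift indices back to FT numbering
FiltredFT = zeros(1, timelen);
FiltredFT(1) = FT(1);                     % keep DC so the mean is preserved
FiltredFT(keep) = FT(keep);               % selected positive-frequency bins
FiltredFT(timelen - keep + 2) = FT(timelen - keep + 2); % matching conjugate bins
FiltredSignal = real(ifft(FiltredFT));    % real up to rounding error
end
```

With this version `plot(t, FiltredSignal, t, Signal)` should overlay directly: no factor of 2 and no leftover offset, since both halves of the spectrum and the DC bin are kept.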