I am currently doing some spectrum analysis for a piece of coursework, although we haven't been explicitly taught Fourier transforms yet. I have been playing around with the various FFT routines in scipy and numpy on data whose spectrum I already know.
In this case it's an AM signal with an 8 kHz carrier and a 1 kHz modulating sine wave on top, so the FFT should show three clear peaks: the carrier at 8 kHz plus sidebands at 7 kHz and 9 kHz.
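For anyone wanting to reproduce the setup without my CSV file, a synthetic AM signal with the same carrier and modulating frequencies can be generated like this (the sample rate, duration and modulation index here are my own assumptions, not values from the coursework data):

```python
import numpy as np

fs = 100_000                # assumed sample rate, Hz
N = 10_000                  # 0.1 s of signal
t = np.arange(N) / fs
fc, fm = 8_000, 1_000       # carrier and modulating frequencies, Hz

# AM: carrier multiplied by (1 + m * modulating sine), m = modulation index
am = (1 + 0.5 * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(N, 1 / fs)

# the three dominant bins should be the carrier and the two sidebands
peak_freqs = freqs[spectrum > spectrum.max() / 5]
```

With these parameters the signal contains a whole number of cycles of both components, so there is no spectral leakage and `peak_freqs` comes out as exactly the 7, 8 and 9 kHz bins.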
When applying scipy.fftpack.rfft and numpy.fft.rfft I get the following plots respectively:
Scipy:
Numpy:
While the shapes of the two FFTs are roughly the same, with the correct ratios between the peaks, the numpy one looks much smoother, whereas the scipy one has slightly smaller maximum peaks and much more noise.
I'm assuming this is largely down to different implementations of the discrete Fourier transform, and I have seen other posts saying the scipy implementation has a faster run time. But I was wondering what specifically causes the difference, and which one is actually more accurate?
EDIT: Code used to generate plots:
import numpy as np
import pandas as pd
import scipy.fftpack
import matplotlib.pyplot as plt

data = pd.read_csv("./Waveforms/AM waveform Sine.csv", sep=',', dtype=float)
data = data.to_numpy()  # .as_matrix() is deprecated in newer pandas
time = data[:, 0]
voltage = data[:, 1] / data[:, 1].max()  # normalise the values

# scipy plot:
magnitude = scipy.fftpack.rfft(voltage)
freq = scipy.fftpack.rfftfreq(len(time), np.diff(time)[0])
plt.figure()
plt.plot(freq, np.absolute(magnitude), lw=1)
plt.ylim(0, 2500)
plt.xlim(0, 15)  # time column is in ms, so freq comes out in kHz

# numpy plot:
magnitude = np.fft.rfft(voltage)
freq = np.fft.rfftfreq(len(time), np.diff(time)[0])
plt.figure()
plt.plot(freq, np.absolute(magnitude), lw=1)
plt.ylim(0, 2500)
plt.xlim(0, 15)
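While digging, I did notice on a toy input that the two functions don't even return the same kind of array, which may be related: scipy.fftpack.rfft packs the real and imaginary parts interleaved into a single real-valued array, while numpy.fft.rfft returns complex values. A minimal comparison:

```python
import numpy as np
import scipy.fftpack

x = np.random.default_rng(0).standard_normal(8)

np_out = np.fft.rfft(x)         # complex array, length n//2 + 1 = 5
sp_out = scipy.fftpack.rfft(x)  # real array, length n = 8, packed as
# [y(0), Re(y(1)), Im(y(1)), Re(y(2)), Im(y(2)), Re(y(3)), Im(y(3)), y(4)]

# reassembling scipy's packed layout into complex values recovers
# exactly the numpy result
rebuilt = np.empty(5, dtype=complex)
rebuilt[0] = sp_out[0]                          # DC term (real)
rebuilt[1:4] = sp_out[1:7:2] + 1j * sp_out[2:7:2]
rebuilt[4] = sp_out[7]                          # Nyquist term (real)
assert np.allclose(rebuilt, np_out)
```

So `np.absolute` is acting on two differently structured arrays in my plots, even though the underlying DFT values agree.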