My general aim is to calculate the power spectral density (PSD) of an input signal exactly as it is displayed by the QT GUI Frequency Sink block; I'll need to process the PSD values later on. Here is my current setup.
These graphs were produced with an input signal containing no transmission.
Blue = signal, green = max of signal, pink = min of signal.
My PSD implementation produces a graph with the same height and shape as the actual PSD shown by the QT GUI Frequency Sink block, but with the wrong offset: the actual PSD is about 66 dB lower than mine.
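For concreteness, this is roughly the kind of calculation I mean (a minimal numpy sketch with an assumed FFT size, window, and placeholder input, not my exact code):

```python
# Minimal PSD sketch (assumed FFT size, window, and placeholder input; not my exact code).
import numpy as np

fft_size = 2048                                                  # assumed FFT length
x = np.random.randn(fft_size) + 1j * np.random.randn(fft_size)  # placeholder complex samples

win = np.blackman(fft_size)                          # assumed window
spectrum = np.fft.fftshift(np.fft.fft(x * win))      # windowed FFT, DC centred
psd_db = 20.0 * np.log10(np.abs(spectrum) + 1e-20)   # magnitude in dB (epsilon avoids log of zero)
```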
I eyeballed the rough max/min/signal y-axis values:
Actual PSD: max = -76 dB, min = -115 dB, signal = -86 dB.
My PSD: max = -10 dB, min = -50 dB, signal = -20 dB.
I'm not really sure what I'm doing wrong; the 66 dB offset seems pretty arbitrary to me, and I think I'm generally on the right track.
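For reference, a uniform scaling difference would produce exactly this kind of constant shift: scaling the FFT output by 1/fft_size moves the whole curve by 20*log10(fft_size), which works out to about 66.2 dB for fft_size = 2048 (an assumed value; I'm not sure this is actually what's happening in my setup):

```python
# Illustration only: a 1/fft_size scaling difference shifts the whole curve by a constant.
import numpy as np

fft_size = 2048                         # assumed FFT length
offset_db = 20.0 * np.log10(fft_size)   # constant shift caused by scaling the FFT by 1/fft_size
print(round(offset_db, 1))              # 66.2
```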