Lately I have been experimenting with audio and FFTs, specifically with the Minim library in Processing (which is basically Java, not that it's particularly important for this question). What I have come to understand is that with a buffer/sample size N and sample rate K, a forward FFT gives me N frequency bins (only N/2 of them carrying usable data, and in fact Minim only returns N/2 bins), linearly spaced, representing the spectrum from 0 to K/2 Hz.
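For concreteness, here is that bin math as a tiny Processing-style snippet (44100 Hz and 1024 are assumed example values, not anything Minim requires):

```java
float K = 44100;         // sample rate in Hz (example value)
int   N = 1024;          // FFT buffer size (example value)

float binWidth = K / N;  // each bin spans K/N ~= 43.07 Hz
float nyquist  = K / 2;  // bin N/2 sits at 22050 Hz, the top of the usable spectrum

// the center frequency of bin i (0 <= i <= N/2) is simply i * K / N
```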
With Minim (as with other typical FFT implementations), you wait to gather N samples, perform the forward transform, then wait for N more samples, and so on. To get a reasonable frame rate (for audio visualizations, beat detection, etc.), I must use a small sample size relative to the sampling frequency.
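The usual pattern looks roughly like this, a minimal sketch along the lines of Minim's own examples (the buffer size of 1024 is just illustrative):

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 1024);           // N = 1024 samples per buffer
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw() {
  background(0);
  fft.forward(in.mix);  // one transform per frame, on the most recent N samples
  for (int i = 0; i < fft.specSize(); i++) {
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}
```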
The problem with this, though, is that a small sample size yields very low resolution at the low end of the spectrum when I compute logarithmically spaced averages (since a bass octave is much narrower, in Hz, than a high-pitched octave).
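To put numbers on that with the example values from above, a quick loop counting how many linear bins land in each octave shows how starved the bass end is (`floor()` and `println()` are Processing built-ins):

```java
float binWidth = 44100f / 1024;                // ~43.07 Hz per bin
for (float lo = 55; lo * 2 <= 22050; lo *= 2) {
  float hi = lo * 2;
  int bins = floor(hi / binWidth) - floor(lo / binWidth);
  println(lo + "-" + hi + " Hz: " + bins + " bins");
}
// prints 1 bin for 55-110 Hz, but 163 bins for 7040-14080 Hz
```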
I was wondering whether a possible way to squeeze out more apparent resolution would be to perform FFTs more often than every N samples, on a slightly larger sample size than I am currently using (i.e., with an input buffer of size 2048, every 100 samples add the newest 100 samples to the buffer, drop the oldest 100, and perform an FFT). It seems like this would create a rolling-average type of effect (which I can live with), but I'm not too sure.
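Here is a rough sketch of what I mean, using a hand-rolled ring buffer and Minim's `FFT.forward(float[])`; the 2048-sample window and 100-sample hop are the numbers from above, and everything else is just my guess at an implementation:

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

final int N   = 2048;  // analysis window size
final int HOP = 100;   // new samples between transforms

Minim minim;
AudioInput in;
FFT fft;
float[] ring      = new float[N];  // ring buffer holding the last N samples
float[] frameCopy = new float[N];  // unrolled copy handed to the FFT
int writePos = 0;
int sinceHop = 0;

void setup() {
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
  fft = new FFT(N, in.sampleRate());
  fft.window(FFT.HAMMING);  // a window function tames artifacts in overlapped frames
  in.addListener(new AudioListener() {
    public void samples(float[] samps) { feed(samps); }
    public void samples(float[] left, float[] right) { feed(left); }
  });
}

synchronized void feed(float[] samps) {
  for (float s : samps) {
    ring[writePos] = s;
    writePos = (writePos + 1) % N;
    if (++sinceHop == HOP) {
      sinceHop = 0;
      // unroll the ring buffer so frameCopy[0] is the oldest sample
      for (int i = 0; i < N; i++) {
        frameCopy[i] = ring[(writePos + i) % N];
      }
      fft.forward(frameCopy);  // overlapped transform: 2048-sample window, 100-sample hop
    }
  }
}

void draw() {
  // read fft.getBand(i) or the log averages here for the visualization
}
```

(Note that Minim calls the listener on the audio thread, so real code would need to guard the FFT results more carefully than this sketch does; also, a 100-sample hop at 44.1 kHz means ~441 transforms per second, far more than a 60 fps visualization can display.)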
What would be the pros and cons of this approach? Are there any other ways I could increase my apparent resolution while still being able to do real-time visualization and analysis?