I'm a very experienced software engineer, and I've taken some EE classes in college. I'm programming on iPhone and Android, and I want to implement digital filters (e.g. low-pass, band-pass, band-stop, etc.) for real-time microphone and accelerometer data.
I know that there are multiple, equivalent ways to implement a digital filter on a window of time-domain samples. Two approaches I'm looking at are:
1. Implementing a difference equation directly in C/Java code (e.g. y[i] = y[i-1] + 2 * x[i]). I believe this can run in O(N) time, where N is the length of the sample window, e.g. N = 512. (A minimal sketch of what I mean is below.)
2. Implementing the convolution between the sample window and the time-domain representation of an FIR filter, typically some form of windowed sinc function. I asked this question a while ago. This can be done in O(N lg N) if you use fast convolution involving an FFT and IFFT. (Also sketched after this list.)
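For approach 1, this is roughly what I have in mind: a second-order (biquad) difference equation in direct form I, run sample-by-sample over the window. The names (Biquad, biquad_process) and the coefficients b0..b2, a1, a2 are just placeholders of mine; getting real coefficients out of a cutoff frequency is exactly what my second question below is about.

```c
/* Approach 1: direct-form-I biquad difference equation, O(N) per window.
 * Coefficients are placeholders; they'd come from whatever design
 * procedure answers question 2 below. */
typedef struct {
    double b0, b1, b2, a1, a2;   /* filter coefficients (a0 assumed 1) */
    double x1, x2, y1, y2;       /* previous inputs/outputs (filter state) */
} Biquad;

static void biquad_process(Biquad *f, const float *x, float *y, int n)
{
    for (int i = 0; i < n; i++) {
        double yi = f->b0 * x[i] + f->b1 * f->x1 + f->b2 * f->x2
                  - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1;  f->x1 = x[i];
        f->y2 = f->y1;  f->y1 = yi;
        y[i] = (float)yi;
    }
}
```

Because the state lives in the struct, the same filter can be fed successive 512-sample windows from the microphone or accelerometer callback without discontinuities between windows.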
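And approach 2, sketched here as the plain O(N*M) convolution against an M-tap windowed-sinc low-pass kernel. Fast convolution would replace the inner loop with FFT, pointwise multiply, and IFFT on zero-padded blocks, which is where the O(N lg N) comes from. The function names, the Hamming window, and fc_norm (cutoff as a fraction of the sample rate) are my own illustration, not taken from any of the references below.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Build an M-tap windowed-sinc low-pass FIR kernel.
 * fc_norm = cutoff frequency / sample rate (0 < fc_norm < 0.5). */
static void make_lowpass_fir(double *h, int m, double fc_norm)
{
    double sum = 0.0;
    for (int k = 0; k < m; k++) {
        double t = k - (m - 1) / 2.0;
        double sinc = (t == 0.0) ? 2.0 * fc_norm
                                 : sin(2.0 * M_PI * fc_norm * t) / (M_PI * t);
        double hamming = 0.54 - 0.46 * cos(2.0 * M_PI * k / (m - 1));
        h[k] = sinc * hamming;
        sum += h[k];
    }
    for (int k = 0; k < m; k++)      /* normalize for unity gain at DC */
        h[k] /= sum;
}

/* Direct convolution of one window of n samples with the m-tap kernel.
 * O(n*m); fast convolution via FFT/IFFT does the same job in O(n lg n). */
static void fir_process(const double *h, int m,
                        const float *x, float *y, int n)
{
    for (int i = 0; i < n; i++) {
        double acc = 0.0;
        for (int k = 0; k < m; k++)
            if (i - k >= 0)          /* ignores samples from the previous window */
                acc += h[k] * x[i - k];
        y[i] = (float)acc;
    }
}
```

In real streaming code the first m-1 samples of each window would need history from the previous window (overlap-save or similar), but for this question it's the O(N*M) vs O(N lg N) comparison that matters.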
Now, from reading various online resources, I've found that the preferred, conventional-wisdom approach for C/Java programming is (1) above, implementing a difference equation. Is this a correct conclusion?
Here is what I've found:
Apple's accelerometer filter code implements a difference equation.
This Stack Overflow question, "How to implement a LowPass Filter?", suggests using a difference equation.
The Wikipedia article on low-pass filter provides an algorithm using a difference equation.
So in summary, my questions really are:
Is implementing a difference equation (rather than using fast convolution) the way to go for writing filters in C/Java?
None of the references above explain how to design a difference equation given specific cut-off or band-stop frequencies. I know I studied this a while ago. Are there any filter references for programmers that cover this kind of information?
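To make that second question concrete: the only cutoff-to-coefficient mapping I've been able to reconstruct is the first-order RC approximation below, which I believe is essentially what the Wikipedia article uses. The function names and the 5 Hz / 100 Hz example values are just mine; what I'm missing is a reference that does the same job for higher-order low-pass, band-pass, and band-stop designs.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* First-order low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1]),
 * with alpha derived from the cutoff frequency via the analog RC constant.
 * This is the one design formula I can piece together; I'm after references
 * that cover higher-order and band-stop designs the same way. */
static double lowpass_alpha(double cutoff_hz, double sample_rate_hz)
{
    double rc = 1.0 / (2.0 * M_PI * cutoff_hz);  /* analog RC time constant */
    double dt = 1.0 / sample_rate_hz;            /* sample period */
    return dt / (rc + dt);
}

static void lowpass_process(double alpha, const float *x, float *y,
                            int n, float *state)
{
    float prev = *state;
    for (int i = 0; i < n; i++) {
        prev = prev + (float)alpha * (x[i] - prev);
        y[i] = prev;
    }
    *state = prev;   /* carry the last output into the next window */
}
```

For example, lowpass_alpha(5.0, 100.0) would give the smoothing factor for a 5 Hz cutoff at a 100 Hz accelerometer update rate (example numbers only).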