There are multiple methods for finding the average of a set of numbers.
First, the sum/count quotient: add all the values, then divide the sum by the number of values.
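For illustration, here is a minimal sketch of that first method in C (the function name and types are my own, not from the original answer):

    #include <stddef.h>

    /* Sum-then-divide average. Note that the running sum can grow
       much larger than any individual value, which is where the
       precision concern below comes in. */
    float average_sum(const float *values, size_t count)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < count; i++)
            sum += values[i];
        return sum / (float)count;
    }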
Second, the running (moving) average. The update rule I found in another Stack Overflow answer is:
new average = old average * (n - 1)/n + new value/n

where n is the number of values including the new one.
This works as long as values are folded into the average one at a time.
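As code, that update rule might look like the sketch below (again, names are mine). The comment notes an algebraically equivalent form, avg + (new - avg)/n, which is often preferred because it avoids multiplying the average by a near-1 fraction at every step:

    /* Incremental update: n is the number of samples *including* new_value.
       Equivalent form: old_average + (new_value - old_average) / n,
       which tends to round more gently. */
    float average_update(float old_average, float new_value, unsigned n)
    {
        return old_average * (float)(n - 1) / (float)n
             + new_value / (float)n;
    }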
My concern is that the second method is more computationally expensive for my processor to execute, but I also fear that the first method will lose resolution on data sets that produce large sums. On a 32-bit system, for example, the resolution of a stored float decreases as the magnitude of the number grows, because the significand has a fixed number of bits.
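That magnitude effect is easy to demonstrate: an IEEE 754 single-precision float has a 24-bit significand, so once a sum reaches 2^24 (16777216), adding 1.0f no longer changes it at all:

    #include <stdio.h>

    int main(void)
    {
        float sum = 16777216.0f;  /* 2^24: last point where floats resolve every integer */
        sum += 1.0f;              /* this addition is lost entirely to rounding */
        printf("%.1f\n", sum);    /* prints 16777216.0, not 16777217.0 */
        return 0;
    }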
Does a moving average preserve resolution?