Here's yet another answer offering commentary on how Muis's, Abdullah Al-Ageel's, and Flip's answers are all mathematically the same thing, just written differently.
Sure, we have José Manuel Ramos's analysis explaining how rounding errors affect each slightly differently, but that's implementation-dependent and would change based on how each answer was applied in code.
There is, however, a rather big difference. It's in Muis's `N`, Flip's `k`, and Abdullah Al-Ageel's `n`. Abdullah Al-Ageel doesn't quite explain what `n` should be, but `N` and `k` differ in that `N` is "the number of samples where you want to average over" while `k` is the count of values sampled. (Although I have doubts as to whether calling `N` the number of samples is accurate.)
And here we come to the answer below. It's essentially the same old exponential weighted moving average as the others, so if you were looking for an alternative, stop right here.
Exponential weighted moving average
```
Initially:
    average = 0
    counter = 0

For each value:
    counter += 1
    average = average + (value - average) / min(counter, FACTOR)
```
The difference is the `min(counter, FACTOR)` part. This is the same as saying `min(Flip's k, Muis's N)`.

`FACTOR` is a constant that affects how quickly the average "catches up" to the latest trend. The smaller the number, the faster. (At `1` it's no longer an average and just becomes the latest value.)
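
To make this concrete, here is a minimal runnable sketch in Python (the class name `EWMA`, the step-change demo, and the specific `factor` values are illustrative assumptions, not something from any of the cited answers):

```python
class EWMA:
    """Exponential weighted moving average, following the pseudocode above."""

    def __init__(self, factor):
        self.factor = factor      # FACTOR: how quickly the average follows the data
        self.average = 0.0
        self.counter = 0

    def add(self, value):
        self.counter += 1
        # Plain cumulative average while counter <= factor, fixed-factor EWMA afterwards.
        self.average += (value - self.average) / min(self.counter, self.factor)
        return self.average


# Feed a step change to see how FACTOR controls the catch-up speed.
slow, fast = EWMA(factor=50), EWMA(factor=5)
for value in [0.0] * 20 + [10.0] * 20:
    s, f = slow.add(value), fast.add(value)
print(f"after the step: slow={s:.2f}, fast={f:.2f}")  # fast ends up much closer to 10
```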
This answer requires the running counter `counter`. If that's problematic, the `min(counter, FACTOR)` can be replaced with just `FACTOR`, turning it into Muis's answer. The problem with doing this is that the moving average is affected by whatever `average` is initialized to. If it was initialized to `0`, that zero can take a long time to work its way out of the average.
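
As a rough illustration (again a Python sketch with made-up numbers, not taken from any of the answers), compare the two variants when `average` starts at `0` and the data sits around `100`:

```python
FACTOR = 20
values = [100.0] * 10       # data nowhere near the initial average of 0

with_counter = 0.0          # this answer: divisor is min(counter, FACTOR)
fixed_factor = 0.0          # Muis-style: divisor is always FACTOR
counter = 0
for value in values:
    counter += 1
    with_counter += (value - with_counter) / min(counter, FACTOR)
    fixed_factor += (value - fixed_factor) / FACTOR

print(f"min(counter, FACTOR): {with_counter:.1f}")  # 100.0 -- the initial 0 is gone after the first value
print(f"FACTOR only:          {fixed_factor:.1f}")  # ~40.1 -- still dragged down by the initial 0
```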
How it ends up looking