AWS ALB Target Groups expose a metric called "RequestCountPerTarget" that looks very useful at first sight. However, it only seems to be displayed accurately in the full detailed view of the metric; it is completely distorted when it appears alongside other metrics on a CloudWatch dashboard.
When I configure the metric, I get this, which is the view that is most useful to me, i.e. the number of requests per minute received by a single server:
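For reference, here is roughly the equivalent API query for that view, sketched with boto3 (the target group dimension value and the time range below are placeholders, not my real setup):

```python
# Rough boto3 equivalent of the metric view above (a sketch; the target
# group dimension value is a placeholder for the real "targetgroup/name/id").
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCountPerTarget",
    Dimensions=[{"Name": "TargetGroup", "Value": "targetgroup/my-tg/0123456789abcdef"}],
    StartTime=end - timedelta(hours=3),
    EndTime=end,
    Period=60,            # 1-minute buckets
    Statistics=["Sum"],   # requests per target per minute
)
# Each datapoint's "Sum" is the request count one target received in 1 minute.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```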
Using this graph, I can quickly determine whether my application is overloaded: from the average response time of my servers, I can deduce the maximum RPM (Requests Per Minute) a single server can handle (which happens to be around 200 RPM/server in my case).
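The 200 RPM figure is measured, but as a back-of-envelope sketch of the kind of deduction involved (the response time and concurrency below are hypothetical numbers, not my real measurements):

```python
# Back-of-envelope sketch with hypothetical numbers: a server handling one
# request at a time with a ~300 ms average response time tops out around
# 60 / 0.3 = 200 requests per minute.
avg_response_time_s = 0.3  # assumed average response time per request
concurrency = 1            # assumed requests served in parallel
max_rpm = concurrency * 60 / avg_response_time_s
print(f"max sustainable load: {max_rpm:.0f} RPM per server")  # 200 RPM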
However, on the CloudWatch dashboard, the same metric appears like this:
If my understanding is correct, the CloudWatch dashboard aggregates datapoints into longer periods to avoid requesting too many of them. But in this case, the aggregation does not average the "RequestCountPerTarget during 1 min" values over the dashboard period (1 week in the screenshots); it sums them over that period, which completely defeats the purpose of the metric. I don't care about the total number of requests received over 1 week (if those requests are distributed evenly across the time frame, that total means nothing to my servers), but I do care about the maximum number of requests received in any single minute over that week (since that is what reflects the actual request spikes).
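To make the difference concrete, here is a toy illustration with synthetic numbers (not real traffic data):

```python
# Toy illustration with synthetic numbers: per-minute request counts
# ("RequestCountPerTarget" with the Sum statistic at a 1-minute period)
# for one target over a short window.
per_minute = [180, 190, 210, 205, 195, 200]

total = sum(per_minute)  # what a coarse dashboard period effectively shows
spike = max(per_minute)  # what I actually need to see

print(f"total over the window: {total} requests")  # 1180 -- says nothing about load
print(f"worst single minute:   {spike} RPM")       # 210  -- the real capacity signal
```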
Is there a way around this?