I'm trying to gather some data on the performance of graphite and the carbon daemon. Luckily for me, the carbon daemon reports stats on its own workings, such as the number of metrics received, back to graphite every 60 seconds.
I'm using statsd to aggregate stats and flush them to the carbon daemon every second, but I noticed some weird behavior when setting up a graph of the number of metrics received over a certain time interval. I'm using grafana to connect to my Graphite instance and pull data out of it. Whenever statsd is not running and I inspect the number of metrics received, it stays at 0, which makes sense since nothing is sending carbon anything. However, as soon as I start statsd the number quickly rises to about 800-900 per minute, without me sending any stats to it yet, as can be seen in this image:
I'm at a loss as to where these metrics are coming from and why they arrive at a rate of about 15 per second. On top of that, CPU load increases by about 10% whenever I start statsd. What I did notice is that the number of metrics received decreases when I increase the flush interval of statsd.
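To rule out grafana misrendering things, the same series can be pulled straight from graphite's render API. A minimal sketch of that check in Python, assuming graphite-web is listening on 127.0.0.1:8080 (adjust the host and port to your install):

import json
import urllib.request

# carbon's self-reported receive counter; "*" matches the agent instance name
url = ("http://127.0.0.1:8080/render"
       "?target=carbon.agents.*.metricsReceived&from=-10min&format=json")

with urllib.request.urlopen(url) as resp:
    series = json.load(resp)

for s in series:
    # datapoints are [value, unix_timestamp] pairs; value is None for empty slots
    values = [v for v, _ in s["datapoints"] if v is not None]
    print(s["target"], values)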
This is my statsd configuration file:
{
  graphitePort: 2003,
  graphiteHost: "127.0.0.1",
  port: 8125,
  backends: ["./backends/graphite"],
  flushInterval: 1000, // Don't increase this past the lowest retention schema of graphite
  prefixStats: "test",
  graphite: {
    legacyNamespace: false
  }
}
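For reference, the few stats I have sent so far went in by hand over UDP; the statsd line protocol is just name:value|type. A minimal sketch matching the port in the config above (the metric name is made up for illustration):

import socket

# "c" marks a counter; statsd listens on UDP 8125 per the config above
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"deploys.test.myservice:1|c", ("127.0.0.1", 8125))
sock.close()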
And here's my storage schema for graphite:
[carbon]
pattern = ^carbon\.
retentions = 60s:90d,300s:365d

[stats]
pattern = ^stats\..*
retentions = 1:2160,10:2160,60:10080,600:262974

[system]
pattern = ^system\..*
retentions = 10:2160,60:10080,600:262974
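Note that the retentions mix graphite's legacy seconds:points notation with the newer duration:duration one; both are valid, but a whisper file keeps whatever schema it was created with. To check which retention a given file actually got, the whisper library that ships with graphite can read the header directly. A minimal sketch, assuming the default storage path and a carbon instance named localhost-a (both are guesses, adjust to your install):

import whisper

info = whisper.info("/opt/graphite/storage/whisper/carbon/agents/"
                    "localhost-a/metricsReceived.wsp")

for archive in info["archives"]:
    # secondsPerPoint and retention (total seconds) come straight from the file header
    print(archive["secondsPerPoint"], "s per point, kept for", archive["retention"], "s")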