
I'm running Prometheus and Telegraf on the same host.

I'm using a few input plugins, sketched below:

  • inputs.cpu
  • inputs.ntpq
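
For reference, a minimal sketch of how those two inputs could be declared in telegraf.conf — the settings shown are assumptions (mostly defaults), not necessarily my exact values:

    [[inputs.cpu]]
      ## Report per-CPU stats as well as the total.
      percpu = true
      totalcpu = true
      ## Assumption: also collect raw CPU time counters (cpu_time_*).
      ## This defaults to false; only needed if you want cpu_time_user etc.
      collect_cpu_time = true

    [[inputs.ntpq]]
      ## Resolve peer hostnames via DNS when running ntpq.
      dns_lookup = true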

I've configured the prometheus_client output plugin to expose data for Prometheus to scrape.

Here's my config:

    [[outputs.prometheus_client]]
      ## Address to listen on.
      listen = ":9126"

      ## Use HTTP Basic Authentication.
      # basic_username = "Foo"
      # basic_password = "Bar"

      ## If set, the IP Ranges which are allowed to access metrics.
      ##   ex: ip_range = ["192.168.0.0/24", "192.168.1.0/30"]
      # ip_range = []

      ## Path to publish the metrics on.
      path = "/metrics"

      ## Expiration interval for each metric. 0 == no expiration
      #expiration_interval = "0s"

      ## Collectors to enable, valid entries are "gocollector" and "process".
      ## If unset, both are enabled.
      # collectors_exclude = ["gocollector", "process"]

      ## Send string metrics as Prometheus labels.
      ## Unless set to false all string metrics will be sent as labels.
      # string_as_label = true

      ## If set, enable TLS with the given certificate.
      # tls_cert = "/etc/ssl/telegraf.crt"
      # tls_key = "/etc/ssl/telegraf.key"

      ## Export metric collection time.
      #export_timestamp = true
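
To double-check that the exporter side is working, the endpoint defined above can be queried directly (assuming the listen address and path from this config):

    # Should return Prometheus-formatted samples,
    # e.g. cpu_usage_user{cpu="cpu-total",host="..."} (exact names depend on plugin settings).
    curl -s http://localhost:9126/metrics | head -n 20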

Here's my Prometheus config:

    # my global config
    global:
      scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.

        static_configs:
          - targets: ['localhost:9090']

    #  - job_name: 'node_exporter'
    #    scrape_interval: 5s
    #    static_configs:
    #      - targets: ['localhost:9100']

      - job_name: 'telegraf'
        scrape_interval: 5s
        static_configs:
          - targets: ['localhost:9126']
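
One hedge against YAML indentation mistakes is validating the file with promtool (shipped with Prometheus); the path here is an assumption:

    # Exits non-zero and prints the offending line if the config is invalid.
    promtool check config /etc/prometheus/prometheus.yml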

If I go to http://localhost:9090/metrics, I don't see any of the metrics coming from Telegraf.

I've captured some logs from Telegraf as well:

    /opt telegraf --config /etc/telegraf/telegraf.conf --input-filter filestat --test
    ➜ /opt tail -F /var/log/telegraf/telegraf.log
    2019-02-11T17:34:20Z D! [outputs.prometheus_client] wrote batch of 28 metrics in 1.234869ms
    2019-02-11T17:34:20Z D! [outputs.prometheus_client] buffer fullness: 0 / 10000 metrics.
    2019-02-11T17:34:30Z D! [outputs.file] wrote batch of 28 metrics in 384.672µs
    2019-02-11T17:34:30Z D! [outputs.file] buffer fullness: 0 / 10000 metrics.
    2019-02-11T17:34:30Z D! [outputs.prometheus_client] wrote batch of 30 metrics in 1.250605ms
    2019-02-11T17:34:30Z D! [outputs.prometheus_client] buffer fullness: 9 / 10000 metrics.

I don't see an issue in the logs.

Comment: did you fix it? Facing the same issue. – chetan dev

1 Answer


The /metrics endpoint of your Prometheus server exposes metrics about the Prometheus server itself, not the metrics it scraped from targets such as the telegraf exporter.

Go to http://localhost:9090/targets and you should see the list of targets your Prometheus server is scraping. If everything is configured correctly, the telegraf exporter should be listed there and shown as UP.
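
The same target list is also available from the Prometheus HTTP API, which is handy on a headless box; jq is assumed here purely for readability:

    # Shows each scraped job, its health, and the last scrape error (if any).
    curl -s http://localhost:9090/api/v1/targets \
      | jq '.data.activeTargets[] | {job: .labels.job, health, lastError}'

The telegraf job should report health "up"; if it is "down", lastError usually points at the cause (connection refused, wrong port, etc.).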

To query Prometheus for metrics generated by the telegraf exporter, point your browser at http://localhost:9090/graph and enter e.g. cpu_time_user in the query field. If the CPU plugin is enabled, it should expose that metric and more.
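
The equivalent check from the command line uses the query API; cpu_usage_user is included as a fallback because cpu_time_* series only exist when collect_cpu_time is enabled in the cpu plugin:

    # Count how many series match; 0 means the metric isn't there (yet).
    curl -s 'http://localhost:9090/api/v1/query?query=cpu_time_user' | jq '.data.result | length'
    curl -s 'http://localhost:9090/api/v1/query?query=cpu_usage_user' | jq '.data.result | length'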