25 votes

I have a Prometheus configuration with many jobs where I am scraping metrics over HTTP. But I have one job where I need to scrape the metrics over HTTPS.

When I access:

https://ip-address:port/metrics

I can see the metrics. The job that I have added in the prometheus.yml configuration is:

- job_name: 'test-jvm-metrics'
  scheme: https
  static_configs:
    - targets: ['ip:port']

When I restart Prometheus, I see an error on my target that says:

context deadline exceeded

I have read that the scrape_timeout might be the problem, but I have set it to 50 seconds and I still have the same problem.

What can cause this problem, and how can I fix it? Thank you!


10 Answers

6 votes

I had the same problem in the past. In my case the problem was with the certificates, and I fixed it by adding:

tls_config:
  insecure_skip_verify: true

You can try it; maybe it will work.
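
For reference, here is a minimal sketch of where the tls_config block sits in a job like the one from the question (ip:port is the same placeholder as above; note that insecure_skip_verify: true disables certificate validation, so only use it if you trust the target):

scrape_configs:
  - job_name: 'test-jvm-metrics'
    scheme: https
    tls_config:
      insecure_skip_verify: true   # skip TLS certificate verification
    static_configs:
      - targets: ['ip:port']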

15 votes

Probably the default scrape_timeout value is too short for you:

[ scrape_timeout: <duration> | default = 10s ]

Set a bigger value for scrape_timeout.

scrape_configs:
  - job_name: 'prometheus'

    scrape_interval: 5m
    scrape_timeout: 1m

Take a look here: https://github.com/prometheus/prometheus/issues/1438
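
If the longer timeout should apply to all jobs rather than a single one, it can also be raised in the global section instead of per job (a sketch; the values here are just examples):

global:
  scrape_interval: 5m
  scrape_timeout: 1m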

1 vote

I had a similar problem, so I tried to extend my scrape_timeout, but it didn't do anything. Using promtool, however, explained the problem.

My problematic job looked like this:

- job_name: 'slow_fella'
  scrape_interval: 10s
  scrape_timeout: 90s
  static_configs:
  - targets: ['192.168.1.152:9100']
    labels:
      alias: sloooow    

Check your config like this:

/etc/prometheus $ promtool check config prometheus.yml

The result explains the problem and shows how to solve it:

Checking prometheus.yml
  FAILED: parsing YAML file prometheus.yml: scrape timeout greater than scrape interval for scrape config with job name "slow_fella"

Just ensure that your scrape_interval is at least as long as your scrape_timeout; the timeout cannot exceed the interval.
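
For example, a corrected version of the job above passes the check (the 2m interval is just an assumed value that satisfies the constraint):

- job_name: 'slow_fella'
  scrape_interval: 2m    # must be >= scrape_timeout
  scrape_timeout: 90s
  static_configs:
  - targets: ['192.168.1.152:9100']
    labels:
      alias: sloooow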

0 votes

In my case it was an issue with IPv6. I had blocked IPv6 with ip6tables, but that also blocked Prometheus traffic. Correcting the IPv6 settings solved the issue for me.

0 votes

In my case I had accidentally put a different port in my Kubernetes Deployment manifest than the one defined in the Service associated with it and in the Prometheus target.
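
A minimal sketch of what has to line up (the my-app name, labels, image, and port 8080 are all hypothetical): the containerPort in the Deployment, the targetPort in the Service, and the port in the Prometheus target must all agree.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical name
spec:
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: my-app
          image: my-app:latest       # hypothetical image
          ports:
            - containerPort: 8080    # port the app actually listens on
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector: {app: my-app}
  ports:
    - port: 8080
      targetPort: 8080               # must match the containerPort above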

0 votes

Disable SELinux, then reboot the server and test again.

0 votes

This can happen when the Prometheus server can't reach the scraping endpoints, for example because of firewall deny rules. Just try hitting the URL in a browser at <url>:9100 (here 9100 is the port the node_exporter service is running on) and check whether you can still access it.

0 votes

Increasing the timeout to 1m helped me fix a similar issue.

0 votes

We started facing a similar issue when we re-configured the istio-system namespace and its Istio components. We also had Prometheus installed via prometheus-operator in the monitoring namespace, where istio-injection was enabled.

Restarting the Prometheus components in the monitoring (istio-injection enabled) namespace resolved the issue.

0 votes

I was facing this issue because the maximum number of connections had been reached. I increased the max_connections parameter in the database and released some connections. Then Prometheus was able to scrape metrics again.