Google publishes a tutorial here on using custom metrics to drive the HorizontalPodAutoscaler; it contains instructions for:

  1. Using a Kubernetes manifest to deploy the custom metrics adapter into a custom-metrics namespace.
  2. Deploying a dummy application to generate metrics.
  3. Configuring the HPA to use custom metrics.

We are deploying into a default cluster without any special VPC rules, and we have roughly followed the tutorial's guidance, with a few exceptions:

  • We're using Helm v2, and rather than grant the cluster-admin role to Tiller, we have created all of the cluster roles and role bindings necessary for the custom-metrics-adapter-deploying Kubernetes manifest to work. We see no issues there; at least the custom metrics adapter spins up and runs.
  • We have defined some custom metrics that are based upon data extracted from a jsonPayload in Stackdriver logs.
  • We have deployed a minute-by-minute CronJob that reads the above metrics and publishes a derived metric, which is the value we want to use to drive the autoscaler. The CronJob is working, and we can see the derived metric, on a per-Pod basis, in the metrics explorer; a sketch of the publishing call follows this list.
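
To make the publishing step concrete, here is a minimal sketch of what our CronJob does, using the v2-style google-cloud-monitoring Python client. The custom.googleapis.com/ metric type, the gke_container resource type, and the placeholder values are assumptions based on the labels shown further down, and an older version of the client library has a slightly different surface:

import time
from google.cloud import monitoring_v3

# Placeholder values; in the real CronJob these come from the cluster and from
# the calculation over the two log-based input metrics.
project_id = "my-project-1234"
pod_id = "xxx-0"                      # see the discussion of resource labels below
instance_id = "1234567890123456789"
derived_value = 42.0

client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/xxx_scaling_metric"
series.resource.type = "gke_container"
series.resource.labels["project_id"] = project_id
series.resource.labels["cluster_name"] = "test-gke"
series.resource.labels["zone"] = "us-central1-f"
series.resource.labels["namespace_id"] = "default"
series.resource.labels["pod_id"] = pod_id
series.resource.labels["instance_id"] = instance_id
series.resource.labels["container_name"] = ""

now = int(time.time())
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": now}})
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": derived_value}})
series.points = [point]

client.create_time_series(name=f"projects/{project_id}", time_series=[series])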

We're configuring the HPA to scale based on the average of the derived metric across all of the pods belonging to a StatefulSet (the HPA's metrics entry has type Pods). However, the HPA is unable to read our derived metric. We see this error message:

failed to get object metric value: unable to get metric xxx_scaling_metric: no metrics returned from custom metrics API

Update

We were seeing DNS errors, but these were apparently false alarms, perhaps leftover log entries from when the cluster was spinning up.

We restarted the Stackdriver metrics adapter with the command-line option --v=5 to get more verbose debugging output. We see log entries like these:

I0123 20:23:08.069406       1 wrap.go:47] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/defaults/pods/%2A/xxx_scaling_metric: (56.16652ms) 200 [kubectl/v1.13.11 (darwin/amd64) kubernetes/2e298c7 10.44.1.1:36286]
I0123 20:23:12.997569       1 translator.go:570] Metric 'xxx_scaling_metric' not found for pod 'xxx-0'
I0123 20:23:12.997775       1 wrap.go:47] GET /apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/%2A/xxx_scaling_metric?labelSelector=app%3Dxxx: (98.101205ms) 200 [kube-controller-manager/v1.13.11 (linux/amd64) kubernetes/56d8986/system:serviceaccount:kube-system:horizontal-pod-autoscaler 10.44.1.1:36286]

So it looks to us as if the HPA is making the right query for pods-based custom metrics. If we ask the custom metrics API what data it has, and filter with jq to our metric of interest, we see:

{"kind":"MetricValueList",
 "apiVersion":"custom.metrics.k8s.io/v1beta1",
 "metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/xxx_scaling_metric"},
 "items":[]}

That the items array is empty is troubling. Again, we can see data in the metrics explorer, so we're left to wonder whether our CronJob app that publishes the scaling metric is supplying the right fields for the data to be saved in Stackdriver or exposed through the metrics adapter.
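
The query above was kubectl's raw API access (visible in the adapter log) piped through jq. A roughly equivalent probe from Python, using the kubernetes client to hit the aggregated custom metrics API directly, might look like the sketch below; the call_api details and auth settings are assumptions, so adjust them for your cluster:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.ApiClient()

# Raw GET against the aggregated custom metrics API; the path mirrors the one
# in the adapter's log above.
data = api.call_api(
    "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/xxx_scaling_metric",
    "GET",
    auth_settings=["BearerToken"],
    response_type="object",
    _return_http_data_only=True,
)
print(data["items"])  # empty in our case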

For what it's worth, the resource.labels map for the time series we're publishing in our CronJob looks like:

{'cluster_name': 'test-gke', 
 'zone': 'us-central1-f', 
 'project_id': 'my-project-1234', 
 'container_name': '', 
 'instance_id': '1234567890123456789',
 'pod_id': 'xxx-0', 
 'namespace_id': 'default'}
Looks like the DNS resolver is not available in the local network, or there is a forwarding loop. You should look at the networks your cluster uses, the DNS resolver's location and forwarding, and your firewall rules. - mebius99
Thanks for that pointer. It turns out that the DNS issues were transient, perhaps leftover log entries from when the cluster was spinning up. I've redeployed the adapter, the HPA, and my StatefulSet numerous times since, and am not seeing any DNS issues in the Stackdriver adapter's logs. But I'm still unable to retrieve custom metrics. Still investigating. - Eric Schoen

1 Answer

We finally solved this. Our CronJob that publishes the derived metric we want to use gets its raw data from two other metrics extracted from Stackdriver logs, and calculates a new value that it publishes back to Stackdriver.

We were using the resource labels we saw on those metrics when publishing our derived metric. The POD_ID resource label value in the "input" Stackdriver metrics we read is the name of the pod. However, the Stackdriver custom metrics adapter at gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0 enumerates the pods in a namespace and asks Stackdriver for data associated with the pods' UIDs, not their names. (We had to read the adapter's source code to figure this out.)

So our CronJob now builds a map of pod names to pod UIDs (which requires RBAC permission to list and get pods), and publishes the derived metric we use for the HPA with POD_ID set to each pod's UID instead of its name.
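
A minimal sketch of that mapping step, assuming the kubernetes Python client, in-cluster credentials, and an app=xxx label selector (the selector matches the one in the HPA's query earlier, but treat it as an assumption about how the StatefulSet's pods are labelled):

from kubernetes import client, config

# Runs inside the CronJob's pod; its service account needs RBAC list/get on pods.
config.load_incluster_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod("default", label_selector="app=xxx")
name_to_uid = {p.metadata.name: p.metadata.uid for p in pods.items}

# When publishing the derived metric, POD_ID must carry the UID, not the name:
resource_labels = {
    "cluster_name": "test-gke",
    "zone": "us-central1-f",
    "project_id": "my-project-1234",
    "container_name": "",
    "instance_id": "1234567890123456789",
    "pod_id": name_to_uid["xxx-0"],   # the pod's UID, not "xxx-0"
    "namespace_id": "default",
}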

The reason that published examples of custom metrics for HPA (like this) work is that they use the Downward API to obtain a pod's UID and provide that value as "POD_ID". In retrospect, this would have been obvious had we looked at how the "dummy" metrics exporters got their pod ID values, but there are certainly examples (as with Stackdriver log-based metrics) where POD_ID ends up being a name rather than a UID.
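
For reference, the Downward API wiring those examples rely on, and the exporter-side read of it, looks roughly like this; the POD_ID environment variable name follows Google's example, so treat the exact names as assumptions:

import os

# In the exporter's pod spec, the Downward API injects the pod UID, e.g.:
#   env:
#   - name: POD_ID
#     valueFrom:
#       fieldRef:
#         fieldPath: metadata.uid
pod_id = os.environ["POD_ID"]  # this UID is the value the adapter matches on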