
I have:

  1. deployments of services A and B in k8s
  2. Prometheus stack

I want to scale service A based on metric m1 of service B. The solutions I have found so far are more or less unsuitable:

  1. I can define HPA for service A with the following part of spec:
  - type: Object
    object:
      metric:
        name: m1
      describedObject:
        apiVersion: v1
        kind: Pod
        name: certain-pod-of-service-B
      target:
        type: Value
        value: 10k

Technically, it will work, but it hard-codes the name of one particular pod of service B, which doesn't fit the dynamic nature of k8s (pods come and go). I also can't use the Pods metric type (metrics: - type: Pods) in the HPA, because it would request the m1 metric from the pods of service A (which obviously don't expose it).

  2. Define a custom metric in prometheus-adapter that queries the m1 metric from the pods of service B (a rough sketch of such a rule is shown after this list). This is more suitable, but it looks like a workaround, because I already have the metric m1.

  3. The same applies to external metrics.
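For context, defining that custom metric in prometheus-adapter (option 2) would mean adding something like the rule below to its configuration. This is only a rough sketch: it assumes the Prometheus series is also called m1 and carries namespace and pod labels, so the label names may need to be adjusted to the actual scrape config:

    rules:
    - seriesQuery: 'm1{namespace!="",pod!=""}'
      resources:
        # map Prometheus labels to Kubernetes resources
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^m1$"
        as: "m1"
      # expose the raw value of m1, summed per object
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'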

I feel like I'm missing something, because this doesn't seem like an unrealistic use case :) So please advise me: how can I scale one service based on a metric of another service in k8s?

Why don't you want to use external metrics? It seems to be a correct approach, see: Kubernetes HPA using metrics from another deployment. – matt_j
@matt_j, yeah, you're right, an external metric works in my case, as do custom metrics (in fact better than custom). But both of my services are in k8s, and external metrics are by definition intended for objects outside of k8s, so to me it looks more like a workaround. And to use them I first need to define them, which is an additional procedure, and there appears to be no way to update rules dynamically in prometheus-adapter: during a prometheus-adapter update, a mistake in the rules can cause problems for other applications in k8s. – pingrulkin

1 Answer


I decided to provide a Community Wiki answer that may help other people facing a similar issue.

The Horizontal Pod Autoscaler is a Kubernetes feature that allows you to scale applications based on one or more monitored metrics.
As we can find in the Horizontal Pod Autoscaler documentation:

The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).

There are three groups of metrics that we can use with the Horizontal Pod Autoscaler:

  • resource metrics: predefined resource usage metrics (CPU and memory) of pods and nodes.
  • custom metrics: custom metrics associated with a Kubernetes object.
  • external metrics: custom metrics not associated with a Kubernetes object.

Any HPA target can be scaled based on the resource usage of the pods (or containers) in the scaling target. CPU utilization is a resource metric, and you can specify other resource metrics besides CPU (e.g. memory). This seems to be the easiest and most basic method of scaling, but we can use more specific metrics by using custom metrics or external metrics.
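For example, a basic HPA that relies only on resource metrics could look like the snippet below. This is a minimal sketch: the Deployment name service-a, the replica bounds and the target values are placeholders, and it assumes the autoscaling/v2 API is available in the cluster:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: service-a
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: service-a
      minReplicas: 2
      maxReplicas: 10
      metrics:
      # scale on average CPU utilization across the pods
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
      # scale on average memory usage across the pods
      - type: Resource
        resource:
          name: memory
          target:
            type: AverageValue
            averageValue: 500Mi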

There is one major difference between custom metrics and external metrics (see: Custom and external metrics for autoscaling workloads):

Custom metrics and external metrics differ from each other:

A custom metric is reported from your application running in Kubernetes. An external metric is reported from an application or service not running on your cluster, but whose performance impacts your Kubernetes application.
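In the HPA spec, an external metric gets its own metric type. A rough sketch of such an entry (the metric name m1 and the selector labels are placeholders, and the metric must first be exposed through the external metrics API, e.g. by prometheus-adapter external rules):

      - type: External
        external:
          metric:
            name: m1
            # label selector narrowing down the external time series
            selector:
              matchLabels:
                service: service-b
          target:
            type: Value
            value: 10k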

All in all, in my opinion it is okay to use custom metrics in the case above; I did not find any other suitable way to accomplish this task.
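To tie this back to the question: once m1 is exposed through the custom metrics API (for example by prometheus-adapter, as sketched in the question), the HPA of service A can reference it on a stable object such as service B's Service instead of one particular pod. A sketch, assuming the adapter rule maps the metric to the Service resource and that service-a / service-b are the actual Deployment and Service names:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: service-a
    spec:
      # scale service A ...
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: service-a
      minReplicas: 1
      maxReplicas: 10
      metrics:
      # ... based on metric m1 measured on service B's Service object
      - type: Object
        object:
          metric:
            name: m1
          describedObject:
            apiVersion: v1
            kind: Service
            name: service-b
          target:
            type: Value
            value: 10k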