14
votes

The Kubernetes Horizontal Pod Autoscaler walkthrough in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it.

Thanks in advance.


4 Answers

13
votes

The first version, autoscaling/v2beta1, doesn't let you scale your pods based on custom metrics; it only lets you scale your application based on its CPU and memory utilization.
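
As a rough sketch, a CPU utilization target in autoscaling/v2beta1 looks like this:

metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50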

The second version, autoscaling/v2beta2, allows users to autoscale based on custom metrics. It also allows autoscaling based on metrics coming from outside of Kubernetes: a new External metric source was added in this API.

metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

An External metric identifies a specific metric to autoscale on by metric name and a label selector. Those metrics can come from anywhere, such as a Stackdriver or Prometheus monitoring application, so you can, for example, scale your application based on a Prometheus query.
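
For example, a minimal sketch of an External metric in autoscaling/v2beta2 could look like this (the metric name and label are hypothetical and would come from whatever adapter exposes your external metrics):

metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready        # hypothetical metric exposed by an external metrics adapter
        selector:
          matchLabels:
            queue: worker_tasks           # hypothetical label selector
      target:
        type: AverageValue
        averageValue: "30"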

It is always better to use the v2beta2 API because it can scale on CPU and memory as well as on custom metrics, while the v2beta1 API can scale only on internal metrics.

The second Resource snippet above shows how to specify the target CPU utilization in the v2beta2 API.

4
votes

UPDATE: v2beta1 is deprecated in 1.19 and you should use v2beta2 going forward.

Also, v2beta2 added the new API field spec.behavior in Kubernetes 1.18, which allows you to define how fast or slow pods are scaled up and down.
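
As a sketch (the numbers are illustrative, not recommendations), the behavior field sits directly under spec and looks like this:

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 minutes before acting on a lower recommendation
    policies:
      - type: Pods
        value: 1                      # remove at most 1 pod per period
        periodSeconds: 60
  scaleUp:
    policies:
      - type: Percent
        value: 100                    # at most double the replica count per period
        periodSeconds: 60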


Originally, both versions were functionally identical but had different APIs.

autoscaling/v2beta2 was released in Kubernetes version 1.12 and the release notes state:

  • We released autoscaling/v2beta2, which cleans up and unifies the API

The "cleans up and unifies the API" is referring to that fact that v2beta2 consistently uses the MetricIdentifier and MetricTarget objects:

spec:
  metrics:
    external:
      metric: MetricIdentifier
      target: MetricTarget
    object:
      describedObject: CrossVersionObjectReference
      metric: MetricIdentifier
      target: MetricTarget
    pods:
      metric: MetricIdentifier
      target: MetricTarget
    resource:
      name: string
      target: MetricTarget
    type: string

In v2beta1, those fields have quite different specs, which (in my opinion) makes them more difficult to figure out how to use.
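
For example (a sketch with a hypothetical custom metric name), the same Pods metric is written quite differently in the two versions:

# autoscaling/v2beta1
metrics:
  - type: Pods
    pods:
      metricName: packets_per_second
      targetAverageValue: "1k"

# autoscaling/v2beta2
metrics:
  - type: Pods
    pods:
      metric:
        name: packets_per_second
      target:
        type: AverageValue
        averageValue: "1k"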


Kubernetes 1.12 reference on the v2beta1 fields:

https://v1-16.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#metricspec-v2beta1-autoscaling

Kubernetes 1.12 reference on the v2beta2 fields:

https://v1-16.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#metricspec-v2beta2-autoscaling

0
votes

If you need to drive the Horizontal Pod Autoscaler with a custom external metric and only v2beta1 is available to you (I think this is still true of GKE), here is how we do it routinely on GKE. You need:

  1. A Stackdriver Monitoring metric, possibly one you create yourself,
  2. If the metric isn't derived from sampling Stackdriver logs, a way to publish data to the Stackdriver Monitoring metric, such as a cronjob that runs no more than once per minute (we use a little Python script and Google's Python library for monitoring_v3; a skeleton CronJob manifest is sketched after this list), and
  3. A custom metrics adapter to expose Stackdriver Monitoring to the HPA (e.g., on Google, gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0). There's a tutorial on how to deploy this adapter here. You'll need to ensure that you grant the required RBAC permissions to the service account running the adapter, as shown here. You may or may not want to grant the principal that deploys the configuration the cluster-admin role as described in the tutorial; we use Helm 2 with Tiller and are careful to grant Tiller least privilege to deploy.
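
For item 2, a skeleton of such a CronJob might look like this (the image name is a hypothetical placeholder for whatever wraps your publishing script):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: publish-custom-metric
spec:
  schedule: "*/1 * * * *"        # run once per minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: publisher
              # hypothetical image containing the Python script that writes to Stackdriver Monitoring
              image: gcr.io/your-project/metric-publisher:latest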

Configure your HPA this way:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  ...
spec:
  scaleTargetRef:
    kind: e.g., StatefulSet
    name: name-of-pod-to-scale
    apiVersion: e.g., apps/v1
  minReplicas: 1
  maxReplicas: ...
  metrics:
    - type: External
      external:
        metricName: "custom.googleapis.com|your_metric_name"
        metricSelector:
          matchLabels:
            resource.type: "generic_task"
            resource.labels.job: ...
            resource.labels.namespace: ...
            resource.labels.project_id: ...
            resource.labels.task_id: ...
        targetValue: e.g., 0.7 (i.e., if you publish a metric that measures the ratio between demand and current capacity)

If you ask kubectl for your HPA object, you won't see autoscaling/v2beta1 settings, but this works well:

kubectl get --raw /apis/autoscaling/v2beta1/namespaces/your-namespace/horizontalpodautoscalers/your-autoscaler | jq

So far, we've only exercised this on GKE. It's clearly Stackdriver-specific. To the extent that Stackdriver can be deployed on other public managed k8s platforms, it might actually be portable. Or you might end up with a different way to publish a custom metric for each platform, using a different metrics publishing library in your cronjob, and a different custom metrics adapter. We know that one exists for Azure, for example.

-8
votes

Just like any other software product, Kubernetes releases new versions with new features. In Kubernetes, every object is specified with an API version, and with each new API version, objects gain new features or additional capabilities.

So in the case of the HPA, v2beta2 has some more features than v2beta1, which are mentioned in the documentation. Always remember to use the stable release (e.g., v1) if available; if not, use the latest release (v2beta2 in the case of the HPA) for the object.