0 votes

My requirement is to scale up pods based on a custom metric: as the number of pending messages in the queue increases, the number of pods has to increase to process those jobs. In Kubernetes, scale up is working fine with the Prometheus adapter and Prometheus operator.

However, I have long-running processes in the pods. The HPA checks the custom metric and tries to scale down, which kills a pod in the middle of an operation and loses the message it was processing. How can I make the HPA kill only free pods where no process is running?

AdapterService to collect custom metrics

- seriesQuery: '{namespace="default",service="hpatest-service"}'
  resources:
    overrides:
      namespace:
        resource: "namespace"
      service:
        resource: "service"
  name:
    matches: "msg_consumergroup_lag"
  metricsQuery: 'avg_over_time(msg_consumergroup_lag{topic="test",consumergroup="test"}[1m])'

HPA Configuration

- type: Object
  object:
    describedObject:
      kind: Service
      name: custommetric-service
    metric:
      name: msg_consumergroup_lag
    target:
      type: Value
      value: 2
Please share the autoscaling configuration that you have used. – dassum
I have updated the configuration details. Scaling up is working fine. Scale down happens when the HPA sees fewer messages in the queue, but those messages have already been consumed by the service and are still being processed. How can I instruct the HPA to check what is running in the pods and pick an idle pod to scale down? One more thing: my service calls an external process and waits for its response, so I cannot base this on CPU/memory metrics. – Santhoo Kumar

4 Answers

1 vote

At present the HPA cannot be configured to accommodate workloads of this nature. The HPA simply sets the replica count on the deployment to a desired value according to the scaling algorithm, and the deployment chooses one or more pods to terminate.

There is a lot of discussion on this topic in this Kubernetes issue that may be of interest to you. It is not solved by the HPA, and may never be. There may need to be a different kind of autoscaler for this type of workload. Some suggestions are given in the link that may help you in defining one of these.

If I was to take this on myself, I would create a new controller, with corresponding CRD containing a job definition and the scaling requirements. Instead of scaling deployments, I would have it launch jobs. I would have the jobs do their work (process the queue) until they became idle (no items in the queue) then exit. The controller would only scale up, by adding jobs, never down. The jobs themselves would scale down by exiting when the queue is empty.

This would require that your jobs be able to detect when they become idle, by checking the queue and exiting if there is nothing there. If your queue read blocks forever, this would not work and you would need a different solution.
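As a rough sketch of what such a controller might create, here is a minimal Job manifest for a queue worker that exits on its own once the queue is empty; the image name is a hypothetical placeholder, not something from the question:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: queue-worker-        # the controller would create one of these per scale-up step
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never           # the pod is not restarted; draining the queue ends the Job
      containers:
      - name: worker
        image: example/queue-worker:latest   # hypothetical image; consumes messages until the queue is empty, then exits

Because the worker only exits between messages, nothing is ever killed in the middle of an operation.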

The kubebuilder project has an excellent example of a job controller. I would start with that and extend it with the ability to check your published metrics and start the jobs accordingly.

Also see Fine Parallel Processing Using a Work Queue in the Kubernetes documentation.

0 votes

I will suggest an idea here: run a custom script that disables the HPA as soon as it has scaled up. The script keeps checking the resources and the running processes, and once no process is running it re-enables the HPA so the scale down can happen; alternatively, it can kill the idle pods itself with a kubectl command and then enable the HPA again.
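A very rough sketch of that idea, assuming the HPA is named hpatest-hpa and its manifest is saved in hpa.yaml (both names are placeholders); "disabling" the HPA here simply means deleting it and re-applying it later:

# scale-up detected: remove the HPA so it cannot scale down mid-processing
kubectl delete hpa hpatest-hpa

# ... poll your metrics/processes until no pod is busy ...

# optionally delete only the idle pods yourself
kubectl delete pod <idle-pod-name>

# re-create the HPA from its saved manifest so normal autoscaling resumes
kubectl apply -f hpa.yaml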

0 votes

I had a similar use case of scaling a deployment based on queue length, and I used KEDA (keda.sh), which does exactly that. Just be aware that it will scale down the additional pods created for that deployment even if a pod is currently processing data/input, so you will have to configure the cooldown parameter to scale down appropriately.
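For illustration only, a minimal ScaledObject along these lines might look like the following; the deployment name, broker address, and replica limits are assumptions, while the topic, consumer group, and lag threshold reuse the values from the question:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaledobject
spec:
  scaleTargetRef:
    name: consumer-deployment       # hypothetical deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 10
  cooldownPeriod: 300               # seconds to wait after the last trigger activity before scaling back to minReplicaCount
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka:9092  # hypothetical broker address
      consumerGroup: test
      topic: test
      lagThreshold: "2"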

0 votes

KEDA ScaledJobs are best for such scenarios and can be triggered through a queue, storage, etc. (the currently available scalers can be found here). ScaledJobs are not killed in the middle of execution and are recommended for long-running executions.
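A minimal ScaledJob sketch, assuming the same Kafka queue as in the question; the image is a hypothetical worker that processes its share of messages and then exits:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: consumer-scaledjob
spec:
  pollingInterval: 30               # how often KEDA checks the queue
  maxReplicaCount: 10
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTargetRef:
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: worker
          image: example/queue-worker:latest   # hypothetical image; runs to completion, so it is never killed mid-message
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka:9092  # hypothetical broker address
      consumerGroup: test
      topic: test
      lagThreshold: "2"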