2 votes

I want an application to pull an item off a queue, process that item, and then destroy itself: Pull -> Process -> Destroy.

I've looked at using the Job pattern Queue with Pod Per Work Item, which fits the use case, but it isn't appropriate when I need the workers to autoscale, i.e. 0/1 pods when the queue is empty and scaling up as items are added. The only way I can see of doing this is via a Deployment, but that loses the Queue with Pod Per Work Item pattern. There must be a fresh container per item.

Is there a way to have the job pattern Queue with Pod Per Work Item but with auto-scaling?
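
For reference, this is roughly the Job I'm starting from, following the Queue with Pod Per Work Item docs (the image, queue name and parallelism are placeholders):

    # Sketch of the "Queue with Pod Per Work Item" pattern:
    # each pod pulls one message, processes it and exits.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: queue-worker
    spec:
      parallelism: 4              # fixed - this is what I want to scale with the queue
      template:
        spec:
          containers:
          - name: worker
            image: myrepo/queue-worker:latest   # placeholder worker image
            env:
            - name: QUEUE_NAME
              value: work-items                 # placeholder queue name
          restartPolicy: Never                  # a finished pod stays finished

The parallelism is what I'd like to follow the queue length instead of being hard-coded.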

"The only way I can see doing this is via a deployment but that removes the pattern of Queue with Pod per Work Item" – is this your pain point? You want to ensure that a pod will process exactly one message and then destroy itself? If so, you can achieve that by fetching exactly one item and exiting accordingly in your code. – Anas Tiour
Will the pod in the Deployment go to succeeded (finish and be destroyed) and a new pod be created to take its place, keeping within the number of replicas, which I could in turn use to scale up and down? – Softey
If the pod "successfully" exits, I see no reason why it would be restarted by K8s. I don't understand your second question, sorry. – Anas Tiour

1 Answer

2 votes

I am a bit confused, so I'll just say this: if you don't mind a failed pod, and you want such a pod not to be recreated by Kubernetes, you can achieve that in your code by catching all errors and exiting gracefully (not advised). Please also note that for Deployments, the only accepted restartPolicy is Always. Pods of a Deployment that crash will therefore always be restarted by Kubernetes, will probably fail again for the same reason, and will end up in a CrashLoopBackOff.
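
To make the restartPolicy point concrete, here is a minimal sketch (names and image are placeholders): a Deployment's pod template only validates with Always, whereas a Job may use Never or OnFailure.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: queue-worker
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: queue-worker
      template:
        metadata:
          labels:
            app: queue-worker
        spec:
          restartPolicy: Always     # the only value a Deployment's pod template accepts
          containers:
          - name: worker
            image: myrepo/queue-worker:latest   # placeholder image

That restriction is why the "pod per work item" behaviour is hard to reproduce with a plain Deployment.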

If you want to scale a Deployment based on the length of a RabbitMQ queue, check out KEDA. It is an event-driven autoscaling platform. Make sure to also check their example with RabbitMQ.
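
I haven't verified this against your setup, but a ScaledObject for that case looks roughly like the sketch below (the Deployment name, queue name, thresholds and the connection-string environment variable are assumptions; see the KEDA RabbitMQ scaler documentation for the exact trigger fields):

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: queue-worker-scaler
    spec:
      scaleTargetRef:
        name: queue-worker           # the Deployment to scale (placeholder)
      minReplicaCount: 0             # scale to zero when the queue is empty
      maxReplicaCount: 30
      triggers:
      - type: rabbitmq
        metadata:
          queueName: work-items      # placeholder queue name
          mode: QueueLength          # scale on the number of messages waiting
          value: "5"                 # target messages per replica
          hostFromEnv: RABBITMQ_HOST # AMQP connection string from the workload's env

KEDA then drives the Deployment's replica count up and down (including down to zero) as the queue length changes.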

Another possibility is a Job/Deployment that routinely checks the length of the queue in question and executes kubectl commands to scale your Deployment. Here is the cleanest one I could find, at least for my taste; a rough sketch of the idea is below.
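
I haven't used that particular project, but the general idea can be sketched as a CronJob that reads the queue depth from the RabbitMQ management API and scales the Deployment with kubectl (the image, credentials, URL and scaling rule are all assumptions you would need to adapt, and the service account needs RBAC permission to scale Deployments):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: queue-autoscaler
    spec:
      schedule: "*/1 * * * *"                        # check the queue every minute
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: queue-autoscaler   # needs RBAC to scale deployments
              restartPolicy: OnFailure
              containers:
              - name: scaler
                image: bitnami/kubectl:latest        # placeholder; any image with kubectl and curl
                command:
                - /bin/sh
                - -c
                - |
                  # Placeholder query against the RabbitMQ management API (default vhost "/")
                  LEN=$(curl -s -u "$RABBIT_USER:$RABBIT_PASS" \
                    "http://rabbitmq:15672/api/queues/%2f/work-items" \
                    | grep -o '"messages":[0-9]*' | head -1 | cut -d: -f2)
                  # One replica per message, capped at 10; zero when the queue is empty
                  REPLICAS=$(( LEN > 10 ? 10 : LEN ))
                  kubectl scale deployment queue-worker --replicas="$REPLICAS"

It is cruder than KEDA (fixed polling interval, no cooldown logic), but it keeps everything inside plain Kubernetes objects.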