1 vote

I have several Celery workers running in minikube, and they work on tasks passed to them through RabbitMQ. Recently I updated some of the code for the Celery workers and changed the image. When I run helm upgrade release_name chart_path, all the existing worker pods are terminated and all the unfinished tasks are abandoned. Is there a way to upgrade the Helm chart without terminating the old pods?

  1. I know that helm install -n new_release_name chart_path will give me a new set of celery workers; however, due to some limitations, I am not allowed to deploy pods in a new release.
  2. I tried running helm upgrade release_name chart_path --set deployment.name=worker2 because I thought that having a new deployment name would stop Helm from deleting the old pods, but this did not work either.
How is your deployment strategy configured? Is it RollingUpdate or Recreate? – Bala
I tried configuring the YAML file for the Celery deployment and added the following lines: strategy: type: RollingUpdate rollingUpdate: maxSurge: 2 maxUnavailable: 0. But it is still terminating the old pods when I run the upgrade. Could you suggest a deployment strategy that will work? – Ron Zhang
Try adding a readinessProbe and a livenessProbe. The readinessProbe tells Kubernetes when your new pod is actually ready; even though the container has started, your application may still be initializing. – Bala
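For reference, a minimal sketch of the Deployment settings the comments describe. The container name (worker), the Celery app module (app), the image, and the probe timings are placeholders, not values taken from the question:

    # Illustrative only; names, image, and timings are assumptions.
    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 2
          maxUnavailable: 0
      template:
        spec:
          containers:
            - name: worker
              image: my-celery-worker:latest   # placeholder image
              readinessProbe:
                exec:
                  # Succeeds once the worker responds to a ping, i.e. it has
                  # connected to the broker and finished initializing.
                  command: ["celery", "-A", "app", "inspect", "ping"]
                initialDelaySeconds: 10
                periodSeconds: 30
              livenessProbe:
                exec:
                  command: ["celery", "-A", "app", "inspect", "ping"]
                initialDelaySeconds: 30
                periodSeconds: 60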

1 Answer

1 vote

This is just how Kubernetes Deployments work. What you should do is fix your Celery worker image so that it tries to complete whatever tasks are in flight before actually shutting down. This should probably already be the case, unless you did something funky such that the SIGTERM isn't making it to Celery. See https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods for details.
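To illustrate the answer: Celery performs a warm shutdown on SIGTERM (it stops accepting new tasks and finishes the ones it is running), so the two things to check are that the worker process actually receives the signal (run it directly rather than behind a shell wrapper) and that the pod's grace period is long enough for your longest task. The container name, app module, and 300-second grace period below are placeholders, not values from the question:

    # Sketch only; names, image, and the grace period are assumptions.
    spec:
      template:
        spec:
          # Time allowed for in-flight tasks to finish after SIGTERM
          # before Kubernetes sends SIGKILL.
          terminationGracePeriodSeconds: 300
          containers:
            - name: worker
              image: my-celery-worker:latest   # placeholder
              # Run celery directly (not "sh -c ...") so it is PID 1 and
              # receives the SIGTERM that triggers its warm shutdown.
              command: ["celery", "-A", "app", "worker", "--loglevel=info"]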