
I am running on-prem Kubernetes. I have a Helm release that is running with 3 pods. At some point (I assume) I deployed the chart with 3 replicas, but I have since deployed an update that sets 2 replicas.

When I run helm get manifest my-release-name -n my-namespace, it shows that the Deployment YAML has replicas set to 2.

But it still has 3 pods when I run kubectl get pods -n my-namespace.

What is needed (from a Helm point of view) to get the number of replicas down to the count I set?
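
For reference, this is roughly what I have been running to compare what Helm rendered with what the cluster reports (release and namespace names are the placeholders used above):

# Desired replica count as rendered by Helm
helm get manifest my-release-name -n my-namespace | grep replicas

# What the cluster is actually running
kubectl get deployment -n my-namespace
kubectl get replicaset -n my-namespace
kubectl get pods -n my-namespace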

Update
I noticed this while I was debugging a CrashLoopBackOff for the release.

This is what kubectl describe pod looks like for one of the three pods.

Name:         my-helm-release-7679dc8c79-knd9x
Namespace:    my-namespace
Priority:     0
Node:         my-kube-cluster-b178d4-k8s-worker-1/10.1.2.3
Start Time:   Wed, 05 May 2021 21:27:36 -0600
Labels:       app.kubernetes.io/instance=my-helm-release
              app.kubernetes.io/name=my-helm-release
              pod-template-hash=7679dc8c79
Annotations:  <none>
Status:       Running
IP:           10.1.2.4
IPs:
  IP:           10.1.2.4
Controlled By:  ReplicaSet/my-helm-release-7679dc8c79
Containers:
  my-helm-release:
    Container ID:   docker://9a9f213efa63ba8fd5a9e0fad84eb0615996c768c236ae0045d1e7bec012eb02
    Image:          dockerrespository.mydomain.com/repository/runtime/my-helm-release:1.9.0-build.166
    Image ID:       docker-pullable://dockerrespository.mydomain.com/repository/runtime/my-helm-release@sha256:a11179795e7ebe3b9e57a35b0b27ec9577c5c3cd473cc0ecc393a874f03eed92
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    139
      Started:      Tue, 11 May 2021 12:24:04 -0600
      Finished:     Tue, 11 May 2021 12:24:15 -0600
    Ready:          False
    Restart Count:  2509
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-82gnm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-82gnm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-82gnm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                      From     Message
  ----     ------     ----                     ----     -------
  Warning  Unhealthy  10m (x3758 over 5d15h)   kubelet  Readiness probe failed: Get http://10.1.2.4:80/: dial tcp 10.1.2.4:80: connect: connection refused
  Warning  BackOff    15s (x35328 over 5d14h)  kubelet  Back-off restarting failed container
What is the status of your pods? Can you show some output from your commands? – Jonas
@Jonas The pods are in a CrashLoopBackOff. I was trying to debug that when I noticed that the replica count did not match what my recent deploys said it should be. I have added a kubectl describe pod output for one of the pods in the release. – Vaccano

1 Answer


What is needed (from a Helm point of view) to get the number of replicas down to the count I set?

Your pods need to be in a "healthy" state. Only then will the rollout complete and the Deployment settle at your desired number of replicas.

First, you deployed 3 replicas. This is managed by a ReplicaSet.
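
You can see both ReplicaSets side by side (the old 3-replica one and the new 2-replica one); as a sketch, using the namespace from your question:

kubectl get replicaset -n my-namespace -o wide
# The DESIRED / CURRENT / READY columns show which ReplicaSet still owns the
# extra pods, and the pod-template-hash label (7679dc8c79 in your describe
# output) tells you which ReplicaSet a given pod belongs to.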

Then you deployed a new revision with 2 replicas. A "rolling update" is performed: pods for the new revision are created first, and the old ReplicaSet is only scaled down once healthy instances of the new revision are running. Since your pods are stuck in CrashLoopBackOff and never become Ready, the rollout never completes and the old ReplicaSet is never scaled down, which is why you still see 3 pods.
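
To watch this and confirm when it resolves, something like the following should work (assuming the Deployment is named my-helm-release, as the pod name suggests):

# Watch the rollout; it will not finish until the new pods become Ready
kubectl rollout status deployment/my-helm-release -n my-namespace

# Look at why the pods keep crashing (exit code 139 usually means the
# process died with SIGSEGV, so check the previous container's logs)
kubectl logs my-helm-release-7679dc8c79-knd9x -n my-namespace --previous

Once the new pods pass their readiness probe, the old ReplicaSet is scaled down and the pod count drops to the 2 replicas in your manifest.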