
I have a GKE cluster with an autoscale node pool.

After adding some pods, the cluster starts autoscale and creates a new node but the old running pods start to crash randomly:

WorkloadsError

1 Answer

I don't think it's directly related to autoscaling unless some of your old nodes are being removed. The autoscaling is triggered by adding more pods, but most likely there is something wrong with your application or its connectivity to external services (a database, for example). I would check what's going on in the pod logs:

$ kubectl logs <pod-id-that-is-crashing>
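
If the container has already crashed and been restarted, the current logs may be empty. In that case you can ask for the logs of the previous container instance with the --previous flag (add -c <container-name> if the pod runs more than one container):

$ kubectl logs --previous <pod-id-that-is-crashing>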

You can also check for any other events on the pods or on the deployment (if you are using a deployment):

$ kubectl describe deployment <deployment-name>
$ kubectl describe pod <pod-id> -c <container-name>
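
To rule out the autoscaler itself, you can also confirm that no old nodes were removed during the scale-up and look at the recent cluster events in chronological order:

$ kubectl get nodes
$ kubectl get events --sort-by=.metadata.creationTimestamp

If the node count only went up and the events show application-level errors (OOMKilled, failed liveness probes, connection timeouts), the problem is in the workload rather than in the scaling.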

Hope it helps!