4
votes

I have a deployment with a defined number of replicas. I use a readiness probe to communicate whether my Pod is ready or not ready to handle new connections – my Pods toggle between the ready and not-ready states during their lifetime.

I want Kubernetes to scale the deployment up or down to ensure that there is always the desired number of Pods in a ready state.

Example:

  • If replicas is 4 and there are 4 Pods in a ready state, then Kubernetes should keep the current replica count.
  • If replicas is 4 and there are 2 ready Pods and 2 not-ready Pods, then Kubernetes should add 2 more Pods.

How do I make Kubernetes scale my deployment based on the "ready"/"not ready" status of my Pods?
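
For context, this is roughly what my Deployment and readiness probe look like – the name, image, port, and probe endpoint below are placeholders, not my real configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:        # toggles the Pod between ready and not ready
          httpGet:
            path: /healthz     # placeholder health endpoint
            port: 8080
          periodSeconds: 5
          failureThreshold: 3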

3
Not ready for what reason? – suren
Then you'd have 4 not-ready Pods – because the new Pods will spin up in this state first, by which point the first not-ready Pod will have become ready. – Software Engineer
@EngineerDollery this is not just for spin-up, this is mainly for the general lifecycle. – orirab
You can scale the deployment up / down based on CPU, memory, etc. utilization, but not based on Pod status. – src3369

3 Answers

0
votes

I don't think this is possible. If a Pod is not ready, Kubernetes will not make it ready, because readiness is something related to your application. Even if it created a new Pod, there would be no guarantee that the new Pod becomes ready. So you have to resolve the reasons behind the not-ready status yourself, and then Kubernetes will mark the Pod ready again. The only thing Kubernetes does is keep not-ready Pods from taking workload, to avoid request failures.
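
To illustrate that last point: a Service only routes traffic to Pods whose readiness probe passes; not-ready Pods are removed from the Service's endpoints. A minimal sketch, assuming the Pods carry an illustrative label app: my-app:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  selector:
    app: my-app           # must match the Pod labels of your Deployment
  ports:
  - port: 80
    targetPort: 8080
# Pods that fail their readiness probe are dropped from this Service's
# Endpoints and receive no traffic until they report ready again.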

-1
votes

Ensuring you always have 4 pods running can be done by specifying the replicas property in your deployment definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4  # here we define a requirement for 4 replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Kubernetes will ensure that if any pods crash, replacement pods will be created so that a total of 4 are always available.
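
If you instead want to scale on resource utilization, as mentioned in the comments, you can point a HorizontalPodAutoscaler at the same Deployment. A minimal sketch – the replica bounds and the 70% CPU target are illustrative values, and it requires the metrics server to be available in the cluster:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:              # scales the Deployment defined above
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 4               # illustrative lower bound
  maxReplicas: 8               # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # illustrative CPU target

Note that this scales on CPU utilization, not on the ready/not-ready status of the Pods.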

-3
votes

You cannot schedule Pods on unhealthy nodes in the cluster. The API server and scheduler will only create Pods on nodes which are healthy, schedulable, and have enough remaining resources and quota for additional Pods.

Moreover, what you describe is the auto-healing concept of Kubernetes, which in basic terms is taken care of automatically.