0 votes

In a Google Cloud blog post, they say that if a Readiness probe fails, traffic will not be routed to the pod, and if a Liveness probe fails, the pod will be restarted.

The Kubernetes docs say that the kubelet uses Liveness probes to know if a container needs to be restarted, and Readiness probes to check if a container is ready to start accepting requests from clients.

My current understanding is that a pod is considered Ready and Alive when all of its containers are ready. This in turn implies that if 1 out of 3 containers in a pod fails, the entire pod is considered failed (not Ready / not Alive), and if 1 out of 3 containers was restarted, then the entire pod was restarted. Is this correct?


2 Answers

4 votes

A Pod is ready only when all of its containers are ready. When a Pod is ready, it should be added to the load balancing pools of all matching Services because it means that this Pod is able to serve requests.
As you can see in the Readiness Probe documentation:

The kubelet uses readiness probes to know when a container is ready to start accepting traffic.

Using a readiness probe ensures that traffic does not reach a container that is not ready for it.
Using a liveness probe ensures that a container is restarted when it fails (the kubelet will kill and restart only that specific container).
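
For completeness, a readiness probe is declared in the same way as a liveness probe. A minimal sketch (the /ready endpoint, the port, and the timing values are assumptions for illustration, not taken from the question):

  containers:
  - image: nginx
    name: web
    readinessProbe:
      httpGet:
        path: /ready        # assumed endpoint, for illustration only
        port: 8080          # assumed port
      initialDelaySeconds: 5
      periodSeconds: 10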

Additionally, to answer your last question, I will use an example:

And if 1 out of 3 containers was restarted, then it means that the entire pod was restarted. Is this correct?

Let's take a simple Pod manifest with a livenessProbe for one container that always fails:

---
# web-app.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web-app
  name: web-app
spec:
  containers:
  - image: nginx
    name: web

  - image: redis
    name: failed-container
    livenessProbe:
      httpGet:
        path: /healthz # this endpoint is not configured, so the probe will always fail
        port: 8080

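The Pod can be created with kubectl apply (assuming the manifest is saved as web-app.yml, as the comment in it suggests):

$ kubectl apply -f web-app.yml
pod/web-app created
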
After creating web-app Pod and waiting some time, we can check how the livenessProbe works:

$ kubectl describe pod web-app
Name:         web-app
Namespace:    default
Containers:
  web:
    ...
    State:          Running
      Started:      Tue, 09 Mar 2021 09:56:59 +0000
    Ready:          True
    Restart Count:  0
    ...
  failed-container:
    ...
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
    Ready:          False
    Restart Count:  7
    ...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  ...
  Normal   Killing    9m40s (x2 over 10m)   kubelet            Container failed-container failed liveness probe, will be restarted
  ...

As you can see, only the failed-container container was restarted (Restart Count: 7).
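
The same can be seen at a glance with kubectl get pod. The output below is illustrative (the restart count and age depend on how long the Pod has been running), but READY 1/2 shows that only one of the two containers is ready while the Pod itself keeps running:

$ kubectl get pod web-app
NAME      READY   STATUS             RESTARTS   AGE
web-app   1/2     CrashLoopBackOff   7          12m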

More information can be found in the Liveness, Readiness and Startup Probes documentation.

1 vote

For Pods with multiple containers, there is an option to restart only a single container, provided you have the required access.

Command:

kubectl exec POD_NAME -c CONTAINER_NAME -- "Command used for restarting the container"

This way the Pod itself is not deleted and Kubernetes doesn't need to recreate it.
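
One common way to do this (a sketch, not the only option: it assumes the container image ships a shell and the Pod's restartPolicy is Always or OnFailure) is to terminate the container's main process, which makes the kubelet restart only that container. Using the web-app Pod from the answer above:

$ kubectl exec web-app -c web -- /bin/sh -c "kill 1"   # assumes /bin/sh exists; kill 1 stops the container's main process (PID 1)

Afterwards, kubectl describe pod web-app should show an increased Restart Count for the web container only, while the other container keeps running untouched.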