A `Pod` is ready only when all of its containers are ready. When a Pod is ready, it is added to the load balancing pools of all matching Services, because it means that this Pod is able to serve requests.
As you can see in the Readiness Probe documentation:

> The kubelet uses readiness probes to know when a container is ready to start accepting traffic.
Using a readiness probe ensures that traffic does not reach a container that is not ready for it.
Using a liveness probe ensures that a container is restarted when it fails (the kubelet kills and restarts only that specific container, not the entire Pod).
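For completeness, a readiness probe is declared the same way as a liveness probe, just under the `readinessProbe` key. A minimal sketch (the `/ready` endpoint, port, and timing values are assumptions, not part of the manifest below):

```yaml
readinessProbe:
  httpGet:
    path: /ready          # hypothetical readiness endpoint
    port: 8080
  initialDelaySeconds: 5  # wait before the first check
  periodSeconds: 10       # check every 10 seconds
```

While this probe fails, the container is reported as not ready and the Pod is removed from Service endpoints, but the container is not restarted.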
Additionally, to answer your last question:

> And if 1 out of 3 containers was restarted, then it means that the entire pod was restarted. Is this correct?

I will use an example.
Let's take a simple `Pod` manifest with a `livenessProbe` for one container that always fails:
```yaml
---
# web-app.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web-app
  name: web-app
spec:
  containers:
  - image: nginx
    name: web
  - image: redis
    name: failed-container
    livenessProbe:
      httpGet:
        path: /healthz  # I don't have this endpoint configured, so the probe will always fail.
        port: 8080
```
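Assuming the manifest above is saved as `web-app.yml`, the Pod can be created with:

```shell
# Create the Pod from the manifest file
kubectl apply -f web-app.yml
```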
After creating the `web-app` Pod and waiting some time, we can check how the `livenessProbe` works:
```
$ kubectl describe pod web-app
Name:         web-app
Namespace:    default
Containers:
  web:
    ...
    State:          Running
      Started:      Tue, 09 Mar 2021 09:56:59 +0000
    Ready:          True
    Restart Count:  0
    ...
  failed-container:
    ...
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
    Ready:          False
    Restart Count:  7
    ...
Events:
  Type    Reason   Age                  From     Message
  ----    ------   ----                 ----     -------
  ...
  Normal  Killing  9m40s (x2 over 10m)  kubelet  Container failed-container failed liveness probe, will be restarted
  ...
```
As you can see, only the `failed-container` container was restarted (`Restart Count: 7`); the `web` container kept running (`Restart Count: 0`) and the Pod itself was not recreated. So no, a single container restart does not mean the entire Pod was restarted.
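This is also visible in the `READY` column of `kubectl get pod`, which shows ready containers out of total containers. Illustrative output only; the exact restart count and age will differ on your cluster:

```shell
$ kubectl get pod web-app
NAME      READY   STATUS    RESTARTS   AGE
web-app   1/2     Running   7          16m
```

The Pod stays `Running` with one of its two containers ready, while the failing container is restarted in place.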
More information can be found in the Liveness, Readiness and Startup Probes documentation.