0 votes

I created a cluster with a single master and 2 workers using Kops on AWS. Now, for my experiment, I need to kill a pod on a worker and check the kubelet logs to find out:

When was the pod removed from the service endpoint list?

When was a new pod container recreated?

When was the new pod container assigned its new IP address?

When I created an on-prem cluster using kubeadm, I could see all of this information in the kubelet logs of the worker node whose pod was killed.

However, I do not see such detailed kubelet logs, especially logs related to the assignment of the IP address, in the Kops-created K8s cluster.

How can I get the information mentioned above in a cluster created using Kops?
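
As a sketch of the experiment itself: independent of the kubelet logs, the same timeline can also be observed from the API server with standard kubectl commands (the label app=my-app and the pod name are hypothetical):

kubectl delete pod my-app-7d4b9c-x2k5q           # kill the pod on a worker (hypothetical name)
kubectl get events --watch                       # watch deletion/recreation events from the API server
kubectl get pods -l app=my-app -o wide --watch   # -o wide shows the new pod's IP once assigned

This reconstructs the pod's removal, recreation, and IP assignment from the control plane's point of view, which can complement whatever the kubelet logs show.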

2
Can you access the worker nodes and SSH into them? Just try journalctl -u kubelet, or if you can SSH, you can access the log files directly. – Harsh Manvar

2 Answers

1 vote

On machines with systemd, both the kubelet and the container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory.

You can access the systemd logs with the journalctl command:

journalctl -u kubelet

This information, of course, has to be collected after logging into the desired node.
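
A minimal sketch for a Kops node on AWS (the node IP is a placeholder, and the SSH user depends on the AMI: commonly ubuntu, admin, or ec2-user):

ssh -i ~/.ssh/id_rsa ubuntu@<node-ip>            # log into the worker node
journalctl -u kubelet --since "10 min ago"       # recent kubelet entries around the experiment
journalctl -u kubelet -f | grep my-app           # follow the log, filtering by a (hypothetical) pod name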

0 votes

In Kops on AWS, the kubelet logs are not as descriptive as they are in a Kubernetes cluster created using kubeadm.
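
One thing that may help, assuming your Kops version exposes the kubelet logLevel field in its cluster spec (worth verifying against the Kops docs for your version), is to raise the kubelet's log verbosity and roll the nodes; $CLUSTER_NAME is a placeholder:

kops edit cluster $CLUSTER_NAME                  # opens the cluster spec in an editor
# under spec:, add (assumed field name):
#   kubelet:
#     logLevel: 4
kops update cluster $CLUSTER_NAME --yes          # apply the spec change
kops rolling-update cluster $CLUSTER_NAME --yes  # restart nodes so the kubelet picks it up

At a higher verbosity the kubelet should log considerably more detail about pod lifecycle events.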