I have set up a Kubernetes cluster using kubeadm.
Environment
- Master node installed on a PC with a public IP.
- Worker node behind NAT (the interface has a local internal IP, but the node needs to be reached via its public IP).
Status
The worker node is able to join the cluster, and when I run
kubectl get nodes
the status of the node is Ready.
Kubernetes can deploy and run pods on that node.
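For reference, the addresses the node reports (including its internal IP) can be listed with a wider output; <node-name> below is just a placeholder for the worker node's name:
kubectl get nodes -o wide
kubectl describe node <node-name>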
Problem
The problem is that I'm not able to access the pods deployed on that node. For example, if I run
kubectl logs <pod-name>
where <pod-name> is the name of a pod deployed on the worker node, I get this error:
Error from server: Get https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage: dial tcp 192.168.0.17:10250: i/o timeout
because it is trying to use the local IP 192.168.0.17, which is not accessible externally.
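As far as I understand, the API server picks the kubelet address from the addresses published in the node object, so I assume the field to look at is node.status.addresses; something like this should show it (<node-name> is a placeholder):
kubectl get node <node-name> -o jsonpath='{.status.addresses}'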
I have seen that the node has this annotation:
flannel.alpha.coreos.com/public-ip: 192.168.0.17
So, I have tried to modify the annotation, setting the external IP, like this:
flannel.alpha.coreos.com/public-ip: <my_external_ip>
and I see that the node is correctly annotated, but kubectl logs still tries to reach 192.168.0.17.
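For reference, I changed the annotation roughly like this (<node-name> is a placeholder for the worker node's name):
kubectl annotate node <node-name> flannel.alpha.coreos.com/public-ip=<my_external_ip> --overwrite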
Is there something else that I have to set up on the worker node or in the cluster configuration?