0 votes

We're using Kubernetes 1.9.3 managed via kops 1.9.3 in AWS, with gossip-based DNS and the Weave CNI network plugin.

I was doing a rolling update of the master instance groups to enable some additional admission controllers (PodNodeSelector and PodTolerationRestriction). I had done this in two other clusters with no problems. When the rolling update got to the third master (we run a 3-master setup), it terminated the instance and brought up a replacement, but the new master failed to join the cluster. On further investigation, and after repeated attempts to roll the third master back into the cluster, I found that the failing master keeps trying to join the cluster under the old master's IP address, even though its actual IP address is different. Watching kubectl get nodes | grep master shows that the cluster still thinks the node has the old IP address, and the join fails because the node no longer has that IP. For some reason the gossip-based DNS is not being notified of the new master's IP address.
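For reference, this is roughly what the roll and the node watch looked like. It is only a sketch: the instance group name master-us-east-1c is a placeholder for our actual IG name.

# Roll the third master instance group (IG name is a placeholder)
kops rolling-update cluster kops-prod.k8s.local --instance-group master-us-east-1c --yes

# Watch the master nodes; the replacement keeps showing up under the old IP
watch 'kubectl get nodes -o wide | grep master'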

This is causing problems because the kubernetes service still has the old master's IP address in its endpoints, so any API requests that get routed to that non-existent backend fail (one way to confirm this is sketched after the log excerpt below). It is also causing problems for etcd, which keeps trying to contact the member on the old IP address. Lots of logs like this:

2018-10-29 22:25:43.326966 W | etcdserver: failed to reach the peerURL(http://etcd-events-f.internal.kops-prod.k8s.local:2381) of member 3b7c45b923efd852 (Get http://etcd-events-f.internal.kops-prod.k8s.local:2381/version: dial tcp 10.34.6.51:2381: i/o timeout)
2018-10-29 22:25:43.327088 W | etcdserver: cannot get the version of member 3b7c45b923efd852 (Get http://etcd-events-f.internal.kops-prod.k8s.local:2381/version: dial tcp 10.34.6.51:2381: i/o timeout)
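To confirm the stale backend, one option (a sketch; the kubernetes service in the default namespace is the standard apiserver service, but verify against your cluster) is to compare the apiserver endpoints with the masters that actually exist:

# List the apiserver backend IPs recorded in the kubernetes service
kubectl get endpoints kubernetes -n default -o yaml

# Compare against the masters' real internal IPs
kubectl get nodes -o wide | grep master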

One odd thing: if I run etcdctl cluster-health against the available masters' etcd instances, they all report the unhealthy member ID as f90faf39a4c5d077, but the etcd-events logs report the unhealthy member ID as 3b7c45b923efd852. So there seems to be some inconsistency with etcd.
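Since kops runs the main etcd cluster and etcd-events as two separate clusters, the differing member IDs may simply come from different clusters. A sketch of how to compare them, assuming the kops default client ports of 4001 for main etcd and 4002 for etcd-events (check your own manifests), run on one of the healthy masters:

# Main etcd cluster (assumed kops default client port 4001)
etcdctl --endpoints http://127.0.0.1:4001 cluster-health
etcdctl --endpoints http://127.0.0.1:4001 member list

# Separate etcd-events cluster (assumed kops default client port 4002)
etcdctl --endpoints http://127.0.0.1:4002 cluster-health
etcdctl --endpoints http://127.0.0.1:4002 member list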

Since we're running a three-master setup with one master already down, we don't want to restart any of the other masters to try to fix the problem, because we're afraid of losing quorum on the etcd cluster.

We use Weave 2.3.0 as our CNI network provider.

I noticed on the failing master that the Weave CNI config /etc/cni/net.d/10-weave.conf isn't getting created, and the /etc/hosts files on the working masters aren't being updated with the new master's IP address. It seems like kube-proxy isn't getting the update for some reason.
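Roughly how I checked this (a sketch; the weave-net DaemonSet and its name=weave-net label are the defaults kops installs, and the pod name is a placeholder, so verify against your cluster):

# On the failing master: the CNI config weave should have written is missing
ls -l /etc/cni/net.d/

# Weave pod status and logs for the failing node
kubectl -n kube-system get pods -l name=weave-net -o wide
kubectl -n kube-system logs <weave-net-pod-on-failing-node> -c weave

# On a working master: gossip DNS records for the masters in /etc/hosts
grep master /etc/hosts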

We're running the default Debian 8 (jessie) image that is provided with kops 1.9.

How can we get the master to properly update DNS with its new IP address?

1
I'd like to advise you to spend more time researching to locate the causes of your problems, then edit your question to narrow it down to one concrete problem, because it looks very complicated at this time. - Konstantin Vustin

1 Answer

0 votes

My co-worker found that the fix was restarting the kube-dns and kube-dns-autoscaler pods. We're still not sure why they were failing to update DNS with the new master IP, but after restarting them, adding the new master to the cluster worked fine.
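In case it helps anyone else, the restart was roughly this (a sketch; the k8s-app=kube-dns and k8s-app=kube-dns-autoscaler labels are the defaults deployed by kops, so check your own labels first). Deleting the pods lets their Deployments recreate them:

# Delete the kube-dns pods; the kube-dns Deployment recreates them
kubectl -n kube-system delete pods -l k8s-app=kube-dns

# Same for the autoscaler
kubectl -n kube-system delete pods -l k8s-app=kube-dns-autoscaler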