2 votes

error doing DNS lookup for NS records for "kubernetes.xxxx.xxx": lookup kubernetes.xxxxxxxx.xxx on 10.0.2.3:53: read udp 10.0.2.15:56154->10.0.2.3:53: i/o timeout

My inbound rules for the master node:

Only my kops update cluster throws this error; all the other commands look fine.

Here is my kops validate cluster output:

Using cluster from kubectl context: kubernetes.xxxx.xxx

Validating cluster kubernetes.xxxxxx.xxxx

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-xxx-xxxx-1a   Master  t2.micro     1    1    xx-xxxxx-1a
nodes                Node    t2.micro     2    2    xx-xxxxxx-1a

NODE STATUS
NAME                                            ROLE    READY
ip-xxxx-xx-xx-xxx.xxx-xxxxx-x.compute.internal  master  True

Validation Failed Ready Master(s) 1 out of 1. Ready Node(s) 0 out of 2.

your nodes are NOT ready kubernetes.xxxxxx.xxx

Comments:

I'm okay to provide additional information too. – Gaudam Thiyagarajan

Are you calling kops from a VM inside your VPC, or from somewhere else? I am just trying to understand where you got that error. The DNS server 10.0.2.3:53 is available only from inside the VPC network, and you are using the compute.internal DNS zone, which is available only from the same network as your cluster. – Anton Kostenko

Yes, I'm trying it from Vagrant. – Gaudam Thiyagarajan

Yes from the VPC, or yes from somewhere else? :) – Anton Kostenko

It's from my local machine: I created a Vagrant instance and connect with kops from there. It's not in the VPC network. – Gaudam Thiyagarajan

3 Answers

1 vote

You just need to add an entry to your /etc/resolv.conf:

nameserver 8.8.8.8
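A cautious way to apply this is to edit a working copy first (a sketch; the file name is illustrative, and note that on distros using systemd-resolved or NetworkManager the live file may be regenerated):

```shell
# Sketch: append Google's public resolver to a working copy of
# resolv.conf, only if the entry is not already present.
# On a real system you would edit /etc/resolv.conf itself (as root).
cp /etc/resolv.conf resolv.conf.work 2>/dev/null || touch resolv.conf.work
grep -q '^nameserver 8.8.8.8$' resolv.conf.work || echo 'nameserver 8.8.8.8' >> resolv.conf.work
cat resolv.conf.work
```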

NB: this assumes you have configured your NS records correctly; otherwise you can follow this doc.

0 votes

It's a DNS issue. I did an nslookup on my name servers and added their IPs to the /etc/resolv.conf file:

nameserver 10.0.2.3
nameserver xxx.xxx.xxx.xxx
nameserver xxx.xxx.xxx.xxx
nameserver xxx.xxx.xxx.xxx
nameserver xxx.xxx.xxx.xxx
search xxxxxx kubernetes.xxxxxx.xxx
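The nameserver IPs above came from looking up the zone's NS records. As a sketch of that last step (the real IPs are redacted, so RFC 5737 documentation addresses stand in), turning a list of resolved nameserver IPs into resolv.conf lines looks like:

```shell
# Sketch: given nameserver IPs (e.g. obtained via nslookup or
# `dig +short A <ns-host>`), emit resolv.conf "nameserver" lines.
# 203.0.113.x are documentation addresses standing in for the redacted IPs.
ips="203.0.113.10
203.0.113.11"

printf '%s\n' "$ips" | while read -r ip; do
  echo "nameserver $ip"
done > resolv.conf.generated

cat resolv.conf.generated
```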

When I ran kops update cluster again, it worked.

0 votes

As you wrote in the comments, you are calling the command from a VM that is outside the VPC network.

It looks like your system is configured to use 10.0.2.3:53 as its DNS server, which is unreachable from outside the VPC; that is why you cannot resolve your zone from the VM.

To fix it, edit your /etc/resolv.conf file and set the nameserver address to 8.8.8.8, for example. If your Kubernetes DNS zone is set up correctly, you will then be able to resolve it (assuming you are using a public DNS zone).

With a private DNS zone it is much the same, but you should set the DNS server address of that zone instead of 8.8.8.8.
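As a sketch, assuming the private zone is served by a resolver at 10.0.0.2 (a made-up VPC address, not from the question), the /etc/resolv.conf on a machine inside the VPC would look like:

```
# Hypothetical /etc/resolv.conf for resolving a private DNS zone.
# 10.0.0.2 and the search domain are illustrative values only.
nameserver 10.0.0.2
search kubernetes.example.internal
```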