My Kubernetes DNS isn't resolving, so I followed the debugging steps described here. As I am new to Kubernetes, can someone point me to the issue I am facing? I can't extract any useful information from the debugging steps.
cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
kubectl get namespace
NAME      STATUS   AGE
default   Active   7d4h
kubectl get pods dnsutils
NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   18         18h
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
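If I understand the debugging guide correctly, I can also point nslookup directly at the kube-dns service IP and at one of the CoreDNS pod IPs (taken from the kube-dns endpoints output further down) to tell a broken service VIP apart from unreachable pods; I have not captured that output yet:
kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.96.0.10
kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.244.0.45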
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                      READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-6vsml   1/1     Running   12         7d4h
coredns-74ff55c5b-mww7g   1/1     Running   12         7d4h
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 3d3f6363f05ccd60e0f885f0eca6c5ff
[INFO] Reloading complete
[INFO] 10.244.0.1:16732 - 59651 "HINFO IN 6307445054232439722.7934820194057826263. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006053527s
[INFO] 127.0.0.1:58672 - 59651 "HINFO IN 6307445054232439722.7934820194057826263. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00658948s
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 3d3f6363f05ccd60e0f885f0eca6c5ff
[INFO] Reloading complete
[INFO] 10.244.0.62:56364 - 32900 "HINFO IN 2808379183970575835.6786373795048579500. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004922932s
[INFO] 127.0.0.1:48277 - 32900 "HINFO IN 2808379183970575835.6786373795048579500. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007889024s
[INFO] 10.244.0.62:49106 - 59651 "HINFO IN 6307445054232439722.7934820194057826263. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005058199s
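I did not spot an obvious error in there, but to be sure I also filtered the CoreDNS logs for error lines (assuming a plain grep is an acceptable way to do this):
kubectl logs --namespace=kube-system -l k8s-app=kube-dns | grep -i error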
kubectl get svc --namespace=kube-system
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7d4h
monitoring-influxdb   ClusterIP   10.102.51.183   <none>        8086/TCP                 4d21h
kubectl get endpoints kube-dns --namespace=kube-system
NAME       ENDPOINTS                                                   AGE
kube-dns   10.244.0.45:53,10.244.0.47:53,10.244.0.45:53 + 3 more...   7d4h
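The endpoints object is populated, but I am not sure the listed IPs still match the currently running CoreDNS pods; I assume this can be cross-checked with:
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o wide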
cat /run/systemd/resolve/resolv.conf
nameserver 8.8.8.8
nameserver 2001:4860:4860::8888
cat /etc/systemd/resolved.conf
[Resolve]
DNS=8.8.8.8 2001:4860:4860::8888
cat /etc/resolv.conf
nameserver 127.0.0.53
options edns0 trust-ad
It seems odd that the two resolv.conf files contain different values. Also, if I had to set the DNS IP manually, I have no clue which IP to choose.
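From what I read, the 127.0.0.53 entry is the systemd-resolved stub listener, and on such hosts the kubelet is supposed to be pointed at the real upstream file instead (kubeadm normally does this by itself). If I understand it right, this can be verified in the kubelet config; the expected value below is my assumption:
grep resolvConf /var/lib/kubelet/config.yaml
# expected (assumption): resolvConf: /run/systemd/resolve/resolv.conf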
kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
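To see where CoreDNS itself forwards external queries, I assume the Corefile can be inspected via its ConfigMap:
kubectl -n kube-system get configmap coredns -o yaml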
Update
The IP assigned to the dnsutils pod is 10.244.2.20, and it is not reachable from the single Kubernetes master node:
ping 10.244.2.20
The pod network is 10.244.0.0/16, with Flannel/Calico as the CNI. I purged and removed all nodes, re-joined them, and then it worked like a charm. I guess I was messing around too much with the firewall rules. I will document this in more detail later on. – Thomas Christof
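For reference, a rough sketch of the firewall openings such a setup typically needs, assuming ufw, the kubeadm default ports, and Flannel's default VXLAN backend (adjust for Calico or other backends):
sudo ufw allow 6443/tcp    # kube-apiserver
sudo ufw allow 10250/tcp   # kubelet API
sudo ufw allow 8472/udp    # Flannel VXLAN overlay traffic between nodes (assumed default port)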