I installed a clean K8s cluster on virtual machines (Debian 10). After the installation and the integration into my landscape, I checked connectivity from inside my testing Alpine image. The result: outgoing traffic does not resolve, and no information shows up in the CoreDNS log.

As a workaround I overwrote /etc/resolv.conf in my build image and replaced the DNS entries (e.g. set 1.1.1.1 as nameserver). With that temporary "hack" the connection to the internet works perfectly. But the workaround is not a long-term solution, and I want to use the official way. In the K8s CoreDNS documentation I found the forward section, and I read that directive as an option to forward queries to the predefined local resolver. I suspect that the forwarding to the local resolv.conf, and with it the resolve process, does not work correctly. Can anyone help me solve this issue?
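For reference, the temporary hack in the build image boils down to overwriting the pod's resolver file (1.1.1.1 is just an example public resolver):

# replaces the cluster DNS (10.96.0.10) with a public resolver -- works, but bypasses CoreDNS entirely
echo "nameserver 1.1.1.1" > /etc/resolv.conf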
Basic setup:
- K8s version: 1.19.0
- K8s setup: 1 master + 2 worker nodes
- Based on: Debian 10 VMs
- CNI: Flannel
Status of the CoreDNS pods:
kube-system coredns-xxxx 1/1 Running 1 26h
kube-system coredns-yyyy 1/1 Running 1 26h
CoreDNS Log:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
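This is all the pods print, even while the failing lookups are running. I pulled the log like this (assuming the default kubeadm label on the CoreDNS pods):

# tail the logs of both CoreDNS replicas
kubectl logs -n kube-system -l k8s-app=kube-dns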
CoreDNS config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: ""
  name: coredns
  namespace: kube-system
  resourceVersion: "219"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: xxx
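If I read the forward plugin docs correctly, forward . /etc/resolv.conf proxies every name outside the cluster domain to the nameservers that CoreDNS finds in the host's resolv.conf. As a test, the upstreams could presumably be named explicitly instead of going through the host file (a sketch, using my provider's resolvers from the host resolv.conf shown below):

# replace the forward line in the Corefile with explicit upstreams
forward . 213.136.95.11 213.136.95.10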
Output of the Alpine test image:
/ # nslookup -debug google.de
;; connection timed out; no servers could be reached
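To narrow down where the query dies, one could query the CoreDNS service IP and one CoreDNS pod IP directly from the Alpine pod (sketch; pod IP taken from the listing at the end):

# query the ClusterIP of the kube-dns service
nslookup google.de 10.96.0.10
# query one CoreDNS pod directly, bypassing kube-proxy
nslookup google.de 10.244.1.48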
Output of the pod's /etc/resolv.conf:
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search development.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
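Because of ndots:5, a relative name like google.de first walks all four search domains before it is sent upstream as-is; querying the fully qualified name with a trailing dot takes the search list out of the picture (sketch):

# trailing dot = absolute name, skips the search domains
nslookup google.de.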
Output of the host's /etc/resolv.conf:
cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 213.136.95.11
nameserver 213.136.95.10
search invalid
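Whether these upstreams answer at all can be verified from the node itself (sketch; assumes nslookup is installed on the Debian host, e.g. via dnsutils):

# ask the provider's resolver directly, bypassing the stub config
nslookup google.de 213.136.95.11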
Output of the host's /run/flannel/subnet.env:
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
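FLANNEL_NETWORK has to match the pod CIDR the cluster was initialized with (kubeadm init --pod-network-cidr=10.244.0.0/16 for Flannel); the per-node assignments can be compared like this (sketch):

# print each node's name and the podCIDR the control plane assigned to it
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'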
Output of kubectl get pods -n kube-system -o wide:
coredns-54694b8f47-4sm4t 1/1 Running 0 14d 10.244.1.48 xxx3-node-1 <none> <none>
coredns-54694b8f47-6c7zh 1/1 Running 0 14d 10.244.0.43 xxx2-master <none> <none>
coredns-54694b8f47-lcthf 1/1 Running 0 14d 10.244.2.88 xxx4-node-2 <none> <none>
etcd-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-apiserver-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-controller-manager-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-4w8zl 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-w7m44 1/1 Running 7 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-flannel-ds-amd64-xztqm 1/1 Running 6 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-dfs85 1/1 Running 4 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-m4hl2 1/1 Running 4 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-proxy-s7p4s 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-scheduler-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
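Since the pods' nameserver 10.96.0.10 is a ClusterIP that kube-proxy DNATs to the CoreDNS pod IPs above, every lookup has to cross the Flannel overlay between nodes. A basic cross-node reachability check from the Alpine pod (sketch; pick a CoreDNS pod IP from the listing on a different node than the test pod):

# if ICMP across the overlay works but UDP/53 does not, the overlay/iptables path for UDP is suspect
ping -c 2 10.244.1.48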