In my pods I cannot reach external hosts; in my case this would be https://login.microsoftonline.com.
I've been following the debugging DNS resolution guide at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but my limited knowledge of Kubernetes makes it hard to apply the instructions given.
Doing a local lookup works fine:
microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server: 10.152.183.10
Address: 10.152.183.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.152.183.1
However, trying to reach any external domain fails:
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find stackoverflow.com.internal-domain.com: SERVFAIL
command terminated with exit code 1
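To narrow this down, it may help to query an external resolver directly from the pod, bypassing CoreDNS entirely; this just reuses the dnsutils pod from the debugging guide:

# Ask 8.8.8.8 directly instead of the cluster DNS service. If this works,
# pod egress is fine and the problem is in CoreDNS or its upstream; if it
# also fails, the pod cannot reach external DNS at all.
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com 8.8.8.8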
The known issues section has the following paragraph:
Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
Given that the microk8s instance is running on Ubuntu, this might be worth investigating, but I have no idea where and how to apply that --resolv-conf flag.
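My best guess so far: microk8s does not use kubeadm, so the flag would presumably have to go into the kubelet args file shipped by the snap. Something along these lines, assuming the standard snap layout (I have not verified that this is the right place):

# Assumed location of the kubelet arguments file in the microk8s snap.
echo "--resolv-conf=/run/systemd/resolve/resolv.conf" | \
  sudo tee -a /var/snap/microk8s/current/args/kubelet

# Restart microk8s so kubelet picks up the new flag.
microk8s stop
microk8s start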
I am grateful for any hints on how I can track down this issue, since DNS (including nslookup, traceroute, et al.) works flawlessly on the host system.
Update: This is the host's /etc/resolv.conf:
nameserver 127.0.0.53
options edns0 trust-ad
search internal-domain.com
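So the host only points at the systemd-resolved stub on 127.0.0.53. The actual upstream servers behind that stub can be inspected with standard systemd-resolved tooling (nothing microk8s-specific here):

# Show the real upstream DNS servers behind the 127.0.0.53 stub.
resolvectl status

# The non-stub resolv.conf maintained by systemd-resolved; this is the file
# the known-issues paragraph says kubelet's --resolv-conf should point at.
cat /run/systemd/resolve/resolv.conf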
And this is the /etc/resolv.conf from within the dnsutils pod:
search default.svc.cluster.local svc.cluster.local cluster.local internal-domain.com
nameserver 10.152.183.10
options ndots:5
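Because of ndots:5, a name like stackoverflow.com (fewer than five dots) is first tried with each search suffix appended, which explains why the SERVFAIL above mentions stackoverflow.com.internal-domain.com. A trailing dot marks the name as fully qualified and skips the search list, so this is probably worth testing:

# Trailing dot = fully qualified name, so the search suffixes from
# resolv.conf are not appended before the query reaches CoreDNS.
microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com.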
This is the CoreDNS configMap:
Corefile: |
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      log . {
          class error
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . 8.8.8.8 8.8.4.4
      cache 30
      loop
      reload
      loadbalance
  }
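CoreDNS forwards everything to 8.8.8.8 and 8.8.4.4, so the configured upstream looks sane. To see what CoreDNS itself is logging (the Corefile above logs errors), and to change the upstreams if needed, something like the following should work; I am assuming the k8s-app=kube-dns label and the coredns ConfigMap name that the microk8s DNS addon appears to use:

# Tail the CoreDNS logs; forwarding problems (timeouts, loops) show up here.
microk8s kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50

# Edit the forward directive if the upstreams need changing; the Corefile
# has "reload" enabled, so CoreDNS picks up the change automatically.
microk8s kubectl -n kube-system edit configmap coredns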
Comments:
Could you try sudo iptables -P FORWARD ACCEPT and check if it's working afterwards? You mentioned that you went through the DNS troubleshooting steps. Have you checked the CoreDNS logs? Can you paste them? – acid_fuji
Can you run the microk8s inspect command and place the output here? Have you performed any changes recently? Lastly, can you try to restart microk8s (microk8s stop, then microk8s start)? – acid_fuji
The failing lookup is for stackoverflow.com.internal-domain.com. Try with stackoverflow.com. with a dot at the end instead. But really, we have no idea what is at 127.0.0.53 or how, if at all, it is able to resolve external domains. – tripleee
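Putting the suggestions from the comments together, the quick checks would look roughly like this (the FORWARD policy change is the one acid_fuji suggested; checking the current policy first seems prudent):

# Show the current FORWARD chain policy before changing anything.
sudo iptables -L FORWARD -n | head -n 1

# Suggested in the comments: allow forwarded traffic, restart microk8s,
# and collect an inspection report.
sudo iptables -P FORWARD ACCEPT
microk8s stop
microk8s start
microk8s inspect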