1 vote

In my pods I cannot reach external hosts. In my case this would be https://login.microsoftonline.com.

I've been following the debugging DNS resolution section at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but my lack of Kubernetes knowledge makes it hard to apply the instructions given there.

Doing a local lookup works fine:

microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server:         10.152.183.10
Address:        10.152.183.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.152.183.1

However, trying to reach any external domain fails:

microk8s kubectl exec -i -t dnsutils -- nslookup stackoverflow.com
Server:         10.152.183.10
Address:        10.152.183.10#53

** server can't find stackoverflow.com.internal-domain.com: SERVFAIL

command terminated with exit code 1

The known issues section has the following paragraph:

Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.

Given that the microk8s instance is running on Ubuntu, this might be worth investigating, but I have no idea where or how to apply that --resolv-conf flag.
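microk8s does not use kubeadm, so the flag has to be added to the snap's kubelet arguments file by hand. A sketch of how that could look, assuming a stock microk8s snap install (the paths below are the snap's defaults; verify them on your node before editing):

```shell
# Point kubelet at the real resolv.conf instead of the systemd-resolved stub.
# The microk8s snap keeps kubelet's arguments in this file:
echo '--resolv-conf=/run/systemd/resolve/resolv.conf' | \
  sudo tee -a /var/snap/microk8s/current/args/kubelet

# Restart so kubelet picks up the new flag:
microk8s stop && microk8s start
```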

I am grateful for any hints on how to track down this issue, since DNS (nslookup, traceroute, et al.) works flawlessly on the host system.


Update: this is the host's /etc/resolv.conf:

nameserver 127.0.0.53
options edns0 trust-ad
search internal-domain.com

And this is the /etc/resolv.conf from within the dnsutils pod:

search default.svc.cluster.local svc.cluster.local cluster.local internal-domain.com
nameserver 10.152.183.10
options ndots:5
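This resolv.conf also explains the stackoverflow.com.internal-domain.com in the error above: with options ndots:5, any name containing fewer than five dots is first tried with each search domain appended, and only then as-is. A small Python sketch of that candidate-name expansion (an illustration of the resolver's behaviour, not actual resolver code):

```python
def candidate_names(name, search_domains, ndots=5):
    """Mimic the glibc resolver's search-list expansion (simplified)."""
    # A name with a trailing dot is fully qualified: tried verbatim only.
    if name.endswith("."):
        return [name]
    absolute = name + "."
    searched = [f"{name}.{domain}." for domain in search_domains]
    # Fewer dots than ndots: try the search domains first, plain name last.
    if name.count(".") < ndots:
        return searched + [absolute]
    # Otherwise the plain name is tried first.
    return [absolute] + searched

search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "internal-domain.com"]
# "stackoverflow.com" has 1 dot < 5, so the search domains come first:
print(candidate_names("stackoverflow.com", search))
```

A trailing dot (stackoverflow.com.) makes the name fully qualified and bypasses the search list entirely, which is a quick way to separate search-list problems from upstream-forwarding problems.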

The CoreDNS ConfigMap:

Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
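For completeness, this Corefile lives in the coredns ConfigMap in the kube-system namespace and can be edited in place. A sketch, assuming the stock microk8s CoreDNS deployment (where the ConfigMap and Deployment are both named coredns):

```shell
# Open the Corefile in $EDITOR:
microk8s kubectl -n kube-system edit configmap coredns

# The `reload` plugin in the Corefile should pick the change up automatically;
# if not, restart the deployment:
microk8s kubectl -n kube-system rollout restart deployment coredns
```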
2
Just to be sure, do you have your DNS addon enabled? Can you try, for testing purposes, the command sudo iptables -P FORWARD ACCEPT and check if it's working afterwards? You mentioned that you checked DNS troubleshooting. Have you checked the CoreDNS logs? Can you paste them? – acid_fuji
Already done that, and CoreDNS is enabled. – Marco
What about the rest? Does iptables change anything? Can you also run the microk8s inspect command and paste the output? Have you performed any changes recently? Lastly, can you try to restart microk8s (microk8s stop, then microk8s start)? – acid_fuji
Yes, I followed the instructions in the documentation. It did not result in any measurable difference. – Marco
Notice how the error message says it can't resolve stackoverflow.com.internal-domain.com. Try stackoverflow.com. with a dot at the end instead. But really, we have no idea what is at 127.0.0.53 or how, if at all, it is able to resolve external domains. – tripleee

2 Answers

0 votes

In the end I could not figure out what caused this behaviour, so I did a full reset of the node:

microk8s reset
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.19

Followed by the remaining instructions to configure secrets et al.

0 votes

Change forward . 8.8.8.8 8.8.4.4 to forward . /etc/resolv.conf in the CoreDNS Corefile.
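For reference, the changed line would make CoreDNS read its upstream servers from the resolv.conf visible inside its own pod instead of hard-coding Google's resolvers (a sketch of the one changed line; the rest of the Corefile stays as in the question):

```
forward . /etc/resolv.conf
```

Note that this only helps if the resolv.conf the CoreDNS pod sees points at a reachable upstream; if kubelet hands it the systemd-resolved stub (127.0.0.53), this reintroduces exactly the forwarding loop described in the known issues section quoted in the question.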