7
votes

I would like to resolve kube-dns names from outside the Kubernetes cluster by adding a stub zone to my DNS servers. This requires changing the cluster.local domain to something that fits into my DNS namespace.
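For illustration, the zone I have in mind on the external DNS servers would look roughly like this (assuming BIND; x.y.z stands for the kube-dns service IP, as in the kubelet flags below):

// named.conf sketch: forward queries for the cluster domain to kube-dns
zone "cluster.mydomain.local" {
    type forward;
    forward only;
    forwarders { x.y.z; };
};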

The cluster DNS is working fine with cluster.local. To change the domain, I have modified the KUBELET_DNS_ARGS line in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to read:

Environment="KUBELET_DNS_ARGS=--cluster-dns=x.y.z --cluster-domain=cluster.mydomain.local --resolv-conf=/etc/resolv.conf.kubernetes"

After restarting kubelet, external names are resolvable, but Kubernetes name resolution fails.

I can see that kube-dns is still running with:

/kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2

The only place I was able to find cluster.local was in the pod's YAML configuration, which reads:

  containers:
  - args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --config-dir=/kube-dns-config
    - --v=2

After modifying the YAML and recreating the pod with

kubectl replace --force -f kube-dns.yaml

I still see kube-dns getting started with --domain=cluster.local.

What am I missing?

How'd you go with this? I'm running into the same issue, and this question is close to the top in Google results, so I suspect other people might be interested as well. – HeWhoWas
Also interested in this, did you find a solution? – simon

6 Answers

10
votes

I had a similar problem while porting a microservices-based application to Kubernetes: changing our internal DNS zone to cluster.local was going to be a fairly complex task that we didn't really want to deal with.

In our case, we switched from kube-dns to CoreDNS and simply enabled the CoreDNS rewrite plugin to translate our.internal.domain names into their ourNamespace.svc.cluster.local equivalents.

After doing this, the Corefile part of our CoreDNS ConfigMap looks something like this:

data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        rewrite name substring our.internal.domain ourNamespace.svc.cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }

This enables our Kubernetes services to respond on both the default DNS zone and our own zone.
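A quick way to check the rewrite is to resolve the same service under both zones from a pod and compare the answers (the service name here is only an assumed example):

# "some-service" is a hypothetical Service in ourNamespace
nslookup some-service.our.internal.domain
nslookup some-service.ourNamespace.svc.cluster.local
# both lookups should return the same ClusterIP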

2
votes

I deployed an internal instance of the ingress controller and added CNAME answers to the CoreDNS config. To deploy the internal nginx-ingress:

helm install int -f ./values.yml stable/nginx-ingress --namespace ingress-nginx

values.yaml:

controller:
  ingressClass: 'nginx-internal'
  reportNodeInternalIp: true
  service:
    enabled: true
    type: ClusterIP

To edit the CoreDNS config: KUBE_EDITOR=nano kubectl edit configmap coredns -n kube-system

My CoreDNS ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        reload 5s
        log
        errors
        health {
          lameduck 5s
        }
        ready
        template ANY A int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        template ANY CNAME int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }

kind: ConfigMap
metadata:
  creationTimestamp: "2020-02-27T16:02:20Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "16293672"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 8f0ebf84-6451-4f9b-a6e1-c386d44f2d43

If you now point an Ingress resource at a host under the .int domain and add the proper annotation to use the nginx-internal ingress class, you get a shorter domain. For example, you can configure it like this in the Jenkins Helm chart:

master:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-internal

    enabled: true
    hostName: jenkins.devtools.int
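With the internal ingress and the CoreDNS templates in place, a hypothetical check from a pod inside the cluster looks like this:

nslookup jenkins.devtools.int
# expected: a CNAME to int-nginx-ingress-controller.ingress-nginx.svc.cluster.local,
# which in turn resolves to the internal ingress controller's ClusterIP
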
1
votes

I assume you are using CoreDNS.

You can change the cluster base DNS domain by editing the kubelet config file on ALL nodes (located at /var/lib/kubelet/config.yaml), or by setting the clusterDomain during kubeadm init.

Change

clusterDomain: cluster.local

to:

clusterDomain: my.new.domain
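If you prefer to set this at install time, a minimal kubeadm config sketch (assuming the v1beta2 kubeadm config API, passed via kubeadm init --config) would be:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  dnsDomain: my.new.domain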

Now you also need to change the CoreDNS configuration. CoreDNS uses a ConfigMap for this. You can get your current CoreDNS ConfigMap by running

kubectl get -n kube-system cm/coredns -o yaml

Then change

kubernetes cluster.local in-addr.arpa ip6.arpa {
    ...
}

to match your new domain like this:

kubernetes my.new.domain in-addr.arpa ip6.arpa {
    ...
}

Now apply the changes to the CoreDNS ConfigMap. If you restart kubelet and your CoreDNS pods, your cluster should use the new domain.
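One way to apply and roll everything out (assuming a recent kubectl and systemd-based nodes):

kubectl -n kube-system edit configmap coredns             # change the kubernetes plugin zone as shown above
kubectl -n kube-system rollout restart deployment coredns
systemctl restart kubelet                                 # on every node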

If you have, for example, a service called grafana-service, it can now be accessed at this address: grafana-service.default.svc.my.new.domain

# kubectl get service
NAME              TYPE         CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
grafana-service   ClusterIP    <Internal-IP>   <none>        3000/TCP   100m

# nslookup grafana-service.default.svc.my.new.domain
Server:    <Internal-IP>
Address 1: <Internal-IP> kube-dns.kube-system.svc.my.new.domain

Name:      grafana-service.default.svc.my.new.domain
Address 1: <Internal-IP> grafana-service.default.svc.my.new.domain

0
votes

In addition to changing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, you should run kubeadm init with --service-dns-domain cluster.mydomain.local, which will create the correct manifest for kube-dns.

It's hard to tell why your modification didn't work without seeing your current config. Perhaps you can post the output of:

kubectl get pod -n kube-system -l k8s-app=kube-dns -o jsonpath={.items[0].spec.containers[0]}

so we can see what you have running.

-1
votes

If you have deployed Kubernetes with kubeadm, you can change cluster.local in /var/lib/kubelet/config.yaml on every node. Also change it in the kubeadm-config and kubelet-config-1.17 ConfigMaps (kube-system namespace) if you are planning to add more nodes to the cluster. And don't forget to restart the nodes.
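A rough sketch of those steps (my.new.domain is just a placeholder for your domain):

# on every node
sed -i 's/cluster\.local/my.new.domain/g' /var/lib/kubelet/config.yaml
systemctl restart kubelet    # or reboot the node, as noted above

# once, against the API server
kubectl -n kube-system edit configmap kubeadm-config
kubectl -n kube-system edit configmap kubelet-config-1.17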

-3
votes

When you modify the YAML files in /etc/kubernetes/manifests/, you need to restart kubelet for the changes to take effect.

Additionally, if that doesn't work, double-check the kubelet logs to verify that the proper YAML files are being loaded.