4
votes

I follow the example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns

But I cannot get the nslookup output shown in the example.

When I execute

kubectl exec busybox -- nslookup kubernetes

It is supposed to return

Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes
Address 1: 10.0.0.1

But I only get

nslookup: can't resolve 'kubernetes'
Server:    10.0.2.3
Address 1: 10.0.2.3

error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1

My Kubernetes cluster is running on a VM, and its ifconfig output is shown below:

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2899 (2.8 KB)  TX bytes:2343 (2.3 KB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:ed:09:81  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4735 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:367445 (367.4 KB)  TX bytes:280749 (280.7 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:1f:0d:84  
          inet addr:192.168.144.17  Bcast:192.168.144.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:330 (330.0 B)  TX bytes:1746 (1.7 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:127976 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13742978 (13.7 MB)  TX bytes:13742978 (13.7 MB)

veth142cdac Link encap:Ethernet  HWaddr e2:b6:29:d1:f5:dc  
          inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1336 (1.3 KB)  TX bytes:1336 (1.3 KB)

Here are the steps I used to start Kubernetes:

vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh 
+++ [0623 11:18:47] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kubelet
    cmd/hyperkube
    cmd/kubernetes
    plugin/cmd/kube-scheduler
    cmd/kubectl
    cmd/integration
    cmd/gendocs
    cmd/genman
    cmd/genbashcomp
    cmd/genconversion
    cmd/gendeepcopy
    examples/k8petstore/web-server
    github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
+++ [0623 11:18:52] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Starting etcd

etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null

Waiting for etcd to come up.
+++ [0623 11:18:53] etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [0623 11:18:55] apiserver:
    {
      "kind": "PodList",
      "apiVersion": "v1beta3",
      "metadata": {
        "selfLink": "/api/v1beta3/pods",
        "resourceVersion": "11"
      },
      "items": []
    }
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log
  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, open up another terminal/tab and run:

  cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
  cluster/kubectl.sh config set-context local --cluster=local
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh

Then in a new terminal window, I executed:

cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local

After that, I created the busybox pod with

kubectl create -f busybox.yaml

The content of busybox.yaml is taken from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md

1
Which startup guide did you follow? Many of them (including GCE) launch DNS for you automatically. – Robert Bailey

Thanks. I added the steps I used to start Kubernetes to the post. Please check whether anything is missing. Thanks again. – David

1 Answer

2
votes

It doesn't appear that local-up-cluster.sh supports DNS out of the box. For DNS to work, the kubelet needs to be started with the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local. These flags aren't included in the set of flags that the script passes to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution.
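As a sketch of what that change might look like (the exact kubelet start line in hack/local-up-cluster.sh varies by version, and 10.0.0.10 is an assumed service IP you'd pick from your cluster's portal IP range):

```shell
# Hypothetical edit to the kubelet invocation in hack/local-up-cluster.sh.
# The two DNS flags are appended to whatever flags the script already passes;
# 10.0.0.10 is an assumed IP from the service (portal) IP range -- adjust it
# to match your cluster's configuration.
sudo kubelet \
  --cluster_dns=10.0.0.10 \
  --cluster_domain=cluster.local \
  # ...keep the script's existing kubelet flags here...
```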

To fix this, you can modify the script to pass these two flags to the kubelet. Then, when you create the DNS service, make sure that the portalIP field of the service spec is set to the same IP address that you passed to the --cluster_dns flag (see an example here).
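For illustration, a minimal service definition might look like the following, assuming the v1beta3 API used in your logs and the same assumed 10.0.0.10 address; the selector and port values would need to match however your DNS pod is actually labeled and configured:

```shell
# Hypothetical DNS service: portalIP must equal the value given to the
# kubelet's --cluster_dns flag (10.0.0.10 is assumed here).
cat <<EOF | cluster/kubectl.sh create -f -
apiVersion: v1beta3
kind: Service
metadata:
  name: kube-dns
  labels:
    k8s-app: kube-dns
spec:
  portalIP: 10.0.0.10
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
EOF
```

Once the kubelet is restarted with the flags and the service is in place, re-running `kubectl exec busybox -- nslookup kubernetes` should resolve against the DNS service IP instead of the VM's resolver (10.0.2.3).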