1 vote

Issue in DNS lookup for StatefulSet SRV records

My YAML file:

kind: List
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: sfs-svc
    labels:
      app: sfs-app
  spec:
    ports:
    - port: 80
      name: web
    clusterIP: None
    selector:
      app: sfs-app
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    selector:
      matchLabels:
        app: sfs-app # has to match .spec.template.metadata.labels
    serviceName: "sfs-svc"
    replicas: 3 
    template:
      metadata:
        labels:
          app: sfs-app # has to match .spec.selector.matchLabels
      spec:
        terminationGracePeriodSeconds: 10
        containers:
        - name: test-container
          image: nginx
          imagePullPolicy: IfNotPresent
          command: [ "sh", "-c"]
          args:
          - while true; do
              printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE >> /var/sl/output.txt;
              printenv MY_POD_IP >> /var/sl/output.txt;
              date >> /var/sl/output.txt;
              cat /var/sl/output.txt;
              sleep 999999;
            done;
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
          - name: www
            mountPath: /var/sl
    volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        #storageClassName: classNameIfAny
        resources:
          requests:
            storage: 1Mi

$ kubectl cluster-info

Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get po,svc,statefulset

> NAME        READY   STATUS    RESTARTS   AGE
> pod/web-0   1/1     Running   0          45m
> pod/web-1   1/1     Running   0          45m
> pod/web-2   1/1     Running   0          45m
> 
> NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
> service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   46m
> service/sfs-svc      ClusterIP   None         <none>        80/TCP    45m
> 
> NAME                   READY   AGE
> statefulset.apps/web   3/3     45m
> 

PROBLEM: I am not getting a DNS record for the StatefulSet headless service.

When I try $ nslookup sfs-svc.default.svc.cluster.local, I get:

> Server:       127.0.0.53
> Address:  127.0.0.53#53
> 
> ** server can't find sfs-svc.default.svc.cluster.local: SERVFAIL
> 
It looks like you are querying an address that is not your cluster's DNS server: the kube-dns service's IP will reside in your cluster's Service CIDR (10.96.0.x), and it will not return SERVFAIL for a missing record. – mdaniel
Also, while I firmly believe that querying the wrong address is the root cause of your woes, you must also ask specifically for SRV records, as nslookup doesn't return all record types by default: nslookup -type=srv sfs-svc.default.svc.cluster.local – mdaniel
Can you also elaborate on how you ran nslookup? Was it from inside a pod? This YAML works fine for me. – Abdullah Al Maruf - Tuhin
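
For reference, the two checks suggested in the comments above can be sketched as follows; the service name and namespace match the manifest in the question, and the exact IPs will differ per cluster:

$ kubectl get svc -n kube-system kube-dns   # the CLUSTER-IP shown here is the cluster DNS server nslookup should report
$ nslookup -type=srv sfs-svc.default.svc.cluster.local   # must be run from inside a cluster pod, not from localhost

For a headless Service in front of a StatefulSet, the SRV query should return one record per ready pod, each pointing at a per-pod name such as web-0.sfs-svc.default.svc.cluster.local.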

1 Answer

1 vote

My first guess is that you are running nslookup from localhost instead of from inside a pod.

I tried the YAML and could only reproduce this problem when I ran nslookup sfs-svc.default.svc.cluster.local from localhost.

Anyway, to check the DNS entries of a Service, run nslookup from inside a pod. Here is an example:

~ $ kubectl run -it --rm --restart=Never dnsutils2 --image=tutum/dnsutils --command -- bash

root@dnsutils2:/# nslookup sfs-svc.default.svc.cluster.local
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   sfs-svc.default.svc.cluster.local
Address: 172.17.0.6
Name:   sfs-svc.default.svc.cluster.local
Address: 172.17.0.5
Name:   sfs-svc.default.svc.cluster.local
Address: 172.17.0.4

root@dnsutils2:/# exit
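
As a side note (a sketch based on the manifest in the question; exact IPs and record order will vary per cluster): from the same debug pod you can also check the SRV records the question title asks about, as well as the stable per-pod names a headless Service gives each StatefulSet pod.

root@dnsutils2:/# nslookup -type=srv sfs-svc.default.svc.cluster.local
root@dnsutils2:/# nslookup web-0.sfs-svc.default.svc.cluster.local

The SRV answer should list one record per ready pod, each targeting a name of the form web-N.sfs-svc.default.svc.cluster.local.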