
My cluster contains 1 master and 2 worker nodes, on which a Deployment with 2 replicas and 1 Service are created. When I access the Service via curl <ClusterIP>:<port> from either of the 2 worker nodes, it sometimes returns the Nginx welcome page, but other times it hangs with connection refused or a timeout.

I checked the Kubernetes Service, Pods and endpoints, and they all look fine, but I have no clue what is going on. Please advise.

vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master    Ready    master   23d   v1.12.2   192.168.205.10   <none>        Ubuntu 16.04.4 LTS   4.4.0-139-generic   docker://17.3.2
k8s-worker1   Ready    <none>   23d   v1.12.2   192.168.205.11   <none>        Ubuntu 16.04.4 LTS   4.4.0-139-generic   docker://17.3.2
k8s-worker2   Ready    <none>   23d   v1.12.2   192.168.205.12   <none>        Ubuntu 16.04.4 LTS   4.4.0-139-generic   docker://17.3.2

vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE
default       my-nginx-756f645cd7-pfdck               1/1     Running   0          5m23s   10.244.2.39      k8s-worker2   <none>
default       my-nginx-756f645cd7-xpbnp               1/1     Running   0          5m23s   10.244.1.40      k8s-worker1   <none>
kube-system   coredns-576cbf47c7-ljx68                1/1     Running   18         23d     10.244.0.38      k8s-master    <none>
kube-system   coredns-576cbf47c7-nwlph                1/1     Running   18         23d     10.244.0.39      k8s-master    <none>
kube-system   etcd-k8s-master                         1/1     Running   18         23d     192.168.205.10   k8s-master    <none>
kube-system   kube-apiserver-k8s-master               1/1     Running   18         23d     192.168.205.10   k8s-master    <none>
kube-system   kube-controller-manager-k8s-master      1/1     Running   18         23d     192.168.205.10   k8s-master    <none>
kube-system   kube-flannel-ds-54xnb                   1/1     Running   2          2d5h    192.168.205.12   k8s-worker2   <none>
kube-system   kube-flannel-ds-9q295                   1/1     Running   2          2d5h    192.168.205.11   k8s-worker1   <none>
kube-system   kube-flannel-ds-q25xw                   1/1     Running   2          2d5h    192.168.205.10   k8s-master    <none>
kube-system   kube-proxy-gkpwp                        1/1     Running   15         23d     192.168.205.11   k8s-worker1   <none>
kube-system   kube-proxy-gncjh                        1/1     Running   18         23d     192.168.205.10   k8s-master    <none>
kube-system   kube-proxy-m4jfm                        1/1     Running   15         23d     192.168.205.12   k8s-worker2   <none>
kube-system   kube-scheduler-k8s-master               1/1     Running   18         23d     192.168.205.10   k8s-master    <none>
kube-system   kubernetes-dashboard-77fd78f978-4r62r   1/1     Running   15         23d     10.244.1.38      k8s-worker1   <none>


vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   23d   <none>
my-nginx     ClusterIP   10.98.9.75   <none>        80/TCP    75s   run=my-nginx

vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get endpoints
NAME         ENDPOINTS                       AGE
kubernetes   192.168.205.10:6443             23d
my-nginx     10.244.1.40:80,10.244.2.39:80   101s
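To make the intermittency concrete, a probe loop like the following (a sketch, assuming the my-nginx ClusterIP 10.98.9.75 shown above) tallies how many requests succeed versus hang or get refused:

```shell
# Hit the my-nginx ClusterIP repeatedly and count outcomes.
# 10.98.9.75 is the ClusterIP from `kubectl get svc` above; adjust as needed.
ok=0; fail=0
for i in $(seq 1 10); do
  if curl -s -m 1 -o /dev/null 10.98.9.75:80; then
    ok=$((ok+1))
  else
    fail=$((fail+1))
  fi
done
echo "ok=$ok fail=$fail"
```

If roughly half the requests fail, that pattern is consistent with one of the two backends being unreachable, since kube-proxy load-balances across both endpoints.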
You might be better off asking this question on Server Fault. – chrisis

1 Answer


This sounds odd, but it could be that one of your pods is serving traffic and the other is not. You can try shelling into the pods (using the pod names from your kubectl get pod output):

$ kubectl exec -it my-nginx-756f645cd7-pfdck -- sh
$ kubectl exec -it my-nginx-756f645cd7-xpbnp -- sh

You can see if they are listening on port 80:

$ curl localhost:80

You can also verify that your Service has the two endpoints 10.244.1.40:80 and 10.244.2.39:80:

$ kubectl get ep my-nginx
$ kubectl get ep my-nginx -o=yaml

Also, try to connect to each one of your endpoints from a node:

$ curl 10.244.1.40:80
$ curl 10.244.2.39:80
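The two curls above can be wrapped in a small loop (a sketch, using the endpoint IPs from the question; substitute the IPs from your own kubectl get ep output) so a consistently failing endpoint stands out:

```shell
# Probe each endpoint directly from a node. An IP that always fails points
# at the pod (or the overlay-network route to its node) that is broken.
# Endpoint IPs are taken from the question's `kubectl get endpoints` output.
checked=0
for ep in 10.244.1.40:80 10.244.2.39:80; do
  if curl -s -m 2 -o /dev/null "$ep"; then
    echo "$ep OK"
  else
    echo "$ep FAILED"
  fi
  checked=$((checked+1))
done
```

If one endpoint fails only from the *other* node but works from its own, the problem is pod-to-pod networking between nodes rather than the pod itself.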