I created a 3-node Kubernetes cluster (1 master + 2 workers) on VirtualBox using the instructions here. I am using Flannel for the overlay network.
I set sysctl -w net.bridge.bridge-nf-call-iptables=1
and sysctl -w net.bridge.bridge-nf-call-ip6tables=1
on the master during installation. I hadn't set them on the workers at that time, but I set them later and rebooted both nodes.
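In case it matters: these keys only exist while the br_netfilter module is loaded, and a plain `sysctl -w` does not survive a reboot. A persistent setup would look something like this (a sketch; the file name `k8s.conf` is arbitrary, and it assumes a distro that reads `/etc/modules-load.d/` and `/etc/sysctl.d/` at boot):

```shell
# Load br_netfilter now and on every boot
# (the bridge-nf-call-* sysctl keys only exist while it is loaded).
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

# Persist the sysctls; "k8s.conf" is an arbitrary file name.
sudo tee /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system    # re-apply all sysctl configuration files
```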
I have a trivial web app written in Go, listening on port 8080. I created a pod + replication controller thus:
kubectl run foo --image=<...> --port=8080 --generator=run/v1
I am able to reach the app directly using the pod IP and port 8080.
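For reference, this is how I test against the pod IP (the jsonpath expression and the `run=foo` label are assumptions based on what the run/v1 generator sets; adjust if your labels differ):

```shell
# Grab the pod IP of the first pod labeled run=foo
# (kubectl run with --generator=run/v1 labels pods with run=<name>).
POD_IP=$(kubectl get pods -l run=foo \
  -o jsonpath='{.items[0].status.podIP}')
echo "pod IP: ${POD_IP}"
curl -s "http://${POD_IP}:8080"
```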
I also created a ClusterIP service.
kubectl expose rc foo --name=foo-http --port=8081 --target-port=8080   # ClusterIP service

# kubectl get svc foo-http
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
foo-http   ClusterIP   10.106.88.24   <none>        8081/TCP   14m
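Before suspecting the overlay network, I also sanity-check that the Service actually selected the pod (plain kubectl, nothing cluster-specific assumed):

```shell
# A ClusterIP can only forward traffic if its Endpoints object is non-empty.
kubectl get endpoints foo-http
# If ENDPOINTS is <none>, compare the Service selector with the pod labels:
kubectl describe svc foo-http | grep -i selector
kubectl get pods --show-labels
```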
When I run this from any cluster node, it hangs:
curl http://10.106.88.24:8081 # that's the ClusterIP
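To rule out curl itself, a minimal TCP probe shows the same behaviour (plain bash using its /dev/tcp pseudo-device with a 3-second timeout; the `probe` helper name is mine):

```shell
# Prints "open" if a TCP connect succeeds within 3s, else "closed/filtered".
probe() {
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null \
    && echo "open" || echo "closed/filtered"
}

probe 10.106.88.24 8081   # the ClusterIP: hangs, then reports closed/filtered
```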
By running strace, I can see that curl initiates a non-blocking connect(), then spins in a loop on poll() without the socket ever becoming ready for a read() - so the connection never goes through.

If I create a NodePort service instead, I simply get connection refused.
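For reference, the connect/poll loop on the ClusterIP was observed with an invocation along these lines (the -m 5 timeout is just so curl gives up on its own; the exact flags are my choice):

```shell
# Trace network syscalls plus poll; -f follows curl's child processes.
strace -f -e trace=network,poll curl -m 5 http://10.106.88.24:8081
```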
# cat svc_nodeport.json
apiVersion: v1
kind: Service
metadata:
  name: foo-http
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8080
    nodePort: 31123
  selector:
    app: foo

# kubectl create -f svc_nodeport.json
# kubectl get svc foo-http
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
foo-http   NodePort   10.104.78.88   <none>        8081:31123/TCP   7m
When I try connecting via port 31123:
# curl http://<node-ip>:31123    # tried on master and both workers
curl: (7) Failed connect to <node-ip>:31123; Connection refused
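These are the checks I know to try next (a sketch; it assumes kubeadm's `k8s-app=kube-proxy` label and kube-proxy running in its default iptables mode - the chain names differ in ipvs mode):

```shell
# Is kube-proxy running on every node, and is it logging errors?
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20

# Did kube-proxy program the nodePort? (run on the node being curled)
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 31123

# See whether anything at all is bound on the port:
ss -tlnp | grep 31123
```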
How do I debug this?