Cluster setup:
- OS: Ubuntu 18.04, with the recommended Kubernetes install settings
- Cluster is bootstrapped with Kubespray
- CNI is Calico
Quick Facts (when the redis Service IP is 10.233.90.37):
- Host machine: `psql 10.233.90.37:6379` => success
- Host machine: `psql 10.233.90.37:80` => success
- Pods (in any namespace): `psql 10.233.90.37:6379` => timeout
- Pods (in any namespace): `psql redis:6379` => timeout
- Pods (in any namespace): `psql redis.namespace.svc.cluster.local` => timeout
- Pods (in any namespace): `psql redis:80` => success
- Pods (in any namespace): `psql redis.namespace.svc.cluster.local:80` => success
Kubernetes Services (whether NodePort, LoadBalancer, or ClusterIP) will not forward ports other than 80 and 443 for pod clients. The pods' target ports can be anything, but requests to the Service time out unless the Service port is 80 or 443.
Requests from the host machine to a Kubernetes Service work on ports other than 80 and 443, BUT the same requests from pods fail.
Requests from pods to Services on ports 80 and 443 do work.
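The success/timeout split above can be reproduced with a small TCP probe run first on the host and then inside a pod. This is just a sketch; the assumption is that a bare TCP connect is enough to distinguish "success" from "timeout" here, with no Redis or HTTP handshake needed.

```python
# Sketch of a TCP reachability probe to compare host vs. pod behavior.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        # create_connection handles DNS resolution too, so Service names
        # like "redis.namespace.svc.cluster.local" work as `host`.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Calling e.g. `can_connect("10.233.90.37", 6379)` from the host and then from inside a pod (via `kubectl exec`) should reproduce the table above.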
```bash
user@host: curl 10.233.90.37:80
200 OK
user@host: curl 10.233.90.37:5432
200 OK
```
```bash
# ... exec into Pod
bash-4.4# curl 10.233.90.37:80
200 OK
bash-4.4# curl 10.233.90.37:5432
Error ... timeout ...
```
```bash
user@host: kubectl get NetworkPolicy -A
No resources found.
user@host: kubectl get PodSecurityPolicy -A
No resources found.
```
Example Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: namespace
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
    name: redis
  - port: 80
    protocol: TCP
    targetPort: 6379
    name: http
  selector:
    app: redis
  type: NodePort # I've tried ClusterIP, NodePort, and LoadBalancer
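Note that both Service ports point at the same `targetPort`, so the working (80) and failing (6379) requests terminate at the same container socket. A quick sketch of the mapping (values copied from the spec above):

```python
# Port mapping transcribed from the Service spec above.
service_ports = [
    {"name": "redis", "port": 6379, "targetPort": 6379},
    {"name": "http",  "port": 80,   "targetPort": 6379},
]

# Service port -> container (target) port
targets = {p["port"]: p["targetPort"] for p in service_ports}

# Both Service ports land on the same backend port, so the success/timeout
# difference must come from the network path, not from redis itself.
assert targets[80] == targets[6379] == 6379
```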
What's going on with this crazy Kubernetes Service port behavior!?
After debugging, I've found that it may be related to ufw and iptables config.
ufw settings (very permissive):
```
Status: enabled

80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
6443                       ALLOW       Anywhere
2379                       ALLOW       Anywhere
2380                       ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10251/tcp                  ALLOW       Anywhere
10252/tcp                  ALLOW       Anywhere
10255/tcp                  ALLOW       Anywhere
179                        ALLOW       Anywhere
5473                       ALLOW       Anywhere
4789                       ALLOW       Anywhere
10248                      ALLOW       Anywhere
22                         ALLOW       Anywhere
80 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
6443 (v6)                  ALLOW       Anywhere (v6)
2379 (v6)                  ALLOW       Anywhere (v6)
2380 (v6)                  ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10251/tcp (v6)             ALLOW       Anywhere (v6)
10252/tcp (v6)             ALLOW       Anywhere (v6)
10255/tcp (v6)             ALLOW       Anywhere (v6)
179 (v6)                   ALLOW       Anywhere (v6)
5473 (v6)                  ALLOW       Anywhere (v6)
4789 (v6)                  ALLOW       Anywhere (v6)
10248 (v6)                 ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
```
Kubespray deployment fails with ufw disabled, and succeeds with ufw enabled.
Once deployed, disabling ufw allows pods to connect on ports other than 80 and 443. However, the cluster crashes when ufw is disabled.
Any idea what's going on? Am I missing a port in the ufw config? It seems weird that ufw would be required for the Kubespray install to succeed.
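One hypothesis worth checking, following the ufw/iptables lead (an assumption on my part, not verified on this cluster): host-originated traffic traverses the iptables OUTPUT chain, while pod-to-Service traffic traverses FORWARD, and ufw controls the default policy for forwarded packets. A sketch of the relevant setting:

```shell
# /etc/default/ufw -- ufw's default policy for forwarded packets.
# Ubuntu ships DROP, which silently drops forwarded traffic that no
# explicit rule accepts; this could time out pod traffic while
# host-originated (OUTPUT chain) requests keep working.
DEFAULT_FORWARD_POLICY="ACCEPT"
```

If this is the cause, it would also explain why host requests succeed on every port while pod requests only succeed on ports some other rule happens to allow.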