0 votes

Cluster setup:

  • OS: Ubuntu 18.04, with the recommended Kubernetes install settings
  • Cluster is bootstrapped with Kubespray
  • CNI is Calico

Quick Facts (the redis Service IP is 10.233.90.37):

  • Host machine: psql 10.233.90.37:6379 => success
  • Host machine: psql 10.233.90.37:80 => success
  • Pods (in any namespace): psql 10.233.90.37:6379 => timeout
  • Pods (in any namespace): psql redis:6379 => timeout
  • Pods (in any namespace): psql redis.namespace.svc.cluster.local:6379 => timeout
  • Pods (in any namespace): psql redis:80 => success
  • Pods (in any namespace): psql redis.namespace.svc.cluster.local:80 => success
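
These results can be reproduced with a throwaway pod. A minimal probe sketch, assuming the redis Service defined below and a busybox image whose nc supports -z/-w:

```bash
# Probe both Service ports from a short-lived pod. Expected here:
# port 80 reports open, port 6379 times out.
kubectl run nettest --rm -it --restart=Never --image=busybox:1.36 -- \
  sh -c 'nc -zv -w 5 redis.namespace.svc.cluster.local 80; \
         nc -zv -w 5 redis.namespace.svc.cluster.local 6379'
```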

A Kubernetes Service (NodePort, LoadBalancer, or ClusterIP) will not forward any port other than 80 and 443 for pod clients. The target pod ports can be anything, but requests from pods to the Service time out unless the Service port is 80 or 443.

Requests from the host machine to a Kubernetes Service on ports other than 80 and 443 work, but the same requests from pods fail. Requests from pods to Services on ports 80 and 443 do work.

```bash
user@host: curl 10.233.90.37:80
200 OK
user@host: curl 10.233.90.37:5432
200 OK
```

```bash
# ... exec into Pod
bash-4.4# curl 10.233.90.37:80
200 OK
bash-4.4# curl 10.233.90.37:5432
Error ... timeout ...
```

```bash
user@host: kubectl get NetworkPolicy -A
No resources found.
user@host: kubectl get PodSecurityPolicy -A
No resources found.
```

Example service:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: namespace
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
    name: redis
  - port: 80
    protocol: TCP
    targetPort: 6379
    name: http
  selector:
    app: redis
  type: NodePort # I've tried ClusterIP, NodePort, and LoadBalancer
```
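
To rule out a missing-endpoints problem, the Service can also be inspected directly. A quick sketch, assuming the namespace `namespace` from the manifest above:

```bash
# Both ports should map to the same redis pod endpoint on 6379.
kubectl -n namespace get endpoints redis
kubectl -n namespace describe svc redis   # lists Port/TargetPort (and NodePort) pairs
```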

What's going on with this crazy Kubernetes Service port behavior!?

After debugging, I've found that it may be related to ufw and iptables config.
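
One way to check this is to compare kube-proxy's NAT rules with ufw's forward policy. A diagnostic sketch (the chain names are the kube-proxy iptables-mode and ufw defaults, not verified on this cluster):

```bash
# Does kube-proxy have DNAT rules for the Service IP on both ports?
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.233.90.37
# Is ufw inserting rules ahead of Kubernetes' FORWARD rules?
sudo iptables -L FORWARD -n --line-numbers | head -n 20
# ufw's default forward policy ("deny (routed)") can drop pod traffic.
sudo ufw status verbose
```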

ufw settings (very permissive):

```
Status: enabled
80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
6443                       ALLOW       Anywhere
2379                       ALLOW       Anywhere
2380                       ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10251/tcp                  ALLOW       Anywhere
10252/tcp                  ALLOW       Anywhere
10255/tcp                  ALLOW       Anywhere
179                        ALLOW       Anywhere
5473                       ALLOW       Anywhere
4789                       ALLOW       Anywhere
10248                      ALLOW       Anywhere
22                         ALLOW       Anywhere
80 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
6443 (v6)                  ALLOW       Anywhere (v6)
2379 (v6)                  ALLOW       Anywhere (v6)
2380 (v6)                  ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10251/tcp (v6)             ALLOW       Anywhere (v6)
10252/tcp (v6)             ALLOW       Anywhere (v6)
10255/tcp (v6)             ALLOW       Anywhere (v6)
179 (v6)                   ALLOW       Anywhere (v6)
5473 (v6)                  ALLOW       Anywhere (v6)
4789 (v6)                  ALLOW       Anywhere (v6)
10248 (v6)                 ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
```

Kubespray deployment fails with ufw disabled but succeeds with ufw enabled.

Once the cluster is deployed, disabling ufw allows pods to connect on ports other than 80 and 443; however, the cluster then crashes.

Any idea what's going on? Am I missing a port in the ufw config? It seems weird that ufw would be required for the Kubespray install to succeed.
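
In case it helps, the next thing I plan to try is opening ufw for the cluster's internal subnets. A sketch, assuming Kubespray's default kube_pods_subnet (10.233.64.0/18) and kube_service_addresses (10.233.0.0/18); adjust to your inventory values:

```bash
# Allow traffic from the pod and service networks, and let ufw
# forward routed traffic (subnets are assumed Kubespray defaults).
sudo ufw allow from 10.233.64.0/18 comment 'k8s pod network'
sudo ufw allow from 10.233.0.0/18 comment 'k8s service network'
sudo ufw default allow routed
```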

Comments:

- Rajesh Gupta: Can you share the YAML of the service?
- Shain Lafazan: Hi, thanks for the comment -- I've added it above.
- Shain Lafazan: From the behavior, it seems like a permissions issue for the Pod class, but I have no idea where to find it, and the Kubernetes documentation doesn't seem to mention it.
- Markus Dresch: The service you posted listens on port 80 and forwards to port 6379. Could you post a service definition that does not listen on port 80?
- Shain Lafazan: Hi @MarkusDresch -- I'm using the same definition, just changing the "port" from 80 to 6379. The service allows Pods to connect through "port: 80" but not "port: 6379". For example, this works: `port: 80, targetPort: 6379` (connecting to "redis:80" succeeds). This fails: `port: 6379, targetPort: 6379` (connecting to "redis:6379" times out).

1 Answer

2 votes

A LoadBalancer Service exposes one external IP that external clients or users will use to connect to your app. In most cases, you would expect your LoadBalancer Service to listen on port 80 for HTTP traffic and port 443 for HTTPS, because you would want your users to type http://yourapp.com or https://yourapp.com instead of http://yourapp.com:3000.

It also looks like you are mixing different service types in your example service YAML; for example, nodePort is only used when the service is of type NodePort. You may try:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
    role: master
    tier: backend
  name: redis
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 6379    # the Service will target containers on port 6379
    name: someName
  selector:
    app: redis
    role: master
    tier: backend
  type: LoadBalancer
```
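
To apply and verify, a short sketch (`redis-service.yaml` is a hypothetical filename for the manifest above):

```bash
# Apply the Service; clients then reach redis via port 80.
kubectl apply -f redis-service.yaml
kubectl get svc redis -o wide   # for LoadBalancer, wait for EXTERNAL-IP
```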