1 vote

I'm trying to expose the Kubernetes dashboard publicly via an Ingress on a single-master bare-metal cluster. The issue is that the LoadBalancer service (the nginx ingress controller) I'm using is not opening ports 80/443, which I would expect it to use. Instead it gets assigned random NodePorts from the 30000-32767 range. I know I can change this range with --service-node-port-range, but I'm quite certain I didn't have to do that a year ago on another server. Am I missing something here?
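
For reference, if I did want to change that range, I believe it is a flag on the API server; assuming a kubeadm-style install (where the API server runs as a static pod), the edit would look roughly like this (the range below is just an example):

# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm static pod, sketch)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # example range; the default is 30000-32767
    # ... remaining flags unchanged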

Currently this is my stack/setup (clean install of Ubuntu 16.04):

  • Nginx Ingress Controller (installed via helm)
  • MetalLB
  • Kubernetes Dashboard
  • Kubernetes Dashboard Ingress to deploy it publicly on <domain>
  • Cert-Manager (installed via helm)

k8s-dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
  tls:
  - hosts:
    - <domain>
    secretName: kubernetes-dashboard-staging-cert
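
(For reference, on clusters where extensions/v1beta1 is no longer served, I understand the same Ingress would be written against networking.k8s.io/v1, roughly like this; untested sketch:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
  - host: <domain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - <domain>
    secretName: kubernetes-dashboard-staging-cert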

This is what my kubectl get svc -A looks like:

NAMESPACE              NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
cert-manager           cert-manager                    ClusterIP      10.101.142.87    <none>          9402/TCP                     23h
cert-manager           cert-manager-webhook            ClusterIP      10.104.104.232   <none>          443/TCP                      23h
default                kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP                      6d6h
ingress-nginx          nginx-ingress-controller        LoadBalancer   10.100.64.210    10.65.106.240   80:31122/TCP,443:32697/TCP   16m
ingress-nginx          nginx-ingress-default-backend   ClusterIP      10.111.73.136    <none>          80/TCP                       16m
kube-system            kube-dns                        ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP       6d6h
kubernetes-dashboard   cm-acme-http-solver-kw8zn       NodePort       10.107.15.18     <none>          8089:30074/TCP               140m
kubernetes-dashboard   dashboard-metrics-scraper       ClusterIP      10.96.228.215    <none>          8000/TCP                     5d18h
kubernetes-dashboard   kubernetes-dashboard            ClusterIP      10.99.250.49     <none>          443/TCP                      4d6h
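
If I read the PORT(S) column correctly, 80:31122/TCP means port 80 is served on the service's EXTERNAL-IP (10.65.106.240, assigned by MetalLB), while 31122 is the NodePort on each node's own IP. So 80/443 are only reachable on the MetalLB address, e.g.:

curl -D- --insecure https://10.65.106.240 -H 'Host: <domain>'   # works, 80/443 live on the MetalLB IP
curl -D- http://<public_ip>:31122 -H 'Host: <domain>'           # works, but only via the NodePort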

Here are some more examples of what's happening:

  1. curl -D- http://<public_ip>:31122 -H 'Host: <domain>'

    • returns a 308 redirect, as the protocol is http, not https. This is expected.
  2. curl -D- http://<public_ip> -H 'Host: <domain>'

    • curl: (7) Failed to connect to <public_ip> port 80: Connection refused
    • port 80 is closed
  3. curl -D- --insecure https://10.65.106.240 -H "Host: <domain>"

    • reaching the dashboard through the internal IP obviously works, and I get the correct k8s-dashboard HTML.
    • --insecure is because Let's Encrypt isn't working yet, as the ACME challenge on port 80 is unreachable.

So to recap, how do I get 2. working, i.e. reaching the service through ports 80/443?
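
I assume nothing on the host itself is listening on 80/443 (which would explain the connection refused); a quick way to confirm that on the node would be something like:

sudo ss -tlnp | grep -E ':(80|443) '   # should come back empty if nothing binds 80/443 on the host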

EDIT: nginx-ingress-controller service YAML

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-12T20:20:45Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.1
    component: controller
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "1785264"
  selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-controller
  uid: b3ce0ff2-ad3e-46f7-bb02-4dc45c1e3a62
spec:
  clusterIP: 10.100.64.210
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31122
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32697
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.65.106.240

EDIT 2: MetalLB ConfigMap YAML

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      -  10.65.106.240-10.65.106.250
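
As far as I can tell, this pool only contains LAN addresses, so MetalLB will only ever hand the ingress controller an internal IP like 10.65.106.240 and the public IP stays untouched. If I wanted the LoadBalancer itself to answer on the public IP, I guess the pool would have to contain it, something like (the address below is hypothetical):

    address-pools:
    - name: public
      protocol: layer2
      addresses:
      - <public_ip>/32
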
Which chart did you use? The default value should be 80:80/TCP,443:443/TCP with the official helm chart. Can you add the YAML of the nginx-ingress-controller service? – Jean-Philippe Bond
@Jean-PhilippeBond helm install nginx-ingress --namespace ingress-nginx stable/nginx-ingress is the exact command I've used for the ingress controller. How do I check the current YAML? With kubectl edit? – dvdblk
kubectl get svc nginx-ingress-controller -n ingress-nginx -o yaml – Jean-Philippe Bond
I'm going to reproduce that. That is why I've asked for the config. – Nick
@dvdblk you have 2 entry points to your ingress controller: (1) through 10.65.106.240 and (2) through node_ip:31122|32697. Everything else is going to fail. What is the public_ip in your question? – suren

2 Answers

1 vote

So, to solve the 2nd question: as I suggested, you can use the hostNetwork: true parameter to map a container port to the host it is running on. Note that this is not a recommended practice, and you should avoid doing this unless you have a good reason to.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80           # this parameter is optional, but recommended when using host network
      name: nginx

When I deploy this yaml, I can check where the pod is running and curl that host's port 80.

root@v1-16-master:~# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP                NODE             NOMINATED NODE   READINESS GATES
nginx                    1/1     Running   0          105s    10.132.0.50       v1-16-worker-2   <none>           <none>

Note: now I know the pod is running on worker node 2. I just need its IP address.

root@v1-16-master:~# kubectl get no -owide
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
v1-16-master     Ready    master   52d   v1.16.4   10.132.0.48   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-1   Ready    <none>   52d   v1.16.4   10.132.0.49   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-2   Ready    <none>   52d   v1.16.4   10.132.0.50   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-3   Ready    <none>   20d   v1.16.4   10.132.0.51   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
root@v1-16-master:~# curl 10.132.0.50 2>/dev/null | grep title
<title>Welcome to nginx!</title>
root@v1-16-master:~# kubectl delete po nginx
pod "nginx" deleted
root@v1-16-master:~# curl 10.132.0.50
curl: (7) Failed to connect to 10.132.0.50 port 80: Connection refused

And of course it also works if I go to the public IP on my browser.
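
In your case the same idea would be applied to the ingress controller itself rather than a plain nginx pod. Since you installed it with the stable/nginx-ingress chart, that should be roughly the following (I'm quoting the value names from memory, so double-check them against the chart's values.yaml):

helm upgrade nginx-ingress stable/nginx-ingress --namespace ingress-nginx \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set controller.dnsPolicy=ClusterFirstWithHostNet

With hostNetwork the controller binds 80/443 directly on every node it runs on, so the node's public IP answers on those ports instead of only on a NodePort.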

-2 votes

update:

I didn't see the edit part of the question when I was writing this answer. It doesn't make sense given the additional info provided, so please disregard it.

original:

Apparently the cluster you are using now has its ingress controller set up behind a NodePort-type service instead of a LoadBalancer. In order to get the desired behavior you need to change the configuration of the ingress controller; refer to the NGINX ingress controller documentation for MetalLB setups for how to do this.