4 votes

I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud). I created a Service to access my application, but I need to reach it from outside the cluster, so I created an Ingress and installed ingress-nginx in the cluster.

This is the YAML I'm using after making several attempts:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  # selector:
    # app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80

I applied the YAML like this: kubectl create -f file.yaml

In the /etc/hosts file I added k8s.local pointing to the IP of the master server.

When I run the following command, either on the master server or from outside it, I get a "Connection refused" message:

$ curl http://172.16.0.18:80/ -H 'Host: k8s.local'

I do not know if it's important, but I'm using Flannel in the cluster.

My idea is just to create a 'hello world' and expose it outside the cluster!

Do I need to change anything in the configuration to allow this access?


YAML file edited:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s.local
    http:
      paths:
      - path: /teste
        backend:
          serviceName: nginx
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: echoserver
        image: nginx
        ports:
        - containerPort: 80
Are you running a minikube? – prichrd

The Service manifest will not be able to find the pods due to a wrong selector. You should use a selector matching the pod labels, not the deployment name. In your example it should be selector: app: nginx. – Bal Chua

It's not minikube, as I wrote! Regarding 'app', I was in doubt and changed it because of an example I had seen on Stack Overflow. Thanks for pointing that out, @BalChua! – user2831852

5 Answers

4 votes

You can deploy the ingress controller as a DaemonSet with host port 80. The controller's Service will then not matter, and you can point your domain to every node in your cluster.
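
A minimal sketch of that approach (not a drop-in manifest): it assumes the ingress-nginx namespace, service account and RBAC from the standard install manifests already exist, and the image tag is only an example.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount   # assumed to exist from the standard install manifests
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1   # example tag
        args:
        - /nginx-ingress-controller
        ports:
        - name: http
          containerPort: 80
          hostPort: 80        # binds port 80 on every node the DaemonSet runs on
        - name: https
          containerPort: 443
          hostPort: 443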

You can also use a NodePort-type Service, but that will force you to use a port in the 30000 range; you will not be able to use port 80.

Of course the best solution is to use a cloud provider with a load balancer

1 vote

You can make it work with a plain nginx pod, but the recommended method is to install a Kubernetes ingress controller; since you are already using nginx, you can install the NGINX ingress controller.

Here is some information on how to install it.

If you want to allow external access, you can also expose the nginx ingress controller as a LoadBalancer service. You can also use NodePort, but then you will have to manually point a load balancer at the allocated port on your Kubernetes nodes.
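
For illustration, a Service for the controller could look roughly like this; the namespace and the app label are assumptions and must match whatever labels your controller pods actually carry, and on a bare-metal cluster the LoadBalancer type only gets an external IP if something in your environment can provision one.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer        # behaves like a NodePort service while no external IP is provisioned
  selector:
    app: ingress-nginx      # assumed label on the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443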

And yes, the selector on the Service needs to be:

selector:
  app: nginx

1 vote

In this case NodePort would work. It opens a high port number on every node (the same port on every node), so you can reach the Service through any of them. Put a load balancer in front if you want, and point its backend pool at the instances you have running. Do not use ClusterIP; it is only for internal usage.
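
A sketch of that, reusing the nginx Service from the question; the nodePort value is just an example and must fall inside the cluster's NodePort range (30000-32767 by default).

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080    # example; any free port in the configured NodePort range works
    protocol: TCP

After that the app should answer on http://<any-node-ip>:30080/.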

1 vote

If you run your cluster on bare metal, you need to tell the nginx-ingress controller to use hostNetwork: true, added in the template/spec part of mandatory.yaml. That way the pod running the ingress controller listens on ports 80 and 443 of the host node.
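
For illustration, the relevant fragment of the controller Deployment inside mandatory.yaml would look roughly like this; only hostNetwork (and usually dnsPolicy) is the actual addition, the rest mirrors the usual manifest and the image tag is only an example.

spec:
  template:
    spec:
      hostNetwork: true                      # controller binds ports 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet     # usually set together with hostNetwork so cluster DNS still resolves
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1   # example tag
        args:
        - /nginx-ingress-controller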

1 vote

https://github.com/alexellis/inlets is the easiest way of doing what you want.
Note: encryption requires wss://, which requires TLS certificates. If you want fully automated encryption plus the ability to use inlets as a Layer 4 LB, you should use inlets PRO; it's very cheap compared to other cloud alternatives.

I've also been able to set up the OSS / non-Kubernetes-operator version of inlets with encryption / wss (WebSockets Secure), using the open-source version of inlets as a Layer 7 LB. (It just took some manual configuration and wasn't fully automated like the pro version.)

Following https://blog.alexellis.io/https-inlets-local-endpoints/ I was able to get public-internet HTTPS + the nginx ingress controller working against minikube, with 2 sites routed using Ingress objects. It took ~3-4 hours with no good guide for doing it and being new to Caddy/WebSockets, though I know Kubernetes Ingress well.
Basically:
Step 1.) Create a $0.007/hour or $5/month VPS on DigitalOcean with a public IP.
Step 2.) Point mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com to the public IP of the VPS.
Step 3.) SSH into the machine and install inlets + Caddy v1.0.3 + a Caddyfile; here's mine:

mysite1.com, *.mysite1.com, mysite2.com, *.mysite2.com

proxy / 127.0.0.1:8080 {
  transparent
}

proxy /tunnel 127.0.0.1:8080 {
  transparent
  websocket
}

tls {
    max_certs 10
}


Step 4.) Deploy one inlets Deployment on the Kubernetes cluster, use wss to connect to your VPS, and point the inlets client at an ingress controller Service of type ClusterIP.
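
A rough sketch of what that Deployment could look like; the image tag, the Secret name, the VPS domain, and the in-cluster Service name of the ingress controller are all placeholders you would replace with your own values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inlets-client
  template:
    metadata:
      labels:
        app: inlets-client
    spec:
      containers:
      - name: inlets-client
        image: inlets/inlets:2.7.0                     # placeholder tag; use the same release as on the VPS
        command: ["inlets"]
        args:
        - client
        - --remote=wss://mysite1.com                   # wss:// so the tunnel is encrypted by Caddy's TLS cert
        - --upstream=http://ingress-nginx.ingress-nginx.svc.cluster.local:80   # placeholder: the ingress controller's ClusterIP Service
        - --token=$(TOKEN)
        env:
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: inlets-token                       # placeholder Secret holding the shared inlets token
              key: token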


The basics of what's happening are:
1.) Caddy leverages Let's Encrypt (free) to automatically get HTTPS certs for every website you point at the Caddy server.
2.) Your inlets Deployment starts a bidirectional VPN tunnel over WebSockets with the VPS that has a public IP. (Warning: the tunnel will only be encrypted if you specify wss, and that requires the server to have a TLS cert, which it gets from Let's Encrypt.)
3.) Caddy is now a public L7 LB/reverse proxy that terminates HTTPS and forwards to your ingress controller over the encrypted WebSockets VPN tunnel. From there it's normal-ish ingress.
4.) Traffic flow: DNS -(resolves IP)-> (HTTPS) VPS/L7 reverse proxy -encrypted VPN tunnel-> inlets pod from the inlets Deployment -L7 cleartext redirect within the cluster network to-> ingress controller Service -> ingress controller pod -L7 redirect to-> the ClusterIP Services/sites defined by your Ingress objects.
4.) Traffic Flow: DNS -(resolves IP)-> (HTTPS)VPS/L7 ReverseProxy - encrypted VPNtunnel-> Inlets pod from Inlets Deployment -L7 cleartext in cluster network redirect to -> Ingress Controller Service -> Ingress Controller Pod -L7 redirect to-> Cluster IP services/sites defined by ingress objs.