15
votes

So I'm using Kubernetes for a side project and it's great. It's cheap to run for a small project like mine (a small cluster of 3-5 instances gives me basically everything I need for ~$30/month on GCP).

The only area where I'm struggling is in trying to use the Kubernetes Ingress resource to map into the cluster and fan out to my microservices (they're small Go or Node backends). I have the configuration set up for the ingress to map to different services, and there's no problem there.

I understand that you can easily have GCP spin up a LoadBalancer when you create an Ingress resource. This is fine, but it also adds roughly another $20/month to the cost of the project. Once/if this thing gets some traction, that could be ignored, but for now, and for the sake of understanding Kubernetes better, I want to do the following:

  • get a static IP from GCP,
  • use it with an Ingress resource,
  • host the load balancer in the same cluster (using the nginx ingress controller), and
  • avoid paying for the external load balancer.

Is there any way this can even be done using Kubernetes and ingress resources?

Thanks!

Happy to post my existing configs if needed — just curious first if this is even something you can do :) – markthethomas
Not to mention that many K8s tools leave inactive load balancers behind; for me it went up to $30 a month just for useless load balancers. – Ray Foss

4 Answers

7
votes

Yes, this is possible. Deploy your ingress controller, and expose it with a NodePort service. Example:

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32080
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    nodePort: 32443
    protocol: TCP
    name: https
  selector:
    k8s-app: nginx-ingress-controller

Now, create an ingress with a DNS entry:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service #obviously point this to a valid service + port
          servicePort: 80

Now, assuming your static IP is attached to a Kubernetes node that's running kube-proxy, update DNS to point at that static IP, and you should be able to visit myapp.example.com:32080; the ingress will route you back to your app.
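A quick way to sanity-check the routing before DNS propagates (the IP below is a placeholder for your static IP):

curl -H "Host: myapp.example.com" http://203.0.113.10:32080/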

A few additional things:

If you want to use a lower port than 32080, bear in mind that if you're using CNI networking, you'll have trouble with hostPort. It's recommended to have a load balancer listening on port 80; you could set nginx up to do a proxy pass, but that gets difficult. This is why a load balancer with your cloud provider is recommended :)
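For reference, the hostPort approach mentioned above looks roughly like this (a sketch only; whether hostPort works depends on your CNI plugin, and the image name is a placeholder):

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      containers:
      - name: nginx-ingress-controller
        image: <your-ingress-controller-image> # placeholder
        ports:
        - containerPort: 80
          hostPort: 80   # binds port 80 directly on each node
        - containerPort: 443
          hostPort: 443  # binds port 443 directly on each node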

4
votes

TL;DR: If you want to serve your website/webservice on ports below 30000, then no, it's not possible. If someone finds a way to do it, I'd be eager to know how.

The two main approaches I used while trying to serve on a port below 30000 were:

  • Setting the nginx-ingress controller service to type NodePort, listening on ports 80 and 443. However, this results in the following error:
    Error: UPGRADE FAILED: Service "nginx-ingress-controller" is invalid:
    spec.ports[0].nodePort: Invalid value: 80: provided port is not in the
    valid range. The range of valid ports is 30000-32767
    
    The way to work around this error is to change the --service-node-port-range flag passed when starting kube-apiserver. However, this setting can't be accessed on GCP (see the sketch after this list for what the flag looks like on a cluster you control). If you'd like to try it yourself, you can check out the instructions here: Kubernetes service node port range
  • Following the steps in the thread Expose port 80 and 443 on Google Container Engine without load balancer. This relies on using an externalIP attribute attached to a service of type: ClusterIP. At first glance, this seems like an ideal solution. However, there is a bug in the way the externalIP attribute works: it does not accept an external, static IP, but rather an internal, ephemeral one.
    If you hardcode an internal, ephemeral IP in the externalIP field and then attach an external, static IP to one of the nodes in your cluster through the GCP Console, requests are successfully routed. However, this is not a viable solution, because you've now hardcoded an ephemeral IP in your service definition, so your website will inevitably go offline as the nodes' internal IPs change.
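For reference, on a cluster where you do control the API server (e.g. kubeadm, not GKE), the workaround from the first bullet is a single flag; a sketch, assuming the default kubeadm manifest path:

# in /etc/kubernetes/manifests/kube-apiserver.yaml, under the kube-apiserver command:
- --service-node-port-range=80-32767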

If you are okay with serving on ports above 30000, see my instructions below.


How to remove the LoadBalancer (only allows serving on ports > 30000)

I've tried removing my LoadBalancer, and this is the best solution I could come up with. It has the following flaws:

  • The ports used to access the webpage are not the usual 80 and 443 because exposing these ports from a node is not trivial. I'll update later if I figure it out.

And the following benefits:

  • There's no LoadBalancer.
  • The IP of the website/webservice is static.
  • It relies on the popular nginx-ingress Helm chart.
  • It uses an ingress, allowing complete control over how requests are routed to your services based on the paths of the requests.

1. Install the ingress service and controller

Assuming you already have Helm installed (if you don't, follow the steps here: Installing Helm on GKE), create an nginx-ingress of type NodePort.

helm install \
  --name nginx-ingress \
  stable/nginx-ingress \
  --set rbac.create=true \
  --set controller.publishService.enabled=true \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443
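You can verify the NodePorts were assigned as requested (the service name below assumes the chart's default naming for a release called nginx-ingress):

kubectl get svc nginx-ingress-controller
# the PORT(S) column should read 80:30080/TCP,443:30443/TCP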

2. Create the ingress resource

Create the ingress definition for your routing.

# my-ingress-resource.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: reverse-proxy
  namespace: production # Namespace must be the same as that of target services below.
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # Set to true once SSL is set up.
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: backend
          servicePort: 3000
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80

Then install it with

kubectl apply -f my-ingress-resource.yaml
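Then confirm it exists and has been picked up by the controller:

kubectl get ingress -n production
# reverse-proxy should be listed; the rules take effect once the controller syncs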

3. Create a firewall rule

Find the tag of your cluster.

gcloud compute instances list

If your cluster instances have names like

gke-cluster-1-pool-1-fee097a3-n6c8
gke-cluster-1-pool-1-fee097a3-zssz

Then your cluster tag is gke-cluster-1-pool-1-fee097a3.

Go to the GCP firewall page. Verify that you have the right project selected in the navbar.

Click "Create Firewall Rule". Give the rule a decent name. You can leave most of the settings as defaults, but past your cluster tag under "Target tags". Set the Source IP Ranges to 0.0.0.0/0. Under Protocols and Ports, change "Allow all" to "Specified protocols and ports". Check the TCP box, and put 30080, 30443 in the input field. Click "Create".

4. Create a static IP

Go to https://console.cloud.google.com/networking/addresses/ and click "Reserve Static Address". Give it a descriptive name, and select the correct region. After selecting the correct region, you should be able to click the "Attached to" dropdown and select one of your Kubernetes nodes. Click "Reserve".
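The CLI equivalent is to promote the node's current ephemeral external IP to a static one (the name, IP, and region below are placeholders):

gcloud compute addresses create my-ingress-ip \
  --addresses <current-node-external-ip> \
  --region us-central1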

5. Test the configuration

After reserving the static IP, find out which static IP was granted by looking at the External IP Address list.

Copy it into your browser and tack on a port (http://<your-ip>:30080 for HTTP or https://<your-ip>:30443 for HTTPS). You should see your webpage.
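Or from a terminal (placeholder IP; -k is needed for the HTTPS check because the certificate won't match a bare IP):

curl http://203.0.113.10:30080/
curl -k https://203.0.113.10:30443/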

0
votes

You can also deploy the nginx-ingress chart, have it pull an ephemeral IP, and then upgrade that IP to static. This leaves you with an L7 single-zone load balancer.

This guide walks through it. You can ignore the TLS parts if you use kube-lego, which works just as well with nginx-ingress:

https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/static-ip
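As I understand the guide, the gist is to promote the ephemeral IP the controller's service received and then pin it on that service; a sketch, assuming the stable chart's controller.service.loadBalancerIP setting and placeholder names/region:

gcloud compute addresses create nginx-ingress-ip \
  --addresses <current-ephemeral-lb-ip> \
  --region us-central1

helm upgrade nginx-ingress stable/nginx-ingress \
  --set controller.service.loadBalancerIP=<that-static-ip>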

0
votes

This comes from the original source, which contains DigitalOcean details I never used. It honestly saved my life, and it is possible to use ports under 30000 this way, but I honestly am not sure how it works.

My setup uses this Nginx ingress controller. Install it using Helm, and provide it a configuration file:

$ helm install my-nginx ingress-nginx/ingress-nginx -f config.yaml

The configuration file should contain:

controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true

You can find the default values here, but I have no idea how to make sense of that config.
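Since hostNetwork: true puts the controller pods in each node's network namespace, a quick sanity check is that the controller pod IPs match the node IPs and that a node answers on plain port 80 (placeholder address):

kubectl get pods -o wide   # the controller pod IPs should equal the node IPs
curl http://<node-external-ip>/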

After that you can create your ingress yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    cert-manager.io/cluster-issuer: "letsencrypt"
    nginx.ingress.kubernetes.io/server-alias: "example.com"
  labels:
    app: web-app
spec:
  tls:
    - hosts:
      - example.com
      secretName: prod-certs
  rules:
    - host: example.com
      http: 
        paths:
        - backend:
            serviceName: myservice
            servicePort: 443

This is mine; it might not work for you, but try it!

The service the ingress rule points to is of type NodePort:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  labels:
    app: myservice
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 80

But I believe ClusterIP works as well.

Outside of that, one of the VMs has a public static IP, and we use that IP for our domain name.

So the process, I believe, is: the domain name resolves to that static IP. The traffic then hits the ingress controller; I have no idea how this works internally, but there your traffic gets matched against a rule and is redirected to the service. The ports are defined in the Ingress, so you can also use ports under 30000, but I have no idea how this "solution" performs, and I also have no idea how the ingress controller can accept traffic if it is not exposed.

Setting up Ingress was possibly one of the worst experiences I had, and I actually went with this chaos approach because working with LoadBalancer service types was even worse. Best of luck!