7
votes

We are running an API server on GKE (Google Kubernetes Engine). We handle authorization using Google Cloud Endpoints and API keys, and we whitelist certain IP addresses on every API key. To make this work we had to switch from a LoadBalancer service to an ingress for exposing our API server, because the IP whitelisting does not work with the LoadBalancer service. Our ingress setup now looks similar to this:

apiVersion: v1
kind: Service
metadata:
  name: echo-app-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: esp-echo
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "RESERVED_IP"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: SECRET_NAME
  defaultBackend:
    service:
      name: echo-app-nodeport
      port:
        number: 80

This setup works, and the IP whitelisting works as well. My concern is primarily with the NodePort service that seems to be required to make the ingress work. I have read multiple sources [1][2] that strongly advise against using NodePort services to expose your application, yet most examples I find use this NodePort + Ingress combination. Can we safely use this setup, or should we migrate to another ingress controller (NGINX, Traefik, ...)?

Are you using the GCE ingress controller? – Arghya Sadhu
I have all my services running as type ClusterIP and have Ingresses using those services. Have you tried this? My suggestion would be to just use the NGINX Ingress Controller with services of type ClusterIP. – Michael Johann
@ArghyaSadhu Not sure how to check. If that is the default when creating an ingress on GKE, then yes; I applied a YAML very similar to my example. – Georges Lorré

2 Answers

1
votes

You can use ClusterIP services for all your workload pods and have one LoadBalancer service to expose the ingress controller itself outside the cluster. That way you avoid NodePort services completely.
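
A minimal sketch of that pattern, assuming an NGINX ingress controller deployed in an ingress-nginx namespace (the names and the selector labels below are placeholders; the real selector must match how your controller is actually deployed):

apiVersion: v1
kind: Service
metadata:
  name: echo-app-clusterip
spec:
  # Workload service: only reachable from inside the cluster
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: esp-echo
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  # The single LoadBalancer service in the cluster; it exposes the
  # ingress controller pods, which then route to ClusterIP services.
  # The selector assumes the standard ingress-nginx labels.
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller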

1
votes

My suspicion is that the GKE ingress actually runs outside of your GKE cluster and forwards traffic into the cluster over the NodePort. That would explain why combining the GKE ingress with services exposed over ClusterIP doesn't work.

If you deploy an NGINX Ingress Controller in your GKE cluster instead, it acts as an ingress gateway from within the cluster (rather than forwarding into it) and can therefore reach services exposed over ClusterIP.
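
As a sketch, the Ingress could then point straight at a ClusterIP service (the nginx ingress class and the echo-app-clusterip service name are assumptions carried over from the first answer, not part of your current setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-app-nginx-ingress
spec:
  # Route through the in-cluster NGINX controller instead of the GKE ingress
  ingressClassName: nginx
  tls:
  - secretName: SECRET_NAME
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-app-clusterip
            port:
              number: 80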