
To expose our apps on GKE we have been using "gce" class Ingresses, which use the Google L7 load balancers. For various reasons we would like to switch to the Nginx ingress controller exposed by a TCP load balancer. The setup is working, but there are some strange artifacts.

$ kubectl get service ingress-nginx-static -n ingress-nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-03-23T15:08:39Z
  labels:
    app: ingress-nginx
  name: ingress-nginx-static
  namespace: ingress-nginx
  resourceVersion: "101258"
  selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx-static
  uid: 110cf622-2eac-11e8-a8e3-42010a84011f
spec:
  clusterIP: 10.51.247.47
  externalTrafficPolicy: Local
  healthCheckNodePort: 32689
  loadBalancerIP: 99.99.99.99
  ports:
  - name: http
    nodePort: 32296
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31228
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 99.99.99.99

We have many Ingresses configured on the cluster, all of which are reachable from the 99.99.99.99 IP address described above.

$ kubectl get ing -n staging-platform -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    creationTimestamp: 2018-03-23T15:15:56Z
    generation: 1
    name: staging-ingress
    namespace: staging-platform
    resourceVersion: "101788"
    selfLink: /apis/extensions/v1beta1/namespaces/staging-platform/ingresses/staging-ingress
    uid: 15b0f0f5-2ead-11e8-a8e3-42010a84011f
  spec:
    rules:
    - host: staging-application.foo.com
      http:
        paths:
        - backend:
            serviceName: staging-application
            servicePort: 80
          path: /
    tls:
    - hosts:
      - staging-application.foo.com
      secretName: wildcard-certificate
  status:
    loadBalancer:
      ingress:
      - ip: 35.189.220.150
      - ip: 35.195.151.243
      - ip: 35.195.156.166
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Whilst the setup is working correctly, it confuses me that the status.loadBalancer.ingress IPs are there: these are the external IP addresses of the cluster nodes. Have I misconfigured something here? It seems that a TCP load balancer has been created in response to the loadBalancer object described in the YAML. I'm afraid the objects that make up a "load balancer" in Google Cloud confuse me.
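(For reference, those addresses can be cross-checked against the nodes' external IPs, e.g. with kubectl get nodes -o wide or the jsonpath below; in our case they match.)

$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

Here is some output from gcloud that seems relevant.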

$ gcloud compute forwarding-rules describe bertram-cluster --region europe-west1
IPAddress: 35.195.104.202
IPProtocol: TCP
creationTimestamp: '2018-03-22T14:01:09.768-07:00'
description: ''
id: '2098255211810891642'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: bertram-cluster
portRange: 80-443
region: https://www.googleapis.com/compute/v1/projects/sonorous-cacao-185213/regions/europe-west1
selfLink: https://www.googleapis.com/compute/v1/projects/sonorous-cacao-185213/regions/europe-west1/forwardingRules/bertram-cluster
target: https://www.googleapis.com/compute/v1/projects/sonorous-cacao-185213/regions/europe-west1/targetPools/a91e005bd2e1311e8a8e342010a84011


$ gcloud compute firewall-rules describe k8s-a91e005bd2e1311e8a8e342010a84011-http-hc
allowed:
- IPProtocol: tcp
  ports:
  - '30008'
creationTimestamp: '2018-03-22T13:57:18.452-07:00'
description: '{"kubernetes.io/service-name":"ingress-nginx/ingress-nginx", "kubernetes.io/service-ip":"104.199.14.53"}'
direction: INGRESS
id: '7902205773442819649'
kind: compute#firewall
name: k8s-a91e005bd2e1311e8a8e342010a84011-http-hc
network: https://www.googleapis.com/compute/v1/projects/sonorous-cacao-185213/global/networks/default
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/sonorous-cacao-185213/global/firewalls/k8s-a91e005bd2e1311e8a8e342010a84011-http-hc
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
- 209.85.152.0/22
- 209.85.204.0/22
targetTags:
- gke-cluster-bertram-2fdf26f5-node

This seems weird to me. Is this expected behavior? Am I doing something wrong?


1 Answer


It looks like you may need to add the --publish-service flag to your ingress controller. In your nginx-ingress controller Deployment, add an argument passing the namespace and name of the Service that exposes the ingress controller.

For example:

args:
  - --publish-service=ingress-nginx/ingress-nginx

This will publish the external IP of the ingress controller's Service in the Ingress status instead of the individual node IPs.
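
For context, in the controller's Deployment this flag sits alongside the controller's existing arguments. A minimal sketch (the container name, ConfigMap name, and Service reference are assumptions here; use the namespace/name of the Service that fronts your controller):

spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        # namespace/name of the Service whose IP should be reported in Ingress status
        - --publish-service=ingress-nginx/ingress-nginx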

For more info, see: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/README.md#acquiring-an-ip
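
Once the controller restarts with the flag, the Ingress status should show the published Service's external IP instead of the node IPs. A quick way to verify (the expected value is the IP of whichever Service --publish-service points at, e.g. 99.99.99.99 if it references your ingress-nginx-static Service):

$ kubectl get ing staging-ingress -n staging-platform -o jsonpath='{.status.loadBalancer.ingress[*].ip}'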