Architecture
- I have a namespace in my multi-node cluster in Google Kubernetes Engine (GKE), let's call it `project-namespace`
- In the `project-namespace` there is a single frontend application (replicated) and some backend applications (each one replicated too):
  - frontend: `fe1` -> `fe1-r1`, `fe1-r2`, `fe1-r3`
  - backend1: `be1` -> `be1-r1`, `be1-r2`, `be1-r3`
  - backend2: `be2` -> `be2-r1`, `be2-r2`, `be2-r3`
  - backendN: ...
- `be1` is an exception: it needs to be accessible from the outside, so I exposed it via the Google Cloud default load balancer: `GKE-BL-be1` -> `be1-r1`, `be1-r2`, `be1-r3`
- Every other application, including `fe1`, is backed by `ClusterIP` services
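For reference, each of these applications sits behind a plain `ClusterIP` service along these lines (the name, labels, and ports below are assumptions for illustration, not my exact manifests):

```yaml
# Hypothetical ClusterIP service in front of the fe1 pods
# (service name, labels and ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: fe1-svc
  namespace: project-namespace
spec:
  type: ClusterIP
  selector:
    app: fe1
  ports:
    - port: 80         # port exposed inside the cluster
      targetPort: 8080 # port the fe1 containers listen on
```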
Objective
Expose `fe1` via the NGINX ingress instead of the GKE default ingress.
What I did
- I downloaded the deployment manifest linked in the "getting started" documentation of ingress-nginx:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml \
> ingress.yaml
- I replaced the namespace used in the manifest
sed -i 's/namespace: ingress-nginx/namespace: project-namespace/g' ingress.yaml
- I deployed the manifest
kubectl apply -f ingress.yaml
- This created all the resources and the NGINX ingress load balancer, exposing ports 80 and 443 outside the cluster; none of these resources use the Kubernetes labels of my other application pods.
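The external IP assigned to the new controller can be checked with something like this (the service name `ingress-nginx-controller` comes from the downloaded manifest, assuming it wasn't renamed):

```bash
# Show the LoadBalancer service of the NGINX ingress controller and its external IP
kubectl get svc ingress-nginx-controller -n project-namespace
```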
What happened
From the browser I could access `fe1` using the external IP of the NGINX ingress, and from the `fe1` pod logs I saw that the requests were actually served by those pods.
I scaled the `fe1` pods down to 0 replicas and nothing responded to my browser requests anymore.
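The scale-down itself was nothing special, roughly this (assuming the frontend Deployment is called `fe1`):

```bash
# Scale the fe1 Deployment to zero replicas (Deployment name assumed)
kubectl scale deployment fe1 --replicas=0 -n project-namespace
```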
So the objective is actually achieved, but I had no control over it: I didn't configure anything in the NGINX ingress, I only applied the manifest.
Moreover, the GUI is exposed only through a `ClusterIP` service, and I didn't configure any proxying rules between the NGINX ingress and the `fe1` `ClusterIP` service, nothing like the Ingress sketched below.
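For context, this is the kind of Ingress resource I would have expected to need (the service name `fe1-svc` and port 80 are assumptions matching the sketch above, not my real manifests):

```yaml
# Hypothetical Ingress that would route traffic from the NGINX controller
# to the fe1 ClusterIP service (names and port are assumptions)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fe1-ingress
  namespace: project-namespace
  annotations:
    # make sure the NGINX controller, not the GKE one, picks this up
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fe1-svc
                port:
                  number: 80
```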
Why is this working? Which configuration am I missing?
EDIT:
After some troubleshooting I found out that in the GKE "Ingress" section I had already deployed a GKE Ingress pointing to the GUI pods. The issue is that the NGINX ingress service took the same IP address as the GKE Ingress, so my browser was actually sending requests to the old ingress. Now my doubt is: how is it possible that the new NGINX ingress controller took the same IP as the old GKE one?
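For anyone hitting the same confusion, I compared the two addresses with something like:

```bash
# External IP of the old GKE Ingress
kubectl get ingress -n project-namespace

# External IP of the new NGINX ingress controller service
kubectl get svc ingress-nginx-controller -n project-namespace
```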