3
votes

I have an ALB on AWS in front of an EKS cluster. I'm trying to apply a change to the Ingress resource's routing so it points to a different backend.

The only difference between the Ingresses below is the backend spec.

Why is the update not working? How do I update the routing on the ALB?

Original ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
  labels:
    app: api    
    type: ingress
spec:  
  backend:
    serviceName: api-service
    servicePort: 80 

Updated ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
  labels:
    app: api    
    type: ingress
spec:  
  backend:
    serviceName: offline-service
    servicePort: 9001 

Controller:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:           
            - --ingress-class=alb
            - --cluster-name=cluster-22           
          env:           
            - name: AWS_ACCESS_KEY_ID
              value: key           
            - name: AWS_SECRET_ACCESS_KEY
              value: key          
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.3
      serviceAccountName: alb-ingress-controller 
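A side note on the controller manifest above: AWS credentials are passed as plain env values, but they are more commonly stored in a Secret and referenced from the Deployment. A minimal sketch, where the Secret name is an assumption (any name works as long as the env references match):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: alb-ingress-aws-credentials   # assumed name
  namespace: kube-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: key              # real access key ID goes here
  AWS_SECRET_ACCESS_KEY: key          # real secret key goes here
```

The container's env entries would then use valueFrom/secretKeyRef pointing at this Secret instead of inline values, keeping the credentials out of the Deployment spec.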
1
The Ingress update logs can be found by describing the Ingress service (kubectl describe service my-ingress-controller-service) and by checking the kube-controller-manager logs (kubectl logs --namespace=kube-system kube-controller-manager-...). Take a look to figure out what's happening when you update your configs, and if possible update your question with the info. – Eduardo Baitello

Yep, @EduardoBaitello is right. What often happens is that one of the services defined in the Ingress is unreachable, at which point the ALB ingress controller decides that it will not update any of the rules in the AWS ALB. – Blokje5

Yeah, I still hadn't deployed the offline-service Service. I'll try doing that. – Andrija

Most likely that is the issue then. – Blokje5

Yes, that was the issue. Thanks, guys. – Andrija

1 Answer

1
votes

Posting info from the comments as an answer (community wiki):

What often happens is that one of the services defined in the ingress is unreachable, at which point the ALB-ingress controller decides that it will not update any of the rules in the AWS ALB.

You have to deploy the offline-service Service first; once it exists and is reachable, the controller can reconcile the Ingress and update the ALB rules.