So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I get to the part where I deploy my actual app, I get no endpoints and the ingress has no address associated with it. As seen here:

NAME                     HOSTS   ADDRESS   PORTS   AGE
testapp-ingress           *                 80      129m
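
To see why no address shows up, the controller's own view is the place to look. A sketch of the read-only checks (testapp-qa is the namespace from the configs below; the Deployment name assumes the standard alb-ingress-controller install in kube-system):

kubectl -n testapp-qa describe ingress testapp-ingress
kubectl -n kube-system logs deployment/alb-ingress-controller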

So I can't hit it externally. But the test app (the 2048 game) had an address from the ELB associated with its ingress. I thought it might be the subnet tags, as suggested here; my subnets weren't tagged the right way, so I tagged them the way that article suggests. Still no luck.
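
For reference, the subnet tags that article calls for boil down to these key/value pairs (as I read the docs; <cluster-name> is a placeholder for the actual cluster name):

kubernetes.io/cluster/<cluster-name> = shared
kubernetes.io/role/elb = 1               # public subnets, for internet-facing ALBs
kubernetes.io/role/internal-elb = 1      # private subnets, for internal ALBs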

This is the initial article I followed to get set up; I performed all the steps and only hit a wall with the ALB: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps

This is the ALB article I followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html

I followed the steps to deploy the sample 2048 app and that works just fine. I've made my configs very similar, so they should work too. Here are my old configs; the updated ones are further down:

deployment yaml >>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp-deployment"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 80
---
service yaml >>>
apiVersion: v1
kind: Service
metadata:
  name: "testapp-service"
  namespace: "testapp-qa"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  type: NodePort
  selector:
    app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: testapp-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "testapp-service"
              servicePort: 80
---
namespace yaml >>>
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"

Here are some of the logs from the ingress controller >>>

E0316 22:32:39.776535       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx"  "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx"  "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}

Per the suggestion in the comments from @Michael Hausenblas, I've added the alb.ingress.kubernetes.io/target-type: ip annotation to my service for the ALB ingress.

Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:

E0317 16:00:45.643937       1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3"  "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939       1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674       1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]
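
With the reconcile now going through, the AWS-side counterpart to check is whether the targets in the target group are passing health checks (a sketch; <target-group-arn> is a placeholder, since the ARNs above are redacted):

aws elbv2 describe-target-health --target-group-arn <target-group-arn>

Unhealthy targets here would explain an ALB that provisions fine but never routes traffic.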

Update

I've updated my config. I don't get any more errors, but I'm still unable to hit my endpoints to test whether my app is accepting traffic. It might be something on the Fargate or AWS side that I'm not seeing. Here's my updated config:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations: 
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
    - port: 80
      targetPort: 9898
      protocol: TCP
      name: http
  type: NodePort
  selector:
    app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
  labels:
    app: testapp
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "testapp"
              servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
Try adding alb.ingress.kubernetes.io/target-type: ip as an annotation to the service. Might also want to check out what I did in github.com/mhausenblas/noteless/tree/master/listings (both re ALB IC setup and overall config). – Michael Hausenblas

Thank you very much for the suggestion. I'll try that soon and let you know if that worked. – Kryten

Did you make sure to point the ALB IC to the right health check URL? See github.com/mhausenblas/noteless/blob/master/listings/… for an example. – Michael Hausenblas

Thanks @MichaelHausenblas, I updated my config with your suggestions and they work. I just don't see any traffic getting routed to my app and can't get my healthcheck page to come up. Still working on it. – Kryten

Gotcha. In order to verify if your service runs fine in-cluster you could try to call it from within: kubectl run -i -t --rm curljump --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- curl testapp.testapp-qa – Michael Hausenblas
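
That last suggestion, reformatted for readability (a sketch; the only change from the command in the comment is appending the /healthcheck path from the updated config):

kubectl run -i -t --rm curljump --restart=Never \
  --image=quay.io/mhausenblas/jump:0.2 \
  -- curl testapp.testapp-qa/healthcheck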

1 Answer


In your service, try adding the following annotation. Fargate pods are not backed by EC2 instances (which is why your logs show DescribeInstanceStatus failing on the fargate-ip-* hostnames), so the controller has to register pod IPs directly instead of instance targets:

  annotations:
    alb.ingress.kubernetes.io/target-type: ip

You also need to explicitly tell the Ingress resource, via the alb.ingress.kubernetes.io/healthcheck-path annotation, where to perform the health checks for the target group. See the ALB Ingress controller docs for the annotation semantics.
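
For example, assuming the app answers health checks on /healthcheck as in the question's updated config, the Ingress metadata would look like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck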