3 votes

I created a simple EKS cluster on AWS as described in https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started.

In this cluster I created an nginx deployment and a service of type LoadBalancer, as described below. The configuration works locally on minikube.

On AWS I can see that the pod and the service are started, the service has an external IP, I can access the pod with kubectl port-forward, and I can ping the LoadBalancer.

However, I cannot reach the LoadBalancer in the browser at http://a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com:3001
I'm getting a "This site can’t be reached" error.

Any idea where I should look?

Nginx Deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
          - containerPort: 80
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

Nginx Service

{
   "kind":"Service",
   "apiVersion":"v1",
   "metadata":{
      "name":"nginx",
      "labels":{
         "app":"nginx"
      }
   },
   "spec":{
      "ports": [
         {
           "port":3001,
           "targetPort":80
         }
      ],
      "selector":{
         "run":"nginx"
      },
      "type": "LoadBalancer"
   }
}

Checks

kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)          AGE
kubernetes   ClusterIP      172.20.0.1      <none>                                                                   443/TCP          1h
nginx        LoadBalancer   172.20.48.112   a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com   3001:31468/TCP   45m

kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-768979984b-vqz94   1/1       Running   0          49m

kubectl port-forward pod/nginx-768979984b-vqz94 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
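
While the port-forward above is running, the pod also answers locally with a plain curl (just a quick sanity check):

curl -v http://127.0.0.1:8080/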

ping a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com
PING a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com (62.138.238.45) 56(84) bytes of data.
64 bytes from 62.138.238.45 (62.138.238.45): icmp_seq=1 ttl=250 time=7.21 ms

Service description

Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"ports":[{"port...
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       172.20.48.112
LoadBalancer Ingress:     a53439687c6d511e8837b02b7cab13e7-935938560.eu-west-1.elb.amazonaws.com
Port:                     <unset>  3001/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31468/TCP
Endpoints:                10.0.0.181:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  57m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   57m   service-controller  Ensured load balancer
Not sure if it was a timing issue. It worked after some time without any changes. – christian
It always takes some minutes before the LoadBalancer is reachable. – nicor88
I'm facing the same issue. I thought it was a timing issue as @nicor88 said, but after waiting for hours the Load Balancer was still unavailable. I also tried creating everything manually, but the result was the same. – Eva FP
I recently had the same issue. It was due to a mismatch between the labels. – nicor88
It takes some time to be available. – Akj

1 Answer

1 vote

Please try the 3 steps below:

  1. Check again that the selectors and labels match between the Service and the Deployment (see the selector check after this list).

  2. In the AWS console, go to the "Instances" tab of the Load Balancer (probably a Classic ELB) that was created, and check the status and health state of all the instances attached to the LB:


If the status is not "InService" or the state is not "Healthy", check the security group of those instances:
the NodePort (31468 in your case) must be open to accept traffic from the load balancer (see the sketch after this list).

  3. View the pod logs with kubectl logs <pod-name>.
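
For step 1, a sketch of how to compare the Service selector with the pod labels (using the names from the question; adjust them if yours differ):

# Selector the Service uses to pick endpoints
kubectl get svc nginx -o jsonpath='{.spec.selector}{"\n"}'

# Labels on the running pods; the selector must match these
kubectl get pods --show-labels

# If they match, the Service lists the pod IPs as endpoints
kubectl get endpoints nginx

For step 2, the relevant security groups can also be inspected and opened from the AWS CLI. This is only a sketch: the <...> values are placeholders, and the ELB's security group should normally be used as the source rather than opening the NodePort to the world:

# Security groups attached to the (Classic) load balancer and to a worker node
aws elb describe-load-balancers --load-balancer-names <elb-name> \
    --query 'LoadBalancerDescriptions[0].SecurityGroups'
aws ec2 describe-instances --instance-ids <node-instance-id> \
    --query 'Reservations[0].Instances[0].SecurityGroups'

# Allow the ELB security group to reach the NodePort (31468 here) on the nodes
aws ec2 authorize-security-group-ingress \
    --group-id <node-sg-id> \
    --protocol tcp \
    --port 31468 \
    --source-group <elb-sg-id>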