
So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.

I got down to this:

  1. My LoadBalancer service's public IP never responds
  2. Went to the load balancer section in the AWS console
  3. The load balancer is not working because my cluster node is failing the health checks
  4. SSHed into my cluster node and found out that the containers have no ports associated with them:

[screenshot: docker ps output showing an empty PORTS column]

This makes the cluster node fail the health checks, so no traffic is forwarded that way.

[screenshot: load balancer health checks failing for the node]

I tried running a simple nginx container manually, directly on my cluster node without kubectl:

docker run -p 80:80 nginx

and pasting the node public IP in my browser. No luck:

[screenshot: browser showing no response from the node's public IP]

Then I tried curling the nginx container directly from the cluster node over SSH:

curl localhost

And I got this response: "curl: (7) Failed to connect to localhost port 80: Connection refused"

  • Why are the containers on the cluster node not showing any ports?
  • How can I make the cluster node pass the load balancer health checks?
  • Could it have something to do with the fact that I created a single-node cluster with eksctl?
  • What other options do I have to easily run a Kubernetes cluster on AWS?

1 Answer


This is somewhere in between an answer and a question, but I hope it will help you.

I've been using the Deploying a Kubernetes Cluster with Amazon EKS guide for years when it comes to creating EKS clusters.

For test purposes, I just spun up a new cluster and it works as expected, including accessing a test application through the external LB IP and passing the health checks.

In short, you need to:

1. Create an EKS role

2. Create a VPC to use with EKS

3. Create a CloudFormation stack from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml

4. Export variables to simplify further CLI command usage:

export EKS_CLUSTER_REGION=
export EKS_CLUSTER_NAME=
export EKS_ROLE_ARN=
export EKS_SUBNETS_ID=
export EKS_SECURITY_GROUP_ID=
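These should be filled with values from the previous steps. Purely as an illustration — every identifier below is a made-up placeholder — a filled-in version would look like:

```shell
# All of these values are hypothetical examples; substitute the real
# region, role ARN, subnet IDs, and security group ID from steps 1-3.
export EKS_CLUSTER_REGION="us-west-2"
export EKS_CLUSTER_NAME="eks-test-cluster"
export EKS_ROLE_ARN="arn:aws:iam::123456789012:role/eksServiceRole"
export EKS_SUBNETS_ID="subnet-0aaa1111,subnet-0bbb2222,subnet-0ccc3333"
export EKS_SECURITY_GROUP_ID="sg-0123456789abcdef0"

# Fail fast if any of them is still empty before running the CLI commands:
: "${EKS_CLUSTER_REGION:?}" "${EKS_CLUSTER_NAME:?}" "${EKS_ROLE_ARN:?}" \
  "${EKS_SUBNETS_ID:?}" "${EKS_SECURITY_GROUP_ID:?}"
```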

5. Create the cluster, verify its creation, and generate the appropriate kubeconfig entry:

#Create
aws eks --region ${EKS_CLUSTER_REGION} create-cluster --name ${EKS_CLUSTER_NAME} --role-arn ${EKS_ROLE_ARN} --resources-vpc-config subnetIds=${EKS_SUBNETS_ID},securityGroupIds=${EKS_SECURITY_GROUP_ID}

#Verify
watch aws eks --region ${EKS_CLUSTER_REGION} describe-cluster --name ${EKS_CLUSTER_NAME} --query cluster.status

#Create .kube/config entry
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}

Can you please check the article and confirm you haven't missed any steps during installation?