2
votes

Objective

My objective is to deploy on AWS EKS using Fargate. I successfully made the deployment work with a node group. However, when I shifted to Fargate, the pods are all stuck in the Pending state.

What my current code looks like

I am provisioning with Terraform (though I am not necessarily looking for a Terraform answer). This is how I create my EKS cluster:

module "eks_cluster" {
  source                            = "terraform-aws-modules/eks/aws"
  version                           = "13.2.1"
  cluster_name                      = "${var.project_name}-${var.env_name}"
  cluster_version                   = var.cluster_version
  vpc_id                            = var.vpc_id
  cluster_enabled_log_types         = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa                       = true
  subnets                           = concat(var.private_subnet_ids, var.public_subnet_ids)
  create_fargate_pod_execution_role = false

  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 2
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
}

And this is how I provision the Fargate profile:

resource "aws_eks_fargate_profile" "airflow" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-${var.env_name}"
  pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = "airflow"
  }
}

And this is how I create the pod execution role and attach the required policy:

resource "aws_iam_role" "fargate_iam_role" {
  name                  = "${var.project_name}-fargate-${var.env_name}"
  force_detach_policies = true
  assume_role_policy    = jsonencode({
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = {
        Service = ["eks-fargate-pods.amazonaws.com", "eks.amazonaws.com"]
      }
    }]
    Version   = "2012-10-17"
  })
}

# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

What I have tried that did not work

I have tried deploying the pods (via a Helm chart) in the same namespace where the Fargate profile exists. When I run kubectl get pods -n airflow, all my pods are stuck in Pending:

NAME                                 READY   STATUS    RESTARTS   AGE
airflow-flower-79b5948677-vww5d      0/1     Pending   0          40s
airflow-redis-master-0               0/1     Pending   0          40s
airflow-scheduler-6b6bd4b6f6-j9qzg   0/2     Pending   0          41s
airflow-web-567b55fbbf-z8dsg         0/2     Pending   0          41s
airflow-worker-0                     0/2     Pending   0          40s
airflow-worker-1                     0/2     Pending   0          40s

Then I look at the events with kubectl get events -n airflow, which gives:

LAST SEEN   TYPE     REASON              OBJECT                                    MESSAGE
2m15s       Normal   LoggingEnabled      pod/airflow-flower-79b5948677-vww5d       Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-flower-79b5948677      Created pod: airflow-flower-79b5948677-vww5d
2m17s       Normal   ScalingReplicaSet   deployment/airflow-flower                 Scaled up replica set airflow-flower-79b5948677 to 1
2m15s       Normal   LoggingEnabled      pod/airflow-redis-master-0                Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    statefulset/airflow-redis-master          create Pod airflow-redis-master-0 in StatefulSet airflow-redis-master successful
2m15s       Normal   LoggingEnabled      pod/airflow-scheduler-6b6bd4b6f6-j9qzg    Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-scheduler-6b6bd4b6f6   Created pod: airflow-scheduler-6b6bd4b6f6-j9qzg
2m17s       Normal   NoPods              poddisruptionbudget/airflow-scheduler     No matching pods found
2m17s       Normal   ScalingReplicaSet   deployment/airflow-scheduler              Scaled up replica set airflow-scheduler-6b6bd4b6f6 to 1
2m15s       Normal   LoggingEnabled      pod/airflow-web-567b55fbbf-z8dsg          Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    replicaset/airflow-web-567b55fbbf         Created pod: airflow-web-567b55fbbf-z8dsg
2m17s       Normal   ScalingReplicaSet   deployment/airflow-web                    Scaled up replica set airflow-web-567b55fbbf to 1
2m15s       Normal   LoggingEnabled      pod/airflow-worker-0                      Successfully enabled logging for pod
2m15s       Normal   LoggingEnabled      pod/airflow-worker-1                      Successfully enabled logging for pod
2m16s       Normal   SuccessfulCreate    statefulset/airflow-worker                create Pod airflow-worker-0 in StatefulSet airflow-worker successful
2m16s       Normal   SuccessfulCreate    statefulset/airflow-worker                create Pod airflow-worker-1 in StatefulSet airflow-worker successful

I then describe one of the pods (kubectl describe pod airflow-redis-master-0 -n airflow), and I get:

Name:                 airflow-redis-master-0
Namespace:            airflow
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 <none>
Labels:               app=redis
                      chart=redis-10.5.7
                      controller-revision-hash=airflow-redis-master-588d57785d
                      eks.amazonaws.com/fargate-profile=airflow-fargate-airflow-dev
                      release=airflow
                      role=master
                      statefulset.kubernetes.io/pod-name=airflow-redis-master-0
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingEnabled
                      checksum/configmap: 2b82c78fd9186045e6e2b44cfbb38460310697cf2f2f175c9d8618dd4d42e1ca
                      checksum/health: a5073935c8eb985cf8f3128ba7abbc4121cef628a9a1b0924c95cf97d33323bf
                      checksum/secret: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
                      cluster-autoscaler.kubernetes.io/safe-to-evict: true
                      kubernetes.io/psp: eks.privileged
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        StatefulSet/airflow-redis-master
NominatedNodeName:    6f344dfd11-000a9c54e4e240a2a8b3dfceb5f8227e
Containers:
  airflow-redis:
    Image:      docker.io/bitnami/redis:5.0.7-debian-10-r32
    Port:       6379/TCP
    Host Port:  0/TCP
    Command:
      /bin/bash
      -c
      if [[ -n $REDIS_PASSWORD_FILE ]]; then
        password_aux=`cat ${REDIS_PASSWORD_FILE}`
        export REDIS_PASSWORD=$password_aux
      fi
      if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
      fi
      if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
      fi
      ARGS=("--port" "${REDIS_PORT}")
      ARGS+=("--requirepass" "${REDIS_PASSWORD}")
      ARGS+=("--masterauth" "${REDIS_PASSWORD}")
      ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
      ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
      /run.sh ${ARGS[@]}

    Liveness:   exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=5s period=5s #success=1 #failure=5
    Readiness:  exec [sh -c /health/ping_readiness_local.sh 5] delay=5s timeout=1s period=5s #success=1 #failure=5
    Environment:
      REDIS_REPLICATION_MODE:  master
      REDIS_PASSWORD:          <set to the key 'redis-password' in secret 'my-creds'>  Optional: false
      REDIS_PORT:              6379
    Mounts:
      /data from redis-data (rw)
      /health from health (rw)
      /opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
      /opt/bitnami/redis/mounted-etc from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dmwvn (ro)
Volumes:
  health:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      airflow-redis-health
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      airflow-redis
    Optional:  false
  redis-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  redis-tmp-conf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-token-dmwvn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dmwvn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Normal   LoggingEnabled    3m12s  fargate-scheduler  Successfully enabled logging for pod
  Warning  FailedScheduling  12s    fargate-scheduler  Pod provisioning timed out (will retry) for pod: airflow/airflow-redis-master-0

Other things I have tried

  • Tagging my subnets with the appropriate tags (conditional on whether the subnet is public or private):
    kubernetes_tags = map(
        "kubernetes.io/role/${var.type == "Public" ? "elb" : "internal-elb"}", 1,
        "kubernetes.io/cluster/${var.kubernetes_cluster_name}", "shared"
      )
  • Annotating my pods with the Fargate profile (like infrastructure:fargate)
  • Debugging the VPC settings. To my understanding, the following settings need to be enabled for Fargate (source here):
  single_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
  enable_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
  enable_vpn_gateway = false
  enable_dns_hostnames = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
  enable_dns_support = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)

However, I have been given a pre-existing VPC, and I am not sure how to check whether these settings are already turned on or off.
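
One way I could verify this, staying in Terraform, is to read the VPC and the private subnets' route tables back with data sources and inspect the outputs. This is only a sketch that reuses var.vpc_id and var.private_subnet_ids from the configuration above (the data source and output names are illustrative); aws ec2 describe-vpc-attribute and aws ec2 describe-route-tables would show the same information from the CLI:

# Sketch: read back the VPC attributes and private-subnet route tables that
# Fargate depends on. Reuses var.vpc_id and var.private_subnet_ids from above.
data "aws_vpc" "provided" {
  id = var.vpc_id
}

data "aws_route_table" "private" {
  for_each  = toset(var.private_subnet_ids)
  subnet_id = each.value
}

output "vpc_dns_settings" {
  # Both should be true so Fargate pods can resolve the EKS and ECR endpoints.
  value = {
    enable_dns_support   = data.aws_vpc.provided.enable_dns_support
    enable_dns_hostnames = data.aws_vpc.provided.enable_dns_hostnames
  }
}

output "private_subnet_routes" {
  # Look for a 0.0.0.0/0 route whose target is a NAT gateway (nat-...).
  value = { for subnet_id, rt in data.aws_route_table.private : subnet_id => rt.routes }
}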

What are the steps that I need to take to debug this issue?

Do your private subnets have an outbound internet connection through a NAT gateway? – Asri Badlah
Hi @AsriBadlah. How can I check that, and if they don't, how can I configure it? – alt-f4
Go to one of your private subnets and check whether its associated route table has an entry that looks like this: 0.0.0.0/0 nat-xxxxxxxx – Asri Badlah
@AsriBadlah I have checked, but it seems that there is no NAT gateway defined. The route table only points 0.0.0.0/0 to a firewall instance. – alt-f4
Please check my answer. – Asri Badlah

1 Answer

3
votes

For test purposes, I think you need to enable connectivity from the VPC's private subnets to the outside world using a NAT gateway. You can create a NAT gateway in one of the public subnets and add an entry like the following to the route table associated with the private subnets:

0.0.0.0/0 nat-xxxxxxxx
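
Since the question is provisioned with Terraform, a minimal sketch of that change could look like the following (the resource names and var.private_route_table_id are illustrative placeholders, not taken from the original configuration):

# Sketch: NAT gateway in a public subnet plus a default route for the private
# subnets. var.private_route_table_id stands in for the route table already
# associated with the private subnets.
resource "aws_eip" "nat" {
  vpc = true # domain = "vpc" on newer AWS provider versions
}

resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = var.public_subnet_ids[0] # must be a public subnet
}

resource "aws_route" "private_default" {
  route_table_id         = var.private_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}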

If this works and you want to keep outbound traffic restricted through your firewall instance, which is more secure, I think you need to contact the firewall provider's support to ask how you can whitelist Fargate's outbound traffic.