
Summary: Created an EKS cluster, attempted to run commands from a Docker container, and got an Unauthorized error.

Setup

Follow AWS tutorial on setting up an EKS cluster

  1. Create VPC and supporting infra via CFT
  2. Create IAM role and policy: myAmazonEKSClusterRole/AmazonEKSClusterPolicy
  3. Create EKS cluster via the console logged in as SSO
  4. Wait for cluster to be ready
  5. From laptop/CLI authenticated as SSO, execute

aws eks update-kubeconfig --name my-cluster

  6. Execute kubectl get svc, get good result

  7. Create identity provider in IAM and associate it with the EKS cluster's OpenID Connect provider URL

  8. Create CNI role and policy: myAmazonEKSCNIRole

  9. Associate the role to the cluster via the aws eks update-addon command

  10. Create node role myAmazonEKSNodeRole and attach policies: AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly

  11. Create a key pair in AWS

  12. Add a Node Group to the cluster configured with the above role and key pair

  13. Wait until the node group is active
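The OIDC and CNI-role steps above can also be sketched from the CLI. This is a sketch, not the exact commands used; the cluster name and role names follow the steps above, and the account ID is a placeholder:

```shell
# Sketch: inspect the cluster's OIDC issuer and attach the CNI role.
# my-cluster / myAmazonEKSCNIRole match the steps above; 111122223333
# is a placeholder account ID.

# The cluster's OpenID Connect issuer URL (used when creating the
# IAM identity provider):
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text

# Attach the CNI role to the vpc-cni addon:
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
  --service-account-role-arn arn:aws:iam::111122223333:role/myAmazonEKSCNIRole
```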

At this point, if I test with kubectl or helm, I can manipulate the cluster. However, I'm still authenticating as the SSO user and running from my laptop.

Moving on. I want to manipulate the cluster from within a docker container. So I continue with the following steps.

EKS cluster is in AWS account B.

  1. Create role in AWS account B (RoleInAccountB). Role has admin access policy in account B.
  2. Establish trust between account A and account B so that user in Account A can assume role in account B
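The cross-account trust can be sketched as a trust policy on RoleInAccountB. This is a sketch under assumptions: 111111111111 stands in for Account A's ID, and the policy trusts the whole account rather than a specific user:

```shell
# Sketch: trust policy on RoleInAccountB allowing principals in
# Account A (111111111111 is a placeholder) to assume it.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Apply it to the role in Account B:
aws iam update-assume-role-policy --role-name RoleInAccountB \
  --policy-document file://trust-policy.json
```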

On the local computer, outside of the container (SSO authentication):

  1. Download aws-auth-cm.yaml and customize it to add new role:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::<Account B>:role/RoleInAccountB
      username: DOESNTMATTER
  2. Execute kubectl apply -f aws-auth-cm.yaml
  3. Watch nodes to ensure they are ready: kubectl get nodes --watch
  4. Verify config: kubectl describe configmap -n kube-system aws-auth. Seems fine.
  5. SSH to an EC2 instance in Account A
  6. Run a docker container on the EC2 instance (the image has prerequisite dependencies installed, such as the AWS CLI, kubectl, etc.)
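The container launch might look like the sketch below. The image name is a placeholder (the original doesn't name it); the point is that the container must be able to obtain AWS credentials, e.g. via the instance profile through the metadata service:

```shell
# Sketch: run the tooling container on the EC2 instance.
# my-tools-image is a placeholder for the image with the AWS CLI,
# kubectl, etc. preinstalled. With no credentials mounted or passed,
# the AWS CLI inside falls back to the EC2 instance profile.
docker run -it --rm \
  -e AWS_REGION=us-east-1 \
  my-tools-image /bin/bash
```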

From within the container

  1. Assume Account B role
  2. Add role to kube config via aws cli

aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::accountB:role/RoleInAccountB

  3. Execute a test to check permission to the cluster: kubectl get svc

Receive error: "error: You must be logged in to the server (Unauthorized)"
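A useful check at this point is to confirm which identity the container is actually presenting before and after assuming the role, since the Unauthorized error usually means the caller's ARN doesn't match any entry in aws-auth. This is a sketch using standard STS calls; the account ID and session name are placeholders:

```shell
# Sketch: assume RoleInAccountB manually and verify the resulting identity.
# 222222222222 is a placeholder for Account B's ID.
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/RoleInAccountB \
  --role-session-name eks-debug \
  --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
  --output text)

# The text output is tab-separated: key id, secret key, session token.
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)

# This ARN (in its assumed-role form) must correspond to the rolearn
# entry in the aws-auth ConfigMap, or the API server returns Unauthorized.
aws sts get-caller-identity
```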

Update: I tried this from the EC2 instance (outside the container). I get the same result.

Wondering if the following guidance is what I need to try next: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/README.md

Update 3/19/2021: Still no real solution to the problem. I found that if I assume a role (console or CLI) prior to creating the cluster, then I can assume that role later on in the container/EC2 and manipulate the cluster just fine. This was expected, but it has become my workaround. I'm still looking for the correct way to change the cluster permissions to allow some other role (one that didn't create the cluster) to perform commands.
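For comparison, a common way to grant cluster access to a role that didn't create the cluster is eksctl's identity-mapping helpers, which edit the same aws-auth ConfigMap. This is a sketch assuming eksctl is installed; the account ID is a placeholder and system:masters grants full admin, as in the ConfigMap above:

```shell
# Sketch: inspect and add aws-auth role mappings with eksctl.
# Show the mappings currently in the aws-auth ConfigMap:
eksctl get iamidentitymapping --cluster my-cluster

# Map RoleInAccountB to the system:masters group:
eksctl create iamidentitymapping --cluster my-cluster \
  --arn arn:aws:iam::222222222222:role/RoleInAccountB \
  --group system:masters \
  --username admin
```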

Comments:

I've added a bounty to this because I'm interested in the answer. – Software Engineer

Is there anything in ~/.kube/config on the container? It needs to contain authentication data for kubectl. – Erik

Yes. ~/.kube/config contains, among other things: – HuInTa

    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster
        - --role
        - arn:aws:iam::ACCOUNT B:role/RoleInAccountB
        command: aws

1 Answer

It would be better to understand your current identity context first, before running aws eks update-kubeconfig. What is the output of aws sts get-caller-identity in Account A before and after step 9?

Another thing worth trying is to open the resulting KUBECONFIG file, look up the helper command (like aws-iam-authenticator or aws eks get-token), and execute it directly, bypassing kubectl's authentication handling logic.

This is how the helper commands may look:

aws eks get-token --cluster-name my-cluster --role arn:aws:iam::accountB:role/RoleInAccountB

or

aws-iam-authenticator token -i my-cluster -r arn:aws:iam::accountB:role/RoleInAccountB

What is the output?

Third, you can use AWS named profiles to have IAM roles transparently assumed across accounts via a role_arn/source_profile combination.

https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#using-aws-iam-roles
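A minimal sketch of that named-profile approach; the profile names and account ID are placeholders:

```shell
# Sketch: ~/.aws/config entry that makes the CLI assume RoleInAccountB
# transparently. "account-b-admin" and 222222222222 are placeholders;
# "default" must be a profile with credentials in Account A.
cat >> ~/.aws/config <<'EOF'
[profile account-b-admin]
role_arn = arn:aws:iam::222222222222:role/RoleInAccountB
source_profile = default
EOF

# Every call made with this profile runs as the assumed role:
aws sts get-caller-identity --profile account-b-admin
aws eks update-kubeconfig --name my-cluster --profile account-b-admin
```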