Summary: Create an EKS cluster, attempt to run kubectl commands from a docker container, get an authorization error.
Setup

Follow the AWS tutorial on setting up an EKS cluster:
- Create the VPC and supporting infrastructure via a CloudFormation template
- Create the IAM role and policy: myAmazonEKSClusterRole with AmazonEKSClusterPolicy attached
- Create the EKS cluster via the console, logged in as the SSO user
- Wait for the cluster to be ready
- From my laptop's CLI, authenticated as the SSO user, execute
aws eks update-kubeconfig --name my-cluster
- Execute
kubectl get svc
and get a good result
- Create an identity provider in IAM and associate it with the EKS cluster's OpenID Connect provider URL
- Create the CNI role and policy: myAmazonEKSCNIRole
- Associate the role to the cluster via the
aws eks update-addon
command
- Create the node role myAmazonEKSNodeRole and attach the policies AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly
- Create a key pair in AWS
- Add a node group to the cluster, configured with the above role and key pair
- Wait until the node group is active
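For reference, a sketch of the CLI forms of the key steps above (region, account ID, subnet IDs, and node group name are placeholders, not necessarily the values I used):

aws eks update-kubeconfig --name my-cluster --region us-east-1
# Get the cluster's OIDC issuer URL for the IAM identity provider step
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text
# Attach the CNI role to the vpc-cni add-on
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
  --service-account-role-arn arn:aws:iam::<account-id>:role/myAmazonEKSCNIRole
# Create the node group with the node role and key pair
aws eks create-nodegroup --cluster-name my-cluster --nodegroup-name my-nodes \
  --node-role arn:aws:iam::<account-id>:role/myAmazonEKSNodeRole \
  --subnets subnet-aaaa subnet-bbbb \
  --remote-access ec2SshKey=my-key-pair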
At this point, if I test with kubectl or helm, I can manipulate the cluster. I'm still authenticating as the SSO user, however, and running from my laptop.
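For example, checks like these succeed from the laptop at this point (representative commands, not necessarily the exact ones I ran):

kubectl get nodes
helm list --all-namespaces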
Moving on. I want to manipulate the cluster from within a docker container. So I continue with the following steps.
The EKS cluster is in AWS account B.
- Create a role in AWS account B (RoleInAccountB). The role has an admin access policy in account B.
- Establish trust between account A and account B so that a user in account A can assume the role in account B (see the sketch below)
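A minimal sketch of those two IAM steps via the CLI (account IDs are placeholders, and I'm assuming the admin policy is the AWS-managed AdministratorAccess):

# In account B: create the role, trusting principals in account A to assume it
aws iam create-role --role-name RoleInAccountB \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<AccountA>:root" },
      "Action": "sts:AssumeRole"
    }]
  }'
# Attach the admin policy
aws iam attach-role-policy --role-name RoleInAccountB \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess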
On my local computer, outside of the container (SSO authentication):
- Download aws-auth-cm.yaml and customize it to add the new role:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
        - system:masters
      rolearn: arn:aws:iam::<Account B>:role/RoleInAccountB
      username: DOESNTMATTER
- Execute
kubectl apply -f aws-auth-cm.yaml
- Watch nodes to ensure they are ready
kubectl get nodes --watch
- Verify config
kubectl describe configmap -n kube-system aws-auth
Seems fine.
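As an aside, the mappings can also be pulled live from the cluster before editing, so that existing mapRoles entries (e.g., the node role mapping) aren't overwritten (a sketch):

kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth-cm.yaml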
- SSH to an EC2 instance in Account A
- Run a docker container on the EC2 instance (the image has prerequisite dependencies installed, such as the AWS CLI, kubectl, etc.)
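A sketch of those steps (host and image name are hypothetical placeholders; note that no AWS credentials are mounted or passed in, so authentication is established entirely inside the container):

ssh ec2-user@<ec2-host-in-account-A>
docker run -it --rm my-k8s-tools:latest /bin/bash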
From within the container:
- Assume the Account B role (a sketch of this step follows below)
- Add the role to the kube config via the AWS CLI:
aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::accountB:role/RoleInAccountB
- Execute a test to check permissions against the cluster:
kubectl get svc
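A sketch of the assume-role step (the session name is arbitrary and jq is assumed to be available in the image):

# Assume the role and export the temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::accountB:role/RoleInAccountB \
  --role-session-name eks-test)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)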
I receive the error: "error: You must be logged in to the server (Unauthorized)"
Update: I tried this from the EC2 instance (outside the container). I get the same result.
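For reference, a quick way to confirm which principal is actually in effect (a sketch):

# Should show the assumed-role ARN for RoleInAccountB
aws sts get-caller-identity
# Shows the user/exec settings kubectl resolves for the current context
kubectl config view --minify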
Wondering if the following guidance is what I need to try next: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/master/README.md
Update 3/19/2021: Still no real solution to the problem. I found that if I assume a role (console or CLI) prior to creating the cluster, then I can assume that role later on in the container/EC2 and manipulate the cluster just fine. This was expected, but it has become my workaround (sketched below). I'm still looking for the correct way to change the cluster permissions so that some other role (one that didn't create the cluster) can run commands.
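A sketch of the workaround (subnet IDs are placeholders; it relies on the fact that the principal that creates the cluster gets admin access to it that never appears in the aws-auth ConfigMap):

# Assume the role FIRST, then create the cluster while acting as that role
aws sts assume-role --role-arn arn:aws:iam::accountB:role/RoleInAccountB \
  --role-session-name bootstrap
# (export the returned credentials as in the earlier sketch)
aws eks create-cluster --name my-cluster \
  --role-arn arn:aws:iam::accountB:role/myAmazonEKSClusterRole \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb
# Later, assuming the same role from the container/EC2 works without touching aws-auth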
(Comment) What's in ~/.kube/config on the container? It needs to contain authentication data for kubectl. – Erik
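For reference, a sketch of how to check that (the exec block shown in the comments is the shape aws eks update-kubeconfig generates when --role-arn is passed; names per above):

# Show the user entry kubectl will use for the current context
kubectl config view --minify
# Expect an exec section roughly like:
#   exec:
#     command: aws
#     args: [eks, get-token, --cluster-name, my-cluster, --role-arn, arn:aws:iam::accountB:role/RoleInAccountB]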