3
votes

Our system has EKS Kubernetes clusters. We manage them from EC2 "bastion hosts": we SSH in, configure our credentials, and use kubectl to manage the cluster.

We do not want to keep manually configuring credentials on the EC2 host.

So, I am trying to associate an EC2 instance profile, with appropriate permissions to manage the cluster, with the EC2 host. However, it does not work.
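(For reference, attaching a profile to a running instance can be done with the AWS CLI; the instance ID and profile name below are placeholders, not my actual values:)

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my_instance_profile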

The EC2 instance has an IAM instance profile that is associated with a role containing this policy statement (full EKS permissions):

{
    "Sid": "2",
    "Effect": "Allow",
    "Action": [
        "eks:*"
    ],
    "Resource": "*"
}
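(That statement sits inside a standard IAM policy document; the full JSON, for anyone reproducing this, is just the usual wrapper:)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "2",
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}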

So, when I run the following commands, I expect to be able to list the active services on EKS:

[ec2-user@ip-10-0-0-72 ~]$ aws eks update-kubeconfig --name my_eks_cluster_name
[ec2-user@ip-10-0-0-72 ~]$ kubectl get svc
error: the server doesn't have a resource type "svc"
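(A useful sanity check at this point is confirming which principal the CLI actually resolves to; aws sts get-caller-identity should report the instance profile's assumed-role ARN, something like arn:aws:sts::111111111111:assumed-role/eks_management_host_jw_jw/...:)

[ec2-user@ip-10-0-0-72 ~]$ aws sts get-caller-identity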

The error implies that I don't have permission. I confirmed this by configuring a different set of AWS credentials on the host:

[ec2-user@ip-10-0-0-72 ~]$ export AWS_ACCESS_KEY_ID=...
[ec2-user@ip-10-0-0-72 ~]$ export AWS_SECRET_ACCESS_KEY=...
[ec2-user@ip-10-0-0-72 ~]$ export AWS_SESSION_TOKEN=...

Now, I try to list services again:

[ec2-user@ip-10-0-0-72 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   85d

It works.

Why is the IAM instance role not sufficient to allow me to manage my EKS cluster via kubectl?

That IAM policy is not actually what the cluster uses for authorizing API calls. It uses a ConfigMap that maps IAM roles to Kubernetes cluster roles. docs.aws.amazon.com/eks/latest/userguide/add-user-role.html – jordanm
@jordanm Thank you, that link helped me figure out how to add a role mapping (EC2 instance profile role <--> system:masters Kubernetes permission), and now the EC2 instance can talk to EKS. – James Wierzba

2 Answers

2
votes

Thanks to @jordanm for pointing out that EKS uses a configmap to handle authorization.

I was able to edit the ConfigMap, adding an entry to map the EC2 instance profile's IAM role, arn:aws:iam::111111111111:role/eks_management_host_jw_jw, to the Kubernetes system:masters group (cluster admins).
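The ConfigMap is named aws-auth and lives in the kube-system namespace (see the other answer), so I opened it with the usual edit command:

[ec2-user@ip-10-0-0-72 ~]$ kubectl edit -n kube-system configmap/aws-auth

which drops you into an editor with the current contents: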

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111111111111:role/eks_node_jw_jw
      username: system:node:{{EC2PrivateDNSName}}
    - groups:  # <-- I added this
      - system:masters
      rolearn: arn:aws:iam::111111111111:role/eks_management_host_jw_jw
      username: system:admin
kind: ConfigMap
metadata:
  # ...
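With the mapping saved, the instance profile is enough on its own. To confirm, drop the exported credentials and rerun the commands from the question:

[ec2-user@ip-10-0-0-72 ~]$ unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
[ec2-user@ip-10-0-0-72 ~]$ aws eks update-kubeconfig --name my_eks_cluster_name
[ec2-user@ip-10-0-0-72 ~]$ kubectl get svc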
2
votes

By default, only the cluster creator has permission to access resources inside an EKS cluster; no other users or roles do. To grant cluster access to other IAM roles, EKS uses a ConfigMap named aws-auth in the kube-system namespace: define mapRoles inside it to grant permissions to roles. Similarly, to grant permissions to IAM users, define mapUsers (see the sketch after the example below).

Example

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:masters
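For completeness, a mapUsers entry has the same shape; a minimal sketch (the user ARN and username are placeholders):

data:
  mapUsers: |
    - userarn: <ARN of IAM user>
      username: <username to map to>
      groups:
        - system:masters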

Reference - https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html