I hope that subject makes sense.
I'm trying to set up RBAC on my EKS cluster, and I'm using this excellent walkthrough as a guide: Kubernetes Authentication.
So I created an IAM role called EKSClusterAdminRole that has the AmazonEKSClusterPolicy managed policy attached, to allow the role to manage an EKS cluster. The role has this trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
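(For reference, a role like that can be created from the CLI roughly as follows; trust.json is just a placeholder name for the trust policy above saved to a file:)

# create the role with the trust policy above
$ aws iam create-role --role-name EKSClusterAdminRole --assume-role-policy-document file://trust.json
# attach the AWS-managed EKS cluster policy
$ aws iam attach-role-policy --role-name EKSClusterAdminRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy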
Then I created an EKSAdminGroup group that has an inline policy allowing it to assume that role, like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAssumeOrganizationAccountRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::0123456789:role/EKSClusterAdminRole"
        }
    ]
}
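(Again for reference, an inline policy like that gets attached with put-group-policy; the policy name and assume-role.json file name here are just placeholders:)

$ aws iam put-group-policy --group-name EKSAdminGroup --policy-name AllowAssumeEKSClusterAdminRole --policy-document file://assume-role.json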
and I added my existing jenkins user to that group, as shown here:
$ aws iam get-group --group-name EKSAdminGroup
{
    "Group": {
        "Path": "/",
        "CreateDate": "2021-02-27T18:31:34Z",
        "GroupId": "QRSTUVWXYZ",
        "Arn": "arn:aws:iam::0123456789:group/EKSAdminGroup",
        "GroupName": "EKSAdminGroup"
    },
    "Users": [
        {
            "UserName": "jenkins",
            "Path": "/",
            "CreateDate": "2014-11-04T14:03:17Z",
            "UserId": "ABCEDFGHIJKLMNOP",
            "Arn": "arn:aws:iam::0123456789:user/jenkins"
        }
    ]
}
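(For reference, adding a user to the group is just the standard call:)

$ aws iam add-user-to-group --group-name EKSAdminGroup --user-name jenkins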
Here's my ClusterRole and ClusterRoleBinding manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin-role
rules:
- apiGroups:
  - ""
  - "apps"
  - "batch"
  - "extensions"
  resources:
  - "configmaps"
  - "cronjobs"
  - "deployments"
  - "events"
  - "ingresses"
  - "jobs"
  - "pods"
  - "pods/attach"
  - "pods/exec"
  - "pods/log"
  - "pods/portforward"
  - "secrets"
  - "services"
  verbs:
  - "create"
  - "delete"
  - "describe"
  - "get"
  - "list"
  - "patch"
  - "update"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
subjects:
- kind: Group
  name: cluster-admins
roleRef:
  kind: ClusterRole
  name: cluster-admin-role
  apiGroup: rbac.authorization.k8s.io
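(For reference, after applying that manifest with kubectl apply -f, the group's permissions can be spot-checked via impersonation from an admin context; the file name and user here are just placeholders:)

$ kubectl apply -f cluster-admin-rbac.yaml
# requires impersonation rights in the current kubectl context
$ kubectl auth can-i list pods --as=some-user --as-group=cluster-admins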
Now I'm on a machine that has the above jenkins user's credentials in ~/.aws/credentials. I want to execute kubectl commands there. So I do:
$ cat ~/.aws/credentials
[default]
aws_access_key_id = ABCEDFGHIJKLMNOP
aws_secret_access_key = *****
$ aws eks update-kubeconfig --name sandbox --region us-east-1 --role-arn arn:aws:iam::0123456789:role/EKSClusterAdminRole
Updated context arn:aws:eks:us-east-1:0123456789:cluster/sandbox in /home/ubuntu/.kube/config
$ kubectl config view
apiVersion: v1
kind: Config
...
users:
- name: arn:aws:eks:us-east-1:0123456789:cluster/sandbox
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - sandbox
      - --role
      - arn:aws:iam::0123456789:role/EKSClusterAdminRole
      command: aws
      env: null
      provideClusterInfo: false
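(For what it's worth, that exec block makes the aws CLI assume EKSClusterAdminRole with the jenkins credentials before fetching a token, so the role assumption can also be exercised directly, e.g.:)

# confirm which credentials kubectl will pick up
$ aws sts get-caller-identity
# try the same assume-role the exec plugin performs
$ aws sts assume-role --role-arn arn:aws:iam::0123456789:role/EKSClusterAdminRole --role-session-name eks-test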
Here's (part of) my EKS cluster's aws-auth ConfigMap
apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - rolearn: arn:aws:iam::0123456789:role/EKSClusterAdminRole
      username: cluster-admins
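(The full ConfigMap can be dumped with:)

$ kubectl -n kube-system get configmap aws-auth -o yaml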
But I get, for example
$ kubectl get ns
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::0123456789:user/jenkins is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::0123456789:role/EKSClusterAdminRole
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
What's the deal, pickle? It seems like I did everything in Troubleshooting IAM roles that's pertinent to my issue.