Our system uses Amazon EKS Kubernetes clusters. We manage them from EC2 "bastion host" instances: we SSH in, configure our AWS credentials, and run kubectl against the cluster.
We do not want to keep configuring credentials on the bastion host by hand.
So I am trying to attach an IAM instance profile to the EC2 host, backed by a role with the permissions needed to manage the cluster. However, it does not work.
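For reference, the profile was attached along these lines (the profile, role, and instance names below are placeholders, not the real ones):
aws iam create-instance-profile --instance-profile-name eks-bastion-profile
aws iam add-role-to-instance-profile --instance-profile-name eks-bastion-profile --role-name eks-bastion-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=eks-bastion-profile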
The EC2 instance has an IAM instance profile associated with a role whose policy grants full EKS permissions:
{
    "Sid": "2",
    "Effect": "Allow",
    "Action": [
        "eks:*"
    ],
    "Resource": "*"
}
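For completeness, the role can be assumed by EC2 in the usual way; its trust relationship is the standard one for instance profiles (shown here in its generic form, not copied from my account):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}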
So when I run the following commands, I expect to be able to list the services running on the cluster:
[ec2-user@ip-10-0-0-72 ~]$ aws eks update-kubeconfig --name my_eks_cluster_name
[ec2-user@ip-10-0-0-72 ~]$ kubectl get svc
error: the server doesn't have a resource type "svc"
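A couple of checks that help show which identity kubectl ends up using from the instance (output omitted here):
# Which principal the instance-profile credentials resolve to
aws sts get-caller-identity
# Verbose output shows the discovery requests that fail behind the "svc" error
kubectl get svc -v=6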
The error message suggests an authentication/authorization problem rather than a genuinely missing resource type. I verified this by exporting a working set of AWS credentials on the host:
[ec2-user@ip-10-0-0-72 ~]$ export AWS_ACCESS_KEY_ID=...
[ec2-user@ip-10-0-0-72 ~]$ export AWS_SECRET_ACCESS_KEY=...
[ec2-user@ip-10-0-0-72 ~]$ export AWS_SESSION_TOKEN=...
Now I try to list the services again:
[ec2-user@ip-10-0-0-72 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 85d
It works (the exported variables take precedence over the instance profile in the AWS credential provider chain, so this run used the pasted credentials rather than the instance role).
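For context, the kubeconfig that update-kubeconfig writes does not embed any credentials; it tells kubectl to fetch a token through the AWS CLI (or aws-iam-authenticator on older versions), so whichever credentials the CLI resolves at that moment are what the cluster sees. The user entry looks roughly like this (the region and account ID are placeholders, and the exact apiVersion and args vary with the CLI version):
users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/my_eks_cluster_name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - my_eks_cluster_name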
Why is the IAM instance role not sufficient to allow me to manage my EKS cluster via kubectl?