I've gone through this guide to allow one of the pods running on my EKS cluster to access a remote EKS cluster using kubectl. I'm currently running a pod from the amazon/aws-cli image inside my cluster, mounting a service account token that lets me assume an IAM role mapped to Kubernetes RBAC on the remote cluster, as the guide describes. I've verified that the role is correctly assumed by running aws sts get-caller-identity, and this is indeed the case.
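For reference, the identity check from inside the pod looks like this (account ID, role name and session name redacted; the exact session name is whatever botocore generates):

# Confirm the mounted service account token is exchanged for the IAM role
$ aws sts get-caller-identity
{
    "UserId": "<redacted>:<session-name>",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/<role-name>/<session-name>"
}

The Arn shows the assumed-role identity I expect.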
I've now installed kubectl and configured ~/.kube/config like so:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: <redacted>
  name: <cluster-arn>
contexts:
- context:
    cluster: <cluster-arn>
    user: <cluster-arn>
  name: <cluster-arn>
current-context: <cluster-arn>
kind: Config
preferences: {}
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - <role-arn>
      command: aws
      env: null
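As a sanity check, the exec plugin can be run by hand with the same command and args as in the users stanza above. It should print an ExecCredential document rather than an error; this is a sketch of the expected shape, with the timestamp and token elided:

# Invoke the credential plugin exactly as kubectl would
$ aws --region us-east-2 eks get-token --cluster-name <cluster-name> --role <role-arn>
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "spec": {},
    "status": {
        "expirationTimestamp": "<elided>",
        "token": "k8s-aws-v1.<elided>"
    }
}

If this succeeds, the token itself is being minted on the AWS side, and the rejection must be coming from the remote cluster's authentication of that token.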
However, every operation I try to carry out using kubectl results in this error:

error: You must be logged in to the server (Unauthorized)
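A minimal reproduction (any verb and resource fail the same way):

# kubectl's -v flag raises client-side log verbosity; 8 and above include the raw HTTP exchange
$ kubectl get pods -v=8
... (request/response logs) ...
error: You must be logged in to the server (Unauthorized)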
I have no idea what I misconfigured, and would appreciate any ideas on how to get a more verbose error message.