
I've gone over this guide to allow one of the pods running on my EKS cluster to access a remote EKS cluster using kubectl.

I'm currently running a pod using the amazon/aws-cli image inside my cluster, mounting a service account token that allows me to assume an IAM role configured with Kubernetes RBAC according to the guide above. I've verified that the role is correctly assumed by running aws sts get-caller-identity, and this is indeed the case.
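
For context, the setup looks roughly like this (assuming the guide's IRSA-style approach; the service account and pod names here are just placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: remote-eks-access            # placeholder name
  annotations:
    eks.amazonaws.com/role-arn: <role-arn>
---
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
spec:
  serviceAccountName: remote-eks-access
  containers:
  - name: aws-cli
    image: amazon/aws-cli
    command: ["sleep", "infinity"]   # keep the container around so I can exec into it

Inside the pod, aws sts get-caller-identity returns the assumed-role ARN for <role-arn>, so the credentials themselves look fine.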

I've now installed kubectl and configured my kubeconfig like so:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: <redacted>
  name: <cluster-arn>
contexts:
- context:
    cluster: <cluster-arn>
    user: <cluster-arn>
  name: <cluster-arn>
current-context: <cluster-arn>
kind: Config
preferences: {}
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - <role-arn>
      command: aws
      env: null

However, every operation I try to carry out using kubectl results in this error -
error: You must be logged in to the server (Unauthorized)

I have no idea what I misconfigured, and would appreciate any ideas on how to get a more verbose error message.
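
The closest I've gotten to more detail is raising kubectl's verbosity and running the kubeconfig's exec command by hand:

kubectl get pods -v=8

aws --region us-east-2 eks get-token --cluster-name <cluster-name> --role <role-arn>

The -v=8 flag should show the raw API requests and the 401 response; the second command just prints the ExecCredential token that kubectl would send.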


1 Answer


If the AWS CLI is already running as the role you want, then there's no need to specify --role <role-arn> in the kubeconfig args.

By leaving them in, the identity shown by aws sts get-caller-identity will need sts:AssumeRole permission on <role-arn>. If they are the same role, it has to be able to assume itself, which is redundant (and only works if the role's trust policy allows it).
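
If you do want to keep the --role argument, it's worth checking from inside the pod that the extra assume-role hop works at all, with something like:

aws sts assume-role --role-arn <role-arn> --role-session-name kubeconfig-test

If that fails with an AccessDenied, the get-token step in your kubeconfig won't be able to pick up <role-arn> either.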

So I'd try removing those args from the kubeconfig and see if that helps.
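
That is, the users section would end up looking something like this (keeping the rest of your config as-is):

users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env: null

Without --role, get-token signs the token with whatever credentials the CLI already has, which in your case should be the role picked up from the mounted service account token.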