5 votes

Trying to set up an EKS cluster. An error occurred (AccessDeniedException) when calling the DescribeCluster operation: Account xxx is not authorized to use this service. This error came from the CLI; on the console I was able to create the cluster and everything else successfully. I am logged in as the root user (it's just my personal account).

It says Account, so it sounds like it's not a user/permissions issue? Do I have to enable my account for this service? I don't see any such option.
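One thing worth ruling out first is the CLI's configured region, since an account-level AccessDeniedException can also mean EKS is simply not offered in that region. A minimal check, assuming the default profile (us-east-1 below is only an example of a region that supports EKS):

# Show the region the CLI will use for EKS calls
aws configure get region
# If this succeeds in a supported region, the problem is the region, not the account
aws eks list-clusters --region us-east-1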

Also, if I log in as an IAM user (rather than root), will I be able to see everything that was earlier created as root? I have now created a user and assigned admin and eks* permissions, and I checked this: when I sign in as the user, I can see everything.
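For reference, attaching the AWS-managed admin policy can also be done from the CLI; a minimal sketch, with eks-admin standing in for the actual IAM user name:

# Attach the AWS-managed AdministratorAccess policy to the IAM user
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess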

The AWS CLI was set up with root credentials (I think), so do I have to go back, undo all of this, and just use this user?
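Rather than guessing, the STS caller-identity call shows exactly which credentials the CLI is using; a quick check, assuming the default profile:

# Prints the account, user ID, and ARN of the active credentials;
# an ARN ending in :root means root keys, :user/<name> means an IAM user
aws sts get-caller-identity
# To switch the CLI to the IAM user's access keys, rerun configure
aws configure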

Update 1
I redid/restarted everything, including the user and aws configure, just to make sure, but the issue still did not get resolved.

There is an option to create the kubeconfig file manually - that finally worked.
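For comparison, the non-manual route is to let the AWS CLI generate the kubeconfig entry; a sketch, assuming the cluster is named EKS-nginixClstr and lives in us-east-1 (both stand-ins for the real values):

# Writes a context for the cluster into the given kubeconfig file
aws eks update-kubeconfig --name EKS-nginixClstr --region us-east-1 --kubeconfig C:\Users\sbaha\.kube\config-EKS-nginixClstr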

And I was able to run:

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   48m

KUBECONFIG: I had set up the KUBECONFIG environment variable:

$env:KUBECONFIG="C:\Users\sbaha\.kube\config-EKS-nginixClstr"
$Env:KUBECONFIG
C:\Users\sbaha\.kube\config-EKS-nginixClstr
kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
*         aws    kubernetes   aws
kubectl config current-context
aws

My understanding is I should see both the aws and my EKS-nginixClstr contexts, but I only see aws - is this (also) an issue?
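A note on that: the NAME column is the context name stored inside the kubeconfig file, not the file name, and a manually created file often just calls its single context aws. If a friendlier name is wanted, the context can be renamed in place; a minimal sketch:

# Renames the context inside the active kubeconfig; the cluster itself is untouched
kubectl config rename-context aws EKS-nginixClstr
kubectl config get-contexts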

Next step is to create and add worker nodes. I updated the node role ARN correctly in the .yaml file and applied it:

kubectl apply -f ~\.kube\aws-auth-cm.yaml
configmap/aws-auth configured

So this perhaps worked.
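One way to confirm the apply picked up the intended role ARN, rather than assuming it did, is to read the live ConfigMap back; a minimal check:

# Dump the live aws-auth ConfigMap so the rolearn can be verified by eye
kubectl describe configmap -n kube-system aws-auth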

But the next step fails:

kubectl get nodes
No resources found in default namespace.

On the AWS console the node group shows Create Completed. Also, on the CLI, kubectl get nodes --watch does not even return.

So this has to be debugged next (it never ends).
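Some starting points for that debugging, sketched on the assumption that the node group's instances launched but never registered with the cluster:

# Recent events often show why kubelets fail to register
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
# Confirm the node group's EC2 instances are actually running
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"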

aws-auth-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
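Worth double-checking that the rolearn matches the instance role exactly - the format is arn:aws:iam::<account-id>:role/<name>, with the account ID appearing only once. A sketch to fetch the canonical ARN, assuming the role name above:

# Prints the role's exact ARN for pasting into aws-auth-cm.yaml
aws iam get-role --role-name Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4 --query Role.Arn --output text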
Is this from the console, from the CLI, or from code (SDK)? If CLI or SDK, how are you setting your AWS credentials? (For example, if CLI, did you use aws configure?) - Seth E
It is from the PowerShell CLI. Yes, I have used aws configure to set up the CLI - likely with my root credentials (it was a while ago). But on the AWS Management Console I can do everything with EKS. - Sam-T
If your EKS cluster was created by root and you use the CLI as a different user (even one with admin permission), you will get this error. Create the EKS cluster from the CLI itself with the same IAM user. - Mahattam
@Mahattam - is the issue the different user, the different tool, or both? I'd have to use both the console and the CLI for this, per the instructions. My different user also has admin permissions. Also, the error sounds like an account permission rather than a user permission? - Sam-T
This is probably not what you're seeing, but I just ran into this problem when trying to create an EKS cluster in ca-central-1, which doesn't support EKS. -_- - Jason Walton

1 Answer

2 votes

This problem was related to not having the correct version of eksctl - it must be at least 0.7.0. The documentation states this, and I knew it, but initially, whatever I did, I could not get beyond 0.6.0. The way to get it is to configure your AWS CLI to a region that supports EKS. Once you get 0.7.0, this issue gets resolved.
Overall, to make EKS work you must use the same user on both the console and the CLI, work in a region that supports EKS, and have eksctl version at least 0.7.0.
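For anyone verifying the same prerequisites, both can be checked from the shell (the commands below are standard; nothing here is specific to this cluster):

# Must print 0.7.0 or later
eksctl version
# Confirm the CLI's default region is one where EKS is offered
aws configure get region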