
We're trying to set up logging for an EKS cluster. We're currently running a cluster with a single EC2 instance. AWS provides kubectl commands that create the necessary Kubernetes config.

When applying this to our existing EKS setup, we can't figure out how to give the instance role the permissions to write to CloudWatch.

The CloudWatch agent shows these errors in its pod logs:

2020-01-07T10:20:14Z E! CreateLogStream / CreateLogGroup with log group name /aws/containerinsights/eks-test-EKS/performance stream name ip-10-0-2-156.eu-central-1.compute.internal has errors. Will retry the request: AccessDeniedException: User: arn:aws:sts::662458865874:assumed-role/eks-test-ekstestEKSDefaultCapacityInstanceRole9446-7MD87BB2AA51/i-01e94e054f3c34383 is not authorized to perform: logs:CreateLogStream on resource: arn:aws:logs:eu-central-1:662458865874:log-group:/aws/containerinsights/eks-test-EKS/performance:log-stream:ip-10-0-2-156.eu-central-1.compute.internal
    status code: 400, request id: 645166c7-34fb-4616-9f27-830986e27469

The problem seems to be that CDK creates this CloudFormation IAM instance role without the permission to write to CloudWatch (a sketch of the missing statement follows the template below):

  ekstestEKSDefaultCapacityInstanceRole9446FBA6:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
        Version: "2012-10-17"
      ManagedPolicyArns:
        - Fn::Join:
            - ""
            - - "arn:"
              - Ref: AWS::Partition
              - :iam::aws:policy/AmazonEKSWorkerNodePolicy
        - Fn::Join:
            - ""
            - - "arn:"
              - Ref: AWS::Partition
              - :iam::aws:policy/AmazonEKS_CNI_Policy
        - Fn::Join:
            - ""
            - - "arn:"
              - Ref: AWS::Partition
              - :iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      Tags:
        - Key: Name
          Value: eks-test/eks-test-EKS/DefaultCapacity
        - Key:
            Fn::Join:
              - ""
              - - kubernetes.io/cluster/
                - Ref: ekstestEKSBA2E781A
          Value: owned
    Metadata:
      aws:cdk:path: eks-test/eks-test-EKS/DefaultCapacity/InstanceRole/Resource
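For reference, here's a minimal sketch of the statement the agent seems to need, based on the CreateLogStream/CreateLogGroup errors above (the exact action list and resource scope are my assumption):

import * as iam from '@aws-cdk/aws-iam';

// Assumed minimal CloudWatch Logs permissions for the agent, derived from the
// errors above plus the usual log-writing calls.
const cloudWatchLogsPolicy = new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    'logs:CreateLogGroup',
    'logs:CreateLogStream',
    'logs:PutLogEvents',
    'logs:DescribeLogStreams',
  ],
  resources: ['arn:aws:logs:*:*:*'],
});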

AWS also mentions that one should "check the IAM role", but I don't know how to modify it.

In the EKS Cluster construct I can set a cluster role, but that's not the same role, and the instance role is still created without the CloudWatch permissions.

Is there another CDK method I can use to modify the instance role, or a workaround if there isn't?

UPDATE: I've linked back to this question from an issue on CDK's GitHub page. Someone mentioned the upcoming IAM roles for service accounts feature, but I'm wondering whether "regular" IAM role mapping to Kubernetes roles is the way to go here...

UPDATE: According to the CDK code, the role seems to be created as part of the AutoScalingGroup. I assume this can be overridden to add different roles...
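A minimal sketch of what I have in mind, assuming the cluster's defaultCapacity property exposes the generated AutoScalingGroup and its instance role (reusing the cloudWatchLogsPolicy statement sketched above):

import * as eks from '@aws-cdk/aws-eks';

const cluster = new eks.Cluster(this, 'eks-test-EKS', clusterProps);

// Attach the (assumed) CloudWatch Logs statement to the generated instance role.
// defaultCapacity is undefined when the cluster is created with defaultCapacity: 0.
cluster.defaultCapacity?.role.addToPolicy(cloudWatchLogsPolicy);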

try to add .addToRolePolicy(new PolicyStatement({ effect: Effect.ALLOW, resources: ['*'], actions: ['*'] })) FOR TEST ONLY: this will add administrator permissions for the EKS – Amit Baranes
@AmitBaranes this seems to be a function for lambdas, but I'm trying to change/create the role of an EC2 instance underlying the EKS cluster. – kossmoboleat

1 Answer


You can add the policy to the default capacity's role after creating the cluster:

const cluster = new Cluster(this, name, clusterProps);
cluster.defaultCapacity?.role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('CloudWatchFullAccess'));
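If CloudWatchFullAccess is broader than you want to grant, the same call should also work with a narrower policy, e.g. the AWS managed CloudWatchAgentServerPolicy (assuming it covers the Container Insights log calls) or an inline statement like the one sketched in the question:

// Narrower alternative (assumption: CloudWatchAgentServerPolicy is enough for Container Insights):
cluster.defaultCapacity?.role.addManagedPolicy(
  ManagedPolicy.fromAwsManagedPolicyName('CloudWatchAgentServerPolicy'));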