So, I'm using boto3's S3 client in a Python script just to list a bucket.

import boto3
s3_client = boto3.client('s3')

It works fine when I run it on my desktop because I set up the aws_access_key_id and aws_secret_access_key with the aws configure command.

When I run it on AWS as a container, will I need to pass aws_access_key_id and aws_secret_access_key into the container as environment variables for boto3 to use? Or, if the identity my container runs as already has access to S3, will that work on its own so I don't need the keys at all?
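
Here's roughly what the environment-variable version would look like; boto3 reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment automatically (the bucket name is just a placeholder):

import boto3

# If the container is started with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# set in its environment, boto3's default credential chain picks them up
# with no code changes:
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket='my-bucket')  # placeholder bucket
for obj in response.get('Contents', []):
    print(obj['Key'])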

There are many ways to run Docker on AWS. How exactly are you doing it? ECS, Fargate, EKS, an EC2 instance, AWS Batch, Lambda, and probably more? - Marcin
ECS and Fargate are what I plan on using. The question is: when I run my container there on my AWS account, if the identity I'm running as has access to S3, would I still need to set up the AWS key and secret key in boto3, or will it work off the permissions I give to my task execution role? - Jimmy Chen
How did it go? Is it still unclear how to pass permissions to ECS? - Marcin

1 Answer


would I still need to set up the AWS key and secret key in boto3, or will it work off the permissions I give to my task execution role

You don't have to hard-code anything or pass keys in as environment variables. Instead, you grant the permissions with IAM Roles for Tasks: attach an S3 policy to the task role in your task definition (note that this is the task role, not the task execution role; the execution role is used by ECS itself, e.g. to pull images and write logs). Boto3 resolves the role's temporary credentials automatically through its default credential chain, so your code needs no changes.
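
As a minimal sketch of how this looks in practice (the bucket name and the role name in the comment are placeholders): the script stays credential-free, and you can confirm which identity boto3 resolved with STS.

import boto3

# With an IAM task role attached to the ECS task, boto3 fetches temporary
# credentials from the container credentials endpoint automatically
# (ECS sets AWS_CONTAINER_CREDENTIALS_RELATIVE_URI for you).
print(boto3.client('sts').get_caller_identity()['Arn'])
# e.g. arn:aws:sts::<account-id>:assumed-role/<your-task-role>/<session>

s3_client = boto3.client('s3')  # no keys, no environment variables
response = s3_client.list_objects_v2(Bucket='my-bucket')  # placeholder bucket
for obj in response.get('Contents', []):
    print(obj['Key'])

The role itself is attached via the taskRoleArn field of the task definition; any policy granting s3:ListBucket / s3:GetObject on your bucket will do.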