
I want to create a bucket that can be accessed by users in a certain IP range without having to log in. These users should be able to freely upload files to that bucket without logging in, and I want to access those files from a Lambda using the S3 file link provided by my users.

I am trying to first allow anyone to access the bucket without logging in before adding IP restrictions.


I made the bucket public with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-public-bucket/*"
        }
    ]
}

I also granted public write access to the bucket via its ACL:

[screenshot: S3 ACL UI]

Right now, if I open the bucket's console link (https://s3.console.aws.amazon.com/s3/buckets/my-public-bucket/?region=us-west-2&tab=overview) in an incognito window, I get this:

I thought I could use the static website hosting URL (http://my-public-bucket.s3-website-us-west-2.amazonaws.com), but that is only for hosting websites.

Is my only option to create a new IAM role and give its credentials to the users? That is a very bad user experience and I want to avoid it.

Can you have them use an FTP type application like Cloudberry or WinSCP instead? You enter credentials once, then use a file-oriented UI for file transfers. - Dave S
I really don't want to move outside AWS. Is this not possible using AWS? - Parth Tamane
Are you suggesting anonymous file uploads, or that you have a frontend app that should be able to upload without a user having IAM credentials? - Chris Williams
I want to support anonymous file uploads. I thought this was supported but I could not see any documentation on enabling it. - Parth Tamane
Don't use ACLs, just use an S3 bucket policy. Your bucket policy, as written, allows unauthenticated users to get all objects in the my-public-bucket bucket, but it doesn't currently allow them to list the bucket. Yes, you can support unauthenticated uploads to S3 - it's not a great security practice, of course, but you can do it if it's absolutely needed. Note that doing so allows user B to overwrite an object that user A just uploaded. - jarmod

1 Answer


The Amazon S3 management console requires a lot of permissions to work "correctly", such as the ability to list all buckets, list the contents of a bucket, and so on.

If your goal is to give specific users access to a bucket, first consider HOW they will be accessing it. If they will be automating the process, the AWS CLI is a good option since it can be easily scripted, and only specific permissions are required (e.g. s3:PutObject).
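For example, a scripted upload might look like this (hypothetical file names; assumes the AWS CLI is installed and configured with credentials that allow s3:PutObject on the bucket):

```shell
# Upload a single file; only s3:PutObject is needed for this
aws s3 cp report.csv s3://my-public-bucket/uploads/report.csv

# Upload everything in a local folder
aws s3 sync ./outgoing s3://my-public-bucket/uploads/
```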

Using the AWS CLI does require AWS credentials, which can either come from an IAM User (not recommended for people outside your organization unless you have an ongoing relationship with them) or from temporary credentials generated by your own back-end app (which would authenticate the users, then generate temporary AWS credentials for them).
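As a sketch of that back-end option, a server holding its own IAM credentials could call AWS STS to mint short-lived credentials to hand to a user:

```shell
# Returns a temporary AccessKeyId, SecretAccessKey and SessionToken
# valid for one hour. GetSessionToken cannot scope permissions down;
# GetFederationToken or AssumeRole can attach a restrictive policy.
aws sts get-session-token --duration-seconds 3600
```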

If the AWS CLI is too "unfriendly" for these users, utilities such as Cyberduck can provide a familiar drag-and-drop interface to S3. However, they need the same credentials as the AWS CLI would use.

You could provide anonymous access to the bucket restricted to an IP address range, but users would then need to interact with the bucket via direct HTTP POSTs, presumably through a website you provide to them. This is because tools like the console and the CLI authenticate all of their API calls by default.
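As a sketch, anonymous uploads restricted by source IP could be granted with a bucket-policy statement like this, added alongside the PublicRead statement above (the CIDR range is a placeholder):

```json
{
    "Sid": "IPRestrictedAnonymousUpload",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-public-bucket/*",
    "Condition": {
        "IpAddress": {
            "aws:SourceIp": "203.0.113.0/24"
        }
    }
}
```

With such a statement in place, anyone in that range can PUT objects without credentials. As noted in the comments, that also lets one user overwrite an object another user just uploaded, so consider unique key prefixes per upload.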