60
votes

I've recently inherited a Rails app that uses S3 for asset storage. I transferred all assets to my S3 bucket with no issues. However, when I alter the app to point to the new bucket I get a 403 Forbidden status.

My S3 bucket is set up with the following settings:

Permissions

Everyone can list

Bucket Policy

{
 "Version": "2012-10-17",
 "Statement": [
    {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucketname/*"
    }
 ]
}

CORS Configuration

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>https://www.appdomain.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Static Web Hosting

Enabled.

What else can I do to allow the public to reach these assets?
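(For reference, one way to sanity-check a bucket policy like the one above before blaming other settings is a small script. This is only a sketch; it checks the exact fields S3 requires for anonymous reads and ignores less common shapes such as list-valued `Resource` or `Principal`.)

```python
import json

def allows_public_read(policy_json, bucket):
    """Return True if any statement grants anonymous s3:GetObject
    on the bucket's objects (string-valued Resource/Principal only)."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") == "*"
                and "s3:GetObject" in actions
                and stmt.get("Resource") == f"arn:aws:s3:::{bucket}/*"):
            return True
    return False

# The policy from the question:
policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::bucketname/*"
  }]
}"""
print(allows_public_read(policy, "bucketname"))  # True: the policy itself is fine
```

If the policy checks out, the 403 is coming from somewhere else (object ACLs, account-level public-access blocks, or signing).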


11 Answers

44
votes

I know this is an old thread, but I just encountered the same problem. Everything had been working for months, then it suddenly stopped, giving me a 403 Forbidden error. It turned out the system clock was the real culprit; I think S3 uses some sort of time-based token with a very short lifespan. In my case I just ran:

ntpdate pool.ntp.org

And the problem went away. I'm running CentOS 6 if it's of any relevance. This was the sample output:

19 Aug 20:57:15 ntpdate[63275]: step time server ip_address offset 438.080758 sec

Hope it helps!
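(The short-lived token guessed at above is most likely the request signature: AWS compares the timestamp signed into each request against its own clock, and a presigned URL generated by a slow clock can look already expired on arrival. A rough sketch of the arithmetic; `presigned_url_already_expired` is a hypothetical helper, not an AWS API.)

```python
def presigned_url_already_expired(clock_behind_seconds, ttl_seconds):
    """A clock that is `clock_behind_seconds` slow signs a URL whose
    expiry is local_time + ttl, i.e. (true_time - clock_behind + ttl)
    in real time. If that moment is already past when S3 receives the
    request, S3 answers 403."""
    return ttl_seconds <= clock_behind_seconds

# The ntpdate output above showed a 438-second offset:
print(presigned_url_already_expired(438.08, 60))    # True: a 60 s URL is dead on arrival
print(presigned_url_already_expired(438.08, 3600))  # False: a 1 h URL survives the skew
```

So a skew far smaller than anything you would notice by eye can still kill short-lived URLs, which is why the problem looks intermittent.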

27
votes

The transfer was done according to this thread, which by itself is not a problem. The real issue was that the previous developer did not change permissions on the files before transferring them. This meant I could not manage any of the files, even though they were in my bucket.

The issue was solved by re-downloading the files cleanly from the previous bucket, deleting the old phantom files, re-uploading the fresh files, and setting their permissions to allow public reads.

26
votes

It could also be that a proper policy needs to be set according to the AWS docs.

Give the bucket in question this policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
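(If you would rather apply this from code than paste it into the console, the same document can be generated with the bucket name filled in; this is just a sketch that builds the JSON string. boto3's `put_bucket_policy(Bucket=..., Policy=...)` accepts the resulting string directly.)

```python
import json

def public_read_policy(bucket):
    """Build the public-read bucket policy from this answer
    for a given bucket name."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }, indent=2)

print(public_read_policy("YOUR-BUCKET-NAME"))
```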

11
votes

I had the same problem; adding * at the end of the policy's bucket resource solved it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

3
votes

Here's the bucket policy I used to make the index.html file inside my S3 bucket accessible from the internet:

[screenshot: bucket policy editor]

I also needed to go to Permissions -> "Block Public Access" and remove the block-public-access rules for the bucket:

[screenshot: Block Public Access settings, all rules unchecked]

Also make sure the access permissions for the individual objects inside the bucket are open to the public:

[screenshot: object-level permissions]

1
votes

One weird thing that fixed this for me, after already setting up the correct permissions, was removing the extension from the filename. I had many items in the bucket, all with the same permissions; some worked fine and some returned 403. The only difference was that the ones that didn't work had .png at the end of the filename. When I removed that, they worked fine. No idea why.

0
votes

For me, none of the other answers worked. File permissions, bucket policies, and clock were all fine. For me, the issue was intermittent, and while it may sound trite, the following have both worked for me previously:

  1. Log out and log back in.
  2. If you are trying to upload a single file, try a bulk upload; conversely, if you are trying a bulk upload, try a single file.

0
votes

Another "solution" here: I was using Buddy to automate uploading a GitHub repo to an S3 bucket, which requires programmatic write access to the bucket. The access policy for the IAM user first looked like the following (only allowing those six actions to be performed in the target bucket):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<bucket_name>/*"
        }
    ]
}

My bucket access policy was the following: (allowing read/write access for the IAM user).

{
    "Version": "2012-10-17",
    "Id": "1234",
    "Statement": [
        {
            "Sid": "5678",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<IAM_user_arn>"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<bucket_name>/*"
        }
    ]
}
However, this kept giving me the 403 error.

My workaround solution was to give the IAM user access to all s3 resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "*"
        }
    ]
}

This got me around the 403 error, although it clearly grants more access than it should.
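(A likely reason the scoped policy failed is that `s3:ListBucket` is a bucket-level action: it must be granted on the bucket ARN itself, not on `<bucket_name>/*`, while the object actions need the `/*` form. A least-privilege sketch along those lines, untested, with the same placeholder bucket name; `s3:ListAllMyBuckets` would additionally need its own statement on `"Resource": "*"` if you want it.)

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ObjectReadWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<bucket_name>/*"
        },
        {
            "Sid": "BucketList",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<bucket_name>"
        }
    ]
}
```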

0
votes

Just found the same issue on my side in my iPhone app. It was working completely fine on Android with the same configuration and S3 setup, but the iPhone app was throwing an error. I reached out to the Amazon support team about the issue; after checking logs on their end, they told me my iPhone had the wrong date and time. I went into my iPhone's settings, corrected the date and time, then tried to upload a new image, and it worked as expected.

If you are having the same issue and have the wrong date or time on your iPhone or simulator, this may help you.

Thanks!

0
votes

For me it was the Public access setting under the Access Control tab.

Just ensure the read and write permissions under Public access are set to Yes; by default they show "-", which means No.

Happy coding.

JFYI: I am using Flutter for my Android development.

[screenshot: Access Control public access settings]

0
votes

Make sure you use the correct AWS profile (dev / prod, etc.)!
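(A quick sanity check before blaming permissions is to print which profile the CLI/SDK will actually pick up. A minimal sketch using only environment variables; `effective_aws_profile` is a hypothetical helper, and real tooling also reads `~/.aws/config` and `~/.aws/credentials`, which this does not cover.)

```python
import os

def effective_aws_profile(environ=os.environ):
    """Return the profile name the AWS CLI/SDKs would take from the
    environment: AWS_PROFILE wins, then the older AWS_DEFAULT_PROFILE,
    otherwise 'default'."""
    return environ.get("AWS_PROFILE",
                       environ.get("AWS_DEFAULT_PROFILE", "default"))

print(effective_aws_profile({"AWS_PROFILE": "prod"}))  # prod
print(effective_aws_profile({}))                       # default
```

If this prints the wrong environment, a perfectly correct bucket policy will still get you a 403 from the wrong account.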