4
votes

When I try to deploy a Java web app to an Elastic Beanstalk Tomcat container, it fails with the following error:

Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.

Please note the following points:

  • Deployment is automated via Jenkins running on an EC2 server.
  • This error is not a continuous issue. Sometimes the deployment succeeds, and sometimes it fails with the above error.
5
I have the same issue, but for me it happens as soon as I add the .ebextensions folder. Still researching what's going on... – Aldo Reyes
I'm seeing the exact same thing; soon after I added the .ebextensions folder this started happening. No problems deploying manually, though. – Rick

5 Answers

5
votes

I had this exact problem. From what I could tell it was completely random, but it turned out to be linked to IAM roles. Everything worked perfectly until I added .ebextensions with a database migration script; after that I couldn't get my Bamboo builder to work again. However, I managed to figure it out (no thanks to Amazon's non-existent documentation on which permissions are needed for EB).

I based my IAM policy on this Gist: https://gist.github.com/magnetikonline/5034bdbb049181a96ac9

However, I had to make some modifications. This specific issue was caused by an overly restrictive policy on S3 Get operations, so I simply replaced the one provided with:

{
    "Action": [
        "s3:Get*"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*/*"
    ]
},

This allows users with the policy to perform all kinds of Get operations on the bucket, since I couldn't be bothered to find out which specific one was required.

1
votes

Deploying to Elastic Beanstalk involves uploading a zipped artifact to S3 and modifying the CloudFormation templates (the latter part is handled for you).

Likely the IAM role attached to the Jenkins runner (or its access credentials) does not have access to the relevant S3 buckets. Verify this in IAM. See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.html
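As a sketch of what the Jenkins role would need at minimum, assuming the default `elasticbeanstalk-*` bucket naming convention (the exact action list is an assumption, not confirmed by the original answer):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-*",
                "arn:aws:s3:::elasticbeanstalk-*/*"
            ]
        }
    ]
}
```

Note that `s3:ListBucket` is evaluated against the bucket ARN itself, while `s3:GetObject` and `s3:PutObject` are evaluated against the `/*` object ARN, which is why both resource forms appear.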

0
votes

This is an edge case, but I wanted to capture it here for posterity. This error message is sometimes returned as a generic error. I spent many weeks working through it with AWS, only to find out that it was related to Security Token Service (STS) credentials expiring. When you generate STS credentials, the maximum session duration is 36 hours. If you generate a 36-hour key, some services used by Elastic Beanstalk don't respect this session length and consider the session expired. To work around this, we no longer allow STS credentials with a session length longer than 2 hours.
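As a sketch of that workaround, the session length can be capped when the credentials are issued; the role ARN and session name below are placeholders, not from the original:

```
# Request STS credentials with a 2-hour (7200 s) session instead of 36 hours.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/deploy-role \
  --role-session-name eb-deploy \
  --duration-seconds 7200
```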

0
votes

I have also struggled with this and, as in Rick's case, it turned out to be a permissions problem. But his solution didn't work for me.

I managed to fix this error:

Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.

Adding "s3:Get*" alone wasn't enough; I also needed "s3:List*".

The interesting thing is that I was getting this issue for just one EB environment out of three. It turned out that the other environments deployed to all nodes at once, while the problematic one had Rolling updates enabled (which obviously performs additional actions, such as adding new instances).
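Concretely, the combined statement might look like the following (a sketch in the style of the snippet above; note again that "s3:List*" actions apply at the bucket ARN, not the "/*" object ARN):

```json
{
    "Action": [
        "s3:Get*",
        "s3:List*"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*"
    ]
},
```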

Here is the final IAM policy that works: gist: IAM policy to allow Continuous Integration user to deploy to AWS Elastic Beanstalk

-1
votes

I had the same issue. Based on what I gathered from AWS support, an IAM user requires full access to S3 to perform some actions, such as deployment. This is because EB uses CloudFormation, which uses S3 to store templates. You need to attach the managed policy "AWSElasticBeanstalkFullAccess" to the IAM user performing the deployment, or create a policy like the following and attach it to the user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

Ideally, Amazon would provide a way to restrict the Resource to specific buckets, but it doesn't look like that is doable right now!