In a VPC, I have two subnets: a public subnet with one EC2 instance and a private subnet with two EC2 instances. All three instances have the same IAM role for access to S3.
The EC2 instance in the public subnet can access S3 directly if I log in and run aws s3 ls. However, neither of the EC2 instances in the private subnet can. What could the reasons be?
The EC2 instances in the private subnet use a security group that accepts traffic from the whole VPC.
The EC2 instance in the public subnet uses a security group that accepts traffic from anywhere.
All three EC2 instances use the same route table, the same NACLs, and the same IAM role, with this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
If I manually create a credential profile on an EC2 instance in the private subnet, then it can run aws s3 ls successfully.
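For what it's worth, the role credentials can also be checked from inside a private instance; this is just a diagnostic sketch, assuming IMDSv1 is enabled:

# List the role whose credentials the instance metadata service is serving
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Show which identity the CLI resolves to (instance role vs. a configured profile);
# note this call needs network reachability to the STS endpoint
aws sts get-caller-identity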
Update: the route table of the private subnet does have a VPC endpoint. The route table is:
- dest=10.1.0.0/16 (myVPC) -> target=local
- dest=0.0.0.0/0 -> target=IGW
- dest=pl-68a54001 (com.amazonaws.us-west-2.s3) -> target=vpce-26cf344f
Among these, the third route means the EC2 instances can reach S3 via the VPC endpoint. The second route was added because an ELB sits in front of the EC2 instances and has to reach the internet.
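The endpoint and prefix list can also be confirmed from the CLI (run from somewhere with working API access; these are the same IDs as in the route table above):

# Confirm the gateway endpoint exists and which route tables it is associated with
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-26cf344f

# Confirm pl-68a54001 really is the S3 prefix list for us-west-2
aws ec2 describe-prefix-lists --prefix-list-ids pl-68a54001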
Another observation: if I enable Assign Public IP on the private subnet and then launch a new EC2 instance, that instance can access S3. If I disable Assign Public IP and launch a new instance, it cannot.
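To check whether the instance without a public IP can reach S3 through the endpoint at all, the region (and, if needed, the regional endpoint) can be pinned explicitly; again, just a diagnostic sketch:

# Force the regional endpoint that the gateway endpoint intercepts
aws s3 ls --region us-west-2

# Same, with the endpoint spelled out and debug output to see where the request stalls
aws s3 ls --endpoint-url https://s3.us-west-2.amazonaws.com --debug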
By the way, I already had the region set to us-west-2 before running Terraform:
[ec2-user@ip-XXXXX]$ echo $AWS_DEFAULT_PROFILE
abcdefg
[ec2-user@XXXXX]$ aws configure --profile abcdefg
AWS Access Key ID [****************TRM]:
AWS Secret Access Key [****************nt+]:
Default region name [us-west-2]:
Default output format [None]:
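One more check that can help here: aws configure list shows which credentials and region the CLI is actually using, and whether they come from the profile, environment variables, or the instance role.

# Effective credentials/region for the default lookup chain
aws configure list

# Effective credentials/region when the profile is used explicitly
aws configure list --profile abcdefg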