
I have a simple AWS architecture that I have not been able to make work.

VPC1

  • CIDR 192.168.0.0/16
  • 3 subnets:
    • 192.168.0.0/26
    • 192.168.80.0/26
    • 192.168.160.0/26
  • All are public subnets with Internet Gateway attached
  • Security Group: SG1
  • One EC2 instance running here, private IP: 192.168.0.54

VPC2

  • CIDR: 192.170.0.0/16
  • 3 subnets:
    • 192.170.0.0/26
    • 192.170.80.0/26
    • 192.170.160.0/26
  • All are private subnets and without any NAT Gateway
  • Security Group inbound rules:
    • SSH Port 22 Source 192.168.0.0/16, so that I can ssh from the instance in VPC1
  • One EC2 instance running here, private IP: 192.170.0.49
  • Also tried adding Network ACLs in these subnets as:
    • SSH Port 22 Source 192.168.0.0/16 Allow

However, I am unable to reach (ssh) the second instance (in VPC2) from the first (in VPC1). I even tried adding an ICMP inbound rule, but ping doesn't work either.

Am I missing anything obvious here? Will these rules alone not be able to route traffic between instances in different subnets and VPCs?

Thanks John for the edit and making it more readable. – Rajesh

2 Answers


These IP addresses are private to each VPC and cannot be routed over the internet (as an aside, 192.170.0.0/16 is not actually RFC 1918 private space, so it is better to pick a range such as 10.x.0.0/16 or 172.16-31.x.0.0/16 for VPC2). The VPCs themselves are isolated from each other and from all other VPCs; that is the whole point of a VPC.

If you want to connect from a private IP of an instance in one VPC to the private IP of an instance in a second VPC, then you need to peer the two VPCs. Note that the two VPCs cannot have overlapping IP ranges.
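If you go the peering route, a minimal Terraform sketch looks like the following (resource names such as `aws_vpc.vpc1` and `aws_route_table.vpc1` are assumptions, not from the question; the CIDRs are the ones described above):

```hcl
# Assumed names: aws_vpc.vpc1 / aws_vpc.vpc2 and their route tables.
resource "aws_vpc_peering_connection" "vpc1_to_vpc2" {
  vpc_id      = aws_vpc.vpc1.id
  peer_vpc_id = aws_vpc.vpc2.id
  auto_accept = true # works when both VPCs are in the same account and region
}

# Route from VPC1's route table to VPC2's CIDR via the peering connection
resource "aws_route" "vpc1_to_vpc2" {
  route_table_id            = aws_route_table.vpc1.id
  destination_cidr_block    = "192.170.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc1_to_vpc2.id
}

# Return route from VPC2 back to VPC1 (needed for the SSH reply traffic)
resource "aws_route" "vpc2_to_vpc1" {
  route_table_id            = aws_route_table.vpc2.id
  destination_cidr_block    = "192.168.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.vpc1_to_vpc2.id
}
```

Remember that routes must be added on both sides, and the Security Group rule on the VPC2 instance (SSH from 192.168.0.0/16) the question already has would then apply.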

Alternatively, you can assign public IPs to the instances you want to connect from and to, along with appropriate Security Group rules on the ingress path of the destination instance and an Internet Gateway in both VPCs (VPC2 currently has none). Or use other options for outbound traffic, such as a NAT Gateway.
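For the public-IP approach, a rough sketch in Terraform would be (instance and security group names are assumptions, and `x.x.x.x/32` stands in for the VPC1 instance's public IP):

```hcl
# Attach an Elastic IP to the VPC2 instance (its subnet also needs an
# Internet Gateway route, which the question's VPC2 does not have yet).
resource "aws_eip" "vpc2_instance" {
  domain   = "vpc"
  instance = aws_instance.vpc2_instance.id # assumed instance name
}

# Allow SSH from the VPC1 instance's public IP only
resource "aws_security_group_rule" "ssh_from_vpc1_public" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["x.x.x.x/32"] # placeholder for the source public IP
  security_group_id = aws_security_group.vpc2_sg.id # assumed SG name
}
```

Note that with public IPs the Security Group source must match the public (not private) address of the connecting instance, since the traffic leaves the VPC.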


I resolved the routing issues by introducing a Transit Gateway instead of VPC peering. The best part is that it worked with private IPs (which are under my control) and offers somewhat more granular, subnet-level control compared to VPC peering, although VPC peering should ideally work as well. Here is what I did (in fact, using Terraform):

  • Added a Transit Gateway: resource "aws_ec2_transit_gateway" "transit-gwy" { ... }
  • Defined a VPC attachment for each VPC: resource "aws_ec2_transit_gateway_vpc_attachment" "to-VPC1" and resource "aws_ec2_transit_gateway_vpc_attachment" "to-VPC2"
  • Added a route from VPC1 towards VPC2: resource "aws_route" "VPC1-2-VPC2" { destination_cidr_block = "<>" }
  • Added a route from VPC2 towards VPC1: resource "aws_route" "VPC2-2-VPC1" { destination_cidr_block = "<>" } (this may be optional, depending on the use case)
  • Added an ingress rule to the security group attached to the VPC2 instance, allowing SSH traffic from the VPC1 subnet
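The steps above can be sketched as follows (route table, subnet, and security group names are assumptions; the CIDRs come from the question's setup):

```hcl
# Transit Gateway connecting the two VPCs
resource "aws_ec2_transit_gateway" "transit-gwy" {
  description = "TGW connecting VPC1 and VPC2"
}

# One attachment per VPC; subnet_ids should cover each AZ you use
resource "aws_ec2_transit_gateway_vpc_attachment" "to-VPC1" {
  transit_gateway_id = aws_ec2_transit_gateway.transit-gwy.id
  vpc_id             = aws_vpc.vpc1.id
  subnet_ids         = [aws_subnet.vpc1_a.id] # assumed subnet name
}

resource "aws_ec2_transit_gateway_vpc_attachment" "to-VPC2" {
  transit_gateway_id = aws_ec2_transit_gateway.transit-gwy.id
  vpc_id             = aws_vpc.vpc2.id
  subnet_ids         = [aws_subnet.vpc2_a.id] # assumed subnet name
}

# Route from VPC1's route table towards VPC2's CIDR via the TGW
resource "aws_route" "VPC1-2-VPC2" {
  route_table_id         = aws_route_table.vpc1.id
  destination_cidr_block = "192.170.0.0/16"
  transit_gateway_id     = aws_ec2_transit_gateway.transit-gwy.id
}

# Return route from VPC2 towards VPC1 (may be optional, per the list above)
resource "aws_route" "VPC2-2-VPC1" {
  route_table_id         = aws_route_table.vpc2.id
  destination_cidr_block = "192.168.0.0/16"
  transit_gateway_id     = aws_ec2_transit_gateway.transit-gwy.id
}

# Allow SSH into the VPC2 instance from VPC1's CIDR
resource "aws_security_group_rule" "ssh_from_vpc1" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["192.168.0.0/16"]
  security_group_id = aws_security_group.vpc2_sg.id # assumed SG name
}
```

By default both attachments associate with the Transit Gateway's default route table, which is what makes the two VPCs reachable from each other without any extra TGW route configuration.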

This worked for me.