6 votes

I have two (independent) Kubernetes clusters, one set up as a GKE/GCE cluster and the other set up in an AWS environment created using the kube-up.sh script. Both clusters are working properly and I can start / stop pods, services, and everything in between.

I want pods located in these clusters to communicate with each other, but without exposing them as services. In order to achieve this, I have set up a VPN connection between the two clusters, as well as a couple of routing / firewall rules to make sure the VMs / pods can see each other.

I can confirm that the following scenarios are working properly:

  • VM in GCE -> VM in AWS (OK)

  • Pod in GCE -> VM in AWS (OK)

  • VM in AWS -> VM in GCE (OK)

  • VM in AWS -> Pod in GCE (OK)

  • Pod in AWS -> Pod in GCE (OK)

However, I can't make a VM or Pod in GCE communicate with a Pod in AWS.
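For reference, this is roughly how I re-check each path from inside a pod with kubectl exec and ping (the context name `gke`, the pod name `pod1`, and the target IP from my configuration below are specific to my setup; adjust them to yours):

```shell
# Sketch: re-test the failing path, Pod in GCE -> Pod in AWS.
# pod1 runs busybox, which ships with ping; 172.16.129.5 is Pod3 in AWS.
kubectl --context=gke exec pod1 -- ping -c 3 -W 2 172.16.129.5

# Same destination from VM1 directly (run on the GCE VM), also failing:
ping -c 3 -W 2 172.16.129.5
```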

I was wondering if there is any way of making this work with current AWS VPC capabilities. It seems that when the AWS end of the VPN tunnel receives packets addressed to a pod, it doesn't know what to do with them. GCE networking, on the other hand, is automatically configured with routes that associate each node's pod IP range with the GKE cluster, so when a packet addressed to a pod reaches the GCE end of the VPN tunnel, it is correctly forwarded to its destination.
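The mismatch is easy to see with a quick containment check on the addresses from my configuration below: Pod3's IP is inside the /12 that GCE routes into the tunnel, but outside the AWS VPC CIDR, so the AWS end has no route for it. A minimal sketch (assumes python3 is available; ipaddress is in its standard library):

```shell
# Returns 0 if IP $1 falls inside CIDR $2, 1 otherwise.
in_cidr() {
  python3 -c "import ipaddress,sys; sys.exit(0 if ipaddress.ip_address('$1') in ipaddress.ip_network('$2') else 1)"
}

in_cidr 172.16.129.5 172.16.0.0/12 \
  && echo "GCE route 172.16.0.0/12 covers Pod3 -> sent into the tunnel"
in_cidr 172.16.129.5 172.24.0.0/16 \
  || echo "AWS VPC CIDR 172.24.0.0/16 does NOT cover Pod3"
```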

Here is my configuration:

GKE/GCE in us-east1

Network: 10.142.0.0/20

VM1 IP: 10.142.0.2

Pod range (for VM1): 10.52.4.0/24

Pod1 IP: 10.52.4.4 (running busybox)

Firewall rule: Allows any traffic from 172.16.0.0/12

Route: Sends everything with destination 172.16.0.0/12 to the VPN tunnel (automatically added when the VPN is created)

AWS in ap-northeast-1

VPC: 172.24.0.0/16

Subnet1: 172.24.1.0/24

VM3 IP (in Subnet1): 172.24.1.5

Kubernetes cluster network (NON_MASQUERADE_CIDR): 172.16.0.0/16

Pod range (CLUSTER_IP_RANGE): 172.16.128.0/17

Pod range (for VM3): 172.16.129.0/24

Pod3 IP: 172.16.129.5

Security Group: Allows any traffic from 10.0.0.0/8

Routes:

  1. Destination 10.0.0.0/8 to the VPN tunnel
  2. Destination 172.16.129.0/24 to VM3
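For completeness, route 2 above was added by hand; the aws CLI equivalent is roughly the following (the route table and instance IDs are placeholders for your own). Note that an instance forwarding traffic for IPs that aren't its own also needs source/destination checking disabled:

```shell
# Placeholder IDs -- substitute your own route table ID and VM3's instance ID.
aws ec2 create-route \
    --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 172.16.129.0/24 \
    --instance-id i-xxxxxxxx

# Allow VM3 to send/receive packets whose source/destination is a pod IP:
aws ec2 modify-instance-attribute \
    --instance-id i-xxxxxxxx \
    --no-source-dest-check
```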

Has anyone tried to do something similar? Is there any way to configure the AWS VPC VPN Gateway so that packets destined for pods are correctly forwarded to the VMs that host them? Any suggestions?


1 Answer

2 votes

What you are asking about is Kubernetes Federation.

Federation makes it easy to manage multiple clusters. It does so by providing two major building blocks:

  • Sync resources across clusters: Federation provides the ability to keep resources in multiple clusters in sync. This can be used, for example, to ensure that the same deployment exists in multiple clusters.

  • Cross cluster discovery: It provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. This can be used, for example, to ensure that a global VIP or DNS record can be used to access backends from multiple clusters.
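If you go this route, the federation (v1) setup is driven by the kubefed tool, roughly along these lines (the federation name and kubeconfig context names here are placeholders for your GKE and AWS clusters, and the DNS zone is an example):

```shell
# Initialize the federation control plane in one cluster (here, the GKE one):
kubefed init myfed \
    --host-cluster-context=gke-context \
    --dns-zone-name="example.com."

# Join both clusters to the federation:
kubefed join gke --host-cluster-context=gke-context
kubefed join aws --host-cluster-context=gke-context
```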

Also, this might help you: https://kubernetes.io/docs/admin/multiple-zones/