
We have a Shared VPC used across our organisation. In order to preserve IP space, teams create GKE clusters in separate VPCs (in their own GCP projects) with some fixed CIDR range (that does not overlap with the Shared VPC) and then create proxy instances in their own VPC with a second interface coming from a compute address in the Shared VPC.

GKE clients in the Shared VPC can then run export HTTPS_PROXY=<compute-address>:<proxy-port>, and kubectl commands will be proxied through to the GKE cluster's API server.
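As a minimal sketch of the client side, assuming a hypothetical proxy address of 10.0.1.10 and port 8443 (substitute your own compute address and proxy port):

```shell
# Point kubectl at the proxy instance's compute address in the Shared VPC.
# 10.0.1.10 and 8443 are example values, not real ones from this setup.
export HTTPS_PROXY=10.0.1.10:8443

# kubectl API-server traffic now tunnels through the proxy into the
# team-owned VPC where the GKE control plane is reachable.
kubectl get nodes
```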

An improvement on this would be to run the proxy on the GKE clusters themselves and use a GCP internal load balancer to bridge the Shared VPC and each team-owned VPC.

Is this possible on GCP? i.e. can you have an internal load balancer that has an ingress IP in the Shared VPC but with backends that reside in a separate VPC?

Are all your VPCs in the same region? What kind of load balancing do you need: HTTP(S) or TCP/UDP? – Wojtek_B
Yeah, all in the same region. HTTP(S) and TCP/UDP preferably, to give options, but TCP alone would also be useful. – dippynark

1 Answer


Shared VPC can be used in conjunction with load balancing.

It is possible to create an internal TCP/UDP or HTTP(S) load balancer whose frontend is in one VPC and whose backends are in the other (all backend instance groups must be in the same VPC, though).

It's not possible to create an LB with backend VMs in different regions:

An internal TCP/UDP load balancer doesn't support:

- Backend VMs in multiple regions
- Balancing traffic that originates from the internet, unless you're using it with an external load balancer

Additionally, all of the LB's clients must be in the same VPC as its frontend.

If you require health checks, you must also set up firewall rules that allow the health-check probes to reach the backends.
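A sketch of such a rule, assuming a hypothetical VPC name (team-vpc) and backend port (8443); the source ranges are Google's documented health-check probe ranges:

```shell
# Allow Google Cloud health-check probes to reach the backend VMs.
# 35.191.0.0/16 and 130.211.0.0/22 are Google's published health-check
# source ranges; the network name and port below are example values.
gcloud compute firewall-rules create allow-health-checks \
    --network=team-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8443 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22
```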

There's one condition, however: for this kind of load balancer to work, your backend VMs have to have two network interfaces, one in their "home" VPC and a second in the Shared VPC. You also have to configure proper routes on them, but that's another matter.
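A sketch of creating such a dual-NIC backend and of the in-guest routing it typically needs; all names, subnets, and addresses are examples, and in a real Shared VPC setup the second interface's subnet belongs to the host project:

```shell
# Create a backend VM with two NICs (example names throughout):
# nic0 in the team's own VPC, nic1 in the Shared VPC.
gcloud compute instances create proxy-backend-1 \
    --zone=europe-west1-b \
    --network-interface=network=team-vpc,subnet=team-subnet \
    --network-interface=network=shared-vpc,subnet=shared-subnet

# On the VM itself: by default only nic0 has the default route, so
# traffic arriving on nic1 must be answered via nic1. Linux policy
# routing handles this (example gateway and address shown):
ip route add default via 10.20.0.1 dev eth1 table 100
ip rule add from 10.20.0.5/32 table 100
```

Without the policy-routing step, replies to LB clients in the Shared VPC would leave through the wrong interface and be dropped.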