
2 VPCs:

Primary VPC: 10.111.0.0/22

The primary VPC is divided into 4 subnets:

10.111.0.0/25
10.111.0.128/25
10.111.1.0/25
10.111.1.128/25

Secondary VPC: Kubernetes Minion VPC (172.16.0.0/20)

Additional notes: The primary and secondary VPCs are connected via VPC peering so they can communicate with each other.

The issue: Is it possible to separate the minion instances / nodes / pods etc. out into their own VPC in order to save network space in the primary VPC? If possible, I'd love to have the master and service endpoints running in the primary VPC so they are directly routable without going over the public internet, while the nodes / pods etc. live in their own space and don't clutter up the already small IP space we have.

PS: The primary VPC address space is only a /22 due to IP overlap restrictions with the main corporate network.
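For reference, a quick sanity check of the address-space numbers above, using Python's ipaddress module (a sketch; the CIDR blocks are taken straight from the question):

```python
import ipaddress

# The primary VPC and its four /25 subnets, as listed in the question.
vpc = ipaddress.ip_network("10.111.0.0/22")
subnets = [ipaddress.ip_network(s) for s in (
    "10.111.0.0/25",
    "10.111.0.128/25",
    "10.111.1.0/25",
    "10.111.1.128/25",
)]

# All four subnets fit inside the /22.
for s in subnets:
    assert s.subnet_of(vpc)

print(vpc.num_addresses)                        # 1024 addresses in the /22
print(sum(s.num_addresses for s in subnets))    # 512 addresses used by the /25s
```

So the four /25s consume exactly half of the /22, which illustrates how little room is left for node and pod growth.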


1 Answer


As soon as you define a service endpoint that is reachable from outside your k8s cluster (no matter whether you use the NodePort or LoadBalancer option), k8s opens a service port on every node in your cluster (including the master nodes). Every node in your cluster runs a kube-proxy, which takes care that any request on a service port gets routed to a running Pod, even if that Pod is running on an entirely different node in another VPC (given that the node is reachable via peering, of course).

Further, Pods run in virtual networks that have nothing to do with the physical network of your nodes, so the Pods do not exhaust your network's IP space; the number of nodes in your VPC/network does. So I think you should just limit the number of nodes in the VPC that has limited IP space (you could place only the master nodes there, the way you wanted) and put the worker nodes in the other VPC.
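As a sketch of the NodePort mechanism described above, a manifest like the following (all names and ports here are placeholder assumptions, not taken from your setup) makes kube-proxy open the same port on every node:

```yaml
# Hypothetical example: exposes pods labeled app=web on a fixed
# port (30080) on every node in the cluster, master and worker alike.
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # placeholder name
spec:
  type: NodePort
  selector:
    app: web           # placeholder pod label
  ports:
    - port: 80         # the service's cluster-internal port
      targetPort: 8080 # the container port on the pods
      nodePort: 30080  # opened by kube-proxy on every node
```

If you omit nodePort, the master allocates one automatically from the cluster's NodePort range.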

About node affinity of Pods: you could assign Pods to specific worker nodes (see here). For instance, you could assign all your Pods to worker nodes in a single VPC and route any public traffic to nodes in the other VPC, which will then proxy the traffic to a running Pod. Note, however, that this does not tackle your IP space problem at all.
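One simple way to do that assignment is a nodeSelector against a label you put on the nodes yourself (the label key/value "vpc: secondary" here is an assumption for illustration):

```yaml
# Hypothetical example: schedule this Pod only on nodes labeled
# vpc=secondary, i.e. the worker nodes in the secondary VPC.
apiVersion: v1
kind: Pod
metadata:
  name: web            # placeholder name
spec:
  nodeSelector:
    vpc: secondary     # only nodes carrying this label are eligible
  containers:
    - name: web
      image: nginx     # placeholder image
```

You would first label the relevant nodes, e.g. with kubectl label node <node-name> vpc=secondary.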

Update:

Concerning the service endpoints: when you configure a service that is reachable from outside your k8s cluster, the master node first allocates a port, which is from then on reserved for that service. That port is then opened on every node (master and worker nodes) in your cluster. The port is operated by the kube-proxy, which of course also resides on every node. The kube-proxy then takes care of the rest and proxies incoming traffic from that port to a running Pod for the corresponding service, even if that Pod is running on a completely different node (internally, k8s achieves this with some iptables magic). This means you can send your request to that port (let's call it <service-port>) on any node in your cluster. Your service endpoint is basically <proto>://<any-worker-or-master-node-ip>:<service-port>. With this you could also easily set up an ELB and add all your nodes as instances, giving you a public, internet-facing endpoint. All this is explained in more detail here.
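On AWS you can also let Kubernetes provision that ELB for you by declaring the service with type LoadBalancer (again a hedged sketch with placeholder names; the cloud controller wires the ELB to the service's NodePort on your nodes):

```yaml
# Hypothetical example: type LoadBalancer asks the cloud provider
# (an ELB on AWS) for a public load balancer that forwards to the
# service's automatically allocated NodePort on the cluster's nodes.
apiVersion: v1
kind: Service
metadata:
  name: web-public     # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: web           # placeholder pod label
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 8080 # container port on the pods
```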