0 votes

I would like my Pods in Kubernetes to connect to other processes outside the cluster but within the same VPC (on VMs or a BGP-propagated network outside). As I'm running the cluster on GCP, outgoing traffic from the Kubernetes cluster can be NAT'ed with Cloud NAT for external traffic, but traffic inside the same VPC does not get NAT'ed.

I can simply connect with the private IP, but there is source IP filtering in place for some of the target processes. They are not maintained by me and need to run on a VM or another network, so I'm trying to see if there is any way to IP masquerade traffic that's leaving the Kubernetes cluster even within the same VPC. I thought of somehow getting a static IP assigned to a Pod / StatefulSet, but that seems difficult (and it does not seem right to bend Kubernetes networking that way even if it were somehow possible).

Is there anything I could do to handle the traffic requirements from Kubernetes? Or should I be looking to set up a NAT separately outside the Kubernetes cluster and route traffic through it?
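For reference, the masquerading behaviour I mention is what GKE's ip-masq-agent controls via its ConfigMap (on clusters where the agent is running). A rough sketch of what changing it would look like, with placeholder CIDRs; note that dropping the VPC range from nonMasqueradeCIDRs only makes in-VPC traffic leave from the node IPs, not from a single stable IP:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Destinations listed here keep the Pod IP as source;
    # traffic to anything else is masqueraded to the node IP.
    nonMasqueradeCIDRs:
      - 10.0.0.0/8    # placeholder: keep or drop the VPC range as needed
    masqLinkLocal: false
    resyncInterval: 60s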

Maybe an internal load balancer? Defining the Pod IP range for a filter list is only likely to need updating on a cluster redeploy in any case. – Matt
Do you need a single IP, or can you whitelist a range? – Patrick W
Thanks both, I'm reading up on the load balancer to see if it will meet my need. The whitelist only allows a few IPs, which is why I was hoping to IP masquerade before hitting such a process, so I could handle it with IP ranges on my side. – Ryota

2 Answers

0 votes

I think a better option is to configure Internal TCP/UDP Load Balancing.

Internal TCP/UDP Load Balancing makes your cluster's services accessible to applications outside of your cluster that use the same VPC network and are located in the same Google Cloud region. For example, suppose you have a cluster in the us-west1 region and you need to make one of its services accessible to Compute Engine VM instances running in that region on the same VPC network.

0 votes

Internal Load Balancer was indeed the right solution for this.

Although this is not a GA release as of writing (it is at the Beta stage), I went ahead with the Kubernetes Service annotation, as described at https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing

Exact excerpt from the doc above [ref]:

apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

This meant there was no juggling between configs, and I could simply rely on the Kubernetes config to get the ILB up.

Just for the record, I also added a loadBalancerIP: 10.0.0.55 attribute directly under spec, which allowed defining the IP used by the ILB (provided the address falls within the associated subnet range) [ref].
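For illustration, the relevant part of the Service spec then looks roughly like this (10.0.0.55 is just the example address from above and must lie inside the subnet's range):

spec:
  type: LoadBalancer
  # Requests a specific internal IP for the ILB instead of an auto-assigned one
  loadBalancerIP: 10.0.0.55
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP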