
I'm new to GKE and K8s, so please bear with me and my silliness. I currently have a GKE cluster with two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.

These nodes are tasked with calling a Compute Engine instance via HTTP. I have a firewall rule in GCP that denies ingress traffic to the GCE instance except for traffic coming from the GKE cluster.

The issue is that the traffic isn't coming from the LoadBalancer service's IP but rather from the nodes themselves, so whitelisting the service's IP has no effect, and I have to whitelist the IPs of the individual nodes instead. This is not ideal, since each time a new node is created I have to change the firewall rule. I understood that once you have a service set up in the cluster, all traffic would be directed through the IP of the service, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.
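For context, the allow rule is created roughly like this (the rule name, port, target tag, and node IPs are placeholders, not my actual values):

```shell
# Hypothetical firewall rule: allow HTTP to the GCE instance only from the
# current node IPs. Every value below is a placeholder; the real rule uses
# my actual node IPs, which change whenever a node is added.
gcloud compute firewall-rules create allow-from-gke-nodes \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80 \
  --source-ranges=NODE_IP_1/32,NODE_IP_2/32 \
  --target-tags=gce-backend
```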

YAML of the service:

https://i.stack.imgur.com/XBZmE.png
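(The screenshot isn't reproduced here; for reference, a typical LoadBalancer Service manifest looks roughly like this, with placeholder names and ports rather than the actual values from the image:)

```yaml
# Hypothetical manifest; the real names/ports are in the screenshot above.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```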

When you make the HTTP request in the code running on the nodes, what IP (or URL) are you using? – Toadfish
@Toadfish The IP I'm sending the call to? It's a generic Google IP of a GCE instance - 34.78.X.X – Josh
Right. I had a similar problem the other day, caused by targeting the LB itself so that the (thought to be VLAN-internal) calls could be load balanced. I ended up discovering that to achieve this I would need a separate internal TCP/IP load balancer. I was using Compute Engine instance groups, however, not GKE. The answer below covers this in more detail, and is GKE-specific. – Toadfish

1 Answer


When you create a service on GKE and expose it to the internet, a load balancer is created. This load balancer manages only ingress traffic (traffic from the internet to your GKE cluster).

When your pod initiates a connection, the traffic is not handled by the load balancer but by the node that hosts the pod, if the node has a public IP. (Instead of denying traffic to the GCE instance, simply remove its public IP; it's easier and safer!)
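Removing the external IP can be done, for example, by deleting the instance's access config (the instance name and zone below are placeholders):

```shell
# Remove the external IP from the GCE instance by deleting its access config.
# INSTANCE_NAME and ZONE are placeholders; "external-nat" is the default
# access-config name GCE assigns.
gcloud compute instances delete-access-config INSTANCE_NAME \
  --zone=ZONE \
  --access-config-name="external-nat"
```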

If you want to control the source IP of egress traffic originating from your pods, you have to set up Cloud NAT for your GKE cluster.
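A minimal Cloud NAT setup via gcloud looks roughly like this (the router name, NAT name, region, and network are placeholders):

```shell
# Create a Cloud Router, then a Cloud NAT gateway on it.
# my-router, my-nat, REGION, and NETWORK are placeholder names.
gcloud compute routers create my-router \
  --network=NETWORK \
  --region=REGION

gcloud compute routers nats create my-nat \
  --router=my-router \
  --region=REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

Note that Cloud NAT only applies to instances without external IPs, so the GKE nodes must be private (or have their external IPs removed) for their egress traffic to go through the NAT gateway. With that in place, you can whitelist the NAT IP in your firewall rule instead of the individual node IPs.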