1 vote

I want to set up a WireGuard VPN (though WireGuard itself isn't the relevant part) to make GKE pods and services accessible via that VPN. We have a few clusters, and we want them all to be accessible over the same VPN connection. E.g.:

  • *.cluster1.local resolves to GKE cluster-1*.cluster.local
  • *.cluster2.local resolves to GKE cluster-2*.cluster.local

In that case, I'll have to set up a DNS server that rewrites those hosts, but I'm just not there yet.

I'm currently stuck because I can't access the IPs of those clusters' services and pods: I create an instance with Ubuntu 16.04 and execute curl http://<some-k8s-service-ip>:<port of that service>/, and the request times out.

It appears that every cluster is isolated, and I basically can't access it from the outside via an internal IP, even when the GCE VPN instance is in the same network as the cluster. For example, I can run curl http://10.0.26.219:4000/ (this address resolves to a specific k8s service) from inside a cluster, but I can't do it from a random GCE instance I create in the same network as the GKE clusters.

I have set up a firewall rule allowing all ingress and egress traffic on all ports, but it didn't do the trick.
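For reference, the rule is roughly equivalent to this (a sketch; allow-all-internal and my-vpc are placeholder names, and the source range is a guess at the internal address space):

    # Hypothetical recreation of the allow-everything rule; names are placeholders.
    gcloud compute firewall-rules create allow-all-internal \
        --network=my-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=all \
        --source-ranges=10.0.0.0/8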

To clarify, everything is located in the same region (eu-north-4) and the same VPC network.

Has anyone had experience setting up such a VPN? Please let me know if there's any information I could provide, because there are so many things to consider. In short, it's all defaults, except that the clusters are private.

But when you execute curl http://<some-k8s-service-ip>:<port of that service>/ from a specific GKE node, you are also unable to connect, am I right? So how are you going to make such a connection from a different GCE instance? I guess that by <some-k8s-service-ip> you mean the Service's ClusterIP. If so, you cannot connect to it from outside the cluster. You can expose your Pods e.g. via a NodePort Service and make them accessible on your node's internal or external IP. – mario
@mario I can, that's the point. I SSH into a GKE node and can query services that are inside this cluster. – blits
Can you give more details on how you're doing this? – mario
@mario 1. create a GKE private cluster 2. create a deployment/statefulset/pod and an associated service 3. look up the ClusterIP of the service you created 4. SSH into a GKE node (it shows up in Compute Engine) and run curl http://<service-ip>:<service-port> – blits
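Sketched as shell commands, those repro steps look like this (the sample image, names, and ports are illustrative, and the cluster-creation flags from step 1 are omitted):

    # Sketch of the repro steps; image, names, and ports are illustrative.
    kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
    kubectl expose deployment hello --port=4000 --target-port=8080
    kubectl get service hello            # note the CLUSTER-IP column
    gcloud compute ssh <some-gke-node>   # GKE nodes appear as GCE instances
    curl http://<service-ip>:4000/       # succeeds on a node, times out elsewhere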

1 Answer

3 votes

ClusterIPs are never accessible from outside a cluster. The reason for this is that a ClusterIP exists only as iptables rules on the GKE nodes themselves; no other VM in the VPC has any route or translation for it, which is why the curl works from a node but times out from your other GCE instance.

You will need to use either a NodePort or a LoadBalancer Service (the LB can use an internal IP address) to expose your workloads outside the cluster.
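As a sketch, an internal LoadBalancer Service on GKE looks like this (my-service, the app: my-app selector, and port 4000 are placeholders):

    # Sketch: internal LoadBalancer Service on GKE; names and port are placeholders.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 4000
        targetPort: 4000
    EOF
    kubectl get service my-service   # EXTERNAL-IP will be an internal VPC address

The annotation is what tells GKE to provision an internal rather than external load balancer, so the address it hands out is reachable from other VMs in the same VPC (and through your VPN).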

You can also reach pod IPs directly, though this is not recommended since pod IPs are not static and will change if the pod is rescheduled for any reason.
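For completeness, the current pod IPs are visible in the IP column here:

    # Pod IPs show up in the IP column, but change whenever a pod is rescheduled.
    kubectl get pods -o wide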

Unfortunately, this will not leverage the built-in cluster DNS (cluster.local) as you mentioned you'd like to do. Instead, you can configure your DNS to resolve specific workloads to specific internal LoadBalancer IPs, which will be static.
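One way to wire that up is a Cloud DNS private zone (a sketch; the zone name, my-vpc, myservice, and the IP 10.0.26.219 are placeholders, with the IP standing in for an internal LB address):

    # Sketch: private zone for *.cluster1.local, visible only inside my-vpc.
    gcloud dns managed-zones create cluster1-zone \
        --dns-name="cluster1.local." \
        --description="Private zone for cluster-1 services" \
        --visibility=private \
        --networks=my-vpc

    # Point a hostname at the Service's static internal LB IP.
    gcloud dns record-sets transaction start --zone=cluster1-zone
    gcloud dns record-sets transaction add "10.0.26.219" \
        --name="myservice.cluster1.local." --type=A --ttl=300 --zone=cluster1-zone
    gcloud dns record-sets transaction execute --zone=cluster1-zone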