3
votes

I'm running an app on Kubernetes / GKE.

I have a bunch of devices without a public IP. I need to access SSH and VNC of those devices from the app.

The initial thought was to run an OpenVPN server within the cluster and have the devices connect, but then I hit the problem:

There doesn't seem to be any elegant / idiomatic way to route traffic from the app to the VPN clients.

Basically, all I need is a way to tell the cluster: route 10.8.0.0/24 via the VPN pod.
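Roughly, that means the equivalent of the following inside the app pod's network namespace (just a sketch; the IP is a placeholder for whatever address the OpenVPN pod currently has, which is exactly the part that keeps changing):

    # Sketch only: the route the app pod would need.
    # 10.52.3.17 stands in for the current OpenVPN pod IP.
    VPN_POD_IP=10.52.3.17
    ip route add 10.8.0.0/24 via "$VPN_POD_IP"   # may need the "onlink" keyword depending on the pod network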

Possible solutions I've found:

  • Modifying routes on the nodes. I'd like to keep nodes ephemeral and have everything in K8s manifests only.

  • A DaemonSet that adds the routes on the nodes, defined purely in K8s manifests. However, it's not clear how to keep track of the OpenVPN pod's IP as it changes.

  • Istio. Seems like overkill, and I wasn't able to find a solution to my problem in the documentation. L3 routing doesn't seem to be supported, so it would have to involve port mapping.

  • Calico. It is natively supported on GKE and it does support L3 routing, but I would like to avoid introducing such far-reaching changes for something that could be solved with a single custom route.

  • OpenVPN client sidecar. This would work quite elegantly, and it wouldn't matter where or how the VPN server is hosted, as long as the clients are allowed to communicate with each other. However, I'd like to keep the clients isolated, and I might need to access them from several different pods, which would mean placing the sidecar in multiple places and polluting the deployments. The isolation could be achieved by separating the clients into classes in different IP ranges.

  • Routes within GCP / GKE itself. They only allow specifying an instance (node) as the next hop, which also means that both the app and the VPN server must run within GCP.

I'm currently leaning towards running the OpenVPN server on a bare-bones VM and using the GCP routes. It works: I can ping the VPN clients from the K8s app, but it still seems brittle and hard-wired.
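For reference, a GCP route of this kind looks roughly like the following (the route name, network, zone and VM name are all placeholders; the VM also needs to have been created with IP forwarding enabled via --can-ip-forward):

    # Sketch: send the VPN client range to the VM running the OpenVPN server.
    gcloud compute routes create vpn-clients \
        --network=default \
        --destination-range=10.8.0.0/24 \
        --next-hop-instance=openvpn-gateway \
        --next-hop-instance-zone=europe-west1-b

Since GKE nodes are ordinary Compute Engine instances in the same VPC, the route also applies to traffic leaving the pods.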

However, only the sidecar solution provides a way to fully separate the concerns.

Is there an idiomatic solution to accessing the pod-private network from other pods?

This is a general question about a solution rather than a specific programming question. Your use case could be solved at the application layer with cloud messaging, e.g. NATS (cloud-native) or something vendor-specific like GCP Pub/Sub, or narrowed down to the network layer by using a VPN. I suggest you go to discuss.kubernetes.io to raise attention. – shawnzhu
@shawnzhu I need to access SSH and VNC on the devices, so messaging wouldn't work here. I've clarified the post. – amq
I see, so this is not a specific programming question. As for a networking solution, I'm happy to help if you choose either GCP Cloud VPN, OpenVPN iroute, or an SSH tunnel; you just need to raise a specific question on serverfault.com. – shawnzhu
Do I understand correctly that you're running your OpenVPN server outside GCP? And what kind of client isolation do you want to achieve? – Wojtek_B
@WojciechBogacz I'm running my own OpenVPN server. It can be inside or outside of GCP. By client isolation I mean that clients shouldn't be able to connect to each other. – amq

1 Answer

0
votes

The solution you devised, with the OpenVPN server acting as a gateway for multiple devices (I assume there will be dozens or even hundreds of simultaneous connections), is the best way to do it.

GCP's Cloud VPN unfortunately doesn't offer the needed functionality (just site-to-site connections), so we can't use it.

You could simplify your solution by putting the OpenVPN server in GCP (in the same VPC network as your application), so your app could talk directly to the server and then to the clients. I believe doing this would get rid of the "brittle and hard-wired" part.

You will have to decide which solution works best for you: OpenVPN inside or outside of GCP.

In my opinion, hosting the OpenVPN server in GCP will be simpler and more elegant, but not necessarily cheaper.

Regardless of the solution, you can put the clients in different IP ranges, but I would go for configuring some iptables rules (on the OpenVPN server) to block communication and allow clients to reach only a few IPs in the network. That way, if in the future you need some clients to communicate, it will just be a matter of iptables configuration.
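A minimal sketch of such rules, assuming the VPN interface on the server is tun0 and your app reaches the clients from a single source range (both are placeholders here):

    # Sketch: run on the OpenVPN server; tun0 and APP_NET are assumptions.
    APP_NET=10.128.0.0/20    # placeholder: the range your app connects from

    iptables -A FORWARD -i tun0 -o tun0 -j DROP          # block client-to-client traffic
    iptables -A FORWARD -s "$APP_NET" -o tun0 -j ACCEPT  # app -> clients (SSH / VNC)
    iptables -A FORWARD -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies back to the app
    iptables -A FORWARD -i tun0 -j DROP                  # clients can't initiate anything else

Note that OpenVPN only relays traffic between clients internally when client-to-client is set in the server config; leaving it out means inter-client packets go through the server's kernel and are subject to the FORWARD rules above.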