2 votes

I am new to Google Cloud Platform and have the following context:

I have a Compute Engine VM running as a MongoDB server and another Compute Engine VM running the NodeJS server with Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application's Docker image to the cluster.
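
For reference, the app uses a standard connection string along these lines (the IP and database name here are placeholders):

mongodb://10.128.0.2:27017/mydb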

All services (GCE and GKE) are in the same region (us-east1).

As a sanity check, I SSHed into a Kubernetes cluster node, deployed a simple MongoDB Docker image there, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.

I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
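
For reference, the relevant part of /etc/mongod.conf looks roughly like this (values are placeholders; in my case it listens on all interfaces, not just localhost):

net:
  port: 27017
  bindIp: 0.0.0.0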

Does anyone know what may be happening? Thank you very much.

A private cluster can also use an internal load balancer to accept traffic from within your VPC network. - John Hanley
Hi @JohnHanley, from what I understand, that document is about incoming traffic to a GKE cluster, and my problem is outgoing traffic from a GKE cluster to a GCE instance on the same VPC. - Felipe Antero
VPC IP addresses are in the private range (RFC1918). This means that whoever wants to use TCP/IP directly to those addresses needs to be in the same VPC or in a peered VPC. Public resources cannot address private addresses. When you create a private GKE cluster, Google creates a VPC and then peers it with your VPC, allowing private address connectivity between the GKE VPC and your VPC. - John Hanley

2 Answers

2 votes

In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).

I had to whitelist the cluster's pod network, which is listed in the cluster details:

Pod address range 10.8.0.0/14

https://console.cloud.google.com/kubernetes/list

https://console.cloud.google.com/networking/firewalls/list

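An equivalent rule can also be created from the CLI; a minimal sketch assuming the default network, the pod range above, and MongoDB's default port 27017 (the rule name is arbitrary):

gcloud compute firewall-rules create allow-gke-pods-to-mongo --network=default --allow=tcp:27017 --source-ranges=10.8.0.0/14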

1 vote

By default, containers in a GKE cluster can reach GCE VMs in the same VPC through their internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (firewall or your application).

You can run a test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):

python3 -m http.server 8080

then create a GKE container and try to access the service:

kubectl run my-client -it --rm --restart=Never --image=tutum/curl -- curl http://10.138.0.5:8080
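
If the HTTP test works, you can check the MongoDB port the same way; a rough sketch assuming the MongoDB VM's internal IP is 10.138.0.5, the default port 27017, and an image that still ships the legacy mongo shell:

kubectl run mongo-client -it --rm --restart=Never --image=mongo:4.4 -- mongo --host 10.138.0.5 --port 27017 --eval 'db.runCommand({ ping: 1 })'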