1 vote

I am trying to connect a Java app running on a GKE (Google Kubernetes Engine) cluster in my GCP project to a MongoDB Atlas cluster (M20). It worked fine before, when VPC peering was off and I was using the regular connection string, but now I am trying to use VPC peering with the default VPC network in my GCP project. I followed the steps in https://docs.atlas.mongodb.com/security-vpc-peering/: I chose an Atlas CIDR of 192.168.0.0/18 (because "The Atlas CIDR block must be at least a /18"), and after peering the GCP project with the Atlas cluster I added 10.128.0.0/9 to the Atlas IP whitelist (because the docs say that is the default range for auto-mode VPC networks in GCP projects).
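
For reference, the state of the peering on the GCP side can be checked with something like the following (a sketch only; the project placeholder is illustrative):

# List peerings on the default VPC network; the Atlas-created peering should show state ACTIVE
gcloud compute networks peerings list --network=default --project=<my_gcp_project>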

I am actually able to connect with the mongo shell, using mongo "mongodb+srv://<cluster_name>-pri.crum0.gcp.mongodb.net/itls", from another VM in my GCP project. But the app running on a pod in my GKE cluster is unable to connect. The exact error I see in the Java app is:

Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@a07fbd8. Client view of cluster state is {type=REPLICA_SET, servers=[{address=<cluster_name>-shard-00-00-pri.crum0.gcp.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=<cluster_name>-shard-00-01-pri.crum0.gcp.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=<cluster_name>-shard-00-02-pri.crum0.gcp.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}]
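
(For reference, a quick way to test raw TCP reachability to Atlas from inside the cluster is a throwaway pod; this is only a sketch, using the same hostname placeholder as above:)

# Run a one-off busybox pod and test a TCP connection to one Atlas node on port 27017
# (busybox ships nc; -w 5 sets a 5-second timeout)
kubectl run netcheck --rm -it --restart=Never --image=busybox -- \
  nc -vz -w 5 <cluster_name>-shard-00-00-pri.crum0.gcp.mongodb.net 27017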

Possible issues:

1) Is it possible to connect from a GKE cluster at all (or, perhaps, why is this cluster somehow not part of the default VPC network)?
2) Is there something wrong with the Atlas CIDR range or my IP whitelist range?

Any help would be appreciated.

Can you do some basic checks from within a running pod? Find your Java app's pod name with kubectl get pods, then try kubectl exec java_pod_name ping <cluster_name>-pri.crum0.gcp.mongodb.net. – mebius99
About the other VM that can connect via the mongo shell: what IP is it using? Also, could you run a Linux pod (e.g. ubuntu), log into it, download the mongo shell and check whether you can connect from there? That way we can tell whether the problem is the Java app or the cluster itself. – Will R.O.F.
In case you need the command, it is: kubectl run -it --rm --generator=run-pod/v1 ubuntu --image=ubuntu -- /bin/bash. It drops you into a shell inside the pod, so you can apt-get install and run ping and the mongo shell. – Will R.O.F.
Check the exported routes in the VPC network peering settings, and also check which IP address connects to MongoDB when the blanket whitelist is in effect. More details at developer.mongodb.com/community/forums/t/… – Oleksandr Iegorov

3 Answers

2 votes

I ended up making two changes to make it work. The first change is definitely required and was what I was missing; I'm not yet sure whether the second is strictly necessary.

1) I had to create a new GKE cluster, on which VPC-native networking (alias IP) was enabled by default. On my old cluster this setting was disabled, and it cannot be changed on a running cluster. This setting definitely needs to be on to fix the problem (see the gcloud sketch after point 2).

2) Although I'm using Mongo Java driver 3.11.1, I ran a couple of nslookup commands and decided it was safer to use the older-style connection URI (i.e. mongodb://<username>:<password>@<cluster_name>-shard-00-00-pri.crum0.gcp.mongodb.net:27017,<cluster_name>-shard-00-01-pri.crum0.gcp.mongodb.net:27017,<cluster_name>-shard-00-02-pri.crum0.gcp.mongodb.net:27017/itls?ssl=true&replicaSet=<cluster_name>-shard-0&authSource=admin&retryWrites=true&w=majority), since nslookup returned real IP addresses for the per-shard hostnames but not for the new +srv hostname.
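
For anyone checking point 1, something like this should show whether an existing cluster is VPC-native, and how a new one can be created with it enabled (a sketch only; the cluster and zone names are placeholders):

# Check whether an existing GKE cluster has VPC-native (alias IP) networking enabled
gcloud container clusters describe <gke_cluster_name> --zone <zone> \
  --format="value(ipAllocationPolicy.useIpAliases)"

# Create a new cluster with VPC-native networking on the default VPC
gcloud container clusters create <new_gke_cluster_name> --zone <zone> \
  --network default --enable-ip-alias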

The nslookup commands that helped with point 2:

>> nslookup <cluster_name>-shard-00-00-pri.crum0.gcp.mongodb.net
Server:     8.8.8.8
Address:    8.8.8.8#53

Non-authoritative answer:
Name:   <cluster_name>-shard-00-00-pri.crum0.gcp.mongodb.net
Address: 192.168.248.2
>> nslookup <cluster_name>-pri.crum0.gcp.mongodb.net
Server:     8.8.8.8
Address:    8.8.8.8#53

Non-authoritative answer:
*** Can't find <cluster_name>-pri.crum0.gcp.mongodb.net: No answer
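
(My understanding is that the +srv hostname is not supposed to have an A record at all; the driver resolves it through DNS SRV and TXT records, which is why the plain lookup above returns no answer. A query like the following should show the underlying SRV records:)

>> nslookup -type=SRV _mongodb._tcp.<cluster_name>-pri.crum0.gcp.mongodb.net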

0 votes

I'm guessing you either need a split horizon setup, or you don't have connectivity from your application to the hostnames/IP addresses used in the replica set config.

The whitelist on the Atlas side should reflect the IP that your application's connections appear to come from, as seen from Atlas.
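
(Not something I can verify from here, but one way to see which private IPs the application's traffic could use over the peering is to look at the pod and node IPs:)

# Pod IPs (on a VPC-native cluster these are alias IPs on the VPC subnet)
kubectl get pods -o wide
# Node internal IPs
kubectl get nodes -o wide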

0 votes

In addition to user1145925's answer above, I also had to whitelist the GKE pod address range in MongoDB Atlas.
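
A sketch of how that range can be looked up (the cluster and zone names are placeholders):

# Print the pod address range (cluster CIDR) of the GKE cluster,
# then add it to the Atlas IP whitelist
gcloud container clusters describe <gke_cluster_name> --zone <zone> \
  --format="value(clusterIpv4Cidr)"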