1 vote

I have created a REST API using Node and containerized it with Docker; it runs on Kubernetes, with the pods running inside a minikube environment for development purposes.

The application was running fine, but now it gives the error below.

[distribution] Initial Distribution API Database connection error occured - MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
[distribution]     at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:830:32)
[distribution]     at Mongoose.connect (/app/node_modules/mongoose/lib/index.js:335:15)
[distribution]     at /app/src/index.ts:60:8
[distribution]     at step (/app/src/index.ts:34:23)
[distribution]     at Object.next (/app/src/index.ts:15:53)
[distribution]     at fulfilled (/app/src/index.ts:6:58)
[distribution]     at processTicksAndRejections (node:internal/process/task_queues:93:5) {
[distribution]   reason: TopologyDescription {
[distribution]     type: 'ReplicaSetNoPrimary',
[distribution]     setName: null,
[distribution]     maxSetVersion: null,
[distribution]     maxElectionId: null,
[distribution]     servers: Map(3) {
[distribution]       'cluster0-shard-00-00.psdty.mongodb.net:27017' => [ServerDescription],
[distribution]       'cluster0-shard-00-01.psdty.mongodb.net:27017' => [ServerDescription],
[distribution]       'cluster0-shard-00-02.psdty.mongodb.net:27017' => [ServerDescription]
[distribution]     },
[distribution]     stale: false,
[distribution]     compatible: true,
[distribution]     compatibilityError: null,
[distribution]     logicalSessionTimeoutMinutes: null,
[distribution]     heartbeatFrequencyMS: 10000,
[distribution]     localThresholdMS: 15,
[distribution]     commonWireVersion: null
[distribution]   }
[distribution] }

The issue seems like a MongoDB connection URL / access problem, but the connection string is correct (double-checked with MongoDB Cloud support), and network access is granted to everyone:

[Screenshot: MongoDB Atlas Network Access list allowing access from anywhere]

I can also confirm that the Atlas cluster can be reached from MongoDB Compass using the same connection string:

[Screenshot: MongoDB Compass connected to the cluster with the same connection string]

My guess is that the pods inside minikube cannot establish a connection to the MongoDB database.

Does anyone have an idea of how to overcome this?
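One way to test that guess is to run a throwaway pod inside minikube and check DNS resolution and TCP connectivity to one of the Atlas hosts. A rough sketch (the hostname is taken from the MONGO_URI in the config below; nicolaka/netshoot is simply a convenient debugging image that ships nslookup and nc):

kubectl run -it --rm net-test --image=nicolaka/netshoot --restart=Never -- \
  sh -c 'nslookup cluster0-shard-00-00.yeu7t.mongodb.net && nc -zv cluster0-shard-00-00.yeu7t.mongodb.net 27017'

If the lookup fails, DNS inside the minikube VM is broken; if the TCP connection times out, something between the VM and Atlas is blocking port 27017.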

Kubernetes config of the pod and the external service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: distribution-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distribution
  template:
    metadata:
      labels:
        app: distribution
    spec:
      containers:
        - name: distribution
          image: ssomlk/distribution
          env:
            - name: MONGO_URI
              value: 'mongodb://ssomlk:<password>@cluster0-shard-00-00.yeu7t.mongodb.net:27017,cluster0-shard-00-01.yeu7t.mongodb.net:27017,cluster0-shard-00-02.yeu7t.mongodb.net:27017/<db_name>?ssl=true&replicaSet=atlas-fznj9q-shard-0&authSource=admin&retryWrites=true&w=majority'
            - name: JWT_ACCESS_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_ACCESS_TOKEN_KEY
            - name: JWT_REFRESH_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_REFRESH_TOKEN_KEY
            - name: JWT_ACCESS_TOKEN_EXPIRES_IN
              value: '15m'
            - name: JWT_REFRESH_TOKEN_EXPIRES_IN
              value: '60m'
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_URL
              value: 'http://nats-srv:4222'
            - name: NATS_CLUSTER_ID
              value: nats-distribution-mailing
            - name: MAIL_USER
              valueFrom:
                secretKeyRef:
                  name: mail-secret
                  key: MAIL_USER
            - name: MAIL_PWD
              valueFrom:
                secretKeyRef:
                  name: mail-secret
                  key: MAIL_PWD
            - name: POOL_SIZE
              value: '8'
---
apiVersion: v1
kind: Service
metadata:
  name: distribution-srv
spec:
  type: ClusterIP
  selector:
    app: distribution
  ports:
    - name: distribution
      protocol: TCP
      port: 3000
      targetPort: 3000
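As an aside, the connection string above embeds the database password in plain text; it could be moved into a Secret the same way as the JWT keys. A sketch, assuming a hypothetical mongo-secret that holds the full URI:

            - name: MONGO_URI
              valueFrom:
                secretKeyRef:
                  name: mongo-secret   # hypothetical Secret holding the full URI
                  key: MONGO_URI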

Edit: below is the error received when trying to implement the ExternalName service.

  • The Service "distribution-database-srv" is invalid:
  • spec.externalName: Invalid value: "mongodb://ssomlk:@cluster0-shard-00-00.y8kuj.mongodb.net:27017,cluster0-shard-00-01.y8kuj.mongodb.net:27017,cluster0-shard-00-02.y8kuj.mongodb.net:27017/<db_name>?ssl=true&replicaSet=atlas-fznj9q-shard-0&authSource=admin&retryWrites=true&w=majority": must be no more than 253 characters
  • spec.externalName: Invalid value: "mongodb://ssomlk:@cluster0-shard-00-00.y8kuj.mongodb.net:27017,cluster0-shard-00-01.y8kuj.mongodb.net:27017,cluster0-shard-00-02.y8kuj.mongodb.net:27017/<db_name>?ssl=true&replicaSet=atlas-fznj9q-shard-0&authSource=admin&retryWrites=true&w=majority": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Any ideas?
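For what it's worth, the validation error says that spec.externalName must be a plain RFC 1123 hostname: an ExternalName service can only alias a DNS name, so the scheme, credentials, port, and query parameters cannot go there. A minimal valid sketch, using one of the hostnames from the error above:

apiVersion: v1
kind: Service
metadata:
  name: distribution-database-srv
spec:
  type: ExternalName
  # a bare hostname only; credentials and options stay in the connection string
  externalName: cluster0-shard-00-00.y8kuj.mongodb.net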

Comments:

Can you share your kubernetes config of the pod and also the external service? – Abhinav Kumar

@AbhinavKumar updated with the kubernetes config of the pod and the external service. Please check. – Shanka Somasiri

2 Answers

1 vote

In your config the MongoDB URI is not resolvable from inside the cluster; you have to create an ExternalName service to make the URI resolvable. Please see the config below as an example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: distribution-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distribution
  template:
    metadata:
      labels:
        app: distribution
    spec:
      containers:
        - name: distribution
          image: ssomlk/distribution
          env:
            - name: MONGO_URI
              # the full connection string is still required; the host now points at
              # the ExternalName service defined below (same namespace assumed)
              value: 'mongodb://ssomlk:<password>@my-service:27017/<db_name>?ssl=true&replicaSet=atlas-fznj9q-shard-0&authSource=admin&retryWrites=true&w=majority'
            - name: JWT_ACCESS_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_ACCESS_TOKEN_KEY
            - name: JWT_REFRESH_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_REFRESH_TOKEN_KEY
            - name: JWT_ACCESS_TOKEN_EXPIRES_IN
              value: '15m'
            - name: JWT_REFRESH_TOKEN_EXPIRES_IN
              value: '60m'
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_URL
              value: 'http://nats-srv:4222'
            - name: NATS_CLUSTER_ID
              value: nats-distribution-mailing
            - name: MAIL_USER
              valueFrom:
                secretKeyRef:
                  name: mail-secret
                  key: MAIL_USER
            - name: MAIL_PWD
              valueFrom:
                secretKeyRef:
                  name: mail-secret
                  key: MAIL_PWD
            - name: POOL_SIZE
              value: '8'
---
apiVersion: v1
kind: Service
metadata:
  name: distribution-srv
spec:
  type: ClusterIP
  selector:
    app: distribution
  ports:
    - name: distribution
      protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  # must be a bare RFC 1123 hostname, not a connection URI
  externalName: cluster0-shard-00-00.yeu7t.mongodb.net
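Note that an ExternalName service is only a DNS CNAME alias: the application still needs the full mongodb:// connection string, with the service name substituted for the Atlas hostname.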
0 votes

The steps below worked for me, with help from another GitHub user:

1. Stopped and deleted the existing minikube.

2. Downloaded https://github.com/kubernetes/minikube/releases/download/v1.8.2/minikube-windows-amd64.exe and renamed it to minikube.exe.

3. Placed the exe in the same folder as my kubectl.exe (C:\kube); that path is already on my user's Path environment variable.

4. Created a Hyper-V switch: open Hyper-V Manager, click Virtual Switch Manager, create a new virtual network switch, select the External type, and click OK.

5. Ran `minikube start --driver=hyperv --hyperv-virtual-switch="MY-SWITCH"`.

6. minikube started successfully.

7. Created a deployment, which pulled the image and ran the container without any issue.
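Condensed into commands, the recovery sequence might look like this (the switch name MY-SWITCH and the manifest filename distribution-depl.yaml are assumptions):

minikube stop
minikube delete
minikube start --driver=hyperv --hyperv-virtual-switch="MY-SWITCH"
kubectl apply -f distribution-depl.yaml
kubectl get pods --watch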