2 votes

I want to create a 3-node MongoDB replica set in Kubernetes. I have created a headless service as below:

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo

I have also created a 3-node StatefulSet as below:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 400m
              memory: 256Mi
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: "fast"
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 10Gi

I have created the StatefulSet and the pods are up and running. Now if I log in to one of the containers and set the config for the replica set in the mongo shell, I get an error. The commands I enter are:

> config = {
... "_id" : "rs0",
... "members" : [
...   {
...     _id: 1,
...     host: 'mongo-0.mongo.demo.svc.cluster.local:27017'
...   },
...   {
...     _id: 2,
...     host: 'mongo-1.mongo.demo.svc.cluster.local:27017',
...   },
...   {
...     _id: 3,
...     host: 'mongo-2.mongo.demo.svc.cluster.local:27017'
...   }
... ]
... }

> rs.initiate(config)

When I run rs.initiate(config), I get the following error:

"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-1.mongo.demo.svc.cluster.local:27017 failed with Connection refused, mongo-2.mongo.demo.svc.cluster.local:27017 failed with Connection refused"
"code" : 74,
"codeName" : "NodeNotFound",

I don't know how to debug this, because the containers are up and running. Can somebody help me with this? Thanks

1
It looks like a Kubernetes DNS issue; verify whether nslookup resolves names properly in your cluster. – hdhruna
Have you made it run successfully? – Abdullah Al Maruf - Tuhin
Is authentication working? – Daksh Miglani
Just a note that the noprealloc option has been deprecated since v2.6. No reason to specify it in your yaml. – ice.nicer

1 Answer

4 votes

If you check the logs of any of the Pods, you will see a warning:

[initandlisten] ** WARNING: This server is bound to localhost.
[initandlisten] ** Remote systems will be unable to connect to this server.

So, you have to pass the "--bind_ip" flag to ensure that MongoDB listens for connections on the configured addresses. See the official MongoDB documentation on IP binding for more details.

After the correction, the yaml looks like this (unchanged sections elided with ---):

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  ---
  template:
    ---
    spec:
      ---
      containers:
        - ---
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip" # Add these two
            - 0.0.0.0     # lines...!!
            - "--smallfiles"
            - "--noprealloc"
          ---
  ---

Also, the host DNS names need one slight modification: the Service is deployed in the default namespace, not in demo. So the correct config is:

> config = {
 "_id" : "rs0",
 "members" : [
   {
     _id: 1,
     host: 'mongo-0.mongo.default.svc.cluster.local:27017'
   },
   {
     _id: 2,
     host: 'mongo-1.mongo.default.svc.cluster.local:27017',
   },
   {
     _id: 3,
     host: 'mongo-2.mongo.default.svc.cluster.local:27017'
   }
 ]
}
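Since the per-pod hostnames follow the fixed pattern <pod>.<service>.<namespace>.svc.cluster.local, you can also generate the members list instead of typing it by hand. A minimal sketch (the replSetConfig helper is hypothetical; the StatefulSet/Service names and the default namespace are taken from the answer above):

```javascript
// Hypothetical helper: build a replica set config for a StatefulSet's pods.
// Pod hostnames follow <statefulSet>-<ordinal>.<service>.<namespace>.svc.cluster.local
function replSetConfig(setName, statefulSet, service, namespace, replicas, port) {
  var members = [];
  for (var i = 0; i < replicas; i++) {
    members.push({
      _id: i + 1,
      host: statefulSet + "-" + i + "." + service + "." + namespace +
            ".svc.cluster.local:" + port
    });
  }
  return { _id: setName, members: members };
}

var config = replSetConfig("rs0", "mongo", "mongo", "default", 3, 27017);
// In the mongo shell on mongo-0: rs.initiate(config)
```

Paste the function into the mongo shell on mongo-0 and call rs.initiate(config); if you later change the replica count of the StatefulSet, regenerate the config rather than editing hostnames by hand.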