I am trying to deploy MongoDB on my Kubernetes cluster.
I am deploying it in a few separate steps: the first creates the replicated config servers, the second the replicated shards, and the third the replicated router (mongos) pods.
I was able to create a Kubernetes StatefulSet of 3 config server pods, but now I have to initialize the replica set.
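For context, the config server piece looks roughly like this (a trimmed sketch; the image tag is a placeholder, and the names match the hostnames that appear in the error further down):

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo-config-svc
    spec:
      clusterIP: None              # headless service, gives each pod a stable DNS name
      selector:
        app: mongo-config
      ports:
        - port: 27017
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mongo-config
    spec:
      serviceName: mongo-config-svc
      replicas: 3
      selector:
        matchLabels:
          app: mongo-config
      template:
        metadata:
          labels:
            app: mongo-config
        spec:
          containers:
            - name: mongod
              image: mongo:4.0     # placeholder image/tag
              command: ["mongod", "--configsvr", "--replSet", "crs", "--bind_ip_all"]
              ports:
                - containerPort: 27017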
I understand that I need to use the rs.initiate() method to initialize the replica set, but I can't do it remotely.
My installation is fully automated: I can only run Kubernetes Jobs for the deployment, and I cannot run exec commands from any script.
I tried running a Job with a mongo client container, pointing it at the config servers with the --host flag (the Job manifest is sketched below, after the error output), but I get an error indicating admin is not authorized:
{
    "ok" : 0,
    "errmsg" : "not authorized on admin to execute command { replSetInitiate: { _id: \"crs\", configsvr: true, members: [ { _id: 0.0, host: \"mongo-config-0.mongo-config-svc.default.svc.cluster.local:27017\" }, { _id: 1.0, host: \"mongo-config-1.mongo-config-svc.default.svc.cluster.local:27017\" }, { _id: 2.0, host: \"mongo-config-2.mongo-config-svc.default.svc.cluster.local:27017\" } ] } }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
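For reference, the Job I used looks roughly like this (a sketch; the Job name and image are placeholders, and the rs.initiate() config is the one visible in the error above):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: mongo-rs-init
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: rs-init
              image: mongo:4.0     # placeholder image/tag
              command:
                - mongo
                - --host
                - mongo-config-0.mongo-config-svc.default.svc.cluster.local:27017
                - --eval
                - |
                  rs.initiate({
                    _id: "crs",
                    configsvr: true,
                    members: [
                      { _id: 0, host: "mongo-config-0.mongo-config-svc.default.svc.cluster.local:27017" },
                      { _id: 1, host: "mongo-config-1.mongo-config-svc.default.svc.cluster.local:27017" },
                      { _id: 2, host: "mongo-config-2.mongo-config-svc.default.svc.cluster.local:27017" }
                    ]
                  })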
I read that I need to grant my admin user the required roles first, but that again means running 'exec' commands inside the pods manually (or via an external script), which I can't do. As far as I understand, this is also why the remote Job is rejected: with access control enabled, MongoDB only accepts replSetInitiate from an unauthenticated client through the localhost exception, i.e. from inside the pod itself.
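Concretely, the kind of command that would have to run against localhost inside a config pod (the credentials here are placeholders) is:

    kubectl exec mongo-config-0 -- mongo --eval '
      db.getSiblingDB("admin").createUser({
        user: "admin",
        pwd: "<password>",                         // placeholder
        roles: [ { role: "root", db: "admin" } ]
      })'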
I also tried calling the Kubernetes API from inside a Job pod using ServiceAccount token permissions, but I found that the pod exec API cannot be driven with plain curl: unlike other Kubernetes API endpoints, it requires upgrading the connection to an interactive streaming session (SPDY/WebSocket).
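For completeness, the curl attempt from inside the Job pod looked roughly like this (the pod name is from my setup; the token and CA paths are the standard in-cluster ServiceAccount mounts):

    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
      -H "Authorization: Bearer $TOKEN" \
      -X POST \
      "https://kubernetes.default.svc/api/v1/namespaces/default/pods/mongo-config-0/exec?command=mongo&command=--eval&command=rs.status()&stdout=true&stderr=true"
    # fails: the exec subresource requires upgrading the connection
    # (SPDY/WebSocket), which plain curl does not perform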
Has anyone been able to initialize a MongoDB replica set in this situation, either with some Mongo workaround or a Kubernetes workaround?
thanks