0 votes

I've set up a simple server configuration to test sharding functionality, and I get the error above.

My configuration is pretty simple: one config server, one shard server, and one mongos (at 127.0.0.1:27019, 127.0.0.1:27018, and 127.0.0.1:27017 respectively).

Everything seems to work well until I try to shard a collection; the command gives me the following:

sh.shardCollection("test.test", { "test" : 1 } )
{
    "ok" : 0,
    "errmsg" : "ns not found",
    "code" : 26,
    "codeName" : "NamespaceNotFound",
    "operationTime" : Timestamp(1590244259, 5),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1590244259, 5),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
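When mongos reports "ns not found" like this, a first step is to check what the router itself sees, as opposed to what the shard holds. A minimal check from a shell connected to the mongos (ports taken from the setup above):

```
// connect to the router: mongo --port 27017
use test
db.getCollectionNames()  // if "test" is not listed here, the router does not see the collection
sh.status()              // shows which shards and databases the cluster knows about
```

If the collection is visible on the shard's primary but not through mongos, the router's view of the cluster metadata is the problem rather than the collection itself.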

The config server and shard server outputs show no errors:

2020-05-23T10:39:46.629-0400 I  SHARDING [conn11] about to log metadata event into changelog: { _id: "florent-Nitro-AN515-53:27018-2020-05-23T10:39:46.629-0400-5ec935b2bec982e313743b1a", server: "florent-Nitro-AN515-53:27018", shard: "rs0", clientAddr: "127.0.0.1:58242", time: new Date(1590244786629), what: "shardCollection.start", ns: "test.test", details: { shardKey: { test: 1.0 }, collection: "test.test", uuid: UUID("152add6f-e56b-40c4-954c-378920eceede"), empty: false, fromMapReduce: false, primary: "rs0:rs0/127.0.0.1:27018", numChunks: 1 } }
2020-05-23T10:39:46.620-0400 I  SHARDING [conn25] distributed lock 'test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c5
2020-05-23T10:39:46.622-0400 I  SHARDING [conn25] distributed lock 'test.test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c7
2020-05-23T10:39:46.637-0400 I  SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c7' unlocked.
2020-05-23T10:39:46.640-0400 I  SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c5' unlocked.

Of course the collection exists on the primary shard:

rs0:PRIMARY> db.test.stats()
{
    "ns" : "test.test",
    "size" : 216,
    "count" : 6,
    "avgObjSize" : 36,
    "storageSize" : 36864,
    "capped" : false,
    ...
}

I have no idea what could be wrong here; I'd much appreciate any help :)

EDIT:

Here are the details of the steps I follow to run the servers; I probably misunderstand something:

Config server:

sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg  
mongo --port 27019

Then in mongo shell

rs.initiate(
  {
    _id: "rs0",
    configsvr: true,
    members: [
      { _id : 0, host : "127.0.0.1:27019" }
    ]
  }
)

Shard server:

sudo mongod --shardsvr --replSet rs0  --dbpath /srv/mongodb/shrd1/ --port 27018
mongo --port 27018

Then in shell:

rs.initiate(
  {
    _id: "rs0",
    members: [
      { _id : 0, host : "127.0.0.1:27018" }
    ]
  }
)
db.test.createIndex({test:1})

Router:

sudo mongos --configdb rs0/127.0.0.1:27019
mongo

Then in shell:

sh.addShard('127.0.0.1:27018')
sh.enableSharding('test')
sh.shardCollection('test.test', {test:1})
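After these steps, the cluster state can be sanity-checked from the same mongos shell before sharding the collection (a sketch; the exact output layout varies by MongoDB version):

```
sh.status()
// should list the shard under "shards" and show the "test"
// database with sharding enabled ("partitioned" : true)
```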

Comments:

did you run sh.enableSharding on the database? – Joe
Hello Joe, yes I enabled it. I have no errors when enabling it, and the shard server logs display an info message, so the operation has been taken into account. – Florent Constant
What happens when you query the collection through mongos? – D. SM
It throws an exception: Cannot accept sharding commands if not started with --shardsvr – Florent Constant

2 Answers

1 vote

That error sometimes happens when some routers have an out-of-date idea of which databases/collections exist in the sharded cluster.

Try running flushRouterConfig (https://docs.mongodb.com/manual/reference/command/flushRouterConfig/) on each mongos (i.e. connect to each mongos by itself, one at a time, and run the command on it).
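flushRouterConfig is issued through adminCommand. Connected to one mongos at a time, it looks like this:

```
// in a mongo shell connected to the mongos
db.adminCommand({ flushRouterConfig: 1 })
```

This clears the router's cached routing table, forcing it to reload the metadata from the config servers on the next operation.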

0 votes

I just misunderstood one basic concept: config servers and shard servers are distinct, independent mongod instances, so each must be part of a distinct replica set.

So replacing

sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg 

with

sudo mongod --configsvr --replSet rs0Config --port 27019 --dbpath /srv/mongodb/cfg 

makes the configuration work.
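Putting it together, the corrected startup sequence looks like this (same paths and ports as in the question; note that the mongos --configdb string and the config server's rs.initiate must also use the new replica set name):

```
# config server replica set, with its own distinct name
sudo mongod --configsvr --replSet rs0Config --port 27019 --dbpath /srv/mongodb/cfg

# shard replica set keeps the name rs0
sudo mongod --shardsvr --replSet rs0 --port 27018 --dbpath /srv/mongodb/shrd1/

# router, pointing at the renamed config replica set
sudo mongos --configdb rs0Config/127.0.0.1:27019
```

In the config server's shell, rs.initiate would then use { _id: "rs0Config", configsvr: true, ... } instead of _id: "rs0".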