We have attempted to shard a large collection in MongoDB 2.4.9 across 3 replica sets (rs1, rs2, rs3). At present, all data resides on rs1.
We have 3 config servers running and enabled sharding using:
sh.enableSharding("test")
We then selected a shard key and sharded a collection:
sh.shardCollection("test.fs.chunks", { files_id : 1 , n : 1 } )
After that we added our additional shards:
sh.addShard( "rs2/mongo2:27017" )
sh.addShard( "rs3/mongo3:27017" )
However, after 4 days, all data still resides on rs1. Looking at the configuration, the database we are sharding is listed as "partitioned" : true:
{ "_id" : "test", "partitioned" : true, "primary" : "rs1" }
However, when we execute db.fs.chunks.getShardDistribution() we are presented with an error stating that the collection is not sharded:
mongos> db.fs.chunks.getShardDistribution()
Collection test.fs.chunks is not sharded.
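As a further check (assuming the metadata itself is readable), the collection-level entry in the config database can be compared against what getShardDistribution() reports; in 2.4 this document also carries the shard key and a dropped flag:

// collection-level sharding metadata for the namespace
db.getSiblingDB("config").collections.find( { _id : "test.fs.chunks" } ).pretty()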
We then tried to re-execute the shardCollection command and received an error stating that it is already sharded:
mongos> sh.shardCollection("test.fs.chunks", { files_id : 1 , n : 1 } )
"code" : 13449,
"ok" : 0,
"errmsg" : "exception: collection test.fs.chunks already sharded with 33463 chunks"
All 3 config servers are operational. The mongos logs contain a series of balancer "distributed lock acquired/unlocked" messages, but nothing else noteworthy.
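The balancer state itself can also be inspected from mongos, e.g.:

// whether the balancer is enabled, and whether a balancing round is in progress
sh.getBalancerState()
sh.isBalancerRunning()
// the distributed lock those log messages refer to
db.getSiblingDB("config").locks.find( { _id : "balancer" } ).pretty()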
Does anyone have any advice on how we can troubleshoot this further and get some sharding happening?
Thanks
Dave
Did you create an index on files_id and n (the shard key) before sharding the collection? Also, you're not describing that you're using mongos ...? – Joachim Isaksson
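(In case it is useful for the question above: the shard-key index can be checked, and created if missing, from the test database through mongos; a minimal sketch:)

// list the indexes on fs.chunks; sharding on { files_id : 1, n : 1 } requires an index with that prefix
db.fs.chunks.getIndexes()
// 2.4-era shell: create the index if it is missing
db.fs.chunks.ensureIndex( { files_id : 1 , n : 1 } )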