
I've set up a sharded MongoDB environment on localhost, with three config servers, two shard mongod instances, and a single mongos.

Once the cluster has started up, I run the following sequence of commands:

sh.addShard( "127.0.0.1:27010")
sh.addShard( "127.0.0.1:27011")

a = {"_id" : 1, "value" : 1}
b = {"_id" : 2, "value" : 2}
c = {"_id" : 3, "value" : 3}
d = {"_id" : 4, "value" : 4}

use foobar;
db.foo.insert(a);
db.foo.insert(b);
db.foo.insert(c);
db.foo.insert(d);

Then I enable sharding on the database, create a hashed index, and shard the collection:

sh.enableSharding("foobar");
db.foo.ensureIndex({"value":"hashed"});
sh.shardCollection("foobar.foo", { value: "hashed" } )

All of the above operations report success.

But when I run db.foo.stats()

I see that all the data ends up in just one shard, without being distributed. And running

db.printShardingStatus();

produces:

--- Sharding Status --- 
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("52170e8a7633066f09e0c9d3")
}
 shards:
{  "_id" : "shard0000",  "host" : "127.0.0.1:27010" }
{  "_id" : "shard0001",  "host" : "127.0.0.1:27011" }
 databases:
{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
{  "_id" : "foobar",  "partitioned" : true,  "primary" : "shard0000" }
    foobar.foo
        shard key: { "value" : "hashed" }
        chunks:
            shard0000   1
        { "value" : { "$minKey" : 1 } } -->> { "value" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0) 

Interestingly, though, if I start with an empty collection and enable sharding on it before adding any data, the results are very different:

db.foo.stats();
{
"sharded" : true,
"ns" : "foobar.foo",
"count" : 4,
"numExtents" : 2,
"size" : 144,
"storageSize" : 16384,
"totalIndexSize" : 32704,
"indexSizes" : {
    "_id_" : 16352,
    "value_hashed" : 16352
},
"avgObjSize" : 36,
"nindexes" : 2,
"nchunks" : 4,
"shards" : {
    "shard0000" : {
        "ns" : "foobar.foo",
        "count" : 1,
        "size" : 36,
        "avgObjSize" : 36,
        "storageSize" : 8192,
        "numExtents" : 1,
        "nindexes" : 2,
        "lastExtentSize" : 8192,
        "paddingFactor" : 1,
        "systemFlags" : 1,
        "userFlags" : 0,
        "totalIndexSize" : 16352,
        "indexSizes" : {
            "_id_" : 8176,
            "value_hashed" : 8176
        },
        "ok" : 1
    },
    "shard0001" : {
        "ns" : "foobar.foo",
        "count" : 3,
        "size" : 108,
        "avgObjSize" : 36,
        "storageSize" : 8192,
        "numExtents" : 1,
        "nindexes" : 2,
        "lastExtentSize" : 8192,
        "paddingFactor" : 1,
        "systemFlags" : 1,
        "userFlags" : 0,
        "totalIndexSize" : 16352,
        "indexSizes" : {
            "_id_" : 8176,
            "value_hashed" : 8176
        },
        "ok" : 1
    }
},
"ok" : 1
}

So the question is: am I missing something when I shard an existing collection that already contains data?


2 Answers

1 vote

You can run db.collection.getShardDistribution() to see how your chunks are divided across the shards:

mongos> db.people.getShardDistribution()
Shard S1 at S1/localhost:47017,localhost:47018,localhost:47019
 data : 32.37MiB docs : 479349 chunks : 1
 estimated data per chunk : 32.37MiB
 estimated docs per chunk : 479349
Shard foo at foo/localhost:27017,localhost:27018,localhost:27019
 data : 67.54MiB docs : 1000000 chunks : 2
 estimated data per chunk : 33.77MiB
 estimated docs per chunk : 500000
Totals
 data : 99.93MiB docs : 1479349 chunks : 3
 Shard S1 contains 32.4% data, 32.4% docs in cluster, avg obj size on shard : 70B
 Shard foo contains 67.59% data, 67.59% docs in cluster, avg obj size on shard : 70B
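
For the collection in the question, the same helper can be run through mongos against the foobar database:

mongos> use foobar
mongos> db.foo.getShardDistribution()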

Thanks, Neha

0 votes

Currently you have such a small dataset that there is only one chunk of data, and with a single chunk there is nothing for the balancer to move. MongoDB balances data according to the migration thresholds, so that the impact of the balancer is kept to a minimum. Try adding more data :) and the chunk will be split, and the balancer will distribute the resulting chunks over time.
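
A rough sketch of generating that data from the mongos shell (the document count and the ~1 KB filler field are arbitrary, just to get well past the default 64 MB chunk size; _id starts at 5 to avoid clashing with the four existing documents):

use foobar
var filler = new Array(1025).join("x");      // ~1 KB of padding per document
for (var i = 5; i <= 200000; i++) {          // roughly 200 MB in total, enough for several chunks
    db.foo.insert({ "_id" : i, "value" : i, "filler" : filler });
}
db.foo.getShardDistribution()                // watch the chunks split and migrate over time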

When the collection is empty to begin with, each shard is allocated a range of chunks up front, which is why you see the data spread across both shards in the second case.
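
As far as I know, the shardCollection command also accepts a numInitialChunks option for hashed shard keys (on an empty collection only), so you can control how many chunks are pre-created and spread across the shards; something like the following, where 8 is just an example value:

db.adminCommand({
    shardCollection : "foobar.foo",
    key : { "value" : "hashed" },
    numInitialChunks : 8
})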