I have 7 billion documents in my collection. My data model is:
{ "_id" : ObjectId("515921e3bbea58d25eb07b22"), "serverId" : 0, "counterId" : 0, "inserted" : ISODate("2013-03-21T00:26:30Z"), "counterValue" : 0.03256159112788737, "min" : -29.967438408872113, "max" : 20.032561591127887 } .
I have two indexes, on serverId and counterId, and my query is:
[
  { $match: { inserted: { $gte: new Date("2013-03-21T00:00:00Z"),
                          $lt:  new Date("2013-03-22T10:00:00Z") } } },
  { $group: { _id: { serverId: '$serverId', counterId: '$counterId' },
              avgValue: { $avg: '$counterValue' } } }
]
It finds the average counterValue, grouped by serverId and counterId, for documents inserted between specific dates.
But it takes 13 seconds, and I need it to take about 1 second. How can I do that?
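To show what the current indexes do here, this is a quick sketch (with "metrics" as a placeholder for the real collection name) that explains the date-range predicate on its own; with indexes only on serverId and counterId it falls back to a full collection scan:

// Explain the $match predicate as a plain find(). With no index on inserted,
// the 2.4-era shell reports "cursor" : "BasicCursor", i.e. a full collection scan.
db.metrics.find({ inserted: { $gte: new Date("2013-03-21T00:00:00Z"),
                              $lt:  new Date("2013-03-22T10:00:00Z") } }).explain()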
You'll need an index on the inserted column if you want to avoid a full collection scan. – WiredPrairie
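A minimal sketch of that suggestion, again assuming the collection is called "metrics": build an index on inserted so the $match stage can do an index range scan instead of touching all 7 billion documents, and optionally a compound index that also carries the group keys.

// Index on the time field used by $match (ensureIndex is the 2.4-era shell helper;
// newer shells spell it createIndex with the same key pattern).
db.metrics.ensureIndex({ inserted: 1 })

// Optional: a compound index starting with inserted that also includes the group keys.
db.metrics.ensureIndex({ inserted: 1, serverId: 1, counterId: 1 })

After the index is built, the same explain() should report a BtreeCursor over inserted instead of a BasicCursor.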