New to MongoDB, I am trying to understand the 100MB limit MongoDB places on aggregation pipelines. What does this limit actually mean? Does it apply to the size of the collection we are running the aggregation on?
A bit of background: we have the following query on an inventory ledger, where we take a data set and run a group sum to find out which products are still in stock (i.e. the summed quantity is greater than 0). For each product that is in stock, we then return its records by running a $lookup back into the original collection. The query is provided below.
Assume each inventory document contains about 10 fields/record pairs, and assume roughly 1,000 records per 1MB.
QUESTION: If the inventory collection reaches 100MB as a JSON object array, does this mean the call will fail? I.e. is the maximum we can run the aggregate on 100MB x 1,000 records/MB = 100,000 records?
BTW, we are on a server that does not support writing to disk, hence the question.
db.inventory.aggregate([
  {
    $group: {
      _id: { group_id: "$product" },
      quantity: { $sum: "$quantity" }
    }
  },
  {
    $match: {
      quantity: { $gt: 0 }
    }
  },
  {
    $lookup: {
      from: "inventory",
      localField: "_id.group_id",
      foreignField: "product",   // foreignField is a field name, no "$" prefix
      as: "records"
    }
  }
])
Note that {allowDiskUse : true} is passed as the options argument to aggregate(), e.g. db.inventory.aggregate(pipeline, {allowDiskUse : true}). Regarding the no-disk-write constraint, one comment points out:

Temp files created with the {allowDiskUse : true} option are written to the <dbPath>/_tmp directory. Without having write access to it, the MongoDB service does not start at all. – Wernfried Domscheit
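To see why the 100MB limit is about stage memory rather than collection size, here is a minimal plain-JavaScript sketch of the same group-sum-and-filter logic. The key point: the accumulator holds one entry per distinct product, so $group's working set grows with the number of distinct group keys, not with the total record count. (The function name and sample ledger are hypothetical, for illustration only.)

```javascript
// Plain-JS equivalent of the $group + $match stages above.
// `totals` holds one entry per distinct product, so memory is
// proportional to distinct group keys, not total records.
function inStockProducts(ledger) {
  const totals = new Map();
  for (const { product, quantity } of ledger) {
    totals.set(product, (totals.get(product) || 0) + quantity);
  }
  return [...totals.entries()]
    .filter(([, quantity]) => quantity > 0)
    .map(([product, quantity]) => ({ _id: { group_id: product }, quantity }));
}

// Hypothetical sample ledger: receipts positive, sales negative.
const ledger = [
  { product: "widget", quantity: 10 },
  { product: "widget", quantity: -10 },
  { product: "gadget", quantity: 5 },
  { product: "gadget", quantity: -2 },
];

console.log(inStockProducts(ledger));
// "gadget" nets to 3 and is returned; "widget" nets to 0 and is filtered out.
```

So a 100MB collection with only a handful of distinct products would keep the $group stage far under the limit, while a collection with millions of distinct products could hit it even if the collection itself is small.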