Yes, this is quite possible. You can take advantage of the bulk write API to handle the asynchronous operations, which gives better performance, especially when dealing with large datasets. For Mongoose versions >=4.3.0 that support MongoDB Server 3.2.x, you can use bulkWrite() for the updates. The following example shows how you can go about this:
var bulkUpdateCallback = function(err, r) {
    console.log(r.matchedCount);
    console.log(r.modifiedCount);
};

// Initialise the bulk operations array
var bulkOps = expenseListToEdit.map(function (expense) {
    return {
        "updateOne": {
            "filter": { "_id": expense._id },
            "update": { "$set": expense }
        }
    };
});

// Get the underlying collection via the native node.js driver collection object
Expense.collection.bulkWrite(bulkOps, { "ordered": true, w: 1 }, bulkUpdateCallback);
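As a sanity check, the operations array can be built and inspected without a database connection; bulkWrite() just expects an array of plain objects in this shape. The docs array below is made up purely for illustration:

```javascript
// Hypothetical sample documents standing in for expenseListToEdit
var docs = [
    { _id: 1, amount: 10 },
    { _id: 2, amount: 20 }
];

// Same mapping as above, applied to plain objects
var ops = docs.map(function (doc) {
    return {
        "updateOne": {
            "filter": { "_id": doc._id },
            "update": { "$set": doc }
        }
    };
});

console.log(JSON.stringify(ops[0]));
// → {"updateOne":{"filter":{"_id":1},"update":{"$set":{"_id":1,"amount":10}}}}
```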
For Mongoose versions ~3.8.8, ~3.8.22, 4.x, which support MongoDB Server >=2.6.x, you could use the Bulk API as follows:
var bulk = Expense.collection.initializeOrderedBulkOp(),
    counter = 0;

expenseListToEdit.forEach(function(expense) {
    bulk.find({ "_id": expense._id })
        .updateOne({ "$set": expense });
    counter++;

    if (counter % 500 === 0) {
        bulk.execute(function(err, r) {
            // do something with the result
            bulk = Expense.collection.initializeOrderedBulkOp();
            counter = 0;
        });
    }
});

// Catch any docs in the queue under or over the 500's
if (counter > 0) {
    bulk.execute(function(err, result) {
        // do something with the result here
    });
}
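The counter/modulo bookkeeping above can also be factored into a small pure helper that splits the list into batches of 500 up front, with one bulk.execute() call per batch. This is just a sketch of the same chunking logic (the toBatches name is my own) and runs without a database:

```javascript
// Split an array into chunks of at most `size` items,
// mirroring the `counter % 500` logic above
function toBatches(items, size) {
    var batches = [];
    for (var i = 0; i < items.length; i += size) {
        batches.push(items.slice(i, i + size));
    }
    return batches;
}

console.log(toBatches([1, 2, 3, 4, 5], 2));
// → [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```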