Allow configuration of 100MB memory limit per aggregation pipeline stage

In this old thread from 2016, it was asked whether there was a way to increase the 100MB in-memory limit for each stage of an aggregation pipeline. The responses centered around two points:

  1. If an aggregation pipeline stage uses too much memory, it degrades performance of the MongoDB server as a whole, negatively impacting other queries.
  2. You can set allowDiskUse: true and fall back to performing these pipeline stages on disk when they exceed 100MB.
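For reference, the option from point 2 is passed per operation. Here is a minimal mongosh sketch, assuming a running server and a hypothetical `orders` collection; the collection and field names are illustrative only:

```javascript
// A blocking $group + $sort over a large collection can exceed the 100MB
// per-stage limit. Passing allowDiskUse lets those stages spill to
// temporary files on disk instead of aborting with an error.
db.orders.aggregate(
  [
    { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
    { $sort: { total: -1 } }
  ],
  { allowDiskUse: true }
);
```

Note that the option applies to the whole pipeline invocation, not to individual stages, which is part of why the per-stage limit itself matters.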

I believe this subject needs to be revisited for the following reasons:

  1. “Too much memory” is very subjective, and the 100MB number is arbitrary. The hardware for a typical MongoDB deployment in 2016 is not the same as in 2020, and one company’s MongoDB use case isn’t another’s. Company A might run MongoDB on top-of-the-line hardware with only a few concurrent users, and need large in-memory aggregation queries to run fast, so they want a limit of 200MB or even higher. Company B might have lesser hardware and many concurrent users, and want to set the limit to 50MB or less.
  2. The allowDiskUse: true option is nice to have, but it obviously comes with a huge performance hit once your aggregation pipeline stages exceed the 100MB limit, even on high-performance SSD drives.

This really comes down to who knows best what the limit should be: the company’s developers and database/server ops teams, or the MongoDB developers? The MongoDB developers obviously have much more knowledge of the inner workings of Mongo, but that expertise can still be reflected in a default limit of 100MB. I believe very strongly that today, in 2020, this limit should be configurable so that we can tailor the behavior to our own specific situation. The documentation can of course warn that changing the setting can impact performance (although everyone already knows that).
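To make the proposal concrete, the limit could be exposed the same way other server parameters already are. This is purely illustrative: the parameter name `aggregationStageMemoryLimitBytes` is hypothetical and does not exist in MongoDB today.

```javascript
// HYPOTHETICAL sketch of what a configurable per-stage limit could look like.
// At startup:
//   mongod --setParameter aggregationStageMemoryLimitBytes=209715200
// Or at runtime, from mongosh against the admin database:
db.adminCommand({
  setParameter: 1,
  aggregationStageMemoryLimitBytes: 200 * 1024 * 1024  // e.g. 200MB for Company A
});
```

Keeping 100MB as the default would preserve the current behavior for everyone who doesn’t opt in to a different value.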

If you want to get into the details of why my company needs to increase this limit, and why it has been holding us back from switching from SQL Server to MongoDB for the last year (despite spending 6 months developing a working Mongo solution), then we can get into specifics. But really that would distract from the central issue, which is that different customers have different scenarios, and a “one size fits all” limit isn’t appropriate and needs to be customizable.


Hi Justin,

Welcome to the MongoDB Community forum and thank you for the detailed product feedback.

This would be a great improvement to suggest on the new MongoDB Feedback site so others can upvote and watch the request. I could copy your suggestion there, but it would be better if you file it directly so you automatically get updates and can follow up on any questions or feedback.

A general design goal is to try to have reasonable defaults without an overwhelming variety of tuning knobs (particularly when new features are introduced). However, this is a good example where a configurable option could be beneficial to suit different use cases and available resources.

Regards,
Stennie

Posted to https://feedback.mongodb.com/forums/924280-database/suggestions/40081492-allow-configuration-of-100mb-memory-limit-per-aggr.
