Recommended strategy for bulk processing in backend

Hi everyone, we currently have a collection with approximately 4 million documents.

One of our backend processes needs to read all 4 million documents and apply a one-to-one update to each of them, using information from documents in other collections.

Our current idea is to load all the documents into memory in the backend (.NET Core 3.1) and process them one by one, but we would like to know whether, in your experience, there is another mechanism that is more efficient, more secure, and less resource-intensive.

Take a look at the aggregation framework. It lets you process all the documents on the server, without loading them into memory in your backend or shipping them to the client for processing.
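As a rough sketch of what that could look like: a pipeline that joins each document against another collection with `$lookup`, derives the updated fields, and writes the results back server-side with `$merge` (available in MongoDB 4.2+). The collection and field names below (`orders`, `customers`, `customerId`, `customerName`) are made up for illustration; substitute your own schema.

```python
# Hypothetical aggregation pipeline: enrich every document in "orders"
# with data from "customers" and persist the result, entirely on the server.
# All collection/field names here are illustrative assumptions.
pipeline = [
    # Join each order with its matching customer document.
    {"$lookup": {
        "from": "customers",
        "localField": "customerId",
        "foreignField": "_id",
        "as": "customer",
    }},
    # One matching customer per order -> flatten the joined array.
    {"$unwind": "$customer"},
    # Derive the updated field(s) from the joined data.
    {"$set": {"customerName": "$customer.name"}},
    # Drop the temporary joined subdocument before writing back.
    {"$project": {"customer": 0}},
    # Merge results back into the same collection, matching on _id,
    # so no documents ever travel to the client.
    {"$merge": {"into": "orders", "on": "_id", "whenMatched": "merge"}},
]
```

You would run this with something like `db.orders.aggregate(pipeline)` (or the equivalent `Aggregate` call in the .NET driver). Because `$merge` does the write inside the database, memory use in your backend stays flat regardless of whether the collection has 4 thousand or 4 million documents.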