Using an index for optimized data access (reducing read operations)?

I’m wondering if one can optimize read performance (reading the document data) when using compound indexes.

Let’s say we have some relevant fields: date, country and log. Here, log stands in for whatever field happens to be relevant for my query. The document will have many more fields, which are not relevant for this query.

Since I want to filter by date and country, I would create a compound index on date and country.

Since the only information I need is log, does it make sense to create a compound index on date, country and log? I would still filter only by date and country, but would having the log field inside the index prevent MongoDB from loading the additional field data from the collection itself?

I’m wondering whether read operations can be minimized this way, or whether, even if the index can serve the log data, MongoDB still needs additional reads to fetch that information from the index.
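To make the scenario concrete, here is roughly what I have in mind (a minimal sketch, assuming pymongo and a local mongod; the collection name `events`, the connection string, and the date cutoff are made up for illustration):

```python
from datetime import datetime, timezone

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.events

# Compound index that also includes `log`, so the projection below
# could in principle be answered from the index alone.
coll.create_index([("date", ASCENDING), ("country", ASCENDING), ("log", ASCENDING)])

# Filter on date and country only; project just `log` and exclude `_id`
# (projecting `_id` would force MongoDB to fetch the document itself).
for doc in coll.find(
    {"date": {"$gte": datetime(2024, 1, 1, tzinfo=timezone.utc)}, "country": "DE"},
    {"log": 1, "_id": 0},
):
    print(doc)
```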

TIA,
Sebastian

@Sebastian_61869

Excellent question! And it will be covered (pun intended :grin:) in Chapter 4, Lecture “Covered Queries”.
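As a quick preview for anyone else reading along: a covered query is one the index alone can answer, and you can check whether a query is actually covered with explain(). A minimal sketch, assuming the pymongo setup and index from your example (connection string and date cutoff are illustrative):

```python
from datetime import datetime, timezone

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017").test.events

# For a covered query, the winning plan has no FETCH stage and
# executionStats reports totalDocsExamined == 0.
plan = coll.find(
    {"date": {"$gte": datetime(2024, 1, 1, tzinfo=timezone.utc)}, "country": "DE"},
    {"log": 1, "_id": 0},
).explain()

stats = plan.get("executionStats", {})
print("totalDocsExamined:", stats.get("totalDocsExamined"))  # 0 => covered
print("totalKeysExamined:", stats.get("totalKeysExamined"))
```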

:laughing: Thank you @DHz. Great, I just watched the video about that. What doesn’t get answered there is: let’s say I have a lot of log entries (from my example) and the index can’t be loaded into RAM. I understand the other advantage is that the index data is stored in sequential order.

I’m wondering whether I still get such a big performance improvement when I have to load the index data from disk to cover my projected fields, compared to reading the information directly from the collection, which is stored on an SSD or better.

@Sebastian_61869

This really isn’t an appropriate discussion for this Forum. I’d suggest that if you want an answer, you create a sample situation and run tests.
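For example, something along these lines could serve as a starting point (a minimal sketch, assuming pymongo, a local mongod, and the index from earlier in the thread; the filter values are illustrative, and timings will obviously depend on your data, cache state, and hardware):

```python
import time
from datetime import datetime, timezone

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017").test.events
FILTER = {"date": {"$gte": datetime(2024, 1, 1, tzinfo=timezone.utc)}, "country": "DE"}

def timed(projection):
    """Run the query with the given projection; return (match count, seconds)."""
    t0 = time.perf_counter()
    n = sum(1 for _ in coll.find(FILTER, projection))
    return n, time.perf_counter() - t0

print("covered (index only):", timed({"log": 1, "_id": 0}))
print("fetched (documents) :", timed({"log": 1}))  # _id included => not covered
```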