Over the top IOPS - Behaviour

Hello

I want to understand the actual behaviour of the WiredTiger engine in the case below.

Write IOPS is very high, exceeding the provisioned or expected capacity for several consecutive 5-minute intervals, at almost 2-3 times the expected capacity. For example, on an M10 or M20 the peak IOPS is 100; let's say the IOPS stays above 250 consistently for 30 minutes.

What exactly happens in this scenario? Would the WiredTiger engine queue all the operations and then keep draining the queue at whatever rate the disk IOPS allows? If so, and assuming the relevant documents are in the in-memory working set (cache), would it apply the writes to the memory cache first and then persist them to disk? In that case, even if a write operation takes time to complete (depending on how far down the queue it sits), future reads of the same document wouldn't be affected, since it would be served from the cache.

This is just my guess; I would like to know how the WiredTiger engine actually works here.
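To make the "write to cache first, disk later" part of the question concrete, here is a minimal pymongo sketch (the connection string, database, and collection names are hypothetical) contrasting a write acknowledged once it is applied in memory with one that waits for the on-disk journal:

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Hypothetical connection string; replace with your own cluster URI.
client = MongoClient("mongodb://localhost:27017")
db = client["test"]

# w=1, j=False: the write is acknowledged once it is applied in
# WiredTiger's in-memory cache; flushing to disk (journal/checkpoint)
# happens asynchronously in the background.
fast = db.get_collection("docs", write_concern=WriteConcern(w=1, j=False))
fast.insert_one({"_id": 1, "status": "acknowledged from memory"})

# w=1, j=True: the acknowledgement additionally waits until the write
# is durable in the on-disk journal, so sustained disk saturation
# directly slows these acknowledgements down.
durable = db.get_collection("docs", write_concern=WriteConcern(w=1, j=True))
durable.insert_one({"_id": 2, "status": "acknowledged after journal flush"})

# Either way, a subsequent read of these documents is served from the
# WiredTiger cache as long as they are still resident in memory.
print(fast.find_one({"_id": 1}))
```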

Hi @Prasanna_Venkatesan and welcome to the MongoDB Community :muscle:!

I will let someone else answer your question about WiredTiger, but I just wanted to mention that high IOPS is usually generated because too many documents are evicted from RAM too soon.

If these frequently accessed documents can't stay in RAM long enough, you end up fetching the same documents from disk over and over again, because your RAM isn't large enough to keep them resident. Adding more RAM would reduce the evictions, and your queries would more often find what they need directly in RAM without a disk fetch.

Usually, high IOPS == your working set (the documents and indexes your queries and workload touch) doesn't fit in your RAM.
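One way to check this is to look at WiredTiger's cache counters in `serverStatus`. A minimal pymongo sketch (the connection string is hypothetical, and the exact statistic names can vary slightly between MongoDB versions):

```python
from pymongo import MongoClient

# Hypothetical connection string; point this at your own deployment.
client = MongoClient("mongodb://localhost:27017")

status = client.admin.command("serverStatus")
cache = status["wiredTiger"]["cache"]

# Cache sizing: how full the configured WiredTiger cache currently is.
configured = cache["maximum bytes configured"]
in_use = cache["bytes currently in the cache"]
print(f"cache usage: {in_use / configured:.0%} of {configured / 1024**3:.1f} GiB")

# Eviction pressure: pages pushed out of the cache. A steadily growing
# "pages read into cache" alongside heavy eviction suggests the working
# set does not fit in RAM.
print("pages read into cache:   ", cache["pages read into cache"])
print("unmodified pages evicted:", cache["unmodified pages evicted"])
print("modified pages evicted:  ", cache["modified pages evicted"])
```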

Cheers,
Maxime.