Memory slowly increasing over several weeks

Hi All,

At the moment I am experiencing an issue with a MongoDB replica set (version 4.0.12): memory usage slowly increases, but only on the primary node. Over the course of 3-4 weeks, memory grows from 30% to 99%, after which the server becomes unresponsive and eventually steps down as primary. Data size does not change, as all collections are either cleaned up periodically by a client-side script or are capped collections. A similar setup that processes more data does not have this issue. Restarting the node solves the problem for the next couple of weeks. Is there anything I should check or change?

Some more detailed information:

The replicas have 2 GiB of memory and around 50 GiB of data. Indexes are around 30 MiB. No slow queries are reported by MongoDB. 60-70 connections are open at the same time. The TCMalloc stats below are from shortly before the crash. The WiredTiger cache only uses about 500 MiB.

MALLOC:     2091358456 ( 1994.5 MiB) Bytes in use by application
MALLOC: +     96292864 (   91.8 MiB) Bytes in page heap freelist
MALLOC: +     43007552 (   41.0 MiB) Bytes in central cache freelist
MALLOC: +      4904832 (    4.7 MiB) Bytes in transfer cache freelist
MALLOC: +     21930312 (   20.9 MiB) Bytes in thread cache freelists
MALLOC: +     19816704 (   18.9 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   2277310720 ( 2171.8 MiB) Actual memory used (physical + swap)
MALLOC: +    345247744 (  329.3 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   2622558464 ( 2501.1 MiB) Virtual address space used
MALLOC:
MALLOC:         273100              Spans in use
MALLOC:             86              Thread heaps in use
MALLOC:           4096              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
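For reference, the ~500 MiB WiredTiger cache on a 2 GiB host matches MongoDB's documented default: the cache is the larger of 50% of (RAM − 1 GB) and 256 MB. A minimal sketch of that formula (the helper name is my own, not a MongoDB API):

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger cache size: max(50% of (RAM - 1 GB), 256 MB)."""
    return max(0.5 * (ram_gb - 1.0), 0.25)

# A 2 GiB host gets a 0.5 GiB cache, matching the ~500 MiB observed above.
print(default_wiredtiger_cache_gb(2.0))  # 0.5
```

This also explains why the cache does not grow with the host: unless `cacheSizeGB` is set explicitly, the default is derived from RAM alone.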

A possibly related issue is

Hi @Kees,

Welcome to the MongoDB community.

It sounds like the WiredTiger cache is overwhelmed. This can happen even if queries are under the 100 ms slow-query logging threshold…

I suggest first upgrading to the latest release, 4.0.24, as a minimum.

Additionally, please add resources to the host, or at least grow the cache to 1 GB (~50% of RAM).
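For reference, the cache size can be set explicitly in `mongod.conf` via the documented `storage.wiredTiger.engineConfig.cacheSizeGB` option; a minimal fragment (other storage settings omitted):

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

Changing this setting requires a restart of the `mongod` process.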


Hi Pavel,

Thanks a lot for the quick response.

The WiredTiger memory usage grows slowly over 3-4 weeks while data size and query patterns stay the same. That doesn't exactly look like an overwhelmed cache to me. Could you elaborate a bit on this?

Indeed, I've been thinking of upgrading to a newer version. Are there any particular fixes in 4.0.24 compared to 4.0.12 that I should be aware of?

Note that "bytes currently in the cache" is 296423058 (~283 MiB). Not sure if it needs more?



Yes, there are around two years of fixes between those versions, specifically around memory consumption.

It doesn't make a lot of sense to investigate memory usage with only 500 MB of cache; that is too small for 50 GB of data…

Okay, thanks. I will try upgrading.