
Primary RAM maxes when secondary down

just trying out the forum… a very welcome area…
over at the Google Group - Stennie has seen it - there is a user stating that their primary’s RAM maxes out when their secondaries are down.

I was wondering: if a write concern requires acknowledgement from that secondary, but has no timeout value, then those writes hang. How does something like that affect RAM consumption, if at all?

The wait is indefinite in that case, but I wouldn’t expect it to max out CPU…

maxing out RAM is what was reported…

Ah, yes, sorry. Generally, if there are queries that normally go to secondaries (secondaryPreferred read preference) and now go to the primary instead, they are likely the source of the extra RAM usage.

noted - I copy/pasted your comment over to google.groups…

on the write side: if the secondary being down prevents the write concern from being satisfied AND there is no timeout parameter, does that create an unsatisfied ‘pending acknowledgement’ hanging around in RAM too, such that it also contributes to the burden? Or does it just go away?


I think the answer depends on a few things. More connections hanging around will contribute to RAM pressure somewhat, since each connection uses some amount of memory at the OS level, but it also depends on whether these are synchronous or asynchronous requests.

I would recommend that if you’re using any write concern that involves other nodes (and therefore may not be satisfiable in some cases), you set an explicit wtimeout to let the application decide what to do in these cases, rather than hanging for long periods of time.

Could read concern majority be a cause here? When running a replica set in a PSA (Primary-Secondary-Arbiter) architecture, if the secondary is down there is going to be a considerable amount of cache pressure on the primary, and eventually it overflows into the WiredTigerLAS.wt file. I would try setting enableMajorityReadConcern: false and see if the issue can still be reproduced.
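For reference, that option is set at mongod startup. A minimal config-file fragment might look like this (the replica set name is hypothetical):

```yaml
# mongod.conf fragment: disable majority read concern so a PSA replica
# set with its secondary down does not accumulate majority-commit
# history (cache pressure spilling into WiredTigerLAS.wt).
replication:
  replSetName: rs0                 # hypothetical replica set name
  enableMajorityReadConcern: false
```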