MongoDB disk space increases abruptly - WiredTigerLAS.wt

Hi,
we recently upgraded to the latest MongoDB, and since the upgrade we have been facing an issue with file system utilization. The MongoDB file WiredTigerLAS.wt abruptly grows to use more than 70% of the disk space, so disk utilization sometimes reaches 100% and the DB goes down automatically. Please help us understand this issue and provide a solution.

MongoDB Version : 4.2.2
No. of applications pointing at the MongoDB instance: 2
File Name: WiredTigerLAS.wt

Case 1:
Data Growth: 265GB in 20 Hours

Case 2:
Data Growth: 395GB in 12 Hours

Not something I’ve come across, so this is just my five-minute note:

This is worth a read IMO
From Percona:

Note for readers coming here from web search etc.: The typical symptom is that you suddenly notice the WiredTigerLAS.wt file is growing rapidly. So long as the WT cache is filled to its maximum, the file will grow approximately as fast as the oplog GB/hr rate at the time. Disk utilization will be 100%, and the WiredTigerLAS.wt writes and reads compete for the same disk IO as the normal db disk files. The WiredTigerLAS.wt file never shrinks, not even after a restart. The only way to get rid of it is to delete all the files and restart the node for an initial sync (which you probably can’t do until the heavy application load stops).

Don’t forget: the initial cause is not primarily a software issue - the initial cause is that the application load has overwhelmed the replica set node’s capacity to write all the document updates to disk. The symptom manifests on the primary, but it may be lag on a secondary that is driving the issue.
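Since the growth rate tracks the oplog GB/hr rate, here is a rough sketch for measuring that rate from the mongo shell using the standard db.getReplicationInfo() helper (replica set members only; the numbers are only an approximation of churn):

```javascript
// Sketch: approximate oplog churn in GB/hour, as a proxy for how fast
// WiredTigerLAS.wt may grow once the WT cache stays at its maximum.
// db.getReplicationInfo() is a standard shell helper (replica set members only).
var ri = db.getReplicationInfo();
var gbPerHour = (ri.usedMB / 1024) / ri.timeDiffHours;
print("approx oplog churn: " + gbPerHour.toFixed(2) + " GB/hour");
```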

You have this option: https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGB

But pay close attention to what happens when this is non-zero:

If the WiredTigerLAS.wt file exceeds this size, mongod exits with a fatal assertion. You can clear the WiredTigerLAS.wt file and restart mongod.
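If you do set it, here is a minimal sketch of applying the cap at runtime from the mongo shell; wiredTigerMaxCacheOverflowSizeGB is the runtime counterpart of that config file option (both added in MongoDB 4.2.1 and 4.0.12, if I recall correctly), and the 100 GB value is just an illustration:

```javascript
// Sketch: cap the cache overflow (lookaside) file at 100 GB at runtime.
// The default is 0, meaning no limit. If a non-zero cap is reached, mongod
// aborts with a fatal assertion, as the docs quoted above warn.
db.adminCommand({
  setParameter: 1,
  wiredTigerMaxCacheOverflowSizeGB: 100
});
```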

Hopefully someone else can give some knowledgeable input.

Thanks much, chris. I will try to control this file’s size by configuring the maximum size as mentioned.

Hi @Visva_Ram,

What sort of deployment do you have (standalone, replica set, or sharded cluster)? If you have a replica set or sharded cluster, can you describe the roles of your instances in terms of Primary, Secondary, and Arbiter, and also confirm whether you are seeing the LAS growth on the Primary, Secondaries, or both?

WiredTigerLAS.wt is an overflow buffer for data that does not fit in the WiredTiger cache but cannot be persisted to the data files yet (analogous to “swap” if you run out of system memory). This file should be removed on restart by mongod as it is not useful without the context of the in-memory WiredTiger cache which is freed when mongod is restarted.
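As a quick way to see that pressure building, here is a sketch that reads a few cache statistics from serverStatus (the stat names are from 4.2-era output and may differ in other versions):

```javascript
// Sketch: inspect WiredTiger cache pressure from the mongo shell.
// Stat names below are taken from MongoDB 4.2's serverStatus output;
// verify them on your own version before relying on this.
var cache = db.serverStatus().wiredTiger.cache;
["maximum bytes configured",
 "bytes currently in the cache",
 "tracked dirty bytes in the cache",
 "cache overflow score"].forEach(function (stat) {
  print(stat + ": " + cache[stat]);
});
```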

If you are seeing unbounded growth of WiredTigerLAS.wt, likely causes are a deployment that is severely underprovisioned for the current workload, a replica set configuration with significant lag, or a replica set deployment including an arbiter with a secondary unavailable.

The last scenario is highlighted in the documentation (Read Concern “majority” and Three-Member PSA) and in a startup warning in recent versions of MongoDB (3.6.10+, 4.0.5+, 4.2.0+).
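For the replica set scenarios, here is a quick sketch to check whether a secondary is lagging, using the member fields reported by rs.status():

```javascript
// Sketch: print each secondary's replication lag behind the primary.
// Uses rs.status(), so this only applies to replica set deployments.
var s = rs.status();
var primary = s.members.filter(function (m) {
  return m.stateStr === "PRIMARY";
})[0];
s.members.forEach(function (m) {
  if (m.stateStr === "SECONDARY") {
    var lagSecs = (primary.optimeDate - m.optimeDate) / 1000;
    print(m.name + " lag: " + lagSecs + "s");
  }
});
```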

The maxCacheOverflowFileSizeGB configuration option mentioned by @chris will prevent your cache overflow from growing unbounded, but is not a fix for the underlying problem.

Please provide additional details on your deployment so we can try to identify the issue.

Regards,
Stennie


Thanks Stennie,

I am using a standalone setup, but the single MongoDB instance is used by two homogeneous applications, each pointing at a different database.

We previously used the older MongoDB 3.4.17, and I hadn’t seen this kind of issue with it.

Are you saying that I am overloading MongoDB?
If so, I understand performance can degrade, but why does the cache file keep on increasing? That I don’t understand.

What is the purpose of WiredTigerLAS.wt in MongoDB?

I am not able to accept that disk usage will keep on increasing until the DB goes down automatically.

If so, how can I calculate the load that can be given to MongoDB to avoid this issue? Please comment. Thanks.

Hey there,
I have the exact same problem with a single mongo instance.

I’m doing many bulkWrites, one after the other and in parallel, and in some cases the file size increases until the entire disk gets full and Mongo crashes.

I think I and many others need a more definite answer on how much write load can cause this and how to limit the load on Mongo; it’s really hard to just “guess” the numbers here. Also, I think it’s weird that Mongo can’t handle the load in a better way, or at least clear this file after the load has finished.
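For what it’s worth, one application-side workaround is to apply back-pressure yourself: run the batches sequentially with a journaled write concern instead of firing them in parallel, so each batch waits for durable writes before the next begins. This is only a sketch; the batch size, collection name, and the {w: 1, j: true} choice are illustrative assumptions, not tuned values:

```javascript
// Sketch: throttle ingestion by running bulkWrites sequentially and waiting
// for journaled writes, giving the disk a chance to keep up.
// `docs` (an array of documents), BATCH, and the collection name are made up.
var BATCH = 1000;
for (var start = 0; start < docs.length; start += BATCH) {
  var ops = docs.slice(start, start + BATCH).map(function (d) {
    return { insertOne: { document: d } };
  });
  // {j: true} makes each batch wait for the journal before the loop continues.
  db.mycoll.bulkWrite(ops, { ordered: true, writeConcern: { w: 1, j: true } });
}
```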

Yeah, I also think the database should handle this in a more elegant way. Don’t let users see strange things, even if it’s about the internal implementation.

I have a question about this case.
When we do an __wt_las_insert_block, we can insert some pages whose updates are not WT_UPDATE_BIRTHMARK.

And when we do an __wt_las_sweep, we only delete those entries with the WT_UPDATE_BIRTHMARK flag.

Does that mean there are always some pages that cannot be deleted from the WiredTigerLAS.wt file? I ask because I notice that the WiredTigerLAS.wt file size never shrinks.