Mongod taking several hours to start

Starting mongod is taking several hours because of the number of file handles it opens at startup.

I have approximately 133k .wt files on disk, and on my small test server the startup process seems to open about 5 of them per second, which works out to roughly 7 hours. Even in a test environment this is completely unworkable.
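For anyone wanting to reproduce that estimate, a rough sketch like the one below is enough; the dbPath and the open rate are just example values, not necessarily what you'll see on your own system:

```python
# Rough sketch: count the WiredTiger data files under dbPath and project the
# startup time from an observed open rate. The path and rate below are
# example values -- adjust them for your own setup.
from pathlib import Path

DB_PATH = Path("/var/lib/mongo")  # hypothetical dbPath
OPEN_RATE = 5.0                   # files opened per second, as observed

wt_files = list(DB_PATH.rglob("*.wt"))  # rglob also covers directoryPerDB layouts
hours = len(wt_files) / OPEN_RATE / 3600
print(f"{len(wt_files)} .wt files -> ~{hours:.1f} hours at {OPEN_RATE:g} files/sec")
```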

I’m aware that under normal operation idle file handles are closed. I can verify that my live servers have between 5k and 10k files open at once under normal load. This is fine, but the startup time is a big problem.
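If anyone wants to check the same thing on their own servers, counting the entries in /proc/&lt;pid&gt;/fd for the mongod process gives the open-file figure. A minimal sketch (Linux only; assumes a single mongod instance and that it runs as root or as the mongod user):

```python
# Rough sketch: count open file descriptors for a running mongod by listing
# /proc/<pid>/fd. Linux only; assumes one mongod instance and sufficient
# permissions to read its /proc entry.
import os
import subprocess

pid = subprocess.check_output(["pidof", "mongod"]).decode().split()[0]
open_fds = os.listdir(f"/proc/{pid}/fd")
print(f"mongod (pid {pid}) currently has {len(open_fds)} open file descriptors")
```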

Am I doing something wrong?

Is there any way I can speed this up to a sane boot time?

Even just for testing or data crunching a backup, is there a way I can skip these initial file openings?


MongoDB 4.0.9 on CentOS 7.
Disk is formatted as XFS.

Just to follow up on my own issue…

I incrementally upgraded my test server from Linode’s 1G “Nanode”. I gave up on the 2G instance after 30 minutes. Then I tried the 4G instance (the lowest 2-core option) and the mongod startup time dropped to 5 minutes! Out of curiosity I then tried the 8G (4-core) instance and the result was 2 minutes.

My live servers are the 4G instances, so a 5 minute start is far from ideal in emergency situations, but at least I know not to bother with single core machines for testing. I’d still like to know if there are any tips for faster starts in general.

It is possible that your issue isn’t just MongoDB related but also related to your cloud provider. In your first message you didn’t mention that you were using Linode’s virtual servers, but that could be the clue. Many VM providers limit IOPS on their products based on tier (which is often memory based). What you experienced in your testing suggests as much: the Nanodes and the lowest 2-core instances appear to be shared hosting. Dedicated VMs could have more IOPS, and startup times could be a lot faster.
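If you want a quick sanity check of whether the disk is the bottleneck, a crude random-read test like the sketch below gives a ballpark figure. The path, file size and duration are assumptions, and a purpose-built tool such as fio will give much better numbers; the page cache also inflates the result unless the test file is larger than RAM:

```python
# Rough sketch: crude random 4K read test against a file on the data disk.
# Ballpark only -- use fio for a proper IOPS benchmark, and make the test
# file larger than RAM (or drop caches first) so reads actually hit the disk.
import os
import random
import time

TEST_FILE = "/var/lib/mongo/iops_test.bin"  # hypothetical file on the data disk
FILE_SIZE = 2 * 1024**3                     # 2 GiB; ideally larger than RAM
BLOCK = 4096
DURATION = 10                               # seconds to run the read loop

# Create and fill the test file once (sparse holes would never touch the disk).
if not os.path.exists(TEST_FILE):
    chunk = os.urandom(1024 * 1024)
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)

fd = os.open(TEST_FILE, os.O_RDONLY)
reads = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    offset = random.randrange(0, FILE_SIZE - BLOCK, BLOCK)
    os.pread(fd, BLOCK, offset)
    reads += 1
os.close(fd)

print(f"~{reads / DURATION:.0f} random 4K reads/sec")
```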

I haven’t tried out MongoDB on real hardware or with big databases, so I don’t have exact tips for you. But if you have hardware where you could try starting up your MongoDB, that could confirm whether those usage restrictions are the cause. Or you could find it in Linode’s more detailed spec sheets. I know that Google Cloud and AWS at least have such resource restrictions on their VMs, and I/O-intensive workloads need more juice than their CPU/RAM requirements alone would suggest.

That could potentially be a selling point of MongoDB Atlas: if you’re going to use a hosting provider anyway, why not use theirs and get their support in making sure things work as they should. :slight_smile:
