
Sometimes collection gets saved, sometimes not

Hello, I have a Discord bot (1.1k+ servers), and we have a bug when writing to the database.
Sometimes the data gets saved, but sometimes it doesn't.
We can't reproduce it with the testing bot, so I suspect it happens because the bot writes a lot of data to MongoDB. Could that be it?

Are there any clues, error messages of some sort, in the logs that would make you think the bot is writing too much data?

This kind of behaviour is reflected somewhere: in the mongod logs, the bot logs, or the system logs of the OS.

What’s the general architecture? Where is the bot or bots running? Where is mongod running?

We have the bot running on a VPS, using discord.js. MongoDB is in the cloud. No errors in the console, as far as I know.

Hello @Robin_Schapendonk

One thing you can do here to find out how your operations are doing on that collection:

Enable the profiler for the particular database. It will create a system.profile collection, and from there you can filter for the specific operations that are not working as expected.

You can enable the profiler as explained in this document: https://docs.mongodb.com/manual/tutorial/manage-the-database-profiler/#profiling-levels
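For illustration, here is a minimal mongosh sketch of that workflow. The database name botdata is a placeholder, and note that shared Atlas tiers (M0/M2/M5) may not allow the profiler at all, so this only applies where setProfilingLevel is permitted:

```js
// Minimal sketch, run in mongosh; "botdata" is a placeholder database name.
const botdb = db.getSiblingDB("botdata");

// Level 2 records every operation; level 1 records only operations slower than slowms.
botdb.setProfilingLevel(2);

// After reproducing the problem, inspect the most recent write operations:
botdb.system.profile.find({ op: { $in: ["insert", "update", "remove"] } })
  .sort({ ts: -1 })
  .limit(20);
```

Remember to turn it back off with setProfilingLevel(0) afterwards, since level 2 adds overhead to every operation.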


I ran mongosh "connection string" and then db.setProfilingLevel(2).

@Aayushi_Mangal what tool should I use for this? (By the way, I can't find what I need to install for mongod.)

To get the logs from mongod on Atlas, refer to https://docs.atlas.mongodb.com/reference/api/monitoring-and-logs/

https://cloud.mongodb.com/api/atlas/v1.0/groups/projectID/clusters/cluster0-1c9u4/logs/mongodb.gz

Then I get a login prompt asking for credentials.
When I enter my details it just reloads and does nothing else.

I would try with a different browser just in case the issue is browser specific.

That did not fix it. Should I log in with a MongoDB database access account, or with my user account? @steevej

It should be your Atlas user account. The one you use to create project/cluster …

I have Google connected; in the top right it says “Robin”, and when I click on it it says “Robin Schapendonk” with my email under that.

What should I use then? @steevej

I already tried all three and none of them worked.

I am not sure which user, but if the user is a “Cluster Owner” it should work. Maybe something broke when they switched to unified login.

One thing I note is that your projectID is probably not literally projectID. Mine looks like 5a6cb…03b161. Also, in place of cluster0-1c9u4 you need the real host name, which will look like cluster0-shard-00-01-1c9u4.mongodb.net.

As I was writing the first paragraph above (which I struck out), I read the following:

Authentication

As previously mentioned, the Atlas API uses HTTP Digest Authentication. The details of digest authentication are beyond the scope of this document, but it essentially requires a username and a password which are hashed using a unique server-generated value called a nonce. The username is the API public key and the password is the corresponding private key.

and

Each Atlas user or application needing to connect to Atlas must generate an API key before accessing the Atlas API.

Please note that this is also new to me. I have used the logs from my local servers before, but I have never had to get the logs from an Atlas cluster.
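Putting those two quotes together, here is a hedged sketch of the actual request with curl. The public/private key pair is an Atlas API key you generate yourself, and <PROJECT_ID> is a placeholder for your real project ID; the host name follows the cluster0-shard-00-01-1c9u4.mongodb.net format mentioned above:

```sh
# Sketch only: <PUBLIC_KEY>, <PRIVATE_KEY> and <PROJECT_ID> are placeholders.
curl --user "<PUBLIC_KEY>:<PRIVATE_KEY>" --digest \
     --header "Accept: application/gzip" \
     --output mongodb.gz \
     "https://cloud.mongodb.com/api/atlas/v1.0/groups/<PROJECT_ID>/clusters/cluster0-shard-00-01-1c9u4.mongodb.net/logs/mongodb.gz"
```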

Okay @Aayushi_Mangal / @steevej, I downloaded the logs using the API. What now?
https://cloud.mongodb.com/api/atlas/v1.0/groups/5ed88b7804e0302cd1df2400/clusters/cluster0-shard-00-01-1c9u4.mongodb.net/logs/mongodb.gz
That is the link, but I got a 0-byte .gz file?


I am not sure if we have log files on the free tier.

Is there anything that can cause this problem (sometimes getting saved, sometimes not)? A lot of reads/writes? Or something else?

Since you cannot reproduce the issue in your test environment, I suspect the following:

  1. the problem does not occur in the test environment because you are using a local server
  2. the problem occurs with Atlas because you reach the I/O quota of the free tier

  1. Our test database is just another database in the same project
  2. What is the I/O quota?

Hi @Robin_Schapendonk, if you are on the free tier of MongoDB Atlas (an M0 instance) or the other shared instances (M2 and M5), there are limitations on what is allowed.

I wonder if you’re hitting an operation limit (taken from the document above):

Maximum operations:

  • M0 : 100 per second
  • M2 : 200 per second
  • M5 : 500 per second

I believe what @steevej is referring to here are the data transfer limits (again taken from the linked document):

M0/M2/M5 clusters limit the total data transferred into or out of the cluster as follows:

  • M0 : 10 GB in and 10 GB out per week
  • M2 : 20 GB in and 20 GB out per week
  • M5 : 50 GB in and 50 GB out per week

Atlas throttles the network speed of clusters which exceed the posted limits.
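If you want to sanity-check how many operations per second the bot really generates, one rough way (a sketch, assuming db.serverStatus() is available to your user; parts of it are restricted on shared tiers) is to sample the opcounters twice in mongosh:

```js
// Rough sketch: sample opcounters twice and estimate operations per second.
const before = db.serverStatus().opcounters;
sleep(10000); // mongosh helper: wait 10 seconds
const after = db.serverStatus().opcounters;

const total = (c) => c.insert + c.query + c.update + c.delete + c.getmore + c.command;
print(`~${((total(after) - total(before)) / 10).toFixed(1)} ops/second over the sample window`);
```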

Well, I had ~25 operations per second, which isn't a problem. And also, sometimes it did get saved, so I guess the limit wasn't reached.
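Independently of the quotas, one thing that can help narrow this down on the bot side is to make sure every write is awaited and its result checked, so a failure cannot pass silently. Here is a sketch, assuming the bot talks to the official mongodb Node.js driver (4.x) directly; the function, collection, and field names are hypothetical and should be adapted to your own schema:

```js
// Hypothetical helper, not taken from the thread; adjust names to your own schema.
async function saveGuildSettings(collection, guildId, settings) {
  try {
    const result = await collection.updateOne(
      { guildId },          // match the guild's document
      { $set: settings },   // apply the new settings
      { upsert: true }      // create the document if it does not exist yet
    );
    // "acknowledged" tells you the cluster actually accepted the write.
    if (!result.acknowledged) {
      console.error(`Write for guild ${guildId} was not acknowledged`);
    }
    return result;
  } catch (err) {
    // Without this, a rejected write inside an event handler can be dropped silently,
    // which would look exactly like "sometimes it saves, sometimes it doesn't".
    console.error(`Write for guild ${guildId} failed:`, err);
    throw err;
  }
}
```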