MongoDB.live, free & fully virtual. June 9th - 10th. Register Now

Why "db.serverStatus().asserts.user" high number?

I am running MongoDB Community Server as a sharded cluster. On a config server:

xxxConfig:PRIMARY> db.serverStatus().asserts
{ "regular" : 0, "warning" : 0, "msg" : 0, "user" : 335, "rollovers" : 0 }

It increased to 335 within 3 hours.

In config server log:

2020-03-05T17:01:14.724+0800 D COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1583398874724) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings

and

2020-03-06T08:45:11.555+0800 D COMMAND [conn321719] assertion while executing command 'createIndexes' on database 'config' with arguments '{ createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false, $clusterTime: { clusterTime: Timestamp(1583455507, 1), signature: { hash: BinData(0, 53B67F92D69C8BFBFE808B0127033E527960EC14), keyId: 6766524995490283521 } }, $configServerState: { opTime: { ts: Timestamp(1583455502, 2), t: 6 } }, $db: "config" }': CannotImplicitlyCreateCollection{ ns: "config.system.sessions" }: request doesn't allow collection to be created implicitly.

I ran these commands:

use config  
db.system.session.find()     // returns nothing

I think the session info in memory cannot be written to the collection "config.system.sessions", so asserts.user keeps growing.

How should I solve this problem?

Thanks!

Since these assertions are generated by system activity (updating sharding metadata), I’d suspect a bug in your server version or deployment configuration. Shards should detect changes in the primary config server and should not be sending write commands directly to secondaries (as appears to be the case in the few log messages you’ve included).

Can you provide some further details:

  • What specific version of MongoDB server are you running?
  • Were there any recent changes in your sharded cluster configuration (for example: upgrades or new config servers added) prior to these errors appearing?
  • Are you seeing this message on multiple config servers or just a single secondary config server?
  • Are all members of your sharded cluster running the same version of MongoDB server? If not, what versions are being used?

The query should be db.system.sessions.find(). However, there does appear to be an issue with this collection not existing, since one of your config secondaries is trying to create it implicitly.
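
To confirm what state the collection is in, you could run something like the following from the mongo shell (a sketch only — the collection and index names are taken from your log output, and the results will depend on where you run it):

    use config
    db.getCollectionNames()            // look for "system.sessions" in the list
    db.system.sessions.find().count()  // note the plural collection name
    db.system.sessions.getIndexes()    // the log shows a TTL index named "lsidTTLIndex"

If the collection is missing entirely, that would be consistent with the CannotImplicitlyCreateCollection assertions in your log.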

Regards,
Stennie


Thank you very much!
My cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d51088a47213054f0685153")
  }
  shards:
        {  "_id" : "cldRS0",  "host" : "cldRS0/node01:27018,node02:27018,node03:27018",  "state" : 1 }
        {  "_id" : "cldRS1",  "host" : "cldRS1/node04:27018,node05:27018,node06:27018",  "state" : 1 }
        {  "_id" : "cldRS2",  "host" : "cldRS2/node07:27018,node08:27018,node09:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  1
        Last reported error:  Couldn't get a connection within the time limit
        Time of Reported error:  Thu Nov 07 2019 13:04:37 GMT+0800 (CST)
        Migration Results for the last 24 hours: 
                No recent migrations
  databases: 
... ...  

     {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
            config.system.sessions
                    shard key: { "_id" : 1 }
                    unique: false
                    balancing: true
                    chunks:
                            cldRS0  1
                    { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : cldRS0 Timestamp(1, 0) 

More details:

  • All members (shard, config, and mongos) run the same MongoDB Community Server version: 4.0.10.
  • Recently I only restarted some nodes, one by one; there were no other changes.
  • There are 3 config servers in this cluster, and this message appears on all of them. The log in my earlier post is from "cldRS1/node04".

Now, asserts.user is growing on all nodes.

cldRS1:PRIMARY> db.serverStatus().asserts
{
        "regular" : 0,
        "warning" : 0,
        "msg" : 0,
        "user" : 107478,
        "rollovers" : 0
}


mongos> db.serverStatus().asserts
{
        "regular" : 0,
        "warning" : 0,
        "msg" : 0,
        "user" : 3103,
        "rollovers" : 0
}


xxxConfig:SECONDARY> db.serverStatus().asserts
{
        "regular" : 0,
        "warning" : 0,
        "msg" : 0,
        "user" : 11149,
        "rollovers" : 0
}
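
To quantify how fast the counter is growing on each node, a small mongo shell sketch like this could be run against each member (sleep() is a built-in shell helper; the one-minute window is an arbitrary choice):

    var before = db.serverStatus().asserts.user;
    sleep(60 * 1000);   // wait one minute
    var after = db.serverStatus().asserts.user;
    print("asserts.user growth per minute: " + (after - before));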

Best Regards,
Ruoxue Feng

Could you help me analyze this problem? (This is my second post.) Thank you very much.