Entering the danger zone: I have no choice but to edit the locks collection on the config db

Alternative title: Upgrading cluster 3.2 to 3.4: Part I: Can’t stop the balancer, or is it already stopped?
Sharded cluster details: 4 shards, 1 config server, 1 mongos
DB size: 33601GB

I'm in the process of upgrading a sharded cluster from 3.2.21 to 3.4. Before that, I need to convert the config servers from SCCC to a replica set (CSRS).[1] In order to do that, I first need to stop the balancer.[2]
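
For reference, the sequence I am following is roughly the standard one (just a sketch of the 3.2 shell helpers, run from the mongos):

mongos> sh.stopBalancer()       // disables the balancer in config.settings and waits for any in-progress round to finish
mongos> sh.getBalancerState()   // should report false once the balancer is disabled
mongos> sh.isBalancerRunning()  // should report false once no balancing round is in progress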

When I tried to stop the balancer I got this:

Waiting for active host 9e6d128f5e63:27017 to recognize new settings... (ping : Wed Mar 31 2021 13:27:04 GMT+0000 (UTC))
Waited for active ping to change for host 9e6d128f5e63:27017, a migration may be in progress or the host may be down.
Waiting for the balancer lock...
assert.soon failed, msg:Waited too long for lock balancer to unlock
doassert@src/mongo/shell/assert.js:18:14
assert.soon@src/mongo/shell/assert.js:202:13
sh.waitForDLock@src/mongo/shell/utils_sh.js:198:1
sh.waitForBalancerOff@src/mongo/shell/utils_sh.js:264:9
sh.waitForBalancer@src/mongo/shell/utils_sh.js:294:9
sh.stopBalancer@src/mongo/shell/utils_sh.js:161:5
@(shell):1:1
Balancer still may be active, you must manually verify this is not the case using the config.changelog collection.
2021-04-09T00:22:21.478+0000 E QUERY [thread1] Error: Error: assert.soon failed, msg:Waited too long for lock balancer to unlock :
sh.waitForBalancerOff@src/mongo/shell/utils_sh.js:268:15
sh.waitForBalancer@src/mongo/shell/utils_sh.js:294:9
sh.stopBalancer@src/mongo/shell/utils_sh.js:161:5
@(shell):1:1

mongos> sh.isBalancerRunning()
true

After that I checked the changelog collection, and the balancer did in fact stop. That was almost 2 months ago. Right now the changelog only has "multi-split" documents in the "what" key:

mongos> db.changelog.find({"what": {$ne: "multi-split"}}).count()
0
mongos> db.changelog.count()
23946
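
Just as a sketch of a more targeted check (the changelog is a capped collection, so very old entries rotate out anyway), looking for the most recent migration entries directly also comes back empty, consistent with the count above:

mongos> db.changelog.find({ "what": /^moveChunk/ }).sort({ "time": -1 }).limit(1).pretty()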

The host 9e6d128f5e63:27017 (which holds the lock) does not even exist anymore. Because the mongod was running inside a Docker container [A], the hostname 9e6d128f5e63 was the container ID, and it changed every time the server was restarted.

For those familiar with Docker: I have already fixed that issue by setting a fixed hostname, but the config database is still full of old entries (by the way, the official mongo Docker Hub repo does not mention any of this). For example, this is the "mongos" collection:

mongos> db.mongos.find()
{ "_id" : "a258014d8fed:27017", "ping" : ISODate("2016-09-08T18:01:26.348Z"), "up" : NumberLong(155659), "waiting" : true, "mongoVersion" : "3.2.8" }
{ "_id" : "a4f76de5eec0:27017", "ping" : ISODate("2016-11-20T02:57:28.059Z"), "up" : NumberLong(544670), "waiting" : true, "mongoVersion" : "3.2.8" }
{ "_id" : "a2f1cc2446c1:27017", "ping" : ISODate("2016-11-23T18:46:50.085Z"), "up" : NumberLong(315587), "waiting" : true, "mongoVersion" : "3.2.8" }
(...)
{ "_id" : "000467baccd7:27017", "ping" : ISODate("2018-11-22T02:06:05.416Z"), "up" : NumberLong(449029), "waiting" : true, "mongoVersion" : "3.2.19" }
{ "_id" : "9e6d128f5e63:27017", "ping" : ISODate("2021-03-31T13:27:04.488Z"), "up" : NumberLong(34958273), "waiting" : false, "mongoVersion" : "3.2.19" }
{ "_id" : "mongos:27017", "ping" : ISODate("2021-04-25T23:08:21.602Z"), "up" : NumberLong(10511), "waiting" : false, "mongoVersion" : "3.2.19" }
mongos> db.mongos.count()
89

Of the 89 documents, only the last one, "_id" : "mongos:27017", is real. All the rest are previous IDs of the same mongos container.
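
Just to quantify that (a quick sketch using the current mongos _id shown above):

mongos> db.mongos.find({ "_id": { $ne: "mongos:27017" } }).count()  // 88 stale entries left over from old container IDs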

The lockpings collection is almost clean. Since I have fixed the changing-hostname issue, it now has the shards plus 1 old document:

mongos> db.lockpings.find()
{ "_id" : "9e6d128f5e63:27017:1582238937:234309158", "ping" : ISODate("2021-04-09T00:03:33.849Z") }
{ "_id" : "aurelion:27037:1617974343:182807933", "ping" : ISODate("2021-05-27T19:49:57.848Z") }
{ "_id" : "codex-mirrow:37017:1617974687:1630394397", "ping" : ISODate("2021-05-27T19:49:57.856Z") }
{ "_id" : "codex-shard-3b:37018:1617975885:-683036752", "ping" : ISODate("2021-05-27T19:49:57.860Z") }
{ "_id" : "ubuntumal:37017:1618249908:-1111609285", "ping" : ISODate("2021-05-27T19:49:57.863Z") }
{ "_id" : "mongos:27017:1619381588:-804969375", "ping" : ISODate("2021-05-27T19:49:57.846Z") }

"ubuntumal", "codex-shard-3b", "codex-mirrow" and "aurelion" are the 4 shards. The only entry that is old data and no longer exists is 9e6d128f5e63:27017, but it seems it is not being deleted automatically because it holds the balancer lock (even though there has been no balancer activity in the changelog since I attempted to stop it 2 months ago).
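
The link is easy to see by cross-checking the two collections (a sketch; the full locks output is further down):

mongos> db.locks.findOne({ "_id": "balancer" }).process
9e6d128f5e63:27017:1582238937:234309158
// matches the _id of the stale lockpings document above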

I need to let MongoDB know that the balancer is in fact stopped, in spite of the lock, so that sh.isBalancerRunning() returns false and I can move on with the upgrade.

I think I need to edit the 'state' key of the 'balancer' document in the 'locks' collection of the config database (but I'm not really sure). Right now it has the value 2, which is the only value described in the documentation [3]. Maybe it should be 0, but I don't know.

This is what the locks collection looks like (check the last document):

configsvr> db.locks.find()
{ "_id" : "configUpgrade", "state" : 0, "who" : "a258014d8fed:27017:1473202024:1194521542:mongosMain:1804289383", "ts" : ObjectId("57cf4768b7dc19454fe95602"), "process" : "a258014d8fed:27017:1473202024:1194521542", "when" : ISODate("2016-09-06T22:47:04.820Z"), "why" : "initializing config database to new format v6" }
{ "_id" : "DB_files", "state" : 0, "who" : "a258014d8fed:27017:1473202024:1194521542:conn1:596516649", "ts" : ObjectId("57cf47adb7dc19454fe9560d"), "process" : "a258014d8fed:27017:1473202024:1194521542", "when" : ISODate("2016-09-06T22:48:13.861Z"), "why" : "enableSharding" }
{ "_id" : "DB_files.fs.chunks", "state" : 0, "who" : "ubuntumal:37017:1618234782:1939117970:conn6:691977887", "ts" : ObjectId("60745116a0119faa257ea5f7"), "process" : "ubuntumal:37017:1618234782:1939117970", "when" : ISODate("2021-04-12T13:54:30.894Z"), "why" : "splitting chunk [{ files_id: ObjectId('60744cb301a808a4c578bddb'), n: 37 }, { files_id: MaxKey, n: MaxKey }) in DB_files.fs.chunks" }
{ "_id" : "DB_files-movePrimary", "state" : 0, "who" : "9e6d128f5e63:27017:1552409205:279181987:conn10953768:312715989", "ts" : ObjectId("5cdf23d3e226ea53560379fb"), "process" : "9e6d128f5e63:27017:1552409205:279181987", "when" : ISODate("2019-05-17T21:12:51.602Z"), "why" : "Moving primary shard of DB_files" }
{ "_id" : "balancer", "state" : 2, "who" : "9e6d128f5e63:27017:1582238937:234309158:Balancer:2025600939", "ts" : ObjectId("60647abde2785e16702f6ef4"), "process" : "9e6d128f5e63:27017:1582238937:234309158", "when" : ISODate("2021-03-31T13:35:57.979Z"), "why" : "doing balance round" }

I know that in normal operation I'm not supposed to edit the config db, but I'm running out of ideas. I have a backup just in case, and I can temporarily stop all the shards if necessary.

Also, I'm using sh.isBalancerRunning() because db.adminCommand({balancerStatus: 1}) did not exist back in 3.2 (it was introduced in 3.4):

mongos> use admin
switched to db admin
mongos> db.adminCommand({balancerStatus:1})
{ "ok" : 0, "errmsg" : "no such cmd: balancerStatus", "code" : 59 }
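
As far as I can tell from the 3.2 shell helpers, the two sh.* checks look at different things, which is exactly where my cluster is stuck:

mongos> sh.getBalancerState()   // reads the "balancer" document in config.settings (the enabled/stopped flag)
mongos> sh.isBalancerRunning()  // reads the "balancer" document in config.locks (state > 0 means the lock is held)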

I was thinking of stopping the shards and executing something like this on the config db:

mongos> db.locks.update({"_id": "balancer", "state": 2}, {$set: {"state": 0}})

Currently I see no alternative. Because of the DB size, I want to avoid the mongodump/mongorestore strategy (I literally have no storage to do it). I still need to research whether 0 is the correct value for the 'state' key when the balancer is stopped. Any other ideas?
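
For what it is worth, every other document in the locks output above sits at "state" : 0 once its operation has finished, which suggests (though I have not confirmed it against the source) that 0 is the "unlocked" value. If I go ahead, the rough sequence I have in mind is just a sketch for now:

configsvr> db.locks.update({ "_id": "balancer", "state": 2 }, { $set: { "state": 0 } })  // only with everything else stopped and a backup at hand
mongos> sh.isBalancerRunning()  // expect false once the lock document reads state 0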

[A] The shard used to be inside a Docker Swarm (with this config: MongoDB sharding + Docker Swarm mode · GitHub), but now it is no longer inside Swarm and each container has the "hostname" setting fixed.

[1] https://docs.mongodb.com/manual/release-notes/3.4-upgrade-sharded-cluster/

[2] Upgrade Config Servers to Replica Set — MongoDB Manual

[3] Config Database — MongoDB Manual

Hi @javier

It’s been a while since you posted this issue. How are you doing with the solution? Is this still an issue?

I agree that making surgical changes to the config server is strongly discouraged, since you can easily make the cluster inaccessible due to differences between the config server's view of the cluster and reality. However, in some cases there might be little choice. If it has to be done, all operations on the cluster should be stopped while it is being done; otherwise there is a high risk of further damage.

Having said that, have you been successful in implementing the changes?

Best regards
Kevin

I have not actually attempted anything yet. I have in mind setting up a MongoDB 3.2 cluster in the cloud and inspecting "normal" locks collection documents to see the behavior before, during and after balancing. (I could also check the source code, but that may be too complicated.)
If my understanding is correct, the state code should be 0 when the balancer is not running; with that confirmed I would have the confidence to cross my fingers and execute the update command in production. If all goes smoothly, I will be able to formally stop the balancer. After that I should be able to start it again, leave it on for a few hours, and stop it once more. My hope is that the "who" key of the "_id" : "balancer" document will then change to a host that actually exists, and that the first document of the lockpings collection, which refers to a non-existent host, will just go away by itself. But I don't rule out deleting it manually. I'm talking about this document:

mongos> db.lockpings.find().limit(1).pretty()
{
        "_id" : "9e6d128f5e63:27017:1582238937:234309158",
        "ping" : ISODate("2021-04-09T00:03:33.849Z")
}
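
If that stale lockpings document does not go away by itself, the manual cleanup I have in mind is something like this (again just a sketch, to be run only with the cluster quiesced and a backup at hand):

mongos> db.lockpings.remove({ "_id": "9e6d128f5e63:27017:1582238937:234309158" })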