Divisible Jumbo Chunks?

I’m a bit confused about jumbo chunks.

https://docs.mongodb.com/manual/core/sharding-data-partitioning/#indivisible-chunks says

"Since the chunk cannot split, it continues to grow beyond the chunk size, becoming a jumbo chunk."

To me that means a chunk only gets the jumbo flag if MongoDB cannot split it, since otherwise it would simply be split; so a jumbo chunk should contain only a single shard key value, or it would have been split.

But https://docs.mongodb.com/manual/tutorial/clear-jumbo-flag/#divisible-chunks shows a jumbo chunk with the shard key range { "x" : 2 } -->> { "x" : 4 }, which could be split at { "x" : 3 }.

But if it can be split, how did it become a jumbo chunk in the first place instead of being split automatically?

Ah, got it.
You first make the chunk jumbo by putting too much data with a single shard key value into it.
Then you add documents with a different shard key value that falls within the range of that chunk; for example, if the boundaries of the jumbo chunk are foo and goo and you insert something with fooo, you end up with a divisible jumbo chunk that you can split manually.
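
In mongo shell terms, that sequence would look roughly like this (a sketch only; the collection and shard key field names are the ones from my test further down, and the payload field is made up):

// The chunk [ { test: "foo" }, { test: "goo" } ) has already grown jumbo
// and so far contains only documents with shard key value "foo".

// Insert a document whose shard key value lies inside the chunk's range
// but differs from "foo", so a valid split point now exists:
db.getSiblingDB("test").test.insert({ test: "fooo", payload: "..." })

// The jumbo chunk is now divisible and can be split manually at that value:
sh.splitAt("test.test", { test: "fooo" })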

But one more question: when actually trying this out, sh.status() didn’t mark my chunk as jumbo even though it is:


The foo -->> fooo chunk is effectively jumbo (oversized and indivisible), but it is not flagged as such.
Did I get something wrong? Shouldn’t it be marked as jumbo?

Here are the actual chunk sizes:

Chunk: test.test-test_MinKey has a size of 196877, and includes 1 objects (took 0ms)
Chunk: test.test-test_"bar" has a size of 196877, and includes 1 objects (took 0ms)
Chunk: test.test-test_"baz" has a size of 196877, and includes 1 objects (took 0ms)
Chunk: test.test-test_"foo" has a size of 3937540, and includes 20 objects (took 0ms)
Chunk: test.test-test_"fooo" has a size of 1181268, and includes 6 objects (took 0ms)
Chunk: test.test-test_"goo" has a size of 590631, and includes 3 objects (took 0ms)
Chunk: test.test-test_"gooo" has a size of 590637, and includes 3 objects (took 0ms)

The chunk size is set to 1 MB.
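
(For reference, a listing like the one above can be produced roughly like this, by running the dataSize command over each chunk's bounds; this is a sketch, not necessarily the exact script used:)

var ns = "test.test", keyPattern = { test: 1 };
db.getSiblingDB("config").chunks.find({ ns: ns }).forEach(function (chunk) {
    // dataSize reports the size and document count of the given key range
    var res = db.getSiblingDB("test").runCommand({
        dataSize: ns,
        keyPattern: keyPattern,
        min: chunk.min,
        max: chunk.max
    });
    print("Chunk: " + chunk._id + " has a size of " + res.size +
          ", and includes " + res.numObjects + " objects (took " + res.millis + "ms)");
});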

@Vampire0

Well, actually there are a number of questions here that I’d need more information on. First of all, when you run sh.status(), what does it report about the balancer? Is the balancer running, and does it report any failed rounds? If not, I’d suggest running through the set of tests shown here to make sure everything is working correctly. If it is, then from what you’ve shown here it looks to me like the given chunk should be split automatically the next time the balancer runs. Remember that a chunk isn’t actually “jumbo” until it cannot be split automatically. Good luck.
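
(For reference, the quick way to check the balancer from a mongos is roughly:)

sh.getBalancerState()     // is balancing enabled at all?
sh.isBalancerRunning()    // is a balancing round in progress right now?
sh.status()               // includes "Failed balancer rounds in last 5 attempts"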

Assuming

MongoDB Enterprise mongos> sh.getBalancerHost()
getBalancerHost is deprecated starting version 3.4. The balancer is running on the config server primary host.

means I should look at the config server, and given that this is my setup:

$ ps aux | grep mongo
vagrant  12792  1.2  1.2 480204 51948 ?        Sl   Apr25   9:49 mongod --dbpath /home/vagrant/data/shard01/db --logpath /home/vagrant/data/shard01/mongod.log --port 27018 --fork --shardsvr --wiredTigerCacheSizeGB 1
vagrant  12813  1.2  1.0 476892 43232 ?        Sl   Apr25   9:45 mongod --dbpath /home/vagrant/data/shard02/db --logpath /home/vagrant/data/shard02/mongod.log --port 27019 --fork --shardsvr --wiredTigerCacheSizeGB 1
vagrant  12834  1.2  1.0 477036 42232 ?        Sl   Apr25   9:43 mongod --dbpath /home/vagrant/data/shard03/db --logpath /home/vagrant/data/shard03/mongod.log --port 27020 --fork --shardsvr --wiredTigerCacheSizeGB 1
vagrant  12855  1.8  1.3 924196 53396 ?        Sl   Apr25  14:58 mongod --replSet configRepl --dbpath /home/vagrant/data/configRepl/rs1/db --logpath /home/vagrant/data/configRepl/rs1/mongod.log --port 27021 --fork --configsvr --wiredTigerCacheSizeGB 1
vagrant  12956  0.7  0.4 321312 17436 ?        Sl   Apr25   6:04 mongos --logpath /home/vagrant/data/mongos.log --port 27017 --configdb configRepl/localhost:27021 --fork

here is the result from the config server (port 27021):

$ mongo --port 27021 --eval 'sh.status()'
MongoDB shell version v3.4.2
connecting to: mongodb://127.0.0.1:27021/
MongoDB server version: 3.4.2
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5cc1f0c77d35b464808374e5")
}
  shards:
        {  "_id" : "shard01",  "host" : "localhost:27018",  "state" : 1 }
        {  "_id" : "shard02",  "host" : "localhost:27019",  "state" : 1 }
        {  "_id" : "shard03",  "host" : "localhost:27020",  "state" : 1 }
  active mongoses:
        "3.4.2" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  yes
                Balancer lock taken at Thu Apr 25 2019 17:39:19 GMT+0000 (UTC) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                4 : Success
  databases:
        {  "_id" : "test",  "primary" : "shard01",  "partitioned" : true }
                test.test
                        shard key: { "test" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard01 3
                                shard02 2
                                shard03 2
                        { "test" : { "$minKey" : 1 } } -->> { "test" : "bar" } on : shard03 Timestamp(5, 0)
                        { "test" : "bar" } -->> { "test" : "baz" } on : shard02 Timestamp(5, 1)
                        { "test" : "baz" } -->> { "test" : "foo" } on : shard03 Timestamp(3, 0)
                        { "test" : "foo" } -->> { "test" : "fooo" } on : shard01 Timestamp(5, 2)
                        { "test" : "fooo" } -->> { "test" : "goo" } on : shard01 Timestamp(5, 3)
                        { "test" : "goo" } -->> { "test" : "gooo" } on : shard01 Timestamp(3, 3)
                        { "test" : "gooo" } -->> { "test" : { "$maxKey" : 1 } } on : shard02 Timestamp(4, 0)

According to the sh.status() output I’d say yes, the balancer is running.
It also split and migrated other chunks successfully.
Where else should I look?
Does this help?

$ find /home/vagrant/data/ -type f -name '*.log' -exec grep -i balanc {} +
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.814+0000 I SHARDING [rsSync] distributed lock 'balancer' acquired for 'CSRS Balancer', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.818+0000 I SHARDING [Balancer] CSRS balancer is starting
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.843+0000 I SHARDING [Balancer] lock 'balancer' successfully forced
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.844+0000 I SHARDING [Balancer] distributed lock 'balancer' acquired, ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.899+0000 I SHARDING [Balancer] CSRS balancer thread is recovering
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.899+0000 I SHARDING [Balancer] CSRS balancer thread is recovered
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:41:40.113+0000 I SHARDING [Balancer] MaxChunkSize changing from 64MB to 1MB
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:45:00.411+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 2 based on: (empty)
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:45:00.411+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:11.998+0000 I SHARDING [Balancer] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:12.271+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:00:12.271+0000-5cc1f5ac7d35b46480837b72", server: "m312", clientAddr: "", time: new Date(1556215212271), what: "balancer.round", ns: "", details: { executionTimeMillis: 217, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:13.283+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 5 based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:13.284+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.545+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 8 based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.546+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.585+0000 I SHARDING [Balancer] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.660+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:18:55.660+0000-5cc1fa0f7d35b46480838156", server: "m312", clientAddr: "", time: new Date(1556216335660), what: "balancer.round", ns: "", details: { executionTimeMillis: 135, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:56.674+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 9 based on: 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:56.674+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:25:07.374+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 10 based on: 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:25:07.374+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 5|3||5cc1f21c7d2ebf0a897998eb

Or maybe this?

$ find /home/vagrant/data/ -type f -name '*.log' -exec grep -i chunk {} +
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.022+0000 I INDEX    [rsSync] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.028+0000 I COMMAND  [rsSync] command config.$cmd command: createIndexes { createIndexes: "chunks", indexes: [ { name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ] } numYields:0 reslen:193 locks:{ Global: { acquireCount: { r: 21, w: 5, R: 1, W: 1 } }, Database: { acquireCount: { r: 7, w: 3, W: 2 } }, Collection: { acquireCount: { r: 6, w: 1 } }, Metadata: { acquireCount: { w: 3 } }, oplog: { acquireCount: { r: 1, w: 3 } } } protocol:op_query 115ms
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.057+0000 I INDEX    [rsSync] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 },name: "ns_1_shard_1_min_1", ns: "config.chunks" }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:19.097+0000 I INDEX    [rsSync] build index on: config.chunks properties: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name:"ns_1_lastmod_1", ns: "config.chunks" }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:41:40.113+0000 I SHARDING [Balancer] MaxChunkSize changing from 64MB to 1MB
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:45:00.411+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 2 based on: (empty)
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:45:00.411+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.096+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.096+0000-5cc1f5a47d35b46480837b19", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215204096), what: "multi-split", ns: "test.test", details: { before: { min: { test: MinKey }, max: { test: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 1, of: 3, chunk: { min: { test: MinKey }, max: { test: "baz" }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.096+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.096+0000-5cc1f5a47d35b46480837b1b", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215204096), what: "multi-split", ns: "test.test", details: { before: { min: { test: MinKey }, max: { test: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 2, of: 3, chunk: { min: { test: "baz" }, max: { test: "foo" }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.097+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.097+0000-5cc1f5a47d35b46480837b1d", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215204097), what: "multi-split", ns: "test.test", details: { before: { min: { test: MinKey }, max: { test: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 3, of: 3, chunk: { min: { test: "foo" }, max: { test: MaxKey }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.122+0000 I SHARDING [conn40] ChunkManager loading chunks for test.test sequenceNumber: 3 based on: 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.122+0000 I SHARDING [conn40] ChunkManager load took 0 ms and found version 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.134+0000 I SHARDING [conn40] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.328+0000 I SHARDING [conn40] ChunkManager loading chunks for test.test sequenceNumber: 4 based on: 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.328+0000 I SHARDING [conn40] ChunkManager load took 0 ms and found version 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:04.332+0000 I COMMAND  [conn40] command admin.$cmd appName: "MongoDB Shell" command: _configsvrMoveChunk { _configsvrMoveChunk: 1, _id: "test.test-test_MinKey", ns: "test.test", min: { test: MinKey }, max: { test: "baz" }, shard: "shard01", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb'), writeConcern: { w: "majority", wtimeout: 15000 } } numYields:0 reslen:308 locks:{ Global: { acquireCount: { r: 26, w: 6 } }, Database: { acquireCount: { r: 10, w: 6 } }, Collection: { acquireCount: { r: 10, w: 3 } }, Metadata: { acquireCount: { w: 3 } }, oplog: { acquireCount: { w: 3 } } } protocol:op_command 219ms
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:11.998+0000 I SHARDING [Balancer] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:12.271+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:00:12.271+0000-5cc1f5ac7d35b46480837b72", server: "m312", clientAddr: "", time: new Date(1556215212271), what: "balancer.round", ns: "", details: { executionTimeMillis: 217, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:13.283+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 5 based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:13.284+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.956+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:39.956+0000-5cc1f7a77d35b46480837e3e", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215719956), what: "multi-split", ns: "test.test", details: { before: { min: { test: "foo" }, max: { test: MaxKey }, lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 1, of: 3, chunk: { min: { test: "foo" }, max: { test: "goo" }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.956+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:39.956+0000-5cc1f7a77d35b46480837e40", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215719956), what: "multi-split", ns: "test.test", details: { before: { min: { test: "foo" }, max: { test: MaxKey }, lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 2, of: 3, chunk: { min: { test: "goo" }, max: { test: "gooo" }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.957+0000 I SHARDING [conn11] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:39.957+0000-5cc1f7a77d35b46480837e42", server: "m312", clientAddr: "127.0.0.1:40531", time: new Date(1556215719957), what: "multi-split", ns: "test.test", details: { before: { min: { test: "foo" }, max: { test: MaxKey }, lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') }, number: 3, of: 3, chunk: { min: { test: "gooo" }, max: { test: MaxKey }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('5cc1f21c7d2ebf0a897998eb') } } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.981+0000 I SHARDING [conn40] ChunkManager loading chunks for test.test sequenceNumber: 6 based on: 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.982+0000 I SHARDING [conn40] ChunkManager load took 0 ms and found version 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:39.991+0000 I SHARDING [conn40] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:40.051+0000 I SHARDING [conn40] ChunkManager loading chunks for test.test sequenceNumber: 7 based on: 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:08:40.052+0000 I SHARDING [conn40] ChunkManager load took 0 ms and found version 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.545+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 8 based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.546+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.585+0000 I SHARDING [Balancer] distributed lock 'test.test' acquired for 'Migrating chunk(s) in collection test.test', ts : 5cc1f0c57d35b464808374c2
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.660+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:18:55.660+0000-5cc1fa0f7d35b46480838156", server: "m312", clientAddr: "", time: new Date(1556216335660), what: "balancer.round", ns: "", details: { executionTimeMillis: 135, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:56.674+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 9 based on: 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:56.674+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:25:07.374+0000 I SHARDING [Balancer] ChunkManager loading chunks for test.test sequenceNumber: 10 based on: 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:25:07.374+0000 I SHARDING [Balancer] ChunkManager load took 0 ms and found version 5|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:00:12.016+0000 I SHARDING [conn10] MetadataLoader loading chunks for test.test based on: (empty)
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:00:12.018+0000 I SHARDING [migrateThread] Starting receiving end of migration of chunk { test: "baz" } -> { test: "foo" } for collection test.test from localhost:27018 at epoch 5cc1f21c7d2ebf0a897998eb with session id shard01_shard03_5cc1f5acb15e94cf35372e5f
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:00:12.151+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:12.151+0000-5cc1f5ac7547feefd2cd633a", server: "m312", clientAddr: "", time: new Date(1556215212151), what: "moveChunk.to", ns: "test.test", details: { min: { test: "baz" }, max: { test: "foo" }, step 1 of 6: 96, step 2 of 6: 1, step 3 of 6: 2, step 4 of 6: 0, step 5 of 6: 31, step 6 of 6: 0, note: "success" } }
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:03:26.038+0000 I SHARDING [conn11] MetadataLoader loading chunks for test.test based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:18:55.594+0000 I SHARDING [conn15] MetadataLoader loading chunks for test.test based on: 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:18:55.595+0000 I SHARDING [migrateThread] Starting receiving end of migration of chunk { test: MinKey } -> { test: "bar" } for collection test.test from localhost:27019 at epoch 5cc1f21c7d2ebf0a897998eb with session id shard02_shard03_5cc1fa0f50f841ae6c27fcce
/home/vagrant/data/shard03/mongod.log:2019-04-25T18:18:55.610+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:18:55.610+0000-5cc1fa0f7547feefd2cd65b0", server: "m312", clientAddr: "", time: new Date(1556216335610), what: "moveChunk.to", ns: "test.test", details: { min: { test: MinKey }, max: { test: "bar" }, step 1 of 6: 1, step 2 of 6: 0, step 3 of 6: 1, step 4 of 6: 0, step 5 of 6: 11, step 6 of 6: 0, note: "success" } }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:00:04.154+0000 I SHARDING [conn11] MetadataLoader loading chunks for test.test based on: (empty)
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:00:04.156+0000 I SHARDING [migrateThread] Starting receiving end of migration of chunk { test: MinKey } -> { test: "baz" } for collection test.test from localhost:27018 at epoch 5cc1f21c7d2ebf0a897998eb with session id shard01_shard02_5cc1f5a4b15e94cf35372e45
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:00:04.300+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.300+0000-5cc1f5a450f841ae6c27fa2e", server: "m312", clientAddr: "", time: new Date(1556215204300), what: "moveChunk.to", ns: "test.test", details: { min: { test: MinKey }, max: { test: "baz" }, step 1 of 6: 86, step 2 of 6: 0, step 3 of 6: 0, step 4 of 6: 0, step 5 of 6: 55, step 6 of 6: 0, note: "success" } }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:00:06.865+0000 I SHARDING [conn13] MetadataLoader loading chunks for test.test based on: 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:08:40.008+0000 I SHARDING [conn19] MetadataLoader loading chunks for test.test based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:08:40.010+0000 I SHARDING [migrateThread] Starting receiving end of migration of chunk { test: "gooo" } -> { test: MaxKey } for collection test.test from localhost:27018 at epoch 5cc1f21c7d2ebf0a897998eb with session id shard01_shard02_5cc1f7a8b15e94cf35372fc0
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:08:40.022+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:40.022+0000-5cc1f7a850f841ae6c27fb71", server: "m312", clientAddr: "", time: new Date(1556215720022), what: "moveChunk.to", ns: "test.test", details: { min: { test: "gooo" }, max: { test: MaxKey }, step 1 of 6: 0, step 2 of 6: 0, step 3 of 6: 1, step 4 of 6: 0, step 5 of 6: 10, step 6 of 6: 0, note: "success" } }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:08:42.601+0000 I SHARDING [conn13] MetadataLoader loading chunks for test.test based on: 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:49.272+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : MinKey } -->> { : "baz" }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:49.273+0000 I SHARDING [conn21] received splitChunk request: { splitChunk: "test.test", configdb: "configRepl/localhost:27021", from: "shard02", keyPattern: { test: 1.0 }, shardVersion: [ Timestamp 4000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], min: { test: MinKey }, max: { test: "baz" }, splitKeys: [ { test: "bar" } ] }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:49.279+0000 I SHARDING [conn21] distributed lock 'test.test' acquired for 'splitting chunk [{ test: MinKey }, { test: "baz" }) in test.test', ts : 5cc1fa0950f841ae6c27fcc6
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:49.279+0000 I SHARDING [conn21] MetadataLoader loading chunks for test.test based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:49.288+0000 I SHARDING [conn21] MetadataLoader loading chunks for test.test based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.586+0000 I SHARDING [conn9] Starting chunk migration ns: test.test, [{ test: MinKey }, { test: "bar" }), fromShard: shard02, toShard:shard03 with expected collection version 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.586+0000 I SHARDING [conn9] MetadataLoader loading chunks for test.test based on: 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.587+0000 I SHARDING [conn9] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:18:55.587+0000-5cc1fa0f50f841ae6c27fccd", server: "m312", clientAddr: "127.0.0.1:53379", time: new Date(1556216335587), what: "moveChunk.start", ns: "test.test", details: { min: { test: MinKey }, max: { test: "bar" }, from: "shard02", to: "shard03" } }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.598+0000 I SHARDING [conn9] moveChunk data transfer progress: { active: true, sessionId: "shard02_shard03_5cc1fa0f50f841ae6c27fcce", ns: "test.test", from: "localhost:27019", min: { test: MinKey }, max: { test: "bar" }, shardKeyPattern: { test: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.600+0000 I SHARDING [conn9] moveChunk data transfer progress: { active: true, sessionId: "shard02_shard03_5cc1fa0f50f841ae6c27fcce", ns: "test.test", from: "localhost:27019", min: { test: MinKey }, max: { test: "bar" }, shardKeyPattern: { test: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 196877, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.625+0000 I SHARDING [conn9] MetadataLoader loading chunks for test.test based on: 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.626+0000 I SHARDING [conn9] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:18:55.626+0000-5cc1fa0f50f841ae6c27fcda", server: "m312", clientAddr: "127.0.0.1:53379", time: new Date(1556216335626), what: "moveChunk.commit", ns: "test.test", details: { min: { test: MinKey }, max: { test: "bar" }, from: "shard02", to: "shard03" } }
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.631+0000 I SHARDING [conn9] forking for cleanup of chunk data
/home/vagrant/data/shard02/mongod.log:2019-04-25T18:18:55.632+0000 I SHARDING [conn9] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:18:55.632+0000-5cc1fa0f50f841ae6c27fcdb", server: "m312", clientAddr: "127.0.0.1:53379", time: new Date(1556216335632), what: "moveChunk.from", ns: "test.test", details: { min: { test: MinKey }, max: { test: "bar" }, step 1 of7: 0, step 2 of 7: 1, step 3 of 7: 7, step 4 of 7: 5, step 5 of 7: 18, step 6 of 7: 11, step 7 of 7: 0, to: "shard03", from: "shard02", note: "success" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:45:00.105+0000 I SHARDING [conn9] MetadataLoader loading chunks for test.test based on: (empty)
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:51:47.272+0000 I SHARDING [conn12] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:51:51.911+0000 I SHARDING [conn12] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:53:37.324+0000 I SHARDING [conn12] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:54:06.203+0000 I SHARDING [conn12] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:54:09.106+0000 I SHARDING [conn12] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:59:55.636+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:59:58.212+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:00.957+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.080+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : MinKey } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.082+0000 I SHARDING [conn14] received splitChunk request: { splitChunk: "test.test", configdb: "configRepl/localhost:27021", from: "shard01", keyPattern: { test: 1.0 }, shardVersion: [ Timestamp 1000|0, ObjectId('5cc1f21c7d2ebf0a897998eb') ], min: { test: MinKey }, max: { test: MaxKey }, splitKeys: [ { test: "baz" }, { test: "foo" } ] }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.093+0000 I SHARDING [conn14] distributed lock 'test.test' acquired for 'splitting chunk [{ test: MinKey }, { test: MaxKey }) in test.test', ts : 5cc1f5a4b15e94cf35372e40
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.093+0000 I SHARDING [conn14] MetadataLoader loading chunks for test.test based on: 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.101+0000 I SHARDING [conn14] MetadataLoader loading chunks for test.test based on: 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.136+0000 I SHARDING [conn8] Starting chunk migration ns: test.test, [{ test: MinKey }, { test: "baz" }), fromShard: shard01, toShard:shard02 with expected collection version 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.136+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.140+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.140+0000-5cc1f5a4b15e94cf35372e44", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215204140), what: "moveChunk.start", ns: "test.test", details: { min: { test: MinKey }, max: { test: "baz" }, from: "shard01", to: "shard02" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.158+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.160+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.165+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.175+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.192+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.226+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.291+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f5a4b15e94cf35372e45", ns: "test.test", from: "localhost:27018", min: { test: MinKey }, max: { test: "baz" }, shardKeyPattern: { test: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 196877, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.311+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.313+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.313+0000-5cc1f5a4b15e94cf35372e56", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215204313), what: "moveChunk.commit", ns: "test.test", details: { min: { test: MinKey }, max: { test: "baz" }, from: "shard01", to: "shard02" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.318+0000 I SHARDING [conn8] forking for cleanup of chunk data
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.318+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:04.318+0000-5cc1f5a4b15e94cf35372e57", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215204318), what: "moveChunk.from", ns: "test.test", details: { min: { test: MinKey }, max: { test: "baz" }, step 1 of7: 0, step 2 of 7: 2, step 3 of 7: 17, step 4 of 7: 134, step 5 of 7: 14, step 6 of 7: 12, step 7 of 7: 0, to: "shard02", from: "shard01", note: "success" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:04.322+0000 I COMMAND  [conn8] command admin.$cmd appName: "MongoDB Shell" command: moveChunk { moveChunk: "test.test", shardVersion: [ Timestamp 1000|3, ObjectId('5cc1f21c7d2ebf0a897998eb') ], epoch: ObjectId('5cc1f21c7d2ebf0a897998eb'), configdb: "configRepl/localhost:27021", fromShard: "shard01", toShard: "shard02", min: { test: MinKey }, max: { test: "baz" }, chunkVersion: [ Timestamp 1000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], maxChunkSizeBytes: 1048576, waitForDelete: false, takeDistLock: false } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 19, w: 7 } }, Database: { acquireCount: { r: 6, w: 5, W: 2 } }, Collection: { acquireCount: { r: 6, W: 5 } } } protocol:op_command 187ms
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:11.999+0000 I SHARDING [conn8] Starting chunk migration ns: test.test, [{ test: "baz" }, { test: "foo" }), fromShard: shard01, toShard: shard03 with expected collection version 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:11.999+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.004+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:12.004+0000-5cc1f5abb15e94cf35372e5e", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215212004), what: "moveChunk.start", ns: "test.test", details: { min: { test: "baz" }, max: { test: "foo" }, from: "shard01", to: "shard03" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.020+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.022+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.026+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.035+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.052+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.085+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0}, ok: 1.0 } mem used: 0 documents remaining to clone: 1
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.149+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard03_5cc1f5acb15e94cf35372e5f", ns: "test.test", from: "localhost:27018", min: { test: "baz" }, max: { test: "foo" }, shardKeyPattern: { test: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 196877, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.165+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.166+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:12.166+0000-5cc1f5acb15e94cf35372e6e", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215212166), what: "moveChunk.commit", ns: "test.test", details: { min: { test: "baz" }, max: { test: "foo" }, from: "shard01", to: "shard03" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.176+0000 I SHARDING [conn8] forking for cleanup of chunk data
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.176+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:00:12.176+0000-5cc1f5acb15e94cf35372e6f", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215212176), what: "moveChunk.from", ns: "test.test", details: { min: { test: "baz" }, max: { test: "foo" }, step 1 of 7: 0, step 2 of 7: 4, step 3 of 7: 14, step 4 of 7: 131, step 5 of 7: 9, step 6 of 7: 16, step 7 of 7: 0, to: "shard03", from: "shard01", note: "success" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:00:12.181+0000 I COMMAND  [conn8] command admin.$cmd appName: "MongoDB Shell" command: moveChunk { moveChunk: "test.test", shardVersion: [ Timestamp 2000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], epoch: ObjectId('5cc1f21c7d2ebf0a897998eb'), configdb: "configRepl/localhost:27021", fromShard: "shard01", toShard: "shard03", min: { test: "baz" }, max: { test: "foo" }, chunkVersion: [ Timestamp 2000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], maxChunkSizeBytes: 1048576, waitForDelete: false, takeDistLock: false } numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 19, w: 7 } }, Database: { acquireCount: { r: 6, w: 5, W: 2 } }, Collection: { acquireCount: { r: 6, W: 5 } } } protocol:op_command 182ms
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:07:01.424+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:07:37.644+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:10.666+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.938+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : MaxKey }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.939+0000 I SHARDING [conn14] received splitChunk request: { splitChunk: "test.test", configdb: "configRepl/localhost:27021", from: "shard01", keyPattern: { test: 1.0 }, shardVersion: [ Timestamp 3000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], min: { test: "foo" }, max: { test: MaxKey }, splitKeys: [ { test: "goo" }, { test:"gooo" } ] }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.953+0000 I SHARDING [conn14] distributed lock 'test.test' acquired for 'splitting chunk [{ test: "foo" }, { test: MaxKey }) in test.test', ts : 5cc1f7a7b15e94cf35372fbb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.953+0000 I SHARDING [conn14] MetadataLoader loading chunks for test.test based on: 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.961+0000 I SHARDING [conn14] MetadataLoader loading chunks for test.test based on: 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.992+0000 I SHARDING [conn8] Starting chunk migration ns: test.test, [{ test: "gooo" }, { test: MaxKey }), fromShard: shard01, toShard: shard02 with expected collection version 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.992+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:39.994+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:39.994+0000-5cc1f7a7b15e94cf35372fbf", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215719994), what: "moveChunk.start", ns: "test.test", details: { min: { test: "gooo" }, max: { test: MaxKey }, from: "shard01", to: "shard02" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.011+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f7a8b15e94cf35372fc0", ns: "test.test", from: "localhost:27018", min: { test: "gooo" }, max: { test: MaxKey }, shardKeyPattern: { test: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady:0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.014+0000 I SHARDING [conn8] moveChunk data transfer progress: { active: true, sessionId: "shard01_shard02_5cc1f7a8b15e94cf35372fc0", ns: "test.test", from: "localhost:27018", min: { test: "gooo" }, max: { test: MaxKey }, shardKeyPattern: { test: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 196878, catchup: 0, steady: 0 }, ok: 1.0 } mem used: 0 documents remaining to clone: 0
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.033+0000 I SHARDING [conn8] MetadataLoader loading chunks for test.test based on: 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.034+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:40.034+0000-5cc1f7a8b15e94cf35372fca", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215720034), what: "moveChunk.commit", ns: "test.test", details: { min: { test: "gooo" }, max: { test: MaxKey }, from: "shard01", to: "shard02" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.041+0000 I SHARDING [conn8] forking for cleanup of chunk data
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:08:40.042+0000 I SHARDING [conn8] about to log metadata event into changelog: { _id: "m312-2019-04-25T18:08:40.042+0000-5cc1f7a8b15e94cf35372fcb", server: "m312", clientAddr: "127.0.0.1:53555", time: new Date(1556215720042), what: "moveChunk.from", ns: "test.test", details: { min: { test: "gooo" }, max: { test: MaxKey }, step 1 of 7: 0, step 2 of 7: 1, step 3 of 7: 16, step 4 of 7: 4, step 5 of 7: 13, step 6 of 7: 14, step 7 of 7: 0, to: "shard02", from: "shard01", note: "success" } }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:09:55.347+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:09:56.098+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:09:56.719+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:10:25.846+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:10:26.371+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:10:39.447+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:10:41.891+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:10:42.588+0000 I SHARDING [conn14] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:18:38.491+0000 I SHARDING [conn20] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:24:43.010+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:24:44.305+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:24:44.937+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:02.495+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : "foo" } -->> { : "goo" }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:02.496+0000 I SHARDING [conn21] received splitChunk request: { splitChunk: "test.test", configdb: "configRepl/localhost:27021", from: "shard01", keyPattern: { test: 1.0 }, shardVersion: [ Timestamp 5000|1, ObjectId('5cc1f21c7d2ebf0a897998eb') ], min: { test: "foo" }, max: { test: "goo" }, splitKeys: [ { test: "fooo" } ] }
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:02.504+0000 I SHARDING [conn21] distributed lock 'test.test' acquired for 'splitting chunk [{ test: "foo" }, { test: "goo" }) in test.test', ts : 5cc1fb7eb15e94cf35373201
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:02.504+0000 I SHARDING [conn21] MetadataLoader loading chunks for test.test based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:02.510+0000 I SHARDING [conn21] MetadataLoader loading chunks for test.test based on: 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/shard01/mongod.log:2019-04-25T18:25:07.762+0000 I SHARDING [conn21] request split points lookup for chunk test.test { : "foo" } -->> { : "fooo" }
/home/vagrant/data/mongos.log:2019-04-25T17:41:40.035+0000 I SHARDING [Uptime reporter] MaxChunkSize changing from 64MB to 1MB
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.004+0000 I SHARDING [conn9] about to log metadata event into changelog: { _id: "m312-2019-04-25T17:45:00.004+0000-5cc1f21c7d2ebf0a897998ea", server: "m312", clientAddr: "127.0.0.1:57352", time: new Date(1556214300004), what: "shardCollection.start", ns: "test.test", details: { shardKey: { test: 1.0 }, collection: "test.test", primary: "shard01:localhost:27018", initShards: [], numChunks: 1 } }
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.011+0000 I SHARDING [conn9] going to create 1 chunk(s) for: test.test using new epoch 5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.018+0000 I SHARDING [conn9] ChunkManager loading chunks for test.test sequenceNumber: 2 based on: (empty)
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.019+0000 I SHARDING [conn9] ChunkManager load took 0 ms and found version 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.118+0000 I SHARDING [conn9] ChunkManager loading chunks for test.test sequenceNumber: 3 based on: (empty)
/home/vagrant/data/mongos.log:2019-04-25T17:45:00.119+0000 I SHARDING [conn9] ChunkManager load took 0 ms and found version 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:00:04.109+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 4 based on: 1|0||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:00:04.110+0000 I SHARDING [conn10] ChunkManager load took 1 ms and found version 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:00:04.334+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 5 based on: 1|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:00:04.334+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:00:06.869+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 6 based on: (empty)
/home/vagrant/data/mongos.log:2019-04-25T18:00:06.870+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:02:04.475+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 7 based on: 2|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:02:04.475+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:08:39.972+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 8 based on: 3|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:08:39.972+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:08:40.060+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 9 based on: 3|4||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:08:40.061+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:18:38.489+0000 I COMMAND  [conn10] Splitting chunk [{ test: "foo" }, { test: "goo" }) in collection test.test on shard shard01
/home/vagrant/data/mongos.log:2019-04-25T18:18:49.270+0000 I COMMAND  [conn10] Splitting chunk [{ test: MinKey }, { test: "baz" }) in collection test.test on shard shard02
/home/vagrant/data/mongos.log:2019-04-25T18:18:49.295+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 10 based on: 4|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:18:49.296+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:25:02.494+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 11 based on: 4|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:25:02.495+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:25:02.495+0000 I COMMAND  [conn10] Splitting chunk [{ test: "foo" }, { test: "goo" }) in collection test.test on shard shard01
/home/vagrant/data/mongos.log:2019-04-25T18:25:02.517+0000 I SHARDING [conn10] ChunkManager loading chunks for test.test sequenceNumber: 12 based on: 5|1||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:25:02.517+0000 I SHARDING [conn10] ChunkManager load took 0 ms and found version 5|3||5cc1f21c7d2ebf0a897998eb
/home/vagrant/data/mongos.log:2019-04-25T18:25:07.761+0000 I COMMAND  [conn10] Splitting chunk [{ test: "foo" }, { test: "fooo" }) in collection test.test on shard shard01

According to the sh.status() output I’d say no, there are no failed balancer rounds.
Where else should I look?
Does this help?

$ find /home/vagrant/data/ -type f -name '*.log' -exec grep -i 'failed\|round' {} +
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:17.339+0000 I SHARDING [shard registry reload] Periodic reload of shard registry failed  :: caused by :: 134 could not get updated shard list from config server due to Read concern majority reads are currently not possible.; will retry after 30s
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T17:39:47.352+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:00:12.271+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:00:12.271+0000-5cc1f5ac7d35b46480837b72", server: "m312", clientAddr: "", time: new Date(1556215212271), what: "balancer.round", ns: "", details: { executionTimeMillis: 217, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/configRepl/rs1/mongod.log:2019-04-25T18:18:55.660+0000 I SHARDING [Balancer] about to log metadata event into actionlog: { _id: "m312-2019-04-25T18:18:55.660+0000-5cc1fa0f7d35b46480838156", server: "m312", clientAddr: "", time: new Date(1556216335660), what: "balancer.round", ns: "", details: { executionTimeMillis: 135, errorOccured: false, candidateChunks: 1, chunksMoved: 1 } }
/home/vagrant/data/shard03/mongod.log:2019-04-25T17:39:20.378+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
/home/vagrant/data/shard02/mongod.log:2019-04-25T17:39:20.267+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
/home/vagrant/data/shard01/mongod.log:2019-04-25T17:39:20.050+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
/home/vagrant/data/mongos.log:2019-04-25T17:39:19.836+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document

As for whether everything is working correctly, it seems so to me:

MongoDB Enterprise mongos> sh.getBalancerState()
true
MongoDB Enterprise mongos> sh.isBalancerRunning()
false
MongoDB Enterprise mongos> db.getSiblingDB("config").settings.find()
{ "_id" : "chunksize", "value" : 1 }
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
MongoDB Enterprise mongos> sh.getBalancerWindow()
null
MongoDB Enterprise mongos> db.getSiblingDB("config").collections.find().pretty()
{
        "_id" : "test.test",
        "lastmodEpoch" : ObjectId("5cc1f21c7d2ebf0a897998eb"),
        "lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
        "dropped" : false,
        "key" : {
                "test" : 1
        },
        "unique" : false
}

The chunk definitely is jumbo.
I only put identical documents into it; all of them have the shard key value "foo".
Trying to split it manually also says that it cannot be split:

MongoDB Enterprise mongos> sh.splitFind("test.test", {test: "foo"})
{
        "code" : 87,
        "ok" : 0,
        "errmsg" : "cannot find median in chunk, possibly empty"
}
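
(One way to double-check that the chunk really is indivisible is to look at the distinct shard key values inside its range; a quick sketch with the bounds of the foo -->> fooo chunk:)

// A single distinct value means there is no possible median/split point.
db.getSiblingDB("test").test.distinct("test", { test: { $gte: "foo", $lt: "fooo" } })
// expected result here: [ "foo" ]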

@Vampire0

Well, everything looks correct, except that the balancer has no reason to try to move this chunk yet since the chunks are reasonably well-balanced now (3, 2, 2). Notice that the documentation here says

The balancer stops running on the target collection when the difference between the number of chunks on any two shards for that collection is less than two, or a chunk migration fails.

So I think (and this is, I’ll admit, a guess; an educated guess, but I’m not a developer) that this chunk is not yet marked “jumbo” because the balancer has not yet tried to move it. Notice that there are no failed balancer rounds and there are 4 successful ones. However, according to the sh.status() output, the balancer is still running. So at some point I think the balancer will try to split that chunk and it will fail. When it does, it should notice that the chunk cannot be split and mark it as “jumbo”. You can check the actual status with the commands

mongos> use config
mongos> db.chunks.find({shard:"shard01"}).pretty()
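
To look specifically for the flag (which, as far as I know, is stored as a boolean field on the chunk document), something like this should work:

mongos> use config
mongos> db.chunks.find({ ns: "test.test", jumbo: true }).pretty()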

Basically, I think you’re going to have to wait until the balancer finishes running to know whether the chunk is “jumbo” or not.

I don’t think I get you.

The docs and videos say that chunks are split on inserts and updates when they grow beyond the maximum chunk size, and that if a chunk cannot be split it is marked as jumbo and cannot be moved by the balancer.

Whether the balancer is running or not should not matter, as the balancer just moves chunks around to balance them; it does not try to split them. Also, the balancer does not mark chunks it cannot move as jumbo; rather, it cannot move chunks that are marked as jumbo or that meet the criteria for it.

Also, when I already had this potential jumbo chunk, I added documents that would have allowed a split, and no automatic split happened because the chunk was already jumbo, so I split it manually. I also added documents to other chunks, which were then split and balanced, while the presumably jumbo chunk was not touched or marked as jumbo.

Also, sh.status() does not show that the balancer is running. It shows that the balancer is not running but is enabled, as do the dedicated balancer status commands.

So unfortunately it seems to me that your educated guess goes against everything I have read in the docs, seen in the videos, and posted here about the situation, and I’m no further than at the beginning. :-/

I have now also filled one of the non-jumbo chunks on the same shard as the jumbo one in a splittable manner until it was split multiple times. Those chunks were split and migrated, so the distribution is now 5/5/5, but the seemingly jumbo chunk is still neither touched nor marked.
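
(For completeness, the kind of insert loop I mean looks roughly like this; the key values and padding size here are made up, only the shape matters:)

// Insert documents with distinct shard key values that all fall into the
// goo -->> gooo chunk on shard01, each padded so the chunk grows past the
// 1 MB chunk size and gets split automatically.
var pad = new Array(200 * 1024).join("x");   // ~200 KB of padding per document
["gooa", "goob", "gooc", "good", "gooe"].forEach(function (k) {
    db.getSiblingDB("test").test.insert({ test: k, pad: pad });
});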

@Vampire0

Yep, you’re correct and I was wrong. The chunk should be marked “jumbo” as soon as the database can’t split it, and we know it cannot be split because you tried to do that manually and the split was rejected. Just as a question, did you check the “chunks” collection to confirm that the chunk is not marked “jumbo” there? I’m assuming not, as it doesn’t show up in the display, but that would be good to check.

However, although this is a most interesting issue, it’s waaay off the point of the Forum and (clearly 🙂) past my level of expertise. As a result, I’d suggest that you post this on one of the regular MongoDB forums or on StackOverflow and see if you can get some more help there. But continuing the discussion here isn’t going anywhere useful I’m afraid. Sorry I can’t help…

No, it is also not marked as jumbo there; I had already looked. 🙁
Thanks for your suggestions though.