Lab - Shard a Collection // Script validate_lab_shard_collection fails

Hi, I'm having trouble with the "Shard a Collection" lab.

I have the following error:
vagrant@m103:~$ validate_lab_shard_collection

Replica set 'm103-repl-2' not configured correctly - make sure each node is started with
a wiredTiger cache size of 0.1 GB. Your cluster will crash in the following lab
if you don't do this!

But when I check, everything apparently looks fine:
MongoDB Enterprise m103-repl-2:PRIMARY> rs.status()
{
    "set" : "m103-repl-2",
    "date" : ISODate("2019-09-05T17:52:44.909Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1567705964, 1),
            "t" : NumberLong(3)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1567705964, 1),
            "t" : NumberLong(3)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1567705964, 1),
            "t" : NumberLong(3)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1567705964, 1),
            "t" : NumberLong(3)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.103.100:27004",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1119,
            "optime" : {
                "ts" : Timestamp(1567705964, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-09-05T17:52:44Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1567704907, 1),
            "electionDate" : ISODate("2019-09-05T17:35:07Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.103.100:27005",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1064,
            "optime" : {
                "ts" : Timestamp(1567705954, 2),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1567705954, 2),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-09-05T17:52:34Z"),
            "optimeDurableDate" : ISODate("2019-09-05T17:52:34Z"),
            "lastHeartbeat" : ISODate("2019-09-05T17:52:44.380Z"),
            "lastHeartbeatRecv" : ISODate("2019-09-05T17:52:43.795Z"),
            "pingMs" : NumberLong(29),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:27004",
            "syncSourceHost" : "192.168.103.100:27004",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "192.168.103.100:27006",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1009,
            "optime" : {
                "ts" : Timestamp(1567705954, 2),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1567705954, 2),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-09-05T17:52:34Z"),
            "optimeDurableDate" : ISODate("2019-09-05T17:52:34Z"),
            "lastHeartbeat" : ISODate("2019-09-05T17:52:43.774Z"),
            "lastHeartbeatRecv" : ISODate("2019-09-05T17:52:42.941Z"),
            "pingMs" : NumberLong(48),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:27005",
            "syncSourceHost" : "192.168.103.100:27005",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1567705964, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("7fffffff0000000000000003")
    },
    "$configServerState" : {
        "opTime" : {
            "ts" : Timestamp(1567705959, 3),
            "t" : NumberLong(6)
        }
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1567705964, 1),
        "signature" : {
            "hash" : BinData(0,"tdIgw/crs+ixb5x0gZTWYWnQI08="),
            "keyId" : NumberLong("6732458620568469512")
        }
    }
}
MongoDB Enterprise m103-repl-2:PRIMARY>

This is the status of my sharded cluster:
vagrant@m103:~$ mongo --port 26000 --username m103-admin --password m103-pass --authenticationDatabase admin
MongoDB shell version v3.6.14
connecting to: mongodb://127.0.0.1:26000/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("8bea616e-2ccd-4491-9a83-4c5e1b4a7df7") }
MongoDB server version: 3.6.14
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5d6e7f7277bba9b091b3cd6a")
  }
  shards:
      { "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,m103:27003", "state" : 1 }
      { "_id" : "m103-repl-2", "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006", "state" : 1 }
  active mongoses:
      "3.6.14" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: yes
      Collections with active migrations:
          m103.products started at Thu Sep 05 2019 17:35:10 GMT+0000 (UTC)
      Failed balancer rounds in last 5 attempts: 5
      Last reported error: Could not find host matching read preference { mode: "primary" } for set m103-repl-2
      Time of Reported error: Thu Sep 05 2019 17:34:50 GMT+0000 (UTC)
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "applicationData", "primary" : "m103-repl", "partitioned" : false }
      { "_id" : "config", "primary" : "config", "partitioned" : true }
          config.system.sessions
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  m103-repl 1
              { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : m103-repl Timestamp(1, 0)
      { "_id" : "m103", "primary" : "m103-repl-2", "partitioned" : true }
          m103.products
              shard key: { "sku" : 1 }
              unique: false
              balancing: true
              chunks:
                  m103-repl-2 3
              { "sku" : { "$minKey" : 1 } } -->> { "sku" : 23153496 } on : m103-repl-2 Timestamp(1, 0)
              { "sku" : 23153496 } -->> { "sku" : 28928914 } on : m103-repl-2 Timestamp(1, 1)
              { "sku" : 28928914 } -->> { "sku" : { "$maxKey" : 1 } } on : m103-repl-2 Timestamp(1, 2)
      { "_id" : "test", "primary" : "m103-repl-2", "partitioned" : false }
      { "_id" : "testDatabase", "primary" : "m103-repl", "partitioned" : false }

In most of the previous labs I also had to run the validation script several times before it finally returned the validation code, but this time it is not working at all.
Can someone please guide me and help me understand what the mistake is?

Also, why did all the chunks stay on shard m103-repl-2?

chunks:
    m103-repl-2 3
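I suspect this is related to the balancer error shown in sh.status() above ("Could not find host matching read preference { mode: "primary" } for set m103-repl-2"). If it helps, I can check the balancer from mongos with the standard shell helpers:

```javascript
// run in the mongo shell connected to mongos
sh.getBalancerState()                    // is the balancer enabled?
sh.isBalancerRunning()                   // is a balancing round in progress?
db.adminCommand({ balancerStatus: 1 })   // detailed balancer status
```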

MongoDB Enterprise mongos> db.chunks.find().pretty()
{
    "_id" : "config.system.sessions-_id_MinKey",
    "ns" : "config.system.sessions",
    "min" : {
        "_id" : { "$minKey" : 1 }
    },
    "max" : {
        "_id" : { "$maxKey" : 1 }
    },
    "shard" : "m103-repl",
    "lastmod" : Timestamp(1, 0),
    "lastmodEpoch" : ObjectId("5d6ea492205fe7d547da177e")
}
{
    "_id" : "m103.products-sku_MinKey",
    "ns" : "m103.products",
    "min" : {
        "sku" : { "$minKey" : 1 }
    },
    "max" : {
        "sku" : 23153496
    },
    "shard" : "m103-repl-2",
    "lastmod" : Timestamp(1, 0),
    "lastmodEpoch" : ObjectId("5d712f17d70fa0781f565005")
}
{
    "_id" : "m103.products-sku_23153496",
    "ns" : "m103.products",
    "min" : {
        "sku" : 23153496
    },
    "max" : {
        "sku" : 28928914
    },
    "shard" : "m103-repl-2",
    "lastmod" : Timestamp(1, 1),
    "lastmodEpoch" : ObjectId("5d712f17d70fa0781f565005")
}
{
    "_id" : "m103.products-sku_28928914",
    "ns" : "m103.products",
    "min" : {
        "sku" : 28928914
    },
    "max" : {
        "sku" : { "$maxKey" : 1 }
    },
    "shard" : "m103-repl-2",
    "lastmod" : Timestamp(1, 2),
    "lastmodEpoch" : ObjectId("5d712f17d70fa0781f565005")
}
MongoDB Enterprise mongos>

Thank you in advance.

Hi @Noelia_37348,

Kindly share a screenshot of your config file - we need to make sure that the wiredTiger configuration is correctly specified, as mentioned in the lab notes.
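For reference, the storage section of each shard node's config file should contain the cache-size setting - something like the sketch below, where the dbPath is only an example and will differ per node:

```yaml
storage:
  dbPath: /var/mongodb/db/4   # example only - each node has its own dbPath
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.1        # the setting the validation script checks
```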

Thanks,
Muskan
Curriculum Support Engineer

I can't understand why the error is different today.
Can you help me check?
Incorrect number of documents imported - make sure you import the entire
dataset

vagrant@m103:~$ mongo --port 26000 --username m103-admin --password m103-pass --authenticationDatabase admin
MongoDB shell version v3.6.14
connecting to: mongodb://127.0.0.1:26000/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("34c31197-b9c4-4c0f-82e8-217e032958e6") }
MongoDB server version: 3.6.14
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5d6e7f7277bba9b091b3cd6a")
  }
  shards:
      { "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,m103:27003", "state" : 1 }
      { "_id" : "m103-repl-2", "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006", "state" : 1 }
  active mongoses:
      "3.6.14" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 5
      Last reported error: Could not find host matching read preference { mode: "primary" } for set m103-repl-2
      Time of Reported error: Fri Sep 06 2019 11:42:22 GMT+0000 (UTC)
      Migration Results for the last 24 hours:
          1 : Success
  databases:
      { "_id" : "applicationData", "primary" : "m103-repl", "partitioned" : false }
      { "_id" : "config", "primary" : "config", "partitioned" : true }
          config.system.sessions
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  m103-repl 1
              { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : m103-repl Timestamp(1, 0)
      { "_id" : "m103", "primary" : "m103-repl", "partitioned" : true }
          m103.products
              shard key: { "sku" : 1 }
              unique: false
              balancing: true
              chunks:
                  m103-repl 1
                  m103-repl-2 2
              { "sku" : { "$minKey" : 1 } } -->> { "sku" : 23153496 } on : m103-repl Timestamp(2, 0)
              { "sku" : 23153496 } -->> { "sku" : 28928914 } on : m103-repl-2 Timestamp(2, 1)
              { "sku" : 28928914 } -->> { "sku" : { "$maxKey" : 1 } } on : m103-repl-2 Timestamp(1, 2)
      { "_id" : "test", "primary" : "m103-repl-2", "partitioned" : false }
      { "_id" : "testDatabase", "primary" : "m103-repl", "partitioned" : false }

MongoDB Enterprise mongos> quit()
vagrant@m103:~$ validate_lab_shard_collection

Incorrect number of documents imported - make sure you import the entire
dataset.
(the same command repeated several more times, each time with the same output)
vagrant@m103:~$ mongo --port 26000 --username m103-admin --password m103-pass --authenticationDatabase admin
MongoDB shell version v3.6.14
connecting to: mongodb://127.0.0.1:26000/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c77ac4d5-35ee-4986-ac4c-403264c01255") }
MongoDB server version: 3.6.14
MongoDB Enterprise mongos> use m103
switched to db m103
MongoDB Enterprise mongos> db.products.count()
725872
MongoDB Enterprise mongos> db.products.count()
725872
MongoDB Enterprise mongos>

Hi @Noelia_37348,

As mentioned in the lab notes, the products collection should contain exactly 516784 documents; in your case it contains 725872, which is not right.
Please drop your current collection using the commands below, then import it again.

    use m103
    db.products.drop()

After re-importing the collection and confirming the count, try validating your lab again.
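If helpful, a typical re-import through mongos looks like the following - note that the dataset path /dataset/products.json is the one used in the course VM, so adjust it if yours differs:

```shell
mongoimport --port 26000 \
  --username m103-admin --password m103-pass \
  --authenticationDatabase admin \
  --db m103 --collection products \
  --file /dataset/products.json
```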

Please get back to me if it still doesn’t work.

Thanks,
Muskan
Curriculum Support Engineer