Lab - Configure a Sharded Cluster

Hello everybody

I’ve tried this lab several times and I always get the same error.

I can launch all three nodes and mongos without any problem.

MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5bd975d8f40b052f609ffed6")
  }
  shards:
  active mongoses:
    "3.6.8" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }

My primary detects the other two nodes:

MongoDB Enterprise m103-csrs:PRIMARY> rs.status()
{
  "set" : "m103-csrs",
  "date" : ISODate("2018-11-04T16:18:32.350Z"),
  "myState" : 1,
  "term" : NumberLong(12),
  "syncingTo" : "",
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "configsvr" : true,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1541348308, 1),
      "t" : NumberLong(12)
    },
    "readConcernMajorityOpTime" : {
      "ts" : Timestamp(1541348308, 1),
      "t" : NumberLong(12)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1541348308, 1),
      "t" : NumberLong(12)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1541348308, 1),
      "t" : NumberLong(12)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "192.168.103.100:26001",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 159,
      "optime" : {
        "ts" : Timestamp(1541348308, 1),
        "t" : NumberLong(12)
      },
      "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "",
      "electionTime" : Timestamp(1541348165, 1),
      "electionDate" : ISODate("2018-11-04T16:16:05Z"),
      "configVersion" : 3,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "192.168.103.100:26002",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 152,
      "optime" : {
        "ts" : Timestamp(1541348308, 1),
        "t" : NumberLong(12)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1541348308, 1),
        "t" : NumberLong(12)
      },
      "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
      "optimeDurableDate" : ISODate("2018-11-04T16:18:28Z"),
      "lastHeartbeat" : ISODate("2018-11-04T16:18:31.827Z"),
      "lastHeartbeatRecv" : ISODate("2018-11-04T16:18:31.762Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.103.100:26001",
      "syncSourceHost" : "192.168.103.100:26001",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 3
    },
    {
      "_id" : 2,
      "name" : "192.168.103.100:26003",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 148,
      "optime" : {
        "ts" : Timestamp(1541348308, 1),
        "t" : NumberLong(12)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1541348308, 1),
        "t" : NumberLong(12)
      },
      "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
      "optimeDurableDate" : ISODate("2018-11-04T16:18:28Z"),
      "lastHeartbeat" : ISODate("2018-11-04T16:18:31.911Z"),
      "lastHeartbeatRecv" : ISODate("2018-11-04T16:18:31.142Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.103.100:26002",
      "syncSourceHost" : "192.168.103.100:26002",
      "syncSourceId" : 1,
      "infoMessage" : "",
      "configVersion" : 3
    }
  ],
  "ok" : 1,
  "operationTime" : Timestamp(1541348308, 1),
  "$gleStats" : {
    "lastOpTime" : Timestamp(0, 0),
    "electionId" : ObjectId("7fffffff000000000000000c")
  },
  "$clusterTime" : {
    "clusterTime" : Timestamp(1541348308, 1),
    "signature" : {
      "hash" : BinData(0,"CIukv3CyYZiE4dnQPP9G4+sg63A="),
      "keyId" : NumberLong("6618450697971040262")
    }
  }
}

After shutting down the two secondary nodes, the primary switches to secondary.

This is my shard node configuration file. I started out using port 26001 and have now switched to 27011.

vagrant@m103:~/data$ cat node1.conf
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/node1
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
net:
  bindIp: 192.168.103.100,localhost
  port: 27011
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node1/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
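
For completeness, this is roughly what I run to start the node and then try to connect (just a sketch; I have left the auth options out here):

vagrant@m103:~/data$ mongod -f node1.conf
vagrant@m103:~/data$ mongo --host 192.168.103.100 --port 27011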

Any advice?

Thanks in advance

Well, what’s the error? :slight_smile: You didn’t mention that…

If it’s about the validation script, it could very well be a known issue that we’ve been running into.

Sorry, I thought I had written it.

When I connect to the primary node after shutting down the two secondary nodes, it says that the node has become a secondary.

Also, I can’t connect with mongo when using the shard configuration; it doesn’t matter which port I use.

This is normal behaviour and was covered in the course. When only one node remains, it will fall back to secondary status, because the cluster can no longer guarantee that written data is safeguarded. So basically, the cluster becomes read-only until at least a majority of hosts are back.
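
If you want to see it recover, bringing back just one of the stopped members should restore the majority and trigger an election. For example (the config file name below is only a placeholder for whatever you used for the CSRS members):

mongod -f csrs_2.conf
mongo --host 192.168.103.100 --port 26001
rs.status()

Once 2 of the 3 members are up again, rs.status() should show one of them as PRIMARY.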

Once you have made a sharded cluster, you should be able to:

  1. Use the mongo shell to connect to the m103-repl replica set (to get rs.status()).
  2. Use the mongo shell to connect to the m103-repl-2 replica set (to get rs.status()).
  3. Use the mongo shell to connect to mongos (to get sh.status()).

Mongos was configured to run on 26000, which is what you should be able to connect to.
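
For example, something along these lines should work once everything is up (the replica set ports here are the ones from the lab instructions, and the user name and password are just placeholders for whatever you created in the earlier labs):

mongo --host m103-repl/192.168.103.100:27001 -u m103-admin -p m103-pass --authenticationDatabase admin
mongo --host m103-repl-2/192.168.103.100:27004 -u m103-admin -p m103-pass --authenticationDatabase admin
mongo --host 192.168.103.100:26000 -u m103-admin -p m103-pass --authenticationDatabase admin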

Why don’t we run the following?

  • ps -ef | grep -i mongo
  • netstat -an | grep LISTEN | grep ":2"
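
If the netstat output is too noisy, the second command can be narrowed to the ports in play here, e.g. (adjust the pattern to whatever ports you actually used):

netstat -an | grep LISTEN | grep -E ':(26000|2600[1-3]|270[0-9][0-9])'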

I don’t know if it is important to the validation scripts, but the m103-repl-2 mongods should be on ports 27004, 27005 and 27006, not on 26001, 26002, and 26003. Same thing with m103-repl: the instructions say to use 27001, 27002 and 27003.

I do not know whether the scripts connect to specific ports for the replica set mongods or only for mongos. I would try the ports specified in the instructions first.
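
If you decide to move them to the documented ports, it should just be a matter of editing the net section of each node’s config file and restarting the mongod, for example (only the port changes; the rest of the file stays as it is):

net:
  bindIp: 192.168.103.100,localhost
  port: 27001

If the replica set was already initiated on the old ports, the member host:port entries would also need to be updated (rs.reconfig()), and once m103-repl is reachable on the expected ports it can be added from mongos with something like sh.addShard("m103-repl/192.168.103.100:27001").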


That’s a great catch @steevej-1495! I didn’t notice the port numbers were off. Yeah, I agree with you that the validator probably expects things to be exactly as the assignments outline them.

The validators do expect things to be set up exactly as they are in the lab instructions.

That said, message received. It may be of little help but we do plan to update the validators to accept arguments in the future to allow for different setups.
