Hello everybody,
I've tried this lab several times and always hit the same error.
I can launch all three nodes and mongos without any problem.
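For reference, I start the CSRS nodes and mongos roughly like this (the config file names are my own; the options come from the lab's config files):

vagrant@m103:~$ mongod -f csrs_1.conf
vagrant@m103:~$ mongod -f csrs_2.conf
vagrant@m103:~$ mongod -f csrs_3.conf
vagrant@m103:~$ mongos -f mongos.conf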
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5bd975d8f40b052f609ffed6")
  }
  shards:
  active mongoses:
      "3.6.8" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "config", "primary" : "config", "partitioned" : true }
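Note that the shards section above stays empty, even after I try to add my shard replica set from mongos with something like this (host, port, and replica set name per my shard config below):

MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27011")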
My primary detects the other two nodes:
MongoDB Enterprise m103-csrs:PRIMARY> rs.status()
{
    "set" : "m103-csrs",
    "date" : ISODate("2018-11-04T16:18:32.350Z"),
    "myState" : 1,
    "term" : NumberLong(12),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1541348308, 1),
            "t" : NumberLong(12)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1541348308, 1),
            "t" : NumberLong(12)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1541348308, 1),
            "t" : NumberLong(12)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1541348308, 1),
            "t" : NumberLong(12)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.103.100:26001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 159,
            "optime" : {
                "ts" : Timestamp(1541348308, 1),
                "t" : NumberLong(12)
            },
            "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1541348165, 1),
            "electionDate" : ISODate("2018-11-04T16:16:05Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.103.100:26002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 152,
            "optime" : {
                "ts" : Timestamp(1541348308, 1),
                "t" : NumberLong(12)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1541348308, 1),
                "t" : NumberLong(12)
            },
            "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
            "optimeDurableDate" : ISODate("2018-11-04T16:18:28Z"),
            "lastHeartbeat" : ISODate("2018-11-04T16:18:31.827Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-04T16:18:31.762Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:26001",
            "syncSourceHost" : "192.168.103.100:26001",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "192.168.103.100:26003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 148,
            "optime" : {
                "ts" : Timestamp(1541348308, 1),
                "t" : NumberLong(12)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1541348308, 1),
                "t" : NumberLong(12)
            },
            "optimeDate" : ISODate("2018-11-04T16:18:28Z"),
            "optimeDurableDate" : ISODate("2018-11-04T16:18:28Z"),
            "lastHeartbeat" : ISODate("2018-11-04T16:18:31.911Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-04T16:18:31.142Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:26002",
            "syncSourceHost" : "192.168.103.100:26002",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1541348308, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("7fffffff000000000000000c")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1541348308, 1),
        "signature" : {
            "hash" : BinData(0,"CIukv3CyYZiE4dnQPP9G4+sg63A="),
            "keyId" : NumberLong("6618450697971040262")
        }
    }
}
After shutting down the two secondary nodes, the primary steps down to secondary.
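If I understand correctly, that part is expected: with two of the three members down, the surviving node no longer sees a majority and steps down. I check the member states with something like:

MongoDB Enterprise m103-csrs:SECONDARY> rs.status().members.map(function(m) { return m.name + " - " + m.stateStr })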
This is my shard node's configuration file. I originally ran it on port 26001 and now use 27011.
vagrant@m103:~/data$ cat node1.conf
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/node1
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
net:
  bindIp: 192.168.103.100,localhost
  port: 27011
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node1/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
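For completeness, these are roughly the steps I follow for the shard replica set (node2.conf and node3.conf mirror node1.conf, with their own ports and dbPath/log paths):

vagrant@m103:~/data$ mongod -f node1.conf
vagrant@m103:~/data$ mongod -f node2.conf
vagrant@m103:~/data$ mongod -f node3.conf
vagrant@m103:~/data$ mongo --port 27011
MongoDB Enterprise > rs.initiate()

Then, once the node is primary and I've created the admin user and authenticated (per the lab):

MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27012")
MongoDB Enterprise m103-repl:PRIMARY> rs.add("192.168.103.100:27013")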
Any advice?
Thanks in advance