Issue with Chapter: 2 Lab - Initiate a Replica Set Locally

Hi Kanika, I have followed all the required steps and every command worked fine without any issue getting the replica set up and running, but I get the following message during validation.

./validate_lab_initialize_local_replica_set

Client experienced a timeout when connecting to the database - check that mongod
processes are running on ports 27001, 27002 and 27003, and that the ‘m103-admin’
user authenticates against the admin database.

I have double-checked the suggested areas: mongod processes are running on ports 27001, 27002 and 27003, and the 'm103-admin' user has been created and is used in the script that starts MongoDB.
What else might be wrong?

MongoDB Enterprise m103-repl:PRIMARY> rs.status()
{
    "set" : "m103-repl",
    "date" : ISODate("2018-11-29T23:36:50.218Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1543534604, 1),
            "t" : NumberLong(2)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1543534604, 1),
            "t" : NumberLong(2)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1543534604, 1),
            "t" : NumberLong(2)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1543534604, 1),
            "t" : NumberLong(2)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.103.100:27001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1158,
            "optime" : {
                "ts" : Timestamp(1543534604, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-11-29T23:36:44Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1543533469, 1),
            "electionDate" : ISODate("2018-11-29T23:17:49Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.103.100:27002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1141,
            "optime" : {
                "ts" : Timestamp(1543534604, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1543534604, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-11-29T23:36:44Z"),
            "optimeDurableDate" : ISODate("2018-11-29T23:36:44Z"),
            "lastHeartbeat" : ISODate("2018-11-29T23:36:48.267Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-29T23:36:48.263Z"),
            "pingMs" : NumberLong(14),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:27001",
            "syncSourceHost" : "192.168.103.100:27001",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "192.168.103.100:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1129,
            "optime" : {
                "ts" : Timestamp(1543534604, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1543534604, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-11-29T23:36:44Z"),
            "optimeDurableDate" : ISODate("2018-11-29T23:36:44Z"),
            "lastHeartbeat" : ISODate("2018-11-29T23:36:49.839Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-29T23:36:49.134Z"),
            "pingMs" : NumberLong(9),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:27001",
            "syncSourceHost" : "192.168.103.100:27001",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1543534604, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1543534604, 1),
        "signature" : {
            "hash" : BinData(0,"CsQhdT6nDH6hvgFb1OjW1Ety/Uw="),
            "keyId" : NumberLong("6629408267644698625")
        }
    }
}

MongoDB Enterprise m103-repl:PRIMARY> db.getUsers()
[
    {
        "_id" : "admin.m103-admin",
        "user" : "m103-admin",
        "db" : "admin",
        "roles" : [
            {
                "role" : "root",
                "db" : "admin"
            }
        ]
    }
]

Just in case it helps, here is the rs.isMaster() output:

MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
    "hosts" : [
        "192.168.103.100:27001",
        "192.168.103.100:27002",
        "192.168.103.100:27003"
    ],
    "setName" : "m103-repl",
    "setVersion" : 3,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "192.168.103.100:27001",
    "me" : "192.168.103.100:27001",
    "electionId" : ObjectId("7fffffff0000000000000002"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1543535175, 1),
            "t" : NumberLong(2)
        },
        "lastWriteDate" : ISODate("2018-11-29T23:46:15Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1543535175, 1),
            "t" : NumberLong(2)
        },
        "majorityWriteDate" : ISODate("2018-11-29T23:46:15Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2018-11-29T23:46:24.367Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1543535175, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1543535175, 1),
        "signature" : {
            "hash" : BinData(0,"Cj2XwiRsld4mI2Mzt+eGHEwQnDw="),
            "keyId" : NumberLong("6629408267644698625")
        }
    }
}

That looks good. I am puzzled.

The first thing that comes to mind is that the servers are not running with authentication enabled. Please verify the security: section of your configuration files.
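For example, a quick way to inspect all three configuration files at once (a sketch, assuming they are named mongod-repl-1.conf through mongod-repl-3.conf, as elsewhere in this thread):

grep -A 2 'security:' mongod-repl-*.conf

Each file should show authorization: enabled and the keyFile path.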

Hi Octavian_90298,

steevej-1495 is right; everything in your output looks fine. Just check for authentication: if it's not enabled, restart mongod after enabling it.
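A sketch of how to restart one member after enabling authentication in its configuration file (assuming the mongod-repl-*.conf file names used elsewhere in this thread):

# shut the member down cleanly, then start it with the updated config
mongod -f mongod-repl-1.conf --shutdown
mongod -f mongod-repl-1.conf

Repeat for the other two members, one at a time.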

Let me know if I can help.

Kanika

I am having the same problem! I can log in to the replica set with authentication

mongo --host "m103-example/m103.mongodb.university:27001" -u "m103-admin" -p m103-pass --authenticationDatabase "admin"

and the replicas are listening on the expected ports:

vagrant@m103:/var/mongodb$ netstat -antp | grep LISTEN | grep :270
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:27001 0.0.0.0:* LISTEN 5142/mongod
tcp 0 0 192.168.103.100:27001 0.0.0.0:* LISTEN 5142/mongod
tcp 0 0 127.0.0.1:27002 0.0.0.0:* LISTEN 5171/mongod
tcp 0 0 192.168.103.100:27002 0.0.0.0:* LISTEN 5171/mongod
tcp 0 0 127.0.0.1:27003 0.0.0.0:* LISTEN 5200/mongod
tcp 0 0 192.168.103.100:27003 0.0.0.0:* LISTEN 5200/mongod

Any idea why the validation script validate_lab_initialize_local_replica_set fails?

Hi Kanika, and thanks to both of you for checking. Just to confirm: authentication has been enabled in the configuration files as described in the lab, and it is used explicitly when the mongo shell is started:

security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile

mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"

I have restarted mongod a couple of times, but not much luck yet. Very curious.

@Peter_75012 and @Octavian_90298, the output of rs.isMaster() would be helpful.

MongoDB Enterprise m103-example:PRIMARY> rs.isMaster()
{
    "hosts" : [
        "192.168.103.100:27001",
        "m103.mongodb.university:27002",
        "m103.mongodb.university:27003"
    ],
    "setName" : "m103-example",
    "setVersion" : 3,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "m103.mongodb.university:27002",
    "me" : "m103.mongodb.university:27002",
    "electionId" : ObjectId("7fffffff0000000000000002"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1543675659, 1),
            "t" : NumberLong(2)
        },
        "lastWriteDate" : ISODate("2018-12-01T14:47:39Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1543675659, 1),
            "t" : NumberLong(2)
        },
        "majorityWriteDate" : ISODate("2018-12-01T14:47:39Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2018-12-01T14:47:44.970Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1543675659, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1543675659, 1),
        "signature" : {
            "hash" : BinData(0,"K04/dNInadvr1ujpo3fNB4YcIfw="),
            "keyId" : NumberLong("6629764208764387329")
        }
    }
}

@Peter_75012, is the IP address of m103.mongodb.university 192.168.103.100?
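A quick way to check from the vagrant box, for example:

getent hosts m103.mongodb.university
grep m103 /etc/hosts

Either should show which IP that hostname resolves to.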

@Peter_75012 and @Octavian_90298, you both write that validate_lab_initialize_local_replica_set fails but one of you has m103-example as replica set name while the other has m103-repl. One of you has the wrong replica set name or is calling the wrong validation script.

For validate_lab_initialize_local_replica_set the replica set name must be m103_repl. I could not find which lab requires m103-example, but I also have it in some of my configuration files from the course.
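If in doubt, one way to confirm the set name your running servers actually report (a sketch, using the course credentials shown elsewhere in this thread):

mongo --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval "rs.status().set"

For this validation script it should print m103-repl.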

Thank you! I obviously never considered that the configuration in the lab would be different from the configuration we used in the previous lecture that described how to set up the replication cluster. Now it works…

Good. Hopefully @Octavian_90298 will have as much success.

Thanks for your suggestion, @steevej-1495, to check the replica set name; it shows m103-repl as expected.
Anyway, here are a couple of outputs which look good to me. I am wondering if this is a platform-specific issue. I have built this environment on a cloud-based RHEL 7.4 system.

uname -a
Linux twisp1.xyx.com 3.10.0-862.2.3.el7.x86_64 #1 SMP Mon Apr 30 12:37:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
vagrant ssh
mongo --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
or
mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"

MongoDB Enterprise m103-repl:PRIMARY> rs.isMaster()
{
    "hosts" : [
        "192.168.103.100:27001",
        "192.168.103.100:27002",
        "192.168.103.100:27003"
    ],
    "setName" : "m103-repl",
    "setVersion" : 3,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "192.168.103.100:27001",
    "me" : "192.168.103.100:27001",
    "electionId" : ObjectId("7fffffff0000000000000004"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1543876127, 1),
            "t" : NumberLong(4)
        },
        "lastWriteDate" : ISODate("2018-12-03T22:28:47Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1543876127, 1),
            "t" : NumberLong(4)
        },
        "majorityWriteDate" : ISODate("2018-12-03T22:28:47Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2018-12-03T22:28:49.247Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1543876127, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1543876127, 1),
        "signature" : {
            "hash" : BinData(0,"EOzF61snCYHchTlls0CzXN2McSY="),
            "keyId" : NumberLong("6629408267644698625")
        }
    }
}

vagrant@m103:~$ grep m103-repl mongod-repl-*
mongod-repl-1.conf:  replSetName: m103-repl
mongod-repl-2.conf:  replSetName: m103-repl
mongod-repl-3.conf:  replSetName: m103-repl
vagrant@m103:~$ ps -feal | grep mongod
1 S vagrant 2461 1 19 80 0 - 401457 futex_ 22:27 ? 00:02:12 mongod -f mongod-repl-1.conf
1 S vagrant 2538 1 19 80 0 - 383652 futex_ 22:27 ? 00:02:09 mongod -f mongod-repl-2.conf
1 S vagrant 2619 1 18 80 0 - 381322 futex_ 22:27 ? 00:02:04 mongod -f mongod-repl-3.conf
0 S vagrant 2768 2291 0 80 0 - 2217 pipe_w 22:38 pts/0 00:00:00 grep --color=auto mongod
vagrant@m103:~$ uname -a
Linux m103 3.13.0-161-generic #211-Ubuntu SMP Wed Oct 3 14:52:35 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Just in case I have missed the point: should the replica set name be m103-repl or m103_repl?

Thx much!

Hi Octavian_90298,

The replica set name should be m103-repl, the one with the hyphen.

And I am sorry, but what exactly is the problem here? Can you share the error or concern you have in this lab, so that I can help you better?

Kanika

Hi Kanika,

The initial issue is not resolved; the validation script still times out connecting, but the replica set name, port numbers, user and functionality all work fine.

validate_lab_initialize_local_replica_set

Client experienced a timeout when connecting to the database - check that mongod
processes are running on ports 27001, 27002 and 27003, and that the ‘m103-admin’
user authenticates against the admin database.

I have double-checked that the ports in use are 27001, 27002 and 27003, and that the 'm103-admin' user has been created and is used in the script that starts MongoDB.
What else might be wrong? Are there any options to trace the validation script on the mongo side?

@Octavian_90298, could you please provide the output of the command netstat -tnlup from the vagrant machine?

Could you also provide the output of db.getUsers() from the mongo shell?

I know you said that you double-checked them, but the script has worked for many others, so since it is not able to validate your setup, there must still be something wrong with the setup itself.

The last 20 or 30 lines of the log file of the primary could also help us.
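For example:

tail -n 30 /var/mongodb/db/mongod1.log

(adjust the path to whatever systemLog.path is in your configuration). If nothing shows up there while the script runs, temporarily raising the log verbosity from the mongo shell might help trace it:

db.setLogLevel(1)

Remember to set it back with db.setLogLevel(0) afterwards.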

Hi Octavian_90298,

As suggested by @steevej-1495, please check the mongod logs and provide the output of db.getUsers().

Also, check the database on which the user has been created: m103-admin should be created on the admin database, and mongod should be running with authentication enabled.
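A minimal check from the mongo shell, for example:

use admin
db.getUser("m103-admin")

This should return the same user document as the db.getUsers() output you posted, with "db" : "admin".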

Kanika

Hi,

The user has been created in the admin database, authentication is enabled, and the mongo shell connection works properly.

vagrant@m103:~$ netstat -tnlup
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 192.168.103.100:27001 0.0.0.0:* LISTEN 12065/mongod
tcp 0 0 127.0.0.1:27001 0.0.0.0:* LISTEN 12065/mongod
tcp 0 0 192.168.103.100:27002 0.0.0.0:* LISTEN 11983/mongod
tcp 0 0 127.0.0.1:27002 0.0.0.0:* LISTEN 11983/mongod
tcp 0 0 192.168.103.100:27003 0.0.0.0:* LISTEN 12146/mongod
tcp 0 0 127.0.0.1:27003 0.0.0.0:* LISTEN 12146/mongod
tcp 0 0 0.0.0.0:56906 0.0.0.0:* LISTEN -
tcp6 0 0 :::111 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::59457 :::* LISTEN -
udp 0 0 0.0.0.0:111 0.0.0.0:* -
udp 0 0 0.0.0.0:32969 0.0.0.0:* -
udp 0 0 0.0.0.0:58645 0.0.0.0:* -
udp 0 0 0.0.0.0:801 0.0.0.0:* -
udp 0 0 0.0.0.0:17212 0.0.0.0:* -
udp 0 0 127.0.0.1:843 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp6 0 0 :::111 :::* -
udp6 0 0 :::3250 :::* -
udp6 0 0 :::2353 :::* -
udp6 0 0 :::801 :::* -
udp6 0 0 :::50033 :::* -

vagrant@m103:~$ mongo --host "m103-repl/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.9
connecting to: mongodb://192.168.103.100:27001/?replicaSet=m103-repl
2018-12-05T05:28:47.239+0000 I NETWORK [thread1] Starting new replica set monitor for m103-repl/192.168.103.100:27001
2018-12-05T05:28:47.290+0000 I NETWORK [thread1] Successfully connected to 192.168.103.100:27001 (1 connections now open to 192.168.103.100:27001 with a 5 second timeout)
2018-12-05T05:28:47.322+0000 I NETWORK [thread1] Successfully connected to 192.168.103.100:27002 (1 connections now open to 192.168.103.100:27002 with a 5 second timeout)
2018-12-05T05:28:47.330+0000 I NETWORK [thread1] changing hosts to m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003 from m103-repl/192.168.103.100:27001
2018-12-05T05:28:47.358+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to 192.168.103.100:27003 (1 connections now open to 192.168.103.100:27003 with a 5 second timeout)
Implicit session: session { "id" : UUID("6aa5e14c-03c7-4582-9433-37e905b70ebd") }
MongoDB server version: 3.6.9
Server has startup warnings:
2018-12-04T18:06:55.134+0000 I STORAGE [initandlisten]
2018-12-04T18:06:55.135+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-12-04T18:06:55.135+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
MongoDB Enterprise m103-repl:PRIMARY> use admin
switched to db admin
MongoDB Enterprise m103-repl:PRIMARY> db.getUsers()
[
    {
        "_id" : "admin.m103-admin",
        "user" : "m103-admin",
        "db" : "admin",
        "roles" : [
            {
                "role" : "root",
                "db" : "admin"
            }
        ]
    }
]

Regards

We also asked for the mongod logs. Please provide the content of the logs for the time period during which you are running the validation script.

Here are the logs:

vi /var/mongodb/db/mongod1.log

2018-12-05T20:56:40.414+0000 I COMMAND [conn232] command local.oplog.rs command: getMore { getMore: 11342764498, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 5, lastKnownCommittedOpTime: { ts: Timestamp(1544043389, 1), t: 5 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1544043399, 1), signature: { hash: BinData(0, B62CFC4ED328761013271000829E86CC0C97844F), keyId: 6629408267644698625 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1543943312, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 5, readConcern: { afterOpTime: { ts: Timestamp(1543943312, 1), t: 4 } }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1543946865, 1), signature: { hash: BinData(0, 4D20C8DA21DBF3965B78B1AC48E371C54F5995EA), keyId: 6629408267644698625 } }, $db: "local" } planSummary: COLLSCAN cursorid:11342764498 keysExamined:0 docsExamined:0 numYields:4 nreturned:0 reslen:576 locks:{ Global: { acquireCount: { r: 10 } }, Database: { acquireCount: { r: 5 } }, oplog: { acquireCount: { r: 5 } } } protocol:op_msg 199ms
… Start baseline entry …
No entries recorded while running validate_lab_initialize_local_replica_set

vi /var/mongodb/db/mongod2.log

2018-12-05T20:57:38.382+0000 I ACCESS [conn11] Successfully authenticated as principal __system on local
2018-12-05T20:57:38.447+0000 I ACCESS [conn11] Successfully authenticated as principal __system on local
2018-12-05T20:57:38.559+0000 I ACCESS [conn11] Successfully authenticated as principal __system on local
2018-12-05T20:57:38.637+0000 I ACCESS [conn11] Successfully authenticated as principal __system on local
2018-12-05T20:57:53.571+0000 I NETWORK [listener] connection accepted from 192.168.103.100:37584 #112 (10 connections now open)
2018-12-05T20:57:53.686+0000 I NETWORK [conn112] received client metadata from 192.168.103.100:37584 conn112: { driver: { name: "mongo-go-driver", version: "local build" }, os: { architecture: "amd64", type: "linux" }, platform: "go1.9.2" }
2018-12-05T20:57:54.170+0000 I ACCESS [conn112] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:54.214+0000 I NETWORK [listener] connection accepted from 192.168.103.100:37585 #113 (11 connections now open)
2018-12-05T20:57:54.262+0000 I NETWORK [conn113] received client metadata from 192.168.103.100:37585 conn113: { driver: { name: "mongo-go-driver", version: "local build" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.9.2" }
2018-12-05T20:57:54.265+0000 I ACCESS [conn13] Successfully authenticated as principal __system on local
2018-12-05T20:57:54.327+0000 I ACCESS [conn13] Successfully authenticated as principal __system on local
2018-12-05T20:57:54.435+0000 I ACCESS [conn13] Successfully authenticated as principal __system on local
2018-12-05T20:57:54.450+0000 I ACCESS [conn113] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:54.571+0000 I ACCESS [conn113] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:54.577+0000 I ACCESS [conn13] Successfully authenticated as principal __system on local
2018-12-05T20:57:54.682+0000 I ACCESS [conn113] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:54.719+0000 I NETWORK [listener] connection accepted from 192.168.103.100:37586 #114 (12 connections now open)
2018-12-05T20:57:54.755+0000 I NETWORK [conn114] received client metadata from 192.168.103.100:37586 conn114: { driver: { name: "mongo-go-driver", version: "local build" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.9.2" }
2018-12-05T20:57:54.906+0000 I ACCESS [conn114] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:55.082+0000 I ACCESS [conn114] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:55.295+0000 I ACCESS [conn114] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:55.523+0000 I NETWORK [conn114] end connection 192.168.103.100:37586 (11 connections now open)
2018-12-05T20:57:55.526+0000 I NETWORK [conn113] end connection 192.168.103.100:37585 (10 connections now open)
2018-12-05T20:57:55.527+0000 I NETWORK [conn112] end connection 192.168.103.100:37584 (9 connections now open)

vi /var/mongodb/db/mongod3.log

2018-12-05T20:49:18.403+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to 192.168.103.100:27001
2018-12-05T20:49:18.526+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.103.100:27001, took 127ms (3 connections now open to 192.168.103.100:27001)
2018-12-05T20:52:54.155+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:52:54.255+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:52:54.346+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:52:54.455+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:57:53.518+0000 I NETWORK [listener] connection accepted from 192.168.103.100:48783 #8 (5 connections now open)
2018-12-05T20:57:53.627+0000 I NETWORK [conn8] received client metadata from 192.168.103.100:48783 conn8: { driver: { name: "mongo-go-driver", version: "local build" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.9.2" }
2018-12-05T20:57:53.930+0000 I ACCESS [conn8] Successfully authenticated as principal m103-admin on admin
2018-12-05T20:57:54.158+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:57:54.281+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:57:54.339+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:57:54.481+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T20:57:55.537+0000 I NETWORK [conn8] end connection 192.168.103.100:48783 (4 connections now open)
2018-12-05T21:00:20.966+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending idle connection to host 192.168.103.100:27001 because the pool meets constraints; 2 connections to that host remain open
2018-12-05T21:00:31.103+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to 192.168.103.100:27001
2018-12-05T21:00:31.402+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.103.100:27001, took 298ms (3 connections now open to 192.168.103.100:27001)
2018-12-05T21:02:54.171+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:02:54.245+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:02:54.359+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:02:54.471+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003

vagrant@m103:~$ date
Wed Dec 5 21:06:01 UTC 2018
vagrant@m103:~$ validate_lab_initialize_local_replica_set

Client experienced a timeout when connecting to the database - check that mongod
processes are running on ports 27001, 27002 and 27003, and that the ‘m103-admin’
user authenticates against the admin database.

tail /var/mongodb/db/mongod1.log

2018-12-05T20:56:40.414+0000 I COMMAND [conn232] command local.oplog.rs command: getMore { getMore: 11342764498, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 5, lastKnownCommittedOpTime: { ts: Timestamp(1544043389, 1), t: 5 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1544043399, 1), signature: { hash: BinData(0, B62CFC4ED328761013271000829E86CC0C97844F), keyId: 6629408267644698625 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1543943312, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 5, readConcern: { afterOpTime: { ts: Timestamp(1543943312, 1), t: 4 } }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1543946865, 1), signature: { hash: BinData(0, 4D20C8DA21DBF3965B78B1AC48E371C54F5995EA), keyId: 6629408267644698625 } }, $db: "local" } planSummary: COLLSCAN cursorid:11342764498 keysExamined:0 docsExamined:0 numYields:4 nreturned:0 reslen:576 locks:{ Global: { acquireCount: { r: 10 } }, Database: { acquireCount: { r: 5 } }, oplog: { acquireCount: { r: 5 } } } protocol:op_msg 199ms
…Start baseline entry …

tail /var/mongodb/db/mongod2.log
2018-12-05T21:06:06.955+0000 I NETWORK [listener] connection accepted from 192.168.103.100:37593 #117 (11 connections now open)
2018-12-05T21:06:06.990+0000 I NETWORK [conn117] received client metadata from 192.168.103.100:37593 conn117: { platform: "go1.9.2", driver: { name: "mongo-go-driver", version: "local build" }, os: { type: "linux", architecture: "amd64" } }
2018-12-05T21:06:07.109+0000 I COMMAND [conn117] command admin.$cmd command: buildInfo { buildInfo: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1499 locks:{} protocol:op_query 101ms
2018-12-05T21:06:07.298+0000 I ACCESS [conn117] Successfully authenticated as principal m103-admin on admin
2018-12-05T21:06:07.507+0000 I ACCESS [conn117] Successfully authenticated as principal m103-admin on admin
2018-12-05T21:06:07.710+0000 I ACCESS [conn117] Successfully authenticated as principal m103-admin on admin
2018-12-05T21:06:08.370+0000 I COMMAND [conn115] command local.oplog.rs command: getMore { getMore: 11740161817, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 5, lastKnownCommittedOpTime: { ts: Timestamp(1544043961, 1), t: 5 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1544043967, 1), signature: { hash: BinData(0, C71709F8612840E9250F42B97624983435EE0F9E), keyId: 6629408267644698625 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1543943312, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 5, readConcern: { afterOpTime: { ts: Timestamp(1543943312, 1), t: 4 } }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1543946865, 1), signature: { hash: BinData(0, 4D20C8DA21DBF3965B78B1AC48E371C54F5995EA), keyId: 6629408267644698625 } }, $db: "local" } planSummary: COLLSCAN cursorid:11740161817 keysExamined:0 docsExamined:0 numYields:2 nreturned:0 reslen:576 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_msg 111ms
2018-12-05T21:06:08.563+0000 I NETWORK [conn117] end connection 192.168.103.100:37593 (10 connections now open)
2018-12-05T21:06:08.587+0000 I NETWORK [conn116] end connection 192.168.103.100:37592 (9 connections now open)
2018-12-05T21:06:32.086+0000 I COMMAND [conn106] command local.oplog.rs command: getMore { getMore: 11740161817, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 5, lastKnownCommittedOpTime: { ts: Timestamp(1544043981, 1), t: 5 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1544043981, 1), signature: { hash: BinData(0, 543DDA5B0DBBC20C6D5910373369FF62D369755E), keyId: 6629408267644698625 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1543943312, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 5, readConcern: { afterOpTime: { ts: Timestamp(1543943312, 1), t: 4 } }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1543946865, 1), signature: { hash: BinData(0, 4D20C8DA21DBF3965B78B1AC48E371C54F5995EA), keyId: 6629408267644698625 } }, $db: "local" } planSummary: COLLSCAN cursorid:11740161817 keysExamined:0 docsExamined:1 numYields:6 nreturned:1 reslen:689 locks:{ Global: { acquireCount: { r: 14 } }, Database: { acquireCount: { r: 7 } }, oplog: { acquireCount: { r: 7 } } } protocol:op_msg 192ms

vagrant@m103:~$ tail /var/mongodb/db/mongod3.log
2018-12-05T21:02:54.359+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:02:54.471+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:06:06.450+0000 I NETWORK [listener] connection accepted from 192.168.103.100:48790 #9 (5 connections now open)
2018-12-05T21:06:06.571+0000 I NETWORK [conn9] received client metadata from 192.168.103.100:48790 conn9: { driver: { name: "mongo-go-driver", version: "local build" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.9.2" }
2018-12-05T21:06:07.063+0000 I ACCESS [conn9] Successfully authenticated as principal m103-admin on admin
2018-12-05T21:06:08.585+0000 I NETWORK [conn9] end connection 192.168.103.100:48790 (4 connections now open)
2018-12-05T21:07:54.179+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:07:54.337+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:07:54.415+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003
2018-12-05T21:07:54.530+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003

Regards