Help! - I have a problem with Lab - Initiate a Replica Set Locally

Hello, I am trying to complete the lab but I can't get it to pass, and I don't know why. Please help.
As far as I can tell, there is no error…
This is my cluster status:

MongoDB Enterprise m103-example:PRIMARY> rs.status()
{
    "set" : "m103-example",
    "date" : ISODate("2019-07-28T00:31:36.007Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1564273895, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1564273895, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1564273895, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1564273895, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.103.100:27001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 124,
            "optime" : {
                "ts" : Timestamp(1564273895, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-07-28T00:31:35Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1564273803, 2),
            "electionDate" : ISODate("2019-07-28T00:30:03Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "m103:27002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 29,
            "optime" : {
                "ts" : Timestamp(1564273895, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1564273895, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-07-28T00:31:35Z"),
            "optimeDurableDate" : ISODate("2019-07-28T00:31:35Z"),
            "lastHeartbeat" : ISODate("2019-07-28T00:31:35.585Z"),
            "lastHeartbeatRecv" : ISODate("2019-07-28T00:31:34.079Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.103.100:27001",
            "syncSourceHost" : "192.168.103.100:27001",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "m103:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 24,
            "optime" : {
                "ts" : Timestamp(1564273895, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1564273895, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-07-28T00:31:35Z"),
            "optimeDurableDate" : ISODate("2019-07-28T00:31:35Z"),
            "lastHeartbeat" : ISODate("2019-07-28T00:31:35.584Z"),
            "lastHeartbeatRecv" : ISODate("2019-07-28T00:31:34.830Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "m103:27002",
            "syncSourceHost" : "m103:27002",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1564273895, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1564273895, 1),
        "signature" : {
            "hash" : BinData(0,"1jZ4KFx66TM7fTkdrG8KDvq0NQs="),
            "keyId" : NumberLong("6718504834464481282")
        }
    }
}

And now the validation output:
vagrant@m103:~$ validate_lab_initialize_local_replica_set

Client experienced a timeout when connecting to the database - check that mongod
processes are running on ports 27001, 27002 and 27003, and that the 'm103-admin'
user authenticates against the admin database.

The ports are OK, and the user too… I have also included the config file with the file locations/paths:

vagrant@m103:~$ cat mongod-repl-1.conf
net:
  bindIp: localhost,192.168.103.100
  port: 27001
storage:
  dbPath: /var/mongodb/db/1
systemLog:
  destination: file
  path: /var/mongodb/db/mongod1.log
  logAppend: true
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
replication:
  replSetName: m103-example
processManagement:
  fork: true
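
For comparison, the second node's file should differ only in the port, dbPath, and log path (a sketch, assuming the standard lab layout of ports 27001-27003 and /var/mongodb/db/1-3; node 3 is analogous):

net:
  bindIp: localhost,192.168.103.100
  port: 27002
storage:
  dbPath: /var/mongodb/db/2
systemLog:
  destination: file
  path: /var/mongodb/db/mongod2.log
  logAppend: true
security:
  authorization: enabled
  keyFile: /var/mongodb/pki/m103-keyfile
replication:
  replSetName: m103-example
processManagement:
  fork: true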

Does anyone know what I am doing wrong?
Best regards

Are the ports correct?
Are you able to log in with the user?
Please follow the steps exactly as given in your lab. Sometimes, when copying config files from previous labs, you may end up using wrong values.
Please double-check.
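
For example, you can spot-check that a mongod answers on each port with a direct connection (a sketch; adjust the credentials to your setup):

# ping each node directly; each should return { "ok" : 1 }
for port in 27001 27002 27003; do
  mongo --port $port -u "m103-admin" -p "m103-pass" \
        --authenticationDatabase "admin" --eval "db.runCommand({ ping: 1 })"
done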

Hello, I used this to create the user:
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",
  roles: [
    { role: "root", db: "admin" }
  ]
})
and this string to connect to the cluster:
mongo --host "m103-example/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"

and I think it is connecting OK…
If you look at the log, you can even see the secondary nodes and their ports:

vagrant@m103:~$ mongo --host "m103-example/192.168.103.100:27001" -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.12
connecting to: mongodb://192.168.103.100:27001/?authSource=admin&gssapiServiceName=mongodb&replicaSet=m103-example
2019-07-28T02:29:37.149+0000 I NETWORK [thread1] Starting new replica set monitor for m103-example/192.168.103.100:27001
2019-07-28T02:29:37.150+0000 I NETWORK [thread1] Successfully connected to 192.168.103.100:27001 (1 connections now open to 192.168.103.100:27001 with a 5 second timeout)
2019-07-28T02:29:37.151+0000 I NETWORK [thread1] changing hosts to m103-example/192.168.103.100:27001,m103:27002,m103:27003 from m103-example/192.168.103.100:27001
2019-07-28T02:29:37.152+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to m103:27002 (1 connections now open to m103:27002 with a 5 second timeout)
2019-07-28T02:29:37.153+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to m103:27003 (1 connections now open to m103:27003 with a 5 second timeout)
Implicit session: session { "id" : UUID("cde8fa41-6e1e-4b34-968d-8682247dc699") }
MongoDB server version: 3.6.12
Server has startup warnings:
2019-07-28T00:29:32.994+0000 I STORAGE [initandlisten]
2019-07-28T00:29:32.994+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-07-28T00:29:32.994+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten]
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten]
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-07-28T00:29:33.537+0000 I CONTROL [initandlisten]
MongoDB Enterprise m103-example:PRIMARY>
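
As an extra check that the user really authenticates against the admin database, it can be queried directly from that shell (a sketch; it should print the user document rather than null):

MongoDB Enterprise m103-example:PRIMARY> db.getSiblingDB("admin").getUser("m103-admin")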

Is your replSetName correct?
Please check again; you can read the configured name straight from the shell, as shown below.
If you don't follow the lab requirements, validation will fail.
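
A quick way to see the name the set was initiated with (a sketch; the _id of the replica set configuration is the set name):

MongoDB Enterprise m103-example:PRIMARY> rs.conf()._id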


Oops! I'll modify the name, you're right!
The lab asks for m103-repl and I put m103-example! Sorry, I was only looking for port issues and user/pass issues…
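
For anyone hitting the same thing: the set name cannot simply be changed on a running replica set, because it is recorded in each node's local database as well as in the configs. In a disposable lab environment, one way to redo the setup under the right name (a minimal sketch, assuming the data can be thrown away and the configs live in the home directory; the node 2 and 3 filenames are hypothetical):

# fix the set name in all three config files
sed -i 's/m103-example/m103-repl/' mongod-repl-1.conf mongod-repl-2.conf mongod-repl-3.conf

# shut down the secondaries first, then the primary
mongo admin --port 27003 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval "db.shutdownServer()"
mongo admin --port 27002 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval "db.shutdownServer()"
mongo admin --port 27001 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" --eval "db.shutdownServer()"

# wipe the data directories so the old set name is gone
rm -rf /var/mongodb/db/1/* /var/mongodb/db/2/* /var/mongodb/db/3/*

# restart all three nodes and redo the setup
mongod -f mongod-repl-1.conf
mongod -f mongod-repl-2.conf
mongod -f mongod-repl-3.conf
mongo --port 27001 --eval "rs.initiate()"
# then recreate m103-admin via the localhost exception and rs.add() the other two members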
