Homework 1.2 setup-hw-1.2.sh initialize fails

DISREGARD - I WAS ON THE WRONG VM.

setup-hw-1.2.sh initialize fails:

vagrant@m103:/shared$ ./setup-hw-1.2.sh
about to fork child process, waiting until server is ready for connections.
forked process: 9321
child process started successfully, parent exiting
about to fork child process, waiting until server is ready for connections.
forked process: 9350
child process started successfully, parent exiting
about to fork child process, waiting until server is ready for connections.
forked process: 9379
child process started successfully, parent exiting
MongoDB shell version v3.6.10
connecting to: mongodb://127.0.0.1:31120/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("29c4ec03-8957-471c-9700-0be87ea36758") }
MongoDB server version: 3.6.10
{
    "ok" : 0,
    "errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: m103:31121 failed with Connection refused, m103:31122 failed with Connection refused",
    "code" : 74,
    "codeName" : "NodeNotFound"
}
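
Since the quorum check reported "Connection refused" for 31121 and 31122, a quick sanity check is to ping each of those ports directly from the VM (a rough sketch, not part of the homework script):

mongo --port 31121 --eval 'printjson(db.runCommand({ ping: 1 }))'
mongo --port 31122 --eval 'printjson(db.runCommand({ ping: 1 }))'

A reachable node answers with { "ok" : 1 }; a refused connection fails the same way the quorum check did.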

All 3 processes had started successfully:

vagrant@m103:/shared$
vagrant@m103:/shared$ ps -ef|grep mongo
vagrant 9321 1 1 21:18 ? 00:00:01 mongod --dbpath /home/vagrant/M310-HW-1.2/r0 --logpath /home/vagrant/M310-HW-1.2/r0/mongo.log.log --port 31120 --replSet TO_BE_SECURED --fork
vagrant 9350 1 0 21:18 ? 00:00:01 mongod --dbpath /home/vagrant/M310-HW-1.2/r1 --logpath /home/vagrant/M310-HW-1.2/r1/mongo.log.log --port 31121 --replSet TO_BE_SECURED --fork
vagrant 9379 1 1 21:18 ? 00:00:01 mongod --dbpath /home/vagrant/M310-HW-1.2/r2 --logpath /home/vagrant/M310-HW-1.2/r2/mongo.log.log --port 31122 --replSet TO_BE_SECURED --fork
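
Even with all three processes running, the per-node logs (the paths shown in the ps output above) are the first place to look for why 31121 and 31122 refused connections, for example:

tail -n 50 /home/vagrant/M310-HW-1.2/r1/mongo.log.log
tail -n 50 /home/vagrant/M310-HW-1.2/r2/mongo.log.log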

I shut down all 3 and went through the process manually per the lesson:

openssl rand -base64 755 > /home/vagrant/M310-HW-1.2/mongodb-keyfile
chmod 400 /home/vagrant/M310-HW-1.2/mongodb-keyfile

mongod --dbpath "/home/vagrant/M310-HW-1.2/r0" --logpath "/home/vagrant/M310-HW-1.2/r0/mongo.log.log" --port 31120 --replSet TO_BE_SECURED --fork --keyFile /home/vagrant/M310-HW-1.2/mongodb-keyfile

mongod --dbpath "/home/vagrant/M310-HW-1.2/r1" --logpath "/home/vagrant/M310-HW-1.2/r1/mongo.log.log" --port 31121 --replSet TO_BE_SECURED --fork --keyFile /home/vagrant/M310-HW-1.2/mongodb-keyfile

mongod --dbpath "/home/vagrant/M310-HW-1.2/r2" --logpath "/home/vagrant/M310-HW-1.2/r2/mongo.log.log" --port 31122 --replSet TO_BE_SECURED --fork --keyFile /home/vagrant/M310-HW-1.2/mongodb-keyfile
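
The shell commands below were then run against the node on port 31120; roughly this sequence per the lesson (the rs.initiate() and use admin steps are implied rather than shown verbatim, so treat this as a sketch):

mongo --port 31120
rs.initiate()
use admin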

db.createUser({user: 'admin', pwd: 'webscale', roles: ['root']})

db.auth('admin', 'webscale')

rs.add("127.0.0.1:31121")
rs.add("127.0.0.1:31122")
rs.status()
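
As a quick check from a fresh shell that the admin user and all three members are healthy, something like this works (a sketch reusing the port and credentials above):

mongo --port 31120 -u admin -p webscale --authenticationDatabase admin --eval 'printjson(rs.status().members.map(function(m) { return m.stateStr }))'

Once the set has settled it should print [ "PRIMARY", "SECONDARY", "SECONDARY" ].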

Everything worked perfectly; however, the verification fails:

{ unauthorizedStatus: {"operationTime":{"$timestamp":{"t":1555280790,"i":1}},"ok":0,"errmsg":"there are no users authenticated","code":13,"$clusterTime":{"clusterTime":{"$timestamp":{"t":1555280790,"i":1}},"signature":{"hash":{"$binary":"H4REz17tnyekTP6b3bLvlwfkPXA=","$type":"00"},"keyId":{"$numberLong":"6679866183388233730"}}}}, memberStatuses: ["PRIMARY","SECONDARY","SECONDARY"] }

rs.status() output:

MongoDB Enterprise TO_BE_SECURED:PRIMARY> rs.status()
{
    "set" : "TO_BE_SECURED",
    "date" : ISODate("2019-04-14T22:43:30.238Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1555281801, 1),
            "t" : NumberLong(2)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1555281801, 1),
            "t" : NumberLong(2)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1555281801, 1),
            "t" : NumberLong(2)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1555281801, 1),
            "t" : NumberLong(2)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "localhost:31120",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1933,
            "optime" : {
                "ts" : Timestamp(1555281801, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2019-04-14T22:43:21Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1555279878, 1),
            "electionDate" : ISODate("2019-04-14T22:11:18Z"),
            "configVersion" : 3,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "127.0.0.1:31121",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1095,
            "optime" : {
                "ts" : Timestamp(1555281801, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1555281801, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2019-04-14T22:43:21Z"),
            "optimeDurableDate" : ISODate("2019-04-14T22:43:21Z"),
            "lastHeartbeat" : ISODate("2019-04-14T22:43:29.970Z"),
            "lastHeartbeatRecv" : ISODate("2019-04-14T22:43:28.690Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "localhost:31120",
            "syncSourceHost" : "localhost:31120",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "127.0.0.1:31122",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1081,
            "optime" : {
                "ts" : Timestamp(1555281801, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1555281801, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2019-04-14T22:43:21Z"),
            "optimeDurableDate" : ISODate("2019-04-14T22:43:21Z"),
            "lastHeartbeat" : ISODate("2019-04-14T22:43:30.196Z"),
            "lastHeartbeatRecv" : ISODate("2019-04-14T22:43:28.803Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "localhost:31120",
            "syncSourceHost" : "localhost:31120",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 3
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1555281801, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1555281801, 1),
        "signature" : {
            "hash" : BinData(0,"/uG+KKo3A1QmoOT/zf18bm3C1OE="),
            "keyId" : NumberLong("6679866183388233730")
        }
    }
}

I had the same issue with 3.6.
It looks like the IP binding policy has changed.
I did use the --bind_ip_all parameter for mongod.
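
For reference, restarting a node with --bind_ip_all (available since 3.6, which otherwise binds to localhost only) would look roughly like this, reusing the paths and keyfile from above; repeat for r1/31121 and r2/31122:

mongod --dbpath /home/vagrant/M310-HW-1.2/r0 --logpath /home/vagrant/M310-HW-1.2/r0/mongo.log.log --port 31120 --replSet TO_BE_SECURED --fork --keyFile /home/vagrant/M310-HW-1.2/mongodb-keyfile --bind_ip_all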