Cannot add members to replica set?

I am attempting to set up a 3-node replica set.
All 3 mongod processes are running on separate hosts, and name resolution is configured via /etc/hosts on all nodes.
I created mongo-admin, mongo-root and cluster-admin users in the admin db on node1.
Authentication is enabled in the mongod.conf on each node.
I was able to run rs.initiate() on the first node without issue, logged in as mongo-root.
Now, when trying to add the other nodes with rs.add, I am getting the error below:
MongoDB Enterprise rep1:PRIMARY> rs.add("")
{
    "operationTime" : Timestamp(1558367519, 1),
    "ok" : 0,
    "errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded:; the following nodes did not respond affirmatively: failed with Authentication failed.",
    "code" : 74,
    "codeName" : "NodeNotFound",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1558367519, 1),
        "signature" : {
            "hash" : BinData(0,"ABmzlIOz94CegUlFfqsWYjSK3gw="),
            "keyId" : NumberLong("6692111607595532289")
        }
    }
}

I tried running rs.add while logged in as cluster-admin, which has the clusterAdmin role, but I still get the same error.

When mongod processes run on different hosts, they need to authenticate to each other. That is another level of complexity which, I think, is why the MongoDB staff wrote the course using different ports on the same machine; in that case, cluster membership does not need to be authenticated.

You may read the MongoDB documentation on internal authentication to get some insight on how to do that.
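For reference, a minimal sketch of the mongod.conf sections involved (the keyFile path is an assumption; the replica set name rep1 is taken from this thread's shell prompt, and the same keyfile must be present on every node):

```yaml
# Sketch of the security/replication sections of mongod.conf.
# The keyFile path is illustrative; adjust it for your hosts.
security:
  authorization: enabled        # client authentication
  keyFile: /etc/mongodb/keyfile # internal (member-to-member) authentication
replication:
  replSetName: rep1
```

With keyFile set, the members authenticate to each other with that shared secret, which is exactly the step that fails with "failed with Authentication failed" when the copies differ.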

Yes, I have set up the keyfile, which exists on all 3 hosts in the same location, and permissions are set for the mongod user.
The keyfile is referenced in my mongod.conf.
I am not sure if this is related, but on my first attempt to run rs.add to add the second node I was initially getting a host-unreachable error. I realized that firewalld was blocking the port, so I have since disabled firewalld, and now I am getting this authentication error instead.
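For anyone following along, this is roughly how a keyfile is generated and secured (the path below is illustrative, not the one from this thread; mongod refuses to start if the file is group- or world-readable):

```shell
# Generate a random keyfile (path is illustrative).
keyfile=/tmp/mongodb-keyfile
rm -f "$keyfile"
openssl rand -base64 756 > "$keyfile"

# mongod requires restrictive permissions (400 or 600) and ownership
# by the user running mongod (e.g. chown mongod:mongod on RHEL-style hosts).
chmod 400 "$keyfile"

# Copy the *same* file to every node, e.g.:
#   scp "$keyfile" node2:/tmp/mongodb-keyfile
ls -l "$keyfile"
```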
I thought this was supposed to be easy to set up?
Oh well, I've been at this since Friday.

I am out of ideas. I would verify that the replica set names are all the same; a typo is always possible.

Thanks, they are all the same.
When I log in to the mongo shell on the second and third nodes, they think they are secondaries, but their status is failed:
MongoDB Enterprise > show dbs
2019-05-20T13:23:15.234-0400 E QUERY [js] Error: listDatabases failed:{
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk"
} :

I would try to:

  1. Restart all mongod without authentication.
  2. Do the rs.add calls.
  3. Restart all mongod with authentication.

When using a keyfile, authentication is enabled automatically.
I had already tried that route, but thanks anyway.

The keyfiles were different. I am not sure how, because I generated the file on node1 and simply scp'd it to the other nodes.
From now on, when using keyfiles, I'll be sure to run an md5 checksum on them before trying to initialize the replica set.
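That checksum comparison can be sketched like this (the file names below are stand-ins created for the demo; in a real deployment you would compare the output of something like `ssh nodeN md5sum /path/to/keyfile` across all three nodes):

```shell
# Sketch: verify two copies of a keyfile are byte-identical.
# The two files here are stand-ins; on real hosts you would run
# md5sum against the actual keyfile on each node.
printf 'example-key-material\n' > /tmp/key-a
cp /tmp/key-a /tmp/key-b

a=$(md5sum /tmp/key-a | awk '{print $1}')
b=$(md5sum /tmp/key-b | awk '{print $1}')

if [ "$a" = "$b" ]; then
  echo "keyfiles match"
else
  echo "keyfiles differ"
fi
```

Any mismatch means the member-to-member handshake will fail with exactly the "Authentication failed" error seen earlier in this thread.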

Is everything working now?