Replica set with SSL and x.509 on MongoDB v3.2.22

Hi everyone,
I am facing an issue while configuring a 3-node replica set with SSL and the x.509 authentication mechanism.
I get the error below after initiating the replica set, when I try to add members to it:
{
    "ok" : 0,
    "errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: database:31130; the following nodes did not respond affirmatively: database:31131 failed with SSLHandshakeFailed",
    "code" : 74
}
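
For context, SSLHandshakeFailed during the quorum check means the members could not complete the TLS handshake with each other; the usual suspects are a certificate that does not validate against the CA file or whose subject/SAN does not match the hostname the members use (database here). A quick sanity check, assuming the same shared/certs paths used in the commands below:

# does the server certificate chain to the CA?
openssl verify -CAfile shared/certs/ca.pem shared/certs/server.pem

# does the subject (or a SAN entry) match the hostname the members use?
openssl x509 -in shared/certs/server.pem -noout -subject
openssl x509 -in shared/certs/server.pem -noout -text | grep -A1 "Subject Alternative Name"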

Should I upgrade MongoDB, or am I missing something else?
Here are the commands I used to launch the nodes:

mongod --replSet myReplSet --dbpath M310-HW-1.3/r0 --logpath M310-HW-1.3/r0/mongodb.log --port 31130 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile shared/certs/server.pem --sslCAFile shared/certs/ca.pem

mongod --replSet myReplSet --dbpath M310-HW-1.3/r1 --logpath M310-HW-1.3/r1/mongodb.log --port 31131 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile shared/certs/server.pem --sslCAFile shared/certs/ca.pem

mongod --replSet myReplSet --dbpath M310-HW-1.3/r2 --logpath M310-HW-1.3/r2/mongodb.log --port 31132 --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile shared/certs/server.pem --sslCAFile shared/certs/ca.pem
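
After the nodes are up, I connect to the first member over SSL and try to build the set; the error above comes from the rs.add() step. (The client PEM path below is a placeholder for whatever client certificate your setup uses.)

mongo --host database --port 31130 --ssl --sslPEMKeyFile shared/certs/client.pem --sslCAFile shared/certs/ca.pem

Then, in the mongo shell:

rs.initiate()
rs.add("database:31131")   // this is the call that returns the quorum error
rs.add("database:31132")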

Does your mongod.log give more details?
I think you are missing the --auth parameter.
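
If that is what is missing, the first member's startup command would look like this (the other two follow the same pattern):

mongod --replSet myReplSet --dbpath M310-HW-1.3/r0 --logpath M310-HW-1.3/r0/mongodb.log --port 31130 --fork --auth --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile shared/certs/server.pem --sslCAFile shared/certs/ca.pem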

Here is the log of the supposed primary node:

2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] MongoDB starting : pid=29918 port=31130 dbpath=/home/vagrant/M310-HW-1.3/r0 64-bit host=database
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] db version v3.2.22
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] modules: enterprise
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] build environment:
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten]     distmod: ubuntu1404
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten]     distarch: x86_64
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2020-09-14T12:13:30.217+0000 I CONTROL  [initandlisten] options: { net: { port: 31130, ssl: { CAFile: "shared/certs/ca.pem", PEMKeyFile: "shared/certs/server.pem", mode: "requireSSL" } }, processManagement: { fork: true }, replication: { replSet: "myReplSet" }, security: { clusterAuthMode: "x509" }, storage: { dbPath: "M310-HW-1.3/r0" }, systemLog: { destination: "file", path: "M310-HW-1.3/r0/mongodb.log" } }
2020-09-14T12:13:30.233+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten]
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten]
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-09-14T12:13:30.248+0000 I CONTROL  [initandlisten]
2020-09-14T12:13:30.254+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
2020-09-14T12:13:30.254+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2020-09-14T12:13:30.255+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/home/vagrant/M310-HW-1.3/r0/diagnostic.data'
2020-09-14T12:13:30.255+0000 I NETWORK  [initandlisten] waiting for connections on port 31130 ssl
2020-09-14T12:13:30.255+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2020-09-14T12:14:57.019+0000 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:58723 #1 (1 connection now open)
2020-09-14T12:14:57.024+0000 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
2020-09-14T12:14:57.025+0000 I ACCESS   [conn1] Unauthorized: not authorized on admin to execute command { getLog: "startupWarnings" }
2020-09-14T12:19:47.060+0000 I COMMAND  [conn1] initiate : no configuration specified. Using a default configuration for the set
2020-09-14T12:19:47.060+0000 I COMMAND  [conn1] created this configuration for initiation : { _id: "myReplSet", version: 1, members: [ { _id: 0, host: "database:31130" } ] }
2020-09-14T12:19:47.060+0000 I REPL     [conn1] replSetInitiate admin command received from client
2020-09-14T12:19:47.061+0000 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
2020-09-14T12:19:47.066+0000 I REPL     [conn1] ******
2020-09-14T12:19:47.067+0000 I REPL     [conn1] creating replication oplog of size: 1736MB...
2020-09-14T12:19:47.072+0000 I STORAGE  [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
2020-09-14T12:19:47.072+0000 I STORAGE  [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2020-09-14T12:19:47.072+0000 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for truncation
2020-09-14T12:19:47.075+0000 I REPL     [conn1] ******
2020-09-14T12:19:47.081+0000 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "myReplSet", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "database:31130", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f5f5fe3b9d7abc5d895a8e8') } }
2020-09-14T12:19:47.081+0000 I REPL     [ReplicationExecutor] This node is database:31130 in the config
2020-09-14T12:19:47.081+0000 I REPL     [ReplicationExecutor] transition to STARTUP2
2020-09-14T12:19:47.083+0000 I REPL     [conn1] Starting replication applier threads
2020-09-14T12:19:47.084+0000 I REPL     [ReplicationExecutor] transition to RECOVERING
2020-09-14T12:19:47.084+0000 I REPL     [ReplicationExecutor] transition to SECONDARY
2020-09-14T12:19:47.084+0000 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
2020-09-14T12:19:47.084+0000 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2020-09-14T12:19:47.089+0000 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
2020-09-14T12:19:47.089+0000 I REPL     [ReplicationExecutor] transition to PRIMARY
2020-09-14T12:19:48.088+0000 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2020-09-14T12:20:35.748+0000 I ACCESS   [conn1] Unauthorized: not authorized on local to execute command { count: "system.replset", query: {}, fields: {} }
2020-09-14T12:20:49.032+0000 I ACCESS   [conn1] Unauthorized: not authorized on local to execute command { count: "system.replset", query: {}, fields: {} }
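
To isolate the handshake failure, a manual TLS probe from this host against the member that did not respond (database:31131) should show the certificate chain and any verification error:

# look for "Verify return code: 0 (ok)" near the end of the output
openssl s_client -connect database:31131 -CAfile shared/certs/ca.pem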

Have you tried to restart your mongods with --auth?
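
If you go that route, the usual first step (a sketch; the username and password are placeholders) is to create an administrative user through the localhost exception before relying on --auth, since the log above shows the unauthorized errors that follow from having no users configured:

// connect to port 31130 from localhost, then:
use admin
db.createUser({
  user: "admin",
  pwd: "changeme",
  roles: [ { role: "root", db: "admin" } ]
})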

Hi @relmaazouz,

Please check your Discourse inbox and feel free to reach out if you have any questions.

Thanks,
Sonali