Hi @Ramachandra_37567, I commented out the dbpath because mongod doesn't start with it.
I have reviewed the node 2 config file; it starts, but exits after a few seconds. When I check the active mongod processes with ps … it doesn't appear in the results.

The node 2 log shows me this (below). I don't understand the problem and would like to understand it. It suggests running the command rs.reconfig(); is that the suitable solution?
Thx
2019-12-19T09:29:24.497+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] MongoDB starting : pid=3291 port=26002 dbpath=/var/mongodb/db/csrs2 64-bit host=m103
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] db version v3.6.15
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] git version: 18934fb5c814e87895c5e38ae1515dd6cb4c00f7
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] modules: enterprise
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] build environment:
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] distmod: ubuntu1404
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] distarch: x86_64
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-12-19T09:29:24.515+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod-shard-2.conf", net: { bindIp: "localhost,192.168.103.100", port: 26002 }, processManagement: { fork: true }, replication: { replSetName: "m103-csrs" }, security: { keyFile: "/var/mongodb/pki/m103-keyfile" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/var/mongodb/db/csrs2", wiredTiger: { engineConfig: { cacheSizeGB: 0.1 } } }, systemLog: { destination: "file", logAppend: true, path: "/var/mongodb/db/csrs2/mongod.log" } }
2019-12-19T09:29:24.515+0000 W - [initandlisten] Detected unclean shutdown - /var/mongodb/db/csrs2/mongod.lock is not empty.
2019-12-19T09:29:24.515+0000 I - [initandlisten] Detected data files in /var/mongodb/db/csrs2 created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-12-19T09:29:24.515+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2019-12-19T09:29:24.515+0000 I STORAGE [initandlisten]
2019-12-19T09:29:24.515+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-19T09:29:24.515+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-12-19T09:29:24.515+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=102M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-12-19T09:29:25.056+0000 I STORAGE [initandlisten] WiredTiger message [1576747765:56608][3291:0x7f8fc1e6dac0], txn-recover: Main recovery loop: starting at 11/256
2019-12-19T09:29:25.057+0000 I STORAGE [initandlisten] WiredTiger message [1576747765:57022][3291:0x7f8fc1e6dac0], txn-recover: Recovering log 11 through 12
2019-12-19T09:29:25.104+0000 I STORAGE [initandlisten] WiredTiger message [1576747765:104123][3291:0x7f8fc1e6dac0], txn-recover: Recovering log 12 through 12
2019-12-19T09:29:25.145+0000 I STORAGE [initandlisten] WiredTiger message [1576747765:145392][3291:0x7f8fc1e6dac0], txn-recover: Set global recovery timestamp: 0
2019-12-19T09:29:25.155+0000 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2019-12-19T09:29:25.155+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 3658 records totaling to 527885 bytes
2019-12-19T09:29:25.155+0000 I STORAGE [initandlisten] Scanning the oplog to determine where to place markers for truncation
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten]
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten]
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-12-19T09:29:25.157+0000 I CONTROL [initandlisten]
2019-12-19T09:29:25.160+0000 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
2019-12-19T09:29:25.161+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/mongodb/db/csrs2/diagnostic.data'
2019-12-19T09:29:25.162+0000 I REPL [initandlisten] Rollback ID is 1
2019-12-19T09:29:25.163+0000 I REPL [initandlisten] No oplog entries to apply for recovery. appliedThrough and checkpointTimestamp are both null.
2019-12-19T09:29:25.164+0000 I NETWORK [initandlisten] listening via socket bound to 127.0.0.1
2019-12-19T09:29:25.164+0000 I NETWORK [initandlisten] listening via socket bound to 192.168.103.100
2019-12-19T09:29:25.164+0000 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-26002.sock
2019-12-19T09:29:25.164+0000 I NETWORK [initandlisten] waiting for connections on port 26002
2019-12-19T09:29:25.164+0000 E REPL [replexec-0] Locally stored replica set configuration is invalid; See http://www.mongodb.org/dochub/core/recover-replica-set-from-invalid-config for information on how to recover from this. Got "BadValue: Nodes being used for config servers must be started with the --configsvr flag" while validating { _id: "m103-csrs", version: 3, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "192.168.103.100:26001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "192.168.103.100:26002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "192.168.103.100:26003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5dfa1748d580fdc47cd7bdbd') } }
2019-12-19T09:29:25.164+0000 F - [replexec-0] Fatal Assertion 28544 at src/mongo/db/repl/replication_coordinator_impl.cpp 554
2019-12-19T09:29:25.164+0000 F - [replexec-0]
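
Re-reading the error, I notice the options line shows sharding: { clusterRole: "shardsvr" }, while the stored replica set config has configsvr: true and the BadValue message says config server nodes must be started with --configsvr. So maybe the fix is not rs.reconfig() but correcting /etc/mongod-shard-2.conf so node 2 starts as a config server? This is only my guess, untested:

```yaml
# /etc/mongod-shard-2.conf (relevant section only, rest unchanged)
sharding:
  clusterRole: configsvr   # was "shardsvr"; the m103-csrs replica set expects config servers
```

Then I would restart mongod with this config file and check with ps again whether the process stays up.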