How to change a replica set name?

In my original node1.conf file I had the replSetName set to “m103-example”. However, when I shut down the nodes, update the replSetName, and start the mongod processes again, the replica set name does not change. Instead, the nodes do not connect to the replica set, and running rs.isMaster() reports “Does not have a valid replica set config”.

I have tried to use:

  1. Start all of the nodes in non-replicated mode.
  • Stop mongod on each server.
  • Start mongod back up. If you use /etc/mongod.conf, remove the replication section; if you don’t, omit the --replSet option to mongod.
  2. Flush the local database, where the replica set configuration is cached.
  • On each server, open a mongo shell as the admin user and run use local; db.dropDatabase().
  3. Start all of the nodes again in replicated mode.
  • Stop mongod on each server.
  • If you use /etc/mongod.conf, add the replication section back in with the new name and start mongod. If not, start mongod with --replSet <new-name>.
  4. Initialize the replica set.
  • Open a mongo shell as the admin user on one of the nodes (it will become the new primary).
  • Run rs.initiate(). Do NOT pass any arguments to rs.initiate(); it will fail with an error. Any other config you want to set can be changed with rs.reconfig() later.
  • For each secondary, run rs.add('[secondary.host.name]') to add it to the replica set.
  • Wait for the secondaries to come in sync.

to change the replica set name, but it did not work.
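The procedure above can be sketched as a sequence of shell commands. The config filenames, ports, and secondary address below are hypothetical (based on the node1.conf shown later, with node1 on 27011); the script only collects and prints the commands for review rather than executing them, since the exact hosts and auth options (e.g. -u/-p with a keyFile deployment) depend on your cluster:

```shell
# Collect the planned commands instead of running them, so they can be reviewed.
PLAN=""
plan() { PLAN="$PLAN$*
"; }

# 1. Restart each node in non-replicated mode (replication section removed from
#    the config file, or --replSet omitted on the command line).
plan "mongod --config /etc/node1-norepl.conf   # hypothetical no-replication config"

# 2. On each node, drop the cached replica set config in the local database.
plan "mongo admin --port 27011 --eval 'db.getSiblingDB(\"local\").dropDatabase()'"

# 3. Restart each node with the new replica set name in its config.
plan "mongod --config node1.conf   # replication.replSetName: m103-repl"

# 4. On one node only: initiate with no arguments, then add each secondary.
plan "mongo admin --port 27011 --eval 'rs.initiate()'"
plan "mongo admin --port 27011 --eval 'rs.add(\"192.168.103.100:27012\")'"

printf '%s' "$PLAN"
```

Reviewing the plan first is deliberate: step 2 destroys the cached replica set config, so you want to be sure the non-replicated restarts in step 1 actually happened on every node before running it.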

original node1.conf

storage:
  dbPath: /var/mongodb/db/node1
net:
  bindIp: 192.168.103.100,localhost
  port: 27011
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node1/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-example

new node1.conf

sharding:
  clusterRole: shardsvr
storage:
  dbPath: /var/mongodb/db/node1
  wiredTiger:
    engineConfig:
      cacheSizeGB: .1
net:
  bindIp: 192.168.103.100,localhost
  port: 27011
security:
  keyFile: /var/mongodb/pki/m103-keyfile
systemLog:
  destination: file
  path: /var/mongodb/db/node1/mongod.log
  logAppend: true
processManagement:
  fork: true
replication:
  replSetName: m103-repl
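For what it's worth, the replication-related change between the two files is only the replSetName value (the new file also adds a sharding section and a WiredTiger cache limit, which don't affect the set name). A sketch of applying and verifying that one-line edit from the shell, using a throwaway copy at a hypothetical temp path so the real config stays untouched:

```shell
# Build a minimal stand-in for the replication section of node1.conf.
cat > /tmp/node1-demo.conf <<'EOF'
replication:
  replSetName: m103-example
EOF

# Swap in the new replica set name and confirm the edit took effect.
sed -i 's/replSetName: m103-example/replSetName: m103-repl/' /tmp/node1-demo.conf
grep 'replSetName' /tmp/node1-demo.conf
```

Editing the file alone is not enough, of course: the old name is still cached in each node's local database, which is why the flush-and-reinitiate procedure is needed at all.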

The easiest way is:

  1. Make sure all mongod processes are stopped.
  2. Remove all the files from the dbPath directories specified in the config files.
  3. Recreate the user.
  4. Run the normal rs.initiate() and rs.add().

Please note that you will lose all the data in your databases. I would not do that in a production environment without taking a backup first.
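Given how destructive this is, the wipe-and-reinitialize approach can be sketched the same way, as a printed plan to review before running anything. The node2/node3 dbPaths and the 27012 secondary address are assumptions extrapolated from the node1.conf above:

```shell
# Collect the planned commands instead of running them (this plan DELETES DATA).
PLAN=""
plan() { PLAN="$PLAN$*
"; }

# Steps 1-2: with every mongod stopped, clear each node's dbPath.
for node in node1 node2 node3; do
  plan "rm -rf /var/mongodb/db/$node/*"
done

# Steps 3-4: start the nodes with the new config, recreate the admin user on
# the node that will become primary, then initiate and add members as usual.
plan "mongod --config node1.conf"
plan "mongo admin --port 27011 --eval 'rs.initiate()'"
plan "mongo admin --port 27011 --eval 'rs.add(\"192.168.103.100:27012\")'"

printf '%s' "$PLAN"
```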