Wrong replica set name (replSetName)

Hi,

For Lab - Initiate a Replica Set Locally, I specified the wrong replSetName in the config file, and I believe that is why the validation script is failing.
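For reference, the set name lives under replication: in the mongod config file, and all three nodes must use the same value; the shell prompts later in this thread show this lab's set is named m103-repl. A minimal excerpt of the relevant YAML (assuming the lab's expected name):

# mongod-repl-1.conf (excerpt) -- the name all three nodes must share
replication:
  replSetName: m103-repl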

Do I have to remove all of them and restart from the beginning, or can I just stop the replication and fix it from that end?

Please check this link
Last section

Removing a Member from Replica Set or Replica Set as Whole

Easiest would be to stop/kill mongod and remove all log/db dirs and start again
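Something along these lines (a sketch based on the paths used in this lab; double-check your dbPath before deleting anything, and note the .conf files sit in the same directories, so save or recreate them):

# shut each node down cleanly
mongod --config /var/mongodb/db/1/mongod-repl-1.conf --shutdown
mongod --config /var/mongodb/db/2/mongod-repl-2.conf --shutdown
mongod --config /var/mongodb/db/3/mongod-repl-3.conf --shutdown
# wipe the data and log directories (this also removes the .conf files
# if they live here -- move them out first)
rm -rf /var/mongodb/db/1/* /var/mongodb/db/2/* /var/mongodb/db/3/*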

Thanks, I’ll give that a go

Hi,

Tried to follow the instruction in

Removing a Member from Replica Set or Replica Set as Whole

Unfortunately, after I shut down NODE2 and NODE3, NODE1 for some reason decided it was a SECONDARY, so I couldn't run rs.remove(). Not sure what causes that. When I started NODE2 and NODE3 again, NODE1 showed as PRIMARY again :face_with_raised_eyebrow:
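(Side note: rs.remove() has to be run on the PRIMARY, which is why it failed here. A quick way to check a node's state first, with the host:port below just a placeholder:)

rs.isMaster().ismaster      // true only on the primary
rs.status().myState         // 1 = PRIMARY, 2 = SECONDARY
rs.remove("m103:27003")     // only valid while this node is primary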

Anyway, I’ve decided to just do what you suggested below :slight_smile:

Easiest would be to stop/kill mongod and remove all log/db dirs and start again

Hopefully it isn't going to be like this in a real environment. There is definitely something making it flip between PRIMARY and SECONDARY; all I was doing at the time was stopping/starting NODE2 and NODE3.

FYI, I’ve passed the validation already.

Out of curiosity though, here is something similar to what happens when I shut down NODE2 and NODE3 while attempting to do a remove. Hopefully this is not an issue for this course.

vagrant@m103:~$ ps -ef | grep mongo
vagrant  10321      1  4 03:28 ?        00:07:05 mongod --config /var/mongodb/db/1/mongod-repl-1.conf
vagrant  13009      1  5 06:20 ?        00:00:11 mongod --config /var/mongodb/db/2/mongod-repl-2.conf
vagrant  13099      1  5 06:20 ?        00:00:11 mongod --config /var/mongodb/db/3/mongod-repl-3.conf
vagrant  13225   7659  0 06:23 pts/3    00:00:00 grep --color=auto mongo
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("465ea130-8421-40d5-a606-78b4eb89fa73") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ mongod --config /var/mongodb/db/2/mongod-repl-2.conf --shutdown
killing process with pid: 13009
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("44fcc93f-fa13-4fc4-af59-9fddf0a97a2a") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:PRIMARY>
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("9c193712-2d10-432a-a806-86ced74a07ab") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$ mongod --config /var/mongodb/db/3/mongod-repl-3.conf --shutdown
killing process with pid: 13099
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a5007d14-95aa-4799-8ebf-d25af62b1896") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:PRIMARY>
MongoDB Enterprise > exit
bye
2019-04-21T06:27:44.379+0000 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27001 (127.0.0.1) failed
2019-04-21T06:27:44.382+0000 I NETWORK  [thread1] reconnect 127.0.0.1:27001 (127.0.0.1) ok
vagrant@m103:~$
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("fb7df738-da22-498f-b635-0f489e773c6d") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:SECONDARY> exit
bye
vagrant@m103:~$

vagrant@m103:~$ ps -ef | grep mongo
vagrant 10321 1 4 03:28 ? 00:07:31 mongod --config /var/mongodb/db/1/mongod-repl-1.conf
vagrant 13329 7659 0 06:30 pts/3 00:00:00 grep --color=auto mongo
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("df4bd1b3-6f56-4cb3-9529-bdb0bfaba30d") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:SECONDARY> exit
bye
vagrant@m103:~$ mongod --config /var/mongodb/db/2/mongod-repl-2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 13336
child process started successfully, parent exiting
vagrant@m103:~$ mongod --config /var/mongodb/db/3/mongod-repl-3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 13426
child process started successfully, parent exiting
vagrant@m103:~$ mongo --port 27001
MongoDB shell version v3.6.12
connecting to: mongodb://127.0.0.1:27001/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c5506839-15cf-4f3f-b5a9-7818b64050d4") }
MongoDB server version: 3.6.12
MongoDB Enterprise m103-repl:PRIMARY> exit
bye
vagrant@m103:~$

Yes, this is expected behaviour of the cluster.
In a 3-node replica set, a majority of members must be reachable to sustain a primary: floor(3/2) + 1 = 2.
Since you shut down 2 of the 3 nodes, only 1 was left, so the primary automatically stepped down to secondary.
When you brought the other 2 nodes back up, a majority existed again and you saw a primary again.
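The arithmetic, as plain mongo-shell JavaScript, just to illustrate:

var n = 3;                             // members in the set
var majority = Math.floor(n / 2) + 1;  // 2 -- votes needed to elect/keep a primary
// with NODE2 and NODE3 down, only 1 member is reachable and 1 < 2,
// so NODE1 steps down to SECONDARY until a majority returns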

In prod we don't do this. There are proper procedures to add/remove nodes or reconfigure the set.
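For example, the usual shell helpers, run against the primary (the host:port values below are placeholders, not this lab's):

rs.add("m103:27004")                // add a new member
rs.remove("m103:27003")             // remove a member cleanly
cfg = rs.conf()                     // reconfigure, e.g. change a member's host
cfg.members[2].host = "m103:27013"
rs.reconfig(cfg)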

How to tear down the whole cluster

Thanks Rama. I thought maybe there was just a rename command of some sort, or a reconfig option to change the set name or port number.

All good then, I can proceed further.