sh.addShard() errmsg" Could not find host matching read preference..."

As in the title, I can't add a shard to the mongos. I've tried each of the nodes:

MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27001")
MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27002")
MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27003")

Any advice? Can I change the read preference to "primaryPreferred", and how do I do that?
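In case it helps to show exactly what I mean, the seed-list form of the command (same set name and hosts as above; just an illustration, I don't know whether the lab expects this form) would be roughly:

     // Seed-list form: list every member so mongos can discover the primary itself
     > sh.addShard("m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003")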


Hi Brian_18814,

It's hard to help with this little information. I would re-check the replica set configuration and see whether it matches the requirements mentioned in the lab. A few things that you can check (a quick verification sketch follows the list):

  • The name of your replica set is correct
  • The replica set is initiated and all nodes have been added to it
  • The bind_ip field is correct
  • All nodes are up and running
  • Check the authentication settings in the mongos configuration file
  • Check the mongos log file
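A quick way to verify the first few points from the mongo shell (host, port, and user below are just the usual lab values, adjust to your setup):

     // Connect to one of the shard members first, e.g.:
     //   mongo --host 192.168.103.100 --port 27001 -u m103-admin -p --authenticationDatabase admin

     > rs.conf()._id                                             // replica set name: should be "m103-repl"
     > rs.conf().members.map(function(m) { return m.host; })     // are all the members listed?
     > rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })   // exactly one PRIMARY?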

If nothing works, then please share the configuration so that I can help more.

Kanika


I've reconstructed my replica set: removed two nodes and added two more. I believe I now have a functional shard replica set, as well as the config server replica set. My mongos won't sh.addShard, though: "errmsg" : "Could not find host matching read preference { mode: "primary" } for set m103-repl"
I'm trying various things, such as shutting down all the shard servers and restarting the mongos. I've checked and rechecked rs.status() and rs.conf() to confirm that I have a primary. Strangely, the primary might have had an election without my intervention. Can that happen?

I built another replica set with the cluster role and successfully added it as the primary shard. I'll see what I can do about affirming the lab. I need to clean up a lot of nodes occupying ports 27001 through 27006, so I hope we learn how to clean up in coming lectures. I did learn how to cause a secondary to become a primary by forcing a priority reconfiguration, and how to force removal of nodes from a replica set (rough sketch below).
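Roughly, the two operations look like this (the member index, priority value, and host:port are just examples):

     // On the current primary: raise a secondary's priority and reconfigure,
     // which typically triggers an election in its favour
     > cfg = rs.conf()
     > cfg.members[1].priority = 2
     > rs.reconfig(cfg)                    // add { force: true } only if there is no primary

     // Remove a node from the set (example hostname:port)
     > rs.remove("192.168.103.100:27005")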

Now I need to rename my shard replica set for validation. :-<

Here is how I would do it:

  • Kill all mongod processes and just create new configuration files from scratch.
  • If you have no data that you need to save, clean the db path.
  • Clean up the replica set using the config db.
  • For the replica set, shut down each server and remove it from the replica set using rs.remove('...') (see the sketch after this list).
  • For sharding, remove all the shards, either with the removeShard admin command or with use config; db.shards.remove(...)
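A rough sketch of the shutdown and removal steps (hostnames, ports, and shard name below are just examples; editing the config database directly is a last resort on a throwaway lab cluster):

     // On each member you want to retire: connect to that member and shut it down cleanly
     > use admin
     > db.shutdownServer()

     // From the remaining primary: drop the retired member from the set configuration
     > rs.remove("192.168.103.100:27005")

     // Last resort, on a disposable lab cluster only: edit the config database from mongos
     > use config
     > db.shards.remove( { _id: "m103-repl-2" } )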

Good Luck!

Kanika


Hi guys,
I have the same problem and it's really frustrating… Did you manage to figure it out?

Regards,
Mihai

I'm still trying to figure it out. I created a sharded replica set on other ports, but I need to have the designated ports working in order to validate the lab. Kanika says to clean up the replica set using the config db and so on; I don't know where to begin. Each time I add the troubled ports to my replica set, they fail, becoming "unreachable." I can get rid of them through rs.conf() using a variable and the splice command, I think it was, but I need to add them back once I'm confident I'm starting fresh. |-{
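For reference, the splice approach I mean looks roughly like this (the member index and host:port are just examples):

     // Drop one member (index 2, i.e. the third one) from the set configuration
     > cfg = rs.conf()
     > cfg.members.splice(2, 1)        // remove one element starting at index 2
     > rs.reconfig(cfg)

     // Add the node back later, once it is healthy again (example hostname:port)
     > rs.add("192.168.103.100:27003")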

Hi Brian_18814,

I am sorry that you are having a tough time with the course. I have tried creating a full step-by-step post; here is the first one, for the replica set:

And of course, if you find anything missing, do let me know.

To remove a shard from a sharded cluster, follow these steps:

  • Connect to mongos

  • Authenticate using m103-admin

  • Run these commands:

       > use config
       > db.shards.find()
       { "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27003,m103.mongodb.university:27002", "state" : 1 }
       { "_id" : "m103-repl-2", "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006", "state" : 1 }
    

If you see your shards listed here and you want to remove one of them, say m103-repl-2, then:

     > use admin
     > db.adminCommand( { removeShard: "m103-repl-2" } )

And I would encourage you to look in the MongoDB documentation on removeShard for more information.

I hope it helps!

Kanika

I've finally got the successive ports 27001-27003 in my shard replica set, along with 27004 and 27006. But mongos isn't accepting my sh.addShard("m103-repl/192.168.103.100:27002"). It says it can't find a host matching read preference { mode: "primary" }… This is an error message I'm familiar with. All my node servers are running. Any ideas from here?

I did come across a funny case where rs.status() was reporting 27002 as the primary, but when I was in the 27002 mongo shell it was presenting a SECONDARY prompt. I couldn't find a primary anywhere else, so I forced 27002 to become primary by giving it a priority of 1.5. It seems to be working OK.

I reset the priority for the primary in the rs back to 1. I didn’t do an election, so it’s still the primary. I’ll try stepDown and see if I can have mongos recognize the new primary.
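For anyone following along, the step-down call I mean is roughly this, run on the current primary (the 60-second value is just the usual default):

     // Step down and do not seek re-election for 60 seconds,
     // giving the other members time to elect a new primary
     > rs.stepDown(60)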

I caused an election. 27004 is the new primary. More important, the mongos has accepted my shard. I now will attempt to remove the first shard.

Can't do it. I can't remove the first shard because it's the primary shard, and I can't move the primary with db.adminCommand( { movePrimary: <databaseName>, to: <newPrimaryShard> } ). My command is
db.adminCommand({movePrimary: "m103-repl-sh", to: "m103-repl"})

but mongos says the database m103-repl-sh can't be found. It's right there when I run db.shards.find().
I’m calling it a day. Another long one.
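For what it's worth, my understanding is that movePrimary expects a database name rather than a shard name, and config.databases lists each database with the shard that currently holds its primary. A rough sketch (the database name is only a placeholder):

     // From mongos: list the databases and which shard holds each one's primary
     > use config
     > db.databases.find()

     // Move a database's primary to the shard you want to keep (placeholder database name)
     > use admin
     > db.adminCommand( { movePrimary: "someDatabase", to: "m103-repl" } )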

I can't remove the extra shard in my cluster. It seems to be stuck in a "draining" state, reporting "ongoing" indefinitely. Is there some way I can clean out the mongos and rebuild the shard cluster with the same ports?
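As far as I can tell, re-running removeShard reports the draining progress, and the shard can't finish draining while any database still has it as its primary (shard name as earlier in the thread):

     // Re-run removeShard to check the draining progress
     > use admin
     > db.adminCommand( { removeShard: "m103-repl-2" } )
     // The response includes "remaining" : { "chunks" : ..., "dbs" : ... };
     // any database counted under "dbs" still has this shard as its primary
     // and needs a movePrimary before the state can change to "completed".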
