mlaunch error on "Understanding Explain Part 2"

Hi, I get an error when running the launch command from “Understanding Explain Part 2” in M201.

$ mlaunch init --single --sharded 2

In MongoDB 3.6 and above a Shard must be made up of a replica set. Please use --replicaset option when starting a sharded cluster.

What’s happening?

There might be existing mongod instances running on the ports that mlaunch is trying to use. Check which mongod processes are running and shut down those servers:
ps -ef | grep "[m]ongo"
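
If anything does show up, here are a couple of ways to shut it down (assuming the cluster was started by mlaunch from your current directory):

$ mlaunch stop     # stops every process mlaunch started under ./data
$ pkill mongod     # blunter fallback: kill any remaining mongod processes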

Yeah, nothing is running, but I still get this error.

$ ps -ef | grep "[m]ongo"

$ mlaunch init --single --sharded 2

In MongoDB 3.6 and above a Shard must be made up of a replica set. Please use --replicaset option when starting a sharded cluster.

Let me try it out on my end.

It looks like that video is a bit outdated. Replace --single with --replicaset.
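
So, keeping the rest of the command from the video the same, that should be:

$ mlaunch init --replicaset --sharded 2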

Okay, done. It worked with --replicaset, but I didn’t get two shards (only a single one) as demonstrated in the video. Thanks for your help!

Interesting! I got two shards when I ran it. It might still be prepping the second shard. Give it a moment.

Sorry, I do get two shards, but the distribution only uses a single shard.

MongoDB Enterprise mongos> db.people.getShardDistribution()

Shard shard01 at shard01/localhost:27018,localhost:27019,localhost:27020
data : 18.66MiB docs : 50474 chunks : 1
estimated data per chunk : 18.66MiB
estimated docs per chunk : 50474

Totals
data : 18.66MiB docs : 50474 chunks : 1
Shard shard01 contains 100% data, 100% docs in cluster, avg obj size on shard : 387B

The balancer might still be mid-round. Check again in a bit.
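
You can also check directly with the built-in shell helpers instead of guessing:

MongoDB Enterprise mongos> sh.getBalancerState()    // true if the balancer is enabled
MongoDB Enterprise mongos> sh.isBalancerRunning()   // true while a balancing round is in progress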

Hi @Kyle_89075,

Please share the output of the sh.status() command.

Let me know if you have any questions.

Thanks,
Sonali

Sure, it’s:

MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5d9b5d6d0565040775cfe6a9")
  }
  shards:
        {  "_id" : "shard01",  "host" : "shard01/localhost:27018,localhost:27019,localhost:27020",  "state" : 1 }
        {  "_id" : "shard02",  "host" : "shard02/localhost:27021,localhost:27022,localhost:27023",  "state" : 1 }
  active mongoses:
        "4.2.0" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
        {  "_id" : "m201",  "primary" : "shard01",  "partitioned" : true,  "version" : {  "uuid" : UUID("84d26570-7e48-4b7d-af49-a0c940d33366"),  "lastMod" : 1 } }
                m201.people
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard01	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard01 Timestamp(1, 0)

Still looks like only one shard is being used:

MongoDB Enterprise mongos> db.people.getShardDistribution()

Shard shard01 at shard01/localhost:27018,localhost:27019,localhost:27020
data : 18.66MiB docs : 50474 chunks : 1
estimated data per chunk : 18.66MiB
estimated docs per chunk : 50474

Totals
data : 18.66MiB docs : 50474 chunks : 1
Shard shard01 contains 100% data, 100% docs in cluster, avg obj size on shard : 387B

Hi @Kyle_89075,

Please try restarting the process from scratch. You will need to delete the --dbpath directories of the mongod processes, then restart the mongod processes, create the shards, and re-import the data, as sketched below.
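
Here is a minimal sketch of those steps, assuming the default ./data directory created by mlaunch, a mongos on port 27017, and the course dataset in a people.json file (adjust names and ports to your setup):

$ mlaunch stop                             # shut down the whole cluster
$ rm -rf data                              # delete the --dbpath directories
$ mlaunch init --replicaset --sharded 2    # recreate the two shards
$ mongoimport --port 27017 --db m201 --collection people --drop people.json
$ mongo --port 27017
MongoDB Enterprise mongos> sh.enableSharding("m201")
MongoDB Enterprise mongos> sh.shardCollection("m201.people", { "_id" : 1 })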

Then check the sh.status() output.

I hope this helps!
Please let me know if you have any questions.

Thanks,
Sonali

Another problem is that _id is a monotonically increasing value, so it’s not the best choice for a shard key; this was also mentioned in the “Basic Cluster Administration” course. So there’s a chance of this happening again. In this lecture you’ll notice that the split is 98% vs 2% between the shards.

No harm in trying again though.
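
One option if you redo the import, and this is my suggestion rather than anything from the lecture, is a hashed shard key, which spreads monotonically increasing values like _id evenly across chunks:

MongoDB Enterprise mongos> sh.shardCollection("m201.people", { "_id" : "hashed" })

You would need to do that on a freshly imported collection, since you can’t change the shard key once a collection is sharded.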

Yeah, I deleted the dbpath directories and did it over multiple times, but got the same result. Not a huge deal; I was just curious why it wasn’t balanced across both shards. Thanks.

I see. I will be taking “Basic Cluster Administration” starting next week. I tried a few times but never even got a 98/2 split between shards. Oh well.

Try one more time, but instead of _id use last_name as the shard key, and see what distribution you get.

Tried that. Still 100% on shard01.

sh.shardCollection("m201.people", {last_name: 1})
{
	"collectionsharded" : "m201.people",
	"collectionUUID" : UUID("38d500f8-e097-4d38-9725-8a0f1c43aa58"),
	"ok" : 1,
	"operationTime" : Timestamp(1570517800, 10),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1570517800, 10),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
MongoDB Enterprise mongos> db.people.getShardDistribution()

Shard shard01 at shard01/localhost:27018,localhost:27019,localhost:27020
 data : 18.66MiB docs : 50474 chunks : 1
 estimated data per chunk : 18.66MiB
 estimated docs per chunk : 50474

Totals
 data : 18.66MiB docs : 50474 chunks : 1
 Shard shard01 contains 100% data, 100% docs in cluster, avg obj size on shard : 387B
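
In case it helps, I also peeked at the chunk metadata directly (config.chunks is the internal collection the balancer works from):

MongoDB Enterprise mongos> use config
MongoDB Enterprise mongos> db.chunks.find({ "ns" : "m201.people" }).count()

That count comes back as 1, matching the chunks : 1 above, so I guess with a single chunk the balancer has nothing to move between shards.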

What do you get with this:
use admin
db.adminCommand({ listShards: 1 })

Here’s what I get:

MongoDB Enterprise mongos> use admin
switched to db admin
MongoDB Enterprise mongos> db.adminCommand({ listShards: 1 })
{
	"shards" : [
		{
			"_id" : "shard01",
			"host" : "shard01/localhost:27018,localhost:27019,localhost:27020",
			"state" : 1
		},
		{
			"_id" : "shard02",
			"host" : "shard02/localhost:27021,localhost:27022,localhost:27023",
			"state" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1570518367, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1570518367, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

You definitely have two shards. Let’s see sh.status().
And if you could wrap the result in triple backticks (```), it would preserve the formatting and make it easier to read.