Lab 3.2. Shard a Collection: problem sharding the collection

I have chosen "sku" as the shard key:

MongoDB Enterprise mongos> db.products.createIndex({"sku": 1})

{
        "raw" : {
                "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003" : {
                        "createdCollectionAutomatically" : true,
                        "numIndexesBefore" : 1,
                        "numIndexesAfter" : 2,
                        "ok" : 1
                }
        },
        "ok" : 1,
        "operationTime" : Timestamp(1575979233, 8),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575979233, 8),
                "signature" : {
                        "hash" : BinData(0,"vWDvKJbfRM1FHTvrb7X8gK83Zig="),
                        "keyId" : NumberLong("6766618509813219355")
                }
        }
}

But now I get an error when trying to shard the collection:

MongoDB Enterprise mongos> db.adminCommand({shardCollection: "m103.products", key: {sku: 1}})

{
        "ok" : 0,
        "errmsg" : "Please create an index that starts with the proposed shard key before sharding the collection",
        "code" : 72,
        "codeName" : "InvalidOptions",
        "operationTime" : Timestamp(1575979318, 7),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575979318, 7),
                "signature" : {
                        "hash" : BinData(0,"PFLqbQTcYoRJRU6VSfmxAyFfI3U="),
                        "keyId" : NumberLong("6766618509813219355")
                }
        }
}

OK, I forgot to run use m103 first, so the index was created in the wrong database. Solved.
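For anyone hitting the same error, here is a minimal sketch of the working sequence (assuming the products data has already been imported into m103 and sharding has been enabled for that database in an earlier lab step):

MongoDB Enterprise mongos> use m103
MongoDB Enterprise mongos> db.products.createIndex({ "sku": 1 })              // the shard key needs a supporting index first
MongoDB Enterprise mongos> sh.shardCollection("m103.products", { "sku": 1 })  // equivalent to the adminCommand form above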

But now, when running the validation script:

Incorrect number of documents imported - make sure you import the entire
dataset.

I think I got them right:

2019-12-10T11:49:40.300+0000 imported 516784 documents

It just needs a little time to balance out. Give it around 10 minutes.
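If you want to confirm the balancer is actually doing something while you wait, these are standard read-only shell helpers you can run from mongos:

MongoDB Enterprise mongos> sh.getBalancerState()      // true if the balancer is enabled
MongoDB Enterprise mongos> sh.isBalancerRunning()     // true while a balancing round is in progress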

Just to make sure:

MongoDB Enterprise mongos> sh.status()

--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5de7dbafc9665f24c05ef223")
  }
  shards:
        {  "_id" : "m103-repl",  "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003",  "state" : 1 }
        {  "_id" : "m103-repl-2",  "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006",  "state" : 1 }
  active mongoses:
        "3.6.15" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                2 : Success
  databases:
        {  "_id" : "applicationData",  "primary" : "m103-repl",  "partitioned" : false }
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                m103-repl       1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : m103-repl Timestamp(1, 0)
        {  "_id" : "m103",  "primary" : "m103-repl-2",  "partitioned" : true }
                m103.products
                        shard key: { "sku" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                m103-repl       1
                                m103-repl-2     2
                        { "sku" : { "$minKey" : 1 } } -->> { "sku" : 23153496 } on : m103-repl Timestamp(2, 0)
                        { "sku" : 23153496 } -->> { "sku" : 28928914 } on : m103-repl-2 Timestamp(2, 1)
                        { "sku" : 28928914 } -->> { "sku" : { "$maxKey" : 1 } } on : m103-repl-2 Timestamp(1, 2)
        {  "_id" : "test",  "primary" : "m103-repl",  "partitioned" : false }
        {  "_id" : "testDatabase",  "primary" : "m103-repl",  "partitioned" : false }

Does everything look OK?

Based on what you’ve seen in the lectures, what do you think?

Not sure whether the config database needs a shard key, let alone an _id shard key.

Yeah, it's fine. MongoDB shards config.system.sessions internally, so you don't need to do anything there.
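If you want to verify that yourself, the metadata for that internal collection can be read straight from the config database (a read-only lookup; config.collections is where the shard keys of all sharded collections are recorded):

MongoDB Enterprise mongos> use config
MongoDB Enterprise mongos> db.collections.findOne({ _id: "config.system.sessions" })   // the "key" field shows the { _id: 1 } shard key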

Hi @JavierBlanco,

We do not recommend sharing the answers, or any information that would lead to the answer, in our discussion forum.

Please re-run the validation script after some time so that the collection metadata gets updated. We are working on fixing this issue in our validation script.

Thanks,
Shubham Ranjan
Curriculum Support Engineer

Do I just need to wait 10 minutes after creating the index for this to work?

Hi @pmuthyala,

In your case, you are not able to shard the collection because you have not created an index on the shard key field.

You must create an index on the shard key field before sharding the collection.

Please execute the commands in the sequence mentioned in the lab.

Hope it helps!

If you have any other query then please feel free to get back to us.

Happy Learning :)

Thanks,
Shubham Ranjan
Curriculum Support Engineer

I don't understand how you can say that I did not create an index. Did you see all my commands? Let me try it again just for you:

Hi @pmuthyala,

This is what the error message says in the screenshot.

You have not created the index on the right database. As I can see in the screenshot, it was created on the admin database, but you are supposed to create it on the m103 database.

Thanks,
Shubham Ranjan
Curriculum Support Engineer

Pretty much what @Shubham_Ranjan said, and error messages don’t lie.
You created the index on the admin database.
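If anyone wants to double-check where an index actually landed, these standard shell calls make it obvious (same mongos session assumed):

MongoDB Enterprise mongos> db.getName()               // shows which database the shell is currently on
MongoDB Enterprise mongos> use m103
MongoDB Enterprise mongos> db.products.getIndexes()   // the { "sku": 1 } index should appear here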

Oh OK, so I was missing the step of switching from the admin database to m103. Now it looks like it worked. Pardon me for being verbose, I just want to be explicit to avoid confusion :). Now I am stuck here.

Hi @pmuthyala,

I would suggest you run the validator after 15-20 minutes so that the metadata gets updated and it reflects the correct count. The time duration could vary, so be a little patient with it.
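While waiting, one way to sanity-check that all 516784 documents were imported and see how they are spread across the shards (standard shell methods, run against mongos):

MongoDB Enterprise mongos> use m103
MongoDB Enterprise mongos> db.products.count()                 // may over-count briefly during or right after chunk migrations
MongoDB Enterprise mongos> db.products.getShardDistribution()  // per-shard document and chunk breakdown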

We are aware of this issue and we are working on it.

Thanks,
Shubham Ranjan
Curriculum Support Engineer