Duplicate key for shardIdentity

Hi all,
I made the mistake of running the example lectures with the labs and now have an issue with the Sharding lab.

I added the m103-example cluster to the sharding server, and then added the m103-repl cluster to the same sharding server. So I ended up with two shards, which the validation didn’t like.

I then attempted to remove the m103-example cluster following the example in this discussion but that failed to complete.

I then decided to rebuild the csrsX cluster by removing the csrsX folders and deleting the logs, restarting the vagrant environment, and starting the csrsX cluster.

That worked up to a point, however I now need to remove a duplicate key so I can add the shards.

The error seen now is:

MongoDB Enterprise mongos> sh.addShard("m103-repl/192.168.103.100:27001")
{
  "ok" : 0,
  "errmsg" : "E11000 duplicate key error collection: admin.system.version index: _id_ dup key: { : \"shardIdentity\" }",
  "code" : 11000,
  "codeName" : "DuplicateKey",
  "operationTime" : Timestamp(1548468891, 1),
  "$clusterTime" : {
    "clusterTime" : Timestamp(1548468891, 1),
    "signature" : {
      "hash" : BinData(0,"Z7cG4sx9W5MtnYHhZoo0MRZ+Y2E="),
      "keyId" : NumberLong("6650616652943589402")
    }
  }
}

I’ve been looking through the documentation but I’m not getting far.
Any help appreciated.
Cheers
Peter

Could you drop the collection you have sharded, then reload the data, create a new index, and then a new shard key?

@aussiepete2016 A couple of things:

  1. Post the output of "sh.status()" so we can see what the system thinks your current shards are.

  2. Try sh.removeShard("m103-repl") — note that removeShard takes the shard name, not the seed list — to see if that works.

@NMullins I think it may be the shard itself, not a collection, that is having issues.

HTH,
Mike
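
If removeShard does not clear things up: the E11000 error above generally means the shard's own admin.system.version collection still contains a shardIdentity document left over from the previous cluster, which clashes when it is added to the rebuilt config servers. A possible cleanup is sketched below as an untested outline — whether the delete is accepted may depend on restarting the m103-repl members without the --shardsvr option first:

```javascript
// Mongo shell sketch, run against the PRIMARY of m103-repl.
// The members may need to be restarted WITHOUT --shardsvr first,
// since that option restricts writes to admin.system.version.
var adminDb = db.getSiblingDB("admin");

// Confirm the leftover document from the old cluster exists:
printjson(adminDb.system.version.findOne({ _id: "shardIdentity" }));

// Remove it, then restart the members with --shardsvr again and retry
// sh.addShard("m103-repl/192.168.103.100:27001") from mongos:
adminDb.system.version.deleteOne({ _id: "shardIdentity" });
```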

Hi Mike,
I dropped the csrs dbs and the m103-repl dbs and started from scratch, or so I thought.
When I run the sh.status() I see the shards as below

MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5c4bbc9cb4085acc10b5c817")
  }
  shards:
    { "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27002,192.168.103.100:27003", "state" : 1 }
  active mongoses:
    "3.6.9" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "applicationData", "primary" : "m103-repl", "partitioned" : false }
    { "_id" : "config", "primary" : "config", "partitioned" : true }
      config.system.sessions
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          m103-repl  1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : m103-repl Timestamp(1, 0)

MongoDB Enterprise mongos>

But when I run the validation script I get the same error as before.

vagrant@m103:~$ validate_lab_first_sharded_cluster

Replica set 'm103-csrs' not configured correctly - make sure only three nodes have been
added to the replica set.

I’m now running out of time for this course, so I will watch the rest of the videos and then complete the exam questions just to get through.

Not very happy with this course compared to the last.

Cheers
Peter

Hi aussiepete2016,

Sorry about your experience. As per the error you have shared, something is wrong with your config server replica set, not your shards.

If you can share the m103-csrs status, I can take a look and we can debug the issue.

Connect to mongo on port 26001 (one node of the config server replica set), run rs.status(), and share the output here.

Make sure it only has 3 nodes: 26001, 26002 and 26003.
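
That check can be run as follows. The connection line assumes the usual lab user and password, which may differ in your setup:

```javascript
// Connect first, e.g.:
//   mongo --host 192.168.103.100:26001 -u m103-admin -p m103-pass \
//         --authenticationDatabase admin
// (user/password above are assumptions -- use your own lab credentials)

// List each configured member and its state:
rs.status().members.forEach(function (m) {
  print(m.name + " : " + m.stateStr);
});

// The count should be exactly 3 (ports 26001, 26002, 26003).
// If extras are left over from the example cluster, remove them from
// the PRIMARY, e.g. (hypothetical extra member):
// rs.remove("192.168.103.100:26004");
```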

Kanika

Hi Kanika

Thanks for the response. I decided to destroy and remove the virtual environment and re-provision so I could start from scratch.

When I investigated earlier, I realised that m103-repl/m103:27001 was not in a healthy state.

It was quicker to just start again so I could get through the course. The exam questions were quite easy, all things considered. Maybe because I had to run through this course a couple of times :wink:

Cheers
Peter

Hi everyone!

I’m going through exactly the same scenario as aussiepete2016, but I think my csrs repl-set is configured correctly. Is there a way to debug this instead of removing Vagrant and VirtualBox and re-provisioning?

Here are some details from my terminal:

  1. Initial error message when trying to run sh.addShard() + sh.status():

  2. My m103-csrs rs.status():

MongoDB Enterprise m103-csrs:PRIMARY> rs.status()
{
  "set" : "m103-csrs",
  "date" : ISODate("2020-01-21T17:29:35.014Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "syncingTo" : "",
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "configsvr" : true,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1579627771, 1),
      "t" : NumberLong(1)
    },
    "readConcernMajorityOpTime" : {
      "ts" : Timestamp(1579627771, 1),
      "t" : NumberLong(1)
    },
    "appliedOpTime" : {
      "ts" : Timestamp(1579627771, 1),
      "t" : NumberLong(1)
    },
    "durableOpTime" : {
      "ts" : Timestamp(1579627771, 1),
      "t" : NumberLong(1)
    }
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "192.168.103.100:26001",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 2101,
      "optime" : {
        "ts" : Timestamp(1579627771, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2020-01-21T17:29:31Z"),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "",
      "electionTime" : Timestamp(1579625759, 2),
      "electionDate" : ISODate("2020-01-21T16:55:59Z"),
      "configVersion" : 3,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "192.168.103.100:26002",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 1929,
      "optime" : {
        "ts" : Timestamp(1579627771, 1),
        "t" : NumberLong(1)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1579627771, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2020-01-21T17:29:31Z"),
      "optimeDurableDate" : ISODate("2020-01-21T17:29:31Z"),
      "lastHeartbeat" : ISODate("2020-01-21T17:29:33.979Z"),
      "lastHeartbeatRecv" : ISODate("2020-01-21T17:29:34.505Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.103.100:26001",
      "syncSourceHost" : "192.168.103.100:26001",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 3
    },
    {
      "_id" : 2,
      "name" : "192.168.103.100:26003",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 1923,
      "optime" : {
        "ts" : Timestamp(1579627771, 1),
        "t" : NumberLong(1)
      },
      "optimeDurable" : {
        "ts" : Timestamp(1579627771, 1),
        "t" : NumberLong(1)
      },
      "optimeDate" : ISODate("2020-01-21T17:29:31Z"),
      "optimeDurableDate" : ISODate("2020-01-21T17:29:31Z"),
      "lastHeartbeat" : ISODate("2020-01-21T17:29:34.106Z"),
      "lastHeartbeatRecv" : ISODate("2020-01-21T17:29:34.692Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncingTo" : "192.168.103.100:26001",
      "syncSourceHost" : "192.168.103.100:26001",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 3
    }
  ],
  "ok" : 1,
  "operationTime" : Timestamp(1579627771, 1),
  "$gleStats" : {
    "lastOpTime" : Timestamp(0, 0),
    "electionId" : ObjectId("7fffffff0000000000000001")
  },
  "$clusterTime" : {
    "clusterTime" : Timestamp(1579627771, 1),
    "signature" : {
      "hash" : BinData(0,"Z1OSWvIhkuI9swOEwTP66MMGhKE="),
      "keyId" : NumberLong("6784440979119144986")
    }
  }
}
MongoDB Enterprise m103-csrs:PRIMARY>

Quick update:

I have solved the issue myself. It turned out that when I rebuilt the csrs replica set and mongos, I should also have rebuilt the actual replica set to be added as a shard. Once I rebuilt that one as well, everything worked.

Happy learning people!
:slight_smile:
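
To spell out what "rebuilding the replica set" involves, here is a rough outline. The data paths and config file names below are assumptions based on a typical m103 setup, so adjust them to match your own configuration:

```shell
# Stop the three m103-repl mongod processes, then wipe their data
# directories so no stale shardIdentity document survives
# (paths are hypothetical -- check the dbPath in your config files):
rm -rf /var/mongodb/db/1/* /var/mongodb/db/2/* /var/mongodb/db/3/*

# Restart each node from its config file (file names are assumptions):
mongod -f node1.conf
mongod -f node2.conf
mongod -f node3.conf

# Then, from the mongo shell on the first node: rs.initiate(), rs.add()
# the other two members, and finally, from mongos:
#   sh.addShard("m103-repl/192.168.103.100:27001")
```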


Hi @krasy8,

I’m glad your issue got resolved. Please feel free to get back to us if you have any other questions :slight_smile: .

Thanks,
Shubham Ranjan
Curriculum Services Engineer