Chapter 3 - Lab - Shard a Collection

As part of the lab exercise to shard a collection, in step 2 I need to import data onto the primary shard.

It says

dataset products.json is contained in your Vagrant box, in the /dataset/ directory

But if I do ls on /dataset

vagrant@m103:~$ ls /dataset

ls: cannot access /dataset: Protocol error

vagrant@m103:~$

I am getting the above error, and I don't see the file.

Can someone help me with this?

regards,
chakri

Where it says /dataset/, use /shared/ or wherever you have put your products.json file.
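For reference, here is a minimal sketch of the import command, assuming the mongos from the earlier labs is listening on port 26000 with the m103-admin user; the database and collection names here are placeholders, so use whatever your lab specifies:

mongoimport --port 26000 \
  -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin" \
  --db applicationData --collection products \
  /shared/products.json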

On my local system, I do have products.json in the dataset directory and also in the shared directory.

Ban-1Chakrapani-m:m103-vagrant-env ckanamarlapudi$ ls

Vagrantfile dataset provision-mongod shared

Ban-1Chakrapani-m:m103-vagrant-env ckanamarlapudi$ ls dataset

products.json

Ban-1Chakrapani-m:m103-vagrant-env ckanamarlapudi$ ls shared/

products.json restaurants.json

But if I go in via vagrant ssh:

Ban-1Chakrapani-m:m103-vagrant-env ckanamarlapudi$ vagrant ssh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-164-generic x86_64)

System information as of Sat Jan 26 17:36:27 UTC 2019

System load:  0.54               Processes:           95
Usage of /:   14.3% of 39.34GB   Users logged in:     0
Memory usage: 60%                IP address for eth0: 10.0.2.15
Swap usage:   0%                 IP address for eth1: 192.168.103.100

Graph this data and manage this system at:
https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

New release '16.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Sat Jan 26 17:36:27 2019 from 10.0.2.2
vagrant@m103:~$ ls /dataset
ls: cannot access /dataset: Protocol error
vagrant@m103:~$ ls shared
ls: cannot access shared: No such file or directory
vagrant@m103:~$ ls /shared
ls: cannot access /shared: Protocol error
vagrant@m103:~$

Can you tell me why I am getting this protocol error and how to resolve it?

It's a Vagrant issue.
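Those /dataset and /shared paths are VirtualBox synced folders that Vagrant mounts into the guest from your m103-vagrant-env directory on the host, and a "Protocol error" on ls usually means the vboxsf mount has gone stale. As a rough sketch, the mappings in the Vagrantfile typically look something like this (the course's actual file may differ):

config.vm.synced_folder "dataset/", "/dataset"
config.vm.synced_folder "shared/", "/shared"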

Doing vagrant halt and then vagrant up resolved the issue.
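If it happens again, here is a quick way to check and remount the folders, assuming the usual VirtualBox setup:

# inside the guest: see whether the synced folders are actually mounted
mount | grep -i vboxsf

# from the host: restart the VM so Vagrant remounts the synced folders
# (vagrant reload is equivalent to vagrant halt followed by vagrant up)
vagrant reload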

When you talk about local, do you mean on the PC, like Windows C:\blah blah\etc\etc?

Which directory is the products.json in, in your Vagrant environment?

Excellent, glad to hear it is solved.

Hi, I have one more question.

I started mongos, and in sh.status() I see the two shards which I created:

vagrant@m103:~$ mongo --port 26000 -u "m103-admin" -p "m103-pass" --authenticationDatabase "admin"
MongoDB shell version v3.6.9
connecting to: mongodb://127.0.0.1:26000/
Implicit session: session { "id" : UUID("10d8e934-42d1-4f0a-9808-b42198cc5e03") }
MongoDB server version: 3.6.9
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5c4bd411cc93628af232bdc3")
  }
  shards:
    { "_id" : "m103-repl", "host" : "m103-repl/192.168.103.100:27001,192.168.103.100:27003,m103:27002", "state" : 1 }
    { "_id" : "m103-repl-2", "host" : "m103-repl-2/192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006", "state" : 1 }
  active mongoses:
    "3.6.9" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 5
    Last reported error: Could not find host matching read preference { mode: "primary" } for set m103-repl
    Time of Reported error: Sat Jan 26 2019 18:02:33 GMT+0000 (UTC)
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "applicationData", "primary" : "m103-repl", "partitioned" : false }
    { "_id" : "config", "primary" : "config", "partitioned" : true }
      config.system.sessions
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          m103-repl 1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : m103-repl Timestamp(1, 0)
    { "_id" : "testDatabase", "primary" : "m103-repl", "partitioned" : false }

MongoDB Enterprise mongos>

Then I killed all the mongod instances of the replica set "m103-repl-2", i.e. all three mongod processes:

192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006

At this point, if I go to mongos and run sh.status(), it is still showing both shards, including the nodes

192.168.103.100:27004,192.168.103.100:27005,192.168.103.100:27006

which are actually not running.

Can you explain this to me?

I believe it is telling you what is configured in the sharded cluster, not what is actually running.
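The shard list in sh.status() comes from the cluster metadata stored on the config servers, so killed nodes still appear there. As a quick sketch, you can read that same metadata directly on the mongos:

MongoDB Enterprise mongos> use config
MongoDB Enterprise mongos> db.shards.find()

To see what is actually alive, try connecting to a member directly, e.g. mongo --host 192.168.103.100 --port 27004, which will fail while those nodes are down. Connectivity problems also surface indirectly in sh.status(), in balancer errors like the "Could not find host matching read preference" line in your output.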
