Homework 1.3: Enabling Internal Authentication using X.509

While creating a replica set with SSL enabled, the first mongod service hung. This is the output in mongodb.log:


2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] MongoDB starting : pid=18980 port=31130 dbpath=/home/vagrant/M310-HW-1.3/ri0 64-bit host=database
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] db version v3.2.22
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] modules: enterprise
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] build environment:
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] distmod: ubuntu1404
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] distarch: x86_64
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] options: { net: { port: 31130, ssl: { CAFile: "certs/ca.pem", PEMKeyFile: "certs/server.pem", mode: "requireSSL" } }, replication: { replSet: "--fork" }, security: { clusterAuthMode: "x509" }, storage: { dbPath: "/home/vagrant/M310-HW-1.3/ri0" }, systemLog: { destination: "file", path: "/home/vagrant/M310-HW-1.3/ri0/mongodb.log.log" } }
2019-10-22T08:35:34.414+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten]
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten]
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-10-22T08:35:34.448+0000 I CONTROL [initandlisten]
2019-10-22T08:35:34.466+0000 I REPL [initandlisten] Did not find local voted for document at startup.
2019-10-22T08:35:34.467+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2019-10-22T08:35:34.467+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/vagrant/M310-HW-1.3/ri0/diagnostic.data'
2019-10-22T08:35:34.467+0000 I NETWORK [initandlisten] waiting for connections on port 31130 ssl
2019-10-22T08:35:34.467+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker


This has been stuck for the past 13 hours. Can anyone help me here?

What do you mean when you say it’s on hold?

Hi @Veerendranath,

I am looking into it.

Kanika

thanks @kanikasingla

vagrant@database:~/shared$ ./setup-hw-1.3.sh

---- No status for the past 24 hours. I mean the command never returned; all I see in the log is:

2019-10-22T08:35:34.392+0000 I CONTROL [initandlisten] options: { net: { port: 31130, ssl: { CAFile: "certs/ca.pem", PEMKeyFile: "certs/server.pem", mode: "requireSSL" } }, replication: { replSet: "--fork" }, security: { clusterAuthMode: "x509" }, storage: { dbPath: "/home/vagrant/M310-HW-1.3/ri0" }, systemLog: { destination: "file", path: "/home/vagrant/M310-HW-1.3/ri0/mongodb.log.log" } }

vagrant@database:~/shared$ cat setup-hw-1.3.sh
#!/bin/bash

course="M310"
exercise="HW-1.3"
workingDir="$HOME/${course}-${exercise}"
dbDir="$workingDir/db"
logName="mongodb.log"

ports=(31130 31131 31132)

host=$(hostname -f)
initiateStr="rs.initiate({
  _id: '$replSetName',
  members: [
    { _id: 1, host: '$host:31130', priority: 1 },
    { _id: 2, host: '$host:31131' },
    { _id: 3, host: '$host:31132' }
  ]
})"

# create working folder
mkdir -p "$workingDir/"{ri0,ri1,ri2}

# launch mongods
for ((i=0; i < ${#ports[@]}; i++))
do
  mongod --dbpath "$workingDir/ri$i" --logpath "$workingDir/ri$i/$logName.log" --port ${ports[$i]} --replSet $replSetName --fork --sslMode requireSSL --clusterAuthMode x509 --sslPEMKeyFile "certs/server.pem" --sslCAFile "certs/ca.pem"
done

# wait for all the mongods to exit
sleep 10

# Initializes the replica set.
mongo --host database.m310.mongodb.university --sslPEMKeyFile "certs/client.pem" --sslCAFile "certs/ca.pem" --ssl --port ${ports[0]} --eval "$initiateStr"

# Creates the user for the certificate.
printf 'use $external\ndb.createUser({user:"C=US,ST=New York,L=New York City,O=MongoDB,OU=University2,CN=M310 Client",roles:[{ role: "root", db: "admin" }]})' | mongo --host database.m310.mongodb.university --sslPEMKeyFile "certs/client.pem" --sslCAFile "certs/ca.pem" --ssl --port ${ports[0]}

# Validates the homework.
sh validate-hw-1.3.sh

The above is the shell script I used.
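Since this lab uses X.509 cluster authentication, it can also be worth confirming the certificate subjects. A quick check using openssl (assuming it is installed on the VM): for membership authentication, the O, OU, and DC components must match across the members' certificates.

```shell
# Print the subject DN of the server certificate the mongods are started with
openssl x509 -in certs/server.pem -noout -subject
```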

Found it: the log shows it could not find replSetName.
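The log line `replication: { replSet: "--fork" }` hints at the cause: `$replSetName` is never assigned, so the unquoted empty expansion disappears from the command line and mongod parses `--fork` as the replica set name. With `--fork` consumed, mongod stays in the foreground and the script hangs. A minimal sketch of that argument shift:

```shell
# replSetName is unset, mirroring the setup script
unset replSetName

# The shell drops the empty unquoted expansion, so the next flag
# becomes the value of --replSet
set -- --replSet $replSetName --fork

echo "$#"   # 2 -- the empty word vanished
echo "$2"   # --fork -- now the "value" of --replSet
```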

I have this line alongside the ports line; try adding it to the setup file:

 ports=(31130 31131 31132)
 replSetName="<REPL_SET_NAME>"

Let me know if it works.

Kanika

I should add, there's no setup script for lab 1.3, and I can see that @Veerendranath is trying to re-use the setup script from 1.2 instead of doing it manually. You'll find it difficult to debug issues this way.

Suggest you create a config file for each node and fire it up one-by-one. Plus, from your log, the dbpath and logpath don’t meet the requirements.
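For reference, a per-node config file might look like the sketch below. This is an assumption pieced together from the log output in this thread, not the official lab solution: the ports, paths, and placeholder replSetName should be adjusted to the homework's actual requirements.

```yaml
# hypothetical config for the first node; copy and adjust port/dbPath/path per node
net:
  port: 31130
  ssl:
    mode: requireSSL
    CAFile: certs/ca.pem
    PEMKeyFile: certs/server.pem
security:
  clusterAuthMode: x509
replication:
  replSetName: <REPL_SET_NAME>
storage:
  dbPath: /home/vagrant/M310-HW-1.3/ri0
systemLog:
  destination: file
  path: /home/vagrant/M310-HW-1.3/ri0/mongodb.log
processManagement:
  fork: true
```

Then start each node with `mongod --config <file>` and check its log before moving on to the next.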

Thanks for correcting me there @007_jb. :slight_smile: You are absolutely right.

@Veerendranath, what @007_jb says is absolutely correct. Moreover, you can find all the meaningful information in the MongoDB logs.

Kanika

Thanks @kanikasingla

It worked after adding the replSetName.

Thanks @007_jb, I understand. My course was supposed to EXPIRE today, so I thought it would be really good to reuse the 1.2 script :slight_smile:

Anyways everything looks good for now.


Shortcuts… you’re not alone :wink:

Labs in this chapter were due yesterday. Unless you're 19 hours behind? :wink:

I struggled with that as well in one of the courses, if I remember correctly. I believe Google told me it was a mix-up between IP and hostname, plus a not-so-nice /etc/hosts; that was the case for me.
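If you hit the same hostname mix-up, one quick way to check (assuming a Linux VM where getent is available) is to see what the FQDN actually resolves to and compare it against /etc/hosts:

```shell
# Show the machine's FQDN and how it resolves
hostname -f
getent hosts "$(hostname -f)" || echo "FQDN does not resolve -- check /etc/hosts"
```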