Homework 2.5 {"doc":null,"isEnabled":true}

I have the replica set set up, created a key, and updated my mongo config files with the encryption-enablement settings, then did the rolling encryption enablement (wipe existing dirs, start the secondary, and so on). Validate returns this: {"doc":null,"isEnabled":true} - not a valid answer, but I do not know what it possibly is not liking.

With no logs or config files to show us, we also do not know what it possibly is not liking.

"log files" : null ,
"config files" : null ,
"help" : "impossible"

My apologies -

My r0.conf - r1 and r2 are the same except for the db dir and port.

# mongod.conf
storage:
  dbPath: /home/vagrant/M310-HW-2.5/r0
  journal:
    enabled: true
  engine: wiredTiger

systemLog:
  destination: file
  logAppend: true
  path: /home/vagrant/M310-HW-2.5/r0/mongodb.log
  verbosity: 1

operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp

net:
  port: 31250
  bindIp: "localhost,database.m310.mongodb.university"
  ssl:
    mode: preferSSL
    PEMKeyFile: /home/vagrant/shared/certs/server.pem
    CAFile: /home/vagrant/shared/certs/ca.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true

processManagement:
  fork: true

security:
  authorization: disabled
  enableEncryption: true
  encryptionKeyFile: /home/vagrant/shared/certs/mongodbEncryption.key


mongodb.log for the primary (which is r2 right now):

2019-02-19T20:06:02.062+0000 I COMMAND [conn6] command local.oplog.rs command: getMore { getMore: 13417359422, collection: "oplog.rs", maxTimeMS: 5000, term: 6, lastKnownCommittedOpTime: { ts: Timestamp 1550603555000|2, t: 6 } } planSummary: COLLSCAN cursorid:13417359422 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 5000ms
2019-02-19T20:06:03.124+0000 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "UNENCRYPTED", configVersion: 1, from: "database.m310.mongodb.university:31250", fromId: 0, term: 6 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms
2019-02-19T20:06:03.197+0000 I COMMAND [conn8] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1550603555000|2, t: 6 }, appliedOpTime: { ts: Timestamp 1550603555000|2, t: 6 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1550603555000|2, t: 6 }, appliedOpTime: { ts: Timestamp 1550603555000|2, t: 6 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp 1550603387000|1, t: 5 }, appliedOpTime: { ts: Timestamp 1550603387000|1, t: 5 }, memberId: 2, cfgver: 1 } ], $replData: { term: 6, lastOpCommitted: { ts: Timestamp 1550603555000|2, t: 6 }, lastOpVisible: { ts: Timestamp 0|0, t: -1 }, configVersion: 1, replicaSetId: ObjectId('5c6c4dda7f18994bc4d5a40f'), primaryIndex: 2, syncSourceIndex: 2 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms

rs.status() output (r2 is the primary now):

MongoDB Enterprise UNENCRYPTED:SECONDARY> rs.status()
{
    "set" : "UNENCRYPTED",
    "date" : ISODate("2019-02-19T20:04:19.335Z"),
    "myState" : 2,
    "term" : NumberLong(6),
    "syncingTo" : "database.m310.mongodb.university:31252",
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "database.m310.mongodb.university:31250",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 3285,
            "optime" : {
                "ts" : Timestamp(1550603555, 2),
                "t" : NumberLong(6)
            },
            "optimeDate" : ISODate("2019-02-19T19:12:35Z"),
            "syncingTo" : "database.m310.mongodb.university:31252",
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "database.m310.mongodb.university:31251",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 3274,
            "optime" : {
                "ts" : Timestamp(1550603555, 2),
                "t" : NumberLong(6)
            },
            "optimeDate" : ISODate("2019-02-19T19:12:35Z"),
            "lastHeartbeat" : ISODate("2019-02-19T20:04:19.025Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-19T20:04:18.547Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "database.m310.mongodb.university:31252",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "database.m310.mongodb.university:31252",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3248,
            "optime" : {
                "ts" : Timestamp(1550603555, 2),
                "t" : NumberLong(6)
            },
            "optimeDate" : ISODate("2019-02-19T19:12:35Z"),
            "lastHeartbeat" : ISODate("2019-02-19T20:04:19.031Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-19T20:04:18.549Z"),
            "pingMs" : NumberLong(0),
            "electionTime" : Timestamp(1550603555, 1),
            "electionDate" : ISODate("2019-02-19T19:12:35Z"),
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

The problem appears to be that your database was wiped. The setup script creates a replica set (without at-rest encryption) and adds some data. Your task is to upgrade the replica set so that it has at-rest encryption without MongoDB downtime (or losing any data).

DO NOT TAKE DOWN ALL THREE SERVERS AT THE SAME TIME. The steps outlined in the homework essentially have you take down one MongoDB server at a time (so the cluster is always up), wipe the data folder of the server that you took down, and then bring it back online with at-rest encryption enabled. When it comes back online, it rejoins the replica set and retrieves the database data from the primary (storing its data in encrypted files).

After performing the above for all three MongoDB servers, you will have wiped each server's data files (required since the initial data files are not encrypted), brought each server back up with at-rest encryption enabled, and replicated the data to each server. Because you are doing one server at a time, no data will be lost.
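As a sketch of the rolling procedure (assuming the layout used in this thread: r0/r1/r2 on ports 31250-31252, a replica set named UNENCRYPTED, and the key-file path from the config above), the per-member steps look roughly like this. The script only prints the commands so you can review them and run them one member at a time:

```shell
#!/bin/sh
# Print the rolling-encryption steps for each member (run them ONE member at a
# time, never all three at once). Paths, ports, and the key-file location are
# taken from this thread and may differ in your environment.
rolling_steps() {
  base=/home/vagrant/M310-HW-2.5
  key=/home/vagrant/shared/certs/mongodbEncryption.key
  for i in 0 1 2; do
    port=$((31250 + i))
    echo "# --- member r$i: wait for the previous member to finish its initial sync first ---"
    echo "mongo --port $port --eval 'db.getSiblingDB(\"admin\").shutdownServer()'"
    echo "rm -rf $base/r$i/*"
    echo "mongod --dbpath $base/r$i --logpath $base/r$i/mongodb.log --port $port --replSet UNENCRYPTED --fork --enableEncryption --encryptionKeyFile $key"
  done
}
rolling_steps
```

If the member you are about to take down is the current primary, step it down first (rs.stepDown()) so an election happens before the shutdown.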

Hi Everyone.
Even though it seems like a simple exercise, I am struggling to get past step 1.

1-) I have already created the key file and granted it the appropriate permissions.

2-) Safely shutdown a secondary of the replica set and delete the old database files
vagrant@database:~$ ls /home/vagrant/M310-HW-2.5/r1
vagrant@database:~$

3-) Restart the server with storage encryption enabled (I read through the suggested article as well)

vagrant@database:~$ mongod --dbpath /home/vagrant/M310-HW-2.5/r1 --logpath /home/vagrant/M310-HW-2.5/r1/mongo.log.log --port 31251 --fork --enableEncryption --encryptionKeyFile mongodb-keyfile_new --replSet UNENCRYPTED
about to fork child process, waiting until server is ready for connections.
forked process: 6428
ERROR: child process failed, exited with error number 14

Other daemons running:

vagrant@database:~$ ps -efa | grep mongod
vagrant 5807 1 3 03:48 ? 00:00:28 mongod --dbpath /home/vagrant/M310-HW-2.5/r0 --logpath /home/vagrant/M310-HW-2.5/r0/mongo.log.log --port 31250 --replSet UNENCRYPTED --fork
vagrant 5861 1 3 03:48 ? 00:00:28 mongod --dbpath /home/vagrant/M310-HW-2.5/r2 --logpath /home/vagrant/M310-HW-2.5/r2/mongo.log.log --port 31252 --replSet UNENCRYPTED --fork
vagrant 6433 1966 0 04:02 pts/0 00:00:00 grep --color=auto mongod

Any suggestions on why this happened?
Thanks. Alex

The mongo.log.log file should contain the details of the problem. My guess is that it cannot find the mongodb-keyfile_new file. I had a similar issue and needed to specify the full path, even though it was in the current working directory. I used the following syntax:

mongod --dbpath "$HOME/M310-HW-2.5/r0" --logpath "$HOME/M310-HW-2.5/r0/mongo.log" --port 31250 --replSet UNENCRYPTED --fork --enableEncryption --encryptionKeyFile "$HOME/M310-HW-2.5/mongodb-keyfile"

That said, I have experienced some other strange issues that nobody else seems to, so it could be something else entirely. Post the mongo.log.log file.
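As a hedged sketch (the file name below is just a placeholder), resolving the key file to an absolute path before passing it to mongod removes any ambiguity about the working directory; readlink -f (or prefixing $PWD) makes the resolved path explicit:

```shell
#!/bin/sh
# Resolve a possibly-relative key-file path to an absolute one before passing
# it to mongod. The file name is a placeholder standing in for the real key.
keyfile=$(mktemp)                 # stand-in for mongodb-keyfile_new
abs=$(readlink -f "$keyfile")     # absolute path with symlinks resolved
echo "$abs"                       # always starts with /
rm -f "$keyfile"
```

You can then pass "$abs" to --encryptionKeyFile and be certain which file mongod is actually trying to open.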

hey @ciscofu
I am going crazy because I had forgotten to provide you with the most important part :-/
Conclusion: I deleted everything and started all over again, and this time it worked. Thanks for your help.

Previous error was:

vagrant@database:~$ mongod --dbpath /home/vagrant/M310-HW-2.5/r0 --logpath /home/vagrant/M310-HW-2.5/r0/mongo.log --port 31250 --enableEncryption --encryptionKeyFile /home/vagrant/mongodb-keyfile_new --replSet UNENCRYPTED
vagrant@database:~$ more /home/vagrant/M310-HW-2.5/r0/mongo.log
2019-02-25T00:26:57.287+0000 I CONTROL [initandlisten] MongoDB starting : pid=2400 port=31250 dbpath=/home/vagrant/M310-HW-2.5/r0 64-bit host=database
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] db version v3.2.22
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] modules: enterprise
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] build environment:
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] distmod: ubuntu1404
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] distarch: x86_64
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-02-25T00:26:57.288+0000 I CONTROL [initandlisten] options: { net: { port: 31250 }, replication: { replSet: "UNENCRYPTED" }, security: { enableEncryption: true, encryptionKeyFile: "/home/vagrant/mongodb-keyfile_new" }, storage: { dbPath: "/home/vagrant/M310-HW-2.5/r0" }, systemLog: { destination: "file", path: "/home/vagrant/M310-HW-2.5/r0/mongo.log" } }
2019-02-25T00:26:57.377+0000 I - [initandlisten] Detected data files in /home/vagrant/M310-HW-2.5/r0 created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-02-25T00:26:57.377+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),confi
2019-02-25T00:26:57.397+0000 E STORAGE [initandlisten] Unable to retrieve key .system, error: There are existing data files, but no valid keystore could be located
2019-02-25T00:26:57.400+0000 I - [initandlisten] Fatal Assertion 28561
2019-02-25T00:26:57.400+0000 I - [initandlisten]

***aborting after fassert() failure

Glad to hear that you got it working! The important bit is at the very bottom of the log file, right before MongoDB terminates:

2019-02-25T00:26:57.397+0000 E STORAGE [initandlisten] Unable to retrieve key .system, error: There are existing data files, but no valid keystore could be located

That indicates that MongoDB could not find/use the key file. Based on your command, you were specifying the full path to the key file, but perhaps the permissions on it were not correct (the MongoDB documentation indicates that they should be "400", i.e. read-only and limited to a specific user). I am pretty sure that I have used read/write limited to a specific user without problems, but MongoDB will definitely refuse to use the file if it is world readable.
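For reference, a minimal sketch of creating the key and locking down its permissions (the file name is illustrative; `openssl rand -base64 32` produces the base64-encoded 32-byte key that --encryptionKeyFile expects):

```shell
#!/bin/sh
# Illustrative key-file setup; the file name is a placeholder.
keyfile=$(mktemp)
openssl rand -base64 32 > "$keyfile"   # 32 random bytes, base64-encoded
chmod 600 "$keyfile"                   # owner read/write; 400 (read-only) also works
stat -c %a "$keyfile"                  # prints 600
# mongod refuses a world-readable key file; find -perm -004 detects that case:
if [ -n "$(find "$keyfile" -perm -004)" ]; then
  echo "world-readable - mongod will refuse it"
else
  echo "permissions ok"
fi
rm -f "$keyfile"
```

Running this after creating your real key file is a quick way to confirm the permissions are tight enough before starting mongod.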

Thank you - somewhere in the middle of recreating it I forgot to run the setup script again :). Now it's all good. Thank you!