Transition from Secondary to Primary took a long time

Hi team,

I'm testing automatic failover with the following steps:

Step 1: Set up 3 instances

OS: Amazon Linux AMI
mongod version: 3.6.9
Storage engine: MMAPv1

Step 2: Set up a three-node replica set: Node A (primary), Node B (secondary), Node C (arbiter).
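For reference, the set was initiated roughly like this (a sketch: the hostnames and ports match the rs.conf() output further down; all other options are left at their defaults):

// Run once in the mongo shell on Node A.
rs.initiate({
  _id: "replSet1",
  members: [
    { _id: 0, host: "host1:27018" },                    // Node A
    { _id: 1, host: "host2:27018" },                    // Node B
    { _id: 2, host: "host3:30000", arbiterOnly: true }  // Node C (arbiter)
  ]
})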

Step 3: Step down Node A, roughly as sketched below.
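The step-down was issued with the standard shell helper (a minimal sketch; the 60-second step-down period shown is the default and an assumption about the exact call):

// Run in the mongo shell on Node A (the current primary).
// For 60 seconds Node A will not attempt to get re-elected.
rs.stepDown(60)

With this step, I expected Node B to become primary almost instantly, but that is not what happened: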

2020-04-29T10:40:52.185+0900 I COMMAND [conn10] Received replSetStepUp request
2020-04-29T10:40:52.185+0900 I REPL [conn10] Starting an election due to step up request
2020-04-29T10:40:52.185+0900 I REPL [conn10] skipping dry run and running for election in term 3
2020-04-29T10:40:52.258+0900 I REPL [replexec-10] VoteRequester(term 3) received a yes vote from 3.114.210.6:27018; response message: { term: 3, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1588124445, 1), $clusterTime: { clusterTime: Timestamp(1588124445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-04-29T10:40:52.258+0900 I REPL [replexec-10] election succeeded, assuming primary role in term 3
2020-04-29T10:40:52.258+0900 I REPL [replexec-10] transition to PRIMARY from SECONDARY
2020-04-29T10:40:52.258+0900 I REPL [replexec-10] Resetting sync source to empty, which was secondaryMemberIP:27018
2020-04-29T10:40:52.258+0900 I REPL [replexec-10] Entering primary catch-up mode.
2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Member secondaryMemberIP:27018 is now in state SECONDARY
2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Caught up to the latest optime known via heartbeats after becoming primary.
2020-04-29T10:40:52.259+0900 I REPL [replexec-10] Exited primary catch-up mode.
2020-04-29T10:50:22.001+0900 I REPL [rsSync] transition to primary complete; database writes are now permitted

As you can see, in this example it took about 10 minutes for Node B to complete the transition to primary after the step-up request. During that time my services were down and could not respond to client requests. Is this normal behavior?
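(For what it's worth, the outage window is easy to see with a simple write probe like the sketch below, run from a client mongo shell. The database and collection names are placeholders, not my actual client code.)

// Attempt one write per second and report failures until interrupted.
while (true) {
  try {
    db.getSiblingDB("test").failoverProbe.insertOne({ ts: new Date() });
    print(new Date().toISOString() + "  write ok");
  } catch (e) {
    print(new Date().toISOString() + "  write failed: " + e);
  }
  sleep(1000);  // mongo shell built-in; argument is milliseconds
}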

Thanks for your help.

Can you paste the following information:

rs.conf()
rs.status(), and the logs from the secondary and arbiter nodes.
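For example, from the mongo shell on any member (standard shell helpers):

// Print the configuration, then a one-line state summary per member.
rs.conf()
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr +
        (m.lastHeartbeat ? "  lastHeartbeat=" + m.lastHeartbeat : ""));
});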

Here are the logs. host1, host2, and host3 are the nodes' respective public IP addresses.

rs.conf()

{
        "_id" : "replSet1",
        "version" : 6,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "host1:27018",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "host2:27018",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "host3:30000",
                        "arbiterOnly" : true,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {
                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {
                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5ea7d565f6f9fe4300763108")
        }
}

rs.status()

{
        "set" : "replSet1",
        "date" : ISODate("2020-04-29T06:38:02.630Z"),
        "myState" : 1,
        "term" : NumberLong(4),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1588142273, 1),
                        "t" : NumberLong(4)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1588142273, 1),
                        "t" : NumberLong(4)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1588142273, 1),
                        "t" : NumberLong(4)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1588142273, 1),
                        "t" : NumberLong(4)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "host1:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 81474,
                        "optime" : {
                                "ts" : Timestamp(1588142273, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1588142273, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDate" : ISODate("2020-04-29T06:37:53Z"),
                        "optimeDurableDate" : ISODate("2020-04-29T06:37:53Z"),
                        "lastHeartbeat" : ISODate("2020-04-29T06:38:01.939Z"),
                        "lastHeartbeatRecv" : ISODate("2020-04-29T06:38:02.118Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "host2:27018",
                        "syncSourceHost" : "host2:27018",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 6
                },
                {
                        "_id" : 1,
                        "name" : "host2:27018",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 82263,
                        "optime" : {
                                "ts" : Timestamp(1588142273, 1),
                                "t" : NumberLong(4)
                        },
                        "optimeDate" : ISODate("2020-04-29T06:37:53Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1588142063, 1),
                        "electionDate" : ISODate("2020-04-29T06:34:23Z"),
                        "configVersion" : 6,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "host3:30000",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 515,
                        "lastHeartbeat" : ISODate("2020-04-29T06:38:01.940Z"),
                        "lastHeartbeatRecv" : ISODate("2020-04-29T06:38:01.713Z"),
                        "pingMs" : NumberLong(1),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 6
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1588142273, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1588142273, 1),
                "signature" : {
                        "hash" : BinData(0,"4YEfqovQck2B9BmvHKchzML/oDg="),
                        "keyId" : NumberLong("6803576634675822593")
                }
        }
}

Primary Log:

2020-04-29T15:34:23.780+0900 I COMMAND  [conn2] Received replSetStepUp request
2020-04-29T15:34:23.780+0900 I REPL     [conn2] Starting an election due to step up request
2020-04-29T15:34:23.780+0900 I REPL     [conn2] skipping dry run and running for election in term 4
2020-04-29T15:34:23.809+0900 I REPL     [replexec-24] VoteRequester(term 4) received a no vote from host3:30000 with reason "can see a healthy primary (host1:27018) of equal or greater priority"; response message: { term: 4, voteGranted: false, reason: "can see a healthy primary (host1:27018) of equal or greater priority", ok: 1.0 }
2020-04-29T15:34:23.832+0900 I REPL     [replexec-13] VoteRequester(term 4) received a yes vote from host1:27018; response message: { term: 4, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1588142062, 1), $clusterTime: { clusterTime: Timestamp(1588142062, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-04-29T15:34:23.832+0900 I REPL     [replexec-13] election succeeded, assuming primary role in term 4
2020-04-29T15:34:23.832+0900 I REPL     [replexec-13] transition to PRIMARY from SECONDARY
2020-04-29T15:34:23.832+0900 I REPL     [replexec-13] Resetting sync source to empty, which was host1:27018
2020-04-29T15:34:23.833+0900 I REPL     [replexec-13] Entering primary catch-up mode.
2020-04-29T15:34:23.833+0900 I REPL     [replexec-27] Member host1:27018 is now in state SECONDARY
2020-04-29T15:34:23.833+0900 I REPL     [replexec-27] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1588142062, 1), t: 3 }. My Last Applied: { ts: Timestamp(1588142062, 1), t: 3 }
2020-04-29T15:34:23.833+0900 I REPL     [replexec-27] Exited primary catch-up mode.
2020-04-29T15:37:33.666+0900 I REPL     [rsSync] transition to primary complete; database writes are now permitted

Secondary Log:

2020-04-29T15:34:23.753+0900 I COMMAND  [conn56] Attempting to step down in response to replSetStepDown command
2020-04-29T15:34:23.753+0900 I REPL     [conn56] transition to SECONDARY from PRIMARY
2020-04-29T15:34:23.753+0900 I NETWORK  [conn56] Skip closing connection for connection # 51
2020-04-29T15:34:23.753+0900 I NETWORK  [conn56] Skip closing connection for connection # 33
2020-04-29T15:34:23.753+0900 I NETWORK  [conn56] Skip closing connection for connection # 30
2020-04-29T15:34:23.753+0900 I NETWORK  [conn56] Skip closing connection for connection # 10
2020-04-29T15:34:23.754+0900 I NETWORK  [conn31] end connection host2:26834 (7 connections now open)
2020-04-29T15:34:23.754+0900 I NETWORK  [conn25] end connection host2:26308 (6 connections now open)
2020-04-29T15:34:23.754+0900 I NETWORK  [conn21] end connection host1:59150 (5 connections now open)
2020-04-29T15:34:23.754+0900 I REPL     [conn56] Handing off election to host2:27018
2020-04-29T15:34:23.754+0900 I NETWORK  [conn56] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:59134 (connection id: 56)
2020-04-29T15:34:23.754+0900 I NETWORK  [conn56] end connection 127.0.0.1:59134 (4 connections now open)
2020-04-29T15:34:23.756+0900 I NETWORK  [listener] connection accepted from 127.0.0.1:59152 #57 (5 connections now open)
2020-04-29T15:34:23.756+0900 I NETWORK  [conn57] received client metadata from 127.0.0.1:59152 conn57: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.9" }, os: { type: "Linux", name: "Amazon Linux AMI release 2018.03", architecture: "x86_64", version: "Kernel 4.14.94-73.73.amzn1.x86_64" } }
2020-04-29T15:34:23.759+0900 I ACCESS   [conn57] Successfully authenticated as principal Nexiv on admin
2020-04-29T15:34:25.467+0900 I REPL     [replexec-21] Member host2:27018 is now in state PRIMARY
2020-04-29T15:34:26.030+0900 I REPL     [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:34:26.030+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:34:26.032+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)
2020-04-29T15:34:27.521+0900 I NETWORK  [conn30] end connection host2:26560 (4 connections now open)
2020-04-29T15:34:41.015+0900 I NETWORK  [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 60 secs, remote host host1:27018)
2020-04-29T15:34:56.030+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open
2020-04-29T15:34:56.030+0900 I REPL     [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:35:06.030+0900
2020-04-29T15:34:56.031+0900 I REPL     [replication-0] could not find member to sync from
2020-04-29T15:35:06.035+0900 I REPL     [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:35:06.035+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:35:06.037+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)
2020-04-29T15:35:33.549+0900 I NETWORK  [LogicalSessionCacheRefresh] Starting new replica set monitor for replSet1/host2:27018,host1:27018
2020-04-29T15:35:33.549+0900 I NETWORK  [listener] connection accepted from host1:34080 #58 (5 connections now open)
2020-04-29T15:35:33.549+0900 I NETWORK  [conn58] received client metadata from host1:34080 conn58: { driver: { name: "MongoDB Internal Client", version: "3.6.9" }, os: { type: "Linux", name: "Amazon Linux AMI release 2018.03", architecture: "x86_64", version: "Kernel 4.14.94-73.73.amzn1.x86_64" } }
2020-04-29T15:35:33.549+0900 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to host1:27018 (1 connections now open to host1:27018 with a 5 second timeout)
2020-04-29T15:35:33.551+0900 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to host2:27018 (1 connections now open to host2:27018 with a 5 second timeout)
2020-04-29T15:35:33.551+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:34.053+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:34.554+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:35.055+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:35.556+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:36.035+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open
2020-04-29T15:35:36.035+0900 I REPL     [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:35:46.035+0900
2020-04-29T15:35:36.035+0900 I REPL     [replication-0] could not find member to sync from
2020-04-29T15:35:36.057+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:36.558+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:37.059+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:37.560+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:38.061+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:38.562+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:39.063+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:39.563+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:40.065+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:40.566+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:41.067+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:41.568+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:42.069+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:42.570+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:43.071+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:43.572+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:44.073+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:44.574+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:45.077+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:45.578+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:46.037+0900 I REPL     [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:35:46.037+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:35:46.039+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)
2020-04-29T15:35:46.079+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:46.580+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:47.081+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:47.582+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:48.083+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:48.584+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:49.086+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:49.587+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:50.087+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:50.596+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:51.101+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:51.602+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:52.103+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:52.603+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:53.104+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:53.606+0900 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:53.606+0900 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Could not find host matching read preference { mode: "primary" } for set replSet1
2020-04-29T15:36:16.038+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open
2020-04-29T15:36:16.038+0900 I REPL     [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:36:26.038+0900
2020-04-29T15:36:16.038+0900 I REPL     [replication-0] could not find member to sync from
2020-04-29T15:36:26.040+0900 I REPL     [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:36:26.040+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:36:26.043+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 3ms (1 connections now open to host2:27018)
2020-04-29T15:36:56.040+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open
2020-04-29T15:36:56.040+0900 I REPL     [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:37:06.040+0900
2020-04-29T15:36:56.040+0900 I REPL     [replication-0] could not find member to sync from
2020-04-29T15:37:06.042+0900 I REPL     [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:37:06.042+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:37:06.044+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)
2020-04-29T15:37:33.642+0900 I REPL     [rsBackgroundSync] Changed sync source from empty to host2:27018
2020-04-29T15:37:33.645+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to host2:27018
2020-04-29T15:37:33.647+0900 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (2 connections now open to host2:27018)
2020-04-29T15:39:23.808+0900 I NETWORK  [conn33] end connection host2:50454 (4 connections now open)
2020-04-29T15:40:33.549+0900 I NETWORK  [LogicalSessionCacheRefresh] Starting new replica set monitor for replSet1/host2:27018,host1:27018

Arbiter Log:

2020-04-29T15:34:25.520+0900 I REPL     [replexec-2] Member host1:27018 is now in state SECONDARY
2020-04-29T15:34:25.578+0900 I REPL     [replexec-2] Member host2:27018 is now in state PRIMARY
2020-04-29T15:39:12.463+0900 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist

Hi,

This looks like a network issue: the secondary host was unable to connect to the primary host. Monitor your network during the test. Also, in these logs the failover took only about 3 minutes:

2020-04-29T15:35:33.551+0900 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set replSet1
2020-04-29T15:35:36.035+0900 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host host2:27018 due to bad connection status; 0 connections to that host remain open
2020-04-29T15:35:36.035+0900 I REPL [replication-0] Blacklisting host2:27018 due to error: 'NetworkInterfaceExceededTimeLimit: Operation timed out' for 10s until: 2020-04-29T15:35:46.035+0900
2020-04-29T15:35:36.035+0900 I REPL [replication-0] could not find member to sync from
2020-04-29T15:35:46.037+0900 I REPL [rsBackgroundSync] sync source candidate: host2:27018
2020-04-29T15:35:46.039+0900 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to host2:27018, took 2ms (1 connections now open to host2:27018)
(the "Unable to reach primary for set replSet1" warning repeats roughly every 500 ms from 15:35:33 to 15:35:53)
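One quick way to check reachability between members during such a test is from the mongo shell on each node (hostnames and ports as in your rs.conf(); a sketch):

// From Node A, verify that Node B answers on the replication port.
var conn = new Mongo("host2:27018");  // throws if the host is unreachable
printjson(conn.getDB("admin").runCommand({ ping: 1 }));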