Not creating more than 103 connections

Hi all,
I am doing performance testing of a REST API with more than 100 concurrent users.
Apparently, MongoDB does not create more than 103 connections and starts throwing errors.
I am using the MongoDB community version on Windows 10 and running mongod via CMD.
Let me know if there is a way to create more connections when the number of concurrent users increases.
Below is the connections output:

db.serverStatus().connections
{
    "current" : 103,
    "available" : 999897,
    "totalCreated" : 103,
    "active" : 78,
    "exhaustIsMaster" : 1,
    "exhaustHello" : 0,
    "awaitingTopologyChanges" : 1
}

Hi @major1mong,

Your serverStatus output shows plenty of available server connections, so I would look into the configuration used by the driver for your REST API.

Most MongoDB drivers use connection pools to allow connections to be reused (saving on the overhead of establishing connections) and to manage the number of connections per client (trying to avoid overwhelming a deployment).

The 103 limit sounds like you are hitting a driver connection pool default of 100 connections (100 is the default maxPoolSize for several drivers, including Java). The few extra connections would be from monitoring or admin activity (like your mongo shell session used to get the serverStatus() output).
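For example, with the Java sync driver the 100-connection default can be overridden through MongoClientSettings. This is only a rough sketch (the hostname and pool size are placeholders, and other drivers expose an equivalent maxPoolSize option):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class PoolSizeSketch {
    public static void main(String[] args) {
        // Start from a connection string, then raise the pool's maximum size
        // above the driver default of 100.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                .applyToConnectionPoolSettings(pool -> pool.maxSize(200))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // use the client here
        }
    }
}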

What specific MongoDB driver & version are you using for your REST API?

Regards,
Stennie

Hi Stennie, thanks for your response.
I am using mongodb-driver-reactivestreams-4.1.1.jar.
Please let me know if there is a way to increase the number of connections from a driver configuration on the app side. Thanks.

I was able to change maxPoolSize via the connection URI and it worked as expected.
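In case it helps anyone else, the change was roughly like this with the Reactive Streams driver (the host, database name, and pool size are placeholders for my actual values):

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;

public class ConnectionUriSketch {
    public static void main(String[] args) {
        // maxPoolSize in the connection string raises the driver's
        // per-client connection pool cap (default 100).
        MongoClient client = MongoClients.create(
                "mongodb://localhost:27017/mydb?maxPoolSize=200");

        // ... run the workload, then close the client on shutdown
        client.close();
    }
}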

When I change the pool size to 200 or more, I get a lot of socket errors. I presume these socket errors are due to a lack of resources available on the system.

Hi @major1mong, did you find which resource setting on your system was limiting the number of sockets that can be created for the pools? We seem to be encountering the same situation; our system has

net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_fin_timeout = 15

so it should be able to handle about (65535 - 2000) / 15 ≈ 4235 sockets per second per IP.
But we still encounter many cases where a shard server shuts down with a log message like this:

{"t":{"$date":"2021-01-04T13:45:47.800+01:00"},"s":"F",  "c":"CONTROL",  "id":4757800, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Writing fatal message","attr":{"message":"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 31481ms, timeout was set to 20000ms\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\n"}}