Configure remote access to MongoDB running on Docker with a replica set on Ubuntu

I am running my mongod replica set instances on Docker with SSL/TLS security enabled.
I can connect from the mongo shell and mongocxx on the machine where Docker is running, using the connection string below.

mongo --ssl --sslCAFile /etc/mongodb/ssl/mongoCA.crt --host rs0/mongo1:27017,mongo2:27017,mongo3:27017 --sslPEMKeyFile /etc/mongodb/ssl/mongo_client.pem

The problem is that I can't connect to the mongod instances running on Docker from another machine.
Both of my machines are on the same network.

I tried setting bind_ip to 0.0.0.0 in my docker-compose file, but it didn't work for me.
I also followed the MongoDB documentation link for configuring the firewall, but I still couldn't establish a connection from my remote machine.
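What I tried on the firewall was roughly along these lines (the remote machine's address here is only a placeholder):

# allow the published mongod ports from the remote machine (example address, adjust to your network)
sudo ufw allow from 192.168.1.50 to any port 27011:27013 proto tcp
sudo ufw status verbose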

Does anyone know what I am doing wrong?
Below is my docker-compose file:

version: '3'
networks:
    netBackEnd:
        ipam:
            driver: default
            config:
                 - subnet: 192.168.0.0/24
services:
    api:
        hostname: api
        build: .
        ports:
            - 8000:8000
            - 8001:8001
            - 8500:8500
        depends_on:
            - mongo1
            - mongo2
            - mongo3
        volumes:
            - "/etc/mongodb/ssl/client_ip.pem:/data/client_ip.pem:ro"
            - "/etc/mongodb/ssl/mongoCA.crt:/data/mongoCA.crt:ro"
        networks:
           netBackEnd:
    mongo1:
        hostname: mongo1
        container_name: mongo1
        image: mongo:4.2-bionic
        expose:
            - 27017
        ports:
            - 27011:27017
        restart: always
        volumes:
            - "/etc/mongodb/ssl/mongo1.pem:/data/mongo1.pem:ro"
            - "/etc/mongodb/ssl/mongoCA.crt:/data/mongoCA.crt:ro"
            - "/usr/local/mongo-volume1:/data/db"
        entrypoint: ['/usr/bin/mongod', '--replSet', 'rs0', '--sslMode', 'requireSSL', '--clusterAuthMode', 'x509', '--sslClusterFile', '/data/mongo1.pem', '--sslPEMKeyFile', '/data/mongo1.pem', '--sslCAFile', '/data/mongoCA.crt', '--bind_ip', '0.0.0.0']
        networks:
           netBackEnd:
               ipv4_address: 192.168.0.2

    mongo2:
        hostname: mongo2
        container_name: mongo2
        image: mongo:4.2-bionic
        expose:
            - 27017
        ports:
            - 27012:27017
        restart: always
        volumes:
            - "/etc/mongodb/ssl/mongo2.pem:/data/mongo2.pem:ro"
            - "/etc/mongodb/ssl/mongoCA.crt:/data/mongoCA.crt:ro"
            - "/usr/local/mongo-volume2:/data/db"
        entrypoint: ['/usr/bin/mongod', '--replSet', 'rs0', '--sslMode', 'requireSSL', '--clusterAuthMode', 'x509', '--sslClusterFile', '/data/mongo2.pem', '--sslPEMKeyFile', '/data/mongo2.pem', '--sslCAFile', '/data/mongoCA.crt', '--bind_ip', '0.0.0.0']
        networks:
            netBackEnd:
               ipv4_address: 192.168.0.3

    mongo3:
        hostname: mongo3
        container_name: mongo3
        image: mongo:4.2-bionic
        expose:
            - 27017
        ports:
            - 27013:27017
        restart: always
        volumes:
            - "/etc/mongodb/ssl/mongo3.pem:/data/mongo3.pem:ro"
            - "/etc/mongodb/ssl/mongoCA.crt:/data/mongoCA.crt:ro"
            - "/usr/local/mongo-volume3:/data/db"
        entrypoint: ['/usr/bin/mongod', '--replSet', 'rs0', '--sslMode', 'requireSSL', '--clusterAuthMode', 'x509', '--sslClusterFile', '/data/mongo3.pem', '--sslPEMKeyFile', '/data/mongo3.pem', '--sslCAFile', '/data/mongoCA.crt', '--bind_ip', '0.0.0.0']
        networks:
            netBackEnd:
               ipv4_address: 192.168.0.5

Below is the docker ps output:

37075e728a2f        mongo:4.2-bionic     "/usr/bin/mongod --r…"   2 hours ago         Up About an hour    0.0.0.0:27012->27017/tcp                                             mongo2
45a84da16c56        mongo:4.2-bionic     "/usr/bin/mongod --r…"   2 hours ago         Up About an hour    0.0.0.0:27011->27017/tcp                                             mongo1
3615e7b08bf7        mongo:4.2-bionic     "/usr/bin/mongod --r…"   2 hours ago         Up About an hour    0.0.0.0:27013->27017/tcp                                             mongo3

Thank you

Hi @Anusha_Reddy and welcome to the MongoDB Community :muscle:!

I think you have a port issue here.
Inside your containers, mongod runs on port 27017, and you are mapping these ports on your host to 27011, 27012 and 27013, which is fine because you can't have three services listening on the same host port.
So you can't publish 27017 on the host for each container.
When you connect to this cluster, you should use the published ports, so 27011, 27012 and 27013, with the IP address of your host.
I'm already surprised your host can resolve "mongo1".
From your host, you should use rs0/127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013, I think.
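Something along these lines, reusing the TLS options from your first command:

mongo --ssl --sslCAFile /etc/mongodb/ssl/mongoCA.crt \
      --sslPEMKeyFile /etc/mongodb/ssl/mongo_client.pem \
      --host rs0/127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013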

I hope this helps.

Cheers,
Maxime.


With replica set discovery this will not work, as the hosts and ports will not match the replica set configuration.

You will be able to connect to an individual node. Each member and client must be able to resolve and connect to the members as defined in the replica set configuration.

The way I do it is:
Please note this is only applicable for testing/development, as running 3 replicas on the same host gives no production benefit (HA, redundancy); you may as well run a single mongod.


Hi @MaBeuLux88, thanks for the reply.
I'm able to connect to 'mongo1' or 'mongo2' because I added them as hosts in my /etc/hosts file, like below:

127.0.0.1	localhost
127.0.1.1	xxxxxx-ThinkPad-X270
192.168.0.2	mongo1
192.168.0.3	mongo2
192.168.0.5	mongo3 

One more problem with connecting via localhost is the SSL/TLS certificates: I cannot connect using localhost, as I needed to sign my certificates with hostnames like 'mongo1'. I get the error below.

2021-01-10T14:24:08.424+0800 E  NETWORK  [ReplicaSetMonitor-TaskExecutor] The server certificate does not match the host name. Hostname: 127.0.0.1 does not match CN: mongo1
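The certificate really is only issued for the container hostnames; I can check that with something like this (assuming the certificate block comes first in the .pem):

# show the CN and SAN entries of the server certificate
openssl x509 -in /etc/mongodb/ssl/mongo1.pem -noout -text | grep -E "Subject:|DNS:"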

Without the SSL/TLS certificates, I can connect using rs0/127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013 from the machine where Docker is running, but not from the remote client.

When I connect from the remote client, I get the error below:

connecting to: mongodb://127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0
{"t":{"$date":"2021-01-10T06:37:01.902Z"},"s":"I",  "c":"NETWORK",  "id":4333208, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM host selection timeout","attr":{"replicaSet":"rs0","error":"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \"nearest\" } for set rs0"}}
Error: connect failed to replica set rs0/127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013 :

I am not sure what went wrong; from the host where Docker is running, it works fine.

Thank you

Hi Chris, thanks for the reply.

May I see your replica set initialization? Do we need to update the /etc/hosts file on each client that tries to connect to the replica set? I did update my hosts file, but I still couldn't connect from the remote client.

In my case, I do the following

config={"_id":"rs0","members":[{"_id":0,"host":"mongo1:27017","priority":2},{"_id":1,"host":"mongo2:27017"},{"_id":2,"host":"mongo3:27017"}]}
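and then I initiate the set from the mongo shell, roughly like this (same certificate paths as in my first post):

mongo --ssl --sslCAFile /etc/mongodb/ssl/mongoCA.crt \
      --sslPEMKeyFile /etc/mongodb/ssl/mongo_client.pem \
      --host mongo1:27017 \
      --eval 'rs.initiate({"_id":"rs0","members":[{"_id":0,"host":"mongo1:27017","priority":2},{"_id":1,"host":"mongo2:27017"},{"_id":2,"host":"mongo3:27017"}]})'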

I followed your compose file; everything works fine from the machine where Docker is running, but I still get the same error from the remote client.

I see that you didn't use any bind_ip in your compose file; how does it work for you from remote clients? According to the MongoDB documentation, we should use either 0.0.0.0 or the IP addresses/hostnames of the clients, right?

Please correct me if I am wrong.

I get the following error when connecting from the remote client:

connecting to: mongodb://127.0.10.1:27017/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0
{"t":{"$date":"2021-01-10T07:24:31.441Z"},"s":"I",  "c":"NETWORK",  "id":4333208, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM host selection timeout","attr":{"replicaSet":"rs0","error":"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \"nearest\" } for set rs0"}}
Error: connect failed to replica set rs0/127.0.10.1:27017 :
connect@src/mongo/shell/mongo.js:374:17

I completely started over with a new configuration and deleted the previous /data/db folder and my mongo volumes, just to make sure it wasn't related to any configuration problem.

Thanks

Hi Anusha Reddy,

Can you share the results of the queries below?
1. rs.conf()
2. rs.status()

Thanks

Hi @BM_Sharma ,

  1. rs.conf()
{
	"_id" : "rs0",
	"version" : 1,
	"protocolVersion" : NumberLong(1),
	"writeConcernMajorityJournalDefault" : true,
	"members" : [
		{
			"_id" : 0,
			"host" : "mongo1:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 2,
			"tags" : {
				
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 1,
			"host" : "mongo2:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {
				
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "mongo3:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,
			"tags" : {
				
			},
			"slaveDelay" : NumberLong(0),
			"votes" : 1
		}
	],
	"settings" : {
		"chainingAllowed" : true,
		"heartbeatIntervalMillis" : 2000,
		"heartbeatTimeoutSecs" : 10,
		"electionTimeoutMillis" : 10000,
		"catchUpTimeoutMillis" : -1,
		"catchUpTakeoverDelayMillis" : 30000,
		"getLastErrorModes" : {
			
		},
		"getLastErrorDefaults" : {
			"w" : 1,
			"wtimeout" : 0
		},
		"replicaSetId" : ObjectId("5ffbb4ad5ae5b9142aa2a80c")
	}
}
  2. rs.status()
{
	"set" : "rs0",
	"date" : ISODate("2021-01-11T05:31:45.411Z"),
	"myState" : 1,
	"term" : NumberLong(9),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1610343104, 1),
			"t" : NumberLong(9)
		},
		"lastCommittedWallTime" : ISODate("2021-01-11T05:31:44.731Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1610343104, 1),
			"t" : NumberLong(9)
		},
		"readConcernMajorityWallTime" : ISODate("2021-01-11T05:31:44.731Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1610343104, 1),
			"t" : NumberLong(9)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1610343104, 1),
			"t" : NumberLong(9)
		},
		"lastAppliedWallTime" : ISODate("2021-01-11T05:31:44.731Z"),
		"lastDurableWallTime" : ISODate("2021-01-11T05:31:44.731Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1610343054, 1),
	"lastStableCheckpointTimestamp" : Timestamp(1610343054, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "priorityTakeover",
		"lastElectionDate" : ISODate("2021-01-11T05:30:24.715Z"),
		"electionTerm" : NumberLong(9),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(1610343023, 1),
			"t" : NumberLong(8)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1610343023, 1),
			"t" : NumberLong(8)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 2,
		"electionTimeoutMillis" : NumberLong(10000),
		"priorPrimaryMemberId" : 2,
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2021-01-11T05:30:24.727Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2021-01-11T05:30:25.727Z")
	},
	"electionParticipantMetrics" : {
		"votedForCandidate" : true,
		"electionTerm" : NumberLong(8),
		"lastVoteDate" : ISODate("2021-01-11T05:30:13.358Z"),
		"electionCandidateMemberId" : 2,
		"voteReason" : "",
		"lastAppliedOpTimeAtElection" : {
			"ts" : Timestamp(1610342959, 1),
			"t" : NumberLong(7)
		},
		"maxAppliedOpTimeInSet" : {
			"ts" : Timestamp(1610342959, 1),
			"t" : NumberLong(7)
		},
		"priorityAtElection" : 2
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "mongo1:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 103,
			"optime" : {
				"ts" : Timestamp(1610343104, 1),
				"t" : NumberLong(9)
			},
			"optimeDate" : ISODate("2021-01-11T05:31:44Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1610343024, 1),
			"electionDate" : ISODate("2021-01-11T05:30:24Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "mongo2:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 101,
			"optime" : {
				"ts" : Timestamp(1610343104, 1),
				"t" : NumberLong(9)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1610343094, 1),
				"t" : NumberLong(9)
			},
			"optimeDate" : ISODate("2021-01-11T05:31:44Z"),
			"optimeDurableDate" : ISODate("2021-01-11T05:31:34Z"),
			"lastHeartbeat" : ISODate("2021-01-11T05:31:44.747Z"),
			"lastHeartbeatRecv" : ISODate("2021-01-11T05:31:44.777Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "mongo3:27017",
			"syncSourceHost" : "mongo3:27017",
			"syncSourceId" : 2,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "mongo3:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 101,
			"optime" : {
				"ts" : Timestamp(1610343104, 1),
				"t" : NumberLong(9)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1610343104, 1),
				"t" : NumberLong(9)
			},
			"optimeDate" : ISODate("2021-01-11T05:31:44Z"),
			"optimeDurableDate" : ISODate("2021-01-11T05:31:44Z"),
			"lastHeartbeat" : ISODate("2021-01-11T05:31:44.747Z"),
			"lastHeartbeatRecv" : ISODate("2021-01-11T05:31:43.886Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "mongo1:27017",
			"syncSourceHost" : "mongo1:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1610343104, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1610343104, 1)
}

@Anusha_Reddy
Sorry, I steered you wrong. I read that you were trying to connect to 127.0.0.1 and made the mental leap that that was all you were trying to do… despite the topic.

  1. Do not run this as a production configuration. I mentioned it already, but I will say it again: replica sets exist to provide redundancy and data availability. Running all the replicas on the same host defeats the point of a replica set; it is an anti-pattern. The loss of this host means the loss of your data, temporarily or permanently. It is also very likely to be less performant due to contention for resources.
  2. If a replica set is only required for features such as change streams, you can initialize and run a replica set of one.

With that in mind, if you still want to run a replica set on one host…

  1. Replica set members and clients must be able to resolve and connect to the members of the replica set, specifically via the hostnames used to configure the replica set.
  2. Since other hosts will connect to the replica set, use an alias/CNAME that points to the Docker host where you will run the replica set (see the /etc/hosts sketch after this list).
  3. Use this hostname for all members when you initialise the replica set.
  4. You will need a separate port for each member; use that port when you initialize the set or add a member.
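For example, the alias can be as simple as an extra /etc/hosts entry on the Docker host and on every client machine (the name and address below are only examples):

# map a single alias to the Docker host's LAN address, on the host and on every client
echo "192.168.1.10  this-laptop.domain" | sudo tee -a /etc/hosts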

Note: you will still need to layer TLS and authentication in.
I would mount another volume on each container as a configuration mount for the keys and a mongod.conf.

Minimum viable compose
version: '3.9'

services:
  mongo-0-a:
    image: mongo:4.4
    ports:
      - 27017:27017
    volumes:
      - mongo-0-a:/data/db
    restart: unless-stopped
    command: "--wiredTigerCacheSizeGB 0.25 --replSet rs0"

  mongo-0-b:
    image: mongo:4.4
    ports:
      - 27117:27017
    volumes:
      - mongo-0-b:/data/db
    restart: unless-stopped
    command: "--wiredTigerCacheSizeGB 0.25 --replSet rs0"

  mongo-0-c:
    image: mongo:4.4
    ports:
      - 27217:27017
    volumes:
      - mongo-0-c:/data/db
    restart: unless-stopped
    command: "--wiredTigerCacheSizeGB 0.25 --replSet rs0"
volumes:
  mongo-0-a:
  mongo-0-b:
  mongo-0-c:
rs.initiate
rs.initiate(
  { 
    "_id" : "rs0",
    "members" : [
      { 
        "_id" : 0,
        "host" : "this-laptop.domain:27017"
      },
      { 
        "_id" : 1,
        "host" : "this-laptop.domain:27117"
      },
      { 
        "_id" : 2,
        "host" : "this-laptop.domain:27217"
      }
    ]
  }
)
Connect from another host

mongo "mongodb://this-laptop.domain:27217/admin?replicaSet=rs0" --quiet
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
rs0:PRIMARY> db.isMaster().hosts
[
"this-laptop.domain:27017",
"this-laptop.domain:27117",
"this-laptop.domain:27217"
]


Hi Chris,

Thanks for the reply.

Can you tell me a bit more about the alias/CNAME pointing to the Docker host?

Since I already have the hostname 'mongo1' in my compose file, I tried adding one more entry for mongo1 in my /etc/hosts file, an extra_hosts entry in my compose file, and aliases in networks:

        extra_hosts:
            - "test-domain:mongo1"
127.0.0.1	localhost
127.0.1.1	xxxx-ThinkPad-X270
#127.0.0.1       mongo1 mongo2 mongo3
192.168.0.2	mongo1
192.168.0.3	mongo2
192.168.0.4	mongo3

With the above configuration I get an error during initialization:

{
	"operationTime" : Timestamp(0, 0),
	"ok" : 0,
	"errmsg" : "No host described in new configuration 1 for replica set rs0 maps to this node",
	"code" : 93,
	"codeName" : "InvalidReplicaSetConfig",
	"$clusterTime" : {
		"clusterTime" : Timestamp(0, 0),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Sorry, this is a bit confusing for me. It would be great if you have a docker-compose file or an /etc/hosts file showing how to add the alias name.

Thank you

Hi chris,

Thanks for pointing out the most important factor of replica set redundancy.

We are planning to use this in production. How do I run mongod in Docker containers on different machines and join them into a replica set?

I know this is a bit out of context for my question, but do you have any starting point where I can learn about replica set initialization when running mongod on different hosts?

On the internet I can mostly only find the approach I am already using.

Thank you.

M103: Basic Cluster Administration is a great place to start.

Thanks, I will look into it.

Can you let me know what I was doing wrong with the alias name for the Docker host in my other reply? I couldn't get that right.

Thank you .

I gave it some thought and here is something to get you started.

  1. You will want to persist your config and your db folder on your host, so you can kill your container and restart it if necessary with its config and db content.
  2. You will need a mongod.conf file in a config folder and an empty folder for your db files (WiredTiger's file system).

Here is my mongod.conf file, which you will definitely want to expand on:

replication:
   replSetName: prod
net:
   bindIp: 127.0.0.1

You will most probably do something for the logs, add some authentication mechanism, and you will also need to add to bindIp all the IP addresses that need to access this cluster, starting with the other nodes in that same cluster.

mkdir -p ~/mongodb-prod/{config,db}
cd ~/mongodb-prod
vim config/mongod.conf # hack your config file
docker run -d -p 27017:27017 -h $(hostname) --name mongo -v /home/polux/mongodb-prod/config/:/etc/mongo -v /home/polux/mongodb-prod/db/:/data/db mongo:4.4.3 --config /etc/mongo/mongod.conf
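For example, the config could end up looking something like this once the other members and remote clients are known (the LAN address is only an example):

# extend config/mongod.conf with the address the other nodes and clients will use
cat > ~/mongodb-prod/config/mongod.conf <<'EOF'
replication:
   replSetName: prod
net:
   bindIp: 127.0.0.1,192.168.1.10
EOF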

Once you have done the same thing on your X servers, with X >= 3 for a production environment, you can then connect to one of the nodes and rs.initiate({your-conf}) with something like:

# connect with
docker exec -it mongo mongo

# initiate with
rs.initiate({
      _id: "prod",
      members: [
         { _id: 0, host: "host1:27017" },
         { _id: 1, host: "host2:27017" },
         { _id: 2, host: "host3:27017" }]});

Then your cluster should be ready to work.
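A quick sanity check could be something like:

# every member should report its name and a PRIMARY/SECONDARY state
docker exec -it mongo mongo --quiet --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'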
Like I said, don’t follow what I just said to the letter ─ but I think it’s a good starting point at least.

Cheers,
Maxime.

Hi Chris,

After spending some time trying different methods of connecting to the remote MongoDB instance (an nginx reverse-proxy mechanism, setting bind_ip to 0.0.0.0 and enabling SSL/TLS protection), I found the exact problem.

The command below, run on the host where all the mongod containers are running in Docker, works fine:

mongo --ssl --sslCAFile /etc/mongodb/ssl/testing.ca.crt --host rs0/<host-ip-address>:27011,<host-ip-address>:27012,<host-ip-address>:27013 --sslPEMKeyFile /etc/mongodb/ssl/client.pem --authenticationDatabase '$external' --authenticationMechanism 'MONGODB-X509'

The command below, from the remote client, doesn't work:

mongo --ssl --sslCAFile /etc/mongodb/ssl/testing.ca.crt --host rs0/<host-ip-address>:27011,<host-ip-address>:27012,<host-ip-address>:27013 --sslPEMKeyFile /etc/mongodb/ssl/remote_client.pem --authenticationDatabase '$external' --authenticationMechanism 'MONGODB-X509'

{"t":{"$date":"2021-01-19T08:14:08.803Z"},"s":"I",  "c":"NETWORK",  "id":4333208, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM host selection timeout","attr":{"replicaSet":"rs0","error":"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \"nearest\" } for set rs0"}}
Error: connect failed to replica set rs0/<host-ip-address>:27011,<host-ip-address>:27012,<host-ip-address>:27013 :
connect@src/mongo/shell/mongo.js:374:17
@(connect):3:6

But surprisingly, I can connect to an individual host from the remote client, just not to the replica set. The command below works:

mongo --ssl --sslCAFile /etc/mongodb/ssl/testing.ca.crt --host <host-ip-address>:27011 --sslPEMKeyFile /etc/mongodb/ssl/remote_client.pem --authenticationDatabase '$external' --authenticationMechanism 'MONGODB-X509'

I see that in your previous answer you also connect to a single host, but why is it like that? If I connect to a single host and that host fails, I wouldn't get the replica set benefits, right?

Do you have any idea why this is happening?

I think it was already mentioned above: because MongoDB is configured in replica set mode, it communicates differently. The replica set uses a discovery mechanism, so the member names in the set's configuration and the hosts in the connection string must match. You need some kind of DNS trick to make this work, e.g. hosts files or split-horizon DNS.
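You can see what the driver will try to reach by asking the one node you can connect to for the member list it advertises, e.g.:

# from the remote client: connect to a single node and list the advertised members
mongo --ssl --sslCAFile /etc/mongodb/ssl/testing.ca.crt \
      --sslPEMKeyFile /etc/mongodb/ssl/remote_client.pem \
      --authenticationDatabase '$external' --authenticationMechanism 'MONGODB-X509' \
      --host <host-ip-address>:27011 --quiet --eval 'db.isMaster().hosts'
# this should print mongo1:27017, mongo2:27017 and mongo3:27017, names the remote client cannot resolve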