Mongo Atlas scaling

Hi,
I want to use MongoDB Atlas for a production workload. Currently I am using an M30 cluster with 1 primary and 2 secondaries, and reads are set to use the primary node. Is there a way to read data from all 3 nodes rather than just the primary? Should I consider a sharded cluster?

Please let me know your thoughts.

Thanks,
Supriya

Hi @Supriya_Bansal,

What version is your Atlas cluster running? We offer different read preferences for your operations as part of the client logic.
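As a concrete example, a read preference can be set directly in the connection string so the driver routes reads to secondaries when available (the hostname and credentials below are made up; only the `readPreference` option is the point):

```
mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb?readPreference=secondaryPreferred
```

With `secondaryPreferred`, reads go to a secondary if one is available and fall back to the primary otherwise; `secondary` and `nearest` are other common choices.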

Now there are a few considerations you need to be aware of:

  1. Each secondary goes through the same write workload as the primary. Non-blocking secondary reads (reads no longer blocked behind replication batches) were introduced only in 4.0+. So although you will ease the load on your primary, you will not necessarily get better performance.

  2. Since replication is asynchronous, you may read stale data. Not all applications can tolerate that.

  3. The driver does not necessarily load balance connections across secondaries; it is closer to a round-robin assignment.
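To make point 3 concrete, here is a toy sketch of what "round-robin rather than load-balanced" means: the rotation hands out secondaries in turn without any awareness of how busy each one is. The hostnames are hypothetical, and real drivers discover members from the replica set topology; this is only an illustration.

```python
from itertools import cycle

# Hypothetical secondary hostnames; a real driver learns these
# from the replica set configuration, not from a hard-coded list.
secondaries = ["sec-0.example.net:27017", "sec-1.example.net:27017"]

def round_robin_reader(hosts):
    """Return a function that yields the next host for each read,
    rotating through the list regardless of each host's current load."""
    rotation = cycle(hosts)
    def next_host():
        return next(rotation)
    return next_host

pick = round_robin_reader(secondaries)
reads = [pick() for _ in range(4)]
# Reads alternate between the two secondaries even if one is saturated.
```

The takeaway: spreading reads across secondaries distributes connections, but it does not adapt to uneven load the way a true load balancer would.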

I would suggest that if you have a specific workload you want to offload from the primary, such as analytics, you should provision analytics nodes.

MongoDB's official stance on scaling is that sharding is the best scale-out option. You can always scale up by raising the Atlas tier as well.

Please read more in our production considerations:

Best
Pavel


Thank you so much for the response. The Atlas cluster is running the latest MongoDB 4.4 version.
I am going to look into sharding the Atlas cluster. My storage would be less than 1 GB, around 500 MB. Raising the Atlas tier definitely provides more RAM and CPU, but it also comes with a lot of storage that I don't need.

Best,
Supriya

Hi @Supriya_Bansal,

With such small storage I expect an M30-M40 to be good enough with a replica set. Do you know the data size?

If you can remain with a single replica set I would consider that, as you can convert any replica set to a sharded cluster at any time.

Be aware that sharding requires you to shard collections with a performant shard key, which might introduce extra complexity, just a warning.
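For reference, this is what sharding a collection looks like once a sharded cluster exists (database, collection, and shard key names below are hypothetical; the shard key choice is the part that requires careful design):

```
// Run in mongosh against a sharded cluster:
sh.enableSharding("mydb")
sh.shardCollection("mydb.events", { customerId: "hashed" })
```

A hashed key spreads writes evenly but gives up range queries on that field; a ranged key supports range queries but can create hot chunks. That trade-off is the extra complexity mentioned above.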

Best
Pavel

Hi @Pavel_Duchovny,
Thank you again for the guidance.
The data size is 72 MB and will grow to about 150 MB by the end of this year. This is static data: once the collection is uploaded to MongoDB, it is used only for reads. The average document size is around 480 B.

Thanks,
Supriya

Hi @Supriya_Bansal,

Sharding is definitely not needed. With fairly low traffic you could even consider an M20 for those sizes, as the data will still fit in memory…
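A quick back-of-the-envelope check using the numbers from the thread shows why the dataset comfortably fits in cache. The M20 RAM figure and the WiredTiger cache formula (roughly 50% of RAM minus 1 GB) are stated as assumptions here, not guarantees for any particular tier:

```python
# Projected data size and average document size from the thread.
data_bytes = 150 * 1024**2   # 150 MB
avg_doc_bytes = 480          # ~480 B per document
doc_count = data_bytes // avg_doc_bytes   # ~327k documents

# Assumption: an M20-class node has about 4 GB RAM, and the default
# WiredTiger cache is roughly 50% of (RAM - 1 GB).
ram_gb = 4
cache_bytes = int((ram_gb - 1) * 1024**3 * 0.5)

fits_in_cache = data_bytes < cache_bytes  # 150 MB vs ~1.5 GB of cache
```

Even at the projected 150 MB, the working set is an order of magnitude smaller than the cache, so reads should be served from memory.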

Thanks
Pavel

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.