MongoDB Atlas Change Stream into AWS

Hi, we are working on a solution on AWS and are using MongoDB Atlas as our database.
Right now we are on a MongoDB Atlas shared-tier cluster.

We require a change stream for some of our collections so we can execute side effects and update other, transitive collections when specific conditions are met. In AWS we are already using EventBridge, and I thought this is a good use case for the MongoDB Atlas Trigger to AWS EventBridge integration.
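
For illustration, here is a sketch of what the consumer side could look like: a Lambda behind an EventBridge rule that receives the change event forwarded by the Atlas trigger and updates a dependent collection. All names (database, collections, fields, env vars) and the exact event shape are assumptions, not our actual setup.

```typescript
// Hypothetical EventBridge -> Lambda consumer for Atlas trigger change events.
import { EventBridgeEvent } from "aws-lambda";
import { MongoClient } from "mongodb";

// Reuse the client across invocations so we don't reconnect on every event.
const client = new MongoClient(process.env.MONGODB_URI as string);

// Only the fields used below; the real change event carries more.
interface ChangeEventDetail {
  operationType: string;
  documentKey: { _id: string };
  fullDocument?: { status?: string; total?: number };
}

export const handler = async (
  event: EventBridgeEvent<string, ChangeEventDetail>
): Promise<void> => {
  await client.connect(); // no-op if already connected
  const change = event.detail;

  // Execute the side effect only when a specific condition is met.
  if (change.operationType === "update" && change.fullDocument?.status === "completed") {
    // Update the transitive collection that depends on the source document.
    await client.db("app").collection("orderSummaries").updateOne(
      { sourceId: change.documentKey._id },
      { $set: { total: change.fullDocument.total, updatedAt: new Date() } },
      { upsert: true }
    );
  }
};
```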

So far so good. However, once we had implemented it, we saw that the MongoDB Atlas Trigger has a hard time keeping up with our burst load. We have bursts where we update ~50k items in one collection. With event ordering enabled, the trigger only delivers 3-4 events per second.
Without ordering it manages to handle the burst, but when a document is updated multiple times, the order is important, I guess.

So my question is: is there anything else we can do? Should we move to a different approach to achieve our change stream? I am thankful for any advice!

Hi Jurgen – You’ll definitely want to make sure your Atlas cluster, Realm application, and EventBridge instance are all co-located in order to get the best performance here. With Event Ordering on, we have to process events serially, so the additional latency introduced by cross-region communication can add up. For example, if your requests complete in 10 ms you would expect to get ~100 events/s, but if they took ~250 ms you would only get 3-4 events/s. That being said, if you’re bulk loading 50k items with event ordering on, it will likely take a few minutes regardless of the approach.
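
To make that arithmetic concrete (using only the numbers from the reply above; a back-of-the-envelope sketch, not a measured benchmark):

```typescript
// With event ordering on, events are processed one at a time, so throughput is
// roughly the inverse of the per-event round-trip time.
const eventsPerSecond = (roundTripMs: number): number => 1000 / roundTripMs;

console.log(eventsPerSecond(10));  // ~100 events/s when everything is co-located
console.log(eventsPerSecond(250)); // ~4 events/s with cross-region round trips
console.log(50_000 / eventsPerSecond(10) / 60); // ~8.3 minutes to drain a 50k burst
```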

Hi Drew, thanks for your answer!
I am using a shared cluster in the same region as my AWS application (AWS Frankfurt), so I guess the Realm app for the triggers will be in the same region as well. One trigger request completes within 300-400 ms.
300-400 ms for one change event feels wrong to me, especially when we assume everything is in the same location. Does this also have something to do with us using the shared cluster? Would a dedicated cluster change anything about this?

Hi @JuHwon and welcome to the MongoDB Community :muscle:!

At least Frankfurt is a supported region for local Realm deployments.

Dedicated cluster = better performance in general. The shared tier has several limitations that can reduce your performance, especially the throughput and bandwidth limitations.

Cheers,
Maxime.

Hi @JuHwon – You will want to check that your Realm application is in Frankfurt as well; you can verify this by clicking into the application and looking under ‘App Settings’. If the application is not located in Frankfurt, we would recommend creating a new application with the region set to Frankfurt/local.


@Drew_DiPalma thank you very much for the Realm advice.
Since I am new to MongoDB Atlas and Realm, I assumed that when I have an Atlas DB in AWS eu-central-1 and add a trigger to that cluster via the Atlas interface, the trigger/Realm app will get hosted in the same location.
Which would totally make sense imho!

I checked the Realm app, which was located in Virginia. I created a new Realm app in eu-central-1, configured everything accordingly, and now the trigger execution only requires 20-30 ms. Thanks a lot.

I also want to mention that this default behaviour is very confusing, imo. I don’t know if it is well documented or whether there is a reason for it, but I would never have assumed this default behaviour.

@JuHwon – Great to hear that helped! I agree that this experience could be a bit clearer. We’re actually in the process of planning how to reorganize the Atlas Triggers getting-started experience to be more intuitive, so I appreciate the feedback here.
