This is my first foray into MongoDB. I'm trying to write an app accessible from a PHP web server and, hopefully, from mobile apps in future.
I have been applying what I think are the recommended data-modelling patterns from tutorials and webinars, namely embedding documents. I can see the benefit of getting all the data in one remote call to the DB (Atlas, in my case). Is the trade-off complexity?
I seem to need quite advanced aggregation pipelines for what would be fairly trivial document updates in the RDBMS world!
I am not sure how to judge where and when to split this data. Is there a typical way to measure this, e.g. number of remote calls, volume/size of documents returned, etc.?
To illustrate:
- One document per round of golf played
- Inside this document, a sub-array of objects, one per hole played
- Each hole object contains lots of properties, e.g. hole no, par, stroke index, playerScore and some others
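For concreteness, a round document shaped like the list above might look something like this (the field names are my guesses to illustrate the structure, not taken from an actual schema):

```javascript
// Hypothetical round document: one per round played, with an
// embedded array of per-hole objects. All field names are assumed.
const round = {
  _id: "round-2024-06-01",
  course: "Example GC",
  date: "2024-06-01",
  holes: [
    { holeNo: 1, par: 4, strokeIndex: 7, playerScore: 5 },
    { holeNo: 2, par: 3, strokeIndex: 15, playerScore: 3 },
    // ...one object per hole played
  ],
};
```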
With this structure, updating a single hole's score means diving into the layered sub-documents. If holes were a separate collection, wouldn't a hole document be easier to update?
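For what it's worth, updating one embedded hole's score shouldn't need an aggregation pipeline: MongoDB's filtered positional operator (`$[h]` plus `arrayFilters`, available since 3.6) does it in a single `updateOne`. A sketch, assuming the hypothetical field names `holeNo` and `playerScore` (the real call is shown in the comment; the function below is an in-memory stand-in for the same logic so it can be run without a server):

```javascript
// The server-side update would be roughly:
//   db.rounds.updateOne(
//     { _id: roundId },
//     { $set: { "holes.$[h].playerScore": newScore } },
//     { arrayFilters: [{ "h.holeNo": holeNo }] }
//   );
// In-memory equivalent of what that update does to one document:
function setHoleScore(round, holeNo, newScore) {
  for (const hole of round.holes) {
    if (hole.holeNo === holeNo) hole.playerScore = newScore; // matches $[h]
  }
  return round;
}

const round = {
  _id: "r1",
  holes: [
    { holeNo: 1, par: 4, playerScore: 5 },
    { holeNo: 2, par: 3, playerScore: 4 },
  ],
};
setHoleScore(round, 2, 3);
```

So the embedded model doesn't necessarily make single-hole updates painful, though it is admittedly more syntax than `UPDATE holes SET score = ...` in SQL.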
I have repeated this pattern often now. Or maybe it's just the generic steep learning curve of any new platform/technology?
This is a discussion piece rather than an actual problem to solve, but people's insights would be invaluable.