Should we normalise MongoDB/Mongoose in order to simplify our GraphQL schema?

Assume a Seller schema with 60+ embedded fields representing 1:1 and 1:M relationships, with a mix of embedded documents and document references, e.g. “director”, “addresses” and “bankAccounts”. (We are definitely not storing anything big enough to approach the 16MB document limit.) It is essentially a typical, best-practice, de-normalised NoSQL schema design.
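
For illustration, here is a minimal sketch of the kind of embedded design I mean (field names are simplified placeholders, not our real schema):

```typescript
// Simplified sketch of the current embedded design (the real schema has 60+ fields).
import { Schema, model } from "mongoose";

const sellerSchema = new Schema({
  name: String,
  // ~60 more scalar fields omitted...

  // 1:1 embedded document
  director: {
    fullName: String,
    email: String,
  },

  // 1:M embedded arrays
  addresses: [
    {
      line1: String,
      city: String,
      postcode: String,
    },
  ],
  bankAccounts: [
    {
      accountName: String,
      iban: String,
    },
  ],
});

export const Seller = model("Seller", sellerSchema);
```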

A friend is suggesting that we break out the currently embedded fields, say “director”, “addresses” and “bankAccounts”, into their own schemas and collections, each with an added “sellerId” field pointing back to the owning Seller.
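
Something like this, as I understand his proposal (again, field names are just placeholders):

```typescript
// Sketch of the proposed normalised design: separate collections,
// each child document pointing back to its owning Seller via "sellerId".
import { Schema, model } from "mongoose";

const sellerSchema = new Schema({
  name: String,
  // ...remaining scalar fields stay on Seller
});

const addressSchema = new Schema({
  sellerId: { type: Schema.Types.ObjectId, ref: "Seller", index: true },
  line1: String,
  city: String,
  postcode: String,
});

const bankAccountSchema = new Schema({
  sellerId: { type: Schema.Types.ObjectId, ref: "Seller", index: true },
  accountName: String,
  iban: String,
});

export const Seller = model("Seller", sellerSchema);
export const Address = model("Address", addressSchema);
export const BankAccount = model("BankAccount", bankAccountSchema);
```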

My friend’s argument is that this makes our GraphQL type and input definitions easier to define and manage, given the 60+ fields on Seller. Instead of a single, massive 60+ field “type” and matching “input” definition in our GraphQL schema, he can break “director”, “addresses” and “bankAccounts” out into their own type and input definitions.
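
On the GraphQL side it would roughly look like this (an SDL-first sketch with placeholder fields, not our actual schema):

```typescript
// Sketch of the broken-out GraphQL type/input definitions.
export const typeDefs = /* GraphQL */ `
  type Director {
    fullName: String
    email: String
  }

  input DirectorInput {
    fullName: String
    email: String
  }

  type Address {
    id: ID!
    line1: String
    city: String
    postcode: String
  }

  input AddressInput {
    line1: String
    city: String
    postcode: String
  }

  type BankAccount {
    id: ID!
    accountName: String
    iban: String
  }

  input BankAccountInput {
    accountName: String
    iban: String
  }

  type Seller {
    id: ID!
    name: String
    # ...remaining scalar fields
    director: Director
    addresses: [Address!]!
    bankAccounts: [BankAccount!]!
  }

  input SellerInput {
    name: String
    director: DirectorInput
    addresses: [AddressInput!]
    bankAccounts: [BankAccountInput!]
  }
`;
```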

Q1. Is there any valid logic in this, i.e. normalising the MongoDB schema purely to simplify the GraphQL type and input handling?

Q2. Assuming we did break out those previously embedded fields, which hold mostly stale and infrequently queried data anyway, is it really that bad to normalise them in MongoDB?

Yes, I know the standard reply: if you normalise that much, why not just use an RDB/SQL? I guess the answer is that our JavaScript knowledge, working with JSON and the Mongoose ORM, plus limited resources, makes MongoDB an attractive data store for us.