To further clarify Doug’s comment:
The primary should be considered a transient role, since another secondary can be elected in the event of failover. If your replica set members have differing configurations in terms of oplog size or system resources, failover may have unanticipated consequences (for example, reducing the time you have to get a former primary back online before it becomes stale and requires a full resync).
Varying member configurations are supported, but you should have good reasons for doing so and should definitely try to model failover scenarios. You will encounter fewer operational challenges using the default deployment settings with identically configured replica set members.
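One way to model failover scenarios on paper before testing them live is a toy eligibility check. This is purely illustrative, not the server's election algorithm; the underlying rule is real, though: priority-0 and hidden members are never elected primary.

```python
def electable(members: list[dict]) -> list[str]:
    """Toy filter: members with priority 0 or the hidden flag can never
    become primary, so only the remaining members can take over on failover."""
    return [
        m["name"]
        for m in members
        if m.get("priority", 1) > 0 and not m.get("hidden", False)
    ]

members = [
    {"name": "a", "priority": 2},
    {"name": "b", "priority": 1},
    {"name": "c", "priority": 0, "hidden": True},  # e.g. a hidden backup node
]
print(electable(members))  # ['a', 'b']
```

Walking each member through a check like this (who can become primary, and what happens to the workload if they do) is a cheap first pass before simulating failover in a test environment.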
Some replica set configuration/failover considerations:
Failover is not always a result of failure. For example, regular maintenance activities such as upgrading software versions may also require you to briefly restart services on a replica set member.
If your deployment is distributed across multiple data centres, consider the effect of chained replication (which is enabled by default). With chained replication a secondary can choose to replicate from another secondary of the replica set which is closer (based on network ping time) than the current primary.
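For intuition, here is a toy model of that sync-source preference (the server's actual selection weighs more factors than ping time alone, so this is an illustrative sketch only). If chaining is undesirable for your topology, it can be disabled via the `settings.chainingAllowed` replica set setting.

```python
def pick_sync_source(ping_ms: dict[str, float]) -> str:
    """Toy model: prefer the eligible replica set member with the lowest
    network ping time (in milliseconds) as the sync source."""
    return min(ping_ms, key=ping_ms.get)

# A secondary in the same data centre as another secondary may chain from it
# rather than replicating directly from a distant primary.
pings = {"primary-dc1": 85.0, "secondary-dc2": 1.2}
print(pick_sync_source(pings))  # secondary-dc2
```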
The current duration of the oplog is estimated from the difference between the timestamps of the first and last oplog entries. If insert or update patterns in your workload change significantly, the oplog duration will change too. Note: the upcoming MongoDB 4.4 server release adds a new Minimum Oplog Retention Period to provide better assurance on oplog duration.
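A simplified sketch of that estimate (this mirrors the "log length start to end" figure reported by the `rs.printReplicationInfo()` shell helper; the Python is illustrative):

```python
from datetime import datetime

def oplog_window_hours(first_entry_ts: datetime, last_entry_ts: datetime) -> float:
    """Estimated oplog duration: the wall-clock span between the oldest and
    newest oplog entries. A burst of writes pushes old entries out faster,
    shrinking this window even though the configured oplog size is unchanged."""
    return (last_entry_ts - first_entry_ts).total_seconds() / 3600

# 2.5 days between the first and last entries => a 60-hour window.
print(oplog_window_hours(datetime(2020, 5, 1, 0, 0), datetime(2020, 5, 3, 12, 0)))  # 60.0
```

In MongoDB 4.4 the minimum retention mentioned above is configured via `storage.oplogMinRetentionHours` (or the equivalent `--oplogMinRetentionHours` command-line option).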
If you want to better understand behaviour for a proposed deployment configuration, I suggest standing up a replica set in a test environment to simulate scenarios.
If oplog sizes vary, members with a larger oplog will be able to store more history. Each member's oplog size is set locally (by configuration or server default): a secondary does not inherit the oplog size of its sync source.
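As a rough illustration (assumed numbers, steady write rate), here is the back-of-envelope arithmetic behind that point:

```python
def retained_history_hours(oplog_size_gb: float, write_rate_gb_per_hour: float) -> float:
    """Approximate hours of history an oplog of a given size retains, assuming
    a steady write rate. A member that falls behind (or stays down) longer
    than this window must be resynced from scratch."""
    return oplog_size_gb / write_rate_gb_per_hour

# Same workload, different oplog sizes: the larger oplog keeps 5x the history.
print(retained_history_hours(50, 2))  # 25.0 hours
print(retained_history_hours(10, 2))  # 5.0 hours
```

This is why mixed oplog sizes interact badly with failover: the window you actually get after an election depends on which member ends up as the sync source.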