Custom query resolver + rules = terribly slow on a large collection

I have a custom query resolver. Here’s a maximally reduced version of it:

const resolver = async (input) => {
    const db = context.services.get('mongodb-atlas').db('xxx');
    const users = db.collection('users');
    // (reduced) the actual query was truncated in the post; a simple limited find() stands in here
    return await users.find({}).limit(20).toArray();
};
exports = resolver;
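To separate query time from everything else (rule enforcement, transport), it may help to time the query from inside the function itself. Here is a minimal sketch of that idea; `fakeQuery` is a stand-in I made up so the snippet runs outside Realm, where `context.services` is unavailable:

```javascript
// Sketch: measure the query from inside the resolver, so the in-function
// duration can be compared against the end-to-end latency you observe.
// `fakeQuery` stubs the real users.find(...).toArray() call.
const fakeQuery = () => new Promise((res) => setTimeout(() => res([]), 50));

const resolver = async () => {
  const start = Date.now();
  const result = await fakeQuery(); // in Realm: await users.find({}).limit(20).toArray()
  console.log(`query took ${Date.now() - start} ms`);
  return result;
};

resolver().then((r) => console.log(`got ${r.length} documents`));
```

If the in-function timing is close to the 0.5s system-privileges number while the end-to-end call still takes ~7s, the overhead is being added around the query rather than inside it.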

The users collection has 4000 entries in it.

If it runs with system privileges it takes 0.5s to execute.
If it runs under application authentication it takes 7s.

I have only one rule on that collection that looks like this:

            "name": "unauthenticated",
            "apply_when": {},
            "fields": {
                "email": {},
                "fullName": {
                    "read": true
                "locationObject": {
                    "read": true
                "mainPicture": {
                    "read": true
                "name": {
                    "read": true
                "skills": {
                    "read": true
            "insert": false,
            "delete": false,
            "search": true,
            "additional_fields": {}

If I make the query any more complex (e.g. throw in some aggregation) it just times out…

The fact that this resolver is only slow under application authentication suggests that something is wrong with how rules are enforced in Realm.

Can somebody from the team shed some light on how they work under the hood?
It feels like the rules are applied to every record in the collection before the limit is applied, making them unusable… I would expect the rules to be enforced on top of the resolver results, not inside the MongoDB query.
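The performance difference between those two orderings can be illustrated with a toy model (this is only an illustration of the hypothesis, not Realm's actual implementation; the document shape and "rule" below are made up):

```javascript
// Toy model: per-document rule evaluation before vs. after a limit.
// 4000 documents, matching the collection size described above.
const docs = Array.from({ length: 4000 }, (_, i) => ({
  _id: i,
  email: `user${i}@example.com`,
  name: `User ${i}`,
}));

// A field-level "rule": strip fields the role cannot read (here: email).
const applyRule = (doc) => ({ _id: doc._id, name: doc.name });

// Hypothesis A: rules run on every document, then the limit is applied.
const ruleThenLimit = docs.map(applyRule).slice(0, 20); // 4000 rule evaluations

// Hypothesis B: the limit is applied first, rules run only on the results.
const limitThenRule = docs.slice(0, 20).map(applyRule); // 20 rule evaluations

console.log(ruleThenLimit.length, limitThenRule.length); // both yield 20 docs
```

Both orderings produce the same 20 documents, but hypothesis A does 200× the rule work on this collection, which would explain why the slowdown scales with collection size rather than result size.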

Please help me figure it out! Running everything under system context and enforcing security by hand would be an awful and very insecure approach!

p.s.: the bottom line is that it’s very hard to work with rules without being able to see how they are applied and what kind of query is run under the hood. It’s a leaky abstraction with no way to dig into it. A detailed description of its mechanism would help.

I guess I have a similar problem:

I have a canWritePartition function that checks if the user has access to the partition:

exports = function(partition) {
  console.log(`Checking if can sync a write for partition = ${partition}`);

  const teamCollection = context.services.get("mongodb-atlas").db("test").collection("Team");
  const personCollection = context.services.get("mongodb-atlas").db("test").collection("Person");
  const user = context.user;
  let partitionKey = "";
  let partitionValue = "";
  const splitPartition = partition.split("=");
  if (splitPartition.length == 2) {
    partitionKey = splitPartition[0];
    partitionValue = splitPartition[1];
    console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
  } else {
    partitionKey = partition;
    console.log(`Partition key = ${partitionKey}`);
  }
  switch (partitionKey) {
  case "user":
    return partitionValue === user.id;
  case "events":
    return personCollection.findOne({ _id: user.id }).then(person => {
      if (person && person.teams && person.teams.length > 0) {
        return teamCollection.findOne({ _id: person.teams[0] }).then(team => {
          const isAdmin = team.membersAdmin.find(m => m === user.id) !== undefined;
          console.log('is user admin for this team?', isAdmin);
          return isAdmin;
        }, error => {
          console.log(`Unable to write Event document: ${error}`);
          return false;
        });
      }
      return false;
    }, error => {
      console.log(`Unable to write Event document: ${error}`);
      return false;
    });
  default:
    console.log(`Unexpected partition key: ${partitionKey}`);
    return false;
  }
};

If I try to find the objects with:

events.find({ _partition: partition })

the query takes about 1 minute. If I return just true at the top it takes about 1 second. The Events collection has about 4000 documents.
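If the permission function is being re-evaluated many times, one mitigation worth trying is caching the decision per user and partition so the two chained `findOne` round trips don't repeat. A sketch of the idea follows; `lookupIsAdmin` is my stand-in for the Person/Team lookups, and note that Realm functions may not preserve module state across invocations, so in practice the cached flag would more likely live in custom user data:

```javascript
// Sketch: memoize the admin decision per (userId, partition) so repeated
// evaluations skip the expensive database lookups.
const cache = new Map();
let lookups = 0; // counts how often the expensive path actually runs

async function lookupIsAdmin(userId) {
  lookups += 1;
  // Stand-in for the chained personCollection/teamCollection findOne() calls.
  return userId === "admin-user";
}

async function canWritePartition(userId, partition) {
  const key = `${userId}|${partition}`;
  if (!cache.has(key)) {
    cache.set(key, await lookupIsAdmin(userId));
  }
  return cache.get(key);
}

(async () => {
  await canWritePartition("admin-user", "events");
  await canWritePartition("admin-user", "events"); // served from cache
  console.log(lookups); // the expensive lookup ran only once
})();
```

Even within a single evaluation, collapsing the two sequential `findOne` calls into one round trip (or indexing `_partition` on the Events collection) could be checked as a separate lever.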

Kind regards