Shouldn't Motor deliver higher performance than PyMongo?

I’ve just run a benchmark tool (GitHub - hasura/graphql-bench: A super simple tool to benchmark GraphQL queries) against a very simple GraphQL query.

I configured two APIs doing the same query, one using the Motor driver and the other using PyMongo.
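
Roughly, the two query paths boil down to something like this (a simplified sketch; the connection string, database, and collection names below are placeholders, and the real code sits behind the GraphQL resolver):

    # Simplified sketch of the two code paths; the connection string and
    # collection/field names are placeholders, not my real schema.
    import asyncio

    import pymongo
    import motor.motor_asyncio

    MONGO_URI = "mongodb://localhost:27017"  # placeholder URI


    def pymongo_query():
        # Synchronous path: the call blocks its thread until MongoDB responds.
        client = pymongo.MongoClient(MONGO_URI)
        return client.testdb.items.find_one({"name": "example"})


    async def motor_query():
        # Asynchronous path: the coroutine suspends while waiting on MongoDB,
        # so the event loop can run other requests in the meantime.
        client = motor.motor_asyncio.AsyncIOMotorClient(MONGO_URI)
        return await client.testdb.items.find_one({"name": "example"})


    if __name__ == "__main__":
        print(pymongo_query())
        print(asyncio.run(motor_query()))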

I got similar results for both.

Motor driver results


candidate: simpleQuery on python-async-mongodb at http://192.168.1.2:8080/
Warmup:
++++++++++++++++++++
200Req/s Duration:60s open connections:20
Running 1m test @ http://192.168.1.2:8080/
8 threads and 20 connections
Thread calibration: mean lat.: 3396.996ms, rate sampling interval: 13017ms
Thread calibration: mean lat.: 3308.702ms, rate sampling interval: 11968ms
Thread calibration: mean lat.: 3554.039ms, rate sampling interval: 12828ms
Thread calibration: mean lat.: 3406.929ms, rate sampling interval: 12484ms
Thread calibration: mean lat.: 3445.203ms, rate sampling interval: 12648ms
Thread calibration: mean lat.: 3426.942ms, rate sampling interval: 12812ms
Thread calibration: mean lat.: 3652.908ms, rate sampling interval: 13115ms
Thread calibration: mean lat.: 3430.667ms, rate sampling interval: 11984ms
Thread Stats   Avg      Stdev     Max   +/- Stdev
  Latency    24.28s    10.16s   48.43s    58.56%
  Req/Sec     6.78      0.96      8.00   100.00%
Latency Distribution (HdrHistogram - Recorded Latency)
50.000% 24.28s
75.000% 32.93s
90.000% 37.98s
99.000% 44.79s
99.900% 47.58s
99.990% 48.46s
99.999% 48.46s
100.000% 48.46s

    Detailed Percentile spectrum:

  #[Mean    =    24283.712, StdDeviation   =    10161.542]
  #[Max     =    48431.104, Total count    =         2756]
  #[Buckets =           27, SubBuckets     =         2048]
  ----------------------------------------------------------
    3314 requests in 1.00m, 1.44MB read
    Socket errors: connect 0, read 0, write 0, timeout 4
    Non-2xx or 3xx responses: 3314
  Requests/sec:     55.22
  Transfer/sec:     24.64KB

Benchmark:
  ++++++++++++++++++++
  200Req/s Duration:300s open connections:20
  Running 5m test @ http://192.168.1.2:8080/
    8 threads and 20 connections
    Thread calibration: mean lat.: 4509.201ms, rate sampling interval: 14213ms
    Thread calibration: mean lat.: 4358.747ms, rate sampling interval: 13746ms
    Thread calibration: mean lat.: 4194.528ms, rate sampling interval: 13590ms
    Thread calibration: mean lat.: 4319.308ms, rate sampling interval: 13221ms
    Thread calibration: mean lat.: 4189.944ms, rate sampling interval: 13180ms
    Thread calibration: mean lat.: 4145.555ms, rate sampling interval: 12836ms
    Thread calibration: mean lat.: 4424.448ms, rate sampling interval: 13443ms
    Thread calibration: mean lat.: 4403.278ms, rate sampling interval: 14680ms
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     1.86m     1.01m    4.16m    58.56%
      Req/Sec     6.53      0.95     8.00    100.00%
    Latency Distribution (HdrHistogram - Recorded Latency)
   50.000%    1.84m 
   75.000%    2.72m 
   90.000%    3.23m 
   99.000%    3.82m 
   99.900%    4.08m 
   99.990%    4.15m 
   99.999%    4.16m 
  100.000%    4.16m 
  
    Detailed Percentile spectrum:

  #[Mean    =   111375.774, StdDeviation   =    60805.210]
  #[Max     =   249692.160, Total count    =        16171]
  #[Buckets =           27, SubBuckets     =         2048]
  ----------------------------------------------------------
    16654 requests in 5.00m, 7.26MB read
    Socket errors: connect 0, read 0, write 0, timeout 1
    Non-2xx or 3xx responses: 16654
  Requests/sec:     55.51
  Transfer/sec:     24.77KB

PyMongo results


candidate: simpleQuery on python-sync-mongodb at http://192.168.1.2:8080/
Warmup:
++++++++++++++++++++
200Req/s Duration:60s open connections:20
Running 1m test @ http://192.168.1.2:8080/
8 threads and 20 connections
Thread calibration: mean lat.: 3800.246ms, rate sampling interval: 13156ms
Thread calibration: mean lat.: 3823.071ms, rate sampling interval: 12681ms
Thread calibration: mean lat.: 3858.973ms, rate sampling interval: 13156ms
Thread calibration: mean lat.: 3669.897ms, rate sampling interval: 12419ms
Thread calibration: mean lat.: 3648.545ms, rate sampling interval: 12181ms
Thread calibration: mean lat.: 3685.089ms, rate sampling interval: 12173ms
Thread calibration: mean lat.: 3764.000ms, rate sampling interval: 12632ms
Thread calibration: mean lat.: 3752.922ms, rate sampling interval: 12640ms
Thread Stats   Avg      Stdev     Max   +/- Stdev
  Latency    25.10s    10.58s   49.71s    59.71%
  Req/Sec     5.93      0.81      8.00    92.59%
Latency Distribution (HdrHistogram - Recorded Latency)
50.000% 23.69s
75.000% 34.08s
90.000% 39.98s
99.000% 46.60s
99.900% 49.55s
99.990% 49.74s
99.999% 49.74s
100.000% 49.74s

    Detailed Percentile spectrum:

  #[Mean    =    25098.605, StdDeviation   =    10575.561]
  #[Max     =    49709.056, Total count    =         2462]
  #[Buckets =           27, SubBuckets     =         2048]
  ----------------------------------------------------------
    2998 requests in 1.00m, 1.31MB read
    Socket errors: connect 0, read 0, write 0, timeout 16
    Non-2xx or 3xx responses: 2998
  Requests/sec:     49.95
  Transfer/sec:     22.29KB

Benchmark:
  ++++++++++++++++++++
  200Req/s Duration:300s open connections:20
  Running 5m test @ http://192.168.1.2:8080/
    8 threads and 20 connections
    Thread calibration: mean lat.: 3703.784ms, rate sampling interval: 12713ms
    Thread calibration: mean lat.: 3748.122ms, rate sampling interval: 12943ms
    Thread calibration: mean lat.: 3697.915ms, rate sampling interval: 12689ms
    Thread calibration: mean lat.: 3774.441ms, rate sampling interval: 12689ms
    Thread calibration: mean lat.: 3562.794ms, rate sampling interval: 11821ms
    Thread calibration: mean lat.: 3626.784ms, rate sampling interval: 11976ms
    Thread calibration: mean lat.: 3646.199ms, rate sampling interval: 12738ms
    Thread calibration: mean lat.: 4295.842ms, rate sampling interval: 13770ms
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     1.87m     1.01m    4.16m    58.40%
      Req/Sec     6.34      0.96     8.00    100.00%
    Latency Distribution (HdrHistogram - Recorded Latency)
   50.000%    1.85m 
   75.000%    2.74m 
   90.000%    3.24m 
   99.000%    3.80m 
   99.900%    4.05m 
   99.990%    4.14m 
   99.999%    4.16m 
  100.000%    4.16m 
  
    Detailed Percentile spectrum:

  #[Mean    =   112274.070, StdDeviation   =    60410.941]
  #[Max     =   249561.088, Total count    =        15703]
  #[Buckets =           27, SubBuckets     =         2048]
  ----------------------------------------------------------
    16205 requests in 5.00m, 7.06MB read
    Socket errors: connect 0, read 0, write 0, timeout 3
    Non-2xx or 3xx responses: 16205
  Requests/sec:     54.01
  Transfer/sec:     24.10KB

Shouldn’t I get much better results using the Motor driver?

There is also a similar question on SO.

To summarize, Motor got Requests/sec: 55.51 and PyMongo got Requests/sec: 54.01.

Hi Kleyson! Thanks for your question.

I think there are a couple of things to talk about here:

The first is that async code isn’t necessarily faster than non-async code. What it is often better at is handling more concurrent connections, because it can switch context while waiting for IO to complete, with less overhead per connection than spinning up a thread for each one.
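
To make that concrete, here’s a toy sketch where asyncio.sleep stands in for a database round trip (the numbers are illustrative only, no driver involved): each individual call takes just as long either way; the benefit only shows up when many calls are in flight at once.

    # Toy illustration: asyncio.sleep stands in for one IO-bound database
    # round trip of ~50 ms; no real driver is involved.
    import asyncio
    import time


    async def fake_io_call():
        await asyncio.sleep(0.05)


    async def main():
        # One call at a time: total time is roughly 20 * 50 ms.
        start = time.perf_counter()
        for _ in range(20):
            await fake_io_call()
        print(f"sequential: {time.perf_counter() - start:.2f}s")

        # The same 20 calls overlapped on one event loop: total time is
        # roughly one round trip, because the waits happen concurrently.
        start = time.perf_counter()
        await asyncio.gather(*(fake_io_call() for _ in range(20)))
        print(f"concurrent: {time.perf_counter() - start:.2f}s")


    asyncio.run(main())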

The other thing is that although Motor provides an asyncio API, it’s actually implemented as a wrapper around PyMongo, with all blocking operations conducted in a worker thread, so in practice you’d expect to get similar results with Motor and PyMongo.
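
As a rough mental model (a simplified sketch only, not Motor’s actual internals), each Motor operation behaves something like this:

    # Conceptual sketch of the wrapping pattern (simplified, not Motor's
    # actual internals): the blocking PyMongo call runs on a worker thread
    # and the coroutine awaits its result.
    import asyncio

    import pymongo


    class TinyAsyncCollection:
        def __init__(self, sync_collection):
            self._sync_collection = sync_collection

        async def find_one(self, query):
            loop = asyncio.get_running_loop()
            # Hand the blocking call to the default thread pool so the
            # event loop itself never blocks on the network round trip.
            return await loop.run_in_executor(
                None, self._sync_collection.find_one, query
            )


    async def main():
        client = pymongo.MongoClient("mongodb://localhost:27017")  # placeholder URI
        items = TinyAsyncCollection(client.testdb.items)           # placeholder names
        print(await items.find_one({"name": "example"}))


    asyncio.run(main())

Under that model the per-operation cost is roughly PyMongo’s cost plus a small thread-handoff overhead, which is consistent with the very similar throughput numbers you measured.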

