
Conversation

NoahStapp (Contributor):

Please complete the following before merging:

  • Update changelog.
  • Test changes in at least one language driver.
  • Test these changes against all server versions and topologies (including standalone, replica set, and sharded
    clusters).

Python Django implementation: mongodb/django-mongodb-backend#366.

@@ -0,0 +1 @@
{"field1":"miNVpaKW","field2":"CS5VwrwN","field3":"Oq5Csk1w","field4":"ZPm57dhu","field5":"gxUpzIjg","field6":"Smo9whci","field7":"TW34kfzq","field8":55336395,"field9":41992681,"field10":72188733,"field11":46660880,"field12":3527055,"field13":74094448}
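The fixture above is a flat document with seven short string fields followed by six integer fields. A hypothetical generator for documents of this shape (field names and sizes inferred from the sample; not part of the benchmark specification) might look like:

```python
import random
import string


def make_flat_document(num_str_fields: int = 7, num_int_fields: int = 6) -> dict:
    """Generate a flat test document: string fields first, then integer fields.

    Sketch only -- field count and value shapes are inferred from the sample fixture.
    """
    doc = {}
    # field1..field7: 8-character alphanumeric strings
    for i in range(1, num_str_fields + 1):
        doc[f"field{i}"] = "".join(
            random.choices(string.ascii_letters + string.digits, k=8)
        )
    # field8..field13: integers roughly in the sample's range
    for i in range(num_str_fields + 1, num_str_fields + num_int_fields + 1):
        doc[f"field{i}"] = random.randint(0, 99_999_999)
    return doc


print(make_flat_document())
```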
Contributor:

format


### Benchmark Server

The MongoDB ODM Performance Benchmark must be run against a standalone MongoDB server running the latest stable database version.
Contributor:

I think we can open this up to be a standalone or a replica set with a size of 1. (This is because some ODMs leverage transactions.)

Contributor (Author):

Using a replica set of size 1 makes more sense here, agreed.


### Benchmark placement and scheduling

The MongoDB ODM Performance Benchmark should be placed within the ODM's test directory as an independent test suite. Due
Contributor:

I still think we should leave an option for folks to create their own benchmarking repo if that helps out. I'm open to others' take on this one, seeing as I worry about maintainers not wanting a benchmark repo.

to the relatively long runtime of the benchmarks, including them as part of an automated suite that runs against every
PR is not recommended. Instead, scheduling benchmark runs on a regular cadence is the recommended method of automating
this suite of tests.
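One way to structure such a scheduled run is a small in-process harness that a cron-style CI job can invoke and compare across runs. The sketch below (illustrative names only, not part of the specification) warms up each benchmark, times several iterations, and reports the median:

```python
import statistics
import time
from typing import Callable


def run_benchmark(task: Callable[[], None], warmup: int = 2, iterations: int = 5) -> float:
    """Run `task` repeatedly and return the median wall-clock time in seconds.

    Sketch of a scheduled-benchmark harness; a real suite would persist results
    for run-over-run comparison.
    """
    for _ in range(warmup):  # warm caches, connection pools, etc.
        task()
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


# Example: a trivial stand-in for an ODM benchmark case.
median_s = run_benchmark(lambda: sum(range(10_000)))
print(f"median: {median_s:.6f}s")
```

Reporting the median rather than the mean makes scheduled runs less sensitive to one-off scheduling noise on shared CI hosts.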

Contributor:

Per your earlier suggestion, we should include some new information about testing mainline use cases.

Comment on lines +379 to +382
As discussed earlier in this document, ODM feature sets vary significantly across libraries. Many ODMs have features
unique to them or their niche in the wider ecosystem, which makes specifying concrete benchmark test cases for every
possible API infeasible. Instead, ODM authors should determine what mainline use cases of their library are not covered
by the benchmarks specified above and expand this testing suite with additional benchmarks to cover those areas.
Contributor (Author):

This section is attempting to specify that ODMs should implement additional benchmark tests to cover mainline use cases that do not fall into those included in this specification. One example would be the use of Django's `in` filter operator: `Model.objects.filter(field__in=["some_val"])`.
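As a toy illustration of why this lookup is a mainline case worth benchmarking (this is not django-mongodb-backend's actual implementation), a Django-style `field__in=[...]` keyword conceptually translates to a MongoDB `$in` filter document:

```python
def django_lookup_to_mongo_filter(**kwargs) -> dict:
    """Translate Django-style lookup kwargs into a MongoDB filter document.

    Toy sketch only: real ODMs handle many more lookup types, nested fields,
    and query composition.
    """
    mongo_filter = {}
    for key, value in kwargs.items():
        field, sep, lookup = key.partition("__")
        if lookup == "in":
            # Model.objects.filter(field__in=[...]) -> {"field": {"$in": [...]}}
            mongo_filter[field] = {"$in": list(value)}
        elif not sep:
            # A bare field name means an exact match.
            mongo_filter[field] = value
        else:
            raise NotImplementedError(f"lookup {lookup!r} not covered in this sketch")
    return mongo_filter


print(django_lookup_to_mongo_filter(field__in=["some_val"]))
# → {'field': {'$in': ['some_val']}}
```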

@NoahStapp NoahStapp marked this pull request as ready for review August 21, 2025 21:22
@NoahStapp NoahStapp requested a review from a team as a code owner August 21, 2025 21:22
@NoahStapp NoahStapp requested review from JamesKovacs, alexbevi, aclark4life, ajcvickers, rozza, damieng and R-shubham and removed request for a team August 21, 2025 21:22
@rozza rozza removed their request for review August 26, 2025 08:55