valkey-benchmark: Tests for ZSCORE, ZRANGE and SISMEMBER #2575
base: unstable
Conversation
valkey-benchmark: Tests for ZSCORE, ZRANGE and SISMEMBER
This can help improve the test scenarios used in our performance benchmark.
Signed-off-by: Ran Shidlansik <[email protected]>
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##           unstable    #2575    +/-   ##
==========================================
  Coverage     72.18%    72.18%
==========================================
  Files           126       126
  Lines         70662     70675     +13
==========================================
+ Hits          51004     51016     +12
- Misses        19658     19659      +1
Looks good.
For more complete coverage, I guess we need to add many more tests to valkey-benchmark, and then it may be good to change the default set of tests so it does not include all pre-defined tests. Maybe we should also add some grouping, to be able to run only a specific group of tests, such as all the sorted-set tests.
Signed-off-by: Ran Shidlansik <[email protected]>
@zuiderkwast done.
Maybe you are right. Personally I do not mind having a long list of tests running, since a user interested only in a specific subset of tests can use the '-t' flag to achieve that. That being said, there might be a benefit in grouping tests by subject (like we are doing on the website), e.g. groups per data type like 'sets', 'sorted-sets', 'lists', 'geo', 'hash', etc.
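(As an illustration of the '-t' flag mentioned above: it already accepts a comma-separated list of test names, so, assuming the new tests are registered under lowercase names like the existing pre-defined ones, a run restricted to them would look something like `valkey-benchmark -t zscore,zrange,sismember -n 100000`. The exact test names used by this PR are an assumption here.)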
> there might be a benefit in grouping tests by subject (like we are doing on the website), e.g. groups per data type like 'sets', 'sorted-sets', 'lists', 'geo', 'hash', etc.
Yes, we can refine this idea a bit more. Let's postpone it to a future PR?
The benchmark is written generically enough that we don't need to hardcode new tests inside the benchmark. Do we need to add these?
Not strictly necessary. It's only for convenience, I suppose.
It's not really even that convenient, since you have to run something to populate them first and then run the commands. At what point do we just add every single command here?
What's your vision for this tool? The builtin tests just serve as examples?
Mostly yeah. I think some people like to download it and run it.
@madolson I want to summarize the discussion around this:
This can help improve the test scenarios used in our performance benchmark.
see #2508 (comment)
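For context, the pre-defined tests in valkey-benchmark (inherited from its redis-benchmark lineage) are wired up with a simple select/format/run pattern inside the tool's main loop. The sketch below is illustrative only and is not the actual diff of this PR: the helper names (`test_is_selected`, `benchmark`, `redisFormatCommand`) and the `__rand_int__` key placeholder follow that existing convention, while the key names, command arguments, and test labels are assumptions.

```c
/* Illustrative sketch only -- not this PR's diff. Follows the pattern of the
 * existing pre-defined tests; key names and command forms are assumptions. */
if (test_is_selected("zadd")) {
    /* Populates the sorted set that the ZSCORE/ZRANGE tests read back. */
    len = redisFormatCommand(&cmd, "ZADD myzset 0 element:__rand_int__");
    benchmark("ZADD", cmd, len);
    free(cmd);
}
if (test_is_selected("zscore")) {
    len = redisFormatCommand(&cmd, "ZSCORE myzset element:__rand_int__");
    benchmark("ZSCORE", cmd, len);
    free(cmd);
}
if (test_is_selected("zrange")) {
    len = redisFormatCommand(&cmd, "ZRANGE myzset 0 99");
    benchmark("ZRANGE (first 100 elements)", cmd, len);
    free(cmd);
}
if (test_is_selected("sismember")) {
    /* Assumes a set populated by an earlier SADD test. */
    len = redisFormatCommand(&cmd, "SISMEMBER myset element:__rand_int__");
    benchmark("SISMEMBER", cmd, len);
    free(cmd);
}
```

Whether read-side tests like these should run by default, be selected only via '-t', or sit behind a future per-data-type grouping is exactly the open question in the thread above.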