
Conversation

ranshid
Member

@ranshid ranshid commented Sep 3, 2025

This can help improve the test scenarios used in our performance benchmark

see #2508 (comment)

…MEMBER

This can help improve the used test scenarios in our performance benchmark

Signed-off-by: Ran Shidlansik <[email protected]>

codecov bot commented Sep 3, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.18%. Comparing base (47d2203) to head (1ebb403).

Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #2575   +/-   ##
=========================================
  Coverage     72.18%   72.18%           
=========================================
  Files           126      126           
  Lines         70662    70675   +13     
=========================================
+ Hits          51004    51016   +12     
- Misses        19658    19659    +1     
Files with missing lines   Coverage Δ
src/valkey-benchmark.c     61.68% <100.00%> (+0.14%) ⬆️

... and 7 files with indirect coverage changes


Contributor

@zuiderkwast zuiderkwast left a comment


Looks good.

For more complete coverage, I guess we need to add many more tests to valkey-benchmark, and then it may be good to change the default set of tests so that it does not include all the pre-defined tests. Maybe we should also add some grouping, to be able to run only a specific group of tests, such as all the sorted-set tests.

Signed-off-by: Ran Shidlansik <[email protected]>
@ranshid
Member Author

ranshid commented Sep 3, 2025

@zuiderkwast done.

@ranshid ranshid added the release-notes label ("This issue should get a line item in the release notes") Sep 3, 2025
@ranshid
Member Author

ranshid commented Sep 3, 2025

Looks good.

For more complete coverage, I guess we need to add many more tests to valkey-benchmark, and then it may be good to change the default set of tests so that it does not include all the pre-defined tests. Maybe we should also add some grouping, to be able to run only a specific group of tests, such as all the sorted-set tests.

Maybe you are right. Personally I do not mind having a long list of tests running, since a user interested only in a specific subset of tests can use the '-t' flag to achieve that. That said, there might be a benefit to grouping tests by subject (like we do on the website), e.g. groups per data type like 'sets', 'sorted-sets', 'lists', 'geo', 'hash', etc.
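As a rough illustration of the '-t' workflow, a minimal sketch only: -t, -n and -r are existing valkey-benchmark options, while the names the new scenarios are registered under are assumed here to be their lowercase command names.

    # Run only the SET and GET scenarios: 100k requests against a 1M-key keyspace.
    # -t takes a comma-separated list of built-in test names.
    $ valkey-benchmark -t set,get -n 100000 -r 1000000

    # Assuming the new scenarios are registered as lowercase command names,
    # they can be selected the same way:
    $ valkey-benchmark -t zscore,zrange,sismember -n 100000 -r 1000000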

Contributor

@zuiderkwast zuiderkwast left a comment


there might be a benefit to grouping tests by subject (like we do on the website), e.g. groups per data type like 'sets', 'sorted-sets', 'lists', 'geo', 'hash', etc.

Yes, we can refine this idea a bit more. Let's postpone it to a future PR?

@madolson
Member

madolson commented Sep 3, 2025

The benchmark is written generically enough that we don't need to hardcode new tests inside the benchmark. Do we need to add these?

@zuiderkwast zuiderkwast changed the title from "valkey-benchmark - introduce test scenarios for ZCORE, ZRANGE and SISMEMBER" to "valkey-benchmark: Tests for ZSCORE, ZRANGE and SISMEMBER" Sep 3, 2025
@zuiderkwast
Contributor

The benchmark is written generically enough that we don't need to hardcode new tests inside the benchmark. Do we need to add these?

Not strictly necessary. It's only for convenience, I suppose.

@madolson
Member

madolson commented Sep 3, 2025

Not strictly necessary. It's only for convenience, I suppose.

It's not really even that convenient, since you have to run something to populate them first and then run the commands. At what point do we just add every single command here?
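For context, a minimal sketch of that two-step workflow using the benchmark's support for arbitrary commands; the key name 'myzset' and the 'member:' prefix are made up for illustration, and __rand_int__ is the placeholder the benchmark expands to a random 12-digit number within the -r keyspace.

    # Step 1: populate a sorted set with random members (the expanded 12-digit
    # value also serves as the score).
    $ valkey-benchmark -n 100000 -r 100000 ZADD myzset __rand_int__ member:__rand_int__

    # Step 2: benchmark the read command of interest against the populated data.
    $ valkey-benchmark -n 100000 -r 100000 ZSCORE myzset member:__rand_int__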

@zuiderkwast
Contributor

At what point do we just add every single command here?

What's your vision for this tool? The builtin tests just serve as examples?

@madolson
Member

madolson commented Sep 3, 2025

What's your vision for this tool? The builtin tests just serve as examples?

Mostly, yeah. I think some people like to download it and run valkey-benchmark to get a sample of what the performance numbers look like, but it shouldn't really be comprehensive. Folks have done a good job extending it so it's easy to test a variety of workloads, and I think we should continue that trend so that any workload can easily be tested without having to make a change.

@ranshid
Member Author

ranshid commented Sep 10, 2025

What's your vision for this tool? The builtin tests just serve as examples?

Mostly, yeah. I think some people like to download it and run valkey-benchmark to get a sample of what the performance numbers look like, but it shouldn't really be comprehensive. Folks have done a good job extending it so it's easy to test a variety of workloads, and I think we should continue that trend so that any workload can easily be tested without having to make a change.

@madolson I want to summarize the discussion around this:
Basically, you are saying that users should rely on our published benchmark numbers to understand the performance benefits/impact of a new version.
If they would like to test it on their own framework, they should use the tool with tailored commands; or, if they just want a very simple test overview (which we will not keep extending with new commands and groupings), they can still use the currently offered tests?
