Raising Exception for Missing mean_tests or std_tests in MAE Metric #1661
Description
This PR addresses issue #1627. Previously, passing `tests=[...]` to `MAE()` silently ignored the tests, leading to unexpected results in reports. This could confuse users, especially when writing assertions based on the number of tests executed.

Fix

The `MAE` class constructor now raises a `ValueError` if `tests` is provided, directing users to `mean_tests` or `std_tests` depending on what they want to test.
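A minimal sketch of the check, using a standalone stand-in class for illustration (the real `MAE` in the codebase has a different base class and a fuller signature):

```python
from typing import List, Optional


class MAE:
    """Illustrative stand-in for the library's MAE metric."""

    def __init__(
        self,
        mean_tests: Optional[List] = None,
        std_tests: Optional[List] = None,
        tests: Optional[List] = None,
    ) -> None:
        # Previously `tests` was accepted and silently dropped; now it fails fast.
        if tests is not None:
            raise ValueError(
                "MAE does not support `tests`; "
                "use `mean_tests` or `std_tests` instead."
            )
        self.mean_tests = mean_tests
        self.std_tests = std_tests
```

With this change, `MAE(tests=[...])` raises immediately instead of producing a report with fewer tests than expected.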
I had considered using a pydantic `Config` class with `extra = "forbid"` to prevent accidental usage. It felt like a cleaner and more generalized solution for validating input.
However, I wasn't entirely sure if this would break other internal logic or dynamic field handling elsewhere in the codebase.
If this kind of config-based validation is acceptable or preferred, I'm happy to explore replacing the current manual ValueError check with a more declarative Pydantic approach. Let me know what you think!
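For reference, a rough sketch of that declarative variant (pydantic v1-style `Config`, assuming the metric is a pydantic model; field names as above). In pydantic v1, `ValidationError` subclasses `ValueError`, so existing `except ValueError` handling would still catch it:

```python
from typing import List, Optional

from pydantic import BaseModel


class MAE(BaseModel):
    """Illustrative stand-in; the real metric's base model may differ."""

    mean_tests: Optional[List] = None
    std_tests: Optional[List] = None

    class Config:
        # Reject any field not declared on the model, so MAE(tests=[...])
        # raises a ValidationError instead of silently ignoring the input.
        extra = "forbid"
```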