[Misc] test: modify apa scale test #1523
Conversation
Summary of Changes
Hello @jiangxiaobin96, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on improving the test coverage and reliability for the Adaptive Pod Autoscaler (APA) by enabling a previously skipped test and enhancing its internal test cases. The changes aim to ensure the autoscaler behaves as expected under different load conditions, contributing to the overall stability and correctness of the scaling logic.
Highlights
- Test Enablement: The TestAPAScale test function, which was previously skipped, has been re-enabled to allow for proper testing of the APA (Adaptive Pod Autoscaler) behavior.
- Test Case Refinement: The existing test cases within TestAPAScale have been replaced with a more comprehensive set of four distinct scenarios. These new tests cover various scaling conditions, including scenarios within fluctuation tolerance, scaling up, scaling down, and handling zero ready pods.
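The four scenarios above all hinge on how APA compares the per-pod usage ratio against its fluctuation tolerances. Below is a minimal, self-contained sketch of that decision rule; the function name, parameter names, and the tolerance values are assumptions chosen for illustration, not the actual ApaAutoscaler implementation:

```go
package main

import (
	"fmt"
	"math"
)

// desiredPods is a hypothetical sketch of the APA decision rule:
// if the usage ratio stays inside [1-downTol, 1+upTol], keep the
// current replica count; otherwise size to ceil(totalUse/target).
func desiredPods(usePerPod, target float64, readyPods int32, upTol, downTol float64) int32 {
	if readyPods == 0 {
		return 0 // no ready pods: nothing to base a scaling decision on
	}
	ratio := usePerPod / target
	if ratio > 1+upTol || ratio < 1-downTol {
		// outside the tolerance band: resize to fit the total usage
		return int32(math.Ceil(usePerPod * float64(readyPods) / target))
	}
	return readyPods // within tolerance: no scaling
}

func main() {
	// With 5 ready pods each using 5 units (25 total) and assumed
	// tolerances upTol=0.1, downTol=0.2:
	fmt.Println(desiredPods(5, 5, 5, 0.1, 0.2))  // within tolerance -> 5
	fmt.Println(desiredPods(5, 4, 5, 0.1, 0.2))  // above tolerance -> 7
	fmt.Println(desiredPods(5, 10, 5, 0.1, 0.2)) // below tolerance -> 3
	fmt.Println(desiredPods(5, 10, 0, 0.1, 0.2)) // zero ready pods -> 0
}
```

These assumed numbers happen to reproduce the expected replica counts in the rewritten test (5, 7, 3, 0), but the real ApaAutoscaler derives its usage from collected metrics rather than taking it as a parameter.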
Code Review
This pull request enables and rewrites the TestAPAScale test to correctly verify the behavior of the ApaAutoscaler. The new test cases are well-structured and cover scaling up, scaling down, no-scaling scenarios, and the edge case of zero ready pods. The logic appears correct. I've included a suggestion to refactor the test to use a table-driven approach, which will improve its long-term maintainability and readability.
// test 1: currentUsePerPod/expectedUse between [1-DownFluctuationTolerance, 1+UpFluctuationTolerance]
apaScaler.scalingContext.BaseScalingContext.TargetValue = 5
result := apaScaler.Scale(readyPodCount, metricKey, now)
if result.DesiredPodCount != 5 {
	t.Errorf("result.DesiredPodCount = 5, but got %d", result.DesiredPodCount)
}

// test 2: currentUsePerPod/expectedUse larger than 1+UpFluctuationTolerance
apaScaler.scalingContext.BaseScalingContext.TargetValue = 4
result = apaScaler.Scale(readyPodCount, metricKey, now)
if result.DesiredPodCount != 7 {
	t.Errorf("result.DesiredPodCount = 7, but got %d", result.DesiredPodCount)
}

// test 3: currentUsePerPod/expectedUse less than 1-DownFluctuationTolerance
apaScaler.scalingContext.BaseScalingContext.TargetValue = 10
result = apaScaler.Scale(readyPodCount, metricKey, now)
if result.DesiredPodCount != 3 {
	t.Errorf("result.DesiredPodCount = 3, but got %d", result.DesiredPodCount)
}

// test 4: readyPodCount = 0
result = apaScaler.Scale(0, metricKey, now)
if result.DesiredPodCount != 0 {
	t.Errorf("result.DesiredPodCount = 0 when readyPodCount = 0, but got %d", result.DesiredPodCount)
}
These sequential test cases are good, but they can be made more maintainable and easier to extend by refactoring them into a table-driven test. Using subtests with t.Run will also provide clearer output when tests fail, making it easier to debug.
testCases := []struct {
	name                 string
	targetValue          float64
	readyPodCount        int
	expectedDesiredCount int32
}{
	{
		name:                 "should not scale when within tolerance",
		targetValue:          5,
		readyPodCount:        readyPodCount,
		expectedDesiredCount: 5,
	},
	{
		name:                 "should scale up when above tolerance",
		targetValue:          4,
		readyPodCount:        readyPodCount,
		expectedDesiredCount: 7,
	},
	{
		name:                 "should scale down when below tolerance",
		targetValue:          10,
		readyPodCount:        readyPodCount,
		expectedDesiredCount: 3,
	},
	{
		name:                 "should return 0 when readyPodCount is 0",
		targetValue:          10, // target value doesn't matter here
		readyPodCount:        0,
		expectedDesiredCount: 0,
	},
}
for _, tc := range testCases {
	t.Run(tc.name, func(t *testing.T) {
		apaScaler.scalingContext.BaseScalingContext.TargetValue = tc.targetValue
		result := apaScaler.Scale(tc.readyPodCount, metricKey, now)
		if result.DesiredPodCount != tc.expectedDesiredCount {
			t.Errorf("expected DesiredPodCount = %d, but got %d", tc.expectedDesiredCount, result.DesiredPodCount)
		}
	})
}
It looks much better.
Force-pushed from ab8cbc9 to 7565828
Force-pushed from a780ec5 to fae02ad
@googs1025 @Jeffwan, please help review again. Thanks.
LGTM on this table-driven unit test.
Signed-off-by: jiangxiaobin96 <[email protected]>
Signed-off-by: ChethanUK <[email protected]>
Pull Request Description
I'm not sure why the first test of apa_test was skipped, so I modified this test based on my understanding.