test: Add client side install method for restoring Config Sync #1834
Conversation
Summary of Changes
Hello @tiffanny29631, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request enhances the reliability of end-to-end tests, specifically those related to nomos migrate and ACM uninstallation. It addresses test flakiness by ensuring that legacy reconciler-manager and resource-group-controller deployments are explicitly deleted during test teardown. This guarantees a clean slate for Config Sync installations in subsequent test runs, preventing issues caused by lingering states from previous tests.
Highlights
- Improved Test Reliability: Enhances the stability of nomos migrate and ACM uninstall end-to-end tests by preventing state leakage between test runs.
- Explicit Deployment Deletion: Introduces logic to explicitly delete reconciler-manager and resource-group-controller deployments during test cleanup.
- Ensuring Clean State: Guarantees a fresh Config Sync installation for each test, avoiding reliance on patching or implicit cleanup of legacy components.
- Targeted Cleanup: The cleanup logic is applied to the Cleanup functions of TestNomosMigrate, TestNomosMigrateMonoRepo, and TestACMUninstallScript.
Code Review
This pull request adds cleanup logic to delete legacy reconciler-manager and resource-group-controller deployments in several e2e tests. This improves test reliability by ensuring a clean state. The implementation is correct, but the same cleanup logic is duplicated across three test functions. I've suggested refactoring this duplicated code into a helper function to improve maintainability.
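A minimal sketch of what that shared helper could look like, assuming a KubeClient.Delete method and conventional Config Sync namespaces (the helper name and namespaces are not from this PR; only the deployment names are):

```go
// deleteLegacyDeployments is a hypothetical helper that consolidates the duplicated
// cleanup: it deletes the legacy deployments so the next Config Sync install starts
// from a clean slate. Assumes k8s.io/api/apps/v1 (appsv1) and
// k8s.io/apimachinery/pkg/api/errors (apierrors) are imported.
func deleteLegacyDeployments(nt *NT) error {
	legacy := map[string]string{ // deployment name -> assumed namespace
		"reconciler-manager":        "config-management-system",
		"resource-group-controller": "resource-group-system",
	}
	for name, namespace := range legacy {
		d := &appsv1.Deployment{}
		d.SetName(name)
		d.SetNamespace(namespace)
		if err := nt.KubeClient.Delete(d); err != nil && !apierrors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
```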
Force-pushed from ab3731e to 7dd2508.
Note: should only merge once #1791 has stable presubmit results. /hold
Force-pushed from 7dd2508 to b93791b.
Force-pushed from 851c901 to 6cbe85e.
To guarantee the e2e client claims ownership of all fields for objects that might have drifted, use client-side apply when reinstalling Config Sync and the webhook after ConfigManagement was previously installed and removed.
Force-pushed from 6cbe85e to 313a440.
Pull Request Overview
This PR adds a client-side install method for restoring Config Sync in e2e tests to ensure proper field ownership when reinstalling after ConfigManagement removal. The key changes include replacing server-side apply with client-side apply and updating watcher methods to check for current status rather than absence.
- Introduces an InstallConfigSyncFromManifest function using client-side kubectl apply (a sketch follows the file table below)
- Updates test cleanup to use the new manifest-based installation method
- Changes watcher behavior from checking for resource absence to checking current status
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
File | Description |
---|---|
e2e/testcases/cli_test.go | Updates three test functions to use new installation method and modified watcher behavior |
e2e/nomostest/config_sync.go | Adds new client-side manifest installation function with direct kubectl apply |
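For orientation, here is a minimal sketch of what a manifest-based, client-side install helper could look like; the manifest path and the exec-based kubectl call are assumptions, and the real helper presumably reuses the test framework's own kubectl wrapper:

```go
// InstallConfigSyncFromManifest applies the Config Sync manifest with plain kubectl
// (client-side apply) so the e2e client becomes the manager of every field.
// The manifest path is a placeholder, not the path used by this PR.
func InstallConfigSyncFromManifest(nt *NT) error {
	manifest := ".output/staging/oss/config-sync-manifest.yaml" // assumed location
	out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply -f %s failed: %v\n%s", manifest, err, string(out))
	}
	return nil
}
```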
Force-pushed from f13e1ae to 77a9062.
/retest
/unhold
/lgtm
e2e/nomostest/config_sync.go (Outdated)

```go
// InstallConfigSyncFromManifest installs ConfigSync on the test cluster by directly
// applying the manifest file using kubectl client-side apply
func InstallConfigSyncFromManifest(nt *NT) error {
```
This will revert some of the modifications made by the usual InstallConfigSync function used by the test scaffolding. Can this be consolidated to use the same function? I expect you could add an option to use Update instead of Patch/Apply.
Could you elaborate on the 'same function'? I assume you were not suggesting merging the two install functions, since KubeClient.Apply does not achieve the necessary field management or remove any legacy fields. Are you now suggesting kubectl update instead of kubectl apply?
I'm suggesting it would use the client-go Update method (KubeClient.Update) instead of the client-go Apply method (KubeClient.Apply).
The client-go Update has a few edge cases too.
While working on the Config Sync manifest, CRDs like clusters.clusterregistry.k8s.io disallow an update when no resourceVersion is attached. Something like:
cli_test.go:1477: 2025-09-10 05:42:53.626931518 +0000 UTC ERROR: customresourcedefinitions.apiextensions.k8s.io "clusters.clusterregistry.k8s.io" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
The operator also removes the admission webhook, which needs special handling when restoring the shared test environment: a Create instead of an Update. We could also set preventDrift: true when configuring the ConfigManagement?
Anyway, I'm not sure the effort is worth it for a test of a deprecated use case.
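To make those edge cases concrete, here is a rough sketch of the create-or-update fallback being described (the helper name is hypothetical; the Get/Update signatures follow the KubeClient methods quoted elsewhere in this thread, and Create is assumed to exist alongside them):

```go
// updateOrCreate is a hypothetical helper: objects removed by the operator (such as
// the admission webhook) are recreated, while existing objects are updated with the
// live resourceVersion attached, since some CRDs reject updates without one.
func updateOrCreate(nt *NT, o client.Object) error {
	current := o.DeepCopyObject().(client.Object)
	err := nt.KubeClient.Get(o.GetName(), o.GetNamespace(), current)
	switch {
	case apierrors.IsNotFound(err):
		return nt.KubeClient.Create(o) // e.g. the admission webhook removed by the operator
	case err != nil:
		return err
	default:
		o.SetResourceVersion(current.GetResourceVersion())
		return nt.KubeClient.Update(o)
	}
}
```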
OK, I've added the resourceVersion; hopefully that solves the issue.
Yes, you have to set the resourceVersion when using Update. Are you missing context on working with the k8s client libraries? If so, I can take over the PR, although I don't think this deliverable is really needed until we solve the upgrade issue.
Good chance to learn. The latest change seemed to be working.
New changes are detected. LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has not yet been approved. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Pull Request Overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```go
		return err
	}
case InstallMethodUpdate:
	currentObj := o.DeepCopyObject().(client.Object)
```
The type assertion .(client.Object) could panic if DeepCopyObject() returns an object that doesn't implement client.Object. Consider using a type switch or checking the assertion result to handle this more safely.

Suggested change:

```diff
-currentObj := o.DeepCopyObject().(client.Object)
+currentObjObj := o.DeepCopyObject()
+currentObj, ok := currentObjObj.(client.Object)
+if !ok {
+	return fmt.Errorf("object %v does not implement client.Object", core.GKNN(o))
+}
```
```go
// Attach existing resourceVersion to the object
o.SetResourceVersion(currentObj.GetResourceVersion())
if err := nt.KubeClient.Update(o); err != nil {
	return err
}
```
Setting the resource version on the object being updated could cause conflicts if the object was modified between the Get and Update operations. Consider using a retry mechanism or server-side apply to handle concurrent modifications more reliably.

Suggested change:

```diff
-// Attach existing resourceVersion to the object
-o.SetResourceVersion(currentObj.GetResourceVersion())
-if err := nt.KubeClient.Update(o); err != nil {
-	return err
-}
+// Attach existing resourceVersion to the object and retry on conflict
+const maxUpdateAttempts = 5
+var lastErr error
+for attempt := 1; attempt <= maxUpdateAttempts; attempt++ {
+	o.SetResourceVersion(currentObj.GetResourceVersion())
+	if err := nt.KubeClient.Update(o); err != nil {
+		if apierrors.IsConflict(err) {
+			// Re-fetch the latest object and retry
+			if err := nt.KubeClient.Get(currentObj.GetName(), currentObj.GetNamespace(), currentObj); err != nil {
+				lastErr = err
+				break
+			}
+			lastErr = err
+			continue
+		} else {
+			lastErr = err
+			break
+		}
+	} else {
+		lastErr = nil
+		break
+	}
+}
+if lastErr != nil {
+	return lastErr
+}
```
```go
type InstallMethod string

const (
	// InstallMethodApply uses server-side apply (default)
```
It doesn't look like this is currently the default. If the intent is to retain default behavior, I'd suggest updating the function signature to accept a variadic list of options (for example, func InstallConfigSync(nt *NT, opts ...InstallConfigSyncOpts) error).
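A sketch of that variadic-option shape; the option and struct names are assumptions, while InstallMethod and its constants come from this PR:

```go
// InstallConfigSyncOpts is a hypothetical functional option for InstallConfigSync.
type InstallConfigSyncOpts func(*installOptions)

type installOptions struct {
	method InstallMethod
}

// WithInstallMethod overrides the install method; omitting it keeps the existing
// server-side-apply behavior as the default.
func WithInstallMethod(m InstallMethod) InstallConfigSyncOpts {
	return func(o *installOptions) { o.method = m }
}

func InstallConfigSync(nt *NT, opts ...InstallConfigSyncOpts) error {
	options := &installOptions{method: InstallMethodApply}
	for _, opt := range opts {
		opt(options)
	}
	// ... apply or update each manifest object according to options.method ...
	return nil
}
```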
```go
		return err
	}
case InstallMethodUpdate:
	currentObj := o.DeepCopyObject().(client.Object)
```
A DeepCopy should not be necessary here for every object; can this be switched to populating an empty object (e.g. Unstructured)?
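A small sketch of that suggestion, assuming KubeClient.Get has the signature quoted earlier in this review: only the live resourceVersion is read into an empty Unstructured instead of deep-copying the declared object.

```go
// attachLiveResourceVersion is a hypothetical helper: it fetches the live object's
// resourceVersion into an empty Unstructured (no DeepCopy of the declared object)
// and copies it onto the object about to be updated. Assumes
// k8s.io/apimachinery/pkg/apis/meta/v1/unstructured is imported.
func attachLiveResourceVersion(nt *NT, o client.Object) error {
	current := &unstructured.Unstructured{}
	current.SetGroupVersionKind(o.GetObjectKind().GroupVersionKind())
	if err := nt.KubeClient.Get(o.GetName(), o.GetNamespace(), current); err != nil {
		return err
	}
	o.SetResourceVersion(current.GetResourceVersion())
	return nil
}
```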
```go
			return err
		}
	} else {
		// Attach existing resourceVersion to the object
```
The comment doesn't add much context; I'd suggest explaining why it's attaching the resourceVersion to the object.
In this instance, the current state of the object on the cluster does not matter; the intent is to fully update the object to the original configuration declared in the manifest.
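Given that intent, the inline comment could spell out the reasoning, for example (wording is only a suggestion):

```go
// Carry over the live object's resourceVersion so the API server accepts the Update;
// the cluster's current state is intentionally discarded and the object is overwritten
// with the configuration declared in the manifest.
o.SetResourceVersion(currentObj.GetResourceVersion())
```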