Conversation

tiffanny29631
Contributor

@tiffanny29631 tiffanny29631 commented Aug 19, 2025

To guarantee the e2e client claims ownership of all fields for objects that might have drifted, use client-side apply when reinstalling Config Sync after ConfigManagement was previously installed and removed.
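As a rough illustration of the approach (a sketch under assumptions, not the code in this PR; the package name, helper name, and paths are made up), a client-side install helper can simply shell out to kubectl so the kubectl field manager claims every field declared in the manifest:

```go
// Sketch only: reinstall Config Sync with a client-side `kubectl apply`.
package nomostest

import (
	"fmt"
	"os/exec"
)

func installConfigSyncClientSide(kubeconfig, manifestPath string) error {
	// A plain `kubectl apply` (without --server-side) is a client-side apply,
	// so kubectl takes ownership of every field declared in the manifest.
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifestPath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}
```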

@google-oss-prow google-oss-prow bot requested review from mikebz and sdowell August 19, 2025 23:44

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @tiffanny29631, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the reliability of end-to-end tests, specifically those related to nomos migrate and ACM uninstallation. It addresses test flakiness by ensuring that legacy reconciler-manager and resource-group-controller deployments are explicitly deleted during test teardown. This guarantees a clean slate for Config Sync installations in subsequent test runs, preventing issues caused by lingering states from previous tests.

Highlights

  • Improved Test Reliability: Enhances the stability of nomos migrate and ACM uninstall end-to-end tests by preventing state leakage between test runs.
  • Explicit Deployment Deletion: Introduces logic to explicitly delete reconciler-manager and resource-group-controller deployments during test cleanup.
  • Ensuring Clean State: Guarantees a fresh Config Sync installation for each test, avoiding reliance on patching or implicit cleanup of legacy components.
  • Targeted Cleanup: The cleanup logic is applied to the Cleanup functions of TestNomosMigrate, TestNomosMigrateMonoRepo, and TestACMUninstallScript.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds cleanup logic to delete legacy reconciler-manager and resource-group-controller deployments in several e2e tests. This improves test reliability by ensuring a clean state. The implementation is correct, but the same cleanup logic is duplicated across three test functions. I've suggested refactoring this duplicated code into a helper function to improve maintainability.
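A shared helper along the lines of that suggestion could look like the sketch below; it is not the PR's code, and the controller-runtime client, namespace, and function name are assumptions:

```go
// Sketch only: delete the legacy Deployments so each test starts from a clean install.
package nomostest

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func deleteLegacyDeployments(ctx context.Context, c client.Client) error {
	for _, name := range []string{"reconciler-manager", "resource-group-controller"} {
		d := &appsv1.Deployment{
			ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "config-management-system"},
		}
		// Ignore NotFound so the cleanup stays idempotent across test runs.
		if err := client.IgnoreNotFound(c.Delete(ctx, d)); err != nil {
			return err
		}
	}
	return nil
}
```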

@google-oss-prow google-oss-prow bot added size/S and removed size/M labels Aug 27, 2025
@tiffanny29631 tiffanny29631 changed the title from "Cleanup legacy controller deployments in test teardown" to "test: Add client side install method for restoring Config Sync" Aug 27, 2025
@tiffanny29631
Contributor Author

Note: should only merge if #1791 has stable success presubmit result.

/hold

To guarantee the e2e client claims ownership of all fields for objects that might have drifted, use client-side apply when reinstalling Config Sync and the webhook after ConfigManagement was previously installed and removed.

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds a client-side install method for restoring Config Sync in e2e tests to ensure proper field ownership when reinstalling after ConfigManagement removal. The key changes include replacing server-side apply with client-side apply and updating watcher methods to check for current status rather than absence.

  • Introduces InstallConfigSyncFromManifest function using client-side kubectl apply
  • Updates test cleanup to use the new manifest-based installation method
  • Changes watcher behavior from checking for resource absence to checking current status

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

  • e2e/testcases/cli_test.go: Updates three test functions to use the new installation method and the modified watcher behavior (see the sketch after this list)
  • e2e/nomostest/config_sync.go: Adds a new client-side manifest installation function with direct kubectl apply
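"Checking current status rather than absence" amounts to a readiness wait like the sketch below; it uses plain client-go polling instead of the test framework's watcher, and the function name and timeouts are assumptions:

```go
// Sketch only: wait until the reinstalled Deployment is available again,
// instead of waiting for it to be gone.
package nomostest

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForDeploymentCurrent(ctx context.Context, c client.Client, name, namespace string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			d := &appsv1.Deployment{}
			if err := c.Get(ctx, client.ObjectKey{Name: name, Namespace: namespace}, d); err != nil {
				// Keep polling while the Deployment has not been created yet.
				return false, client.IgnoreNotFound(err)
			}
			// "Current" here simply means all desired replicas are available.
			return d.Spec.Replicas != nil && d.Status.AvailableReplicas == *d.Spec.Replicas, nil
		})
}
```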


@tiffanny29631
Contributor Author

/retest

@tiffanny29631
Contributor Author

/unhold

Contributor

@Camila-B Camila-B left a comment


/lgtm


// InstallConfigSyncFromManifest installs ConfigSync on the test cluster by directly
// applying the manifest file using kubectl client-side apply
func InstallConfigSyncFromManifest(nt *NT) error {
Contributor


This will revert some of the modifications made by the usual InstallConfigSync function used by the test scaffolding. Can this be consolidated to use the same function? I expect you could add an option to use Update instead of Patch/Apply

Contributor Author


Could you elaborate on the 'same function'? I assume you were not suggesting merging the two install functions, since KubeClient.Apply does not achieve the necessary field management or remove any legacy fields. Are you now suggesting kubectl update instead of kubectl apply?

Contributor


I'm suggesting it would use the client-go Update (KubeClient.Update) method instead of the client-go Apply method (KubeClient.Apply)

Contributor Author


The client-go Update has a few edge cases too.

When applying the Config Sync manifest, CRDs like clusters.clusterregistry.k8s.io disallow an update when no resourceVersion is attached. Something like:

    cli_test.go:1477: 2025-09-10 05:42:53.626931518 +0000 UTC ERROR: customresourcedefinitions.apiextensions.k8s.io "clusters.clusterregistry.k8s.io" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

The operator also removes the admission webhook, which will need special handling when restoring the shared test env: a create instead of an update. We could also set preventDrift: true when configuring the ConfigManagement?

Anyway, I'm not sure the effort is worth it for a test of a deprecated use case.
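To make the trade-off concrete, the update path under discussion looks roughly like the sketch below. It reuses the KubeClient call shapes quoted elsewhere in this review (Get(name, namespace, obj), Update(obj)); the Create fallback, the function name, and the omitted imports are assumptions rather than the final diff:

```go
// Sketch only: carry the live resourceVersion over so Update is accepted, and
// fall back to Create when the object no longer exists (e.g. the admission
// webhook removed by the operator).
func updateOrCreate(nt *NT, o client.Object) error {
	currentObj := o.DeepCopyObject().(client.Object)
	err := nt.KubeClient.Get(o.GetName(), o.GetNamespace(), currentObj)
	switch {
	case apierrors.IsNotFound(err):
		// The operator removed this object; recreate it instead of updating it.
		return nt.KubeClient.Create(o)
	case err != nil:
		return err
	default:
		// Some CRDs (e.g. clusters.clusterregistry.k8s.io) reject an update
		// without a resourceVersion, so copy it from the live object.
		o.SetResourceVersion(currentObj.GetResourceVersion())
		return nt.KubeClient.Update(o)
	}
}
```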

Contributor Author


OK, added the resourceVersion; hopefully that solves the issue.

Contributor


Yes, you have to set the resourceVersion when using Update. Are you missing context on working with the k8s client libraries? If so I can take over the PR, although I don't think this deliverable is really needed until we solve the upgrade issue.

Contributor Author


Good chance to learn. The latest change seemed to be working.


New changes are detected. LGTM label has been removed.

@google-oss-prow google-oss-prow bot removed the lgtm label Sep 9, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from camila-b. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Camila-B Camila-B removed their assignment Sep 10, 2025
@mikebz mikebz requested a review from Copilot September 11, 2025 17:51

@Copilot Copilot AI left a comment


Pull Request Overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



return err
}
case InstallMethodUpdate:
currentObj := o.DeepCopyObject().(client.Object)

Copilot AI Sep 11, 2025


The type assertion .(client.Object) could panic if DeepCopyObject() returns an object that doesn't implement client.Object. Consider using a type switch or checking the assertion result to handle this more safely.

Suggested change
-	currentObj := o.DeepCopyObject().(client.Object)
+	currentObjObj := o.DeepCopyObject()
+	currentObj, ok := currentObjObj.(client.Object)
+	if !ok {
+		return fmt.Errorf("object %v does not implement client.Object", core.GKNN(o))
+	}


Comment on lines +267 to +271
// Attach existing resourceVersion to the object
o.SetResourceVersion(currentObj.GetResourceVersion())
if err := nt.KubeClient.Update(o); err != nil {
return err
}

Copilot AI Sep 11, 2025


Setting the resource version on the object being updated could cause conflicts if the object was modified between the Get and Update operations. Consider using a retry mechanism or server-side apply to handle concurrent modifications more reliably.

Suggested change
-	// Attach existing resourceVersion to the object
-	o.SetResourceVersion(currentObj.GetResourceVersion())
-	if err := nt.KubeClient.Update(o); err != nil {
-		return err
-	}
+	// Attach existing resourceVersion to the object and retry on conflict
+	const maxUpdateAttempts = 5
+	var lastErr error
+	for attempt := 1; attempt <= maxUpdateAttempts; attempt++ {
+		o.SetResourceVersion(currentObj.GetResourceVersion())
+		if err := nt.KubeClient.Update(o); err != nil {
+			if apierrors.IsConflict(err) {
+				// Re-fetch the latest object and retry
+				if err := nt.KubeClient.Get(currentObj.GetName(), currentObj.GetNamespace(), currentObj); err != nil {
+					lastErr = err
+					break
+				}
+				lastErr = err
+				continue
+			} else {
+				lastErr = err
+				break
+			}
+		} else {
+			lastErr = nil
+			break
+		}
+	}
+	if lastErr != nil {
+		return lastErr
+	}


type InstallMethod string

const (
// InstallMethodApply uses server-side apply (default)
Contributor


It doesn't look like this is currently the default. If the intent is to retain default behavior, I'd suggest updating the function signature to accept a variadic list of options (for example func InstallConfigSync(nt *NT, opts ...InstallConfigSyncOpts) error {)
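For illustration, the variadic-options shape could look like the sketch below; every name here is illustrative rather than the repo's actual API:

```go
// Sketch only: functional options keep server-side apply as the default while
// letting a test opt into another install method.
type installOptions struct {
	method InstallMethod
}

type InstallConfigSyncOpt func(*installOptions)

// WithInstallMethod overrides the default install method.
func WithInstallMethod(m InstallMethod) InstallConfigSyncOpt {
	return func(o *installOptions) { o.method = m }
}

func InstallConfigSync(nt *NT, opts ...InstallConfigSyncOpt) error {
	options := &installOptions{method: InstallMethodApply} // default: server-side apply
	for _, opt := range opts {
		opt(options)
	}
	// ... install each manifest object using options.method ...
	return nil
}
```

Callers that need the legacy-restore behavior would pass WithInstallMethod(InstallMethodUpdate); everyone else keeps the current default.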

return err
}
case InstallMethodUpdate:
currentObj := o.DeepCopyObject().(client.Object)
Contributor


A DeepCopy should not be necessary here for every object; can this be switched to populating an empty object (e.g. Unstructured)?
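For illustration, a drop-in fragment for the line quoted above could be (not verified against the diff):

```go
// Only the live resourceVersion is needed, so populate an empty Unstructured
// with the object's GVK instead of deep-copying the whole object.
currentObj := &unstructured.Unstructured{}
currentObj.SetGroupVersionKind(o.GetObjectKind().GroupVersionKind())
if err := nt.KubeClient.Get(o.GetName(), o.GetNamespace(), currentObj); err != nil {
	return err
}
o.SetResourceVersion(currentObj.GetResourceVersion())
```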

return err
}
} else {
// Attach existing resourceVersion to the object
Contributor


The comment doesn't add much context; I'd suggest explaining why it's attaching the resourceVersion to the object.

In this instance, the current state of the object on the cluster does not matter, and the intent is to fully update to the original configuration declared in the manifest.
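For example (just a wording suggestion, not a required change), the comment could state that intent directly:

```go
// The live state of this object does not matter here: the goal is to reset it
// to exactly what the manifest declares. The resourceVersion is copied from
// the cluster only so that the API server accepts the Update call.
o.SetResourceVersion(currentObj.GetResourceVersion())
```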
