feat: add configurable resource tracking support #690
base: main
Conversation
📝 Walkthrough

The agent now consults a ResourceTrackingReader (backed by an informer-linked SettingsManager) to determine the active resource-tracking method and uses it when deciding whether a Kubernetes resource is managed by Argo CD, replacing the previous hardcoded label/annotation checks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent
    participant TR as ResourceTrackingReader
    participant SM as SettingsManager
    participant K8s as Kubernetes API
    Agent->>TR: NewResourceTrackingReader(ctx, kubeClient, namespace)
    TR->>SM: Initialize informer-backed cache (watch ConfigMap)
    SM->>K8s: Watch argocd-cm ConfigMap
    Note over Agent,TR: When evaluating a resource
    Agent->>TR: IsResourceTracked(labels, annotations)
    TR->>SM: GetTrackingMethod()
    SM-->>TR: Return tracking method
    TR->>TR: Evaluate labels/annotations per method
    TR-->>Agent: Return tracked? (true/false)
    Agent->>K8s: (if needed) follow ownerRefs -> repeat IsResourceTracked via TR
```
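For orientation, here is a minimal, self-contained Go sketch of the decision flow in the diagram above. The type and method names mirror those discussed in the review below (`ResourceTrackingConfig`, `IsResourceTracked`); the concrete implementation in agent/tracking.go differs in detail, so treat this as an illustration only.

```go
package main

import "fmt"

// Sketch only: mirrors the flow in the sequence diagram above.
// Values follow the review discussion; the real types live in agent/tracking.go.
type TrackingMethod string

const (
	TrackingMethodAnnotation         TrackingMethod = "annotation"
	TrackingMethodLabel              TrackingMethod = "label"
	TrackingMethodAnnotationAndLabel TrackingMethod = "annotation+label"
)

type ResourceTrackingConfig struct {
	Method        TrackingMethod
	AnnotationKey string // e.g. "argocd.argoproj.io/tracking-id"
	LabelKey      string // e.g. "app.kubernetes.io/instance"
}

// IsResourceTracked evaluates labels/annotations according to the configured method.
func (c ResourceTrackingConfig) IsResourceTracked(labels, annotations map[string]string) bool {
	_, hasAnnotation := annotations[c.AnnotationKey]
	_, hasLabel := labels[c.LabelKey]
	switch c.Method {
	case TrackingMethodAnnotation:
		return hasAnnotation
	case TrackingMethodLabel:
		return hasLabel
	case TrackingMethodAnnotationAndLabel:
		return hasAnnotation || hasLabel // the review later debates || vs &&
	default:
		return hasAnnotation // fall back to annotation, Argo CD's default
	}
}

func main() {
	cfg := ResourceTrackingConfig{
		Method:        TrackingMethodLabel,
		AnnotationKey: "argocd.argoproj.io/tracking-id",
		LabelKey:      "app.kubernetes.io/instance",
	}
	labels := map[string]string{"app.kubernetes.io/instance": "guestbook"}
	fmt.Println(cfg.IsResourceTracked(labels, nil)) // true
}
```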
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Fix all issues with AI Agents 🤖
In @test/e2e/resource_tracking_test.go:
- Around lines 367-463: Test_ResourceTracking_ConfigChange updates the ConfigMap key cm2.Data["application.resourceTrackingMethod"] but never asserts that the agent picked up the change. After switching to "label", trigger a new sync (e.g., patch the Application resource `app` or update its spec to force a reconciliation), then fetch the target Deployment ("guestbook-ui" in namespace "tracking-test") and assert that the tracking annotation is removed and the expected tracking label is present on the resource. If forcing a new sync isn't possible in this test harness, mark Test_ResourceTracking_ConfigChange as a placeholder with a TODO noting the missing verification.
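A rough sketch of the missing verification, using plain client-go and testify. The helper wiring is an assumption — the suite uses its own fixture clients — so treat the function and client names below as placeholders.

```go
package e2e

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// assertLabelTrackingApplied is a hypothetical helper: after switching the tracking
// method to "label" and forcing a new sync, poll the live Deployment until the
// tracking annotation is gone and the instance label is present. `clientset` is
// assumed to be a client-go kubernetes.Interface pointed at the managed cluster.
func assertLabelTrackingApplied(t *testing.T, ctx context.Context, clientset kubernetes.Interface) {
	require.Eventually(t, func() bool {
		dep, err := clientset.AppsV1().Deployments("tracking-test").Get(ctx, "guestbook-ui", metav1.GetOptions{})
		if err != nil {
			return false
		}
		_, hasAnnotation := dep.Annotations["argocd.argoproj.io/tracking-id"]
		_, hasLabel := dep.Labels["app.kubernetes.io/instance"]
		return !hasAnnotation && hasLabel
	}, 2*time.Minute, 5*time.Second)
}
```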
🧹 Nitpick comments (4)
agent/tracking_test.go (1)
316-343: Consider increasing the sleep tolerance or using a synchronization mechanism to prevent flaky tests.

The test relies on `time.Sleep(50 * time.Millisecond)`, which could be flaky under load. While the comment acknowledges this is a basic test, consider using a channel or condition variable to make the test more deterministic.

🔎 Proposed improvement:
```diff
 func TestResourceTrackingReader_StartPeriodicRefresh(t *testing.T) {
 	t.Run("Periodic refresh updates config", func(t *testing.T) {
-		// This is a basic test - in reality we'd need a more sophisticated
-		// setup to test the periodic refresh properly
 		cm := &corev1.ConfigMap{
 			ObjectMeta: v1.ObjectMeta{
 				Name:      "argocd-cm",
 				Namespace: "argocd",
 			},
 			Data: map[string]string{
 				"application.resourceTrackingMethod": "label",
 			},
 		}
 		kubeClient := kube.NewDynamicFakeClient(cm)
 		reader := NewResourceTrackingReader(kubeClient, "argocd")
 		reader.refreshInterval = 100 * time.Millisecond

 		ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
 		defer cancel()

 		reader.StartPeriodicRefresh(ctx)

-		// Give it a moment to do initial refresh
-		time.Sleep(50 * time.Millisecond)
-
-		config := reader.GetConfig()
-		assert.Equal(t, TrackingMethodLabel, config.Method)
+		// Use Eventually to avoid flaky timing-based assertions
+		require.Eventually(t, func() bool {
+			config := reader.GetConfig()
+			return config.Method == TrackingMethodLabel
+		}, 200*time.Millisecond, 10*time.Millisecond)
 	})
 }
```

agent/resource.go (1)
401-419: Consider adding a nil check for the `trackingConfig` parameter.

The function will panic if `trackingConfig` is nil when calling `IsResourceTracked`. While current callers always pass a valid config, defensive programming would prevent potential issues from future code changes.

🔎 Proposed fix:
```diff
 func isResourceManaged(kube *kube.KubernetesClient, res *unstructured.Unstructured, maxRecurse int, trackingConfig *ResourceTrackingConfig) (bool, error) {
 	if maxRecurse < 1 {
 		return false, fmt.Errorf("recursion limit reached")
 	}
+
+	if trackingConfig == nil {
+		trackingConfig = DefaultResourceTrackingConfig()
+	}

 	// If the resource carries a tracking label or annotation based on the
```

agent/tracking.go (2)
98-146: Consider returning the error instead of swallowing it when ConfigMap retrieval fails.

The function signature indicates it returns an error, but when the ConfigMap is not found, it returns `nil` after logging. This might mask issues where the ConfigMap should exist but doesn't. Consider distinguishing between "ConfigMap not found" (which may be expected during initial setup) and other errors.

🔎 Proposed improvement:
```diff
 	// Get the argocd-cm ConfigMap
 	cm, err := r.kubeClient.Clientset.CoreV1().ConfigMaps(r.namespace).Get(ctx, r.configMapName, v1.GetOptions{})
 	if err != nil {
-		logCtx.WithError(err).Warnf("Failed to get %s ConfigMap, using default tracking config", r.configMapName)
-		// Use default config if ConfigMap is not found
-		r.config = DefaultResourceTrackingConfig()
-		r.lastRefresh = time.Now()
-		return nil
+		// Use default config if ConfigMap is not found or inaccessible
+		if apierrors.IsNotFound(err) {
+			logCtx.Debugf("ConfigMap %s not found, using default tracking config", r.configMapName)
+		} else {
+			logCtx.WithError(err).Warnf("Failed to get %s ConfigMap, using default tracking config", r.configMapName)
+		}
+		r.config = DefaultResourceTrackingConfig()
+		r.lastRefresh = time.Now()
+		return nil // Still return nil to allow agent to start with defaults
 	}
```

Note: This would require importing `apierrors "k8s.io/apimachinery/pkg/api/errors"`.
181-212: Unnecessary map allocations for nil inputs.

Creating new empty maps for nil labels/annotations is unnecessary, since the only operation is a map lookup, which safely returns `false, ""` for nil maps in Go.

🔎 Proposed simplification:
```diff
 func (r *ResourceTrackingConfig) IsResourceTracked(labels map[string]string, annotations map[string]string) bool {
-	if labels == nil {
-		labels = make(map[string]string)
-	}
-	if annotations == nil {
-		annotations = make(map[string]string)
-	}
-
 	switch r.Method {
 	case TrackingMethodAnnotation:
 		// Only check annotation
 		_, hasAnnotation := annotations[r.AnnotationKey]
 		return hasAnnotation
```

Go's map lookup on a nil map is safe and returns the zero value, so these allocations are unnecessary overhead.
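For reference, a lookup on a nil map is safe in Go and returns the zero value, which is why the allocations above can simply be dropped:

```go
package main

import "fmt"

func main() {
	var labels map[string]string // nil map, never allocated
	v, ok := labels["app.kubernetes.io/instance"]
	fmt.Printf("value=%q found=%v\n", v, ok) // value="" found=false
}
```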
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
🧰 Additional context used
🧬 Code graph analysis (6)
agent/agent.go (1)
- agent/tracking.go (2): ResourceTrackingReader (64-72), NewResourceTrackingReader (75-83)

agent/tracking.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- internal/logging/logging.go (2): Info (295-297), Warn (300-302)

agent/tracking_test.go (1)
- agent/tracking.go (6): DefaultResourceTrackingConfig (54-60), TrackingMethodAnnotationAndLabel (38-38), ResourceTrackingConfig (42-49), TrackingMethodAnnotation (34-34), TrackingMethodLabel (36-36), NewResourceTrackingReader (75-83)

agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (42-49)

test/e2e/resource_tracking_test.go (3)
- test/e2e/fixture/fixture.go (1): BaseSuite (43-50)
- internal/logging/logfields/logfields.go (2): Name (59-59), Status (130-130)
- internal/event/event.go (3): Create (57-57), Delete (58-58), Update (59-59)

agent/resource_test.go (1)
- agent/tracking.go (5): DefaultResourceTrackingConfig (54-60), ResourceTrackingConfig (42-49), TrackingMethodLabel (36-36), TrackingMethodAnnotation (34-34), NewResourceTrackingReader (75-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Run end-to-end tests
- GitHub Check: Build & cache Go code
- GitHub Check: Lint Go code
- GitHub Check: Run unit tests
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (22)
agent/agent.go (3)
117-119: LGTM! The `trackingReader` field is properly added to the Agent struct and follows the existing field organization pattern.

316-317: LGTM! The `trackingReader` is correctly initialized with the Kubernetes client and namespace, matching the pattern used for other agent components.

360-361: LGTM! The periodic refresh is started with `a.context`, which is properly set earlier in the `Start` method (line 342). The goroutine will correctly terminate when the agent context is cancelled.

agent/tracking_test.go (4)
29-36: LGTM! Good test coverage for the default configuration values, ensuring backward compatibility with the expected default keys.

38-164: LGTM! Excellent table-driven test coverage for `IsResourceTracked` across all tracking methods and edge cases. The test cases comprehensively cover annotation-only, label-only, annotation+label, and empty-input scenarios.

166-283: LGTM! Good coverage of `RefreshConfig` scenarios including ConfigMap not found, various tracking methods, whitespace handling, and unknown-method fallback. The tests properly verify the expected fallback behavior.

285-301: LGTM! Important test verifying that `GetConfig` returns independent copies, preventing external modification of the internal state.

agent/resource.go (2)
322-322: LGTM! The tracking configuration is correctly retrieved from the reader and passed to `isResourceManaged`.

468-468: LGTM! The tracking configuration is correctly propagated through the recursive call chain.

agent/resource_test.go (3)

28-29: LGTM! Good practice to create a shared default tracking config at the start of the test function for reuse across subtests.

289-406: LGTM! Excellent addition of dedicated subtests for different tracking methods. The tests properly verify that:
- Label-only tracking ignores annotations
- Annotation-only tracking ignores labels
- Custom tracking keys work correctly
This provides strong coverage for the configurable tracking feature.
446-446: LGTM! The `trackingReader` is consistently initialized across all test cases that create an Agent instance, ensuring the tests accurately reflect the production behavior.

Also applies to: 512-512, 562-562, 611-611, 632-632, 666-666, 698-698, 1156-1156
test/e2e/resource_tracking_test.go (4)
39-51: LGTM! Proper test setup creating a dedicated namespace for isolation.

53-83: LGTM! Good cleanup logic that resets the ConfigMap to the default tracking method and properly delegates to the base teardown.

105-107: The 10-second sleep may be insufficient for config refresh. The default refresh interval is 5 minutes (as configured in `NewResourceTrackingReader`). The 10-second sleep assumes the agent will do an initial refresh on startup, but if the agent is already running, it may take up to 5 minutes for the config change to be picked up. Consider either:
- Reducing the refresh interval for E2E tests via a test-specific configuration
- Triggering a manual refresh if possible
- Waiting longer or using a polling mechanism
This could lead to flaky E2E tests. Please verify that the agent performs an initial refresh when `StartPeriodicRefresh` is called, which would make the 10-second wait sufficient for the initial test setup.

85-176: LGTM! Comprehensive E2E tests covering all three tracking methods. Each test properly:
- Configures the tracking method via ConfigMap
- Creates and syncs an application
- Verifies the expected tracking metadata on deployed resources
- Cleans up after itself
Also applies to: 178-268, 270-365
agent/tracking.go (6)
29-39: LGTM! Clean type definition with well-documented constants matching Argo CD's tracking methods.

41-60: LGTM! The `ResourceTrackingConfig` struct and `DefaultResourceTrackingConfig` function are well-designed with sensible defaults that support backward compatibility.

64-83: LGTM! The `ResourceTrackingReader` struct properly encapsulates the configuration caching with synchronization primitives. The 5-minute refresh interval is reasonable for configuration changes.

86-95: LGTM! Correctly returns a copy of the configuration to prevent external modifications, with proper read-lock protection.
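The pattern praised here, sketched in isolation. This is illustrative only — the PR's `GetConfig` follows the same copy-on-read idea, but with its own field names and types:

```go
package main

import (
	"fmt"
	"sync"
)

// trackingConfig stands in for the PR's ResourceTrackingConfig.
type trackingConfig struct {
	Method        string
	AnnotationKey string
	LabelKey      string
}

type configReader struct {
	mu     sync.RWMutex
	config trackingConfig
}

// GetConfig takes a read lock and returns the struct by value, so callers
// always get a copy and can never mutate the shared state.
func (r *configReader) GetConfig() trackingConfig {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.config
}

func main() {
	r := &configReader{config: trackingConfig{Method: "annotation"}}
	c := r.GetConfig()
	c.Method = "label"                // mutating the copy...
	fmt.Println(r.GetConfig().Method) // ...does not affect the reader: "annotation"
}
```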
149-177: LGTM! The periodic refresh implementation correctly:
- Performs an initial refresh before starting the ticker
- Properly handles context cancellation
- Cleans up the ticker on exit
- Logs errors without crashing
Good use of goroutine lifecycle management.
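A minimal sketch of the lifecycle pattern described above (initial refresh, ticker, context cancellation). Names are illustrative, not the PR's exact code:

```go
package main

import (
	"context"
	"log"
	"time"
)

// startPeriodicRefresh refreshes once up front, then on every tick until the
// context is cancelled; errors are logged and never crash the loop.
func startPeriodicRefresh(ctx context.Context, interval time.Duration, refresh func(context.Context) error) {
	go func() {
		if err := refresh(ctx); err != nil {
			log.Printf("initial refresh failed: %v", err)
		}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if err := refresh(ctx); err != nil {
					log.Printf("refresh failed: %v", err)
				}
			}
		}
	}()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	startPeriodicRefresh(ctx, 100*time.Millisecond, func(context.Context) error {
		log.Println("refreshing tracking config")
		return nil
	})
	<-ctx.Done()
}
```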
214-218: LGTM! Clean string representation useful for debugging and logging.
Force-pushed from 80d532c to b21f446.
Actionable comments posted: 1
Fix all issues with AI Agents 🤖
In @agent/tracking_test.go:
- Around line 263-282: The test title is misleading: update the t.Run
description string from "ConfigMap with unknown method defaults to annotation"
to reflect the actual expected fallback (e.g., "ConfigMap with unknown method
defaults to annotation and label" or "defaults to annotation+label") so it
matches the assertion that config.Method equals
TrackingMethodAnnotationAndLabel; you only need to change the test name string
in the t.Run call surrounding the
NewResourceTrackingReader/RefreshConfig/GetConfig sequence.
🧹 Nitpick comments (4)
agent/tracking.go (4)
62-72: Unused field `lastRefresh`.

The `lastRefresh` field is set in `RefreshConfig` but never read. If it's intended for future use (e.g., staleness checks or debugging), consider adding a comment. Otherwise, it can be removed.
74-83: Consider a nil check for `kubeClient`.

If `kubeClient` is `nil`, `RefreshConfig` will panic when accessing `r.kubeClient.Clientset`. Consider adding a nil check or documenting that a valid client is required.

🔎 Proposed defensive check:
```diff
 func NewResourceTrackingReader(kubeClient *kube.KubernetesClient, namespace string) *ResourceTrackingReader {
+	if kubeClient == nil {
+		panic("kubeClient cannot be nil")
+	}
 	return &ResourceTrackingReader{
 		kubeClient:      kubeClient,
 		namespace:       namespace,
 		config:          DefaultResourceTrackingConfig(),
 		refreshInterval: 5 * time.Minute, // Refresh every 5 minutes
 		configMapName:   "argocd-cm",
 	}
 }
```
98-115: Lock held during a network call.

The write lock is held while fetching the ConfigMap (line 108), which could block `GetConfig` callers during API latency. Consider fetching outside the lock and only locking during the config update.

🔎 Proposed refactor to minimize lock contention:
```diff
 func (r *ResourceTrackingReader) RefreshConfig(ctx context.Context) error {
 	logCtx := log().WithFields(logrus.Fields{
 		"method":    "RefreshConfig",
 		"namespace": r.namespace,
 	})

-	r.mu.Lock()
-	defer r.mu.Unlock()
-
 	// Get the argocd-cm ConfigMap
 	cm, err := r.kubeClient.Clientset.CoreV1().ConfigMaps(r.namespace).Get(ctx, r.configMapName, v1.GetOptions{})
 	if err != nil {
 		logCtx.WithError(err).Warnf("Failed to get %s ConfigMap, using default tracking config", r.configMapName)
 		// Use default config if ConfigMap is not found
+		r.mu.Lock()
 		r.config = DefaultResourceTrackingConfig()
 		r.lastRefresh = time.Now()
+		r.mu.Unlock()
 		return nil
 	}

 	newConfig := DefaultResourceTrackingConfig()
 	// ... parsing logic ...

+	r.mu.Lock()
 	// Check if config has changed and log it
 	if r.config.Method != newConfig.Method {
 		logCtx.Infof("Resource tracking configuration changed: method=%s", newConfig.Method)
 	}

 	r.config = newConfig
 	r.lastRefresh = time.Now()
+	r.mu.Unlock()

 	return nil
 }
```
181-187: Unnecessary nil map allocation.

Creating empty maps when `nil` is passed is unnecessary — Go map lookups on `nil` maps safely return the zero value. This allocation can be removed.

🔎 Proposed simplification:
```diff
 func (r *ResourceTrackingConfig) IsResourceTracked(labels map[string]string, annotations map[string]string) bool {
-	if labels == nil {
-		labels = make(map[string]string)
-	}
-	if annotations == nil {
-		annotations = make(map[string]string)
-	}
-
 	switch r.Method {
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
🧰 Additional context used
🧬 Code graph analysis (6)
agent/resource_test.go (1)
- agent/tracking.go (5): DefaultResourceTrackingConfig (54-60), ResourceTrackingConfig (42-49), TrackingMethodLabel (36-36), TrackingMethodAnnotation (34-34), NewResourceTrackingReader (75-83)

agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (42-49)

agent/agent.go (1)
- agent/tracking.go (2): ResourceTrackingReader (64-72), NewResourceTrackingReader (75-83)

test/e2e/resource_tracking_test.go (1)
- test/e2e/fixture/fixture.go (1): BaseSuite (43-50)

agent/tracking_test.go (1)
- agent/tracking.go (6): DefaultResourceTrackingConfig (54-60), TrackingMethodAnnotationAndLabel (38-38), ResourceTrackingConfig (42-49), TrackingMethodAnnotation (34-34), TrackingMethodLabel (36-36), NewResourceTrackingReader (75-83)

agent/tracking.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- internal/logging/logging.go (1): Info (295-297)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Run unit tests
- GitHub Check: Build & cache Go code
- GitHub Check: Lint Go code
- GitHub Check: Run end-to-end tests
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (24)
agent/agent.go (3)
117-119: LGTM: Clean field declaration. The `trackingReader` field is properly declared with clear documentation explaining its purpose.

316-317: LGTM: Proper initialization. The `trackingReader` is correctly initialized in the `NewAgent` constructor with appropriate parameters.

360-361: LGTM: Proper lifecycle management. The periodic refresh is correctly started in the `Start` method and uses the agent's context for lifecycle management, ensuring cleanup when the agent stops.

agent/tracking_test.go (5)
29-36: LGTM: Clear validation of defaults. The test correctly validates the default tracking configuration values.

38-164: LGTM: Comprehensive test coverage. The table-driven tests thoroughly cover all tracking methods and edge cases for resource tracking detection.

285-301: LGTM: Good defensive programming test. This test ensures that `GetConfig` returns independent copies, preventing unintended mutations of the shared configuration.

303-314: LGTM: Flexible string representation test. The test appropriately validates that key information appears in the string output without being overly prescriptive about formatting.

316-344: Reasonable approach for testing periodic behavior. The test acknowledges its limitations (lines 318-319) and uses appropriate timing values for test execution. While time-based tests can be flaky, this is a reasonable approach for testing periodic refresh behavior.
agent/resource_test.go (3)
28-29: LGTM: Efficient test setup. Creating a shared `defaultTrackingConfig` for all test cases is a clean approach that reduces duplication.

290-406: LGTM: Comprehensive tracking method coverage. The new test cases thoroughly validate different tracking configurations, including label-only, annotation-only, and custom tracking keys, with both positive and negative scenarios.

446-446: LGTM: Consistent test setup. All test cases that create Agent instances properly initialize the `trackingReader` field with a consistent pattern.

Also applies to: 512-512, 562-562, 611-611, 632-632, 666-666, 698-698
agent/resource.go (3)
322-322: LGTM: Proper use of tracking configuration. The call correctly retrieves the current tracking configuration via `GetConfig()`, which returns a copy to prevent shared state issues.

401-419: LGTM: Clean refactoring to support configurable tracking. The function signature and logic have been properly updated to use the configurable tracking system. The comment accurately reflects the new behavior, and the delegation to `trackingConfig.IsResourceTracked` is correct.

468-468: LGTM: Consistent tracking through recursion. The recursive call correctly propagates the `trackingConfig` parameter, ensuring that the same tracking method is applied throughout the entire ownership chain.

test/e2e/resource_tracking_test.go (4)
58-102: LGTM: Proper test isolation and cleanup. The setup and teardown methods ensure test isolation through unique namespaces and proper cleanup of resources and configuration.

106-195: LGTM: Appropriate E2E verification strategy. The tests follow a consistent pattern and appropriately verify the tracking behavior through the deployed resource metadata rather than internal agent state. This end-to-end approach validates that the feature works correctly from a user's perspective.

Also applies to: 199-287, 291-384

40-56: LGTM: Clean namespace naming helper. The `getTestNamespace` helper provides a clean way to generate unique namespace names from test names, properly handling path separators and formatting conventions.

124-126: No changes needed; timing is sound. The agent performs an initial refresh immediately on startup via `StartPeriodicRefresh()` (agent/tracking.go lines 156-159), before the 5-minute periodic refresh interval begins. The 10-second wait accounts for this initial refresh to detect the ConfigMap change, making the timing assumption valid and not a gamble.

agent/tracking.go (6)
29-39: LGTM! Clean enum-like pattern with well-documented constants for tracking methods.

41-49: LGTM! Well-structured configuration type with clear field documentation.

51-60: LGTM! Sensible defaults matching Argo CD conventions, with clear documentation explaining the rationale.

85-95: LGTM! Proper use of a read lock and defensive copy to prevent external modifications.

148-177: LGTM! Clean periodic refresh implementation with proper context cancellation handling and ticker cleanup.

214-218: LGTM! Clean `String()` implementation useful for debugging and logging.
Force-pushed from b21f446 to 997a41a.
Actionable comments posted: 0
🧹 Nitpick comments (3)
agent/tracking.go (1)
62-83: Consider making the refresh interval configurable. The `refreshInterval` (5 minutes) and `configMapName` are hardcoded. For flexibility in testing and different environments, consider accepting these as optional parameters or providing setter methods. This is a minor suggestion for future improvement and doesn't block the current implementation.
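One way to act on this suggestion is the usual functional-options pattern. The sketch below is self-contained and illustrative; the option and field names are hypothetical, not part of the PR:

```go
package main

import (
	"fmt"
	"time"
)

// reader carries the two knobs this comment wants configurable.
type reader struct {
	refreshInterval time.Duration
	configMapName   string
}

type readerOption func(*reader)

func withRefreshInterval(d time.Duration) readerOption {
	return func(r *reader) { r.refreshInterval = d }
}

func withConfigMapName(name string) readerOption {
	return func(r *reader) { r.configMapName = name }
}

// newReader applies defaults first, then any caller-supplied options.
func newReader(opts ...readerOption) *reader {
	r := &reader{refreshInterval: 5 * time.Minute, configMapName: "argocd-cm"}
	for _, opt := range opts {
		opt(r)
	}
	return r
}

func main() {
	r := newReader(withRefreshInterval(10 * time.Second)) // e.g. fast refresh for e2e tests
	fmt.Println(r.refreshInterval, r.configMapName)       // 10s argocd-cm
}
```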
agent/tracking_test.go (1)
317-344: Consider improving periodic refresh test reliability. The test uses `time.Sleep(50 * time.Millisecond)` to wait for the initial refresh. While functional, this timing-based approach can be flaky in slow CI environments. Consider using a synchronization mechanism or polling with `require.Eventually` for more reliable test behavior.

test/e2e/resource_tracking_test.go (1)
40-56: Consider using `strings.LastIndex` for simpler parsing. The manual rune iteration to find the last `/` can be simplified:

🔎 Suggested simplification:
```diff
 func (suite *ResourceTrackingTestSuite) getTestNamespace() string {
-	// Use test name to create unique namespace per test
 	testName := suite.T().Name()
-	// Extract just the test method name (after the last /)
-	parts := []rune(testName)
-	for i := len(parts) - 1; i >= 0; i-- {
-		if parts[i] == '/' {
-			testName = string(parts[i+1:])
-			break
-		}
+	if idx := strings.LastIndex(testName, "/"); idx != -1 {
+		testName = testName[idx+1:]
 	}
-	// Convert to lowercase and replace underscores with hyphens
 	testName = strings.ToLower(testName)
 	testName = strings.ReplaceAll(testName, "_", "-")
 	return testName
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
🧰 Additional context used
🧬 Code graph analysis (5)
agent/resource_test.go (3)
- agent/tracking.go (5): DefaultResourceTrackingConfig (54-60), ResourceTrackingConfig (42-49), TrackingMethodLabel (36-36), TrackingMethodAnnotation (34-34), NewResourceTrackingReader (75-83)
- test/fake/kube/kubernetes.go (1): NewDynamicFakeClient (80-112)
- agent/agent.go (1): Agent (62-120)

agent/tracking.go (3)
- internal/kube/client.go (1): KubernetesClient (37-44)
- internal/logging/logging.go (2): Info (295-297), Warn (300-302)
- internal/env/env.go (1): String (96-107)

agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (42-49)

test/e2e/resource_tracking_test.go (1)
- test/e2e/fixture/fixture.go (1): BaseSuite (43-50)

agent/agent.go (1)
- agent/tracking.go (2): ResourceTrackingReader (64-72), NewResourceTrackingReader (75-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Run end-to-end tests
- GitHub Check: Run unit tests
- GitHub Check: Build & cache Go code
- GitHub Check: Lint Go code
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (24)
agent/tracking.go (6)
29-39: LGTM! The `TrackingMethod` type and constants are well-defined and align with Argo CD's documented tracking methods. The string values match the expected ConfigMap values.

41-60: LGTM! The `ResourceTrackingConfig` struct and `DefaultResourceTrackingConfig` function are well-designed. The default to `annotation+label` is a sensible choice for maximum compatibility before the actual configuration is loaded.

86-95: LGTM! Good practice returning a copy of the config to prevent external modifications. Thread-safe access with `RLock`.

98-146: LGTM with one observation. The fallback to the default config on ConfigMap read error is a reasonable resilience pattern. The `lastRefresh` timestamp is correctly set to prevent continuous retry loops. One consideration: the unknown tracking method case (line 133) logs a warning but keeps the default method from `newConfig`. This behavior is acceptable for graceful degradation.

148-177: LGTM! The periodic refresh implementation is well-structured:
- Performs initial refresh before starting the ticker
- Properly handles context cancellation
- Cleans up the ticker with `defer ticker.Stop()`
- Logs errors without blocking the refresh loop

179-212: LGTM! The `IsResourceTracked` method correctly implements all tracking method variants:
- Handles `nil` maps defensively
- Each tracking method checks the appropriate metadata
- The default case mirrors `annotation+label` behavior, consistent with `DefaultResourceTrackingConfig`

agent/agent.go (3)
117-120: LGTM! The `trackingReader` field is appropriately added to the `Agent` struct with a clear comment explaining its purpose.

316-318: LGTM! The `trackingReader` is correctly initialized in `NewAgent` using the existing `kubeClient` and `namespace` parameters.

360-362: LGTM! The periodic refresh is started with `a.context`, ensuring it will be properly cancelled when the agent stops. The placement after `emitter` initialization and before the backend starts is appropriate.

agent/tracking_test.go (4)
29-37: LGTM! Good test coverage for default configuration values.

39-165: LGTM! Comprehensive table-driven tests covering all tracking method scenarios, including edge cases with nil labels/annotations.

167-284: LGTM! Thorough test coverage for `RefreshConfig` including:
- ConfigMap not found scenario
- All three tracking methods
- Whitespace handling
- Unknown method fallback

Note: The test name on line 264 has been correctly updated to reflect the actual fallback behavior.

286-315: LGTM! Good tests verifying:
- `GetConfig` returns independent copies that can be modified without affecting each other
- `String()` output contains expected values

agent/resource_test.go (3)
28-29: LGTM! Good approach using `DefaultResourceTrackingConfig()` for existing tests to maintain backward compatibility while enabling the new configurable tracking.

289-406: LGTM! Excellent test coverage for the new tracking method configurations:
- Label-only tracking correctly identifies resources with labels and rejects annotation-only resources
- Annotation-only tracking correctly identifies resources with annotations and rejects label-only resources
- Custom tracking keys are properly tested
These tests validate the core tracking configuration functionality.
1153-1157: LGTM! The `trackingReader` is correctly initialized in test Agent instances to match the production code structure.

agent/resource.go (3)

322-322: LGTM! The call to `a.trackingReader.GetConfig()` correctly obtains the current tracking configuration for resource ownership checks.

401-419: LGTM! The refactored `isResourceManaged` function cleanly delegates tracking logic to `trackingConfig.IsResourceTracked()`. The updated comments accurately describe the new behavior.

468-468: LGTM! The recursive call correctly propagates the `trackingConfig` parameter, ensuring consistent tracking behavior throughout the owner reference chain.

test/e2e/resource_tracking_test.go (5)
58-102: LGTM! The setup and teardown methods properly:
- Create a unique namespace per test
- Reset the tracking method to `annotation+label` after each test
- Handle cleanup errors gracefully

124-128: E2E tests rely on timing for config propagation. The 10-second sleep assumes the agent will pick up the ConfigMap change within that time. Since the agent's default refresh interval is 5 minutes, this works because `StartPeriodicRefresh` does an initial refresh on startup. However, if the agent is already running when the test starts, the config change might not be picked up immediately. Consider adding a comment clarifying this assumption or adding a mechanism to verify the config was picked up.

Also applies to: 218-219, 310-311

104-196: LGTM! The annotation tracking test properly:
- Configures the tracking method
- Deploys an application
- Verifies the tracking annotation is present
- Cleans up resources
198-288: LGTM! The label tracking test correctly validates that the `app.kubernetes.io/instance` label is applied when using label-only tracking.

290-385: LGTM! The annotation+label tracking test validates that both tracking identifiers are present on deployed resources, providing comprehensive coverage of the combined tracking method.
agent/tracking.go (outdated diff context):

```go
		AnnotationKey: "argocd.argoproj.io/tracking-id",
		LabelKey:      "app.kubernetes.io/instance",
	}
}
```
Using annotation+label (instead of argocd annotation default) so that agent recognizes any resources tracked by any method during startup.
Shall I keep this as it is now, or shall I switch back to the argocd annotation default?
IMO, the annotation-based tracking should be the default method, which aligns with upstream Argo CD.
Force-pushed from 997a41a to 7222ff2.
Actionable comments posted: 0
🧹 Nitpick comments (2)
agent/tracking_test.go (1)
317-344: Consider making the periodic refresh test more deterministic. The test relies on `time.Sleep(50 * time.Millisecond)`, which could be flaky in slow CI environments or under load. Consider these alternatives:
- Add a callback or channel to signal refresh completion
- Use a polling approach with a timeout to wait for the expected state
- Make `refreshInterval` configurable via a functional option for more controlled testing

Example: Add a synchronization helper
One approach is to add a method that blocks until the next refresh completes, useful for testing:
```go
// In test setup, you could add a done channel
type ResourceTrackingReader struct {
	// ... existing fields
	testRefreshDone chan struct{} // nil in production
}

// In RefreshConfig, signal completion if channel exists
if r.testRefreshDone != nil {
	select {
	case r.testRefreshDone <- struct{}{}:
	default:
	}
}
```

agent/tracking.go (1)
62-95: Excellent thread-safety and immutability guarantees. The `ResourceTrackingReader` correctly uses `sync.RWMutex` for concurrent access, and `GetConfig()` returns a copy of the configuration to prevent external modifications. This is a solid defensive design pattern.

The `refreshInterval` is hardcoded to 5 minutes (line 80). While reasonable, consider making it configurable via functional options if different refresh frequencies are needed in the future.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- test/e2e/resource_tracking_test.go
- agent/agent.go
🧰 Additional context used
🧬 Code graph analysis (2)
agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (42-49)

agent/tracking.go (3)
- internal/kube/client.go (1): KubernetesClient (37-44)
- internal/logging/logging.go (1): Info (295-297)
- internal/env/env.go (1): String (96-107)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Run unit tests
- GitHub Check: Run end-to-end tests
- GitHub Check: Build & cache Go code
- GitHub Check: Lint Go code
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (12)
agent/tracking_test.go (3)
29-37: LGTM! Default configuration aligns with Argo CD conventions. The default tracking configuration correctly uses `annotation+label` to support all tracking methods until the actual configuration is loaded from the ConfigMap.

39-165: Excellent test coverage for resource tracking logic. The test cases comprehensively cover all tracking methods (annotation, label, annotation+label) with various combinations of labels and annotations, including edge cases.

167-284: Well-structured tests for configuration refresh scenarios. The tests cover all important scenarios: missing ConfigMap (defaults), valid methods, whitespace handling, and unknown method fallback.
agent/resource.go (2)
401-484: Well-designed ownership checking with consistent tracking configuration. The refactored `isResourceManaged` function correctly:
- Accepts the tracking configuration as a parameter
- Uses `trackingConfig.IsResourceTracked(lbls, annt)` to delegate tracking logic
- Propagates the same config snapshot through recursive owner checks for consistency

This ensures that all resources in an ownership chain are evaluated with the same tracking rules.

308-331: The code is correct. The `trackingReader` field is unconditionally initialized in the `NewAgent()` constructor (line 317) before the `*Agent` pointer is returned. All error paths occur prior to this initialization. Since `getManagedResource()` is only invoked on a successfully initialized Agent instance during event processing, `trackingReader` is guaranteed to be non-nil at line 322. No defensive nil check is necessary.

agent/resource_test.go (3)
27-48: Clean test setup with default tracking configuration. Introducing `defaultTrackingConfig` at the test level ensures consistent configuration across test cases and makes the tests more maintainable.

290-406: Comprehensive test coverage for different tracking methods. The new tests thoroughly validate:
- Label-only tracking (with and without annotations)
- Annotation-only tracking (with and without labels)
- Custom tracking keys (ensuring default keys are not used)
This ensures the tracking configuration correctly controls resource ownership determination.
418-447: Proper test agent initialization with tracking reader. The test correctly initializes `trackingReader` using `NewResourceTrackingReader(kubeClient, "argocd")`, ensuring tests exercise the full tracking configuration flow.

agent/tracking.go (4)

29-60: Well-designed type system for resource tracking. The tracking method enum and configuration struct are cleanly defined. Defaulting to `annotation+label` is the right choice to support all Argo CD tracking methods until the actual configuration is loaded.

97-145: Robust configuration refresh with appropriate fallback behavior. The `RefreshConfig` method handles errors gracefully by falling back to the default configuration and logging warnings. This ensures the agent can start and operate even if the ConfigMap is temporarily unavailable. The whitespace trimming (line 122) and unknown method handling (lines 131-133) demonstrate good defensive programming.
147-176: Well-implemented periodic refresh with proper lifecycle management. The method correctly:
- Performs an initial refresh before starting the periodic refresh loop
- Handles context cancellation for clean shutdown
- Uses `defer ticker.Stop()` to prevent resource leaks
- Logs errors without panicking, allowing the agent to continue operating with the default configuration

178-217: Correct tracking logic with defensive nil handling. The `IsResourceTracked` method properly implements the tracking logic for all three methods:
- Annotation: checks the annotation key only
- Label: checks the label key only
- Annotation+Label: checks either (OR logic)

The defensive nil checks (lines 181-186) prevent panics if callers pass nil maps. The `String()` method provides useful debugging output.
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #690      +/-   ##
==========================================
+ Coverage   46.09%   46.10%   +0.01%
==========================================
  Files          99      100       +1
  Lines       12228    12251      +23
==========================================
+ Hits         5636     5648      +12
- Misses       6046     6056      +10
- Partials      546      547       +1
```

☔ View the full report in Codecov by Sentry.
Diff context:

```go
const (
	// TrackingMethodAnnotation uses only annotation for tracking
	TrackingMethodAnnotation TrackingMethod = "annotation"
	// TrackingMethodLabel uses only label for tracking
	TrackingMethodLabel TrackingMethod = "label"
	// TrackingMethodAnnotationAndLabel uses both annotation and label for tracking
	TrackingMethodAnnotationAndLabel TrackingMethod = "annotation+label"
)
```
Nit: We can reuse the constants from Argo CD
https://github.com/argoproj/argo-cd/blob/master/pkg/apis/application/v1alpha1/types.go#L117
agent/tracking.go
Outdated
```go
	// Check both annotation and label
	_, hasAnnotation := annotations[r.AnnotationKey]
	_, hasLabel := labels[r.LabelKey]
	return hasAnnotation || hasLabel
```
With annotation+label method, Argo CD adds both the label and annotation to resources. Shouldn't we check for the existence of both?
Suggested change:

```diff
-	return hasAnnotation || hasLabel
+	return hasAnnotation && hasLabel
```
agent/tracking.go
Outdated
```go
		AnnotationKey: "argocd.argoproj.io/tracking-id",
		LabelKey:      "app.kubernetes.io/instance",
	}
}
```
IMO, the annotation-based tracking should be the default method, which aligns with upstream Argo CD.
agent/tracking.go
Outdated
|
|
```go
	go func() {
		ticker := time.NewTicker(r.refreshInterval)
		defer ticker.Stop()
```
Instead of periodically polling the API server, should we consider using Argo CD's SettingsManager? It maintains a shared informer cache to track the config map data. The changes will be updated almost instantly, and the users don't have to wait for the next refresh cycle. WDYT?
https://github.com/argoproj/argo-cd/blob/master/util/settings/settings.go#L752
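A sketch of what that could look like. The constructor and `GetTrackingMethod` signatures below are assumed from Argo CD's util/settings package and should be verified against the Argo CD version vendored by this repository:

```go
package main

import (
	"context"
	"fmt"

	"github.com/argoproj/argo-cd/v2/util/settings"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// SettingsManager keeps an informer-backed cache of argocd-cm, so reads
	// reflect ConfigMap changes without an explicit polling loop.
	// Assumed API: NewSettingsManager(ctx, clientset, namespace) and
	// GetTrackingMethod() reading application.resourceTrackingMethod.
	mgr := settings.NewSettingsManager(ctx, clientset, "argocd")
	method, err := mgr.GetTrackingMethod()
	if err != nil {
		panic(err)
	}
	fmt.Println("tracking method:", method)
}
```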
agent/tracking.go
Outdated
|
|
```go
	default:
		// Unknown method, default to checking both
		_, hasAnnotation := annotations[r.AnnotationKey]
```
Annotation based tracking should be the default tracking method.
test/e2e/resource_tracking_test.go
Outdated
```go
	cm.Data = make(map[string]string)
}
cm.Data["application.resourceTrackingMethod"] = "annotation+label"
err = suite.ManagedAgentClient.Update(suite.Ctx, cm, metav1.UpdateOptions{})
```
Should we use fixture.EnsureUpdate to handle update conflicts?
The e2e tests only verify if the correct annotations/labels are set by Argo CD. I think we should check if the resource proxy on the control plane can actually fetch live resources with different tracking methods.
Example:
https://github.com/argoproj-labs/argocd-agent/blob/main/test/e2e/rp_test.go#L232
I have updated the logic to check resource proxy, PTAL
Force-pushed from 4a25138 to 14f4c2b.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@agent/tracking.go`:
- Around line 61-67: The error-fallback in GetTrackingMethod returns a
ResourceTrackingConfig with Method and AnnotationKey but omits LabelKey; update
that error branch to also set LabelKey (e.g., the same
"argocd.argoproj.io/tracking-id" used in the success path) so
ResourceTrackingConfig returned on error always contains Method, AnnotationKey,
and LabelKey; modify the error return in GetTrackingMethod to populate LabelKey
alongside AnnotationKey.
🧹 Nitpick comments (1)
test/e2e/resource_tracking_test.go (1)
42-58: Consider simplifying namespace extraction with `strings.LastIndex`. The manual rune iteration to find the last `/` in the test name can be simplified using standard library functions.

♻️ Suggested simplification:
```diff
 func (suite *ResourceTrackingTestSuite) getTestNamespace() string {
-	// Use test name to create unique namespace per test
 	testName := suite.T().Name()
-	// Extract just the test method name (after the last /)
-	parts := []rune(testName)
-	for i := len(parts) - 1; i >= 0; i-- {
-		if parts[i] == '/' {
-			testName = string(parts[i+1:])
-			break
-		}
+	// Extract just the test method name (after the last /)
+	if idx := strings.LastIndex(testName, "/"); idx >= 0 {
+		testName = testName[idx+1:]
 	}
 	// Convert to lowercase and replace underscores with hyphens
 	testName = strings.ToLower(testName)
 	testName = strings.ReplaceAll(testName, "_", "-")
 	return testName
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- agent/agent.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-31T10:37:28.008Z
Learnt from: chetan-rns
Repo: argoproj-labs/argocd-agent PR: 632
File: test/data/pre-sync/pre-sync-job.yaml:2-17
Timestamp: 2025-10-31T10:37:28.008Z
Learning: In the argocd-agent repository, Kubernetes manifests under test/data/ directories are test fixtures for e2e testing and do not require production-grade security hardening such as securityContext settings for runAsNonRoot or allowPrivilegeEscalation.
Applied to files:
test/e2e/resource_tracking_test.go
🧬 Code graph analysis (4)
agent/tracking_test.go (4)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (2): ResourceTrackingReader (37-39), NewResourceTrackingReader (44-51)
- internal/logging/logfields/logfields.go (1): Name (59-59)
- test/fake/kube/kubernetes.go (1): NewDynamicFakeClient (80-112)

agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (26-33)

agent/tracking.go (1)
- internal/kube/client.go (1): KubernetesClient (37-44)

agent/resource_test.go (2)
- agent/tracking.go (2): ResourceTrackingConfig (26-33), NewResourceTrackingReader (44-51)
- test/fake/kube/kubernetes.go (1): NewDynamicFakeClient (80-112)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Build & cache Go code
- GitHub Check: Run unit tests
- GitHub Check: Lint Go code
- GitHub Check: Run end-to-end tests
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (13)
agent/tracking.go (1)
90-122: LGTM! The `IsResourceTracked` method is well-implemented with proper nil-safety for maps and comprehensive handling of all tracking methods. The fallback to annotation-based tracking for unknown methods aligns with Argo CD's default behavior.

agent/tracking_test.go (2)
28-52: Well-structured test fixtures.The
fixtureshelper function cleanly encapsulates test setup with the fake Kubernetes client and ConfigMap configuration. This promotes reuse across tests.
94-210: Comprehensive table-driven tests for IsResourceTracked.Good coverage of all tracking method combinations including edge cases (nil labels/annotations, missing keys). The test structure is clear and maintainable.
agent/resource.go (2)
401-419: Clean integration of configurable resource tracking.The signature change to accept
trackingConfigand usetrackingConfig.IsResourceTracked()cleanly separates the tracking logic from the resource management function. The updated comments accurately reflect the new behavior.
468-468: Proper propagation of tracking config in recursive calls.The
trackingConfigis correctly passed through the recursive owner reference resolution, ensuring consistent tracking behavior across the entire ownership chain.test/e2e/resource_tracking_test.go (4)
74-105: Good cleanup logic in TearDownTest.The teardown properly resets the
argocd-cmConfigMap to the default annotation tracking method and cleans up the test namespace. Usingfixture.EnsureUpdatehandles potential update conflicts correctly.
107-239: Comprehensive e2e test for annotation tracking.This test properly validates the full flow: configures tracking method, deploys application, waits for sync, verifies tracking annotation on deployed resources, and validates resource proxy fetch capability. This addresses the objective of testing resource proxy behavior with different tracking methods.
241-373: Good coverage for label-only tracking.The label tracking test follows the same comprehensive pattern, properly verifying the
app.kubernetes.io/instancelabel and resource proxy fetch behavior.
375-512: Thorough validation for annotation+label tracking.This test verifies both tracking markers are present when using the combined method, ensuring the agent correctly identifies resources that have both annotation and label.
agent/resource_test.go (4)
29-32: Good pattern: shared tracking config for related tests.Using a shared
labelTrackingConfigat the top of the test function reduces duplication and makes the default tracking method explicit.
296-433: Excellent coverage for tracking method behavior.These tests thoroughly validate that
isResourceManagedcorrectly respects the tracking configuration:
- Label-only: tracks resources with label, ignores annotation-only resources
- Annotation-only: tracks resources with annotation, ignores label-only resources
- Annotation+Label: requires both markers to be present
This directly addresses the PR objective of using the actual resource-tracking configuration.
465-476: Proper integration of fixtures for tracking configuration in tests.Using
fixtures("argocd", v1alpha1.TrackingMethodLabel)ensures consistent test setup and aligns with the pattern established intracking_test.go.
1176-1188: Consistent tracking configuration in delete tests.The delete resource tests properly use the fixtures helper and wire the tracking reader into the Agent, ensuring the managed resource check uses the configured tracking method.
Force-pushed from 14f4c2b to b8edd62.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/run-e2e.sh (1)
17-23: Script changes appear to be for focused development testing - revert before merge.

The modifications narrow the test scope to only `TestResourceTrackingTestSuite` and remove important flags:
- `-race`: Removed, which reduces data race detection
- `-count=1`: Removed, enabling test caching that may mask flaky tests
- `-timeout 30m` → `10m`: Reduced, which may cause timeouts for the full suite
- `ARGS=$*`: Set but unused

These changes are appropriate for local development but should be reverted to run all e2e tests in CI.

🔧 Suggested revert to run the full e2e suite:

```diff
 set -ex -o pipefail

-ARGS=$*
 if ! kubectl config get-contexts | tail -n +2 | awk '{ print $2 }' | grep -qE '^vcluster-control-plane$'; then
 	echo "kube context vcluster-control-plane is not configured; missing setup?" >&2
 	exit 1
 fi
-go test -v -timeout 10m ./test/e2e -run TestResourceTrackingTestSuite
+go test -v -count=1 -race -timeout 30m github.com/argoproj-labs/argocd-agent/test/e2e
```
🧹 Nitpick comments (1)
test/e2e/resource_tracking_test.go (1)
43-58: Consider using `strings.LastIndex` for cleaner parsing. The manual rune iteration to find the last `/` could be simplified.

♻️ Suggested simplification:

```diff
 func (suite *ResourceTrackingTestSuite) getTestNamespace() string {
 	// Use test name to create unique namespace per test
 	testName := suite.T().Name()
-	// Extract just the test method name (after the last /)
-	parts := []rune(testName)
-	for i := len(parts) - 1; i >= 0; i-- {
-		if parts[i] == '/' {
-			testName = string(parts[i+1:])
-			break
-		}
-	}
+	// Extract just the test method name (after the last /)
+	if idx := strings.LastIndex(testName, "/"); idx != -1 {
+		testName = testName[idx+1:]
+	}
 	// Convert to lowercase and replace underscores with hyphens
 	testName = strings.ToLower(testName)
 	testName = strings.ReplaceAll(testName, "_", "-")
 	return testName
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- agent/agent.go
- agent/resource.go
- agent/resource_test.go
- agent/tracking.go
- agent/tracking_test.go
- test/e2e/resource_tracking_test.go
- test/run-e2e.sh
🚧 Files skipped from review as they are similar to previous changes (2)
- agent/agent.go
- agent/tracking.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-31T10:37:28.008Z
Learnt from: chetan-rns
Repo: argoproj-labs/argocd-agent PR: 632
File: test/data/pre-sync/pre-sync-job.yaml:2-17
Timestamp: 2025-10-31T10:37:28.008Z
Learning: In the argocd-agent repository, Kubernetes manifests under test/data/ directories are test fixtures for e2e testing and do not require production-grade security hardening such as securityContext settings for runAsNonRoot or allowPrivilegeEscalation.
Applied to files:
- test/e2e/resource_tracking_test.go
- test/run-e2e.sh
🧬 Code graph analysis (5)
agent/resource_test.go (3)
- agent/tracking.go (2): ResourceTrackingConfig (26-33), NewResourceTrackingReader (44-51)
- test/fake/kube/kubernetes.go (1): NewDynamicFakeClient (80-112)
- agent/agent.go (1): Agent (62-122)

agent/tracking_test.go (1)
- agent/tracking.go (3): ResourceTrackingReader (37-39), NewResourceTrackingReader (44-51), ResourceTrackingConfig (26-33)

test/e2e/resource_tracking_test.go (2)
- test/e2e/fixture/fixture.go (3): EnsureDeletion (109-156), EnsureUpdate (175-199), WaitForDeletion (159-172)
- test/e2e/fixture/argoclient.go (3): GetArgoCDServerEndpoint (388-410), GetInitialAdminSecret (375-386), NewArgoClient (55-69)

test/run-e2e.sh (1)
- test/e2e/resource_tracking_test.go (1): TestResourceTrackingTestSuite (520-522)

agent/resource.go (2)
- internal/kube/client.go (1): KubernetesClient (37-44)
- agent/tracking.go (1): ResourceTrackingConfig (26-33)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Run unit tests
- GitHub Check: Lint Go code
- GitHub Check: Run end-to-end tests
- GitHub Check: Build & cache Go code
- GitHub Check: Build and push image
- GitHub Check: Analyze (go)
🔇 Additional comments (13)
agent/tracking_test.go (3)
28-52: LGTM! Well-structured test fixture. The `fixtures()` helper cleanly sets up a fake Kubernetes client with an `argocd-cm` ConfigMap containing the specified tracking method. The conditional data population (lines 33-35) correctly handles the empty tracking method case for testing defaults.
54-92: Good coverage of tracking method configuration scenarios.The test cases comprehensively cover:
- Default behavior when not configured (falls back to annotation)
- Explicit annotation, label, and annotation+label methods
- Fallback behavior for unknown methods
94-210: Thorough table-driven tests for IsResourceTracked.The test matrix covers all meaningful combinations of tracking methods with label/annotation presence. The test cases correctly verify:
- Annotation method: requires annotation, ignores label
- Label method: requires label, ignores annotation
- Annotation+Label method: requires both present
agent/resource.go (3)
401-419: Clean integration of tracking configuration.The function signature change properly accepts the tracking config, and the delegation to
trackingConfig.IsResourceTracked(lbls, annt)replaces the hardcoded label/annotation checks. The comments are updated to reflect the new behavior.
468-468: Correct propagation of tracking config through recursive calls.The
trackingConfigis properly passed to recursiveisResourceManagedcalls, ensuring consistent tracking method usage when traversing owner references.
322-322: No action required—trackingReader is always initialized.The
trackingReaderfield is initialized in theNewAgentconstructor (line 326) before the function returns, andNewResourceTrackingReaderalways returns a valid pointer. SinceNewAgentis the only way to create anAgentinstance, anyAgentused in the code will have a non-niltrackingReader. No nil check is needed.agent/resource_test.go (4)
29-32: Good test setup with reusable tracking config.Defining
labelTrackingConfigat the test function level allows it to be reused across multiple sub-tests that use label-based tracking, reducing duplication.
297-433: Comprehensive tracking method tests.The new test cases thoroughly validate all three tracking methods:
- Label-only (lines 298-335): Verifies resources with labels are tracked, resources with only annotations are not
- Annotation-only (lines 337-374): Verifies resources with annotations are tracked, resources with only labels are not
- Annotation+Label (lines 376-433): Verifies both must be present for tracking
465-476: Correct integration of trackingReader in agent tests.The test properly uses
fixtures()to create a tracking reader configured with label tracking and wires it into the Agent instance.
1176-1188: Delete test updated with tracking configuration.The test now uses
fixtures()to set up label tracking, ensuring the delete operation correctly validates resource management using the configured tracking method.test/e2e/resource_tracking_test.go (3)
74-105: Good test isolation with namespace cleanup and config reset.
The TearDownTest properly:
- Deletes the test-specific namespace
- Resets the `argocd-cm` ConfigMap tracking method to "annotation"
This ensures tests don't interfere with each other (a client-go sketch of this teardown follows this file's comments).
109-241: Excellent e2e test coverage for annotation tracking.
The test properly:
- Configures annotation tracking via ConfigMap
- Creates and syncs an Application
- Verifies the tracking annotation is present on deployed resources
- Validates the resource proxy can fetch the tracked resource
- Cleans up resources
This addresses the past review comment requesting resource proxy validation.
245-377: Consistent test structure across tracking methods.
The label and annotation+label tests follow the same thorough pattern as the annotation test, ensuring all tracking methods are validated end-to-end including resource proxy fetch verification.
Also applies to: 381-518
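For readers unfamiliar with the teardown described above, a rough equivalent using a plain client-go clientset looks like this. The suite itself uses its own fixture clients, so this helper and its name are hypothetical; only the two steps (namespace deletion, ConfigMap reset) are taken from the review.

```go
package e2e

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resetTrackingState mirrors the two teardown steps described above, written
// against a plain client-go clientset for illustration only.
func resetTrackingState(ctx context.Context, kc kubernetes.Interface, testNamespace string) error {
	// Delete the per-test namespace on the managed cluster.
	if err := kc.CoreV1().Namespaces().Delete(ctx, testNamespace, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}

	// Reset argocd-cm to the default tracking method so later tests start clean.
	cm, err := kc.CoreV1().ConfigMaps("argocd").Get(ctx, "argocd-cm", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["application.resourceTrackingMethod"] = "annotation"
	_, err = kc.CoreV1().ConfigMaps("argocd").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```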
Force-pushed from b8edd62 to 17ab374
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@test/e2e/resource_tracking_test.go`:
- Around line 42-57: getTestNamespace() currently uses suite.T().Name() raw and
risks producing non‑DNS‑1123 and >63 char names; update getTestNamespace to
normalize the name to DNS‑1123 (lowercase, only a–z, 0–9 and '-' with no
leading/trailing '-'), replace any invalid characters with '-' (use a regexp),
then if the result exceeds 63 chars truncate it and append a stable short suffix
(e.g., a 6‑char hex of sha1/sum32 of the original testName) to ensure uniqueness
and stability; update references inside getTestNamespace to use suite.T().Name()
for hashing but return the normalized+truncated+suffix value.
- Around line 132-254: Register test cleanup immediately after creating the
Application instead of doing explicit cleanup at the end: after calling
suite.PrincipalClient.Create(...) for the app variable, call
suite.T().Cleanup(func(){ _ = fixture.EnsureDeletion(suite.Ctx,
suite.PrincipalClient, &app); _ = fixture.WaitForDeletion(suite.Ctx,
suite.ManagedAgentClient, &argoapp.Application{ObjectMeta:
metav1.ObjectMeta{Name: app.Name, Namespace: "argocd"}}) }) so the resource is
deleted even if a require.* fails; remove the explicit
EnsureDeletion/WaitForDeletion block at the end of this test and apply the same
pattern to the other two tests that create Applications.
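Formatted as code, the cleanup registration described in the second item would look roughly like this; the `Create` call is elided as in the comment above, and the fixture helper names are the ones quoted there rather than verified against the package.

```go
// Register cleanup immediately after suite.PrincipalClient.Create(...) succeeds,
// so the Application is removed even if a later require.* assertion fails.
suite.T().Cleanup(func() {
	_ = fixture.EnsureDeletion(suite.Ctx, suite.PrincipalClient, &app)
	_ = fixture.WaitForDeletion(suite.Ctx, suite.ManagedAgentClient, &argoapp.Application{
		ObjectMeta: metav1.ObjectMeta{Name: app.Name, Namespace: "argocd"},
	})
})
```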
🧹 Nitpick comments (2)
agent/resource.go (1)
322-338: Debug logging may be verbose in production.
The added debug logging at lines 324-326 logs the tracking method, all labels, and all annotations for every resource check. This could produce substantial log volume in clusters with many resources.
Consider:
- Using `Trace` level instead of `Debug` for the labels/annotations dump
- Or conditionally logging only when the tracking check fails
Example: Use Trace level for verbose output
```diff
 trackingConfig := a.trackingReader.GetConfig()
 log().Debugf("Checking if resource %s/%s is managed: trackingMethod=%s", namespace, name, trackingConfig.Method)
-log().Debugf("Resource labels: %v", res.GetLabels())
-log().Debugf("Resource annotations: %v", res.GetAnnotations())
+log().Tracef("Resource labels: %v", res.GetLabels())
+log().Tracef("Resource annotations: %v", res.GetAnnotations())
```
25-33: Replace hardcoded tracking keys with Argo CD constants.
The annotation and label keys at lines 65–66 and 86–87 are hardcoded as strings. Argo CD's `common` package exports these as constants:
- `common.AnnotationKeyAppInstance` for "argocd.argoproj.io/tracking-id"
- `common.LabelKeyAppInstance` for "app.kubernetes.io/instance"
Using them would reduce duplication and ensure consistency with Argo CD's values.
Example
```go
import "github.com/argoproj/argo-cd/v3/common"

// Replace lines 65–66:
AnnotationKey: common.AnnotationKeyAppInstance,
LabelKey:      common.LabelKeyAppInstance,
```
Force-pushed from 4bbefcd to 002f772
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@agent/resource.go`:
- Around line 417-420: The isResourceManaged function currently calls
trackingConfig.IsResourceTracked(lbls, annt) without guarding against a nil
trackingConfig; update isResourceManaged to first check if trackingConfig == nil
and handle that case (e.g., return false, nil or appropriate default) before
calling trackingConfig.IsResourceTracked. Reference the isResourceManaged
function and its use of trackingConfig and IsResourceTracked, and ensure callers
like getManagedResource (which obtains config via a.trackingReader.GetConfig())
still behave the same while other callers or tests that pass nil won't panic.
♻️ Duplicate comments (1)
test/e2e/resource_tracking_test.go (1)
43-59: Namespace name may exceed DNS-1123 limits or contain invalid characters.
The `getTestNamespace()` function derives the namespace from the test name but only performs basic transformations (lowercase, underscores to hyphens). This doesn't handle:
- Names exceeding 63 characters (DNS label limit)
- Non-alphanumeric characters beyond underscores
- Names starting/ending with hyphens
This was flagged in a previous review and remains unaddressed.
🔧 Suggested fix with DNS-1123 normalization
```diff
+import (
+	"crypto/sha256"
+	"encoding/hex"
+	"regexp"
+)
+
+var dns1123Regexp = regexp.MustCompile(`[^a-z0-9-]`)
+
 func (suite *ResourceTrackingTestSuite) getTestNamespace() string {
 	testName := suite.T().Name()
 	// Extract just the test method name (after the last /)
 	parts := []rune(testName)
 	for i := len(parts) - 1; i >= 0; i-- {
 		if parts[i] == '/' {
 			testName = string(parts[i+1:])
 			break
 		}
 	}
 	// Convert to lowercase and replace underscores with hyphens
 	testName = strings.ToLower(testName)
 	testName = strings.ReplaceAll(testName, "_", "-")
+	// Normalize to DNS-1123: only lowercase alphanumeric and hyphens
+	testName = dns1123Regexp.ReplaceAllString(testName, "-")
+	// Remove leading/trailing hyphens
+	testName = strings.Trim(testName, "-")
+	// Truncate if too long, leaving room for hash suffix
+	const maxLen = 63
+	const suffixLen = 8 // 7 chars hash + 1 hyphen
+	if len(testName) > maxLen-suffixLen {
+		hash := sha256.Sum256([]byte(suite.T().Name()))
+		suffix := hex.EncodeToString(hash[:])[:7]
+		testName = testName[:maxLen-suffixLen] + "-" + suffix
+	}
 	return testName
 }
```
Force-pushed from 0170a98 to e33b633
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@agent/tracking.go`:
- Around line 84-88: In GetConfig() where you construct and return the
ResourceTrackingConfig, replace the hardcoded LabelKey value with the app
instance label obtained from r.settingsManager.GetAppInstanceLabelKey(), falling
back to "app.kubernetes.io/instance" when that call returns empty/nil; leave the
AnnotationKey "argocd.argoproj.io/tracking-id" hardcoded. Ensure you reference
the existing ResourceTrackingConfig creation in GetConfig() and use a local
variable (e.g., instanceLabel) to hold the fallback logic before setting
LabelKey. (Sketches of both fixes in this list follow below.)
- Around line 44-50: NewResourceTrackingReader currently constructs
settings.NewSettingsManager with context.Background() and never starts its
informer, so the SettingsManager will not be canceled with the agent nor will
its informer populate GetTrackingMethod; change NewResourceTrackingReader to
accept (or otherwise use) the agent lifecycle context instead of
context.Background(), store the settings manager on ResourceTrackingReader as
before, and ensure the agent's Start() (or StartInformer) calls Start() (or
StartInformer()) on that settingsManager (mirroring
appManager/projectManager/repoManager behavior) so the informer is started and
will be stopped when the agent context is canceled; update references to
NewResourceTrackingReader, ResourceTrackingReader, settings.NewSettingsManager,
and the agent Start() to implement this.
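A sketch of the first fix, assuming the GetConfig() shape used elsewhere in this review; `GetTrackingMethod()` and `GetAppInstanceLabelKey()` are Argo CD settings-manager calls returning `(string, error)`, while the struct fields and receiver are the PR's own and may differ in detail:

```go
// Sketch only: struct fields and receiver follow the names used in this review.
func (r *ResourceTrackingReader) GetConfig() *ResourceTrackingConfig {
	// Default to annotation tracking when nothing is configured or readable.
	method := v1alpha1.TrackingMethodAnnotation
	if m, err := r.settingsManager.GetTrackingMethod(); err == nil && m != "" {
		method = v1alpha1.TrackingMethod(m)
	}

	// Prefer the configured application instance label, falling back to the default.
	instanceLabel := "app.kubernetes.io/instance"
	if key, err := r.settingsManager.GetAppInstanceLabelKey(); err == nil && key != "" {
		instanceLabel = key
	}

	return &ResourceTrackingConfig{
		Method:        method,
		AnnotationKey: "argocd.argoproj.io/tracking-id",
		LabelKey:      instanceLabel,
	}
}
```

And for the second fix, a minimal constructor shape that threads the agent's lifecycle context through to `settings.NewSettingsManager`; the informer-start wiring in the agent's `Start()` is only indicated in a comment, since it depends on how the other managers are started:

```go
// Sketch only: the ResourceTrackingReader fields are assumptions based on this review.
func NewResourceTrackingReader(ctx context.Context, client kubernetes.Interface, namespace string) *ResourceTrackingReader {
	// The settings manager's informer now shares the agent's context and is
	// cancelled together with it instead of living on context.Background().
	sm := settings.NewSettingsManager(ctx, client, namespace)
	// The agent's Start() would start this informer alongside its
	// app/project/repo managers, as the comment above suggests.
	return &ResourceTrackingReader{settingsManager: sm}
}
```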
♻️ Duplicate comments (2)
agent/resource.go (1)
323-324: Guard against nil trackingConfig before dereference.
If `trackingConfig` is ever nil (e.g., Agent constructed without a tracking reader), this will panic. A defensive check keeps the function safe.
🛡️ Suggested guard
```diff
 func isResourceManaged(kube *kube.KubernetesClient, res *unstructured.Unstructured, maxRecurse int, trackingConfig *ResourceTrackingConfig) (bool, error) {
 	if maxRecurse < 1 {
 		return false, fmt.Errorf("recursion limit reached")
 	}
 @@
-	// Use the tracking configuration to determine if the resource is managed
-	if trackingConfig.IsResourceTracked(lbls, annt) {
+	// Use the tracking configuration to determine if the resource is managed
+	if trackingConfig == nil {
+		return false, fmt.Errorf("tracking configuration is nil")
+	}
+	if trackingConfig.IsResourceTracked(lbls, annt) {
 		return true, nil
 	}
```
Also applies to: 417-419, 469-469
test/e2e/resource_tracking_test.go (1)
42-57: Normalize/truncate test namespace names to DNS‑1123.
`getTestNamespace()` can still emit invalid or >63‑char names for certain subtest names. Please normalize and truncate with a stable suffix.
@chetan-rns addressed all the comments, PTAL
agent/tracking.go (outdated)
```go
trackingMethodStr, err := r.settingsManager.GetTrackingMethod()
if err != nil {
	// If there's an error reading the tracking method, use the default (annotation)
	return &ResourceTrackingConfig{
```
We should handle this error properly. Can we return an error?
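One possible shape for that, sketched under the assumption that callers can handle the extra return value; `newTrackingConfig` is a hypothetical helper standing in for the existing config construction:

```go
// GetConfig propagates settings errors instead of silently defaulting.
func (r *ResourceTrackingReader) GetConfig() (*ResourceTrackingConfig, error) {
	trackingMethodStr, err := r.settingsManager.GetTrackingMethod()
	if err != nil {
		return nil, fmt.Errorf("failed to read tracking method from argocd-cm: %w", err)
	}
	return newTrackingConfig(trackingMethodStr), nil // hypothetical helper building the config
}
```

A caller such as `getManagedResource` would then check the error and abort the resource check early rather than proceed with a guessed default.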
agent/tracking.go (outdated)
```go
	trackingMethod = v1alpha1.TrackingMethodAnnotation
}

return &ResourceTrackingConfig{
```
r.settingsManager.GetTrackingMethod() already returns the tracking method set in the argocd-cm. Why do we need to abstract it again within ResourceTrackingConfig?
agent/tracking.go (outdated)
```go
// ResourceTrackingConfig holds the configuration for how Argo CD tracks resources
type ResourceTrackingConfig struct {
	// Method is the tracking method used by Argo CD
```
I'm not sure if we still need this struct. We obtain the method from the settings manager, and the AnnotationKey and the LabelKeys are constants
https://github.com/argoproj/argo-cd/blob/master/common/common.go#L201
We can refactor to make this simple: Remove the ResourceTrackingConfig and GetConfig methods and directly fetch the tracking method in the IsResourceTracked method. WDYT?
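A rough shape of that refactor, reading the method on each check; the constant names are the Argo CD `common` ones linked above, the string literals mirror the values written to `application.resourceTrackingMethod` in the e2e tests, and the exact signature is an assumption:

```go
func (r *ResourceTrackingReader) IsResourceTracked(labels, annotations map[string]string) (bool, error) {
	method, err := r.settingsManager.GetTrackingMethod()
	if err != nil {
		return false, err
	}
	_, hasLabel := labels[common.LabelKeyAppInstance]                // "app.kubernetes.io/instance"
	_, hasAnnotation := annotations[common.AnnotationKeyAppInstance] // "argocd.argoproj.io/tracking-id"
	switch method {
	case "label":
		return hasLabel, nil
	case "annotation+label":
		return hasAnnotation && hasLabel, nil
	default: // "annotation" is Argo CD's default tracking method
		return hasAnnotation, nil
	}
}
```

Since the settings manager is informer-backed, reading the method on every check stays cheap, so dropping the intermediate config struct does not add API calls.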
Force-pushed from 467033a to 362e6d8
Assisted-by: Cursor
Signed-off-by: Rizwana777 <[email protected]>
Force-pushed from 362e6d8 to fe0d538
chetan-rns
left a comment
Thanks @Rizwana777
LGTM
JIRA - https://issues.redhat.com/browse/GITOPS-8291
Fixes: #479
Summary by CodeRabbit
New Features
Tests