Adding network policies to kmm including to kmm bundle #1652
Conversation
Due to security concerns, we need to allow the KMM operator only the necessary traffic. This commit adds network policies for each KMM pod: 1. controller, 2. webhook, 3. build and sign. It also changes the e2e tests to verify the network policies' effect, and updates the KMM bundle to include the network policy manifests.
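The policies described above follow the standard Kubernetes default-deny pattern: one namespace-wide deny policy plus per-component allow policies. As a rough sketch of what such a baseline looks like (illustrative only, not necessarily the exact manifest in this PR):

```yaml
# Sketch of a namespace-wide default-deny NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress          # listing a type with no rules denies all traffic of that type
```

Per-component policies (controller, webhook, build-and-sign) then selectively re-open only the required ports.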
Walkthrough

Adds optional inclusion of NetworkPolicy resources into generated bundles, introduces legacy bundle targets, updates Kustomize label handling (disables includeSelectors), narrows Service/Deployment selectors to control-plane labels, includes network-policy resources in deploy kustomizations, updates CSV timestamps, and scopes CI/e2e assets and scripts to the openshift-kmm namespace.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Dev as Developer
    participant MK as Make (bundle)
    participant OS as operator-sdk
    participant GB as hack/generate-bundle
    participant FS as FS: config/network-policy/*
    participant B as Bundle Dir
    Dev->>MK: make bundle
    MK->>OS: generate kustomize manifests
    OS-->>MK: manifests (api/)
    MK->>GB: INCLUDE_NETWORK_POLICIES=true SUFFIX=""
    alt INCLUDE_NETWORK_POLICIES=true
        GB->>FS: list *.yaml (exclude kustomization.yaml)
        GB->>B: copy as kmm-operator-*-networkpolicy.yaml
        note right of GB: Optional copy step
    end
    GB-->>MK: bundle dir + bundle.Dockerfile
    MK-->>Dev: bundle generated (no validation)
    rect rgba(230,240,255,0.5)
        note over Dev,MK: Legacy flow
        Dev->>MK: make bundle-old / bundle-hub-old
        MK->>OS: generate kustomize manifests
        MK->>GB: INCLUDE_NETWORK_POLICIES (as configured)
        GB-->>MK: bundle dir ready
        MK->>OS: bundle validate ./bundle[(-hub)]
        OS-->>Dev: validation result
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: TomerNewman. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (2)
228-231: Deployment selector immutability: changing matchLabels will break upgrades.

`.spec.selector` is immutable on Kubernetes Deployments. Reducing it to only `control-plane: controller` (from a wider label set) will cause OLM's patch to fail on upgrade with a 422 error. If the goal is to target NetworkPolicies by control-plane only, keep the previous selector keys and add the control-plane label in addition; do not remove selector keys.
Proposed fix: add back the app.kubernetes.io/* labels to matchLabels while keeping control-plane. Also add the same labels under template.metadata.labels to remain consistent.
```diff
 selector:
   matchLabels:
-    control-plane: controller
+    control-plane: controller
+    app.kubernetes.io/component: kmm-hub
+    app.kubernetes.io/name: kmm-hub
+    app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
 metadata:
   annotations:
     kubectl.kubernetes.io/default-container: manager
   labels:
-    control-plane: controller
+    control-plane: controller
+    app.kubernetes.io/component: kmm-hub
+    app.kubernetes.io/name: kmm-hub
+    app.kubernetes.io/part-of: kmm
```
319-322: Same selector immutability risk for webhook deployment.

The webhook's Deployment selector was also narrowed to control-plane only. This is equally immutable and risky.
Apply the same correction here.
```diff
 selector:
   matchLabels:
-    control-plane: webhook-server
+    control-plane: webhook-server
+    app.kubernetes.io/component: kmm-hub
+    app.kubernetes.io/name: kmm-hub
+    app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
 metadata:
   annotations:
     kubectl.kubernetes.io/default-container: webhook-server
   labels:
-    control-plane: webhook-server
+    control-plane: webhook-server
+    app.kubernetes.io/component: kmm-hub
+    app.kubernetes.io/name: kmm-hub
+    app.kubernetes.io/part-of: kmm
```

bundle/manifests/kernel-module-management.clusterserviceversion.yaml (2)
304-307: Deployment selector change will block seamless OLM upgrades.

Narrowing matchLabels to only `control-plane: controller` alters an immutable field. On upgrade, the Deployment cannot be patched and the CSV may fail to progress. Preserve the original selector keys and simply add control-plane for NetworkPolicy targeting.
```diff
 selector:
   matchLabels:
-    control-plane: controller
+    control-plane: controller
+    app.kubernetes.io/component: kmm
+    app.kubernetes.io/name: kmm
+    app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
 metadata:
   annotations:
     kubectl.kubernetes.io/default-container: manager
   labels:
-    control-plane: controller
+    control-plane: controller
+    app.kubernetes.io/component: kmm
+    app.kubernetes.io/name: kmm
+    app.kubernetes.io/part-of: kmm
```
397-400: Webhook Deployment selector immutability issue (same as controller).

Apply the same fix for the webhook Deployment.
```diff
 selector:
   matchLabels:
-    control-plane: webhook-server
+    control-plane: webhook-server
+    app.kubernetes.io/component: kmm
+    app.kubernetes.io/name: kmm
+    app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
 metadata:
   annotations:
     kubectl.kubernetes.io/default-container: webhook-server
   labels:
-    control-plane: webhook-server
+    control-plane: webhook-server
+    app.kubernetes.io/component: kmm
+    app.kubernetes.io/name: kmm
+    app.kubernetes.io/part-of: kmm
```

config/default-hub/kustomization.yaml (1)
20-25: NetworkPolicy manifests require namespace and selector updates

All NetworkPolicy resources under `config/network-policy/` currently hardcode `namespace: system` and target pod labels (`control-plane: webhook-server`, `openshift.io/build.name`, etc.) that do not match the hub pods (which use `app.kubernetes.io` labels). To ensure the namespace transformer injects `openshift-kmm-hub` and the policies actually select the KMM Hub workloads, please:

- Remove any `metadata.namespace: system` lines so the kustomize namespace transformer applies `openshift-kmm-hub`.
- Update each policy's `podSelector.matchLabels` (or `matchExpressions`) to reference the hub's labels: `app.kubernetes.io/component: kmm-hub`, `app.kubernetes.io/name: kmm-hub`, `app.kubernetes.io/part-of: kmm`.

Affected files:

- config/network-policy/webhook.yaml
- config/network-policy/controller.yaml
- config/network-policy/build-and-sign.yaml
- config/network-policy/default-deny.yaml

Example diff for webhook.yaml:

```diff
--- a/config/network-policy/webhook.yaml
@@
 metadata:
-  namespace: system
 spec:
-  podSelector:
-    matchLabels:
-      control-plane: webhook-server
+  podSelector:
+    matchLabels:
+      app.kubernetes.io/component: kmm-hub
+      app.kubernetes.io/name: kmm-hub
+      app.kubernetes.io/part-of: kmm
```

Please apply similar changes across all policies to align namespace injection and selectors with the hub namespace and labels.
bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (2)
1-11: Hardcoded namespace: system — this will misplace resources or break OLM installs. Remove it.

NetworkPolicy is namespaced. In bundle manifests, hardcoding `namespace: system` can cause the policy to be created in the wrong namespace or fail installation under OLM. The bundle should omit namespace so OLM/kustomize can place it into the operator's target namespace (e.g., openshift-kmm-hub).
Because this file is generated from config/network-policy/default-deny.yaml, fix it at the source by removing the namespace there and regenerate. For the committed bundle artifact, also drop the namespace line:
```diff
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
   name: default-deny
-  namespace: system
 spec:
   podSelector: {}
   policyTypes:
   - Ingress
   - Egress
```

Follow-up sanity checks after regeneration:

```bash
#!/bin/bash
# Ensure no NetworkPolicy in the repo hardcodes a namespace
rg -nP -C1 'kind:\s*NetworkPolicy' | awk -F: '{print $1}' | sort -u | xargs -I{} rg -n '^\s*namespace:\s*' {}
```
1-11: Add DNS egress to the webhook NetworkPolicy

The controller policy correctly allows DNS (UDP/TCP 53) and API server (TCP 6443/443) egress, and the build-and-sign policy currently allows all egress traffic. However, the webhook policy only permits egress to the API server and lacks DNS allowances, so webhook pods cannot resolve the Kubernetes API service by name.
Please update `config/network-policy/webhook.yaml` to include egress rules for DNS that mirror the controller policy:

- a namespaceSelector matching `dns.operator.openshift.io/daemonset-dns: default`
- ports UDP/TCP 53
Also confirm that:
- The controller and build-and-sign policies cover their required DNS/API traffic.
- e2e tests exercise both webhook admission and controller → API flows under NetworkPolicy enforcement.
config/network-policy/default-deny.yaml (1)
1-11: Remove hardcoded namespace from the source policy; let kustomize/OLM inject the target namespace.

Same issue as the generated bundle: `namespace: system` should not be present. Removing it here ensures both direct deploys and generated bundles are correct.
Apply:
```diff
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
   name: default-deny
-  namespace: system
 spec:
   podSelector: {}
   policyTypes:
   - Ingress
   - Egress
```

Optionally, add labels to ease auditing and selection consistency:

```diff
 metadata:
   name: default-deny
+  labels:
+    app.kubernetes.io/part-of: kmm
+    app.kubernetes.io/name: kmm
```
♻️ Duplicate comments (3)
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
13-19: Same scope considerations as the source policy (ingress open; egress wide).

Mirror the decision from config/network-policy/controller.yaml; keep both copies aligned to avoid drift.
Also applies to: 32-36
bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (2)
13-19: Replicate the source-policy scope decision here and keep in sync.

Ingress openness and egress breadth mirror the source file; keep them aligned or consider scoping as discussed.
Also applies to: 32-36
3-6: Remove hardcoded namespace in hub bundle too.

Same as the non-hub bundle: drop `metadata.namespace: system` or ensure your bundling pipeline rewrites it to the operator's namespace.

```diff
 metadata:
   name: controller
-  namespace: system
```
🧹 Nitpick comments (21)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (1)
220-238: Keep redundant labels on pods for observability and policy scoping.

Even if NetworkPolicies key off control-plane, retaining `app.kubernetes.io/*` labels on pods aids metrics, log routing, RBAC aggregation, and future policies without affecting your intended policy tightening.
Also applies to: 311-329
bundle/manifests/kernel-module-management.clusterserviceversion.yaml (1)
296-300: Operational advice: prefer additive labels over selector surgery.

If the motivation was "simplify policy selectors," you can keep control-plane as the canonical policy label without removing existing selector keys. This avoids any immutable-field changes and keeps compatibility with tooling and dashboards keyed on `app.kubernetes.io/*`.
Also applies to: 390-394
bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (2)
13-16: Ingress allows 9443 from all peers; this is the practical choice for webhooks.

Given kube-apiserver traffic originates from hostNetwork and doesn't match pod/namespace selectors, allowing any source on 9443 is a common pattern to avoid breaking admissions. This is fine; document the rationale so future hardening attempts don't break the webhook.

Add a brief comment:

```diff
 ingress:
-  - ports:
+  - # apiserver originates from hostNetwork; allow from all peers on webhook port
+    ports:
```
17-22: Consider tightening egress for the webhook (443 may be unnecessary).

For the webhook, egress to 6443 (Kubernetes API) is typically sufficient. Port 443 is often needed by build/sign components, not the webhook. If the webhook doesn't call external HTTPS endpoints, drop 443 here to reduce blast radius.

```diff
 egress:
   - ports: # kube api server port
     - protocol: TCP
       port: 6443
-    - protocol: TCP
-      port: 443
```

If you do need 443 for a known destination (e.g., cluster monitoring), prefer constraining with a namespaceSelector/podSelector or ipBlock.
config/olm-hub/kustomization.yaml (1)
13-18: Safer selector handling; ensure hub resources still match Pods.

Same feedback as for the non-hub variant: with includeSelectors disabled, confirm hub Deployments/Services selectors still match the Pod template labels and that NetworkPolicies selecting on these labels continue to target the intended Pods.

Optional clarity tweak:

```diff
 labels:
-- includeSelectors: false
+- includeSelectors: false
+  includeTemplates: true
   pairs:
     app.kubernetes.io/component: kmm-hub
     app.kubernetes.io/name: kmm-hub
     app.kubernetes.io/part-of: kmm
```

bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1)
17-22: Egress is wide open on 6443/443; consider scoping and DNS
- Today the policy allows egress to any destination on 6443 and 443. If the webhook only talks to the kube-apiserver, scope “to” by namespaceSelector as above.
- If the client uses DNS (kubernetes.default.svc), ensure DNS egress (TCP/UDP 53 to CoreDNS/openshift-dns) is permitted somewhere. Many clients resolve via env vars, but verify your client behavior.
Illustrative tightening:
```diff
 egress:
-  - ports: # kube api server port
-    - protocol: TCP
-      port: 6443
-    - protocol: TCP
-      port: 443
+  - to:
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: kube-system
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: openshift-kube-apiserver
+    ports:
+    - protocol: TCP
+      port: 6443
+  - to:
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: kube-system
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: openshift-dns
+    ports:
+    - protocol: UDP
+      port: 53
+    - protocol: TCP
+      port: 53
```

Also please confirm whether port 443 is actually required for the webhook; if not, drop it.
config/network-policy/build-and-sign.yaml (2)
11-14: Indentation and EOF newline nits flagged by yamllint

The policyTypes item is over-indented and there's no trailing newline.

Apply:

```diff
 policyTypes:
-    - Egress
+  - Egress
 egress:
 - {}
```

and add a trailing newline at end of file.
13-14: Egress 'allow-all' is pragmatic for builds but worth documenting or narrowing

`egress: [{}]` effectively disables egress filtering for matched pods. That may be intentional (pull bases, reach registries, fetch deps). If security policy allows, consider narrowing to:
- DNS (TCP/UDP 53) in kube-system/openshift-dns
- Cluster registry and any required external registries
- Optional: object storage endpoints used during builds/signing
I can help author a minimally permissive rule set once you enumerate the endpoints these pods must reach.
bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1)
4-5: Minor YAML alignment cleanup

The metadata fields are indented with a single space; the rest of the repo seems to prefer two-space indentation.

```diff
 metadata:
- name: default-deny
- namespace: system
+  name: default-deny
+  namespace: system
```

config/webhook-server/kustomization.yaml (1)
12-14: Labels updated, but NetworkPolicy still matches 'control-plane'; align labels

You've added `app.kubernetes.io/component: webhook-server` here, but the webhook NetworkPolicy matches `control-plane: webhook-server`. Either:
- Add control-plane: webhook-server here as well, or
- Change the NetworkPolicy to match app.kubernetes.io/component: webhook-server.
If you prefer to add the extra label via kustomize:
```diff
 labels:
 - pairs:
     app.kubernetes.io/component: webhook-server
+- pairs:
+    control-plane: webhook-server
```

This keeps the NetworkPolicy working without further changes.
ci/e2e/kustomization.yaml (1)
8-9: Heads-up: disableNameSuffixHash=true disables rollout-on-config change.

With hashes disabled, Deployments/Pods depending on these ConfigMaps/Secrets won't auto-roll on content changes. If that's desired for CI, fine; otherwise, consider enabling hashes and referencing them via envFrom/volumes to trigger rollouts.
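For contrast, a sketch of the hash-enabled behavior this comment refers to (the generator name and literals below are hypothetical, for illustration only):

```yaml
# With disableNameSuffixHash at its default (false), kustomize appends a
# content hash to the generated ConfigMap's name and rewrites references to it,
# so pods consuming the ConfigMap roll out whenever its content changes.
generatorOptions:
  disableNameSuffixHash: false
configMapGenerator:
- name: e2e-test-config   # hypothetical generator name
  literals:
  - LOG_LEVEL=debug
```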
config/network-policy/controller.yaml (1)
13-19: Policy scope check: ingress is open to all; egress to 6443/443 is destination-unconstrained. Confirm intent.
Ingress: allowing 8443/8081 from anywhere is simplest and avoids kubelet probe issues, but it also allows any in-cluster source to hit metrics/healthz. If you want stronger isolation, scope `from:` to prometheus in openshift-monitoring for 8443 and consider leaving 8081 open to avoid kubelet pitfalls.

Egress: ports-only rules permit any destination on 6443/443. That's common for API access on platforms where apiserver uses hostNetwork, but it's broader than "API servers only."
Also applies to: 32-36
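A sketch of the tighter ingress split suggested above, assuming the standard `kubernetes.io/metadata.name` label is present on the monitoring namespace:

```yaml
# Illustrative only: scope metrics (8443) to openshift-monitoring,
# leave healthz (8081) open so kubelet probes keep working.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-monitoring
  ports:
  - protocol: TCP
    port: 8443
- ports:   # no "from" clause: any peer may reach the health port
  - protocol: TCP
    port: 8081
```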
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
12-14: Security check: `egress: [{}]` allows all egress. Confirm that's the intended posture for Build pods.

This fully opens egress for selected pods (common for builds needing internet/registry). Ensure this is acceptable with your default-deny baseline and document the rationale.
config/network-policy/webhook.yaml (3)
3-6: Namespace hard-coded to "system" — confirm OLM install behavior or drop it.

Bundled namespaced resources typically install into the target OperatorGroup namespace; hard-coding `metadata.namespace: system` in files that are copied verbatim into the bundle can be surprising and possibly ignored or cause validation noise. If the intent is "install into whichever namespace the operator is installed," consider omitting `namespace` here and let kustomize/OLM set it.

If you agree, apply:

```diff
 metadata:
   name: webhook
-  namespace: system
```
13-16: Ingress currently allows 9443 from all sources. Is this intended?

With a default-deny policy in place, this rule opens the webhook listener on 9443 to any in-cluster peer. If the intent is to allow only kube-apiserver, please document the trade-off (hostNetwork apiserver pods are hard to select) or scope this via ipBlock/Namespace+Pod selectors where feasible in your environments.
17-22: Consider allowing DNS egress explicitly to avoid name-resolution regressions.

If the webhook (or its libraries) resolve `kubernetes.default.svc`, DNS egress (UDP/TCP 53) will be needed when egress default-deny is active. Suggest adding a DNS rule for OpenShift and vanilla K8s:

```diff
 egress:
 - ports: # kube api server port
   - protocol: TCP
     port: 6443
   - protocol: TCP
     port: 443
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: openshift-dns
+  ports:
+  - protocol: UDP
+    port: 53
+  - protocol: TCP
+    port: 53
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: kube-system
+  ports:
+  - protocol: UDP
+    port: 53
+  - protocol: TCP
+    port: 53
```

If you're confident client-go will always use the injected `KUBERNETES_SERVICE_HOST` IP and never DNS, feel free to skip — but please confirm via e2e.

hack/generate-bundle (1)
27-35: Make copying more robust (yml support, nullglob, and guard by kind).

Minor hardening to avoid corner cases:
- Support both .yaml and .yml files.
- Enable nullglob so the loop doesn’t iterate on literal globs.
- Copy only files whose `kind` is NetworkPolicy (defensive if other YAMLs slip into the directory).

Suggested patch:

```diff
 # add network policies
-if [ "$INCLUDE_NETWORK_POLICIES" = "true" ] && [ -d "$SOURCE_DIR/config/network-policy" ]; then
-  for np_file in "$SOURCE_DIR/config/network-policy"/*.yaml; do
-    if [ -f "$np_file" ] && [ "$(basename "$np_file")" != "kustomization.yaml" ]; then
-      filename=$(basename "$np_file" .yaml)
-      cp "$np_file" "$BUNDLE_DIR/manifests/kmm-operator${SUFFIX}-$(unknown)_networking.k8s.io_v1_networkpolicy.yaml"
-    fi
-  done
-fi
+if [ "$INCLUDE_NETWORK_POLICIES" = "true" ] && [ -d "$SOURCE_DIR/config/network-policy" ]; then
+  shopt -s nullglob
+  for np_file in "$SOURCE_DIR/config/network-policy"/*.yaml "$SOURCE_DIR/config/network-policy"/*.yml; do
+    base="$(basename "$np_file")"
+    # skip kustomization files
+    if [ "$base" = "kustomization.yaml" ] || [ "$base" = "kustomization.yml" ]; then
+      continue
+    fi
+    # copy only NetworkPolicy kinds
+    if [ -f "$np_file" ] && grep -Eq '^\s*kind:\s*NetworkPolicy\b' "$np_file"; then
+      filename="${base%.*}"
+      cp "$np_file" "$BUNDLE_DIR/manifests/kmm-operator${SUFFIX}-$(unknown)_networking.k8s.io_v1_networkpolicy.yaml"
+    fi
+  done
+fi
```

Also, quick check: ensure the kustomize output doesn't already include the same NetworkPolicies to avoid duplicates in the bundle.
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
5-14: Egress is fully open; confirm intent and consider minimal tightening.
`egress: [{}]` allows all destinations and all ports. That may be acceptable for build pods (they often need to reach arbitrary registries), but it doesn't align with "restrict to only necessary communications." If feasible in your environments, consider scoping to:
- TCP 443 (and 80, if needed) to external registries,
- the in-cluster registry service/namespace,
- DNS (UDP/TCP 53) for name resolution.
Also, as with other manifests, verify whether `namespace: system` is correct for OLM bundles or should be omitted to let the install namespace be chosen at install time.

Makefile (3)
298-312: Main "bundle" target no longer validates the generated bundle — ensure CI compensates.

Validation was removed here. If CI doesn't explicitly run `bundle-old` (or an equivalent validation step), regressions in bundle contents may slip through.

If you prefer to keep validation in the main flow, add back:

```diff
 ./hack/generate-bundle
+${OPERATOR_SDK} bundle validate ./bundle
```

Alternatively, document and enforce a CI job that runs `make bundle-old` on PRs.
336-357: Hub bundle also skips validation — mirror the decision from "bundle."

Same comment as the non-hub target: either re-add validation here or guarantee that CI runs a validating path.

Potential quick fix:

```diff
 ./hack/generate-bundle
+${OPERATOR_SDK} bundle validate ./bundle-hub
```
282-357: Reduce duplication between old/new and hub/non-hub bundle targets.

Consider extracting a small Make macro/function for the common pre-steps (kustomize edits and env wiring) with parameters for SUFFIX and INCLUDE_NETWORK_POLICIES, then two thin targets call it with/without validation. This will make future changes (like image vars or flags) one-line updates.
I can sketch a DRY’d Make fragment if you’d like.
📒 Files selected for processing (35)
- Makefile (3 hunks)
- PROJECT (1 hunks)
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml (1 hunks)
- bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- ci/e2e/kustomization.yaml (1 hunks)
- ci/e2e/module.yaml (1 hunks)
- ci/prow/e2e-incluster-build (4 hunks)
- ci/sign-key-certs/kustomization.yaml (1 hunks)
- config/default-hub/kustomization.yaml (1 hunks)
- config/default/kustomization.yaml (1 hunks)
- config/deploy-hub/kustomization.yaml (1 hunks)
- config/deploy/kustomization.yaml (1 hunks)
- config/manifests-hub/kustomization.yaml (1 hunks)
- config/manifests/kustomization.yaml (1 hunks)
- config/network-policy/build-and-sign.yaml (1 hunks)
- config/network-policy/controller.yaml (1 hunks)
- config/network-policy/default-deny.yaml (1 hunks)
- config/network-policy/kustomization.yaml (1 hunks)
- config/network-policy/webhook.yaml (1 hunks)
- config/olm-hub/kustomization.yaml (1 hunks)
- config/olm/kustomization.yaml (1 hunks)
- config/webhook-server/kustomization.yaml (1 hunks)
- hack/generate-bundle (2 hunks)
💤 Files with no reviewable changes (4)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
config/network-policy/kustomization.yaml
[error] 7-7: no new line character at the end of file
(new-line-at-end-of-file)
config/network-policy/build-and-sign.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (25)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (2)
40-40: Timestamp bump looks fine.

createdAt updated to 2025-08-26T14:37:47Z. No concerns; consistent with bundle refresh.
312-316: Selectors align with pod labels — no changes needed

All Service objects in `bundle-hub/manifests` now select on `control-plane: webhook-server` or `control-plane: controller`, which matches the `control-plane` labels present on the webhook and controller pod templates in the CSV. Likewise, each NetworkPolicy's `podSelector` either targets the same `control-plane` labels, uses a build-name expression, or defaults to `{}` (all pods). No Service is still selecting on any removed `app.kubernetes.io/*` labels, and no NetworkPolicy is orphaned.

Since there are no mismatches, we can resolve this without further edits.
bundle/manifests/kernel-module-management.clusterserviceversion.yaml (2)
50-50: Timestamp bump acknowledged.

createdAt updated to 2025-08-26T14:37:46Z. OK and consistent with the hub bundle timing.
390-395: Services and NetworkPolicies selectors verified

I've checked the bundled Service and NetworkPolicy manifests under bundle/manifests and confirmed that:
- Both kmm-operator-controller-metrics-service and kmm-operator-webhook-service use only control-plane: controller and control-plane: webhook-server in their selector: blocks, and the corresponding pod templates include those exact labels.
- The NetworkPolicies for controller and webhook likewise match on podSelector.matchLabels.control-plane only.
- All pod templates still include the app.kubernetes.io/* labels for compatibility, even though selectors rely solely on control-plane.
No changes are needed.
config/manifests/kustomization.yaml (1)
12-16: Disable includeSelectors safely verified

I scanned the generated manifests across `config/` and `bundle/manifests/` and confirmed that every explicit selector — for Deployments, Services, NetworkPolicies, and ServiceMonitors — relies solely on the `control-plane` label (with values `controller` or `webhook-server`). No resource is selecting on any of the `app.kubernetes.io/*` labels, so disabling `includeSelectors` cannot break any existing selectors. This change is safe to merge.

config/default/kustomization.yaml (1)
20-24: No selector coupling lost

I've verified that, with `includeSelectors: false`, all Service and NetworkPolicy selectors still align exactly with the `control-plane` labels on the corresponding Pod templates:

- Services
  - config/webhook/service.yaml → `selector: control-plane: webhook-server`
  - config/webhook-hub/service.yaml → `selector: control-plane: webhook-server`
  - config/rbac-base/auth_proxy_service.yaml → `selector: control-plane: controller`
- Deployments
  - config/webhook-server/deployment.yaml → `spec.selector.matchLabels: control-plane: webhook-server` & Pod template `metadata.labels: control-plane: webhook-server`
  - config/manager-base/manager.yaml → `spec.selector.matchLabels: control-plane: controller` & Pod template `metadata.labels: control-plane: controller`
- NetworkPolicies
  - config/network-policy/webhook.yaml → `podSelector.matchLabels: control-plane: webhook-server`
  - config/network-policy/controller.yaml → `podSelector.matchLabels: control-plane: controller`

All selectors continue to target the existing `control-plane` labels on Pods. No updates to selectors were needed by the overlay, so the flip to `includeSelectors: false` is safe here.

config/manifests-hub/kustomization.yaml (1)
12-16: Hub bundle: same selector safety improvement — looks good.

Mirrors the non-hub change; safer bundles with fewer surprises in selectors.
If hub selectors were narrowed to control-plane labels, re-run the scans from the non-hub path for hub resources as well.
bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (2)
1-23: Default-deny NetworkPolicy present — verify apply ordering

The bundle already includes a namespace-scoped `default-deny` policy at `bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml` (kind: NetworkPolicy, name: default-deny).

Next steps:

- Confirm that your deployment tooling (e.g. `kustomization.yaml`, Helm hooks, CI/CD apply jobs) applies the component-specific allow-list policies (controller, webhook, build-and-sign) before the `default-deny` policy.
- Ensure no transient gap occurs during upgrades that could block traffic to the API server or other essential control-plane services.
9-9: NetworkPolicy podSelector label verified

The `control-plane: webhook-server` label is correctly applied to the webhook Deployment’s metadata, selector, and Pod template, matching the NetworkPolicy’s podSelector:
- config/webhook-server/deployment.yaml:
- metadata.labels (line 8)
- spec.selector.matchLabels (line 12)
- template.metadata.labels (line 19)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (line 9)
No changes required.
config/olm/kustomization.yaml (1)
13-18: Ensure Pod template labels & selectors remain in sync

Your current Kustomize labels transformer in config/olm/kustomization.yaml sets only `includeSelectors: false`. Because `includeTemplates` defaults to false, Pod templates will not receive the new `app.kubernetes.io/*` labels, and selectors will remain unchanged. The raw manifests below do not define those labels in their `.spec.selector` or `.spec.template.metadata.labels`:
• config/manager-base/manager.yaml
• config/webhook-server/deployment.yaml
• config/webhook/service.yaml

Without applying both selectors and template labels, NetworkPolicies and Services won’t match the intended Pods. To address this, explicitly enable template labeling in config/olm/kustomization.yaml:

 labels:
-- includeSelectors: false
+- includeSelectors: false # keep selectors untouched
+  includeTemplates: true # ensure Pod templates get these labels
   pairs:
     app.kubernetes.io/component: kmm
     app.kubernetes.io/name: kmm
     app.kubernetes.io/part-of: kmm

After applying, please verify:
- Each Deployment’s `.spec.template.metadata.labels` includes all three `app.kubernetes.io/*` entries.
- Each Service/Deployment `.spec.selector` (or `.spec.selector.matchLabels`) matches those labels exactly.

config/network-policy/default-deny.yaml (1)
1-11: Scope and coverage check: verify that compensating allow policies fully cover required traffic.

Ensure all KMM Pods retain:
- Ingress/egress needed for health probes, metrics scraping (if any), webhook traffic from kube-apiserver, and inter-pod communications in the namespace.
- Egress to external registries or cluster-internal registries used by build-and-sign flows.
Please confirm e2e covers:
- Module build/sign success under policies
- Webhook admission requests succeeding with default-deny in place
- DNS resolution from Pods
If gaps are found, I can draft targeted allow rules (DNS, API, intra-namespace) for each component. Want me to propose those?
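For reference, a targeted DNS-egress allow policy could take roughly this shape (a sketch only; the policy name, the `openshift-dns` namespace label, and port 5353 are assumptions to verify against the cluster, not taken from this PR):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress          # hypothetical name
spec:
  podSelector: {}                 # all pods in the KMM namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
    ports:
    - protocol: UDP
      port: 5353                  # assumed OpenShift CoreDNS port
    - protocol: TCP
      port: 5353
```

Because NetworkPolicies are additive, such a policy combines with `default-deny`: selected pods get the union of all allow rules that match them.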
bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1)
6-10: Default-deny is fine; ensure companion allow policies cover DNS and metrics paths

This blanket policy will block all traffic for pods in the namespace not matched by more specific policies. Double-check that:
- DNS egress is granted where needed (controller/webhook/build/sign as applicable).
- Any metrics scraping (Prometheus) has explicit ingress to controller/webhook if required.
I can scan the repo and list which pods would remain only subject to default-deny to help spot gaps.
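As an illustration, a metrics-scrape allow rule might look like the following (the monitoring namespace label and the kube-rbac-proxy port 8443 are assumptions; adjust to the actual metrics endpoint and scraper namespace):

```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-monitoring
  ports:
  - protocol: TCP
    port: 8443   # assumed kube-rbac-proxy metrics port
```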
ci/e2e/module.yaml (2)
6-6: LGTM: e2e Module now targets openshift-kmm

Aligns with the broader namespace scoping in this PR.
7-34: Verify all e2e assets and scripts are namespace-aligned

Confirm ServiceAccounts, Roles/RoleBindings, Secrets/ConfigMaps, and script invocations also use `-n openshift-kmm` consistently so e2e doesn’t flake on namespace mismatches.
If you want, I can generate a quick repo scan to highlight any lingering -n defaults or hardcoded namespaces.
config/webhook-server/kustomization.yaml (1)
12-14: includeSelectors removal: double-check intended propagation

Removing includeSelectors means kustomize will not mutate selectors. Ensure the Deployment/Service selectors already carry the labels your policies depend on and that dropping includeSelectors doesn’t unintentionally change rollout behavior.
I can grep Deployments/Services under config/webhook-server to confirm selector/label symmetry if helpful.
ci/e2e/kustomization.yaml (1)
11-14: Kustomize binary not available — please verify namespace locally

The sandbox environment lacks the `kustomize` CLI, so we couldn’t confirm that generated ConfigMap and Secret resources include `metadata.namespace: openshift-kmm`. To ensure the patch works as intended, run the build in an environment with Kustomize installed and verify that both resources have the correct namespace:

kustomize build ci/e2e | awk '/^kind: (ConfigMap|Secret)/, /^$/' | sed -n '1,12p'

If you see `metadata.namespace: openshift-kmm` on both objects, the refactor is correct. Otherwise, apply the diff in the original comment to move `namespace: openshift-kmm` to the top level of kustomization.yaml.

config/network-policy/controller.yaml (1)
20-31: DNS egress peer selection is precise (namespace+pod selector intersection). LGTM.

Selecting the openshift-dns daemonset via both namespaceSelector and podSelector is accurate and avoids accidental matches.
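The distinction matters because of how peers are grouped: selectors placed inside one `to` entry are ANDed, while separate entries are ORed. A sketch of the difference (label values are assumed for illustration, not copied from the manifest):

```yaml
egress:
# One peer, both selectors: only pods with the dns-default label
# that are ALSO in the openshift-dns namespace (intersection).
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-dns
    podSelector:
      matchLabels:
        dns.operator.openshift.io/daemonset-dns: default
# By contrast, listing namespaceSelector and podSelector as two
# separate peers would match anything in openshift-dns OR any
# dns-default pod in the policy's own namespace (union).
```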
ci/prow/e2e-incluster-build (1)
12-12: LGTM! Consistent namespace scoping to openshift-kmm.

The changes consistently scope all KMM-related operations to the openshift-kmm namespace, which aligns with the PR's objective to add network policies and improve namespace isolation for KMM components.

Also applies to: 49-50, 55-56, 69-69, 78-78
config/deploy-hub/kustomization.yaml (1)
12-12: LGTM! Hub deployment now includes network policies.

Adding the `../network-policy` resource enables network policies to be included in hub deployments, which aligns with the PR objective to add NetworkPolicy manifests for KMM pods.
1-7: LGTM! Well-structured network policy kustomization.

The kustomization properly references all four network policy manifests (default-deny, controller, webhook, and build-and-sign), providing a clean organization for the NetworkPolicy resources.
config/deploy/kustomization.yaml (1)
13-13: LGTM! Standard deployment now includes network policies.

Adding the `../network-policy` resource enables network policies to be included in standard deployments, complementing the hub deployment configuration and supporting the PR's network security objectives.

ci/sign-key-certs/kustomization.yaml (1)
9-9: LGTM! Proper namespace scoping for signing secrets.

Adding the `namespace: openshift-kmm` to both secret generators ensures the signing certificates and keys are created in the correct namespace, which is essential for the network policies to function properly.

Also applies to: 12-12
hack/generate-bundle (1)
11-11: Nice: feature-flag for including NetworkPolicies.
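A minimal sketch of what such a toggle might look like (illustrative only, not the actual hack/generate-bundle script; the temp directories stand in for config/network-policy and bundle/manifests, and the kmm-operator naming scheme is assumed from the bundle file names):

```shell
#!/usr/bin/env bash
# Copy NetworkPolicy manifests into the bundle only when
# INCLUDE_NETWORK_POLICIES=true, skipping kustomization.yaml and applying a
# kmm-operator${SUFFIX}-<name>-networkpolicy.yaml naming scheme.
set -euo pipefail

INCLUDE_NETWORK_POLICIES="${INCLUDE_NETWORK_POLICIES:-false}"
SUFFIX="${SUFFIX:-}"     # e.g. "-hub" for the hub bundle

np_dir=$(mktemp -d)      # stands in for config/network-policy
bundle_dir=$(mktemp -d)  # stands in for bundle/manifests
touch "${np_dir}/controller.yaml" "${np_dir}/kustomization.yaml"

if [ "${INCLUDE_NETWORK_POLICIES}" = "true" ]; then
  for f in "${np_dir}"/*.yaml; do
    base=$(basename "${f}" .yaml)
    if [ "${base}" != "kustomization" ]; then  # kustomization.yaml is not a manifest
      cp "${f}" "${bundle_dir}/kmm-operator${SUFFIX}-${base}-networkpolicy.yaml"
    fi
  done
fi

ls -A "${bundle_dir}"    # empty with the default (false) flag value
```

Defaulting to false keeps the generated bundle unchanged unless the caller opts in, which matches the backward-compatibility point above.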
`INCLUDE_NETWORK_POLICIES` defaulting to false is a clean toggle and keeps the generator backward compatible.

Makefile (2)
282-297: Legacy “bundle-old” target is fine as a compatibility/validation path.

Keeping the original flow (including `operator-sdk bundle validate`) under a distinct target is a pragmatic way to iterate on the new bundle contents.
313-335: Hub legacy target mirrors non-hub — looks good.

Parity with the non-hub “old” flow reads well and should ease debugging differences between the tracks.
  podSelector:
    matchExpressions:
      - key: openshift.io/build.name
        operator: Exists
  policyTypes:
    - Egress
  egress:
    - {}
Fix YAML lint issues (indentation, trailing newline) and keep minimal style.
YAMLLint flagged indentation under policyTypes and missing newline at EOF. Also normalize list indentation under matchExpressions.
spec:
- podSelector:
- matchExpressions:
- - key: openshift.io/build.name
- operator: Exists
- policyTypes:
- - Egress
- egress:
- - {}
+ podSelector:
+ matchExpressions:
+ - key: openshift.io/build.name
+ operator: Exists
+ policyTypes:
+ - Egress
+ egress:
+ - {}
+
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
around lines 7 to 14, the YAML has lint problems: incorrect indentation for the
policyTypes and list items under matchExpressions, and missing trailing newline.
Fix by aligning list items two spaces under their parent keys (ensure
matchExpressions items are indented to match podSelector and the dash items are
properly indented), place policyTypes and its list at the same indentation level
as podSelector and egress, and ensure the file ends with a single trailing
newline; keep the minimal style (no extra blank lines or comments).
  podSelector:
    matchLabels:
      control-plane: webhook-server
  policyTypes:
Selector likely mismatched with current webhook pod labels
NetworkPolicy selects pods via control-plane: webhook-server, while config/webhook-server/kustomization.yaml applies app.kubernetes.io/component: webhook-server. Unless the Deployment also sets control-plane: webhook-server on the Pod template, this policy won’t select the webhook pods and will be a no-op.
Apply one of the following to align:
Option A — make the policy match the kustomize-applied label:
spec:
podSelector:
matchLabels:
- control-plane: webhook-server
+      app.kubernetes.io/component: webhook-server

Option B — keep the policy as-is but add the missing pod label via kustomize (see my comment on config/webhook-server/kustomization.yaml).
  ingress:
  - ports:
    - protocol: TCP
      port: 9443
🛠️ Refactor suggestion
Ingress currently allows from anywhere; restrict to API server
With an ingress rule that only lists ports and has no `from` peers, traffic on 9443 is allowed from any source. For admission webhooks we should scope ingress to the API server namespaces to reduce blast radius.
Example tightening (covers both vanilla and OpenShift):
ingress:
- - ports:
- - protocol: TCP
- port: 9443
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: kube-system
+ - namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: openshift-kube-apiserver
+ ports:
+ - protocol: TCP
+        port: 9443

If you prefer stricter scoping, we can further constrain with podSelector labels on the apiserver pods in those namespaces.
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml
lines 13-16, the ingress rule lists only ports so port 9443 is open to any
source; add explicit from peers to restrict traffic to API server namespaces.
Modify the ingress stanza to include from entries for the API server namespaces
(e.g., a namespaceSelector with matchLabels: kubernetes.io/metadata.name:
kube-system and a second from entry for OpenShift API namespace like
matchLabels: kubernetes.io/metadata.name: openshift-apiserver / or
openshift-kube-apiserver as appropriate), and optionally add a podSelector
within those from entries to limit to apiserver pods; ensure multiple from
blocks are used so only requests originating from those namespaces/pods can
reach port 9443.
  policyTypes:
    - Egress
  egress:
    - {}
Fix YAML indentation and ensure newline at EOF to satisfy linters/CI.
Yamllint flagged indentation and missing newline. Apply:
spec:
podSelector:
matchExpressions:
- - key: openshift.io/build.name
- operator: Exists
- policyTypes:
- - Egress
- egress:
- - {}
+ - key: openshift.io/build.name
+ operator: Exists
+ policyTypes:
+ - Egress
+ egress:
+ - {}
+

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
around lines 11 to 14, the YAML has incorrect indentation for the
policyTypes/egress block and is missing a newline at EOF; fix the indentation so
that "policyTypes:" and "egress:" are top-level entries with their lists
properly indented (e.g. policyTypes: followed by an indented "- Egress" and
egress: followed by an indented "- {}" or a proper rule), and ensure the file
ends with a single trailing newline character.
metadata:
  name: controller
  namespace: system
spec:
💡 Verification agent
🧩 Analysis chain
Bundle manifest should not hardcode metadata.namespace: system.
OLM typically applies bundle objects into the operator’s install namespace. Shipping a hardcoded namespace can cause install-time failures or resources landing in the wrong namespace. Recommend dropping the field in the bundled file (or have the bundle generator strip it).
metadata:
name: controller
-  namespace: system

🏁 Script executed:
#!/usr/bin/env bash
# Locate the NetworkPolicy manifest and print its metadata section
manifest="bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml"
if [ -f "$manifest" ]; then
echo "Showing lines 1-20 of $manifest:"
sed -n '1,20p' "$manifest"
else
echo "$manifest not found."
fi

Length of output: 871
Remove hardcoded namespace from NetworkPolicy manifest
The bundle’s NetworkPolicy manifest explicitly sets metadata.namespace: system, which will force the resource into the system namespace regardless of the operator’s install namespace. This can lead to install-time failures or resources landing in the wrong namespace. OLM by default applies namespaced objects into the operator’s install namespace, so you should omit the namespace field.
Please update the file as follows:
• File: bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml
• Remove or comment out the namespace: system line under metadata.
Suggested diff:
metadata:
name: controller
-  namespace: system
🤖 Prompt for AI Agents
In
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml
around lines 3 to 6, remove the hardcoded metadata.namespace: system line so the
NetworkPolicy does not force the resource into a specific namespace; delete or
comment out that namespace entry under metadata and leave metadata.name only so
OLM can apply the object into the operator’s install namespace.
wait_for_pod_and_print_logs () {
  # we can't exec a command nor get the logs on a pod that isn't `Running` yet.
  oc wait "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s
  oc wait -n openshift-kmm"pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s
Fix syntax error in the oc wait command.
There's a missing space between openshift-kmm and "pod/${POD_NAME}" which will cause a syntax error.
- oc wait -n openshift-kmm"pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s
+ oc wait -n openshift-kmm "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s
🤖 Prompt for AI Agents
ci/prow/e2e-incluster-build around line 9: the oc wait command lacks a space
between the namespace and the resource argument, causing a syntax error; fix by
inserting a space so the -n openshift-kmm flag and the pod/${POD_NAME} argument
are separate (or alternatively use the pod/NAME form), ensuring the command
syntax is valid and the --for/--timeout flags remain unchanged.
  podSelector:
    matchExpressions:
      - key: openshift.io/build.name
        operator: Exists
Selector only matches OpenShift Build pods; “sign” pods may remain blocked
The selector relies on openshift.io/build.name Exists, which matches Build/BuildConfig pods but not typical KMM signing Jobs/Pods. If “sign” runs as a Job/Pod without that label, it will still be default-denied.
Two robust options:
- Split into two policies: build.yaml (current selector) and sign.yaml (match the label your signing pods carry, e.g., app.kubernetes.io/component: sign or job-name: ...).
- Or, add the missing label to signing pods and update the selector to match it.
If you want, I can draft sign.yaml once you confirm the canonical labels on signing pods.
🤖 Prompt for AI Agents
In config/network-policy/build-and-sign.yaml around lines 7-10, the podSelector
only matches pods with openshift.io/build.name which excludes signing Job/Pod
pods so “sign” traffic remains default-denied; fix by either: (A) split into two
NetworkPolicy files — build.yaml using the existing openshift.io/build.name
selector and sign.yaml with a podSelector that matches the canonical label your
signing pods carry (e.g., matchLabels: app.kubernetes.io/component: sign or
matchLabels: job-name: <sign-job-name>), or (B) update this policy’s podSelector
to include the signing label as an additional selector (add a matchLabels entry
or an additional matchExpression that matches the signing pod label), or
alternatively ensure your signing Pods/Jobs are labeled with
openshift.io/build.name so the current selector applies.
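Under Option A above, a hypothetical sign.yaml could look like this (the `app.kubernetes.io/component: sign` label and the policy name are assumed placeholders; confirm the label the signing pods actually carry before using anything like it):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sign                              # hypothetical name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: sign   # assumed label on signing pods
  policyTypes:
  - Egress
  egress:
  - {}                                    # mirrors build-and-sign: allow all egress
```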
metadata:
  name: controller
  namespace: system
💡 Verification agent
🧩 Analysis chain
Avoid hardcoding namespace in source manifests used for bundling.
This file is copied verbatim into the bundle, so metadata.namespace: system will ship as-is. Unless your OLM pipeline rewrites namespaces at install time, this can mis-target the resource. Prefer omitting metadata.namespace here and let overlays/OLM inject it.
metadata:
name: controller
-  namespace: system

🏁 Script executed:
#!/bin/bash
# Search for Kustomize configuration files in the repo
find . -type f -name 'kustomization.yaml'
echo
# Look for any overlays directories
find . -type d | grep -E 'overlays?'
echo
# List occurrences of 'namespace:' in the config directory
grep -R -n 'namespace:' config

Length of output: 5279
🏁 Script executed:
#!/bin/bash
# Print the first 200 lines of the network-policy kustomization to check for namespace transformers
sed -n '1,200p' config/network-policy/kustomization.yaml

Length of output: 206
🏁 Script executed:
#!/bin/bash
# Check for namespace injector in network-policy overlay
grep -n "namespace:" -C2 config/network-policy/kustomization.yaml || echo "No namespace directive in kustomization"

Length of output: 154
Remove hardcoded namespace from network-policy controller manifest
The config/network-policy/controller.yaml file is included directly by its kustomization without any namespace transformer, so the metadata.namespace: system setting will be baked into the final bundle as-is. To ensure the resource is scoped to whatever target namespace the end-user or OLM installs into, omit the namespace field here and let overlays or OLM inject the proper namespace.
• File needing update:
- config/network-policy/controller.yaml
Suggested change:
metadata:
name: controller
-  namespace: system
🤖 Prompt for AI Agents
In config/network-policy/controller.yaml around lines 3 to 5, remove the
hardcoded metadata.namespace: system entry so the manifest does not bake a fixed
namespace into the kustomized bundle; leave the metadata block without a
namespace field so overlays or OLM can inject the target namespace at deploy
time and verify there are no other files or transformers that rely on this
hardcoded value.
- default-deny.yaml
- controller.yaml
- webhook.yaml
- build-and-sign.yaml
Add missing newline at end of file.
The YAML file should end with a newline character as indicated by the static analysis tool.
-- build-and-sign.yaml
+- build-and-sign.yaml
+
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 7-7: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In config/network-policy/kustomization.yaml at line ~7, the file is missing a
trailing newline; update the file by ensuring there is a single newline
character at the end of the file (i.e., add an empty line after
"build-and-sign.yaml") and save so the file ends with a newline to satisfy the
linter.
domain: sigs.x-k8s.io
layout:
- go.kubebuilder.io/v3
- go.kubebuilder.io/v4
💡 Verification agent
🧩 Analysis chain
Kubebuilder layout bumped to v4 — confirm toolchain compatibility.
Layout v4 with config version "3" is fine, but ensure your pinned operator-sdk/kubebuilder versions in CI and dev images understand v4 layouts.
We can follow up with specific pin recommendations if needed.
🏁 Script executed:
#!/usr/bin/env bash
# If you pin operator-sdk/kubebuilder in Makefiles or images, surface them
rg -n -C2 -e 'operator-sdk|kubebuilder' Makefile hack/ Dockerfile* .github

Length of output: 2306
Action Required: Upgrade Operator-SDK and Kubebuilder CLI for go.kubebuilder.io/v4 support
The Makefile currently pins Operator-SDK to v1.32.0, which predates support for Kubebuilder v4 layouts (added in Operator-SDK v1.38.0) (newreleases.io, github.com). Without upgrading, commands like operator-sdk generate won’t recognize your layout: go.kubebuilder.io/v4 scaffold.
• Makefile (around line 278): change
- curl -Lo ${OPERATOR_SDK} 'https://github.com/operator-framework/operator-sdk/releases/download/v1.32.0/operator-sdk_linux_amd64'
+ curl -Lo ${OPERATOR_SDK} 'https://github.com/operator-framework/operator-sdk/releases/download/v1.38.0/operator-sdk_linux_amd64'

(or bump to the latest v1.38.0+ tag)
• Dev/CI images or Dockerfiles: if you install the Kubebuilder CLI directly, pin it to a compatible v4.x release (e.g., v4.6.0) so that kubebuilder commands understand the v4 layout (github.com).
• Verify your ENVTEST_K8S_VERSION = 1.23 remains appropriate for your target Kubernetes version; no change is needed here unless you also bump test-env to match newer K8s minor versions.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In PROJECT around line 3 and Makefile around line 278, the repo still targets
go.kubebuilder.io/v4 but the Makefile pins operator-sdk to v1.32.0 which lacks
Kubebuilder v4 support; update the Makefile to bump operator-sdk to v1.38.0 or
later (or to the latest v1.38+ tag), and in any Dev/CI Dockerfiles or scripts
that install the Kubebuilder CLI pin kubebuilder to a v4.x release (e.g.,
v4.6.0) so kubebuilder commands understand the v4 layout; after changes, run the
operator-sdk generate commands in CI/dev and verify ENVTEST_K8S_VERSION
(currently 1.23) still matches your target Kubernetes versions, adjusting only
if you also upgrade test-env/k8s versions.
@TomerNewman: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
We are currently waiting until OLM is stable in older versions of OCP and Konflux allows NP manifests in the bundles. |
Due to security concerns, we need to allow the KMM
operator only the necessary traffic.
This commit adds network policies for each KMM pod.
This commit also affects the KMM bundle to include the network policy manifests.
/hold
/cc @ybettan @yevgeny-shnaidman
Summary by CodeRabbit