
MG-34: Add oc cli like must-gather collection with ServerPrompt #51

Merged
openshift-merge-bot[bot] merged 3 commits into openshift:main from swghosh:plan-mg-tool
Feb 24, 2026

Conversation

@swghosh
Member

@swghosh swghosh commented Oct 10, 2025

plan_mustgather dynamic prompt for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied manually by the user or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too
Details ([MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector)):

Input (inferred defaults):

{
  "gather_command": "/usr/bin/gather",
  "source_dir": "/must-gather"
}

Output:

The generated plan contains YAML manifests for must-gather pods and required resources (namespace, serviceaccount, clusterrolebinding). Suggest how the user can apply the manifest and copy results locally (oc cp / kubectl cp).

Ask the user if they want to apply the plan

  • use the resource_create_or_update tool to apply the manifest
  • alternatively, advise the user to execute oc apply / kubectl apply instead.

Once the must-gather collection is completed, the user may wish to clean up the created resources.

  • use the resources_delete tool to delete the namespace and the clusterrolebinding
  • or, execute cleanup using kubectl delete.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-must-gather-tn7jzk
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-must-gather-tn7jzk-must-gather-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: v1
kind: Pod
metadata:
  generateName: must-gather-
  namespace: openshift-must-gather-tn7jzk
spec:
  containers:
  - command:
    - /usr/bin/gather
    image: registry.redhat.io/openshift4/ose-must-gather:latest
    imagePullPolicy: IfNotPresent
    name: gather
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  - command:
    - /bin/bash
    - -c
    - sleep infinity
    image: registry.redhat.io/ubi9/ubi-minimal
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  priorityClassName: system-cluster-critical
  restartPolicy: Never
  serviceAccountName: must-gather-collector
  tolerations:
  - operator: Exists
  volumes:
  - emptyDir: {}
    name: must-gather-output
status: {}

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Oct 10, 2025
@openshift-ci-robot

openshift-ci-robot commented Oct 10, 2025

@swghosh: This pull request references MG-34 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

Details

In response to this:

plan_mustgather tool for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied manually by the user or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from Cali0707 and matzew October 10, 2025 19:36
@swghosh
Member Author

swghosh commented Oct 10, 2025

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp, given this is also an OpenShift-specific tool?

@swghosh
Member Author

swghosh commented Oct 10, 2025

/cc @Prashanth684 @shivprakashmuley

@Cali0707

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp, given this is also an OpenShift-specific tool?

My thoughts are that we should probably make one or more OpenShift-specific toolgroups eventually

@openshift-ci-robot

openshift-ci-robot commented Oct 10, 2025

@swghosh: This pull request references MG-34 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

Details

In response to this:

plan_mustgather tool for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied manually by the user or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too
[MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector):

Input (inferred defaults):

{
 "gather_command": "/usr/bin/gather",
 "source_dir": "/must-gather",
 "timeout": "10m"
}

Output:

Save the following content to a file (e.g., must-gather-plan.yaml) and apply it with 'kubectl apply -f must-gather-plan.yaml'
Monitor the pod's logs to see when the must-gather process is complete:
kubectl logs -f -n openshift-must-gather-wwt74j -c gather
Once the logs indicate completion, copy the results with:
kubectl cp -n openshift-must-gather-wwt74j :/must-gather ./must-gather-output -c wait
Finally, clean up the resources with:
kubectl delete ns openshift-must-gather-wwt74j
kubectl delete clusterrolebinding openshift-must-gather-wwt74j-must-gather-collector

apiVersion: v1
kind: ServiceAccount
metadata:
  name: must-gather-collector
  namespace: openshift-must-gather-wwt74j
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-must-gather-wwt74j-must-gather-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: must-gather-collector
  namespace: openshift-must-gather-wwt74j
---
apiVersion: v1
kind: Pod
metadata:
  generateName: must-gather-
  namespace: openshift-must-gather-wwt74j
spec:
  containers:
  - command:
    - /usr/bin/timeout 10m /usr/bin/gather
    image: registry.redhat.io/openshift4/ose-must-gather:latest
    imagePullPolicy: IfNotPresent
    name: gather
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  - command:
    - /bin/bash
    - -c
    - sleep infinity
    image: registry.redhat.io/ubi9/ubi-minimal
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  priorityClassName: system-cluster-critical
  restartPolicy: Never
  serviceAccountName: must-gather-collector
  tolerations:
  - operator: Exists
  volumes:
  - emptyDir: {}
    name: must-gather-output
status: {}


@Prashanth684

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp, given this is also an OpenShift-specific tool?

My thoughts are that we should probably make one or more OpenShift-specific toolgroups eventually

yes. maybe a pkg/toolsets/ocp/must-gather or equivalent.

@swghosh swghosh force-pushed the plan-mg-tool branch 2 times, most recently from c9616ae to f2b231f on October 10, 2025 20:44

@matzew
Member

matzew commented Jan 8, 2026

yes. maybe a pkg/toolsets/ocp/must-gather or equivalent.

I guess we never did this, but the "core" here is also just one file: mustgather.go?

CC @Cali0707 @Prashanth684

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 8, 2026
@matzew
Member

matzew commented Jan 8, 2026

There is a test failing - also it would be nice to do a rebase

@swghosh swghosh changed the title MG-34: Add oc cli like must-gather collection to plan_mustgather tool MG-34: Add oc cli like must-gather collection with ServerPrompt Feb 11, 2026
@swghosh
Member Author

swghosh commented Feb 11, 2026

Have updated the PR to use ServerPrompt instead of ServerTool,
@matzew PTAL

(also closed #69 to avoid confusion)

Thanks in advance!

Comment on lines +223 to +244
clusterRoleBindingName := fmt.Sprintf("%s-must-gather-collector", namespace)
clusterRoleBinding := &rbacv1.ClusterRoleBinding{
	TypeMeta: metav1.TypeMeta{
		APIVersion: "rbac.authorization.k8s.io/v1",
		Kind:       "ClusterRoleBinding",
	},
	ObjectMeta: metav1.ObjectMeta{
		Name: clusterRoleBindingName,
	},
	RoleRef: rbacv1.RoleRef{
		APIGroup: "rbac.authorization.k8s.io",
		Kind:     "ClusterRole",
		Name:     "cluster-admin",
	},
	Subjects: []rbacv1.Subject{
		{
			Kind:      "ServiceAccount",
			Name:      serviceAccountName,
			Namespace: namespace,
		},
	},
}


I get that this is normal for must gather, but binding to cluster admin on a command that is AI-triggerable is a little scary 😅

let's make sure we validate exactly what is being run by the agent when this rolebinding is present, otherwise this seems like a huge security vuln

cc @matzew @manusa @mrunalp

Member Author


@Prashanth684 too, any thoughts?

The way this works right now is as a prompt, and the user still has to approve the resources_create_or_update tool call before proceeding with adding the RBAC binding.

Member Author

@swghosh swghosh Feb 11, 2026


@Cali0707 an alternative approach is to use fine-grained RBAC that grants only read ("get"-style) access, which is more suitable for must-gather (since it's essentially just a collection tool), though we do require nodes/exec to run some performance analysis (which is part of the default must-gather collection). An eg. role for reference.

The downside of the granular RBAC is that users cannot trigger custom must-gather images, such as a must-gather image specifically targeted at an operator (implemented in the all_images flow), because that would require additional privileges.
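
As a reference point for that discussion, here is a hedged sketch of what such a read-only ClusterRole might look like. The role name, the wildcard read rule, and the assumption that pods/exec covers the performance-analysis gathers are all illustrative, not the PR's actual manifest:

```yaml
# Hypothetical read-mostly ClusterRole for a restricted must-gather.
# Name and rules are illustrative; the real resource list would need review.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: must-gather-reader
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]  # assumed necessary for node performance-analysis gathers
```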


My main concern is that the must gather command (which runs in the pod with all these privileges) is currently something that the agent can set. So, the agent could in theory get around any RBAC/resource protections that are in place through this


agree with these concerns. what we can do here:

  • add an allowlist of commands
  • add a registry allowlist (for non-custom images; for custom images, it is up to the user, but we still need to check that the desired image is used)
  • interactive confirmation
  • explicitly show security warnings
  • restrict this functionality to cluster admins

Member Author


Updated to use validation that checks against a well-known allowlist for:

  • gather commands specified by the user
  • Red Hat-approved registries for container images specified by the user

We already raise a warning if the current user is not privileged enough to perform the action through SARC checks.
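
A minimal sketch of what this allowlist validation might look like. The allowlist contents and the isAllowedImage helper are assumptions for illustration; only the isAllowedGatherCommand name comes from the diff under review:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical allowlists; the actual entries live in the PR's
// validation code and may differ.
var allowedGatherCommands = map[string]bool{
	"/usr/bin/gather":            true,
	"/usr/bin/gather_audit_logs": true,
}

var allowedRegistries = []string{
	"registry.redhat.io/",
	"quay.io/openshift/",
}

// isAllowedGatherCommand reports whether the user-supplied gather
// command exactly matches a well-known must-gather entrypoint.
func isAllowedGatherCommand(cmd string) bool {
	return allowedGatherCommands[cmd]
}

// isAllowedImage reports whether the container image reference is
// hosted on an approved registry (prefix match).
func isAllowedImage(image string) bool {
	for _, prefix := range allowedRegistries {
		if strings.HasPrefix(image, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isAllowedGatherCommand("/usr/bin/gather")) // true
	fmt.Println(isAllowedGatherCommand("/bin/sh"))         // false
	fmt.Println(isAllowedImage("registry.redhat.io/openshift4/ose-must-gather:latest")) // true
}
```

Exact string matching (rather than substring checks) matters here so that an agent cannot smuggle extra shell syntax into an "allowed" command.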

gatherCmd[0] = defaultGatherCmd
}

if !isAllowedGatherCommand(gatherCmd[0]) {


Is there a way for the user to override this if needed? Maybe something through elicitation? We'd have the server ask whether the user wants to use this image and is willing to accept the risk.


Elicitation support is something I am looking into upstream, but it is not there yet...

Do you think we should wait for that in this PR, or add code using that as a follow up?


Yes, no need to block the PR on that - we can follow up


@Cali0707 Cali0707 left a comment


/lgtm

/cc @matzew @manusa

@openshift-ci openshift-ci bot requested a review from manusa February 23, 2026 19:28
@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Feb 23, 2026
Signed-off-by: Swarup Ghosh <swghosh@redhat.com>
Signed-off-by: Swarup Ghosh <swghosh@redhat.com>
Signed-off-by: Swarup Ghosh <swghosh@redhat.com>
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Feb 24, 2026
@openshift-ci

openshift-ci bot commented Feb 24, 2026

@swghosh: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@matzew

matzew commented Feb 24, 2026

/override "Konflux kflux-prd-rh02 / openshift-mcp-server-on-pull-request"

@openshift-ci

openshift-ci bot commented Feb 24, 2026

@matzew: Overrode contexts on behalf of matzew: Konflux kflux-prd-rh02 / openshift-mcp-server-on-pull-request


In response to this:

/override "Konflux kflux-prd-rh02 / openshift-mcp-server-on-pull-request"



@matzew matzew left a comment


/lgtm
/approve

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 24, 2026
@openshift-ci

openshift-ci bot commented Feb 24, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Cali0707, matzew, swghosh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot

openshift-ci-robot commented Feb 24, 2026

@swghosh: This pull request references MG-34 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.


In response to this:

plan_mustgather dynamic prompt for collecting must-gather(s) from an OpenShift cluster

  • generates a pod spec that can either be applied by the user manually or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too
[MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector):

Input (inferred defaults):

{
 "gather_command": "/usr/bin/gather",
 "source_dir": "/must-gather"
}

Output:

The generated plan contains YAML manifests for must-gather pods and required resources (namespace, serviceaccount, clusterrolebinding). Suggest how the user can apply the manifest and copy results locally (oc cp / kubectl cp).

Ask the user if they want to apply the plan

  • use the resource_create_or_update tool to apply the manifest
  • alternatively, advise the user to execute oc apply / kubectl apply instead.

Once the must-gather collection is completed, the user may wish to clean up the created resources.

  • use the resources_delete tool to delete the namespace and the clusterrolebinding
  • or, execute cleanup using kubectl delete.
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-must-gather-tn7jzk
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-must-gather-tn7jzk-must-gather-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: v1
kind: Pod
metadata:
  generateName: must-gather-
  namespace: openshift-must-gather-tn7jzk
spec:
  containers:
  - command:
    - /usr/bin/gather
    image: registry.redhat.io/openshift4/ose-must-gather:latest
    imagePullPolicy: IfNotPresent
    name: gather
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  - command:
    - /bin/bash
    - -c
    - sleep infinity
    image: registry.redhat.io/ubi9/ubi-minimal
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  priorityClassName: system-cluster-critical
  restartPolicy: Never
  serviceAccountName: must-gather-collector
  tolerations:
  - operator: Exists
  volumes:
  - emptyDir: {}
    name: must-gather-output
status: {}
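The namespace name in the manifests above (`openshift-must-gather-tn7jzk`) carries a random six-character suffix. A minimal sketch of how such a name could be generated; the prefix, suffix length, and character set are assumptions for illustration, not necessarily what the PR's code does:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Lowercase alphanumerics keep the result DNS-1123 compatible,
// as required for Kubernetes namespace names.
const suffixChars = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomSuffix returns n random characters drawn from suffixChars.
func randomSuffix(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = suffixChars[rand.Intn(len(suffixChars))]
	}
	return string(b)
}

func main() {
	// Produces a name like "openshift-must-gather-tn7jzk".
	fmt.Println("openshift-must-gather-" + randomSuffix(6))
}
```

A random suffix lets repeated must-gather runs coexist without namespace collisions, and makes cleanup unambiguous since each run owns its own namespace.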

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-merge-bot openshift-merge-bot bot merged commit c8655b7 into openshift:main Feb 24, 2026
9 of 10 checks passed

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants