Allow Graceful Deletion of VMs from kubectl
#72
Conversation
Please inline the reference to kubernetes/kubernetes#56567: we already tried Alternative 1 some 8 years ago.
On Sun, Jul 6, 2025 at 5:39 PM, dsionov commented on this pull request, in veps/sig-api/kubectl_vm_graceful_deletion.md (#72 (comment)):
+
+- **Custom logic**: Requires KubeVirt-specific webhook to handle a standard flag.
+
+## API Usage Example
+
+```bash
+kubectl delete vm myvm --grace-period=5
+```
+
+This command deletes the VM `myvm` with a 5-second grace period, overriding any longer default period.
+
+## Alternatives
+
+### Option 1: Enhance Kubernetes Core
+
+Modify Kubernetes so that the `--grace-period` flag from `kubectl delete` is propagated to a resource’s `metadata.terminationGracePeriodSeconds`, even for custom resources like `VirtualMachine`.
there was a big discussion about this topic 7 years ago:
kubernetes/kubernetes#60744 - this PR tried to do exactly what you are suggesting
kubernetes/kubernetes#56567 - general discussion about the fact that the grace-period flag is ignored for CRs
/cc @iholder101 @enp0s3
done
enp0s3 left a comment:
@dasionov Thank you! I have a few questions below
enp0s3 left a comment:
@dasionov I went over the KEP and my take is that I prefer the alternative approach, for a couple of reasons:
- I think that making API calls from a validating webhook context is bad practice; it can introduce latency to the webhook.
- The webhook modifies the VMI spec, which is also an anti-pattern: the spec is something that should be modified by the user. Here the spec modification is derived directly from a user-provided value in a command such as the `kubectl delete vm myvm --grace-period=5` example shown above.
Add support for the `--grace-period` flag when deleting VMs via `kubectl`, allowing graceful shutdown before termination. Improves consistency with Kubernetes behavior and user expectations. Signed-off-by: Daniel Sionov <[email protected]>
## Scalability

No scalability concerns are anticipated, as the webhook operates on individual `DELETE` operations.
I think scalability is something to consider here.
If we're going to use a webhook, it means all load is on virt-api (which has scalability issues of its own...). Think about mass-deletion of thousands of VMs at once. This will definitely increase the load on virt-api.
I'm not saying it's an issue, but definitely something to explore.
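To make the mass-deletion scenario concrete, a bulk deletion can be as simple as the sketch below (the namespace and label selector are hypothetical); with the webhook-based design, every matching VM produces its own `DELETE` request that virt-api has to admit:

```bash
# Hypothetical bulk deletion: each matching VM results in a separate DELETE request,
# and each of those requests passes through the validating webhook served by virt-api.
kubectl delete vm -n batch-workloads -l app=load-test --grace-period=5
```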
I agree with you here; I am also not a fan of this approach. I am working with the k8s maintainers to try to reach an agreement and resolve this issue in k8s in some way: kubernetes/kubernetes#132913.
Given the outcome of that PR, the design can be adjusted to:
- the user issues the delete API call with the flag value provided
- virt-controller takes the flag value from the metadata into account and patches `Spec.TerminationGracePeriodSeconds`; since kubevirt#15170 (virt-handler: ensure grace period metadata sync before shutdown) the grace period will be synced, updated, and propagated properly to the launcher
No virt-api overload, and everyone is happy: virt-controller will issue the patch instead. What do we think about that?
I will add this to the design once I see it is a valid approach.
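A rough sketch of that adjusted flow, expressed as equivalent manual `kubectl` calls against the `myvm` example from earlier (in practice virt-controller would issue the patch through the API rather than `kubectl`, and the exact metadata field carrying the flag value depends on the outcome of the upstream discussion):

```bash
# Step 1: the user requests deletion with an explicit grace period.
# --wait=false returns control to the shell while the graceful shutdown proceeds.
kubectl delete vm myvm --grace-period=5 --wait=false

# Step 2 (normally performed by virt-controller, shown here by hand): propagate the
# requested value into the VMI spec so virt-handler and virt-launcher honor it.
kubectl patch vmi myvm --type=merge \
  -p '{"spec":{"terminationGracePeriodSeconds":5}}'
```

The point of this variant is that the patch originates from virt-controller's reconcile loop rather than from an admission webhook, so bulk deletions add no extra load on virt-api.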
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
VEP Metadata
Tracking issue: #71
SIG label: /sig api
What this PR does
Add support for the `--grace-period` flag when deleting VMs via `kubectl`, allowing graceful shutdown before termination. Improves consistency with Kubernetes behavior and user expectations.
Special notes for your reviewer