
manual upgrades: clarify that a drain is not required #381


Open · wants to merge 1 commit into main

Conversation

@kyrofa commented Feb 3, 2025

In Kubernetes, pods are independent of containerd/kubelet, and thus restarting k3s without draining first is a safe thing to do. This is not immediately clear to folks new to hosting their own cluster, so add a reassuring note.

Signed-off-by: Kyle Fazzari <[email protected]>
@brandond (Member)

Thanks for the contribution! Looks like this needs a rebase.


:::note
It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
A Contributor commented:
"It is generally safe to do this" , do we need to highlight what is 'this'

@brandond (Member) left a comment:

How about this?

@@ -42,6 +42,10 @@ Running the install script will:
2. Update the systemd unit or openrc init script to reflect the args passed to the install script
3. Restart the k3s service

:::note
This script does not drain the node before restarting k3s. This is generally safe in Kubernetes (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
A Member left a suggested change:
- This script does not drain the node before restarting k3s. This is generally safe in Kubernetes (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
+ Containers for Pods continue running even when K3s is stopped. The install script does not drain or cordon the node before restarting k3s. If your workload is sensitive to brief API server outages, you should manually [drain and cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/) the node using `kubectl` before re-running the install script to upgrade k3s or modify the configuration, and uncordon it afterwards.


:::note
It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
A Member left a suggested change:
- It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
+ Containers for Pods continue running even when K3s is stopped. It is generally safe to restart K3s without draining pods and cordoning the node. If your workload is sensitive to brief API server outages, you should manually [drain and cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/) the node using `kubectl` before restarting K3s, and uncordon it afterwards.
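For readers following along, here is a minimal sketch of the drain/uncordon workflow both suggested notes describe. The node name `node-1` is hypothetical; note that `kubectl drain` cordons the node automatically before evicting pods, so no separate cordon step is needed.

```bash
# Drain the node: this cordons it (marks it unschedulable) and evicts its pods.
# --ignore-daemonsets is typically required, since DaemonSet pods cannot be evicted.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# Upgrade or reconfigure k3s on the node, e.g. by re-running the install script:
#   curl -sfL https://get.k3s.io | sh -
# or simply restart the service:
systemctl restart k3s

# Allow pods to be scheduled on the node again.
kubectl uncordon node-1
```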

@farazkhawaja (Contributor)

@brandond On second thought, should we go this route? kubernetes/kubernetes#129385 (comment)
Upgrades to 1.31 lead to container restarts; I see upstream k8s maintainers recommend cordon+drain for this situation.

@brandond (Member) commented Aug 11, 2025

I still don't think it's necessary to always recommend cordon+drain. As that comment says:

in most minor upgrades the kubelet remains mostly compatible with running containers, and mostly does not result in restarting running containers

This suggests that container restarts should not occur during most minor-version upgrades, and will not occur across patch-version upgrades. The 1.30 -> 1.31 transition was unique.
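As a practical aside, checking a node's kubelet version before upgrading shows whether a minor version boundary is being crossed (standard kubectl; the version numbers in the comments are illustrative):

```bash
# The VERSION column shows each node's kubelet version.
# e.g. v1.30.6+k3s1 -> v1.31.2+k3s1 crosses a minor version (1.30 -> 1.31),
# while v1.31.1+k3s1 -> v1.31.2+k3s1 is only a patch upgrade.
kubectl get nodes
```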

@farazkhawaja (Contributor)

Makes sense. Thanks.
