manual upgrades: clarify that a drain is not required #381
base: main
Conversation
In Kubernetes, pods are independent of containerd/kubelet, and thus restarting k3s without draining first is a safe thing to do. This is not immediately clear to folks new to hosting their own cluster, so add a reassuring note.

Signed-off-by: Kyle Fazzari <[email protected]>
thanks for the contribution! looks like this needs a rebase.
:::note
It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
"It is generally safe to do this" , do we need to highlight what is 'this'
how about this?
@@ -42,6 +42,10 @@ Running the install script will:
2. Update the systemd unit or openrc init script to reflect the args passed to the install script
3. Restart the k3s service

:::note
This script does not drain the node before restarting k3s. This is generally safe in Kubernetes (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
Suggested change:
- This script does not drain the node before restarting k3s. This is generally safe in Kubernetes (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
+ Containers for Pods continue running even when K3s is stopped. The install script does not drain or cordon the node before restarting k3s. If your workload is sensitive to brief API server outages, you should manually [drain and cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/) the node using `kubectl` before re-running the install script to upgrade k3s or modify the configuration, and uncordon it afterwards.
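For anyone skimming this thread, a rough sketch of the cautious path that wording describes, assuming a script-based install; the node name and pinned version below are placeholders:

```bash
# Cordon and drain the node first (drain implies cordon); the flags allow the
# drain to proceed past DaemonSet-managed pods and pods using emptyDir volumes.
kubectl drain my-node --ignore-daemonsets --delete-emptydir-data

# Re-run the install script to upgrade k3s or change its configuration.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.31.4+k3s1" sh -

# Let pods schedule onto the node again.
kubectl uncordon my-node
```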
:::note
It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
Suggested change:
- It is generally safe to do this in Kubernetes without needing to drain the node (pods continue running and networking stays configured the same way it was), but you might consider draining first if you have pods that can't tolerate a short API server outage.
+ Containers for Pods continue running even when K3s is stopped. It is generally safe to restart K3s without draining pods and cordoning the node. If your workload is sensitive to brief API server outages, you should manually [drain and cordon](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_drain/) the node using `kubectl` before restarting K3s, and uncordon it afterwards.
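And the "generally safe" path the note describes is just a plain service restart, e.g. on a systemd, script-installed setup:

```bash
# Restart K3s in place without draining; running pods keep running.
sudo systemctl restart k3s          # server nodes
# sudo systemctl restart k3s-agent  # agent nodes use the k3s-agent unit
```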
@brandond on second thought, should we go this route? kubernetes/kubernetes#129385 (comment)
I still don't think it's necessary to always recommend cordon+drain. As that comment says:
This suggests that this should not occur during most minor-version upgrades, and will not occur across patch-version upgrades. The 1.30 -> 1.31 transition was unique.
Makes sense. Thanks