# Update eck pdb docs #2361
base: main
Changes from 2 commits
@@ -12,7 +12,28 @@ products:
A [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) (PDB) allows you to limit the disruption to your application when its pods need to be rescheduled for some reason such as upgrades or routine maintenance work on the Kubernetes nodes.
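
For readers new to the resource, a standalone PDB in plain Kubernetes terms looks roughly like the following minimal sketch; the name and labels are illustrative and are not objects that ECK creates:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb              # illustrative name, not an ECK-managed object
spec:
  maxUnavailable: 1              # allow at most one matching Pod to be voluntarily evicted
  selector:
    matchLabels:
      app: example-app           # illustrative label selecting the protected Pods
```
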
-ECK manages a default PDB per {{es}} resource. It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.
+ECK manages either a single default PDB or multiple PDBs per {{es}} resource, depending on the license available.

## Enterprise licensed customers
I've implemented this suggestion, but why do we note …

you can view …

A separate PDB is created for each type of nodeSet defined in the manifest, allowing upgrade or maintenance operations to be executed more quickly. The PDBs allow one {{es}} Pod per nodeSet to be taken down simultaneously, as long as the cluster has the health defined in the following table:

| Role | Cluster Health Required | Notes |
|------|-------------------------|-------|
| Master | Yellow | |
| Data | Green | All data roles are grouped together into a single PDB, except for `data_frozen`. |
| Data Frozen | Yellow | Since the frozen tier is essentially stateless, managing searchable snapshots, additional disruptions are allowed. |
| Ingest | Yellow | |
| ML | Yellow | |
| Coordinating | Yellow | |
| Transform | Yellow | |
| Remote Cluster Client | Yellow | |

Comment on lines +32 to +33: you say … should update the role column to reflect this

Comment on lines +32 to +33: is this in comparison to the rest of the data nodes?

Single-node clusters are not considered highly available and can always be disrupted.
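
To make the per-nodeSet behavior concrete, the following is a hedged sketch of an {{es}} manifest with dedicated master, data, and frozen tiers; the cluster name, version, and node counts are illustrative assumptions rather than values taken from this PR. With an Enterprise license, each of these three nodeSets would get its own PDB, with the health requirement shown in the table above (Yellow for master and frozen, Green for data).

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart                   # illustrative cluster name
spec:
  version: 8.17.0                    # illustrative version
  nodeSets:
  - name: master
    count: 3
    config:
      node.roles: ["master"]         # dedicated master nodes
  - name: data
    count: 3
    config:
      node.roles: ["data"]           # regular data tier
  - name: frozen
    count: 2
    config:
      node.roles: ["data_frozen"]    # frozen tier backed by searchable snapshots
```
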

## Non-enterprise licensed customers

A single default PDB allows one {{es}} Pod to be taken down, as long as the cluster has `green` health. Single-node clusters are not considered highly available and can always be disrupted.

In the {{es}} specification, you can change the default behavior as follows:

do the instructions below this line only apply to non-enterprise? if not, you might want to change the headings a little: …

I have also made this suggestion, as the instructions apply to both licensed and non-licensed customers. I'll see how it turns out when it deploys...

@@ -34,7 +55,7 @@ spec:
elasticsearch.k8s.elastic.co/cluster-name: quickstart
```

::::{note}
[`maxUnavailable`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors) cannot be used with an arbitrary label selector, therefore `minAvailable` is used in this example.
::::
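
To illustrate the note above: the full example is truncated in this diff, so the following is a hedged sketch (not necessarily the exact manifest in the docs) of overriding the default PDB through the {{es}} specification, using `minAvailable` together with the cluster-name label:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0                  # illustrative version
  nodeSets:
  - name: default
    count: 3
  podDisruptionBudget:
    spec:
      minAvailable: 2              # keep at least two Pods available during voluntary disruptions
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: quickstart
```
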

@@ -81,6 +102,3 @@ spec:
4. Pod disruption budget applies on all master nodes.
5. Specify pod disruption budget to have 1 hot node available.
6. Pod disruption budget applies on nodes of the same nodeset.