28 changes: 23 additions & 5 deletions deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md
@@ -12,7 +12,28 @@ products:

A [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) (PDB) allows you to limit the disruption to your application when its pods need to be rescheduled for some reason such as upgrades or routine maintenance work on the Kubernetes nodes.
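For reference, a minimal standalone PDB manifest looks like the following (the name and `app` label are hypothetical, not taken from this PR):

```yaml
# Hypothetical standalone PDB: allows at most one Pod matching the
# selector to be voluntarily disrupted at a time.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: example
```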

ECK manages a default PDB per {{es}} resource. It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.
ECK manages either a single default PDB or multiple PDBs per {{es}} resource, depending on the available license.

## Enterprise licensed customers
**Collaborator** suggested change:
## Enterprise licensed customers
## Enterprise licensed customers
```{applies_to}
deployment:
eck: ga 3.2
```

**Contributor Author** replied:
I've implemented this suggestion, but why do we note ga here?

**Collaborator** replied:
You can view ga as the default - we just want to record the lifecycle state so it's clear what state the feature was in at the indicated version. ga doesn't render in the frontend - only the version # renders.


A separate PDB is created for each type of nodeSet defined in the manifest allowing upgrade or maintenance operations to be more quickly executed. The PDBs allow one {{es}} Pod per nodeSet to simultaneously be taken down as long as the cluster has the health defined in the following table:
**Collaborator** suggested change:
A separate PDB is created for each type of nodeSet defined in the manifest allowing upgrade or maintenance operations to be more quickly executed. The PDBs allow one {{es}} Pod per nodeSet to simultaneously be taken down as long as the cluster has the health defined in the following table:
In {{eck}} clusters licensed with an enterprise license, a separate PDB is created for each type of nodeSet defined in the manifest allowing upgrade or maintenance operations to be more quickly executed. The PDBs allow one {{es}} Pod per nodeSet to simultaneously be taken down as long as the cluster has the health defined in the following table:


| Role | Cluster Health Required | Notes |
|------|------------------------|--------|
| Master | Yellow | |
| Data | Green | All Data roles are grouped together into a single PDB, except for data_frozen. |
| Data Frozen | Yellow | Since the frozen tier is essentially stateless, managing searchable snapshots, additional disruptions are allowed. |
**Collaborator** commented on lines +32 to +33:
you say data_frozen in two ways. would probably optimize for the way the role appears to the user (data_frozen?)

should update the role column to reflect this

**Collaborator** commented on lines +32 to +33:
> Since the frozen tier are essentially stateless, managing searchable snapshots, additional disruptions are allowed.

is this in comparison to the rest of the data nodes?

| Ingest | Yellow | |
| ML | Yellow | |
| Coordinating | Yellow | |
| Transform | Yellow | |
| Remote Cluster Client | Yellow | |

Single-node clusters are not considered highly available and can always be disrupted.
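As an illustration of the per-nodeSet behavior described above, an operator-generated PDB for one nodeSet could look roughly like this. This is a sketch only: the resource name and the exact selector labels are assumptions, determined in practice by the operator.

```yaml
# Hypothetical sketch of a per-nodeSet PDB as ECK might generate it;
# actual names and selectors are chosen by the operator.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: quickstart-es-default-pdb   # hypothetical name
  namespace: default
spec:
  maxUnavailable: 1   # one Pod per nodeSet may be disrupted at a time
  selector:
    matchLabels:
      elasticsearch.k8s.elastic.co/cluster-name: quickstart
      elasticsearch.k8s.elastic.co/statefulset-name: quickstart-es-default
```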

## Non-enterprise licensed customers

It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.
**Collaborator** suggested change:
It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.
In {{eck}} clusters that do not have an enterprise license, one {{es}} Pod can be taken down at a time, as long as the cluster has a health status of `green`. Single-node clusters are not considered highly available and can always be disrupted.


In the {{es}} specification, you can change the default behavior as follows:
**Collaborator** commented:
do the instructions below this line only apply to non-enterprise? if not, you might want to change the headings a little:

## Default behavior
... 
### Enterprise licensed customers
...
### Non-enterprise licensed customers
...
## Override the default behavior
...
## Pod disruption budget ...

**Contributor Author** replied:
I have also applied this suggestion, as the instructions apply to both licensed and non-licensed customers. I'll see how it turns out when it deploys...


@@ -34,7 +55,7 @@ spec:
elasticsearch.k8s.elastic.co/cluster-name: quickstart
```

::::{note}
[`maxUnavailable`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors) cannot be used with an arbitrary label selector, therefore `minAvailable` is used in this example.
::::
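Conversely, to opt out of the default PDB entirely, the ECK documentation allows setting an empty `podDisruptionBudget` in the {{es}} spec. A minimal sketch (the version number and nodeSet are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0   # placeholder version
  nodeSets:
    - name: default
      count: 3
  podDisruptionBudget: {}   # empty value disables the default PDB
```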

@@ -81,6 +102,3 @@ spec:
4. Pod disruption budget applies on all master nodes.
5. Specify pod disruption budget to have 1 hot node available.
6. Pod disruption budget applies on nodes of the same nodeset.


