
Commit 636f61d

Add extra information about etcd-snapshot (#454)
* Add extra information from RKE2 docs about etcd-snapshot

Signed-off-by: manuelbuil <[email protected]>

1 parent b4efb88 commit 636f61d

File tree

1 file changed

docs/cli/etcd-snapshot.md

Lines changed: 37 additions & 2 deletions
@@ -37,7 +37,7 @@ The following options control the operation of scheduled snapshots:

The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.

-Scheduled snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated in S3 compatible object stores, refer to [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support)
+Scheduled snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to [S3 configuration options](#s3-compatible-object-store-support).

</TabItem>
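
For illustration only, a server that relocates its data directory and tunes the scheduled-snapshot options might be started as sketched below; the paths, cron expression, and retention count are assumptions, not values taken from this page.

```bash
# Sketch: illustrative values only, adjust to your environment.
# --data-dir relocates the default /var/lib/rancher/k3s tree;
# scheduled snapshots are then written under --etcd-snapshot-dir.
k3s server \
    --data-dir /opt/k3s-data \
    --etcd-snapshot-schedule-cron '0 */12 * * *' \
    --etcd-snapshot-retention 5 \
    --etcd-snapshot-dir /opt/k3s-data/server/db/snapshots
```

The same keys, without the leading dashes, can instead go in the server configuration file referenced below.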
<TabItem value="On-demand">
@@ -56,7 +56,7 @@ The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed indepen

The `--name` flag can only be set when running the `k3s etcd-snapshot save` command. The other two can also be set in the `k3s server` [configuration file](../installation/configuration.md#configuration-file).

-On-demand snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated in S3 compatible object stores, refer to [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support)
+On-demand snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to [S3 configuration options](#s3-compatible-object-store-support).

</TabItem>
</Tabs>
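
As a concrete illustration of the on-demand form described above (the snapshot name is a made-up example):

```bash
# Sketch: "pre-upgrade" is an illustrative name; the resulting file is
# written to the directory configured by the server's --etcd-snapshot-dir
# (or its default).
k3s etcd-snapshot save --name pre-upgrade
```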
@@ -180,6 +180,8 @@ K3s runs through several steps when restoring a snapshot:
8. (optional) Agents and control-plane servers can be started normally.
8. (optional) Etcd servers can be restarted to rejoin the cluster after removing old database files.

+When restoring a snapshot, you don't need to use the same K3s version that created it; a newer minor version is also acceptable.
+
### Snapshot Restore Steps

Select the tab below that matches your cluster configuration.
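
As a quick pre-restore sanity check (a sketch, not one of the documented steps below), you can confirm the installed release and the snapshots the node knows about:

```bash
# Show the installed K3s release; per the note above, the restoring binary
# may be the same version or a newer minor release.
k3s --version

# List snapshots recorded on this node (local and, if configured, S3).
# Check `k3s etcd-snapshot --help` on your release for the exact subcommand.
k3s etcd-snapshot ls
```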
@@ -212,6 +214,10 @@ Select the tab below that matches your cluster configuration.
```bash
systemctl start k3s
```
+If an etcd-s3 backup configuration is defined in the K3s config file, the restore will attempt to pull the snapshot file from the configured S3 bucket. In that case, pass only the snapshot filename in the `--cluster-reset-restore-path` argument. To restore from a local snapshot file while an etcd-s3 configuration is present, add `--etcd-s3=false` and pass the full path to the local snapshot file in `--cluster-reset-restore-path`.
+
+As a safety mechanism, when K3s resets the cluster it creates an empty file at `/var/lib/rancher/k3s/server/db/reset-flag` to prevent accidentally running multiple cluster resets in succession. This file is deleted when K3s starts normally.
+
</TabItem>
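
To make the S3-versus-local distinction above concrete, the two restore invocations might look like the sketch below; the snapshot name and path are illustrative assumptions, and the final command simply checks for the reset marker described above.

```bash
# Sketch: etcd-s3 is configured in the K3s config file, so only the
# snapshot filename is passed (the name below is made up).
k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=on-demand-server-1-1699999999

# Sketch: restore a local file even though an etcd-s3 config is present.
k3s server \
    --cluster-reset \
    --etcd-s3=false \
    --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-server-1-1699999999

# The safety marker left behind by a reset; K3s removes it on a normal start.
ls -l /var/lib/rancher/k3s/server/db/reset-flag
```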
<TabItem value="Multiple Servers">

@@ -253,9 +259,38 @@ In this example there are 3 servers, `S1`, `S2`, and `S3`. The snapshot is locat
```bash
systemctl start k3s
```
+
+If an etcd-s3 backup configuration is defined in the K3s config file, the restore will attempt to pull the snapshot file from the configured S3 bucket. In that case, pass only the snapshot filename in the `--cluster-reset-restore-path` argument. To restore from a local snapshot file while an etcd-s3 configuration is present, add `--etcd-s3=false` and pass the full path to the local snapshot file in `--cluster-reset-restore-path`.
+
+As a safety mechanism, when K3s resets the cluster it creates an empty file at `/var/lib/rancher/k3s/server/db/reset-flag` to prevent accidentally running multiple cluster resets in succession. This file is deleted when K3s starts normally.
+
</TabItem>
</Tabs>

+#### Restoring To New Hosts
+
+It is possible to restore an etcd snapshot to a different host than the one it was taken on. When doing so, you must pass the [server token](token.md#server) that was originally used when taking the snapshot, as it is used to decrypt the bootstrap data inside the snapshot. The process is the same as above, except that step 2 changes as follows:
+
+1. On the node that took the snapshot, save the value of `/var/lib/rancher/k3s/server/token`. This is `<BACKED-UP-TOKEN-VALUE>` in step 3.
+
+2. Copy the snapshot to the new node. Its path on that node is `<PATH-TO-SNAPSHOT>` in step 3.
+
+3. Initiate the restore from the snapshot on the first server node with the following command:
+
+```bash
+k3s server \
+    --cluster-reset \
+    --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
+    --token=<BACKED-UP-TOKEN-VALUE>
+```
+
+The token value can also be set in the K3s config file.
+
+:::warning
+1. Node resources are also included in the etcd snapshot. If restoring to a new set of nodes, you will need to manually delete any old nodes that are no longer present in the cluster.
+2. If there is a token set in the K3s config file, make sure it is the same as `<BACKED-UP-TOKEN-VALUE>`; otherwise K3s will fail to start.
+:::
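
If you prefer the config-file route for the token, a minimal sketch is shown below; it assumes the conventional `/etc/rancher/k3s/config.yaml` location, and the placeholders carry over from the steps above.

```bash
# Sketch: keep the backed-up token in the K3s config file instead of
# passing --token on the command line (appends; if you already manage
# this file, add the key there instead).
cat <<'EOF' >> /etc/rancher/k3s/config.yaml
token: "<BACKED-UP-TOKEN-VALUE>"
EOF

# The reset itself can then be run as in step 3, without --token:
k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
```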

## ETCDSnapshotFile Custom Resources