The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.
Scheduled snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated in S3 compatible object stores, refer to [S3 configuration options](#s3-compatible-object-store-support)
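For reference, the scheduled-snapshot options can be set together in the server configuration file. The keys below mirror the `k3s server` flags; the values are illustrative, not defaults:

```yaml
# /etc/rancher/k3s/config.yaml — illustrative values, adjust for your environment
etcd-snapshot-schedule-cron: "0 */6 * * *"   # take a snapshot every 6 hours
etcd-snapshot-retention: 10                  # keep the 10 most recent snapshots
etcd-snapshot-dir: /var/lib/rancher/k3s/server/db/snapshots
```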
</TabItem>
<TabItem value="On-demand">
The `--name` flag can only be set when running the `k3s etcd-snapshot save` command. The other two flags can also be set in the `k3s server` [configuration file](../installation/configuration.md#configuration-file).
On-demand snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated in S3 compatible object stores, refer to [S3 configuration options](#s3-compatible-object-store-support)
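As a quick illustration, an on-demand snapshot with a custom name prefix can be taken and then verified; the name `pre-upgrade` here is just an example:

```bash
# Take an on-demand snapshot with a custom name prefix
k3s etcd-snapshot save --name pre-upgrade

# List stored snapshots to confirm it was written
k3s etcd-snapshot list
```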
</TabItem>
</Tabs>
K3s runs through several steps when restoring a snapshot:
8. (optional) Agents and control-plane servers can be started normally.
9. (optional) Etcd servers can be restarted to rejoin the cluster after removing old database files.
When restoring a snapshot, you don't need to use the same K3s version that created it; a higher minor version is also acceptable.
### Snapshot Restore Steps
Select the tab below that matches your cluster configuration.
```bash
systemctl start k3s
```
If an etcd-s3 backup configuration is defined in the K3s config file, the restore will attempt to pull the snapshot file from the configured S3 bucket. In this case, pass only the snapshot filename in the `--cluster-reset-restore-path` argument. To restore from a local snapshot file when an etcd-s3 configuration is present, add the `--etcd-s3=false` argument and pass the full path to the local snapshot file in `--cluster-reset-restore-path`.
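As a sketch, an etcd-s3 backup configuration in the config file might look like the following; the bucket, endpoint, and credential values are placeholders:

```yaml
# /etc/rancher/k3s/config.yaml — placeholder bucket, endpoint, and credentials
etcd-s3: true
etcd-s3-bucket: my-k3s-snapshots
etcd-s3-endpoint: s3.amazonaws.com
etcd-s3-access-key: <ACCESS-KEY>
etcd-s3-secret-key: <SECRET-KEY>
```

With this configuration present, `--cluster-reset-restore-path` takes just the snapshot filename; with `--etcd-s3=false`, it takes the full local path.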
As a safety mechanism, when K3s resets the cluster, it creates an empty file at `/var/lib/rancher/k3s/server/db/reset-flag` that prevents users from accidentally running multiple cluster resets in succession. This file is deleted when K3s starts normally.
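The guard behaves roughly as sketched below. This is an illustration against a temporary path, not K3s's actual implementation:

```bash
# Illustration only: simulate the reset-flag guard against a temporary path
flag="$(mktemp -d)/reset-flag"

touch "$flag"    # k3s creates this flag after a cluster reset
if [ -f "$flag" ]; then
  echo "reset-flag present: refusing to run another cluster reset"
fi
rm -f "$flag"    # k3s deletes the flag when it starts normally
```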
</TabItem>
<TabItem value="Multiple Servers">
In this example there are 3 servers, `S1`, `S2`, and `S3`.
```bash
systemctl start k3s
```
If an etcd-s3 backup configuration is defined in the K3s config file, the restore will attempt to pull the snapshot file from the configured S3 bucket. In this case, pass only the snapshot filename in the `--cluster-reset-restore-path` argument. To restore from a local snapshot file when an etcd-s3 configuration is present, add the `--etcd-s3=false` argument and pass the full path to the local snapshot file in `--cluster-reset-restore-path`.
As a safety mechanism, when K3s resets the cluster, it creates an empty file at `/var/lib/rancher/k3s/server/db/reset-flag` that prevents users from accidentally running multiple cluster resets in succession. This file is deleted when K3s starts normally.
</TabItem>
</Tabs>
#### Restoring To New Hosts
It is possible to restore an etcd snapshot on a different host than the one where it was taken. When doing so, you must pass the [server token](token.md#server) that was in use when the snapshot was taken, as it is used to decrypt the bootstrap data inside the snapshot. The process is the same as above, with step 2 replaced by the following:
1. On the node that took the snapshot, save the value of `/var/lib/rancher/k3s/server/token`. This is `<BACKED-UP-TOKEN-VALUE>` in step 3.
2. Copy the snapshot to the new node. Its path on the new node is `<PATH-TO-SNAPSHOT>` in step 3.
3. Initiate the restore from snapshot on the first server node with the following commands:
```bash
k3s server \
--cluster-reset \
--cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
--token=<BACKED-UP-TOKEN-VALUE>
```
The token value can also be set in the K3s config file.
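For example, the restore flags and the token can be placed in the configuration file instead of on the command line; the placeholders match the steps above:

```yaml
# /etc/rancher/k3s/config.yaml — placeholders as in the steps above
cluster-reset: true
cluster-reset-restore-path: <PATH-TO-SNAPSHOT>
token: <BACKED-UP-TOKEN-VALUE>
```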
:::warning
1. Node resources are also included in the etcd snapshot. If restoring to a new set of nodes, you will need to manually delete any old nodes that are no longer present in the cluster.
2. If there is a token set in the K3s config file, make sure it is the same as the `<BACKED-UP-TOKEN-VALUE>`, otherwise k3s will fail to start.
:::