k3s etcd-snapshot

This page documents the management of etcd snapshots with the k3s etcd-snapshot CLI, the configuration of scheduled etcd snapshots for the k3s server process, and the use of the k3s server --cluster-reset command to reset etcd cluster membership and optionally restore an etcd snapshot.

Creating Snapshots

Snapshots are saved to the path set by the server’s --etcd-snapshot-dir value, which defaults to ${data-dir}/server/db/snapshots. The data-dir value defaults to /var/lib/rancher/k3s and can be changed independently by setting the --data-dir flag.
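
For example, a server could be started with a dedicated snapshot directory (a minimal sketch; the path is hypothetical):

```bash
# Hypothetical example: store snapshots on a dedicated backup volume
k3s server --etcd-snapshot-dir /mnt/backups/k3s-snapshots
```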

Scheduled Snapshots

Scheduled snapshots are enabled by default, at 00:00 and 12:00 system time, with 5 snapshots retained. To configure the snapshot interval or the number of retained snapshots, refer to the Snapshot Configuration Options below.

Scheduled snapshots have a name that starts with etcd-snapshot, followed by the node name and timestamp. The base name can be changed with the --etcd-snapshot-name flag in the server configuration.
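
For example, the schedule, retention count, and base name could be set in the server configuration file (a sketch; the values are arbitrary, and the path is the default K3s config file location):

```yaml
# /etc/rancher/k3s/config.yaml
etcd-snapshot-schedule-cron: "0 */6 * * *"  # take a snapshot every 6 hours
etcd-snapshot-retention: 10                 # keep the 10 most recent snapshots
etcd-snapshot-name: scheduled               # hypothetical base name
```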

On-demand Snapshots

Snapshots can be saved manually by running the k3s etcd-snapshot save command.

On-demand snapshots have a name that starts with on-demand, followed by the node name and timestamp. The base name can be changed with the --name flag when saving the snapshot.
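
For example, to save a snapshot with a custom base name (the name here is arbitrary):

```bash
k3s etcd-snapshot save --name pre-upgrade
```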

Snapshot Configuration Options

These flags can be passed to the k3s server command to reset the etcd cluster, and optionally restore from a snapshot.

| Flag | Description |
| ---- | ----------- |
| --cluster-reset | Forget all peers and become the sole member of a new cluster. This can also be set with the K3S_CLUSTER_RESET environment variable. |
| --cluster-reset-restore-path | Path to the snapshot file to be restored |

These flags are valid for both k3s server and k3s etcd-snapshot; however, when passed to k3s etcd-snapshot, the --etcd- prefix can be omitted to avoid redundancy. Flags can be passed on the command line, or set in the configuration file, which may be easier to use.

| Flag | Description |
| ---- | ----------- |
| --etcd-disable-snapshots | Disable scheduled snapshots |
| --etcd-snapshot-compress | Compress etcd snapshots |
| --etcd-snapshot-dir | Directory to save db snapshots (default: ${data-dir}/db/snapshots) |
| --etcd-snapshot-retention | Number of snapshots to retain (default: 5) |
| --etcd-snapshot-schedule-cron | Snapshot interval time in cron spec, e.g. every 5 hours: 0 */5 * * * (default: 0 */12 * * *) |
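
As an illustration of the prefix rule described above, the same compression option spelled both ways (a sketch):

```bash
# On the server, scheduled snapshots are compressed with the full flag name
k3s server --etcd-snapshot-compress

# With k3s etcd-snapshot, the --etcd- prefix may be omitted
k3s etcd-snapshot save --snapshot-compress
```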

S3 Compatible Object Store Support

K3s supports writing etcd snapshots to and restoring etcd snapshots from S3-compatible object stores. S3 support is available for both on-demand and scheduled snapshots.

| Flag | Description |
| ---- | ----------- |
| --etcd-s3 | Enable backup to S3 |
| --etcd-s3-endpoint | S3 endpoint URL |
| --etcd-s3-endpoint-ca | Custom CA cert to connect to the S3 endpoint |
| --etcd-s3-skip-ssl-verify | Disable S3 SSL certificate validation |
| --etcd-s3-access-key | S3 access key |
| --etcd-s3-secret-key | S3 secret key |
| --etcd-s3-bucket | S3 bucket name |
| --etcd-s3-region | S3 region / bucket location (optional; default: us-east-1) |
| --etcd-s3-folder | S3 folder |
| --etcd-s3-proxy | Proxy server to use when connecting to S3, overriding any proxy-related environment variables |
| --etcd-s3-insecure | Disable S3 over HTTPS |
| --etcd-s3-timeout | S3 timeout (default: 5m0s) |
| --etcd-s3-config-secret | Name of a Secret in the kube-system namespace used to configure S3, if --etcd-s3 is enabled and no other --etcd-s3 options are set |

To perform an on-demand etcd snapshot and save it to S3:

```bash
k3s etcd-snapshot save \
  --s3 \
  --s3-bucket=<S3-BUCKET-NAME> \
  --s3-access-key=<S3-ACCESS-KEY> \
  --s3-secret-key=<S3-SECRET-KEY>
```

To perform an on-demand etcd snapshot restore from S3, first make sure that K3s isn’t running. Then run the following command:

```bash
k3s server \
  --cluster-init \
  --cluster-reset \
  --etcd-s3 \
  --cluster-reset-restore-path=<SNAPSHOT-NAME> \
  --etcd-s3-bucket=<S3-BUCKET-NAME> \
  --etcd-s3-access-key=<S3-ACCESS-KEY> \
  --etcd-s3-secret-key=<S3-SECRET-KEY>
```

S3 Configuration Secret Support

Version Gate

S3 Configuration Secret support is available as of the August 2024 releases: v1.30.4+k3s1, v1.29.8+k3s1, v1.28.13+k3s1

K3s supports reading etcd S3 snapshot configuration from a Kubernetes Secret. This may be preferred to hardcoding credentials in K3s CLI flags or config files for security reasons, or when credentials need to be rotated without restarting K3s. To pass S3 snapshot configuration via a Secret, start K3s with --etcd-s3 and --etcd-s3-config-secret=<SECRET-NAME>. The Secret does not need to exist when K3s is started, but it will be checked each time a snapshot save, list, delete, or prune operation is performed.

The S3 config Secret cannot be used when restoring a snapshot, as the apiserver is not available to provide the secret during a restore. S3 configuration must be passed via the CLI when restoring a snapshot stored on S3.

Note

Pass only the --etcd-s3 and --etcd-s3-config-secret flags to enable the Secret.
If any other S3 configuration flags are set, the Secret will be ignored.

Keys in the Secret correspond to the --etcd-s3-* CLI flags listed above. The etcd-s3-endpoint-ca key accepts a PEM-encoded CA bundle, or the etcd-s3-endpoint-ca-name key may be used to specify the name of a ConfigMap in the kube-system namespace containing one or more PEM-encoded CA bundles.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: k3s-etcd-snapshot-s3-config
  namespace: kube-system
type: etcd.k3s.cattle.io/s3-config-secret
stringData:
  etcd-s3-endpoint: ""
  etcd-s3-endpoint-ca: ""
  etcd-s3-endpoint-ca-name: ""
  etcd-s3-skip-ssl-verify: "false"
  etcd-s3-access-key: "AWS_ACCESS_KEY_ID"
  etcd-s3-secret-key: "AWS_SECRET_ACCESS_KEY"
  etcd-s3-bucket: "bucket"
  etcd-s3-folder: "folder"
  etcd-s3-region: "us-east-1"
  etcd-s3-insecure: "false"
  etcd-s3-timeout: "5m"
  etcd-s3-proxy: ""
```
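
Putting this together, a sketch of enabling the Secret on an existing cluster (the manifest filename is hypothetical; the Secret name matches the example above):

```bash
# Apply the Secret to the cluster (hypothetical filename)
kubectl apply -f k3s-etcd-snapshot-s3-config.yaml

# Start the server with only the two enabling flags
k3s server --etcd-s3 --etcd-s3-config-secret=k3s-etcd-snapshot-s3-config
```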

Managing Snapshots

K3s supports a set of subcommands for working with your etcd snapshots.

| Subcommand | Description |
| ---------- | ----------- |
| delete | Delete given snapshot(s) |
| ls, list, l | List snapshots |
| prune | Remove snapshots that exceed the configured retention count |
| save | Trigger an immediate etcd snapshot |

These commands perform as expected whether the etcd snapshots are stored locally or in an S3-compatible object store.

For additional information on the etcd snapshot subcommands, run k3s etcd-snapshot --help.
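
For example, to list the snapshots visible to the current node:

```bash
k3s etcd-snapshot list
```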

To delete a snapshot from S3:

```bash
k3s etcd-snapshot delete \
  --s3 \
  --s3-bucket=<S3-BUCKET-NAME> \
  --s3-access-key=<S3-ACCESS-KEY> \
  --s3-secret-key=<S3-SECRET-KEY> \
  <SNAPSHOT-NAME>
```
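
For a locally stored snapshot, only the snapshot name is needed (the name below reuses the example snapshot shown later on this page):

```bash
k3s etcd-snapshot delete on-demand-k3s-server-1-1730308816
```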

Prune local snapshots with the default retention policy (5). The prune subcommand takes an additional flag, --snapshot-retention, that allows the default retention policy to be overridden.

```bash
k3s etcd-snapshot prune
```

```bash
k3s etcd-snapshot prune --snapshot-retention 10
```

ETCDSnapshotFile Custom Resources

Version Gate

ETCDSnapshotFiles are available as of the November 2023 releases: v1.28.4+k3s2, v1.27.8+k3s2, v1.26.11+k3s2, v1.25.16+k3s4

Snapshots can be viewed remotely using any Kubernetes client by listing or describing cluster-scoped ETCDSnapshotFile resources. Unlike the k3s etcd-snapshot list command, which only shows snapshots visible to that node, ETCDSnapshotFile resources track all snapshots present on cluster members.

```
root@k3s-server-1:~# kubectl get etcdsnapshotfile
NAME                                             SNAPSHOTNAME                        NODE           LOCATION                                                                             SIZE      CREATIONTIME
local-on-demand-k3s-server-1-1730308816-3e9290   on-demand-k3s-server-1-1730308816   k3s-server-1   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1730308816   2891808   2024-10-30T17:20:16Z
s3-on-demand-k3s-server-1-1730308816-79b15c      on-demand-k3s-server-1-1730308816   s3             s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816                                 2891808   2024-10-30T17:20:16Z
```
```
root@k3s-server-1:~# kubectl describe etcdsnapshotfile s3-on-demand-k3s-server-1-1730308816-79b15c
Name:         s3-on-demand-k3s-server-1-1730308816-79b15c
Namespace:
Labels:       etcd.k3s.cattle.io/snapshot-storage-node=s3
Annotations:  etcd.k3s.cattle.io/snapshot-token-hash: b4b83cda3099
API Version:  k3s.cattle.io/v1
Kind:         ETCDSnapshotFile
Metadata:
  Creation Timestamp:  2024-10-30T17:20:16Z
  Finalizers:
    wrangler.cattle.io/managed-etcd-snapshots-controller
  Generation:        1
  Resource Version:  790
  UID:               bec9a51c-dbbe-4746-922e-a5136bef53fc
Spec:
  Location:   s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816
  Node Name:  s3
  s3:
    Bucket:           etcd
    Endpoint:         s3.example.com
    Prefix:           k3s-test
    Region:           us-east-1
    Skip SSL Verify:  true
  Snapshot Name:      on-demand-k3s-server-1-1730308816
Status:
  Creation Time:  2024-10-30T17:20:16Z
  Ready To Use:   true
  Size:           2891808
Events:
  Type    Reason               Age   From            Message
  ----    ------               ----  ----            -------
  Normal  ETCDSnapshotCreated  113s  k3s-supervisor  Snapshot on-demand-k3s-server-1-1730308816 saved on S3
```

Restoring Snapshots

K3s runs through several steps when restoring a snapshot:

  1. If the snapshot is stored on S3, the file is downloaded into the snapshot directory.
  2. If the snapshot is compressed, it is decompressed.
  3. If present, the current etcd database files are moved to ${data-dir}/server/db/etcd-old-$TIMESTAMP/.
  4. The snapshot’s contents are extracted to disk, and the checksum is verified.
  5. Etcd is started, and all etcd cluster members except the current node are removed from the cluster.
  6. CA certificates and other confidential data are extracted from the datastore and written to disk, for later use.
  7. The restore is complete, and K3s can be restarted and used normally on the server where the restore was performed.
  8. (optional) Agents and control-plane servers can be started normally.
  9. (optional) Etcd servers can be restarted to rejoin to the cluster after removing old database files.

Snapshot Restore Steps

Follow the steps below that match your cluster configuration.

Single Server

  1. Stop the K3s service:

     ```bash
     systemctl stop k3s
     ```
  2. Run k3s server with the --cluster-reset flag, and --cluster-reset-restore-path indicating the path to the snapshot to restore. If the snapshot is stored on S3, provide S3 configuration flags (--etcd-s3, --etcd-s3-bucket, and so on), and give only the filename of the snapshot as the restore path.

    Note

    Using the --cluster-reset flag without specifying a snapshot to restore simply resets the etcd cluster to a single member without restoring a snapshot.

     ```bash
     k3s server \
       --cluster-reset \
       --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
     ```

    Result: K3s restores the snapshot and resets cluster membership, then prints a message indicating that it is ready to be restarted:
    Managed etcd cluster membership has been reset, restart without --cluster-reset flag now.

  3. Start K3s again:

     ```bash
     systemctl start k3s
     ```

Multiple Servers

In this example there are three servers: S1, S2, and S3. The snapshot is located on S1.

  1. Stop K3s on all servers:

     ```bash
     systemctl stop k3s
     ```
  2. On S1, run k3s server with the --cluster-reset option, and --cluster-reset-restore-path indicating the path to the snapshot to restore. If the snapshot is stored on S3, provide S3 configuration flags (--etcd-s3, --etcd-s3-bucket, and so on), and give only the filename of the snapshot as the restore path.

    Note

    Using the --cluster-reset flag without specifying a snapshot to restore simply resets the etcd cluster to a single member without restoring a snapshot.

     ```bash
     k3s server \
       --cluster-reset \
       --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
     ```

    Result: K3s restores the snapshot and resets cluster membership, then prints a message indicating that it is ready to be restarted:
    Managed etcd cluster membership has been reset, restart without --cluster-reset flag now.
    Backup and delete ${datadir}/server/db on each peer etcd server and rejoin the nodes.

  3. On S1, start K3s again:

     ```bash
     systemctl start k3s
     ```
  4. On S2 and S3, delete the data directory, /var/lib/rancher/k3s/server/db/:

     ```bash
     rm -rf /var/lib/rancher/k3s/server/db/
     ```
  5. On S2 and S3, start K3s again to join the restored cluster:

     ```bash
     systemctl start k3s
     ```