Take and restore snapshots

Snapshots aren’t instantaneous. They take time to complete and do not represent perfect point-in-time views of the cluster. While a snapshot is in progress, you can still index documents and send other requests to the cluster, but new documents and updates to existing documents generally aren’t included in the snapshot. The snapshot includes primary shards as they existed when OpenSearch initiated the snapshot. Depending on the size of your snapshot thread pool, different shards might be included in the snapshot at slightly different times.

OpenSearch snapshots are incremental, meaning that they only store data that has changed since the last successful snapshot. The difference in disk usage between frequent and infrequent snapshots is often minimal.

In other words, taking hourly snapshots for a week (for a total of 168 snapshots) might not use much more disk space than taking a single snapshot at the end of the week. Also, the more frequently you take snapshots, the less time they take to complete. Some OpenSearch users take snapshots as often as every 30 minutes.

If you need to delete a snapshot, be sure to use the OpenSearch API rather than navigating to the storage location and purging files. Incremental snapshots from a cluster often share a lot of the same data; when you use the API, OpenSearch only removes data that no other snapshot is using.
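
For example, the following request deletes a single snapshot through the API (the repository and snapshot names are placeholders):

  DELETE /_snapshot/my-repository/my-old-snapshot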


Register repository

Before you can take a snapshot, you have to “register” a snapshot repository. A snapshot repository is just a storage location: a shared file system, Amazon Simple Storage Service (Amazon S3), Hadoop Distributed File System (HDFS), or Azure Storage.

Shared file system

  1. To use a shared file system as a snapshot repository, add it to opensearch.yml:

    path.repo: ["/mnt/snapshots"]

    On the RPM and Debian installs, you can then mount the file system. If you’re using the Docker install, add the file system to each node in docker-compose.yml before starting the cluster:

    volumes:
      - /Users/jdoe/snapshots:/mnt/snapshots
  2. Then register the repository using the REST API:

    PUT /_snapshot/my-fs-repository
    {
      "type": "fs",
      "settings": {
        "location": "/mnt/snapshots"
      }
    }


You will most likely not need to specify any parameters except for location. For allowed request parameters, see Register or update snapshot repository API.
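
If you do want to tune the repository, the optional parameters go in the same settings object. The following is a sketch that assumes the compress and chunk_size repository settings are available in your version:

  PUT /_snapshot/my-fs-repository
  {
    "type": "fs",
    "settings": {
      "location": "/mnt/snapshots",
      "compress": true,
      "chunk_size": "1gb"
    }
  }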

Amazon S3

  1. To use an Amazon S3 bucket as a snapshot repository, install the repository-s3 plugin on all nodes:

    sudo ./bin/opensearch-plugin install repository-s3

    If you’re using the Docker installation, see Working with plugins. Your Dockerfile should look something like this:

    FROM opensearchproject/opensearch:2.18.0
    ENV AWS_ACCESS_KEY_ID <access-key>
    ENV AWS_SECRET_ACCESS_KEY <secret-key>
    # Optional
    ENV AWS_SESSION_TOKEN <optional-session-token>
    RUN /usr/share/opensearch/bin/opensearch-plugin install --batch repository-s3
    RUN /usr/share/opensearch/bin/opensearch-keystore create
    RUN echo $AWS_ACCESS_KEY_ID | /usr/share/opensearch/bin/opensearch-keystore add --stdin s3.client.default.access_key
    RUN echo $AWS_SECRET_ACCESS_KEY | /usr/share/opensearch/bin/opensearch-keystore add --stdin s3.client.default.secret_key
    # Optional
    RUN echo $AWS_SESSION_TOKEN | /usr/share/opensearch/bin/opensearch-keystore add --stdin s3.client.default.session_token

    After the Docker cluster starts, skip to step 7.

    If you’re using an AWS IAM instance profile to allow OpenSearch nodes on Amazon EC2 instances to inherit roles that grant access to Amazon S3 buckets, skip to step 8.

  2. Add your AWS access and secret keys to the OpenSearch keystore:

    sudo ./bin/opensearch-keystore add s3.client.default.access_key
    sudo ./bin/opensearch-keystore add s3.client.default.secret_key
  3. (Optional) If you’re using a custom S3 endpoint (for example, MinIO), disable the Amazon EC2 metadata connection:

    export AWS_EC2_METADATA_DISABLED=true

    If you’re installing OpenSearch using Helm, update the following settings in your values file:

    extraEnvs:
      - name: AWS_EC2_METADATA_DISABLED
        value: "true"
  4. (Optional) If you’re using temporary credentials, add your session token:

    sudo ./bin/opensearch-keystore add s3.client.default.session_token
  5. (Optional) If you connect to the internet through a proxy, add those credentials:

    sudo ./bin/opensearch-keystore add s3.client.default.proxy.username
    sudo ./bin/opensearch-keystore add s3.client.default.proxy.password
  6. (Optional) Add other settings to opensearch.yml:

    s3.client.default.endpoint: s3.amazonaws.com # S3 has alternate endpoints, but you probably don't need to change this value.
    s3.client.default.max_retries: 3 # number of retries if a request fails
    s3.client.default.path_style_access: false # whether to use the deprecated path-style bucket URLs.
    # You probably don't need to change this value, but for more information, see https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#path-style-access.
    s3.client.default.protocol: https # http or https
    s3.client.default.proxy.host: my-proxy-host # the hostname for your proxy server
    s3.client.default.proxy.port: 8080 # port for your proxy server
    s3.client.default.read_timeout: 50s # the S3 connection timeout
    s3.client.default.use_throttle_retries: true # whether the client should wait a progressively longer amount of time (exponential backoff) between each successive retry
    s3.client.default.region: us-east-2 # AWS region to use. For non-AWS S3 storage, this value is required but has no effect.
  7. (Optional) If you don’t want to use AWS access and secret keys, you could configure the S3 plugin to use AWS Identity and Access Management (IAM) roles for service accounts:

    sudo ./bin/opensearch-keystore add s3.client.default.role_arn
    sudo ./bin/opensearch-keystore add s3.client.default.role_session_name

    If you don’t want to configure AWS access and secret keys, you can instead copy the web identity token file to a location accessible to the repository-s3 plugin and reference it in the following opensearch.yml setting:

    s3.client.default.identity_token_file: /usr/share/opensearch/plugins/repository-s3/token

    If copying is not an option, you can create a symlink to the web identity token file in the ${OPENSEARCH_PATH_CONFIG} folder:

    ln -s $AWS_WEB_IDENTITY_TOKEN_FILE "${OPENSEARCH_PATH_CONFIG}/aws-web-identity-token-file"

    You can reference the web identity token file in the following opensearch.yml setting by specifying the relative path that is resolved against ${OPENSEARCH_PATH_CONFIG}:

    s3.client.default.identity_token_file: aws-web-identity-token-file

    IAM roles require at least one of the above settings. Other settings will be taken from environment variables (if available): AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE, AWS_ROLE_SESSION_NAME.

  8. If you changed opensearch.yml, you must restart each node in the cluster. Otherwise, you only need to reload secure cluster settings:

    POST /_nodes/reload_secure_settings


  9. Create an S3 bucket if you don’t already have one. To take snapshots, you need permissions to access the bucket. The following IAM policy is an example of those permissions:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Action": [
          "s3:*"
        ],
        "Effect": "Allow",
        "Resource": [
          "arn:aws:s3:::your-bucket",
          "arn:aws:s3:::your-bucket/*"
        ]
      }]
    }
  10. Register the repository using the REST API:

    PUT /_snapshot/my-s3-repository
    {
      "type": "s3",
      "settings": {
        "bucket": "my-s3-bucket",
        "base_path": "my/snapshot/directory"
      }
    }


You will most likely not need to specify any parameters except for bucket and base_path. For allowed request parameters, see Register or update snapshot repository API.
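
As with the file system repository, optional parameters go in the same settings object. The following sketch assumes the server_side_encryption and compress repository settings are supported by your version of the repository-s3 plugin:

  PUT /_snapshot/my-s3-repository
  {
    "type": "s3",
    "settings": {
      "bucket": "my-s3-bucket",
      "base_path": "my/snapshot/directory",
      "server_side_encryption": true,
      "compress": true
    }
  }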

Registering a Microsoft Azure storage account using Helm

Use the following steps to register a snapshot repository backed by an Azure storage account for an OpenSearch cluster deployed using Helm.

  1. Create an Azure storage account. Then create a container within the storage account. For more information, see Introduction to Azure Storage.

  2. Create an OpenSearch keystore file using a bash script. To create the bash script, copy the contents of the following example into a file named create-keystore.sh:

    #!/bin/bash
    /usr/share/opensearch/bin/opensearch-keystore create
    echo $AZURE_SNAPSHOT_STORAGE_ACCOUNT | /usr/share/opensearch/bin/opensearch-keystore add --stdin azure.client.default.account
    echo $AZURE_SNAPSHOT_STORAGE_ACCOUNT_KEY | /usr/share/opensearch/bin/opensearch-keystore add --stdin azure.client.default.key
    cp /usr/share/opensearch/config/opensearch.keystore /tmp/keystore/opensearch.keystore
  3. Create a Dockerfile. This file contains the details of your keystore, the OpenSearch instance, and the Azure repository plugin. To create the file, copy the following example and save it as Dockerfile:

    FROM opensearchproject/opensearch:2.18.0
    RUN /usr/share/opensearch/bin/opensearch-plugin install --batch repository-azure
    COPY --chmod=0775 create-keystore.sh create-keystore.sh
  4. Use the following docker build command to build an OpenSearch image from your Dockerfile:

    docker build -t opensearch-custom:2.18.0 -f Dockerfile .
  5. Create a Kubernetes secret containing the Azure storage account key by applying the following manifest:

    apiVersion: v1
    kind: Secret
    metadata:
      name: opensearch
    data:
      azure-snapshot-storage-account-key: ### Insert base64 encoded key
  6. Deploy OpenSearch using Helm with the following additional values. Specify the value of the storage account in the AZURE_SNAPSHOT_STORAGE_ACCOUNT environment variable:

    extraInitContainers:
      - name: keystore-generator
        image: opensearch-custom:2.18.0
        command: ["/bin/bash", "-c"]
        args: ["bash create-keystore.sh"]
        env:
          - name: AZURE_SNAPSHOT_STORAGE_ACCOUNT
            value: ### Insert storage account name
          - name: AZURE_SNAPSHOT_STORAGE_ACCOUNT_KEY
            valueFrom:
              secretKeyRef:
                name: opensearch
                key: azure-snapshot-storage-account-key
        volumeMounts:
          - name: keystore
            mountPath: /tmp/keystore
    extraVolumeMounts:
      - name: keystore
        mountPath: /usr/share/opensearch/config/opensearch.keystore
        subPath: opensearch.keystore
    extraVolumes:
      - name: keystore
        emptyDir: {}
    image:
      repository: "opensearch-custom"
      tag: 2.18.0
  7. Register the repository using the Snapshot API. Replace snapshot_container with the name you specified in step 1, as shown in the following command:

    PUT /_snapshot/my-azure-snapshot
    {
      "type": "azure",
      "settings": {
        "client": "default",
        "container": "snapshot_container"
      }
    }

Set up Microsoft Azure Blob Storage

To use Azure Blob Storage as a snapshot repository, follow these steps:

  1. Install the repository-azure plugin on all nodes with the following command:

    ./bin/opensearch-plugin install repository-azure
  2. After the repository-azure plugin is installed, define your Azure Blob Storage settings before initializing the node. Start by defining your Azure Storage account name using the following secure setting:

    ./bin/opensearch-keystore add azure.client.default.account

Choose one of the following options for setting up your Azure Blob Storage authentication credentials.

Using an Azure Storage account key

Use the following setting to specify your Azure Storage account key:

  ./bin/opensearch-keystore add azure.client.default.key

Shared access signature

Use the following setting when accessing Azure with a shared access signature (SAS):

  ./bin/opensearch-keystore add azure.client.default.sas_token

Azure token credential

Starting in OpenSearch 2.15, you have the option to configure a token credential authentication flow in opensearch.yml. This method is distinct from connection string authentication, which requires a SAS or an account key.

If you choose to use token credential authentication, you will need to choose a token credential type. Although Azure offers multiple token credential types, as of OpenSearch version 2.15, only managed identity is supported.

To use managed identity, add your token credential type to opensearch.yml using either the managed or managed_identity value. This indicates that managed identity is being used to perform token credential authentication:

  azure.client.default.token_credential_type: "managed_identity"

Note the following when using Azure token credentials:

  • Token credential support is disabled in opensearch.yml by default.
  • A token credential takes precedence over an Azure Storage account key or a SAS when multiple options are configured.
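
After the client settings are in place (and you have reloaded secure settings or restarted the nodes), register the repository the same way as in the Helm example above. The container name below is a placeholder:

  PUT /_snapshot/my-azure-repository
  {
    "type": "azure",
    "settings": {
      "client": "default",
      "container": "my-snapshot-container"
    }
  }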

Take snapshots

You specify two pieces of information when you create a snapshot:

  • Name of your snapshot repository
  • Name for the snapshot

The following snapshot includes all indexes and the cluster state:

  PUT /_snapshot/my-repository/1


You can also add a request body to include or exclude certain indexes or specify other settings:

  PUT /_snapshot/my-repository/2
  {
    "indices": "opensearch-dashboards*,my-index*,-my-index-2016",
    "ignore_unavailable": true,
    "include_global_state": false,
    "partial": false
  }


Request fields:

  • indices: The indexes you want to include in the snapshot. You can use , to create a list of indexes, * to specify an index pattern, and - to exclude certain indexes. Don’t put spaces between items. Default is all indexes.
  • ignore_unavailable: If an index from the indices list doesn’t exist, whether to ignore it rather than fail the snapshot. Default is false.
  • include_global_state: Whether to include cluster state in the snapshot. Default is true.
  • partial: Whether to allow partial snapshots. Default is false, which fails the entire snapshot if one or more shards fails to store.

If you request the snapshot immediately after taking it, you might see something like this:

  GET /_snapshot/my-repository/2
  {
    "snapshots": [{
      "snapshot": "2",
      "version": "6.5.4",
      "indices": [
        "opensearch_dashboards_sample_data_ecommerce",
        "my-index",
        "opensearch_dashboards_sample_data_logs",
        "opensearch_dashboards_sample_data_flights"
      ],
      "include_global_state": true,
      "state": "IN_PROGRESS",
      ...
    }]
  }


Note that the snapshot is still in progress. If you want to wait for the snapshot to finish before continuing, add the wait_for_completion parameter to your request. Snapshots can take a while to complete, so consider whether or not this option fits your use case:

  PUT _snapshot/my-repository/3?wait_for_completion=true


Snapshots have the following states:

  • SUCCESS: The snapshot successfully stored all shards.
  • IN_PROGRESS: The snapshot is currently running.
  • PARTIAL: At least one shard failed to store successfully. Can only occur if you set partial to true when taking the snapshot.
  • FAILED: The snapshot encountered an error and stored no data.
  • INCOMPATIBLE: The snapshot is incompatible with the version of OpenSearch running on this cluster. See Conflicts and compatibility.

You can’t take a snapshot if one is currently in progress. To check the status:

  GET /_snapshot/_status


Restore snapshots

The first step in restoring a snapshot is retrieving existing snapshots. To see all snapshot repositories:

  GET /_snapshot/_all


To see all snapshots in a repository:

  GET /_snapshot/my-repository/_all


Then restore a snapshot:

  POST /_snapshot/my-repository/2/_restore


Just like when taking a snapshot, you can add a request body to include or exclude certain indexes or specify some other settings:

  POST /_snapshot/my-repository/2/_restore
  {
    "indices": "opensearch-dashboards*,my-index*",
    "ignore_unavailable": true,
    "include_global_state": false,
    "include_aliases": false,
    "partial": false,
    "rename_pattern": "opensearch-dashboards(.+)",
    "rename_replacement": "restored-opensearch-dashboards$1",
    "index_settings": {
      "index.blocks.read_only": false
    },
    "ignore_index_settings": [
      "index.refresh_interval"
    ]
  }


Request parameters:

  • indices: The indexes you want to restore. You can use , to create a list of indexes, * to specify an index pattern, and - to exclude certain indexes. Don’t put spaces between items. Default is all indexes.
  • ignore_unavailable: If an index from the indices list doesn’t exist, whether to ignore it rather than fail the restore operation. Default is false.
  • include_global_state: Whether to restore the cluster state. Default is false.
  • include_aliases: Whether to restore aliases alongside their associated indexes. Default is true.
  • partial: Whether to allow the restoration of partial snapshots. Default is false.
  • rename_pattern: If you want to rename indexes, use this option to specify a regular expression that matches all the indexes you want to restore and rename. Use capture groups (()) to reuse portions of the index name.
  • rename_replacement: If you want to rename indexes, use this option to specify the name replacement pattern. Use $0 to include the entire matching index name or the number of the capture group. For example, $1 would include the content of the first capture group.
  • rename_alias_pattern: If you want to rename aliases, use this option to specify a regular expression that matches all the aliases you want to restore and rename. Use capture groups (()) to reuse portions of the alias name.
  • rename_alias_replacement: If you want to rename aliases, use this option to specify the name replacement pattern. Use $0 to include the entire matching alias name or the number of the capture group. For example, $1 would include the content of the first capture group.
  • index_settings: If you want to change index settings applied during the restore operation, specify them here. You cannot change index.number_of_shards.
  • ignore_index_settings: Rather than explicitly specifying new settings with index_settings, you can ignore certain index settings in the snapshot and use the cluster defaults applied during restore. You cannot ignore index.number_of_shards, index.number_of_replicas, or index.auto_expand_replicas.
  • storage_type: local indicates that all snapshot metadata and index data will be downloaded to local storage. remote_snapshot indicates that snapshot metadata will be downloaded to the cluster, but the remote repository will remain the authoritative store of the index data; data is downloaded and cached as necessary to service queries. At least one node in the cluster must be configured with the search role in order to restore a snapshot using the type remote_snapshot. Defaults to local.
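
For example, a minimal restore request using remote_snapshot storage might look like the following. It assumes the cluster has at least one node with the search role and uses a placeholder index name:

  POST /_snapshot/my-repository/2/_restore
  {
    "indices": "my-index",
    "storage_type": "remote_snapshot"
  }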

Conflicts and compatibility

One way to avoid index naming conflicts when restoring indexes is to use the rename_pattern and rename_replacement options. You can then, if necessary, use the _reindex API to combine the two. However, it may be simpler to delete the indexes that caused the conflict prior to restoring them from a snapshot.
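
As a sketch of that reindexing step, assuming the snapshot’s index was restored as restored-my-index and you want to merge its documents back into the existing my-index:

  POST /_reindex
  {
    "source": {
      "index": "restored-my-index"
    },
    "dest": {
      "index": "my-index"
    }
  }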

Similarly, to avoid alias naming conflicts when restoring indexes with aliases, you can use the rename_alias_pattern and rename_alias_replacement options.

You can use the _close API to close existing indexes prior to restoring from a snapshot, but the index in the snapshot has to have the same number of shards as the existing index.
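
For example, closing an existing index before the restore is a single request (my-index is a placeholder):

  POST /my-index/_close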

We recommend ceasing write requests to a cluster before restoring from a snapshot, which helps avoid scenarios such as:

  1. You delete an index, which also deletes its alias.
  2. A write request to the now-deleted alias creates a new index with the same name as the alias.
  3. The alias from the snapshot fails to restore due to a naming conflict with the new index.

Snapshots are only forward compatible by one major version. Snapshots taken on an earlier OpenSearch version can continue to be restored by the cluster that originally took them, even after that cluster is upgraded. For example, a snapshot taken by OpenSearch 2.11 or earlier can continue to be restored by a 2.11 cluster even after it is upgraded to 2.12.

If you have an old snapshot taken from an earlier major OpenSearch version, you can restore it to an intermediate cluster one major version newer than the snapshot’s version, reindex all indexes, take a new snapshot, and repeat until you arrive at your desired major version. However, you may find it easier to manually index your data in the new cluster.

Security considerations

If you’re using the Security plugin, snapshots have some additional restrictions:

  • To perform snapshot and restore operations, users must have the built-in manage_snapshots role.
  • You can’t restore snapshots that contain a global state or the .opendistro_security index.

If a snapshot contains a global state, you must exclude it when performing the restore. If your snapshot also contains the .opendistro_security index, either exclude it or list all the other indexes you want to include:

  POST /_snapshot/my-repository/3/_restore
  {
    "indices": "-.opendistro_security",
    "include_global_state": false
  }


The .opendistro_security index contains sensitive data, so we recommend excluding it when you take a snapshot. If you do need to restore the index from a snapshot, you must include an admin certificate in the request:

  curl -k --cert ./kirk.pem --key ./kirk-key.pem -XPOST 'https://localhost:9200/_snapshot/my-repository/3/_restore?pretty'


We strongly recommend against restoring .opendistro_security using an admin certificate because doing so can alter the security posture of the entire cluster. See A word of caution for a recommended process to back up and restore your Security plugin configuration.

Index codec considerations

For index codec considerations, see Index codecs.