Remote-backed storage

Introduced 2.10

Remote-backed storage offers OpenSearch users a new way to protect against data loss by automatically creating backups of all index transactions and sending them to remote storage. To use this feature, segment replication must also be enabled. See Segment replication for additional information.

With remote-backed storage, when a write request lands on the primary shard, the request is indexed to Lucene on the primary shard only. The corresponding translog is then uploaded to the remote store. OpenSearch does not send the write request to the replicas, but rather performs a primary term validation to confirm that the shard that originated the request is still the primary shard. Primary term validation ensures that the acting primary shard fails if it becomes isolated and is unaware that the cluster manager has elected a new primary.

After segments are created on the primary shard as part of the refresh, flush, and merge flow, the segments are uploaded to the remote segment store, and the replica shards source a copy from that same remote segment store. This frees the primary shard from copying segments to each replica.
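If your OpenSearch version includes the Remote Store Stats API, you can observe this segment and translog upload activity for an index. A minimal sketch, assuming a local cluster, placeholder credentials, and an index named my-index-1:

```shell
# Assumed local endpoint and placeholder credentials; adjust for your cluster.
INDEX="my-index-1"
# Inspect remote segment and translog upload stats for the index's shards.
curl -sk -u admin:admin "https://localhost:9200/_remotestore/stats/$INDEX?pretty"
```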

Configuring remote-backed storage

Remote-backed storage is a cluster-level setting. It can only be enabled when bootstrapping the cluster. After bootstrapping completes, remote-backed storage cannot be enabled or disabled. This provides durability at the cluster level.

Communication with the configured remote store happens through the Repository plugin interface. All the existing implementations of the Repository plugin, such as Azure Blob Storage, Google Cloud Storage, and Amazon Simple Storage Service (Amazon S3), are compatible with remote-backed storage.

Make sure that remote store settings are configured the same way on all nodes in the cluster. If they are not, bootstrapping will fail for any node whose attributes differ from those of the elected cluster manager node.

To enable remote-backed storage for a given cluster, provide the remote store repository details as node attributes in opensearch.yml, as shown in the following example:

  # Repository names
  node.attr.remote_store.segment.repository: my-repo-1
  node.attr.remote_store.translog.repository: my-repo-2
  # Segment repository settings
  node.attr.remote_store.repository.my-repo-1.type: s3
  node.attr.remote_store.repository.my-repo-1.settings.bucket: <Bucket Name 1>
  node.attr.remote_store.repository.my-repo-1.settings.base_path: <Bucket Base Path 1>
  node.attr.remote_store.repository.my-repo-1.settings.region: us-east-1
  # Translog repository settings
  node.attr.remote_store.repository.my-repo-2.type: s3
  node.attr.remote_store.repository.my-repo-2.settings.bucket: <Bucket Name 2>
  node.attr.remote_store.repository.my-repo-2.settings.base_path: <Bucket Base Path 2>
  node.attr.remote_store.repository.my-repo-2.settings.region: us-east-1


Configuring the remote cluster state is required in order for cluster metadata to persist on the remote store. For more information about configuring those settings, see Remote Cluster State.
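As a sketch, the cluster state repository can be declared in opensearch.yml alongside the segment and translog repositories. The attribute names below are assumed to follow the same pattern as the repositories above; verify them against your version's Remote Cluster State documentation:

```yml
# Hypothetical third repository for cluster state; names are illustrative.
node.attr.remote_store.state.repository: my-repo-3
node.attr.remote_store.repository.my-repo-3.type: s3
node.attr.remote_store.repository.my-repo-3.settings.bucket: <Bucket Name 3>
node.attr.remote_store.repository.my-repo-3.settings.base_path: <Bucket Base Path 3>
node.attr.remote_store.repository.my-repo-3.settings.region: us-east-1
# Remote cluster state must also be enabled.
cluster.remote_store.state.enabled: true
```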

You do not have to use three different remote store repositories for segment, translog, and state. All three stores can share the same repository.

During the bootstrapping process, the remote-backed repositories listed in opensearch.yml are automatically registered. After the cluster is created with the remote_store settings, all indexes created in that cluster will start uploading data to the configured remote store.
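You can confirm that the repositories were registered after bootstrapping. A sketch using the CAT Repositories API, assuming a local endpoint and placeholder credentials (the exact output, and whether system repositories are listed, varies by version and configuration):

```shell
# Assumed local endpoint and placeholder credentials; adjust for your cluster.
OS_URL="https://localhost:9200"
# List registered repositories; the remote store repositories registered at
# bootstrap should appear here.
curl -sk -u admin:admin "$OS_URL/_cat/repositories?v"
```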

You can use the following cluster settings to tune how remote-backed clusters handle each workload.

| Field | Data type | Description |
|---|---|---|
| cluster.default.index.refresh_interval | Time unit | Sets the refresh interval when the index.refresh_interval setting is not provided. This setting can be useful when you want to set a default refresh interval across all indexes in a cluster and also support the searchIdle setting. You cannot set the interval lower than the cluster.minimum.index.refresh_interval setting. |
| cluster.minimum.index.refresh_interval | Time unit | Sets the minimum refresh interval and applies it to all indexes in the cluster. The cluster.default.index.refresh_interval setting should be higher than this setting's value. If, during index creation, the index.refresh_interval setting is lower than the minimum, index creation fails. |
| cluster.remote_store.translog.buffer_interval | Time unit | The default value of the translog buffer interval used when performing periodic translog updates. This setting is only effective when the index setting index.remote_store.translog.buffer_interval is not present. |
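These are dynamic cluster settings, so they can be updated through the Cluster Settings API. A sketch with illustrative values; the endpoint, credentials, and chosen intervals are assumptions, not recommendations:

```shell
# Illustrative values: a 10s default refresh, a 5s floor, and a 250ms
# translog buffer interval. Endpoint and credentials are placeholders.
SETTINGS='{
  "persistent": {
    "cluster.default.index.refresh_interval": "10s",
    "cluster.minimum.index.refresh_interval": "5s",
    "cluster.remote_store.translog.buffer_interval": "250ms"
  }
}'
curl -sk -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d "$SETTINGS"
```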

Restoring from a backup

To restore an index from a remote backup, such as in the event of a node failure, use one of the following options:

Restore only unassigned shards

  curl -X POST "https://localhost:9200/_remotestore/_restore" -H 'Content-Type: application/json' -d'
  {
    "indices": ["my-index-1", "my-index-2"]
  }
  '

Restore all shards of a given index

  curl -X POST "https://localhost:9200/_remotestore/_restore?restore_all_shards=true" -ku admin:<custom-admin-password> -H 'Content-Type: application/json' -d'
  {
    "indices": ["my-index"]
  }
  '

If the Security plugin is enabled, a user must have the cluster:admin/remotestore/restore permission. See Access control for information about configuring user permissions.
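You can track restore progress with the CAT Recovery API. A sketch, assuming the same local endpoint, placeholder credentials, and an index named my-index-1:

```shell
# Assumed local endpoint and placeholder credentials; adjust for your cluster.
INDEX="my-index-1"
# Show shard-level recovery progress; each shard reports its recovery stage.
curl -sk -u admin:admin "https://localhost:9200/_cat/recovery/$INDEX?v"
```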

Potential use cases

You can use remote-backed storage to:

  • Restore red clusters or indexes.
  • Recover all data up to the last acknowledged write, regardless of replica count, if index.translog.durability is set to request.
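For the second case, the index must be created with request-level translog durability. A minimal sketch; the index name, shard counts, endpoint, and credentials are illustrative:

```shell
# Create an index whose writes are acknowledged only after the translog
# update is durable. Values below are placeholders.
PAYLOAD='{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 1,
      "translog": { "durability": "request" }
    }
  }
}'
curl -sk -u admin:admin -X PUT "https://localhost:9200/my-index-1" \
  -H 'Content-Type: application/json' -d "$PAYLOAD"
```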

Benchmarks

The OpenSearch Project has run remote store benchmarks using multiple workload options available in the OpenSearch Benchmark tool. This section summarizes the benchmark results for the following workloads:

  • StackOverflow (so)
  • HTTP logs (http_logs)
  • NYC taxis (nyc_taxis)

Each workload was tested against multiple bulk indexing client configurations in order to simulate varying degrees of request concurrency.

Your results may vary based on your cluster topology, hardware, shard count, and merge settings.

Cluster, shard, and test configuration

For these benchmarks, we used the following cluster, shard, and test configuration:

  • Nodes: Three nodes, each using the data, ingest, and cluster manager roles
  • Node instance: Amazon EC2 r6g.xlarge
  • OpenSearch Benchmark host: Single Amazon EC2 m5.2xlarge instance
  • Shard configuration: Three shards with one replica
  • Repository: the repository-s3 plugin installed with the default S3 settings
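A run along these lines can be reproduced with OpenSearch Benchmark. A sketch against an existing cluster; the host, credentials, and client count below are assumptions to adjust for your setup:

```shell
# Hypothetical OpenSearch Benchmark invocation; workload name matches the
# sections below, all other values are placeholders.
WORKLOAD="so"
CLIENTS=16
opensearch-benchmark execute-test \
  --workload="$WORKLOAD" \
  --target-hosts="https://localhost:9200" \
  --pipeline=benchmark-only \
  --workload-params="bulk_indexing_clients:$CLIENTS" \
  --client-options="use_ssl:true,verify_certs:false,basic_auth_user:admin,basic_auth_password:admin"
```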

StackOverflow

The following table lists the benchmarking results for the so workload with a remote translog buffer interval of 250 ms.

| Bulk indexing clients | Metric | Document replication | Remote enabled | Percent difference |
|---|---|---|---|---|
| 8 (default) | Indexing throughput, mean | 29582.5 | 40667.4 | 37.47 |
| 8 (default) | Indexing throughput, p50 | 28915.4 | 40343.4 | 39.52 |
| 8 (default) | Indexing latency, p90 | 1716.34 | 1469.5 | -14.38 |
| 16 | Indexing throughput, mean | 31154.9 | 47862.3 | 53.63 |
| 16 | Indexing throughput, p50 | 30406.4 | 47472.5 | 56.13 |
| 16 | Indexing latency, p90 | 3709.77 | 2799.82 | -24.53 |
| 24 | Indexing throughput, mean | 31777.2 | 51123.2 | 60.88 |
| 24 | Indexing throughput, p50 | 30852.1 | 50547.2 | 63.84 |
| 24 | Indexing latency, p90 | 5768.68 | 3794.13 | -34.23 |

HTTP logs

The following table lists the benchmarking results for the http_logs workload with a remote translog buffer interval of 200 ms.

| Bulk indexing clients | Metric | Document replication | Remote enabled | Percent difference |
|---|---|---|---|---|
| 8 (default) | Indexing throughput, mean | 149062 | 82198.7 | -44.86 |
| 8 (default) | Indexing throughput, p50 | 148123 | 81656.1 | -44.87 |
| 8 (default) | Indexing latency, p90 | 327.011 | 610.036 | 86.55 |
| 16 | Indexing throughput, mean | 134696 | 148749 | 10.43 |
| 16 | Indexing throughput, p50 | 133591 | 148859 | 11.43 |
| 16 | Indexing latency, p90 | 751.705 | 669.073 | -10.99 |
| 24 | Indexing throughput, mean | 133050 | 197239 | 48.24 |
| 24 | Indexing throughput, p50 | 132872 | 197455 | 48.61 |
| 24 | Indexing latency, p90 | 1145.19 | 817.185 | -28.64 |

NYC taxis

The following table lists the benchmarking results for the nyc_taxis workload with a remote translog buffer interval of 250 ms.

| Bulk indexing clients | Metric | Document replication | Remote enabled | Percent difference |
|---|---|---|---|---|
| 8 (default) | Indexing throughput, mean | 93383.9 | 94186.1 | 0.86 |
| 8 (default) | Indexing throughput, p50 | 91645.1 | 93906.7 | 2.47 |
| 8 (default) | Indexing latency, p90 | 995.217 | 1014.01 | 1.89 |
| 16 | Indexing throughput, mean | 91624.8 | 125770 | 37.27 |
| 16 | Indexing throughput, p50 | 89659.8 | 125443 | 39.91 |
| 16 | Indexing latency, p90 | 2236.33 | 1750.06 | -21.74 |
| 24 | Indexing throughput, mean | 93627.7 | 132006 | 40.99 |
| 24 | Indexing throughput, p50 | 91120.3 | 132166 | 45.05 |
| 24 | Indexing latency, p90 | 3353.45 | 2472 | -26.28 |

As shown by the results, there are consistent gains in cases where the indexing latency is more than the average remote upload time. When you increase the number of bulk indexing clients, a remote-enabled configuration provides indexing throughput gains of up to 60–65%. For more detailed results, see Issue #9790.

Next steps

To track future enhancements to remote-backed storage, see Issue #10181.