Reindex data

After creating an index, you might need to make an extensive change such as adding a new field to every document or combining multiple indexes to form a new one. Rather than deleting your index, making the change offline, and then indexing your data again, you can use the reindex operation.

With the reindex operation, you can copy all or a subset of documents that you select through a query to another index. Reindex is a POST operation. In its most basic form, you specify a source index and a destination index.

Reindexing can be an expensive operation depending on the size of your source index. We recommend you disable replicas in your destination index by setting number_of_replicas to 0 and re-enable them once the reindex process is complete.
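
For example, you can temporarily turn off replicas on the destination index with the update index settings API and restore them after the reindex completes. The index name destination here is just a placeholder:

  PUT destination/_settings
  {
    "index": {
      "number_of_replicas": 0
    }
  }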



Reindex all documents

You can copy all documents from one index to another.

First, create a destination index with your desired field mappings and settings, or copy them from your source index:

  PUT destination
  {
    "mappings": {
      "Add in your desired mappings"
    },
    "settings": {
      "Add in your desired settings"
    }
  }
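
If you want to reuse the mappings and settings from your source index, one way is to retrieve them first and copy the relevant parts into the request above. For example, assuming the source index is named source:

  GET source/_mapping
  GET source/_settings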

This reindex command copies all the documents from a source index to a destination index:

  POST _reindex
  {
    "source": {
      "index": "source"
    },
    "dest": {
      "index": "destination"
    }
  }

If the destination index does not already exist, the reindex operation creates it with default settings.

Reindex from a remote cluster

You can copy documents from an index in a remote cluster. Use the remote option to specify the remote hostname and the required login credentials.

This command reaches out to a remote cluster, logs in with the username and password, and copies all the documents from the source index in that remote cluster to the destination index in your local cluster:

  POST _reindex
  {
    "source": {
      "remote": {
        "host": "https://<REST_endpoint_of_remote_cluster>:9200",
        "username": "YOUR_USERNAME",
        "password": "YOUR_PASSWORD"
      },
      "index": "source"
    },
    "dest": {
      "index": "destination"
    }
  }

You can specify the following options:

Option | Valid values | Description | Required
------ | ------------ | ----------- | --------
host | String | The REST endpoint of the remote cluster. | Yes
username | String | The username to log into the remote cluster. | No
password | String | The password to log into the remote cluster. | No
socket_timeout | Time unit | The wait time for socket reads (default 30s). | No
connect_timeout | Time unit | The wait time for remote connection timeouts (default 30s). | No
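
For example, if the remote cluster is slow to respond, you can raise both timeouts in the remote object. The values below are arbitrary:

  POST _reindex
  {
    "source": {
      "remote": {
        "host": "https://<REST_endpoint_of_remote_cluster>:9200",
        "username": "YOUR_USERNAME",
        "password": "YOUR_PASSWORD",
        "socket_timeout": "1m",
        "connect_timeout": "30s"
      },
      "index": "source"
    },
    "dest": {
      "index": "destination"
    }
  }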

The following table lists the retry policy cluster settings.

Setting | Description | Default value
------- | ----------- | -------------
reindex.remote.retry.initial_backoff | The initial backoff time for retries. Subsequent retries follow exponential backoff based on the initial backoff time. | 500 ms
reindex.remote.retry.max_count | The maximum number of retry attempts. | 15
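
Assuming these retry settings are dynamic in your cluster, you can adjust them with the cluster settings API. For example, to allow more retry attempts:

  PUT _cluster/settings
  {
    "persistent": {
      "reindex.remote.retry.max_count": 20
    }
  }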

Reindex a subset of documents

You can copy a specific set of documents that match a search query.

This command copies only a subset of documents matched by a query operation to the destination index:

  POST _reindex
  {
    "source": {
      "index": "source",
      "query": {
        "match": {
          "field_name": "text"
        }
      }
    },
    "dest": {
      "index": "destination"
    }
  }

For a list of all query operations, see Full-text queries.

Combine one or more indexes

You can combine documents from one or more indexes by adding the source indexes as a list.

This command copies all documents from two source indexes to one destination index:

  POST _reindex
  {
    "source": {
      "index": [
        "source_1",
        "source_2"
      ]
    },
    "dest": {
      "index": "destination"
    }
  }

Make sure the number of shards for your source and destination indexes is the same.
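
One way to verify this is to compare the number_of_shards setting of each index. For example:

  GET source_1/_settings?filter_path=*.settings.index.number_of_shards
  GET destination/_settings?filter_path=*.settings.index.number_of_shards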

Reindex only unique documents

You can copy only documents missing from a destination index by setting the op_type option to create. In this case, if a document with the same ID already exists, the operation ignores the one from the source index. To ignore all version conflicts of documents, set the conflicts option to proceed.

  POST _reindex
  {
    "conflicts": "proceed",
    "source": {
      "index": "source"
    },
    "dest": {
      "index": "destination",
      "op_type": "create"
    }
  }

Transform documents during reindexing

You can transform your data during the reindexing process using the script option. We recommend Painless for scripting in OpenSearch.

This command runs the source index through a Painless script that increments a number field inside an account object before copying it to the destination index:

  POST _reindex
  {
    "source": {
      "index": "source"
    },
    "dest": {
      "index": "destination"
    },
    "script": {
      "lang": "painless",
      "source": "ctx._source.account.number++"
    }
  }

You can also specify an ingest pipeline to transform your data during the reindexing process.

First, create a pipeline that defines the processors you need. A number of different processors are available to use in an ingest pipeline.

Here’s a sample ingest pipeline that defines a split processor that splits a text field based on a space separator and stores it in a new word field. The script processor is a Painless script that finds the length of the word field and stores it in a new word_count field. The remove processor removes the test field.

  PUT _ingest/pipeline/pipeline-test
  {
    "description": "Splits the text field into a list. Computes the length of the 'word' field and stores it in a new 'word_count' field. Removes the 'test' field.",
    "processors": [
      {
        "split": {
          "field": "text",
          "separator": "\\s+",
          "target_field": "word"
        }
      },
      {
        "script": {
          "lang": "painless",
          "source": "ctx.word_count = ctx.word.size()"
        }
      },
      {
        "remove": {
          "field": "test"
        }
      }
    ]
  }

After creating a pipeline, you can use the reindex operation:

  POST _reindex
  {
    "source": {
      "index": "source"
    },
    "dest": {
      "index": "destination",
      "pipeline": "pipeline-test"
    }
  }
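
Before running the reindex, you can check that the pipeline behaves as expected by simulating it against a sample document. The text and test values below are made up for illustration:

  POST _ingest/pipeline/pipeline-test/_simulate
  {
    "docs": [
      {
        "_source": {
          "text": "quick brown fox",
          "test": "temporary value"
        }
      }
    ]
  }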

Update documents in the current index

To update the data in your current index itself without copying it to a different index, use the update_by_query operation.

The update_by_query operation is a POST operation that you can perform on a single index at a time.

  POST <index_name>/_update_by_query

If you run this command with no parameters, it increments the version number for all documents in the index.
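
More commonly, you pass a query to select the documents to update and a script that describes the change. The status field and its values in this sketch are hypothetical:

  POST source/_update_by_query
  {
    "query": {
      "match": {
        "status": "pending"
      }
    },
    "script": {
      "lang": "painless",
      "source": "ctx._source.status = 'processed'"
    }
  }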

Source index options

You can specify the following options for your source index:

Option | Valid values | Description | Required
------ | ------------ | ----------- | --------
index | String | The name of the source index. You can provide multiple source indexes as a list. | Yes
max_docs | Integer | The maximum number of documents to reindex. | No
query | Object | The search query to use for the reindex operation. | No
size | Integer | The number of documents to reindex. | No
slice | String | Specify manual or automatic slicing to parallelize reindexing. | No
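
For example, manual slicing splits the reindex into parts that you run as separate requests. This sketch processes the first of two slices; slice IDs start at 0:

  POST _reindex
  {
    "source": {
      "index": "source",
      "slice": {
        "id": 0,
        "max": 2
      }
    },
    "dest": {
      "index": "destination"
    }
  }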

Destination index options

You can specify the following options for your destination index:

Option | Valid values | Description | Required
------ | ------------ | ----------- | --------
index | String | The name of the destination index. | Yes
version_type | Enum | The version type for the indexing operation. Valid values: internal, external, external_gt, external_gte. | No
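
For example, setting version_type to external keeps the version numbers carried over from the source documents instead of letting the destination assign new ones:

  POST _reindex
  {
    "source": {
      "index": "source"
    },
    "dest": {
      "index": "destination",
      "version_type": "external"
    }
  }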

Index codec considerations

For index codec considerations, see Index codecs.