Rolling upgrade lab

You can follow these steps on your own compatible host to recreate the same cluster state the OpenSearch Project used for testing rolling upgrades. This exercise is useful if you want to test the upgrade process in a development environment.

The steps used in this lab were validated on an arbitrarily chosen Amazon Elastic Compute Cloud (Amazon EC2) t2.large instance using Amazon Linux 2 kernel version Linux 5.10.162-141.675.amzn2.x86_64 and Docker version 20.10.17, build 100c701. The instance was provisioned with an attached 20 GiB gp2 Amazon EBS root volume. These specifications are included for informational purposes and do not represent hardware requirements for OpenSearch or OpenSearch Dashboards.

References to the $HOME path on the host machine are represented by the tilde character (“~”) to make the instructions more portable. If you would prefer to specify an absolute path, modify the volume paths defined in upgrade-demo-cluster.sh and used throughout the relevant commands in this document to reflect your environment.
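
For example, if your deployment files lived under /opt/deploy (an illustrative location) instead of ~/deploy, a volume mount used later in this document would change as follows:

  # As written in this document, relative to $HOME:
  -v ~/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem
  # Rewritten with an absolute path:
  -v /opt/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem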

Setting up the environment

As you follow the steps in this document, you will define several Docker resources, including containers, volumes, and a dedicated Docker network, using a script we provide. You can clean up your environment with the following command if you want to restart the process:

  docker container stop $(docker container ls -aqf name=os-); \
  docker container rm $(docker container ls -aqf name=os-); \
  docker volume rm -f $(docker volume ls -q | egrep 'data-0|repo-0'); \
  docker network rm opensearch-dev-net

The command removes containers whose names match os-*, data volumes matching data-0* and repo-0*, and the Docker network named opensearch-dev-net. If you have other Docker resources running on your host, review and modify the command to avoid removing other resources unintentionally. This command does not revert host configuration changes, such as memory swapping behavior.
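
If you also want to revert the host-level changes made during setup, you can re-enable swapping and restore the kernel's memory map limit. A minimal sketch, assuming 65530 is your distribution's default vm.max_map_count and that you also remove the line added to /etc/sysctl.conf:

  # Re-enable swap devices defined in /etc/fstab
  sudo swapon -a
  # Restore the assumed kernel default for memory maps
  sudo sysctl -w vm.max_map_count=65530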

After selecting a host, you can begin the lab:

  1. Install the appropriate version of Docker Engine for your Linux distribution and system architecture.
  2. Configure important system settings on your host:

    1. Disable memory paging and swapping on the host to improve performance:

      sudo swapoff -a

    2. Increase the number of memory maps available to OpenSearch. Open the sysctl configuration file for editing. This example command uses the vim text editor, but you can use any available text editor:

      sudo vim /etc/sysctl.conf

    3. Add the following line to /etc/sysctl.conf:

      vm.max_map_count=262144

    4. Save and quit. If you use the vi or vim text editors, save and quit by switching to command mode and entering :wq! or ZZ.

    5. Apply the configuration change:

      sudo sysctl -p
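
    You can confirm that the new value is active by querying the kernel parameter directly:

      sysctl vm.max_map_count

    The output should read vm.max_map_count = 262144.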

  3. Create a new directory called deploy in your home directory, then navigate to it. You will use ~/deploy for paths in the deployment script, configuration files, and TLS certificates:

    mkdir ~/deploy && cd ~/deploy

  4. Download upgrade-demo-cluster.sh from the OpenSearch Project documentation-website repository:

    wget https://raw.githubusercontent.com/opensearch-project/documentation-website/main/assets/examples/upgrade-demo-cluster.sh

  5. Run the script without any modifications to deploy four containers running OpenSearch and one container running OpenSearch Dashboards, with custom self-signed TLS certificates and a predefined set of internal users:

    sh upgrade-demo-cluster.sh

  6. Confirm that the containers were launched successfully:

    docker container ls

    Example response

    CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS          PORTS                                                                                                       NAMES
    6e5218c8397d   opensearchproject/opensearch-dashboards:1.3.7   "./opensearch-dashbo…"   24 seconds ago   Up 22 seconds   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                                   os-dashboards-01
    cb5188308b21   opensearchproject/opensearch:1.3.7              "./opensearch-docker…"   25 seconds ago   Up 24 seconds   9300/tcp, 9650/tcp, 0.0.0.0:9204->9200/tcp, :::9204->9200/tcp, 0.0.0.0:9604->9600/tcp, :::9604->9600/tcp   os-node-04
    71b682aa6671   opensearchproject/opensearch:1.3.7              "./opensearch-docker…"   26 seconds ago   Up 25 seconds   9300/tcp, 9650/tcp, 0.0.0.0:9203->9200/tcp, :::9203->9200/tcp, 0.0.0.0:9603->9600/tcp, :::9603->9600/tcp   os-node-03
    f894054a9378   opensearchproject/opensearch:1.3.7              "./opensearch-docker…"   27 seconds ago   Up 26 seconds   9300/tcp, 9650/tcp, 0.0.0.0:9202->9200/tcp, :::9202->9200/tcp, 0.0.0.0:9602->9600/tcp, :::9602->9600/tcp   os-node-02
    2e9c91c959cd   opensearchproject/opensearch:1.3.7              "./opensearch-docker…"   28 seconds ago   Up 27 seconds   9300/tcp, 9650/tcp, 0.0.0.0:9201->9200/tcp, :::9201->9200/tcp, 0.0.0.0:9601->9600/tcp, :::9601->9600/tcp   os-node-01
  7. The amount of time OpenSearch needs to initialize the cluster varies depending on the performance capabilities of the underlying host. You can follow container logs to see what OpenSearch is doing during the bootstrap process:

    1. Enter the following command to display logs for container os-node-01 in the terminal window:

      docker logs -f os-node-01

    2. You will see a log entry resembling the following example when the node is ready:

      Example

      [INFO ][o.o.s.c.ConfigurationRepository] [os-node-01] Node 'os-node-01' initialized
    3. Press Ctrl+C to stop following container logs and return to the command prompt.
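
    If you prefer to script the wait instead of watching logs, here is a minimal sketch that polls the REST API until the node responds successfully (substitute your own admin credentials):

      until curl -sf -ku admin:<custom-admin-password> "https://localhost:9201" >/dev/null; do
        sleep 5   # keep polling until the node accepts authenticated requests
      done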

  8. Use cURL to query the OpenSearch REST API. In the following command, os-node-01 is queried by sending the request to host port 9201, which is mapped to port 9200 on the container:

    curl -s "https://localhost:9201" -ku admin:<custom-admin-password>

    Example response

    {
      "name" : "os-node-01",
      "cluster_name" : "opensearch-dev-cluster",
      "cluster_uuid" : "g1MMknuDRuuD9IaaNt56KA",
      "version" : {
        "distribution" : "opensearch",
        "number" : "1.3.7",
        "build_type" : "tar",
        "build_hash" : "db18a0d5a08b669fb900c00d81462e221f4438ee",
        "build_date" : "2022-12-07T22:59:20.186520Z",
        "build_snapshot" : false,
        "lucene_version" : "8.10.1",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }

Tip: Use the -s option with curl to hide the progress meter and error messages.
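
To quickly confirm that all four OpenSearch nodes respond on their mapped host ports, you can loop over the ports and extract the version each node reports. A short sketch using the same credentials:

  # Query each node's root endpoint and print the version line from the JSON response
  for port in 9201 9202 9203 9204; do
    curl -s "https://localhost:${port}" -ku admin:<custom-admin-password> | grep '"number"'
  done

Each iteration should print "number" : "1.3.7".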

Adding data and configuring OpenSearch Security

Now that the OpenSearch cluster is running, it’s time to add data and configure some OpenSearch Security settings. The data you add and settings you configure will be validated again after the version upgrade is complete.

This section can be broken down into two parts:

  • Indexing data with the REST API
  • Adding data using OpenSearch Dashboards

Indexing data with the REST API

  1. Download the sample field mappings file:

    wget https://raw.githubusercontent.com/opensearch-project/documentation-website/main/assets/examples/ecommerce-field_mappings.json

  2. Next, download the bulk data that you will ingest into this index:

    wget https://raw.githubusercontent.com/opensearch-project/documentation-website/main/assets/examples/ecommerce.ndjson

  3. Use the Create index API to create an index using the mappings defined in ecommerce-field_mappings.json:

    curl -H "Content-Type: application/json" \
      -X PUT "https://localhost:9201/ecommerce?pretty" \
      --data-binary "@ecommerce-field_mappings.json" \
      -ku admin:<custom-admin-password>

    Example response

    {
      "acknowledged" : true,
      "shards_acknowledged" : true,
      "index" : "ecommerce"
    }
  4. Use the Bulk API to add data to the new ecommerce index from ecommerce.ndjson:

    curl -H "Content-Type: application/x-ndjson" \
      -X PUT "https://localhost:9201/ecommerce/_bulk?pretty" \
      --data-binary "@ecommerce.ndjson" \
      -ku admin:<custom-admin-password>

    Example response (truncated)

    {
      "took" : 3323,
      "errors" : false,
      "items" : [
        ...
        "index" : {
          "_index" : "ecommerce",
          "_type" : "_doc",
          "_id" : "4674",
          "_version" : 1,
          "result" : "created",
          "_shards" : {
            "total" : 2,
            "successful" : 2,
            "failed" : 0
          },
          "_seq_no" : 4674,
          "_primary_term" : 1,
          "status" : 201
        }
      ]
    }
  5. You can also run a search query to confirm that the data was indexed successfully. The following query returns the number of documents in which the keyword `customer_first_name` matches `Sonya`:

    curl -H 'Content-Type: application/json' \
      -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \
      -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \
      -ku admin:<custom-admin-password>

    Example response

    {
      "hits" : {
        "total" : {
          "value" : 106,
          "relation" : "eq"
        }
      }
    }
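
As an additional consistency check, you can record the index's total document count now and compare it with the count you observe after the upgrade. A sketch using the count API:

  curl -s "https://localhost:9201/ecommerce/_count?pretty" -ku admin:<custom-admin-password>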

Adding data using OpenSearch Dashboards

  1. Open a web browser and navigate to port 5601 on your Docker host (for example, https://HOST_ADDRESS:5601). If OpenSearch Dashboards is running and you have network access to the host from your browser client, then you will be redirected to a login page.
    1. If the web browser throws an error because the TLS certificates are self-signed, then you might need to bypass certificate checks in your browser. Refer to the browser’s documentation for information about bypassing certificate checks. The common name (CN) for each certificate is generated according to the container and node names for intracluster communication, so connecting to the host from a browser will still result in an “invalid CN” warning.
  2. Enter the default username (admin) and password (admin).
  3. On the OpenSearch Dashboards Home page, select Add sample data.
  4. Under Sample web logs, select Add data.
    1. Optional: Select View data to review the [Logs] Web Traffic dashboard.
  5. Select the Menu button to open the Navigation pane, then go to Security > Internal users.
  6. Select Create internal user.
  7. Provide a Username and Password.
  8. In the Backend role field, enter admin.
  9. Select Create.
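
If you prefer working from the command line, you can create the same kind of internal user with the OpenSearch Security REST API. A sketch in which newuser and its password are placeholders for the values you chose:

  curl -H 'Content-Type: application/json' \
    -X PUT "https://localhost:9201/_plugins/_security/api/internalusers/newuser?pretty" \
    -d '{"password":"<new-user-password>","backend_roles":["admin"]}' \
    -ku admin:<custom-admin-password>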

Backing up important files

Always create backups before making changes to your cluster, especially if the cluster is running in a production environment.

In this section, you will be:

  • Registering a snapshot repository
  • Creating a snapshot
  • Backing up security settings

Registering a snapshot repository

  1. Register a repository using the volume that was mapped by upgrade-demo-cluster.sh:

    curl -H 'Content-Type: application/json' \
      -X PUT "https://localhost:9201/_snapshot/snapshot-repo?pretty" \
      -d '{"type":"fs","settings":{"location":"/usr/share/opensearch/snapshots"}}' \
      -ku admin:<custom-admin-password>

    Example response

    {
      "acknowledged" : true
    }
  2. Optional: Perform an additional check to verify that the repository was created successfully:

    curl -H 'Content-Type: application/json' \
      -X POST "https://localhost:9201/_snapshot/snapshot-repo/_verify?timeout=0s&master_timeout=50s&pretty" \
      -ku admin:<custom-admin-password>

    Example response

    {
      "nodes" : {
        "UODBXfAlRnueJ67grDxqgw" : {
          "name" : "os-node-03"
        },
        "14I_OyBQQXio8nmk0xsVcQ" : {
          "name" : "os-node-04"
        },
        "tQp3knPRRUqHvFNKpuD2vQ" : {
          "name" : "os-node-02"
        },
        "rPe8D6ssRgO5twIP00wbCQ" : {
          "name" : "os-node-01"
        }
      }
    }

Creating a snapshot

Snapshots are backups of a cluster’s indexes and state. See Snapshots to learn more.

  1. Create a snapshot that includes all indexes and the cluster state:

    curl -H 'Content-Type: application/json' \
      -X PUT "https://localhost:9201/_snapshot/snapshot-repo/cluster-snapshot-v137?wait_for_completion=true&pretty" \
      -ku admin:<custom-admin-password>

    Example response

    {
      "snapshot" : {
        "snapshot" : "cluster-snapshot-v137",
        "uuid" : "-IYB8QNPShGOTnTtMjBjNg",
        "version_id" : 135248527,
        "version" : "1.3.7",
        "indices" : [
          "opensearch_dashboards_sample_data_logs",
          ".opendistro_security",
          "security-auditlog-2023.02.27",
          ".kibana_1",
          ".kibana_92668751_admin_1",
          "ecommerce",
          "security-auditlog-2023.03.06",
          "security-auditlog-2023.02.28",
          "security-auditlog-2023.03.07"
        ],
        "data_streams" : [ ],
        "include_global_state" : true,
        "state" : "SUCCESS",
        "start_time" : "2023-03-07T18:33:00.656Z",
        "start_time_in_millis" : 1678213980656,
        "end_time" : "2023-03-07T18:33:01.471Z",
        "end_time_in_millis" : 1678213981471,
        "duration_in_millis" : 815,
        "failures" : [ ],
        "shards" : {
          "total" : 9,
          "failed" : 0,
          "successful" : 9
        }
      }
    }
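
You can list the snapshots stored in the repository at any time, which is a quick way to confirm that the backup exists before you begin the upgrade:

  curl -s "https://localhost:9201/_cat/snapshots/snapshot-repo?v" -ku admin:<custom-admin-password>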

Backing up security settings

Cluster administrators can modify OpenSearch Security settings by using any of the following methods:

  • Modifying YAML files and running securityadmin.sh
  • Making REST API requests using the admin certificate
  • Making changes with OpenSearch Dashboards

Regardless of the method you choose, OpenSearch Security writes your configuration to a special system index called .opendistro_security. This system index is preserved through the upgrade process, and it is also saved in the snapshot you created. However, restoring system indexes requires elevated access granted by the admin certificate. To learn more, see System indexes and Configuring TLS certificates.
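
For example, because upgrade-demo-cluster.sh leaves copies of the admin certificate and key in ~/deploy, you could make a Security REST API request from the host by presenting the client certificate instead of basic credentials. A sketch (the -k flag is still needed because the certificate CNs match node names rather than localhost):

  curl -sk --cert ~/deploy/admin.pem --key ~/deploy/admin-key.pem \
    "https://localhost:9201/_plugins/_security/api/internalusers?pretty"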

You can also export your OpenSearch Security settings as YAML files by running securityadmin.sh with the -backup option on any of your OpenSearch nodes. These YAML files can be used to reinitialize the .opendistro_security index with your existing configuration. The following steps will guide you through generating these backup files and copying them to your host for storage:

  1. Open an interactive pseudo-TTY session with os-node-01:

    docker exec -it os-node-01 bash

  2. Create a directory called backups and navigate to it:

    mkdir /usr/share/opensearch/backups && cd /usr/share/opensearch/backups

  3. Use securityadmin.sh to create backups of your OpenSearch Security settings in /usr/share/opensearch/backups/:

    /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
      -backup /usr/share/opensearch/backups \
      -icl \
      -nhnv \
      -cacert /usr/share/opensearch/config/root-ca.pem \
      -cert /usr/share/opensearch/config/admin.pem \
      -key /usr/share/opensearch/config/admin-key.pem

    Example response

    Security Admin v7
    Will connect to localhost:9300 ... done
    Connected as CN=A,OU=DOCS,O=OPENSEARCH,L=PORTLAND,ST=OREGON,C=US
    OpenSearch Version: 1.3.7
    OpenSearch Security Version: 1.3.7.0
    Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
    Clustername: opensearch-dev-cluster
    Clusterstate: GREEN
    Number of nodes: 4
    Number of data nodes: 4
    .opendistro_security index already exists, so we do not need to create one.
    Will retrieve '/config' into /usr/share/opensearch/backups/config.yml
    SUCC: Configuration for 'config' stored in /usr/share/opensearch/backups/config.yml
    Will retrieve '/roles' into /usr/share/opensearch/backups/roles.yml
    SUCC: Configuration for 'roles' stored in /usr/share/opensearch/backups/roles.yml
    Will retrieve '/rolesmapping' into /usr/share/opensearch/backups/roles_mapping.yml
    SUCC: Configuration for 'rolesmapping' stored in /usr/share/opensearch/backups/roles_mapping.yml
    Will retrieve '/internalusers' into /usr/share/opensearch/backups/internal_users.yml
    SUCC: Configuration for 'internalusers' stored in /usr/share/opensearch/backups/internal_users.yml
    Will retrieve '/actiongroups' into /usr/share/opensearch/backups/action_groups.yml
    SUCC: Configuration for 'actiongroups' stored in /usr/share/opensearch/backups/action_groups.yml
    Will retrieve '/tenants' into /usr/share/opensearch/backups/tenants.yml
    SUCC: Configuration for 'tenants' stored in /usr/share/opensearch/backups/tenants.yml
    Will retrieve '/nodesdn' into /usr/share/opensearch/backups/nodes_dn.yml
    SUCC: Configuration for 'nodesdn' stored in /usr/share/opensearch/backups/nodes_dn.yml
    Will retrieve '/whitelist' into /usr/share/opensearch/backups/whitelist.yml
    SUCC: Configuration for 'whitelist' stored in /usr/share/opensearch/backups/whitelist.yml
    Will retrieve '/audit' into /usr/share/opensearch/backups/audit.yml
    SUCC: Configuration for 'audit' stored in /usr/share/opensearch/backups/audit.yml
  4. Optional: Create a backup directory for TLS certificates and store copies of the certificates. Repeat this for each node if you use unique TLS certificates:

    mkdir /usr/share/opensearch/backups/certs && cp /usr/share/opensearch/config/*pem /usr/share/opensearch/backups/certs/

  5. Terminate the pseudo-TTY session:

    exit

  6. Copy the files to your host:

    docker cp os-node-01:/usr/share/opensearch/backups ~/deploy/
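
Optionally, bundle the copied files into a single archive so the security backup is easy to store or move:

  tar -czf ~/deploy/security-backups.tar.gz -C ~/deploy backups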

Performing the upgrade

Now that the cluster is configured and you have made backups of important files and settings, it’s time to begin the version upgrade.

Some steps included in this section, like disabling shard replication and flushing the transaction log, will not impact the performance of your cluster. These steps are included as best practices and can significantly improve cluster performance in situations where clients continue interacting with the OpenSearch cluster throughout the upgrade, such as by querying existing data or indexing documents.

  1. Disable shard replication to stop the movement of Lucene index segments within your cluster:

    curl -H 'Content-type: application/json' \
      -X PUT "https://localhost:9201/_cluster/settings?pretty" \
      -d'{"persistent":{"cluster.routing.allocation.enable":"primaries"}}' \
      -ku admin:<custom-admin-password>

    Example response

    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "primaries"
            }
          }
        }
      },
      "transient" : { }
    }
  2. Perform a flush operation on the cluster to commit transaction log entries to the Lucene index:

    curl -X POST "https://localhost:9201/_flush?pretty" -ku admin:<custom-admin-password>

    Example response

    {
      "_shards" : {
        "total" : 20,
        "successful" : 20,
        "failed" : 0
      }
    }
  3. Select a node to upgrade. You can upgrade nodes in any order because all of the nodes in this demo cluster are eligible cluster managers. The following command will stop and remove container os-node-01 without removing the mounted data volume:

    docker stop os-node-01 && docker container rm os-node-01

  4. Start a new container named os-node-01 using the opensearchproject/opensearch:2.5.0 image and the same mapped volumes as the original container:

    docker run -d \
      -p 9201:9200 -p 9601:9600 \
      -e "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" \
      --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
      -v data-01:/usr/share/opensearch/data \
      -v repo-01:/usr/share/opensearch/snapshots \
      -v ~/deploy/opensearch-01.yml:/usr/share/opensearch/config/opensearch.yml \
      -v ~/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem \
      -v ~/deploy/admin.pem:/usr/share/opensearch/config/admin.pem \
      -v ~/deploy/admin-key.pem:/usr/share/opensearch/config/admin-key.pem \
      -v ~/deploy/os-node-01.pem:/usr/share/opensearch/config/os-node-01.pem \
      -v ~/deploy/os-node-01-key.pem:/usr/share/opensearch/config/os-node-01-key.pem \
      --network opensearch-dev-net \
      --ip 172.20.0.11 \
      --name os-node-01 \
      opensearchproject/opensearch:2.5.0

    Example response

    d26d0cb2e1e93e9c01bb00f19307525ef89c3c3e306d75913860e6542f729ea4
  5. Optional: Query the cluster to determine which node is acting as the cluster manager. You can run this command at any time during the process to see when a new cluster manager is elected:

    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
      -ku admin:<custom-admin-password> | column -t

    Example response

    name        version  node.role  master
    os-node-01  2.5.0    dimr       -
    os-node-04  1.3.7    dimr       *
    os-node-02  1.3.7    dimr       -
    os-node-03  1.3.7    dimr       -
  6. Optional: Query the cluster to see how shard allocation changes as nodes are removed and replaced. You can run this command at any time during the process to see how shard statuses change:

    curl -s "https://localhost:9201/_cat/shards" \
      -ku admin:<custom-admin-password>

    Example response

    security-auditlog-2023.03.06 0 p STARTED 53 214.5kb 172.20.0.13 os-node-03
    security-auditlog-2023.03.06 0 r UNASSIGNED
    .kibana_1 0 p STARTED 3 14.5kb 172.20.0.12 os-node-02
    .kibana_1 0 r STARTED 3 14.5kb 172.20.0.13 os-node-03
    ecommerce 0 p STARTED 4675 3.9mb 172.20.0.12 os-node-02
    ecommerce 0 r STARTED 4675 3.9mb 172.20.0.14 os-node-04
    security-auditlog-2023.03.07 0 p STARTED 37 175.7kb 172.20.0.14 os-node-04
    security-auditlog-2023.03.07 0 r UNASSIGNED
    .opendistro_security 0 p STARTED 10 67.9kb 172.20.0.12 os-node-02
    .opendistro_security 0 r STARTED 10 67.9kb 172.20.0.13 os-node-03
    .opendistro_security 0 r STARTED 10 64.5kb 172.20.0.14 os-node-04
    .opendistro_security 0 r UNASSIGNED
    security-auditlog-2023.02.27 0 p STARTED 4 80.5kb 172.20.0.12 os-node-02
    security-auditlog-2023.02.27 0 r UNASSIGNED
    security-auditlog-2023.02.28 0 p STARTED 6 104.1kb 172.20.0.14 os-node-04
    security-auditlog-2023.02.28 0 r UNASSIGNED
    opensearch_dashboards_sample_data_logs 0 p STARTED 14074 9.1mb 172.20.0.12 os-node-02
    opensearch_dashboards_sample_data_logs 0 r STARTED 14074 8.9mb 172.20.0.13 os-node-03
    .kibana_92668751_admin_1 0 r STARTED 33 37.3kb 172.20.0.13 os-node-03
    .kibana_92668751_admin_1 0 p STARTED 33 37.3kb 172.20.0.14 os-node-04
  7. Stop os-node-02:

    docker stop os-node-02 && docker container rm os-node-02

  8. Start a new container named os-node-02 using the opensearchproject/opensearch:2.5.0 image and the same mapped volumes as the original container:

    docker run -d \
      -p 9202:9200 -p 9602:9600 \
      -e "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" \
      --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
      -v data-02:/usr/share/opensearch/data \
      -v repo-01:/usr/share/opensearch/snapshots \
      -v ~/deploy/opensearch-02.yml:/usr/share/opensearch/config/opensearch.yml \
      -v ~/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem \
      -v ~/deploy/admin.pem:/usr/share/opensearch/config/admin.pem \
      -v ~/deploy/admin-key.pem:/usr/share/opensearch/config/admin-key.pem \
      -v ~/deploy/os-node-02.pem:/usr/share/opensearch/config/os-node-02.pem \
      -v ~/deploy/os-node-02-key.pem:/usr/share/opensearch/config/os-node-02-key.pem \
      --network opensearch-dev-net \
      --ip 172.20.0.12 \
      --name os-node-02 \
      opensearchproject/opensearch:2.5.0

    Example response

    7b802865bd6eb420a106406a54fc388ed8e5e04f6cbd908c2a214ea5ce72ac00
  9. Stop os-node-03:

    docker stop os-node-03 && docker container rm os-node-03

  10. Start a new container named os-node-03 using the opensearchproject/opensearch:2.5.0 image and the same mapped volumes as the original container:

    docker run -d \
      -p 9203:9200 -p 9603:9600 \
      -e "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" \
      --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
      -v data-03:/usr/share/opensearch/data \
      -v repo-01:/usr/share/opensearch/snapshots \
      -v ~/deploy/opensearch-03.yml:/usr/share/opensearch/config/opensearch.yml \
      -v ~/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem \
      -v ~/deploy/admin.pem:/usr/share/opensearch/config/admin.pem \
      -v ~/deploy/admin-key.pem:/usr/share/opensearch/config/admin-key.pem \
      -v ~/deploy/os-node-03.pem:/usr/share/opensearch/config/os-node-03.pem \
      -v ~/deploy/os-node-03-key.pem:/usr/share/opensearch/config/os-node-03-key.pem \
      --network opensearch-dev-net \
      --ip 172.20.0.13 \
      --name os-node-03 \
      opensearchproject/opensearch:2.5.0

    Example response

    d7f11726841a89eb88ff57a8cbecab392399f661a5205f0c81b60a995fc6c99d
  11. Stop os-node-04:

    docker stop os-node-04 && docker container rm os-node-04

  12. Start a new container named os-node-04 using the opensearchproject/opensearch:2.5.0 image and the same mapped volumes as the original container:

    docker run -d \
      -p 9204:9200 -p 9604:9600 \
      -e "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" \
      --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
      -v data-04:/usr/share/opensearch/data \
      -v repo-01:/usr/share/opensearch/snapshots \
      -v ~/deploy/opensearch-04.yml:/usr/share/opensearch/config/opensearch.yml \
      -v ~/deploy/root-ca.pem:/usr/share/opensearch/config/root-ca.pem \
      -v ~/deploy/admin.pem:/usr/share/opensearch/config/admin.pem \
      -v ~/deploy/admin-key.pem:/usr/share/opensearch/config/admin-key.pem \
      -v ~/deploy/os-node-04.pem:/usr/share/opensearch/config/os-node-04.pem \
      -v ~/deploy/os-node-04-key.pem:/usr/share/opensearch/config/os-node-04-key.pem \
      --network opensearch-dev-net \
      --ip 172.20.0.14 \
      --name os-node-04 \
      opensearchproject/opensearch:2.5.0

    Example response

    26f8286ab11e6f8dcdf6a83c95f265172f9557578a1b292af84c6f5ef8738e1d
  13. Confirm that your cluster is running the new version:

    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
      -ku admin:<custom-admin-password> | column -t

    Example response

    name        version  node.role  master
    os-node-01  2.5.0    dimr       *
    os-node-02  2.5.0    dimr       -
    os-node-04  2.5.0    dimr       -
    os-node-03  2.5.0    dimr       -
  14. The last component you should upgrade is the OpenSearch Dashboards node. First, stop and remove the old container:

    docker stop os-dashboards-01 && docker rm os-dashboards-01

  15. Create a new container running the target version of OpenSearch Dashboards:

    docker run -d \
      -p 5601:5601 --expose 5601 \
      -v ~/deploy/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml \
      -v ~/deploy/root-ca.pem:/usr/share/opensearch-dashboards/config/root-ca.pem \
      -v ~/deploy/os-dashboards-01.pem:/usr/share/opensearch-dashboards/config/os-dashboards-01.pem \
      -v ~/deploy/os-dashboards-01-key.pem:/usr/share/opensearch-dashboards/config/os-dashboards-01-key.pem \
      --network opensearch-dev-net \
      --ip 172.20.0.10 \
      --name os-dashboards-01 \
      opensearchproject/opensearch-dashboards:2.5.0

    Example response

    310de7a24cf599ca0b39b241db07fa8865592ebe15b6f5fda26ad19d8e1c1e09
  16. Make sure the OpenSearch Dashboards container started properly. You can use a command like the following to confirm that requests to https://HOST_ADDRESS:5601 are redirected (HTTP status code 302) to /app/login?:

    curl https://localhost:5601 -kI

    Example response

    HTTP/1.1 302 Found
    location: /app/login?
    osd-name: opensearch-dashboards-dev
    cache-control: private, no-cache, no-store, must-revalidate
    set-cookie: security_authentication=; Max-Age=0; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; HttpOnly; Path=/
    content-length: 0
    Date: Wed, 08 Mar 2023 15:36:53 GMT
    Connection: keep-alive
    Keep-Alive: timeout=120
  17. Re-enable allocation of replica shards:

    curl -H 'Content-type: application/json' \
      -X PUT "https://localhost:9201/_cluster/settings?pretty" \
      -d'{"persistent":{"cluster.routing.allocation.enable":"all"}}' \
      -ku admin:<custom-admin-password>

    Example response

    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      },
      "transient" : { }
    }
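
After replica allocation is re-enabled, shards begin copying onto the upgraded nodes. You can watch the recoveries in progress until the list is empty:

  curl -s "https://localhost:9201/_cat/recovery?active_only=true" -ku admin:<custom-admin-password>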

Validating the upgrade

You successfully deployed a secure OpenSearch cluster, indexed data, created a dashboard populated with sample data, created a new internal user, backed up your important files, and upgraded the cluster from version 1.3.7 to 2.5.0. Before you continue exploring and experimenting with OpenSearch and OpenSearch Dashboards, you should validate the outcome of the upgrade.

For this cluster, post-upgrade validation steps can include verifying the following:

  • The new running version of OpenSearch and OpenSearch Dashboards
  • Cluster health and shard allocation
  • Data consistency

Verifying the new running version

  1. Verify the current running version of your OpenSearch nodes:

    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
      -ku admin:<custom-admin-password> | column -t

    Example response

    name        version  node.role  master
    os-node-01  2.5.0    dimr       *
    os-node-02  2.5.0    dimr       -
    os-node-04  2.5.0    dimr       -
    os-node-03  2.5.0    dimr       -
  2. Verify the current running version of OpenSearch Dashboards:

    1. Option 1: Verify the OpenSearch Dashboards version from the web interface.
      1. Open a web browser and navigate to port 5601 on your Docker host (for example, https://HOST_ADDRESS:5601).
      2. Log in with the default username (admin) and default password (admin).
      3. Select the Help button in the upper-right corner. The version is displayed in a pop-up window.
      4. Select the Help button again to close the pop-up window.
    2. Option 2: Verify the OpenSearch Dashboards version by inspecting manifest.yml.

      1. From the command line, open an interactive pseudo-TTY session with the OpenSearch Dashboards container:

        docker exec -it os-dashboards-01 bash

      2. Check manifest.yml for the version:

        head -n 5 manifest.yml

        Example response

        ---
        schema-version: '1.1'
        build:
          name: OpenSearch Dashboards
          version: 2.5.0
      3. Terminate the pseudo-TTY session:

        exit

Verifying cluster health and shard allocation

  1. Query the Cluster health API endpoint to see information about the health of your cluster. You should see a status of green, which indicates that all primary and replica shards are allocated:

    curl -s "https://localhost:9201/_cluster/health?pretty" -ku admin:<custom-admin-password>

    Example response

    {
      "cluster_name" : "opensearch-dev-cluster",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 4,
      "number_of_data_nodes" : 4,
      "discovered_master" : true,
      "discovered_cluster_manager" : true,
      "active_primary_shards" : 16,
      "active_shards" : 36,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
  2. Query the CAT shards API endpoint to see how shards are allocated after the cluster is upgraded:

    curl -s "https://localhost:9201/_cat/shards" -ku admin:<custom-admin-password>

    Example response

    security-auditlog-2023.02.27 0 r STARTED 4 80.5kb 172.20.0.13 os-node-03
    security-auditlog-2023.02.27 0 p STARTED 4 80.5kb 172.20.0.11 os-node-01
    security-auditlog-2023.03.08 0 p STARTED 30 95.2kb 172.20.0.13 os-node-03
    security-auditlog-2023.03.08 0 r STARTED 30 123.8kb 172.20.0.11 os-node-01
    ecommerce 0 p STARTED 4675 3.9mb 172.20.0.12 os-node-02
    ecommerce 0 r STARTED 4675 3.9mb 172.20.0.13 os-node-03
    .kibana_1 0 p STARTED 3 5.9kb 172.20.0.12 os-node-02
    .kibana_1 0 r STARTED 3 5.9kb 172.20.0.11 os-node-01
    .kibana_92668751_admin_1 0 p STARTED 33 37.3kb 172.20.0.13 os-node-03
    .kibana_92668751_admin_1 0 r STARTED 33 37.3kb 172.20.0.11 os-node-01
    opensearch_dashboards_sample_data_logs 0 p STARTED 14074 9.1mb 172.20.0.12 os-node-02
    opensearch_dashboards_sample_data_logs 0 r STARTED 14074 9.1mb 172.20.0.14 os-node-04
    security-auditlog-2023.02.28 0 p STARTED 6 26.2kb 172.20.0.11 os-node-01
    security-auditlog-2023.02.28 0 r STARTED 6 26.2kb 172.20.0.14 os-node-04
    .opendistro-reports-definitions 0 p STARTED 0 208b 172.20.0.12 os-node-02
    .opendistro-reports-definitions 0 r STARTED 0 208b 172.20.0.13 os-node-03
    .opendistro-reports-definitions 0 r STARTED 0 208b 172.20.0.14 os-node-04
    security-auditlog-2023.03.06 0 r STARTED 53 174.6kb 172.20.0.12 os-node-02
    security-auditlog-2023.03.06 0 p STARTED 53 174.6kb 172.20.0.14 os-node-04
    .kibana_101107607_newuser_1 0 r STARTED 1 5.1kb 172.20.0.13 os-node-03
    .kibana_101107607_newuser_1 0 p STARTED 1 5.1kb 172.20.0.11 os-node-01
    .opendistro_security 0 r STARTED 10 64.5kb 172.20.0.12 os-node-02
    .opendistro_security 0 r STARTED 10 64.5kb 172.20.0.13 os-node-03
    .opendistro_security 0 r STARTED 10 64.5kb 172.20.0.11 os-node-01
    .opendistro_security 0 p STARTED 10 64.5kb 172.20.0.14 os-node-04
    .kibana_-152937574_admintenant_1 0 r STARTED 1 5.1kb 172.20.0.12 os-node-02
    .kibana_-152937574_admintenant_1 0 p STARTED 1 5.1kb 172.20.0.14 os-node-04
    security-auditlog-2023.03.07 0 r STARTED 37 175.7kb 172.20.0.12 os-node-02
    security-auditlog-2023.03.07 0 p STARTED 37 175.7kb 172.20.0.14 os-node-04
    .kibana_92668751_admin_2 0 p STARTED 34 38.6kb 172.20.0.13 os-node-03
    .kibana_92668751_admin_2 0 r STARTED 34 38.6kb 172.20.0.11 os-node-01
    .kibana_2 0 p STARTED 3 6kb 172.20.0.13 os-node-03
    .kibana_2 0 r STARTED 3 6kb 172.20.0.14 os-node-04
    .opendistro-reports-instances 0 r STARTED 0 208b 172.20.0.12 os-node-02
    .opendistro-reports-instances 0 r STARTED 0 208b 172.20.0.11 os-node-01
    .opendistro-reports-instances 0 p STARTED 0 208b 172.20.0.14 os-node-04

Verifying data consistency

Query the ecommerce index again to confirm that the sample data is still present:

  1. Compare the response to this query with the response you received in the last step of Indexing data with the REST API:

    curl -H 'Content-Type: application/json' \
      -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \
      -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \
      -ku admin:<custom-admin-password>

    Example response

    {
      "hits" : {
        "total" : {
          "value" : 106,
          "relation" : "eq"
        }
      }
    }
  2. Open a web browser and navigate to port 5601 on your Docker host (for example, https://HOST_ADDRESS:5601).

  3. Enter the default username (admin) and password (admin).
  4. On the OpenSearch Dashboards Home page, select the Menu button in the upper-left corner of the web interface to open the Navigation pane.
  5. Select Dashboard.
  6. Choose [Logs] Web Traffic to open the dashboard that was created when you added sample data earlier in the process.
  7. When you are done reviewing the dashboard, select the Profile button. Choose Log out so you can log in as a different user.
  8. Enter the username and password you created before upgrading, then select Log in.
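
Beyond spot-checking a single query and dashboard, you can compare per-index document counts with your pre-upgrade expectations by using the CAT indices API:

  curl -s "https://localhost:9201/_cat/indices?v&h=index,status,docs.count" -ku admin:<custom-admin-password>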

Next steps

Review the following resources to learn more about how OpenSearch works: