DC/OS on AWS using the Universal Installer

Guide for DC/OS on AWS using the Universal Installer

IMPORTANT: The Universal Installer has now been upgraded to `0.3`. If you already have a Terraform-managed cluster running, please see the upgrade information at the bottom of this page.

This guide is meant to take an operator through all steps necessary for a successful installation of DC/OS using Terraform. If you are already familiar with the prerequisites, you can jump to creating a DC/OS Cluster.

Prerequisites

  • Linux, macOS, or Windows
  • A command-line shell such as Bash or PowerShell
  • A verified Amazon Web Services (AWS) account and an AWS IAM user profile with sufficient permissions

Install Terraform

  1. Visit the Terraform releases page for bundled installations and support for Linux, macOS and Windows. Choose the latest 0.12 version.

    If you are on macOS with Homebrew installed, simply run the following commands:

    brew unlink terraform || true
    brew install tfenv
    tfenv install 0.12.25
    tfenv use 0.12.25

    If you are on Windows with Chocolatey installed, run:

    choco install terraform --version 0.12.25 -y

Ensure you have your cloud provider credentials

There are many ways to pass your credentials to Terraform so that it can authenticate with your cloud provider. Most likely, you already have your credentials loaded through the AWS CLI, and Terraform will automatically detect them during initialization. See configuring the AWS CLI for more information on setting up credentials and user profiles.

Alternatively, you can pass in your access_key and secret_key through the configuration file that you will create. The properties listed below are the three things Terraform needs from you; see the provider configuration reference for more information on how this works under the hood. Also keep in mind that, for security, your credentials should be stored outside of version control.

    provider "aws" {
      access_key = "foo"
      secret_key = "bar"
      region     = "us-east-1"
    }
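If you prefer not to store keys in any file at all, Terraform's AWS provider also reads the standard AWS environment variables. A minimal sketch with placeholder values (substitute your real credentials; some provider versions read AWS_REGION instead of AWS_DEFAULT_REGION):

```shell
# Placeholder credentials for illustration only -- never commit real keys.
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="bar"
export AWS_DEFAULT_REGION="us-east-1"
```

With these set, the provider block in your configuration only needs the region, and no secrets ever touch your version-controlled files.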

Set up SSH credentials for your cluster

Terraform needs an SSH key pair so that it can connect securely to the nodes it creates. If you already have a key pair available and added to your SSH agent, you can skip this section.

  1. Not sure whether you have a key pair you want to use? List the contents of your SSH directory.

    ls ~/.ssh
  2. If you don’t have one you like, run ssh-keygen to create a new key pair, following the prompts.

    ssh-keygen -t rsa
  3. Add the key to your SSH agent, starting the agent first if it is not already running:

    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/<your-key-name>
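The key-generation step can also be scripted. A sketch that creates a throwaway RSA key pair non-interactively (the file name demo_key and the empty passphrase are illustrative choices for a demo, not something to use for a real cluster):

```shell
# Generate a 4096-bit RSA key pair with no passphrase, without prompting.
ssh-keygen -t rsa -b 4096 -f ./demo_key -N "" -q

# Show the fingerprint of the newly created public key.
ssh-keygen -lf ./demo_key.pub
```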

Verify you have a license key for Enterprise Edition

DC/OS Enterprise requires a valid license key provided by Mesosphere, which is passed into the main.tf configuration file as dcos_license_key_contents. If you do not set a superuser password, the default superuser credentials will be available for login:

Username: bootstrapuser
Password: deleteme

IMPORTANT: You should NOT use the default credentials in a production environment. When you create or identify an administrative account for the production environment, you also need to generate a password hash for the account.

To set superuser credentials for the first login, add the following values to your main.tf along with your license key. The password must be hashed with SHA-512.

    dcos_superuser_username      = "superuser-name"
    dcos_superuser_password_hash = "${file("./dcos_superuser_password_hash")}"
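The DC/OS Enterprise installer can generate the password hash for you. If you want to create the hash file by hand, one option is shown below; it assumes the crypt-style SHA-512 format, requires OpenSSL 1.1.1 or newer for the `-6` flag, and uses an example password you should replace:

```shell
# Write a SHA-512 crypt hash of the chosen password to the file
# referenced in main.tf. "deleteme" is only an example password.
openssl passwd -6 "deleteme" > dcos_superuser_password_hash

# SHA-512 crypt hashes start with the $6$ prefix.
cat dcos_superuser_password_hash
```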

Creating a DC/OS Cluster

  1. Let’s start by creating a local folder and cd’ing into it. This folder will be used as the staging ground for downloading all required Terraform modules and holding the configuration for the cluster you are about to create.

    mkdir dcos-demo && cd dcos-demo
  2. Create a file in that folder called main.tf; this is the configuration file Terraform reads each time it runs. The name of this file should always be main.tf. Open the file in the code editor of your choice and paste in the following:

    provider "aws" {
      # Change your default region here
      region = "us-east-1"
    }

    # Used to determine your public IP for forwarding rules
    data "http" "whatismyip" {
      url = "http://whatismyip.akamai.com/"
    }

    module "dcos" {
      source  = "dcos-terraform/dcos/aws"
      version = "~> 0.3.0"

      providers = {
        aws = aws
      }

      cluster_name        = "my-dcos-demo"
      ssh_public_key_file = "<path-to-public-key-file>"
      admin_ips           = ["${data.http.whatismyip.body}/32"]

      num_masters        = 3
      num_private_agents = 2
      num_public_agents  = 1

      dcos_version = "2.1"

      # dcos_variant              = "ee"
      # dcos_license_key_contents = "${file("./license.txt")}"
      # Make sure to set your credentials if you do not want the default EE
      # dcos_superuser_username      = "superuser-name"
      # dcos_superuser_password_hash = "${file("./dcos_superuser_password_hash.sha512")}"
      dcos_variant = "open"

      dcos_instance_os             = "centos_7.5"
      bootstrap_instance_type      = "m5.large"
      masters_instance_type        = "m5.2xlarge"
      private_agents_instance_type = "m5.xlarge"
      public_agents_instance_type  = "m5.xlarge"
    }

    output "masters-ips" {
      value = module.dcos.masters-ips
    }

    output "cluster-address" {
      value = module.dcos.masters-loadbalancer
    }

    output "public-agents-loadbalancer" {
      value = module.dcos.public-agents-loadbalancer
    }
  3. There is one main variable that must be set to complete the main.tf:

    • ssh_public_key_file = "<path-to-public-key-file>": the path to the public key for your cluster. Following our example, it would be:

      "~/.ssh/aws-key.pub"
  4. region sets the AWS region in which this DC/OS cluster will be created. While it is currently set to “us-east-1”, it can be changed to any other region (e.g. “us-west-1”, “us-west-2”, “us-east-2”). For a complete list, please refer to the configuration reference.

  5. The bootstrap_instance_type, masters_instance_type, private_agents_instance_type, and public_agents_instance_type variables control which AWS instance type will be used for each node type. What instance types are available can vary by region and change over time. Ensure the instance types you select meet DC/OS’ minimum system requirements.

  6. Enterprise users: uncomment/comment the dcos_variant section so it looks like the following, inserting the path to your license key and adding superuser credentials if needed.

    dcos_variant              = "ee"
    dcos_license_key_contents = "${file("./license.txt")}"
    # dcos_variant = "open"
  7. This sample configuration file will get you started on the installation of an open source DC/OS 2.1 cluster with the following nodes:

    • 3 Masters
    • 2 Private Agents
    • 1 Public Agent

    If you want to change the cluster name or vary the number of masters/agents, feel free to adjust those values now as well. Cluster names must be unique, consist of alphanumeric characters, ‘-’, ‘_’ or ‘.’, start and end with an alphanumeric character, and be no longer than 24 characters. You can find additional input variables and their descriptions here.

    There are also simple helpers listed underneath the module which determine your public IP and specify the output that should be printed once cluster creation is complete:

    • masters-ips: a list of the IP addresses of your DC/OS master nodes
    • cluster-address: the URL you use to access the DC/OS UI after the cluster is set up
    • public-agents-loadbalancer: the URL of your publicly routable services
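The cluster-name rules above can be expressed as a small shell check. The helper name and regular expression below are illustrative, derived directly from those rules rather than taken from the installer itself:

```shell
# Return success if the candidate name satisfies the documented rules:
# alphanumerics plus '-', '_', '.', alphanumeric at both ends, <= 24 chars.
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]{0,22}[A-Za-z0-9])?$'
}

valid_cluster_name "my-dcos-demo" && echo "valid"
valid_cluster_name "-bad-name" || echo "invalid"
```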
  8. Check that you have configured your cloud provider credentials and inserted your public key path in main.tf, changed or added any other variables as desired, then save and close your file.

Initialize Terraform and create a cluster

  1. Now the action of actually creating your cluster and installing DC/OS begins. First, initialize the project’s local settings and data. Make sure you are still working in the same folder where you created your main.tf file, and run the initialization.

    terraform init -upgrade

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your environment. If you forget, other
    commands will detect it and remind you to do so if necessary.

    Note: If Terraform is unable to connect to your provider, ensure that you are logged in and are exporting your credentials and any necessary region information for your cloud provider.

  2. After Terraform has been initialized, the next step is to run the execution planner and save the plan to a static file - in this case, plan.out.

    terraform plan -out=plan.out

    Writing the execution plan to a file allows us to pass the execution plan to the apply command below, as well as help us guarantee the accuracy of the plan. Note that this file is readable only by Terraform.

    Afterwards, we should see a message like the one below, confirming that we have successfully saved to the plan.out file. This file should appear in your dcos-demo folder alongside main.tf.

    Plan: 97 to add, 0 to change, 0 to destroy.

    ------------------------------------------------------------------------

    This plan was saved to: plan.out

    To perform exactly these actions, run the following command to apply:
        terraform apply "plan.out"

    Every time you run terraform plan, the output will detail the resources your plan will be adding, changing, or destroying. Since we are creating our DC/OS cluster for the very first time, our output tells us that our plan will result in adding 97 pieces of infrastructure/resources.

  3. The next step is to get Terraform to build/deploy our plan. Run the command below.

    terraform apply plan.out

    Sit back and enjoy! The infrastructure of your DC/OS cluster is being created while you watch. This may take a few minutes.

    Once Terraform has completed applying the plan, you should see output similar to the following:

    Apply complete! Resources: 97 added, 0 changed, 0 destroyed.

    The state of your infrastructure has been saved to the path
    below. This state is required to modify and destroy your
    infrastructure, so keep it safe. To inspect the complete state
    use the `terraform show` command.

    State path: terraform.tfstate

    Outputs:

    cluster-address = my-dcos-demo-705740393.us-east-1.elb.amazonaws.com
    masters-ips = [
        "3.83.23.21",
        "18.212.236.17",
        "54.91.103.151",
    ]
    public-agents-loadbalancer = ext-my-dcos-demo-cd20e369b92d07f7.elb.us-east-1.amazonaws.com

    And congratulations - you’re up and running!

Logging in to DC/OS

  1. To log in and start exploring your cluster, navigate to the cluster-address listed in the output of the CLI. From there you can choose your identity provider to create the superuser account (Open Source), or log in with your specified superuser credentials (Enterprise).

Scaling Your Cluster

Terraform makes it easy to scale your cluster to add additional agents (public or private) once the initial cluster has been created. Simply follow the instructions below.

  1. Increase the value for the num_private_agents and/or num_public_agents in your main.tf file. In this example we are going to scale our cluster from 2 private agents to 3, changing just that line, and saving the file.

    num_masters        = 3
    num_private_agents = 3
    num_public_agents  = 1
  2. Now that we’ve made changes to our main.tf, we need to re-run our new execution plan.

    terraform plan -out=plan.out

    Doing this helps us to ensure that our state is stable and to confirm that we will only be creating the resources necessary to scale our Private Agents to the desired number.

    The plan summary should report 3 resources to add as a result of scaling up our cluster’s Private Agents (1 instance resource and 2 null resources, which handle the DC/OS installation and prerequisites behind the scenes).

  3. Now that our plan is set, just like before, let’s get Terraform to build/deploy it.

    terraform apply plan.out

    Once the apply completes, check your DC/OS cluster to ensure the additional agents have been added.

    You should now see 4 total agent nodes connected in the DC/OS UI.

Upgrading Your Cluster

Terraform also makes it easy to upgrade our cluster to a newer version of DC/OS. If you are interested in learning more about the upgrade procedure that Terraform performs, please see the official DC/OS Upgrade documentation.

  1. In order to perform an upgrade, go back to main.tf and change the DC/OS version (dcos_version) to the newer target version, for example:

    dcos_version = "2.1"
  2. We should also make sure we have the latest version of the Terraform modules, so tell Terraform to fetch them from the registry.

    terraform get -update
  3. Re-run the execution plan; Terraform will detect the version change and plan the upgrade accordingly.

    terraform plan -out=plan.out

    The plan output should show the version change, leaving your main.tf set for normal operations on the new version of DC/OS.

  4. Apply the plan.

    terraform apply plan.out

    Once the apply completes, you can verify that the cluster was upgraded via the DC/OS UI.

Deleting Your Cluster

If you want to destroy your cluster, then use the following command and wait for it to complete.

    terraform destroy

Important: Running this command will cause your entire cluster and all of its associated resources to be destroyed. Only run this command if you are absolutely sure you no longer need access to your cluster.

You will be required to enter yes to confirm.

[Multi-Region DC/OS on AWS using the Universal Installer]($5a037c4b3008be49.md) (Enterprise)
Guide for DC/OS on AWS using the Universal Installer adding a remote region.

[Replaceable masters on AWS using the Universal Installer]($7d8e99e3c98d6528.md)
Replaceable Masters on AWS using the Universal Installer.

[Configuration Reference - AWS]($1ed375a183edc4d1.md)
Configuring your DC/OS installation on AWS using the Universal Installer.

[Universal Installer FAQ & Troubleshooting Guide]($9c65e917b6381401.md)
FAQ and Common Issues with Universal Installer.