Production Installation

Installing production-ready DC/OS

This page outlines how to install DC/OS for production. Using this method, you can package the DC/OS distribution and connect to every node manually to run the DC/OS installation commands. This installation method is recommended if you want to integrate with an existing system or if you do not have SSH access to your cluster.

The DC/OS installation process requires a bootstrap node, master nodes, and public and private agent nodes. See the nodes documentation for more information.

Production Installation Process

The following steps are required to install DC/OS clusters:

  1. Configure the bootstrap node
  2. Install DC/OS on the master nodes
  3. Install DC/OS on the agent nodes

Figure 1. The production installation process

This installation method requires the following:

  • The bootstrap node must be network accessible from the cluster nodes.
  • The bootstrap node must have the HTTP(S) ports open from the cluster nodes.

The DC/OS installation creates the following folders:

| Folder | Description |
|--------|-------------|
| /opt/mesosphere | Contains the DC/OS binaries, libraries, and cluster configuration. Do not modify. |
| /etc/systemd/system/dcos.target.wants | Contains the systemd services that start the systemd components. They must be located outside of /opt/mesosphere because of systemd constraints. |
| /etc/systemd/system/dcos.<units> | Contains copies of the units in /etc/systemd/system/dcos.target.wants. They must exist at the top level as well as inside dcos.target.wants. |
| /var/lib/dcos/exhibitor/zookeeper | Contains the ZooKeeper data. |
| /var/lib/docker | Contains the Docker data. |
| /var/lib/dcos | Contains the DC/OS data. |
| /var/lib/mesos | Contains the Mesos data. |

WARNING: Changes to /opt/mesosphere are unsupported. They can lead to unpredictable behavior in DC/OS and prevent upgrades.

Prerequisites

Before installing DC/OS, your cluster must meet the software and hardware requirements.

Configure your cluster

  1. Create a directory named genconf in your home directory on your bootstrap node.

    mkdir -p genconf

Store license file Enterprise

  1. Create a license file containing the license text received in the email from your Authorized Support Contact and save it as genconf/license.txt.

Create an IP detection script

In this step, you create an IP detection script that reports the IP address of each node across the cluster. Each node in a DC/OS cluster has a unique IP address that is used for communication between nodes in the cluster. The IP detection script prints the node's unique IPv4 address to STDOUT each time DC/OS is started on the node.

NOTE: The IP address of a node must not change after DC/OS is installed on the node. For example, the IP address should not change when a node is rebooted or if the DHCP lease is renewed. If the IP address of a node does change, the node must be uninstalled.

NOTE: The script must return the same IP address as specified in the config.yaml. For example, if the private master IP is specified as 10.2.30.4 in the config.yaml, your script should return this same value when run on the master.

  1. Create an IP detection script for your environment and save it as genconf/ip-detect. This script must be UTF-8 encoded and have a valid shebang line. You can use the examples below.

    • Use the AWS Metadata Server

      This method uses the AWS Metadata service to get the IP address:

      #!/bin/sh
      # Example ip-detect script using an external authority
      # Uses the AWS Metadata Service to get the node's internal
      # ipv4 address
      curl -fsSL http://169.254.169.254/latest/meta-data/local-ipv4
    • Use the GCE Metadata Server

      This method uses the GCE Metadata Server to get the IP address:

      #!/bin/sh
      # Example ip-detect script using an external authority
      # Uses the GCE metadata server to get the node's internal
      # ipv4 address
      curl -fsSL -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip
    • Use the IP address of an existing interface

      This method discovers the IP address of a particular interface of the node.

      If you have multiple generations of hardware with different internals, the interface names can change between hosts. The IP detection script must account for the interface name changes. The example script can also be confused if you attach multiple IP addresses to a single interface or use complex Linux networking.

      #!/usr/bin/env bash
      set -o nounset -o errexit
      export PATH=/usr/sbin:/usr/bin:$PATH
      echo $(ip addr show eth0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
    • Use the network route to the Mesos master

      This method inspects the route to a Mesos master to determine the source IP address the node uses to communicate with that master.

      In this example, we assume that the Mesos master has an IP address of 172.28.128.3. You can use any language for this script. Your shebang line must point at the correct environment for the language used, and the output must be the correct IP address.

      Enterprise

  #!/usr/bin/env bash
  set -o nounset -o errexit
  MASTER_IP="172.28.128.3"
  echo $(ip route show to match $MASTER_IP | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | tail -1)

Open Source

  #!/usr/bin/env bash
  set -o nounset -o errexit -o pipefail
  export PATH=/sbin:/usr/sbin:/bin:/usr/bin:$PATH
  MASTER_IP=$(dig +short master.mesos || true)
  MASTER_IP=${MASTER_IP:-172.28.128.3}
  INTERFACE_IP=$(ip r g ${MASTER_IP} | \
  awk -v master_ip=${MASTER_IP} '
    BEGIN { ec = 1 }
    {
      if($1 == master_ip) {
        print $7
        ec = 0
      } else if($1 == "local") {
        print $6
        ec = 0
      }
      if (ec == 0) exit;
    }
    END { exit ec }
  ')
  echo $INTERFACE_IP
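Whichever example you adapt, it is worth running the script on a node before installation to confirm it prints exactly one IPv4 address. A minimal check (a sketch, assuming the script has been copied to a node that appears in your config.yaml):

    # Run the candidate script on the node itself; the single line of output
    # must match that node's entry in config.yaml (for example, 10.2.30.4).
    bash ip-detect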

Create a fault domain detection script Enterprise

By default, DC/OS clusters have fault domain awareness enabled, so no changes to your config.yaml are required to use this feature. However, you must include a fault domain detection script named fault-domain-detect in your ./genconf directory. To opt out of fault domain awareness, set the fault_domain_enabled parameter of your config.yaml file to false.

  1. Create a fault domain detect script named fault-domain-detect to run on each node to detect the node’s fault domain. During installation, the output of this script is passed to Mesos.

    We recommend a script like this:

    #!/bin/sh
    REGION="<enter region name>"
    ZONE="<enter zone name>"
    echo "{
      \"fault_domain\": {
        \"region\": {
          \"name\": \"${REGION}\"
        },
        \"zone\": {
          \"name\": \"${ZONE}\"
        }
      }
    }"

    We provide fault domain detection scripts for AWS and Azure nodes. For a cluster that has both AWS and Azure nodes, you would combine the two into one script. You can use these as models for creating a fault domain detection script for an on-premises cluster.

    IMPORTANT: This script will not work if you use proxies in your environment. If you use a proxy, modifications will be required.

  2. Add your newly created fault-domain-detect script to the genconf directory of your bootstrap node.
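For reference, on an AWS node the region and zone can be derived from the EC2 instance metadata service instead of being hard-coded. A minimal sketch (assuming a plain EC2 node with standard availability zone names, the metadata service reachable, and no proxy in the path):

    #!/bin/sh
    # Read the availability zone (e.g. us-west-2a) from EC2 instance metadata,
    # then strip the trailing letter to obtain the region (e.g. us-west-2).
    ZONE=$(curl -fsSL http://169.254.169.254/latest/meta-data/placement/availability-zone)
    REGION=$(echo "${ZONE}" | sed 's/[a-z]$//')
    echo "{\"fault_domain\":{\"region\":{\"name\":\"${REGION}\"},\"zone\":{\"name\":\"${ZONE}\"}}}"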

Create a configuration file

In this step, you create a YAML configuration file that is customized for your environment. DC/OS uses this configuration file during installation to generate your cluster installation files.

Set up a super user password Enterprise

In the following instructions, we assume that you are using ZooKeeper for shared storage.

  1. From the bootstrap node, run this command to create a hashed password for superuser authentication, where <superuser_password> is the superuser password.

     sudo bash dcos_generate_config.ee.sh --hash-password <superuser_password>

     Here is an example of a hashed password output.

       Extracting an image from this script and loading it into a docker daemon, can take a few minutes.
       dcos-genconf.9eda4ae45de5488c0c-c40556fa73a00235f1.tar
       Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/centos/genconf
       00:42:10 dcos_installer.action_lib.prettyprint:: ====> HASHING PASSWORD TO SHA512
       00:42:11 root:: Hashed password for 'password' key:
       $6$rounds=656000$v55tdnlMGNoSEgYH$1JAznj58MR.Bft2wd05KviSUUfZe45nsYsjlEl84w34pp48A9U2GoKzlycm3g6MBmg4cQW9k7iY4tpZdkWy9t1

  2. Save the hashed password for use in the superuser_password_hash parameter in your config.yaml file.
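If you want to double-check that a hash corresponds to a given password, you can recompute it from the salt embedded in the hash. A minimal sketch (assuming Python 3 on Linux with the standard crypt module, which is present in Python versions before 3.13):

    # Recompute the SHA-512 crypt hash from the salt portion of the stored value;
    # the printed result should match the full stored hash exactly.
    python3 -c 'import crypt; print(crypt.crypt("password", "$6$rounds=656000$v55tdnlMGNoSEgYH"))'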

Create the configuration

  1. Create a configuration file and save as genconf/config.yaml. You can use this template to get started.

The Enterprise template specifies three Mesos masters, a static master discovery list, an internal storage backend for Exhibitor, a custom proxy, a security mode, and cloud-specific DNS resolvers. Enterprise

The Open Source template specifies three Mesos masters, a static master discovery list, an internal storage backend for Exhibitor, a custom proxy, and cloud-specific DNS resolvers. Open Source

If your servers are installed with a domain name in your /etc/resolv.conf, add the dns_search parameter. For parameter descriptions and configuration examples, see the documentation.

NOTE: If the AWS internal DNS server (169.254.169.253) used in the resolvers setting is not available in your environment, you can replace it with DNS servers that are, such as your local DNS servers or the Google public DNS servers 8.8.8.8 and 8.8.4.4.

NOTE: If you specify master_discovery: static, you must also create a script to map internal IPs to public IPs on your bootstrap node (for example, genconf/ip-detect-public). This script is then referenced in ip_detect_public_filename: "relative-path-from-dcos-generate-config.sh".
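For example, on AWS such a script can query the instance metadata service for the node's public address. A minimal sketch (assuming the node actually has a public IPv4 address assigned):

    #!/bin/sh
    # Example genconf/ip-detect-public script: print this node's public IPv4
    # address using the EC2 instance metadata service.
    curl -fsSL http://169.254.169.254/latest/meta-data/public-ipv4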

NOTE: In AWS, or any other environment where you cannot control a node's IP address, master_discovery must be set to master_http_load_balancer, and a load balancer must be set up.

Enterprise template Enterprise

  bootstrap_url: http://<bootstrap_ip>:80
  cluster_name: <cluster-name>
  exhibitor_storage_backend: static
  master_discovery: static
  ip_detect_public_filename: <relative-path-to-ip-script>
  master_list:
  - <master-private-ip-1>
  - <master-private-ip-2>
  - <master-private-ip-3>
  resolvers:
  - 169.254.169.253
  # Choose your security mode: permissive or strict
  security: <security-mode>
  superuser_password_hash: <hashed-password> # Generated above
  superuser_username: <username> # This can be whatever you like
  # A custom proxy is optional. For details, see the configuration documentation.
  use_proxy: 'true'
  http_proxy: http://<user>:<pass>@<proxy_host>:<http_proxy_port>
  https_proxy: https://<user>:<pass>@<proxy_host>:<https_proxy_port>
  no_proxy:
  - 'foo.bar.com'
  - '.baz.com'
  fault_domain_enabled: false
  # If IPv6 is disabled in your kernel, you must disable it in the config.yaml
  enable_ipv6: 'false'

Open Source template Open Source

  bootstrap_url: http://<bootstrap_ip>:80
  cluster_name: <cluster-name>
  exhibitor_storage_backend: static
  master_discovery: static
  ip_detect_public_filename: <relative-path-to-ip-script>
  master_list:
  - <master-private-ip-1>
  - <master-private-ip-2>
  - <master-private-ip-3>
  resolvers:
  - 169.254.169.253
  use_proxy: 'true'
  http_proxy: http://<user>:<pass>@<proxy_host>:<http_proxy_port>
  https_proxy: https://<user>:<pass>@<proxy_host>:<https_proxy_port>
  no_proxy:
  - 'foo.bar.com'
  - '.baz.com'

Create a bootstrap pre-shared key (Optional) Enterprise

For additional security, create a random pre-shared key. This key is used to authenticate requests very early in the installation process; it will later be transferred to your master nodes and must be present on your bootstrap node at genconf/ca/psk for the duration of the installation.

  mkdir genconf/ca
  cat /dev/urandom | tr -dc 'a-z' | fold -w 16 | head -n1 > genconf/ca/psk
  chmod 600 genconf/ca/psk

Install DC/OS

In this step, you create a custom DC/OS build file on your bootstrap node and then install DC/OS onto your cluster. With this method, you:

  1. Package the DC/OS distribution yourself.
  2. Connect to every server manually.
  3. Run the installation commands.

NOTE: Due to a cluster configuration issue with overlay networks, we recommend setting enable_ipv6 to 'false' in config.yaml when upgrading or configuring a new cluster. If you have already upgraded to DC/OS 1.12.x without configuring enable_ipv6, or if config.yaml has enable_ipv6 set to 'true', do not add new nodes.

You can find additional information and a more detailed remediation procedure in our latest critical product advisory. Enterprise

IMPORTANT: Do not install DC/OS until you have these items working: ip-detect script, DNS, and NTP on all DC/OS nodes with time synchronized. See troubleshooting for more information.

NOTE: If something goes wrong and you want to rerun your setup, use the cluster uninstall instructions.

Prerequisites

  • A genconf/config.yaml file that is optimized for manual distribution of DC/OS across your nodes.
  • A genconf/license.txt file containing your DC/OS Enterprise license. Enterprise
  • A genconf/ip-detect script.

The term dcos_generate_config file refers to either a dcos_generate_config.ee.sh file or dcos_generate_config.sh file, based on whether you are using the Enterprise or Open Source version of DC/OS.

  • Download and save the dcos_generate_config file to your bootstrap node. This file is used to create your customized DC/OS build file. Contact your sales representative or sales@mesosphere.com for access to this file. Enterprise

    OR

  • Download and save the dcos_generate_config file to your bootstrap node. This file is used to create your customized DC/OS build file. Open Source

    curl -O https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh
  1. From the bootstrap node, run the DC/OS installer shell script to generate a customized DC/OS build file. The setup script extracts a Docker container that uses the generic DC/OS install files to create customized DC/OS build files for your cluster. The build files are output to ./genconf/serve/.

    You can view all of the automated command line installer options with:

    • dcos_generate_config.ee.sh --help (Enterprise), or
    • dcos_generate_config.sh --help (Open Source).
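The installer can also check your configuration before generating anything. A quick sanity check (Open Source filename shown; use dcos_generate_config.ee.sh for Enterprise):

    # Validate genconf/config.yaml and genconf/ip-detect without building.
    sudo bash dcos_generate_config.sh --validate-config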

Enterprise

  sudo bash dcos_generate_config.ee.sh

At this point your directory structure should resemble:

  ├── dcos-genconf.c9722490f11019b692-cb6b6ea66f696912b0.tar
  ├── dcos_generate_config.ee.sh
  └── genconf
      ├── config.yaml
      ├── ip-detect
      └── license.txt

Open Source

  sudo bash dcos_generate_config.sh

At this point your directory structure should resemble:

  ├── dcos-genconf.<HASH>.tar
  ├── dcos_generate_config.sh
  └── genconf
      ├── config.yaml
      └── ip-detect

  • For the install script to work, you must have created genconf/config.yaml and genconf/ip-detect.
  2. From your home directory, run the following command to host the DC/OS install package through an NGINX Docker container. For <your-port>, specify the port value that is used in the bootstrap_url.

    sudo docker run -d -p <your-port>:80 -v $PWD/genconf/serve:/usr/share/nginx/html:ro nginx
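Before moving on to the cluster nodes, you can confirm that the package is actually being served. A quick check (a sketch, run from any machine that can reach the bootstrap node):

    # A 200 response means the install script is available to the cluster nodes.
    curl -I http://<bootstrap-ip>:<your-port>/dcos_install.sh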
  3. Run the following commands on each of your master nodes in succession to install DC/OS using your custom build file:

    • If created, copy the pre-shared key to your master nodes at /var/lib/dcos/.dcos-bootstrap-ca-psk

      scp -p genconf/ca/psk <master-ip>:/var/lib/dcos/.dcos-bootstrap-ca-psk

    • SSH to your master nodes.

      ssh <master-ip>

    • Make a new directory and navigate to it.

      mkdir /tmp/dcos && cd /tmp/dcos

    • Download the DC/OS installer from the NGINX Docker container, where <bootstrap-ip> and <your-port> are specified in bootstrap_url.

      curl -O http://<bootstrap-ip>:<your-port>/dcos_install.sh

    • Run the following command to install DC/OS on your master nodes.

      sudo bash dcos_install.sh master
  NOTE: Although there is no actual harm to your cluster, DC/OS may issue error messages until all of your master nodes are configured.
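While the masters come up, you can watch them converge by polling Exhibitor on any master node (a sketch; the Exhibitor UI is also served at http://<master-ip>:8181/exhibitor/v1/ui/index.html):

    # Each master should eventually report a "serving" state in the cluster status.
    curl -fsSL http://<master-ip>:8181/exhibitor/v1/cluster/status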
  4. Run the following commands on each of your agent nodes to install DC/OS using your custom build file:

    • SSH to your agent nodes.

      ssh <agent-ip>

    • Make a new directory and navigate to it.

      mkdir /tmp/dcos && cd /tmp/dcos

    • Download the DC/OS installer from the NGINX Docker container, where <bootstrap-ip> and <your-port> are specified in bootstrap_url.

      curl -O http://<bootstrap-ip>:<your-port>/dcos_install.sh

    • Run this command to install DC/OS on your agent nodes. You must designate your agent nodes as public or private agent nodes.

      • Private agent nodes:

        sudo bash dcos_install.sh slave

      • Public agent nodes:

        sudo bash dcos_install.sh slave_public
  NOTE: If you encounter errors such as Time is marked as bad, adjtimex, or Time not in sync in journald, verify that Network Time Protocol (NTP) is enabled on all nodes. For more information, see the system requirements documentation.
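With a large number of agents, these per-node steps are often scripted from the bootstrap node. A minimal sketch (assuming passwordless SSH and passwordless sudo on the agents; the IP lists are hypothetical placeholders):

    #!/usr/bin/env bash
    # Install DC/OS on each agent over SSH; adjust the lists for your cluster.
    PRIVATE_AGENTS="10.0.0.11 10.0.0.12"   # hypothetical private agent IPs
    PUBLIC_AGENTS="10.0.0.21"              # hypothetical public agent IPs
    for ip in $PRIVATE_AGENTS; do
      ssh "$ip" "mkdir -p /tmp/dcos && cd /tmp/dcos && \
        curl -O http://<bootstrap-ip>:<your-port>/dcos_install.sh && \
        sudo bash dcos_install.sh slave"
    done
    for ip in $PUBLIC_AGENTS; do
      ssh "$ip" "mkdir -p /tmp/dcos && cd /tmp/dcos && \
        curl -O http://<bootstrap-ip>:<your-port>/dcos_install.sh && \
        sudo bash dcos_install.sh slave_public"
    done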
  5. Monitor the DC/OS web interface and wait for it to display at http://<master-node-public-ip>/.

    NOTE: This process can take about 10 minutes.

    NOTE: After clicking Log In To DC/OS, your browser may show a warning that your connection is not secure. This is because DC/OS uses self-signed certificates. You can ignore this error and click to proceed.

If the panel does not load, take a look at the troubleshooting documentation.

  6. Enter your administrator username and password.

Figure 3. Sign-in dialog

You are done! The UI dashboard will now be displayed.

Figure 4. DC/OS UI dashboard

NOTE: You can also use Universal Installer to deploy DC/OS on AWS, Azure, or GCP in production.

Next Steps: Enterprise and Open Source users

You can find information on the next steps below.