Installation (cephadm)

A new Ceph cluster is deployed by bootstrapping a cluster on a single node, and then adding additional nodes and daemons via the CLI or GUI dashboard.

The following example installs a basic three-node cluster. Each node will be identified by its prompt. For example, “[monitor 1]” identifies the first monitor, “[monitor 2]” identifies the second monitor, and “[monitor 3]” identifies the third monitor. This information is provided in order to make clear which commands should be issued on which systems.

“[any node]” identifies any Ceph node, and in the context of this installation guide means that the associated command can be run on any node.

Get cephadm

The cephadm utility is used to bootstrap a new Ceph cluster.

Use curl to fetch the standalone script:

  [monitor 1] # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/master/src/cephadm/cephadm
  [monitor 1] # chmod +x cephadm

You can also get the utility by installing a package provided by your Linux distribution:

  [monitor 1] # apt install -y cephadm # or
  [monitor 1] # dnf install -y cephadm # or
  [monitor 1] # yum install -y cephadm # or
  [monitor 1] # zypper install -y cephadm
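
Either way, it is worth confirming that the utility runs before proceeding. A quick sanity check (use ./cephadm if you fetched the standalone script into the current directory, or cephadm if it was installed as a package):

  [monitor 1] # ./cephadm version # standalone script
  [monitor 1] # cephadm version # distribution package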

Bootstrap a new cluster

To create a new cluster, you need to know:

  • Which IP address to use for the cluster’s first monitor. This is normally just the IP for the first cluster node. If there are multiple networks and interfaces, be sure to choose one that will be accessible by any hosts accessing the Ceph cluster.

To bootstrap the cluster, run the following command:

  [monitor 1] $ sudo cephadm bootstrap --mon-ip <mon-ip>
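
For example, with a hypothetical monitor IP of 10.1.2.10 (substitute the address of your own first node):

  [monitor 1] $ sudo cephadm bootstrap --mon-ip 10.1.2.10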

This command does a few things:

  • Creates a monitor and manager daemon for the new cluster on the local host. A minimal configuration file needed to communicate with the new cluster is written to ceph.conf in the local directory.

  • A copy of the client.admin administrative (privileged!) secret key is written to ceph.client.admin.keyring in the local directory.

  • Generates a new SSH key, and adds the public key to the local root user’s /root/.ssh/authorized_keys file. A copy of the public key is written to ceph.pub in the local directory.

Interacting with the cluster

To interact with your cluster, start up a container that has all of the Ceph packages installed:

  [any node] $ sudo cephadm shell --config ceph.conf --keyring ceph.client.admin.keyring

The --config and --keyring arguments will bind those local files to the default locations in /etc/ceph inside the container to allow the ceph CLI utility to work without additional arguments. Inside the container, you can check the cluster status with:

  [ceph: root@monitor_1_hostname /]# ceph status

In order to interact with the Ceph cluster outside of a container (that is, from the command line), install the Ceph client packages and install the configuration and privileged administrator key in a global location:

  [any node] $ sudo apt install -y ceph-common # or,
  [any node] $ sudo dnf install -y ceph-common # or,
  [any node] $ sudo yum install -y ceph-common

  [any node] $ sudo install -m 0644 ceph.conf /etc/ceph/ceph.conf
  [any node] $ sudo install -m 0600 ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
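
With the packages, configuration file, and keyring in place, the ceph CLI should work directly on the host. For example (run via sudo, since the admin keyring installed above is readable only by root):

  [any node] $ sudo ceph status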

Adding hosts to the cluster

For each new host you’d like to add to the cluster, you need to do two things (a concrete example follows this list):

  • Install the cluster’s public SSH key in the new host’s root user’s authorized_keys file. For example:
  [monitor 1] # cat ceph.pub | ssh root@<newhost> tee -a /root/.ssh/authorized_keys
  • Tell Ceph that the new node is part of the cluster:
  [monitor 1] # ceph orchestrator host add <newhost>
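
For example, to add a hypothetical host named node2 (substitute your own hostname):

  [monitor 1] # cat ceph.pub | ssh root@node2 tee -a /root/.ssh/authorized_keys
  [monitor 1] # ceph orchestrator host add node2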

Deploying additional monitors

Normally a Ceph cluster has at least three (or, preferably, five) monitor daemons spread across different hosts. Since we are deploying a monitor, we again need to specify what IP address it will use, either as a simple IP address or as a CIDR network name.

To deploy additional monitors:

  [monitor 1] # ceph orchestrator mon update <new-num-monitors> <host1:network1> [<host2:network2>...]

For example, to deploy a second monitor on newhost using an IP address in network 10.1.2.0/24:

  [monitor 1] # ceph orchestrator mon update 2 newhost:10.1.2.0/24

Deploying OSDs

To add an OSD to the cluster, you need to know the device name for the block device (hard disk or SSD) that will be used. Then:

  [monitor 1] # ceph orchestrator osd create <host>:<path-to-device>

For example, to deploy an OSD on host newhost’s SSD:

  [monitor 1] # ceph orchestrator osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
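
If you are not sure which devices on a host are available, the orchestrator can usually list them; the exact command name varies by release (in newer releases it is ceph orch device ls):

  [monitor 1] # ceph orchestrator device ls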

Deploying manager daemons

It is a good idea to have at least one backup manager daemon. To deploy one or more new manager daemons:

  [monitor 1] # ceph orchestrator mgr update <new-num-mgrs> [<host1> ...]
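
For example, to run a second manager daemon on a hypothetical host named newhost (bringing the total to two):

  [monitor 1] # ceph orchestrator mgr update 2 newhost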

Deploying MDSs

In order to use the CephFS file system, one or more MDS daemons are needed.

TBD