Ceph Glossary

Ceph is growing rapidly. As firms deploy Ceph, technical terms such as “RADOS”, “RBD”, “RGW”, and so forth require corresponding marketing terms that explain what each component does. The terms in this glossary are intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first term reflects a term consistent with Ceph’s marketing, and secondary terms reflect either technical terms or legacy ways of referring to Ceph systems.

  • Ceph Project
  • The aggregate term for the people, software, mission and infrastructure of Ceph.

  • cephx

  • The Ceph authentication protocol. Cephx operates like Kerberos, but it has no single point of failure.
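
    Cephx is controlled through settings in ceph.conf; the following sketch
    shows the relevant options (these are the defaults in recent releases):

        [global]
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx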

  • Ceph

  • Ceph Platform
  • All Ceph software, which includes any piece of code hosted at https://github.com/ceph.

  • Ceph System

  • Ceph Stack
  • A collection of two or more components of Ceph.

  • Ceph Node

  • Node
  • Host
  • Any single machine or server in a Ceph System.

  • Ceph Storage Cluster

  • Ceph Object Store
  • RADOS
  • RADOS Cluster
  • Reliable Autonomic Distributed Object Store
  • The core set of storage software that stores the user’s data (MON + OSD).
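
    As a sketch of how a client talks to the Storage Cluster directly, the
    librados Python binding can store and fetch RADOS objects; the pool
    name below is a placeholder for an existing pool:

        import rados

        # Connect using the local ceph.conf and the default keyring.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()

        # Open an I/O context on an existing pool and round-trip an object.
        ioctx = cluster.open_ioctx('mypool')
        ioctx.write_full('greeting', b'hello rados')
        print(ioctx.read('greeting'))

        ioctx.close()
        cluster.shutdown()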

  • Ceph Cluster Map

  • Cluster Map
  • The set of maps comprising the monitor map, OSD map, PG map, MDS map and CRUSH map. See Cluster Map for details.

  • Ceph Object Storage

  • The object storage “product”, service or capabilities, which consists essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

  • Ceph Object Gateway

  • RADOS Gateway
  • RGW
  • The S3/Swift gateway component of Ceph.
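
    Because the gateway speaks the S3 API, any S3 client can talk to it; a
    minimal sketch with boto3, where the endpoint and credentials are
    placeholders (7480 is the gateway’s default port):

        import boto3

        s3 = boto3.client(
            's3',
            endpoint_url='http://rgw.example.com:7480',
            aws_access_key_id='ACCESS_KEY',
            aws_secret_access_key='SECRET_KEY',
        )
        s3.create_bucket(Bucket='demo-bucket')
        s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello rgw')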

  • Ceph Block Device

  • RBD
  • The block storage component of Ceph.
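
    A sketch using the rbd Python binding to create an image and write to
    it; the pool name 'rbd' and the image name are placeholders:

        import rados
        import rbd

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        ioctx = cluster.open_ioctx('rbd')

        # Create a 1 GiB image, then write at offset 0.
        rbd.RBD().create(ioctx, 'demo-image', 1024**3)
        with rbd.Image(ioctx, 'demo-image') as image:
            image.write(b'hello rbd', 0)

        ioctx.close()
        cluster.shutdown()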

  • Ceph Block Storage

  • The block storage “product,” service or capabilities when used in conjunction with librbd, a hypervisor such as QEMU or Xen, and a hypervisor abstraction layer such as libvirt.

  • Ceph File System

  • CephFS
  • Ceph FS
  • The POSIX filesystem components of Ceph. Refer to CephFS Architecture and Ceph File System for more details.
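
    A sketch using the cephfs Python binding; the path is a placeholder
    and the client is assumed to have file system access:

        import cephfs

        fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
        fs.mount()  # mount the root of the file system

        fd = fs.open('/hello.txt', 'w', 0o644)
        fs.write(fd, b'hello cephfs', 0)
        fs.close(fd)

        fs.shutdown()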

  • Cloud Platforms

  • Cloud Stacks
  • Third-party cloud provisioning platforms such as OpenStack, CloudStack, OpenNebula, Proxmox, etc.

  • Object Storage Device

  • OSD
  • A physical or logical storage unit (e.g., a LUN). Sometimes Ceph users use the term “OSD” to refer to the Ceph OSD Daemon, though the proper term is “Ceph OSD”.

  • Ceph OSD Daemon

  • Ceph OSD Daemons
  • Ceph OSD
  • The Ceph OSD software, which interacts with a logical disk (OSD). Sometimes Ceph users use the term “OSD” to refer to the “Ceph OSD Daemon”, though the proper term is “Ceph OSD”.

  • OSD id

  • The integer that identifies an OSD. It is generated by the monitors as part of the creation of a new OSD.

  • OSD fsid

  • A unique identifier that further improves the uniqueness of an OSD; it is found in the OSD path in a file called osd_fsid. The term fsid is used interchangeably with uuid.

  • OSD uuid

  • Just like the OSD fsid, this is the OSD’s unique identifier, and it is used interchangeably with fsid.

  • bluestore

  • BlueStore is a new back end for OSD daemons (Kraken and newer versions). Unlike filestore, it stores objects directly on the Ceph block devices without any file system interface.

  • filestore

  • A back end for OSD daemons, where a journal is needed and files are written to the filesystem.

  • Ceph Monitor

  • MON
  • The Ceph monitor software.

  • Ceph Manager

  • MGR
  • The Ceph manager software, which collects all the state from the whole cluster in one place.

  • Ceph Manager Dashboard

  • Ceph Dashboard
  • Dashboard Module
  • Dashboard Plugin
  • Dashboard
  • A built-in web-based Ceph management and monitoring application to administer various aspects and objects of the cluster. The dashboard is implemented as a Ceph Manager module. See Ceph Dashboard for more details.
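
    Like other manager modules, the dashboard is switched on through the
    ceph CLI, for example:

        ceph mgr module enable dashboard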

  • Ceph Metadata Server

  • MDS
  • The Ceph metadata software.

  • Ceph Clients

  • Ceph Client
  • The collection of Ceph components which can access a Ceph Storage Cluster. These include the Ceph Object Gateway, the Ceph Block Device, the Ceph File System, and their corresponding libraries, kernel modules, and FUSEs.

  • Ceph Kernel Modules

  • The collection of kernel modules which can be used to interact with the Ceph System (e.g., ceph.ko, rbd.ko).

  • Ceph Client Libraries

  • The collection of libraries that can be used to interact with components of the Ceph System.

  • Ceph Release

  • Any distinct numbered version of Ceph.

  • Ceph Point Release

  • Any ad hoc release that includes only bug or security fixes.

  • Ceph Interim Release

  • Versions of Ceph that have not yet been put through quality assurance testing, but may contain new features.

  • Ceph Release Candidate

  • A major version of Ceph that has undergone initial quality assurance testing and is ready for beta testers.

  • Ceph Stable Release

  • A major version of Ceph where all features from the preceding interim releases have been put through quality assurance testing successfully.

  • Ceph Test Framework

  • Teuthology
  • The collection of software that performs scripted tests on Ceph.

  • CRUSH

  • Controlled Replication Under Scalable Hashing. It is the algorithm Ceph uses to compute object storage locations.
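
    The following toy sketch illustrates the idea of deterministic,
    hash-based placement; it is not Ceph’s actual CRUSH implementation,
    which descends a weighted hierarchy of buckets (rows, racks, hosts):

        import hashlib

        def toy_place(object_name, pg_count, osds, replicas=3):
            # Hash the object name to a placement group. (Ceph uses a
            # Jenkins-style hash and a stable modulo, not SHA-256.)
            digest = hashlib.sha256(object_name.encode()).hexdigest()
            pg = int(digest, 16) % pg_count
            # Rank OSDs deterministically for this PG; any client doing
            # the same computation gets the same answer, so no central
            # lookup table is needed.
            ranked = sorted(
                osds,
                key=lambda o: hashlib.sha256(f'{pg}:{o}'.encode()).digest(),
            )
            return pg, ranked[:replicas]

        print(toy_place('hello.txt', 128, ['osd.0', 'osd.1', 'osd.2', 'osd.3']))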

  • CRUSH rule

  • The CRUSH data placement rule that applies to a particular pool or pools.

  • Pool

  • Pools
  • Pools are logical partitions for storing objects.
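
    A sketch of inspecting and creating pools with the rados Python
    binding (the pool name is a placeholder):

        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        print(cluster.list_pools())
        if not cluster.pool_exists('demo-pool'):
            cluster.create_pool('demo-pool')
        cluster.shutdown()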

  • systemd oneshot

  • A systemd service type in which the command defined in ExecStart runs to completion and then exits (it is not intended to daemonize).
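
    A minimal sketch of such a unit; the ExecStart command is only an
    example:

        [Unit]
        Description=Example one-off activation task

        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/ceph-volume lvm activate --all
        RemainAfterExit=yes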

  • LVM tags

  • Extensible metadata for LVM volumes and groups. It is used to store Ceph-specific information about devices and their relationship with OSDs.
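
    The tags that ceph-volume stores can be read back with standard LVM
    tooling; the tag names shown are typical of ceph-volume deployments:

        lvs -o lv_name,lv_tags
        # ... ceph.osd_id=0,ceph.osd_fsid=...,ceph.cluster_name=ceph ...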