Experimental Features
CephFS includes a number of experimental features which are not fully stabilized or qualified for users to turn on in real deployments. We generally do our best to clearly demarcate these and fence them off so they cannot be used by mistake.
Some of these features are closer to being done than others, though. We describe each of them with an approximation of how risky they are and briefly describe what is required to enable them. Note that enabling a feature will irrevocably flag the maps in the monitor as having once had that feature enabled; this is done to improve debugging and support processes.
Inline data
By default, all CephFS file data is stored in RADOS objects. The inline data feature enables small files (generally <2KB) to be stored in the inode and served out of the MDS. This may improve small-file performance but increases load on the MDS. It is not sufficiently tested to be supported at this time, although failures within it are unlikely to make non-inlined data inaccessible.
Inline data has always been off by default and requires setting the inline_data flag.
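As a minimal sketch, assuming the usual ceph fs set syntax (the file system name is a placeholder, and newer releases may additionally require an explicit confirmation flag):

    ceph fs set <fs_name> inline_data true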
Inline data has been declared deprecated for the Octopus release, and will likely be removed altogether in the Q release.
Mantle: Programmable Metadata Load Balancer
Mantle is a programmable metadata balancer built into the MDS. The idea is to protect the mechanisms for balancing load (migration, replication, fragmentation) but stub out the balancing policies using Lua. For details, see Mantle.
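As an illustration only (the policy file name below is the example balancer used in the Mantle documentation, and the exact installation procedure is described there), a Lua policy is typically activated by pointing the file system at it:

    ceph fs set <fs_name> balancer greedyspill.lua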
Snapshots
Like multiple active MDSes, CephFS is designed from the ground up to support snapshotting of arbitrary directories. There are no known bugs at the time of writing, but there is insufficient testing to provide stability guarantees and every expansion of testing has generally revealed new issues. If you do enable snapshots and experience failure, manual intervention will be needed.
Snapshots are known not to work properly with multiple file systems (below) in some cases. Specifically, if you share a pool for multiple FSes and delete a snapshot in one FS, expect to lose snapshotted file data in any other FS using snapshots. See the CephFS Snapshots page for more information.
For somewhat obscure implementation reasons, the kernel client only supports up to 400 snapshots (http://tracker.ceph.com/issues/21420).
Snapshotting was blocked off with the allow_new_snaps flag prior to Mimic.
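For example, on a pre-Mimic cluster the enablement step looked roughly like the following (the exact form varied by release, and some older releases used ceph mds set rather than ceph fs set, so treat this as a sketch):

    ceph fs set <fs_name> allow_new_snaps true --yes-i-really-mean-it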
Multiple file systems within a Ceph cluster
Code was merged prior to the Jewel release which enables administrators to create multiple independent CephFS file systems within a single Ceph cluster. These independent file systems have their own set of active MDSes, cluster maps, and data. But the feature required extensive changes to data structures which are not yet fully qualified, and has security implications which are not all apparent nor resolved.
There are no known bugs, but any failures which do result from having multiple active file systems in your cluster will require manual intervention and, so far, will not have been experienced by anybody else – knowledgeable help will be extremely limited. You also probably do not have the security or isolation guarantees you want or think you have upon doing so.
Note that snapshots and multiple file systems are not tested in combination and may not work together; see above.
Multiple file systems were available starting in the Jewel release candidates but must be turned on via the enable_multiple flag until declared stable.
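As a sketch of the enablement and creation steps (the file system and pool names are placeholders):

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph fs new <second_fs_name> <metadata_pool> <data_pool>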
LazyIO
LazyIO relaxes POSIX semantics. Buffered reads/writes are allowed even when a file is opened by multiple applications on multiple clients. Applications are responsible for managing cache coherency themselves.
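One way to experiment with this, assuming the client_force_lazyio client option (which enables LazyIO globally for libcephfs and ceph-fuse clients; per-file control is also available through the libcephfs ceph_lazyio() call), is a ceph.conf entry such as:

    [client]
        client_force_lazyio = true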