NFS
New in version Jewel.
Ceph Object Gateway namespaces can now be exported over file-based access protocols such as NFSv3 and NFSv4, alongside traditional HTTP access protocols (S3 and Swift).
In particular, the Ceph Object Gateway can now be configured to provide file-based access when embedded in the NFS-Ganesha NFS server.
librgw
The librgw.so shared library (Unix) provides a loadable interface to Ceph Object Gateway services, and instantiates a full Ceph Object Gateway instance on initialization.
In turn, librgw.so exports rgw_file, a stateful API for file-oriented access to RGW buckets and objects. The API is general, but its design is strongly influenced by the File System Abstraction Layer (FSAL) API of NFS-Ganesha, for which it has been primarily designed.
A set of Python bindings is also provided.
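As a loose sketch only, creating and writing a file through librgw from Python might look like the following. The LibRGWFS class and the mount/mkdir/create/write/close/unmount method names are assumptions drawn from the python-rgw bindings; verify names and signatures against the rgw module shipped with your Ceph release:

    # Hedged sketch of the python-rgw bindings; the class and method names
    # (LibRGWFS, mount, mkdir, create, write, close, unmount) are assumptions;
    # verify them against the rgw module shipped with your Ceph release.
    from rgw import LibRGWFS

    fs = LibRGWFS("{s3-user-id}", "{s3-access-key}", "{s3-secret}")
    root = fs.mount()                            # attach the RGW namespace
    bucket = fs.mkdir(root, "mybucket", 0o777)   # top-level directory = S3 bucket
    fh = fs.create(bucket, "hello.txt", 0o666)   # file = S3 object "hello.txt"
    fs.write(fh, 0, b"hello from librgw\n")      # sequential write from offset 0
    fs.close(fh)                                 # finalizes the upload
    fs.unmount()

As with NFS access (see Supported Operations below), writes must be full and sequential, and the upload is finalized on close.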
Namespace Conventions
The implementation conforms to Amazon Web Services (AWS) hierarchical namespace conventions which map UNIX-style path names onto S3 buckets and objects.
The top level of the attached namespace consists of S3 buckets, represented as NFS directories. Files and directories subordinate to buckets are each represented as objects, following S3 prefix and delimiter conventions, with ‘/’ being the only supported path delimiter [1].
For example, if an NFS client has mounted an RGW namespace at “/nfs”, then a file “/nfs/mybucket/www/index.html” in the NFS namespace corresponds to an RGW object “www/index.html” in a bucket/container “mybucket.”
Although it is generally invisible to clients, the NFS namespace is assembled through concatenation of the corresponding paths implied by the objects in the namespace. Leaf objects, whether files or directories, will always be materialized in an RGW object of the corresponding key name, “<name>” if a file, “<name>/” if a directory. Non-leaf directories (e.g., “www” above) might only be implied by their appearance in the names of one or more leaf objects. Directories created within NFS or directly operated on by an NFS client (e.g., via an attribute-setting operation such as chown or chmod) always have a leaf object representation used to store materialized attributes such as Unix ownership and permissions.
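To make the mapping concrete, a hypothetical listing continuing the “mybucket” example above:

    RGW object key (in "mybucket")      NFS path (mounted at /nfs)
    www/index.html                      /nfs/mybucket/www/index.html   (leaf file object)
    www/css/                            /nfs/mybucket/www/css          (leaf directory object)
    (no object; implied by the above)   /nfs/mybucket/www              (non-leaf directory)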
Supported Operations
The RGW NFS interface supports most operations on files and directories, with the following restrictions:
- Links, including symlinks, are not supported
- NFS ACLs are not supported
  - Unix user and group ownership and permissions are supported
- Directories may not be moved/renamed
  - files may be moved between directories
- Only full, sequential write I/O is supported
  - i.e., write operations are constrained to be uploads
  - many typical I/O operations such as editing files in place will necessarily fail as they perform non-sequential stores
  - some file utilities apparently writing sequentially (e.g., some versions of GNU tar) may fail due to infrequent non-sequential stores
  - when mounting via NFS, sequential application I/O can generally be constrained to be written sequentially to the NFS server via a synchronous mount option (e.g. -osync in Linux)
  - NFS clients which cannot mount synchronously (e.g., MS Windows) will not be able to upload files
Security
The RGW NFS interface provides a hybrid security model with the following characteristics:
- NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients
  - e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)
  - RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p), as shown in the fragment below
  - various NFS-specific security and permission rules are available
    - e.g., root-squashing
- a set of RGW/S3 security credentials (unknown to NFS) is associated with each RGW NFS mount (i.e., NFS-Ganesha EXPORT)
  - all RGW object operations performed via the NFS server will be performed by the RGW user associated with the credentials stored in the export being accessed (currently only RGW and RGW LDAP credentials are supported)
    - additional RGW authentication types such as Keystone are not currently supported
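For example, a hedged ganesha.conf fragment restricting an export to Kerberos flavors (SecType accepts a list of security flavors; the full EXPORT block syntax is shown later in this document):

    EXPORT {
        ...
        # permit Kerberos integrity (krb5i) or integrity + privacy (krb5p) only
        SecType = "krb5i", "krb5p";
        ...
    }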
Configuring an NFS-Ganesha Instance
Each NFS RGW instance is an NFS-Ganesha server instance embedding a full Ceph RGW instance.
Therefore, the RGW NFS configuration includes Ceph and Ceph Object Gateway-specific configuration in a local ceph.conf, as well as NFS-Ganesha-specific configuration in the NFS-Ganesha config file, ganesha.conf.
ceph.conf
Required ceph.conf configuration for RGW NFS includes:
- a valid [client.radosgw.{instance-name}] section
- valid values for minimal instance configuration, in particular, an installed and correct keyring
Other config variables are optional. Front-end-specific and front-end selection variables (e.g., rgw data and rgw frontends) are optional and in some cases ignored. A small number of config variables (e.g., rgw_nfs_namespace_expire_secs) are unique to RGW NFS.
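For illustration, a minimal sketch of such a section; the instance name "nfsgw" and the keyring path are hypothetical:

    [client.radosgw.nfsgw]
        keyring = /etc/ceph/ceph.client.radosgw.nfsgw.keyring
        # optional, unique to RGW NFS (see "RGW vs RGW NFS" below)
        rgw_nfs_namespace_expire_secs = 300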
ganesha.conf
A strictly minimal ganesha.conf for use with RGW NFS includes one EXPORT block with an embedded FSAL block of type RGW:
    EXPORT
    {
        Export_ID={numeric-id};
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        SecType = "sys";
        NFS_Protocols = 4;
        Transport_Protocols = TCP;

        # optional, permit unsquashed access by client "root" user
        #Squash = No_Root_Squash;

        FSAL {
            Name = RGW;
            User_Id = {s3-user-id};
            Access_Key_Id = "{s3-access-key}";
            Secret_Access_Key = "{s3-secret}";
        }
    }
Export_ID
    must have an integer value, e.g., “77”

Path
    (for RGW) should be “/”

Pseudo
    defines an NFSv4 pseudo root name (NFSv4 only)

SecType = sys;
    allows clients to attach without Kerberos authentication

Squash = No_Root_Squash;
    enables the client root user to override permissions (Unix convention). When root-squashing is enabled, operations attempted by the root user are performed as if by the local “nobody” (and “nogroup”) user on the NFS-Ganesha server
The RGW FSAL additionally supports RGW-specific configuration variables in the RGW config section:
    RGW {
        cluster = "{cluster name, default 'ceph'}";
        name = "client.rgw.{instance-name}";
        ceph_conf = "/opt/ceph-rgw/etc/ceph/ceph.conf";
        init_args = "-d --debug-rgw=16";
    }
cluster
    sets a Ceph cluster name (must match the cluster being exported)

name
    sets an RGW instance name (must match the cluster being exported)

ceph_conf
    gives a path to a non-default ceph.conf file to use
Other useful NFS-Ganesha configuration:
Any EXPORT block which should support NFSv3 should include version 3 in the NFS_Protocols setting. Additionally, NFSv3 is the last major version to support the UDP transport. To enable UDP, include it in the Transport_Protocols setting. For example:
    EXPORT {
        ...
        NFS_Protocols = 3,4;
        Transport_Protocols = UDP,TCP;
        ...
    }
One important family of options pertains to interaction with the Linux idmapping service, which is used to normalize user and group names across systems. Details of idmapper integration are not provided here.
With Linux NFS clients, NFS-Ganesha can be configured to accept client-supplied numeric user and group identifiers with NFSv4 (which by default stringifies these). This may be useful in small setups and for experimentation:
    NFSV4 {
        Allow_Numeric_Owners = true;
        Only_Numeric_Owners = true;
    }
Troubleshooting
NFS-Ganesha configuration problems are usually debugged by running the server with debugging options, controlled by the LOG config section.
NFS-Ganesha log messages are grouped into various components, and logging can be enabled separately for each component. Valid values for component logging include:

- FATAL: critical errors only
- WARN: unusual condition
- DEBUG: mildly verbose trace output
- FULL_DEBUG: verbose trace output
Example:
    LOG {
        Components {
            MEMLEAKS = FATAL;
            FSAL = FATAL;
            NFSPROTO = FATAL;
            NFS_V4 = FATAL;
            EXPORT = FATAL;
            FILEHANDLE = FATAL;
            DISPATCH = FATAL;
            CACHE_INODE = FATAL;
            CACHE_INODE_LRU = FATAL;
            HASHTABLE = FATAL;
            HASHTABLE_CACHE = FATAL;
            DUPREQ = FATAL;
            INIT = DEBUG;
            MAIN = DEBUG;
            IDMAPPER = FATAL;
            NFS_READDIR = FATAL;
            NFS_V4_LOCK = FATAL;
            CONFIG = FATAL;
            CLIENTID = FATAL;
            SESSIONS = FATAL;
            PNFS = FATAL;
            RW_LOCK = FATAL;
            NLM = FATAL;
            RPC = FATAL;
            NFS_CB = FATAL;
            THREAD = FATAL;
            NFS_V4_ACL = FATAL;
            STATE = FATAL;
            FSAL_UP = FATAL;
            DBUS = FATAL;
        }
        # optional: redirect log output
        # Facility {
        #     name = FILE;
        #     destination = "/tmp/ganesha-rgw.log";
        #     enable = active;
        # }
    }
Running Multiple NFS Gateways
Each NFS-Ganesha instance acts as a full gateway endpoint, with the limitation that currently an NFS-Ganesha instance cannot be configured to export HTTP services. As with ordinary gateway instances, any number of NFS-Ganesha instances can be started, exporting the same or different resources from the cluster. This enables the clustering of NFS-Ganesha instances. However, this does not imply high availability.
When regular gateway instances and NFS-Ganesha instances overlap the same data resources, they will be accessible from both the standard S3 API and through the NFS-Ganesha instance as exported. You can co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance on the same host.
RGW vs RGW NFS
Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift via the Civetweb HTTP front-end) from the same program instance is currently not supported.
When adding objects and buckets outside of NFS, those objects will appear in the NFS namespace within the time set by rgw_nfs_namespace_expire_secs, which defaults to 300 seconds (5 minutes). Override the default value for rgw_nfs_namespace_expire_secs in the Ceph configuration file to change the refresh rate.
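For example, to pick up external changes every 60 seconds (the instance name "nfsgw" is hypothetical):

    [client.radosgw.nfsgw]
        rgw_nfs_namespace_expire_secs = 60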
If exporting Swift containers that do not conform to valid S3 bucket naming requirements, set rgw_relaxed_s3_bucket_names to true in the [client.radosgw] section of the Ceph configuration file. For example, if a Swift container name contains underscores, it is not a valid S3 bucket name and will be rejected unless rgw_relaxed_s3_bucket_names is set to true.
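For example:

    [client.radosgw]
        rgw_relaxed_s3_bucket_names = true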
Configuring NFSv4 Clients
To access the namespace, mount the configured NFS-Ganesha export(s) into desired locations in the local POSIX namespace. As noted, this implementation has a few unique restrictions:
- NFS 4.1 and higher protocol flavors are preferred
- NFSv4 OPEN and CLOSE operations are used to track upload transactions
- To upload data successfully, clients must preserve write ordering
  - on Linux and many Unix NFS clients, use the -osync mount option
Conventions for mounting NFS resources are platform-specific. The following conventions work on Linux and some Unix platforms:
From the command line:
    mount -t nfs -o nfsvers=4.1,noauto,soft,sync,proto=tcp <ganesha-host-name>:/ <mount-point>
In /etc/fstab:
    <ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0
Specify the NFS-Ganesha host name and the path to the mount point on the client.
Configuring NFSv3 Clients
Linux clients can be configured to mount with NFSv3 by supplying nfsvers=3 and noacl as mount options. To use UDP as the transport, add proto=udp to the mount options. However, TCP is the preferred transport:

    <ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0
Configure the NFS Ganesha EXPORT block Protocols setting with version 3 and the Transports setting with UDP if the mount will use version 3 with UDP.
NFSv3 Semantics
Since NFSv3 does not communicate client OPEN and CLOSE operations to file servers, RGW NFS cannot use these operations to mark the beginning and ending of file upload transactions. Instead, RGW NFS starts a new upload when the first write is sent to a file at offset 0, and finalizes the upload when no new writes to the file have been seen for a period of time, by default, 10 seconds. To change this timeout, set an alternate value for rgw_nfs_write_completion_interval_s in the RGW section(s) of the Ceph configuration file.
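For example, to allow 20 seconds of write inactivity before an NFSv3 upload is finalized (the instance name "nfsgw" is hypothetical):

    [client.radosgw.nfsgw]
        rgw_nfs_write_completion_interval_s = 20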