.. SPDX-License-Identifier: GPL-2.0

============================
Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

* POSIX semantics
* Seamless scaling from 1 to many thousands of nodes
* High availability and reliability. No single point of failure.
* N-way replication of data across storage nodes
* Fast recovery from node failures
* Automatic rebalancing of data on node addition/removal
* Easy deployment: most FS components are userspace daemons

Also,

* Flexible snapshots (on any directory)
* Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre. Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons. File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs. When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures. The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads. In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation. The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes or
go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot on any subdirectory (and its nested contents) in the
system. Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.
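
For example, assuming a CephFS mount at /mnt/ceph (the mount point,
directory, and snapshot name below are illustrative), a snapshot of a
directory can be created and later removed with::

  mkdir /mnt/ceph/some/dir/.snap/my-snapshot
  rmdir /mnt/ceph/some/dir/.snap/my-snapshot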

Ceph also provides some recursive accounting on directories for nested
files and bytes. That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes. This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.
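
For illustration, the recursive statistics can also be queried as
individual virtual extended attributes; the ceph.dir.* names and the
path below are examples, not guaranteed output for any particular
setup::

  getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir
  getfattr -n ceph.dir.rsubdirs /mnt/ceph/some/dir
  getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir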

Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy. Quotas can be set using
extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes', e.g.::

  setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
  getfattr -n ceph.quota.max_bytes /some/dir
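
A file-count quota works the same way, and a quota is expected to be
cleared by setting the attribute back to 0 (the path and values here
are illustrative)::

  setfattr -n ceph.quota.max_files -v 10000 /some/dir
  setfattr -n ceph.quota.max_bytes -v 0 /some/dir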

A limitation of the current quotas implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached. A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is::

  # mount -t ceph user@fsid.fs_name=/[subdir] mnt -o mon_addr=monip1[:port][/monip2[:port]]

You only need to specify a single monitor, as the client will get the
full list when it connects. (However, if the monitor you specify
happens to be down, the mount won't succeed.) The port can be left
off if the monitor is using the default. So if the monitor is at
1.2.3.4::

  # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4

is sufficient. If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address and the cluster FSID can be left out
(as the mount helper will fill it in by reading the ceph configuration
file)::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=mon-addr

Multiple monitor addresses can be passed by separating each address with a slash (`/`)::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=192.168.1.100/192.168.1.101

When using the mount helper, the monitor address can be read from the
ceph configuration file if available. Note that the cluster FSID (passed
as part of the device string) is validated by checking it against the
FSID reported by the monitor.
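
For reference, the cluster FSID used in the device string can be read
from the cluster with the `ceph fsid` command (the output shown is
illustrative)::

  $ ceph fsid
  07fe3187-00d9-42a3-814b-72a4d5e7d5be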

Mount Options
=============

mon_addr=ip_address[:port][/ip_address[:port]]
  Monitor address of the cluster. This is used to bootstrap the
  connection to the cluster. Once the connection is established, the
  monitor addresses in the monitor map are followed.

fsid=cluster-id
  FSID of the cluster (from `ceph fsid` command).

ip=A.B.C.D[:N]
  Specify the IP and/or port the client should bind to locally.
  There is normally not much reason to do this. If the IP is not
  specified, the client's IP address is determined by looking at the
  address its connection to the monitor originates from.

wsize=X
  Specify the maximum write size in bytes. Default: 64 MB.

rsize=X
  Specify the maximum read size in bytes. Default: 64 MB.

rasize=X
  Specify the maximum readahead size in bytes. Default: 8 MB.

mount_timeout=X
  Specify the timeout value for mount (in seconds), in the case
  of a non-responsive Ceph file system. The default is 60
  seconds.

caps_max=X
  Specify the maximum number of caps to hold. Unused caps are released
  when the number of caps exceeds the limit. The default is 0 (no limit).

rbytes
  When stat() is called on a directory, set st_size to 'rbytes',
  the summation of file sizes over all files nested beneath that
  directory. This is the default.

norbytes
  When stat() is called on a directory, set st_size to the
  number of entries in that directory.

nocrc
  Disable CRC32C calculation for data writes. If set, the storage node
  must rely on the TCP checksum to detect data corruption in the data
  payload.

dcache
  Use the dcache contents to perform negative lookups and
  readdir when the client has the entire directory contents in
  its cache. (This does not change correctness; the client uses
  cached metadata only when a lease or capability ensures it is
  valid.)

nodcache
  Do not use the dcache as above. This avoids a significant amount of
  complex code, sacrificing performance without affecting correctness,
  and is useful for tracking down bugs.

noasyncreaddir
  Do not use the dcache as above for readdir.

noquotadf
  Report overall filesystem usage in statfs instead of using the root
  directory quota.

nocopyfrom
  Don't use the RADOS 'copy-from' operation to perform remote object
  copies. Currently, it's only used in copy_file_range, which will revert
  to the default VFS implementation if this option is used.

recover_session=<no|clean>
  Set auto reconnect mode in the case where the client is blocklisted. The
  available modes are "no" and "clean". The default is "no".

  * no: never attempt to reconnect when client detects that it has been
    blocklisted. Operations will generally fail after being blocklisted.

  * clean: client reconnects to the ceph cluster automatically when it
    detects that it has been blocklisted. During reconnect, client drops
    dirty data/metadata, invalidates page caches and writable file handles.
    After reconnect, file locks become stale because the MDS loses track
    of them. If an inode contains any stale file locks, read/write on the
    inode is not allowed until applications release all stale file locks.
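
For illustration, several of the options above can be combined in a
single mount invocation; the addresses, mount point, and values below
are examples only::

  # mount -t ceph cephuser@cephfs=/ /mnt/ceph \
      -o mon_addr=192.168.1.100/192.168.1.101,rasize=4194304,noquotadf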

More Information
================

For more information on Ceph, see the home page at
https://ceph.com/

The Linux kernel client source tree is available at
  - https://github.com/ceph/ceph-client.git
  - git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
  https://github.com/ceph/ceph.git