Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  Storage nodes
utilize btrfs to store data objects, leveraging its advanced features
(checksumming, metadata replication, etc.).  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughput.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes
or go through the tedious process of migrating data between servers.
When the file system approaches capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot on any subdirectory (and its nested contents) in
the system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.

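For example, assuming the file system is mounted at /mnt/ceph (the
paths and snapshot name here are purely illustrative), a snapshot of
a directory can be created, browsed, and removed with ordinary
commands:

  $ mkdir /mnt/ceph/mydir/.snap/snap1     # create a snapshot of mydir
  $ ls /mnt/ceph/mydir/.snap              # list existing snapshots
  $ ls /mnt/ceph/mydir/.snap/snap1        # browse the snapshot contents
  $ rmdir /mnt/ceph/mydir/.snap/snap1     # remove the snapshot
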
Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.

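Individual counters can also be queried by name; the 'ceph.dir.*'
attribute names below are those used by recent clients and may differ
in older versions, and the path is illustrative:

  $ getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir   # nested regular files
  $ getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir   # total nested file bytes
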
Finally, Ceph also allows quotas to be set on any directory in the
system.  The quota can restrict the number of bytes or the number of
files stored beneath that point in the directory hierarchy.  Quotas
can be set using the extended attributes 'ceph.quota.max_files' and
'ceph.quota.max_bytes', e.g.:

  setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
  getfattr -n ceph.quota.max_bytes /some/dir

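In current Ceph releases a quota value of 0 means "no quota", so a
quota can be cleared by setting the attribute back to 0 (behavior may
vary in older versions):

  setfattr -n ceph.quota.max_bytes -v 0 /some/dir
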
A limitation of the current quota implementation is that it relies on
the cooperation of the client mounting the file system to stop writers
when a limit is reached.  A modified or adversarial client cannot be
prevented from writing as much data as it needs.


Mount Syntax
============

The basic mount syntax is:

  # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4,

  # mount -t ceph 1.2.3.4:/ /mnt/ceph

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address.
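
Multiple monitors and a subdirectory of the file system may also be
given, per the syntax above; for example, with (illustrative) monitor
addresses 1.2.3.4 and 1.2.3.5:

  # mount -t ceph 1.2.3.4,1.2.3.5:/some/subdir /mnt/ceph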


Mount Options
=============

  ip=A.B.C.D[:N]
      Specify the IP and/or port the client should bind to locally.
      There is normally not much reason to do this.  If the IP is not
      specified, the client's IP address is determined by looking at
      the address its connection to the monitor originates from.

  wsize=X
      Specify the maximum write size in bytes.  Default: 16 MB.

  rsize=X
      Specify the maximum read size in bytes.  Default: 16 MB.

  rasize=X
      Specify the maximum readahead size in bytes.  Default: 8 MB.

  mount_timeout=X
      Specify the timeout value for mount (in seconds), in the case
      of a non-responsive Ceph file system.  The default is 30
      seconds.

  rbytes
      When stat() is called on a directory, set st_size to 'rbytes',
      the summation of file sizes over all files nested beneath that
      directory.  This is the default.

  norbytes
      When stat() is called on a directory, set st_size to the
      number of entries in that directory.

  nocrc
      Disable CRC32C calculation for data writes.  If set, the
      storage node must rely on TCP's checksum to detect data
      corruption in the data payload.

  dcache
      Use the dcache contents to perform negative lookups and
      readdir when the client has the entire directory contents in
      its cache.  (This does not change correctness; the client uses
      cached metadata only when a lease or capability ensures it is
      valid.)

  nodcache
      Do not use the dcache as above.  This avoids a significant
      amount of complex code, sacrificing performance without
      affecting correctness, and is useful for tracking down bugs.

  noasyncreaddir
      Do not use the dcache as above for readdir.

  noquotadf
      Report overall filesystem usage in statfs instead of using the
      root directory quota.

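Options are passed with the usual -o mount flag, separated by commas;
for example (the option values here are illustrative, not
recommendations):

  # mount -t ceph 1.2.3.4:/ /mnt/ceph -o rasize=4194304,noasyncreaddir
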
More Information
================

For more information on Ceph, see the home page at
  http://ceph.newdream.net/

The Linux kernel client source tree is available at
  git://ceph.newdream.net/git/ceph-client.git
  git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
  git://ceph.newdream.net/git/ceph.git