Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  Storage nodes
utilize btrfs to store data objects, leveraging its advanced features
(checksumming, metadata replication, etc.).  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes or
go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be easily
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to create
a snapshot on any subdirectory (and its nested contents) in the
system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.
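
For example, assuming the file system is mounted at /mnt/ceph and
contains a directory 'mydir' (the names here are purely illustrative),
a snapshot can be created, inspected, and removed with:

 cd /mnt/ceph/mydir
 mkdir .snap/mysnap
 ls .snap
 rmdir .snap/mysnap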

Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.
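
For illustration, these recursive statistics are exposed to userspace
as virtual extended attributes.  Assuming a mount at /mnt/ceph and a
directory 'mydir', the nested byte count can be read with getfattr;
the attribute names (ceph.dir.rbytes, ceph.dir.rfiles,
ceph.dir.rsubdirs) and the value shown below are examples and may vary
between versions:

 getfattr -n ceph.dir.rbytes /mnt/ceph/mydir
 # file: mnt/ceph/mydir
 ceph.dir.rbytes="1048576000"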

Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy.  Quotas can be set using
extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes', e.g.:

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

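The 'ceph.quota.max_files' attribute is used the same way; for example,
to cap a directory tree at 100000 files (an arbitrary value):

 setfattr -n ceph.quota.max_files -v 100000 /some/dir
 getfattr -n ceph.quota.max_files /some/dir
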
A limitation of the current quotas implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is:

 # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4,

 # mount -t ceph 1.2.3.4:/ /mnt/ceph

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address.
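
Multiple monitors, an explicit port, and a subdirectory within the file
system may also be specified; for example (the addresses, port, and
path below are illustrative):

 # mount -t ceph 1.2.3.4:6789,1.2.3.5:6789:/some/subdir /mnt/ceph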



Mount Options
=============

  ip=A.B.C.D[:N]
	Specify the IP and/or port the client should bind to locally.
	There is normally not much reason to do this.  If the IP is not
	specified, the client's IP address is determined by looking at the
	address its connection to the monitor originates from.

  wsize=X
	Specify the maximum write size in bytes.  By default there is no
	maximum.  Ceph will normally size writes based on the file stripe
	size.

  rsize=X
	Specify the maximum read size in bytes.  Default: 64 MB.

  rasize=X
	Specify the maximum readahead.  Default: 8 MB.

  mount_timeout=X
	Specify the timeout value for mount (in seconds), in the case
	of a non-responsive Ceph file system.  The default is 30
	seconds.

  rbytes
	When stat() is called on a directory, set st_size to 'rbytes',
	the summation of file sizes over all files nested beneath that
	directory.  This is the default.

  norbytes
	When stat() is called on a directory, set st_size to the
	number of entries in that directory.

  nocrc
	Disable CRC32C calculation for data writes.  If set, the storage node
	must rely on TCP's checksum to detect data corruption
	in the data payload.

  dcache
        Use the dcache contents to perform negative lookups and
        readdir when the client has the entire directory contents in
        its cache.  (This does not change correctness; the client uses
        cached metadata only when a lease or capability ensures it is
        valid.)

  nodcache
        Do not use the dcache as above.  This avoids a significant amount of
        complex code, sacrificing performance without affecting correctness,
        and is useful for tracking down bugs.

  noasyncreaddir
	Do not use the dcache as above for readdir.

  noquotadf
        Report overall filesystem usage in statfs instead of using the root
        directory quota.

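Several of these options can be combined in a single mount invocation.
For example (the values below are arbitrary illustrations, not
recommendations):

 # mount -t ceph 1.2.3.4:/ /mnt/ceph -o rasize=16777216,norbytes,mount_timeout=60
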
More Information
================

For more information on Ceph, see the home page at
	http://ceph.newdream.net/

The Linux kernel client source tree is available at
	git://ceph.newdream.net/git/ceph-client.git
	git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
	git://ceph.newdream.net/git/ceph.git