1. 28 Oct 2009 (2 commits)
  2. 27 Oct 2009 (2 commits)
  3. 23 Oct 2009 (1 commit)
  4. 22 Oct 2009 (1 commit)
  5. 20 Oct 2009 (2 commits)
  6. 17 Oct 2009 (1 commit)
  7. 16 Oct 2009 (4 commits)
  8. 15 Oct 2009 (3 commits)
  9. 14 Oct 2009 (1 commit)
  10. 13 Oct 2009 (2 commits)
  11. 10 Oct 2009 (7 commits)
  12. 09 Oct 2009 (1 commit)
  13. 08 Oct 2009 (4 commits)
  14. 07 Oct 2009 (9 commits)
    • ceph: document shared files in README · e324b8f9
      Authored by Sage Weil
      Document files shared between kernel and user code trees.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: Kconfig, Makefile · 9030aaf9
      Authored by Sage Weil
      Kconfig options and Makefile.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: debugfs · 76aa844d
      Authored by Sage Weil
      Basic state information is available via /sys/kernel/debug/ceph,
      including instances of the client, fsids, the current monitor, mds and
      osd maps, outstanding server requests, and hooks to adjust debug
      levels.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: ioctls · 8f4e91de
      Authored by Sage Weil
      A few Ceph ioctls for getting and setting file layout (striping)
      parameters, and for learning the identity and network address of the
      OSD on which a given region of a file is stored.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: nfs re-export support · a8e63b7d
      Authored by Sage Weil
      Basic NFS re-export support is included.  This mostly works.  However,
      Ceph's MDS design precludes generating a (small) filehandle that will
      be valid forever, so this is of limited utility.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: message pools · 8fc91fd8
      Authored by Sage Weil
      The msgpool is a basic mempool_t-like structure used to preallocate
      messages we expect to receive over the wire.  This ensures we have the
      necessary memory preallocated to process replies to requests, or to
      process unsolicited messages from various servers.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: messenger library · 31b8006e
      Authored by Sage Weil
      A generic message passing library is used to communicate with all
      other components in the Ceph file system.  The messenger library
      provides ordered, reliable delivery of messages between two nodes in
      the system.

      This implementation is based on TCP.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: snapshot management · 963b61eb
      Authored by Sage Weil
      Ceph snapshots rely on client cooperation in determining which
      operations apply to which snapshots, and on appropriately flushing
      snapshotted data and metadata back to the OSD and MDS clusters.
      Because snapshots apply to subtrees of the file hierarchy and can be
      created at any time, a fair bit of bookkeeping is required to make
      this work.

      Portions of the hierarchy that belong to the same set of snapshots
      are described by a single 'snap realm.'  A 'snap context' describes
      the set of snapshots that exist for a given file or directory.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: capability management · a8599bd8
      Authored by Sage Weil
      The Ceph metadata servers control client access to inode metadata and
      file data by issuing capabilities, which grant clients permission to
      read and/or write both inode fields and file data on the OSDs
      (storage nodes).  Each capability consists of a set of bits
      indicating which operations are allowed.

      If the client holds a *_SHARED cap, it has a coherent value that can
      be safely read from the cached inode.

      With a *_EXCL (exclusive) or FILE_WR capability, the client may
      change inode attributes (e.g., file size, mtime), note the dirty
      state in the ceph_cap, and asynchronously flush that metadata change
      to the MDS.

      In the event of a conflicting operation (perhaps by another client),
      the MDS will revoke the conflicting client's capabilities.

      For a client to cache an inode, it must hold a capability from at
      least one MDS server.  When inodes are released, release
      notifications are batched and periodically sent en masse to the MDS
      cluster to release server state.
      Signed-off-by: Sage Weil <sage@newdream.net>