1. 02 Mar 2010, 4 commits
  2. 24 Feb 2010, 2 commits
  3. 20 Feb 2010, 2 commits
  4. 18 Feb 2010, 2 commits
    • ceph: fix iterate_caps removal race · 7c1332b8
      Committed by Sage Weil
      We need to be able to iterate over all caps on a session with a
      possibly slow callback on each cap.  To allow this, we used to
      prevent cap reordering while we were iterating.  However, we were
      not safe from races with removal: removing the 'next' cap would
      invalidate the next pointer saved by list_for_each_entry_safe,
      causing a lockup or similar badness.
      
      Instead, we keep an iterator pointer in the session pointing to
      the current cap.  As before, we avoid reordering.  For removal,
      if the cap isn't the current cap we are iterating over, we are
      fine.  If it is, we clear cap->ci (to mark the cap as pending
      removal) but leave it in the session list.  In iterate_caps, we
      can safely finish removal and get the next cap pointer.
      
      While we're at it, clean up put_cap to not take a cap reservation
      context, as it was never used.
      Signed-off-by: Sage Weil <sage@newdream.net>
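
      A minimal userspace sketch of this deferred-removal scheme, with an
      iterator field standing in for the session's current-cap pointer and a
      removed flag standing in for the cleared cap->ci; the names and the
      absence of locking are illustrative simplifications, not the actual
      fs/ceph code:

        #include <stdio.h>
        #include <stdlib.h>

        struct cap {
            int removed;              /* stands in for a cleared cap->ci */
            struct cap *prev, *next;
        };

        struct session {
            struct cap head;          /* circular list of caps */
            struct cap *iterator;     /* cap the callback is visiting */
        };

        static struct session sess;

        static void cap_unlink(struct cap *cap)
        {
            cap->prev->next = cap->next;
            cap->next->prev = cap->prev;
            free(cap);
        }

        /* Removal is safe even while iterate_caps() runs its callback. */
        static void remove_cap(struct cap *cap)
        {
            if (sess.iterator == cap)
                cap->removed = 1;     /* mark pending, leave on the list */
            else
                cap_unlink(cap);
        }

        /* Iterate with a possibly slow callback on each cap. */
        static void iterate_caps(void (*cb)(struct cap *))
        {
            struct cap *cap = sess.head.next;

            while (cap != &sess.head) {
                sess.iterator = cap;
                cb(cap);              /* removals may happen in here */
                sess.iterator = NULL;
                cap = cap->next;      /* valid: current cap never unlinked */
                if (cap->prev->removed)
                    cap_unlink(cap->prev);  /* finish deferred removal */
            }
        }

        static void drop_current(struct cap *cap)
        {
            remove_cap(cap);          /* remove the cap being visited */
        }

        int main(void)
        {
            sess.head.prev = sess.head.next = &sess.head;
            for (int i = 0; i < 3; i++) {
                struct cap *c = calloc(1, sizeof(*c));
                c->prev = &sess.head;
                c->next = sess.head.next;
                sess.head.next->prev = c;
                sess.head.next = c;
            }
            iterate_caps(drop_current);
            printf("list empty: %d\n", sess.head.next == &sess.head);
            return 0;
        }
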
    • ceph: clean up readdir caps reservation · 85ccce43
      Committed by Sage Weil
      Use a global counter for the minimum number of allocated caps instead of
      hard-coding a check against readdir_max.  This takes into account multiple
      client instances, and avoids examining the superblock mount options when a
      cap is dropped.
      Signed-off-by: Sage Weil <sage@newdream.net>
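
      A toy model of that reservation floor: each client adds its minimum to
      a global counter at mount time, and the cap-drop path compares against
      the counter rather than re-reading mount options.  The names
      (caps_min_count, caps_total_count, adjust_min_caps) are assumptions
      for illustration:

        #include <stdio.h>

        static int caps_min_count;    /* sum of per-client minimums */
        static int caps_total_count;  /* caps currently allocated */

        /* Called at mount/unmount with this client's requirement. */
        static void adjust_min_caps(int delta)
        {
            caps_min_count += delta;
        }

        /* On cap drop: free the cap only if we are above the floor. */
        static int should_free_cap(void)
        {
            return caps_total_count > caps_min_count;
        }

        int main(void)
        {
            adjust_min_caps(1024);    /* e.g., one client's readdir needs */
            caps_total_count = 1000;
            printf("%d\n", should_free_cap());  /* 0: keep it cached */
            caps_total_count = 2000;
            printf("%d\n", should_free_cap());  /* 1: free it */
            adjust_min_caps(-1024);   /* client unmounted */
            return 0;
        }
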
  5. 12 Feb 2010, 5 commits
  6. 24 Dec 2009, 2 commits
  7. 22 Dec 2009, 1 commit
  8. 04 Dec 2009, 1 commit
  9. 13 Nov 2009, 1 commit
    • ceph: fix page invalidation deadlock · 11ea8eda
      Committed by Sage Weil
      We occasionally want to make a best-effort attempt to invalidate cache
      pages without fear of blocking.  If this fails, we fall back to an async
      invalidate in another thread.
      
      Use invalidate_mapping_pages instead of invalidate_inode_pages2, as it
      will skip locked pages and not deadlock.
      Signed-off-by: Sage Weil <sage@newdream.net>
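
      The pattern, modeled in userspace with pthreads: the best-effort pass
      trylocks each page and skips any it cannot get (as
      invalidate_mapping_pages does), while a worker thread performs the
      blocking invalidation afterward.  The page array and names are
      illustrative; build with -pthread:

        #include <pthread.h>
        #include <stdio.h>

        #define NPAGES 4

        static pthread_mutex_t page_lock[NPAGES] = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        };
        static int page_cached[NPAGES] = { 1, 1, 1, 1 };

        /* Best effort: never blocks; returns pages left cached. */
        static int try_invalidate(void)
        {
            int left = 0;

            for (int i = 0; i < NPAGES; i++) {
                if (pthread_mutex_trylock(&page_lock[i]) != 0) {
                    left++;       /* page locked elsewhere: skip it */
                    continue;
                }
                page_cached[i] = 0;
                pthread_mutex_unlock(&page_lock[i]);
            }
            return left;
        }

        /* Async fallback, like the invalidate thread: may block. */
        static void *invalidate_work(void *arg)
        {
            (void)arg;
            for (int i = 0; i < NPAGES; i++) {
                pthread_mutex_lock(&page_lock[i]);
                page_cached[i] = 0;
                pthread_mutex_unlock(&page_lock[i]);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t worker;

            pthread_mutex_lock(&page_lock[2]);       /* page 2 is busy */
            if (try_invalidate() > 0) {              /* best effort failed */
                pthread_mutex_unlock(&page_lock[2]); /* other user done */
                pthread_create(&worker, NULL, invalidate_work, NULL);
                pthread_join(&worker, NULL);
            }
            printf("page 2 cached: %d\n", page_cached[2]);  /* 0 */
            return 0;
        }
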
  10. 11 Nov 2009, 1 commit
  11. 10 Nov 2009, 1 commit
    • ceph: do not confuse stale and dead (unreconnected) caps · 685f9a5d
      Committed by Sage Weil
      We were using the cap_gen to track both stale caps (caps that timed out
      due to temporarily losing touch with the MDS) and dead caps that did not
      reconnect after an MDS failure.  Introduce a recon_gen counter to track
      reconnections to restarted MDSs and kill dead caps based on that instead.
      
      Rename gen to cap_gen while we're at it to make it clearer which is
      which.
      Signed-off-by: Sage Weil <sage@newdream.net>
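
      An illustrative model of the two counters: a cap whose recorded
      recon_gen lags the session's is dead, while a lagging cap_gen only
      means stale and renewable.  Field and function names here are
      assumptions, not the kernel's exact identifiers:

        #include <stdio.h>

        struct session {
            int cap_gen;    /* bumped when the MDS session goes stale */
            int recon_gen;  /* bumped on reconnect to a restarted MDS */
        };

        struct cap {
            int issued_cap_gen;      /* generations recorded at issue */
            int issued_recon_gen;
        };

        enum cap_state { CAP_VALID, CAP_STALE, CAP_DEAD };

        static enum cap_state check_cap(const struct session *s,
                                        const struct cap *c)
        {
            if (c->issued_recon_gen != s->recon_gen)
                return CAP_DEAD;   /* never reconnected after failover */
            if (c->issued_cap_gen != s->cap_gen)
                return CAP_STALE;  /* usable again after renewal */
            return CAP_VALID;
        }

        int main(void)
        {
            struct session s = { .cap_gen = 3, .recon_gen = 1 };
            struct cap c = { .issued_cap_gen = 2, .issued_recon_gen = 1 };

            printf("%d\n", check_cap(&s, &c) == CAP_STALE);  /* 1 */
            s.recon_gen++;                          /* MDS restarted */
            printf("%d\n", check_cap(&s, &c) == CAP_DEAD);   /* 1 */
            return 0;
        }
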
  12. 28 Oct 2009, 1 commit
  13. 16 Oct 2009, 2 commits
    • ceph: move dirty caps code around · 76e3b390
      Committed by Sage Weil
      Cleanup only.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: flush dirty caps via the cap_dirty list · afcdaea3
      Committed by Sage Weil
      Previously we were flushing dirty caps by passing an extra flag
      when traversing the delayed caps list.  Besides being a bit ugly,
      that could also miss caps that are dirty but never triggered a cap
      requeue, notably those dirtied via mark_caps_dirty().
      
      Move the flushing into a separate helper, and traverse the
      cap_dirty list.
      
      This also brings i_dirty_item in line with i_dirty_caps: we are on
      the list IFF i_dirty_caps != 0, and we carry an inode ref IFF
      dirty_caps|flushing_caps != 0.
      
      Lose the unused return value from __ceph_mark_caps_dirty().
      Signed-off-by: Sage Weil <sage@newdream.net>
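
      A schematic of the cap_dirty approach: marking caps dirty links the
      inode onto the dedicated list exactly once, so the flush helper cannot
      miss caps that never touched the delayed list.  The singly linked list
      is a simplified stand-in for the kernel's list_head machinery:

        #include <stdio.h>

        struct inode_info {
            int ino;
            int dirty_caps;            /* bitmask of dirty metadata */
            struct inode_info *next;   /* i_dirty_item-style link */
        };

        static struct inode_info *cap_dirty;  /* inodes with dirty caps */

        /* On the list IFF dirty_caps != 0; an inode ref would be taken
         * here when the first bit is set. */
        static void mark_caps_dirty(struct inode_info *ci, int mask)
        {
            if (!ci->dirty_caps) {
                ci->next = cap_dirty;
                cap_dirty = ci;
            }
            ci->dirty_caps |= mask;
        }

        /* Separate flush helper: walk cap_dirty, not the delayed list. */
        static void flush_dirty_caps(void)
        {
            for (struct inode_info *ci = cap_dirty; ci; ci = ci->next)
                printf("flush inode %d caps %#x\n", ci->ino,
                       ci->dirty_caps);
            /* the real code would move bits to flushing_caps and send
             * a cap message to the MDS for each inode */
        }

        int main(void)
        {
            struct inode_info a = { .ino = 1 }, b = { .ino = 2 };

            mark_caps_dirty(&a, 0x1);
            mark_caps_dirty(&b, 0x4);
            mark_caps_dirty(&a, 0x2);  /* already listed: no requeue */
            flush_dirty_caps();
            return 0;
        }
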
  14. 15 Oct 2009, 1 commit
  15. 07 Oct 2009, 1 commit
    • ceph: capability management · a8599bd8
      Committed by Sage Weil
      The Ceph metadata servers control client access to inode metadata and
      file data by issuing capabilities, granting clients permission to read
      and/or write both inode fields and file data to OSDs (storage nodes).
      Each capability consists of a set of bits indicating which operations
      are allowed.
      
      If the client holds a *_SHARED cap, the client has a coherent value
      that can be safely read from the cached inode.
      
      In the case of *_EXCL (exclusive) or FILE_WR capabilities, the client
      is allowed to change inode attributes (e.g., file size, mtime), note
      its dirty state in the ceph_cap, and asynchronously flush that
      metadata change to the MDS.
      
      In the event of a conflicting operation (perhaps by another client),
      the MDS will revoke the conflicting client capabilities.
      
      In order for a client to cache an inode, it must hold a capability
      with at least one MDS server.  When inodes are released, release
      notifications are batched and periodically sent en masse to the MDS
      cluster to release server state.
      Signed-off-by: Sage Weil <sage@newdream.net>
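
      A schematic of the capability bitmask idea: each cap carries a set of
      permission bits, and a client operation proceeds only when every
      required bit is held.  The bit names below are illustrative, not the
      actual CEPH_CAP_* encoding:

        #include <stdio.h>

        #define CAP_FILE_SHARED 0x01  /* coherent cached inode reads */
        #define CAP_FILE_EXCL   0x02  /* may change attrs, flush async */
        #define CAP_FILE_RD     0x04  /* may read file data from OSDs */
        #define CAP_FILE_WR     0x08  /* may write file data to OSDs */

        struct cap {
            unsigned issued;          /* bits granted by the MDS */
        };

        /* Proceed only if every needed bit is issued; otherwise the
         * client would have to request caps from the MDS and wait. */
        static int cap_permits(const struct cap *cap, unsigned need)
        {
            return (cap->issued & need) == need;
        }

        int main(void)
        {
            struct cap cap = { .issued = CAP_FILE_SHARED | CAP_FILE_RD };

            printf("read:  %d\n", cap_permits(&cap, CAP_FILE_RD)); /* 1 */
            printf("write: %d\n", cap_permits(&cap, CAP_FILE_WR)); /* 0 */
            return 0;
        }
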