1. 27 Jul 2011 (9 commits)
  2. 17 Jul 2011 (1 commit)
  3. 14 Jun 2011 (2 commits)
  4. 08 Jun 2011 (4 commits)
  5. 26 May 2011 (3 commits)
  6. 25 May 2011 (3 commits)
    • ceph: fix cap flush race reentrancy · db354052
      Sage Weil committed
      In e9964c10 we changed cap flushing to do a delicate dance because some
      inodes on the cap_dirty list could be in a migrating state (got EXPORT but
      not IMPORT) in which we couldn't actually flush and move from
      dirty->flushing, breaking the while (!empty) { process first } loop
      structure.  It worked for a single sync thread, but was not reentrant and
      triggered infinite loops when multiple syncers came along.
      
      Instead, move inodes with dirty caps to a separate cap_dirty_migrating list
      when in the limbo export-but-no-import state, allowing us to go back to
      the simple loop structure (which was reentrant).  This is cleaner and more
      robust.
      
      Audited the cap_dirty users and this looks fine:
      list_empty(&ci->i_dirty_item) is still a reliable indicator of whether we
      have dirty caps (which list we're on is irrelevant) and list_del_init()
      calls still do the right thing.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: avoid inode lookup on nfs fh reconnect · 45e3d3ee
      Sage Weil committed
      If we get the inode from the MDS, we have a reference in req; don't do a
      fresh lookup.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: use LOOKUPINO to make unconnected nfs fh more reliable · 3c454cf2
      Sage Weil committed
      If we are unable to locate an inode by ino, ask the MDS using the new
      LOOKUPINO command.
      Signed-off-by: Sage Weil <sage@newdream.net>
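      A minimal userspace sketch of the list handling described in db354052
      above, assuming only what the message states: inodes that are
      mid-migration are parked on a separate list instead of being skipped in
      place, so the plain "while (!empty) { process first }" loop always makes
      progress even with several concurrent flushers.  The names here (entry,
      dirty, migrating, flush_dirty) are illustrative, not the kernel's actual
      symbols.

       /* build with: cc -std=c99 -Wall sketch.c */
       #include <stdbool.h>
       #include <stdio.h>
       #include <stdlib.h>

       struct entry {
               int ino;
               bool migrating;         /* got EXPORT but not yet IMPORT */
               struct entry *next;
       };

       static struct entry *pop(struct entry **head)
       {
               struct entry *e = *head;
               if (e)
                       *head = e->next;
               return e;
       }

       static void push(struct entry **head, struct entry *e)
       {
               e->next = *head;
               *head = e;
       }

       static void flush_dirty(struct entry **dirty, struct entry **migrating)
       {
               struct entry *e;

               /* always take the first entry; park what we cannot flush */
               while ((e = pop(dirty)) != NULL) {
                       if (e->migrating) {
                               push(migrating, e);
                               continue;
                       }
                       printf("flushing ino %d\n", e->ino);
                       free(e);
               }
       }

       int main(void)
       {
               struct entry *dirty = NULL, *migrating = NULL;
               int i;

               for (i = 1; i <= 4; i++) {
                       struct entry *e = calloc(1, sizeof(*e));
                       e->ino = i;
                       e->migrating = (i == 2);  /* ino 2 is mid-migration */
                       push(&dirty, e);
               }
               flush_dirty(&dirty, &migrating);
               return 0;
       }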
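      In the same spirit, a small sketch of the NFS file-handle path touched
      by 45e3d3ee and 3c454cf2 above: resolve the handle locally by ino first,
      fall back to asking the MDS (the LOOKUPINO-style request) when that
      fails, and reuse the inode reference the reply attaches to the request
      rather than doing a fresh lookup.  All struct, field, and function names
      below are invented for illustration.

       #include <stdio.h>
       #include <stddef.h>

       struct inode { unsigned long ino; };

       struct request {
               struct inode *r_inode;  /* set when the reply carries the inode */
       };

       static struct inode mds_inode = { .ino = 1099 };

       /* stand-in for a local (in-memory) lookup by inode number */
       static struct inode *lookup_ino_locally(unsigned long ino)
       {
               (void)ino;
               return NULL;            /* "unconnected" handle: not cached */
       }

       /* stand-in for sending LOOKUPINO and waiting for the reply */
       static void send_lookupino(unsigned long ino, struct request *req)
       {
               printf("asking MDS for ino %lu\n", ino);
               req->r_inode = &mds_inode;  /* reply attaches the inode to req */
       }

       static struct inode *resolve_nfs_handle(unsigned long ino)
       {
               struct request req = { NULL };
               struct inode *in = lookup_ino_locally(ino);

               if (in)
                       return in;

               send_lookupino(ino, &req);  /* fall back to the MDS */

               /* the reply already gave us the inode via req; reuse that
                * reference instead of repeating the lookup by ino */
               return req.r_inode;
       }

       int main(void)
       {
               struct inode *in = resolve_nfs_handle(1099);
               if (in)
                       printf("resolved ino %lu\n", in->ino);
               return 0;
       }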
  7. 20 May 2011 (7 commits)
  8. 12 May 2011 (3 commits)
  9. 05 May 2011 (1 commit)
  10. 04 May 2011 (2 commits)
  11. 31 Mar 2011 (1 commit)
  12. 30 Mar 2011 (1 commit)
  13. 29 Mar 2011 (1 commit)
  14. 26 Mar 2011 (1 commit)
    • ceph: flush msgr_wq during mds_client shutdown · ef550f6f
      Sage Weil committed
      The release method for mds connections uses a backpointer to the
      mds_client, so we need to flush the workqueue of any pending work (and
      ceph_connection references) prior to freeing the mds_client.  This fixes
      an oops easily triggered under UML by
      
       while true ; do mount ... ; umount ... ; done
      
      Also fix an outdated comment: the flush in ceph_destroy_client only flushes
      OSD connections out.  This bug is basically an artifact of the ceph ->
      ceph+libceph conversion.
      Signed-off-by: Sage Weil <sage@newdream.net>
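      A tiny userspace model of the shutdown ordering described in ef550f6f
      above, assuming only what the message states: queued work items hold a
      back-pointer to the client structure, so every pending item must be
      drained (flushed) before that structure is freed.  The hand-rolled list
      below stands in for the real workqueue; none of the names are the
      kernel's.

       #include <stdio.h>
       #include <stdlib.h>

       struct client { int id; };

       struct work {
               struct client *back;    /* back-pointer used when the item runs */
               struct work *next;
       };

       static struct work *pending;

       static void queue_work_item(struct client *c)
       {
               struct work *w = calloc(1, sizeof(*w));
               w->back = c;
               w->next = pending;
               pending = w;
       }

       /* run every queued item; afterwards nothing references the client */
       static void flush_pending(void)
       {
               while (pending) {
                       struct work *w = pending;
                       pending = w->next;
                       printf("releasing connection for client %d\n", w->back->id);
                       free(w);
               }
       }

       int main(void)
       {
               struct client *c = calloc(1, sizeof(*c));

               c->id = 1;
               queue_work_item(c);

               flush_pending();        /* must run before the free below */
               free(c);                /* safe: no queued work points at c */
               return 0;
       }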
  15. 22 Mar 2011 (1 commit)