1. 02 Dec 2010, 1 commit
  2. 18 Nov 2010, 1 commit
  3. 08 Nov 2010, 1 commit
    • ceph: fix uid/gid on resent mds requests · cb4276cc
      Committed by Sage Weil
      MDS requests can be rebuilt and resent in non-process context, but were
      filling in uid/gid from current_fsuid/gid.  Put that information in the
      request struct on request setup.
      
      This fixes incorrect (and root) uid/gid getting set for requests that
      are forwarded between MDSs, usually due to metadata migrations.
      Signed-off-by: Sage Weil <sage@newdream.net>
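
      A minimal sketch of the idea (the r_uid/r_gid field names are an
      assumption for illustration): capture the caller's credentials once, in
      process context, when the request is created, and use the stored values
      whenever the wire message is (re)built.

          struct ceph_mds_request *
          ceph_mdsc_create_request(struct ceph_mds_client *mdsc, int op, int mode)
          {
                  struct ceph_mds_request *req = kzalloc(sizeof(*req), GFP_NOFS);

                  if (!req)
                          return ERR_PTR(-ENOMEM);
                  req->r_op = op;
                  req->r_uid = current_fsuid();   /* recorded at setup time */
                  req->r_gid = current_fsgid();
                  req->r_started = jiffies;
                  return req;
          }

          /* later, when building or rebuilding the message, possibly from
           * non-process context: */
          head->caller_uid = cpu_to_le32(req->r_uid);
          head->caller_gid = cpu_to_le32(req->r_gid);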
  4. 21 Oct 2010, 3 commits
    • ceph: switch from BKL to lock_flocks() · 496e5955
      Committed by Sage Weil
      Switch from using the BKL explicitly to the new lock_flocks() interface.
      Eventually this will turn into a spinlock.
      Signed-off-by: Sage Weil <sage@newdream.net>
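
      A minimal sketch of the pattern, with encode_one_lock() standing in as a
      hypothetical placeholder for the per-lock work:

          #include <linux/fs.h>

          static void encode_inode_locks(struct inode *inode)
          {
                  struct file_lock *lock;

                  lock_flocks();                          /* was: lock_kernel() */
                  for (lock = inode->i_flock; lock; lock = lock->fl_next)
                          encode_one_lock(lock);          /* hypothetical helper */
                  unlock_flocks();                        /* was: unlock_kernel() */
          }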
    • ceph: preallocate flock state without locks held · fca4451a
      Committed by Greg Farnum
      When the lock_kernel() turns into lock_flocks() and a spinlock, we won't
      be able to do allocations with the lock held.  Preallocate space without
      the lock, and retry if the lock state changes out from underneath us.
      Signed-off-by: Greg Farnum <gregf@hq.newdream.net>
      Signed-off-by: Sage Weil <sage@newdream.net>
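
      A sketch of the preallocate-and-retry pattern this describes; the
      count_inode_locks()/encode_locks_into() helpers and the record type are
      illustrative, not actual fs/ceph symbols:

          static int encode_locks_prealloc(struct inode *inode)
          {
                  struct lock_record *records = NULL;
                  int count, newcount;

                  count = count_inode_locks(inode);       /* unlocked estimate */
                  for (;;) {
                          kfree(records);
                          records = kcalloc(count, sizeof(*records), GFP_NOFS);
                          if (!records)
                                  return -ENOMEM;
                          lock_flocks();
                          newcount = count_inode_locks(inode);
                          if (newcount <= count)
                                  break;                  /* preallocation fits */
                          unlock_flocks();                /* state grew: retry */
                          count = newcount;
                  }
                  encode_locks_into(inode, records, newcount);  /* under the lock */
                  unlock_flocks();
                  kfree(records);
                  return 0;
          }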
    • ceph: factor out libceph from Ceph file system · 3d14c5d2
      Committed by Yehuda Sadeh
      This factors out protocol and low-level storage parts of ceph into a
      separate libceph module living in net/ceph and include/linux/ceph.  This
      is mostly a matter of moving files around.  However, a few key pieces
      of the interface change as well:
      
       - ceph_client becomes ceph_fs_client and ceph_client, where the latter
         captures the mon and osd clients, and the fs_client gets the mds client
         and file system specific pieces.
        - Mount option parsing and debugfs setup are correspondingly broken into
          two pieces.
       - The mon client gets a generic handler callback for otherwise unknown
         messages (mds map, in this case).
       - The basic supported/required feature bits can be expanded (and are by
         ceph_fs_client).
      
      No functional change, aside from some subtle error handling cases that got
      cleaned up in the refactoring process.
      Signed-off-by: Sage Weil <sage@newdream.net>
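
      A rough sketch of the resulting split (fields abridged): the generic
      client in net/ceph carries the mon and osd clients and is shared with
      other users such as the rados block device, while the fs client in
      fs/ceph wraps it and adds the mds client and filesystem-specific state.

          struct ceph_client {                    /* net/ceph */
                  struct ceph_mon_client monc;
                  struct ceph_osd_client osdc;
                  /* messenger, options, debugfs entries, ... */
          };

          struct ceph_fs_client {                 /* fs/ceph */
                  struct ceph_client *client;     /* generic cluster client */
                  struct ceph_mds_client *mdsc;   /* mds client lives here */
                  /* mount options, superblock, fs-specific debugfs, ... */
          };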
  5. 12 Sep 2010, 1 commit
  6. 27 Aug 2010, 1 commit
  7. 23 Aug 2010, 2 commits
    • ceph: direct requests in snapped namespace based on nonsnap parent · eb6bb1c5
      Committed by Sage Weil
      When making a request in the virtual snapdir or a snapped portion of the
      namespace, we should choose the MDS based on the first nonsnap parent (and
      its caps).  If that is not the best place, we will get forward hints to
      find the right MDS in the cluster.  This fixes ESTALE errors when using
      the .snap directory and namespace with multiple MDSs.
      Signed-off-by: Sage Weil <sage@newdream.net>
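
      A sketch of the selection rule: walk up the dentry tree until we leave
      the snapped portion of the namespace, then choose the MDS from that
      inode's caps.

          static struct dentry *get_nonsnap_parent(struct dentry *dentry)
          {
                  /* climb until the inode is no longer part of a snapshot */
                  while (!IS_ROOT(dentry) &&
                         ceph_snap(dentry->d_inode) != CEPH_NOSNAP)
                          dentry = dentry->d_parent;
                  return dentry;
          }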
    • ceph: fix multiple mds session shutdown · f3c60c59
      Committed by Sage Weil
      The use of a completion when waiting for session shutdown during umount is
      inappropriate, given the complexity of the condition.  For multiple MDSs,
      this resulted in the umount thread spinning and often prevented the session
      close message from being processed.
      
      Switch to a waitqueue and define a condition helper.  This cleans things
      up nicely.
      Signed-off-by: Sage Weil <sage@newdream.net>
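
      A sketch of the waitqueue-based wait (the waitqueue field and helper
      names are approximations):

          static bool done_closing_sessions(struct ceph_mds_client *mdsc)
          {
                  int i;

                  for (i = 0; i < mdsc->max_sessions; i++)
                          if (mdsc->sessions[i])  /* a session is still live */
                                  return false;
                  return true;
          }

          /* umount path: re-check the condition whenever we are woken */
          wait_event_timeout(mdsc->session_close_wq,
                             done_closing_sessions(mdsc), timeout);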
  8. 04 Aug 2010, 1 commit
  9. 03 Aug 2010, 2 commits
  10. 02 Aug 2010, 8 commits
  11. 28 Jul 2010, 1 commit
  12. 17 Jul 2010, 2 commits
    • ceph: do not include cap/dentry releases in replayed messages · e979cf50
      Committed by Sage Weil
      Strip the cap and dentry releases from replayed messages.  They can
      cause the shared state to get out of sync because they were generated
      (with the request message) earlier, and no longer reflect the current
      client state.
      Signed-off-by: Sage Weil <sage@newdream.net>
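
      A sketch of the idea (the header field name is an approximation): when a
      message is resent as a replay, zero out the release count so the stale
      cap/dentry releases encoded after the head are not sent again.

          struct ceph_mds_request_head *rhead = msg->front.iov_base;

          if (replay) {
                  /* releases were encoded with the original send and no
                   * longer reflect current client state */
                  rhead->num_releases = 0;
                  /* ...and shrink msg->front so the stale records are dropped */
          }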
    • ceph: reuse request message when replaying against recovering mds · 01a92f17
      Committed by Sage Weil
      Replayed rename operations (after an mds failure/recovery) were broken
      because the request paths were regenerated from the dentry names, which
      get mangled when d_move() is called.
      
      Instead, resend the previous request message when replaying completed
      operations.  Just make sure the REPLAY flag is set and the target ino is
      filled in.
      
      This fixes problems with workloads doing renames across an MDS restart,
      where the rename operation appears to succeed but then fails after the
      restart (leading to client confusion, app breakage, etc.).
      Signed-off-by: Sage Weil <sage@newdream.net>
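
      A sketch of the replay path (field names approximate): reuse the
      previously built message, mark it as a replay, and record the inode the
      original operation ended up targeting.

          if (req->r_got_unsafe) {
                  /* the MDS already applied this op; do not rebuild paths
                   * from (possibly d_move()d) dentries */
                  struct ceph_mds_request_head *rhead =
                          req->r_request->front.iov_base;

                  rhead->flags |= cpu_to_le32(CEPH_MDS_FLAG_REPLAY);
                  if (req->r_target_inode)
                          rhead->ino = cpu_to_le64(ceph_ino(req->r_target_inode));
                  ceph_msg_get(req->r_request);
                  ceph_con_send(&session->s_con, req->r_request);
                  return 0;
          }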
  13. 22 Jun 2010, 1 commit
    • ceph: delay umount until all mds requests drop inode+dentry refs · 17c688c3
      Committed by Sage Weil
      This fixes a race between handle_reply finishing an mds request
      (signalling completion and then dropping the request struct and its
      dentry+inode refs) and the pre_umount function waiting for requests to
      finish before letting the vfs tear down the dcache.  If umount was
      delayed waiting for mds requests, we could race and BUG in
      shrink_dcache_for_umount_subtree because of a slow dput.
      
      This delays umount until the msgr queue flushes, which means handle_reply
      will exit and will have dropped the ceph_mds_request struct.  I'm assuming
      the VFS has already ensured that its calls have all completed and those
      request refs have thus been dropped as well (I haven't seen that race, at
      least).
      Signed-off-by: Sage Weil <sage@newdream.net>
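
      A sketch of the ordering (assuming a messenger flush helper along the
      lines of ceph_msgr_flush(), which drains the msgr workqueue):

          void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
          {
                  drop_leases(mdsc);              /* existing steps, abridged */
                  ceph_flush_dirty_caps(mdsc);
                  wait_requests(mdsc);

                  /*
                   * Drain the msgr queue so handle_reply has finished and
                   * dropped its dentry+inode refs before the VFS starts
                   * tearing down the dcache.
                   */
                  ceph_msgr_flush();
          }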
  14. 11 Jun 2010, 2 commits
  15. 05 Jun 2010, 1 commit
  16. 30 May 2010, 3 commits
    • ceph: clean up on forwarded aborted mds request · 2a8e5e36
      Committed by Sage Weil
      If an mds request is aborted (timeout, SIGKILL), it is left registered to
      keep our state in sync with the mds.  If we get a forward notification,
      though, we know the request didn't succeed and we can unregister it
      safely.  We were trying to resend it, but then bailing out (and not
      unregistering) in __do_request.
      Signed-off-by: Sage Weil <sage@newdream.net>
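
      A sketch of the fix inside the forward handler (abridged): an aborted
      request that gets forwarded is unregistered rather than resent.

          if (req->r_aborted) {
                  /* the caller already gave up (timeout/SIGKILL); the forward
                   * tells us the op did not succeed, so drop our state */
                  __unregister_request(mdsc, req);
          } else {
                  /* otherwise resend to the mds it was forwarded to */
                  req->r_resend_mds = next_mds;
                  __do_request(mdsc, req);
          }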
    • ceph: make lease code DN specific · dd1c9057
      Committed by Sage Weil
      The lease code includes a mask in the CEPH_LOCK_* namespace, but that
      namespace is changing, and only one mask (formerly _DN == 1) is used, so
      hard code for that value for now.
      
      If we ever extend this code to handle leases over different data types we
      can extend it accordingly.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: make mds requests killable, not interruptible · aa91647c
      Committed by Sage Weil
      The underlying problem is that many mds requests can't be restarted.  For
      example, a restarted create() would return -EEXIST if the original request
      succeeds.  However, we do not want a hung MDS to hang the client too.  So,
      use the _killable wait_for_completion variants to abort on SIGKILL but
      nothing else.
      Signed-off-by: Sage Weil <sage@newdream.net>
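
      A sketch of the wait (abridged): only a fatal signal (SIGKILL) can
      interrupt the wait for the MDS reply.

          err = wait_for_completion_killable(&req->r_completion);
          if (err == -ERESTARTSYS) {
                  /* fatal signal: abort locally; the MDS may still apply the
                   * op, which is why ordinary signals are not allowed here */
                  req->r_aborted = true;
                  err = -EINTR;
          }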
  17. 22 May 2010, 1 commit
  18. 18 May 2010, 8 commits
    • ceph: all allocation functions should get gfp_mask · 34d23762
      Committed by Yehuda Sadeh
      This is essential, as the rados block device will need to run in
      contexts that require allocation flags other than GFP_NOFS.
      Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
      Signed-off-by: Sage Weil <sage@newdream.net>
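
      A sketch of the interface change (signature approximate): allocation
      helpers take an explicit gfp_t instead of hard-coding GFP_NOFS, so
      non-filesystem users such as the rados block device can pass other flags.

          struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags);

          /* filesystem callers keep the old behaviour: */
          msg = ceph_msg_new(CEPH_MSG_CLIENT_REQUEST, len, GFP_NOFS);

          /* other contexts may pass GFP_KERNEL (or whatever suits them) */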
    • ceph: use common helper for aborted dir request invalidation · 167c9e35
      Committed by Sage Weil
      We invalidate I_COMPLETE and dentry leases in two places: on aborted mds
      requests and on request replay.  Use a common helper to avoid duplicate code.
      Signed-off-by: Sage Weil <sage@newdream.net>
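
      A sketch of the shared helper (name and fields approximate): clear the
      directory's I_COMPLETE state and invalidate the dentry lease in one place.

          static void invalidate_dir_request(struct ceph_mds_request *req)
          {
                  struct inode *dir = req->r_locked_dir;

                  if (!dir)
                          return;
                  ceph_i_clear(dir, CEPH_I_COMPLETE); /* readdir cache invalid */
                  if (req->r_dentry)
                          ceph_invalidate_dentry_lease(req->r_dentry);
          }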
    • 85792d0d
    • ceph: throw out dirty caps metadata, data on session teardown · 6c99f254
      Committed by Sage Weil
      The remove_session_caps() helper is called when an MDS closes out our
      session (either normally, or as a result of a failed reconnect), and when
      we tear down state for umount.  If we remove the last cap, and there are
      no cap migrations in progress, then there is little hope of us flushing
      out that data to the mds (without heroic efforts to reconnect and flush).
      
      So, to avoid leaving inodes pinned (due to dirty state) and crashing after
      umount, throw out dirty caps state and unpin the inodes.  Print a warning
      to the console so we know something was lost.
      
      NOTE: Although we drop wrbuffer refs, we don't actually mark pages clean;
      maybe a truncate should be queued?
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: attempt mds reconnect if mds closes our session · 7e70f0ed
      Committed by Sage Weil
      Currently, if our session is closed (due to a timeout, or explicit close,
      or whatever), we just sit there doing nothing unless/until the MDS
      restarts, at which point we try to reconnect.
      
      Change the client to attempt an immediate reconnect if our session is closed.
      
      Note that currently the MDS doesn't support this, and our attempt will
      fail.  We'll get a session CLOSE, our caps and dirty cap state will be
      dropped, and the client will be free to attempt to reconnect.  That's
      clearly not as nice as a successful reconnect, but it at least allows us
      to try to carry on, and in the future the MDS will support a reconnect
      and we will fare better.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: clean up send_mds_reconnect interface · 34b6c855
      Committed by Sage Weil
      Pass a ceph_mds_session, since the caller has it.
      
      Remove the dead code for sending empty reconnects.  It used to be used
      when the MDS contacted _us_ to solicit a reconnect, and we could reply
      saying "go away, I have no session."  Now we only send reconnects based
      on the mds map, and only when we do in fact have an open session.
      Signed-off-by: Sage Weil <sage@newdream.net>
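
      The resulting interface, roughly: the caller passes the session it
      already holds, and the "send empty reconnect" path is gone.

          /* before: send_mds_reconnect(mdsc, mds_rank), with an empty case */
          static void send_mds_reconnect(struct ceph_mds_client *mdsc,
                                         struct ceph_mds_session *session);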
    • ceph: wait for mds OPEN reply to indicate reconnect success · 29790f26
      Committed by Sage Weil
      We used to infer reconnect success by watching the MDS state, essentially
      assuming that hearing nothing meant things were ok.  That wasn't
      particularly reliable.  Instead, the MDS replies with an explicit OPEN
      message to indicate success.
      
      Strictly speaking, this is a protocol change, but it is a backwards
      compatible one that does not break new clients + old servers or old
      clients + new servers.  At least not yet.
      
      Drop unused @all argument from kick_requests while we're at it.
      Signed-off-by: Sage Weil <sage@newdream.net>
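
      A sketch of the client-side handling (a fragment of the session message
      handler, abridged): an explicit OPEN from the MDS, received while the
      session is RECONNECTING, now signals that the reconnect succeeded.

          case CEPH_SESSION_OPEN:
                  if (session->s_state == CEPH_MDS_SESSION_RECONNECTING)
                          pr_info("mds%d reconnect success\n", session->s_mds);
                  session->s_state = CEPH_MDS_SESSION_OPEN;
                  renewed_caps(mdsc, session, 0);   /* existing helper */
                  break;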
    • ceph: only send cap releases when mds is OPEN|HUNG · aab53dd9
      Committed by Sage Weil
      On OPENING we shouldn't have any caps (or releases).
      On CLOSING, we should wait until we succeed (and throw it all out), or
      don't (and are OPEN again).
      On RECONNECTING we can wait until we are OPEN.
      Signed-off-by: Sage Weil <sage@newdream.net>
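
      A sketch of the guard (abridged): only push queued cap releases to
      sessions that are fully established.

          if (session->s_state == CEPH_MDS_SESSION_OPEN ||
              session->s_state == CEPH_MDS_SESSION_HUNG)
                  ceph_send_cap_releases(mdsc, session);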