1. 05 June 2014: 18 commits
  2. 04 June 2014: 4 commits
  3. 03 June 2014: 1 commit
  4. 02 June 2014: 3 commits
  5. 01 June 2014: 1 commit
    • dcache: add missing lockdep annotation · 9f12600f
      Committed by Linus Torvalds
      lock_parent() very much on purpose does nested locking of dentries, and
      is careful to maintain the right order (lock parent first).  But because
      it didn't annotate the nested locking order, lockdep thought it might be
      a deadlock on d_lock, and complained.
      
      Add the proper annotation for the inner locking of the child dentry to
      make lockdep happy.
      
      Introduced by commit 046b961b ("shrink_dentry_list(): take parent's
      ->d_lock earlier").
      Reported-and-tested-by: Josh Boyer <jwboyer@fedoraproject.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9f12600f
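      As a rough illustration of the annotation described above (a sketch, not
      the actual fs/dcache.c change; the helper name lock_pair_sketch is made up
      for illustration):

      #include <linux/dcache.h>
      #include <linux/spinlock.h>

      /*
       * With the parent's ->d_lock already held, re-taking the child's ->d_lock
       * is a nested acquisition of the same lock class, so it must be annotated
       * with DENTRY_D_LOCK_NESTED or lockdep reports a possible (but false)
       * deadlock on d_lock.
       */
      static void lock_pair_sketch(struct dentry *parent, struct dentry *child)
      {
              spin_lock(&parent->d_lock);     /* outer lock: parent first */
              spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
      }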
  6. 30 May 2014: 3 commits
    • dentry_kill() doesn't need the second argument now · 8cbf74da
      Committed by Al Viro
      it's 1 in the only remaining caller.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      8cbf74da
    • dealing with the rest of shrink_dentry_list() livelock · b2b80195
      Committed by Al Viro
      We have the same problem with ->d_lock ordering in the inner loop, where
      we are dropping references to ancestors.  The solution is the same:
      instead of using dentry_kill(), use lock_parent() (introduced in the
      previous commit) to take that lock safely, recheck ->d_count (in case
      lock_parent() ended up dropping and retaking ->d_lock and somebody
      managed to grab a reference during that window), trylock the
      inode->i_lock and use __dentry_kill() to do the rest.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b2b80195
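      A hedged sketch of the ancestor-dropping loop this describes (reconstructed
      from the commit text, not copied from fs/dcache.c; drop_ancestors_sketch is
      an invented name, while lock_parent(), __dentry_kill() and d_lockref are the
      fs/dcache.c identifiers the commits refer to):

      static void drop_ancestors_sketch(struct dentry *dentry)
      {
              struct dentry *parent;
              struct inode *inode;

              /* stop once the reference can simply be dropped without killing */
              while (dentry && !lockref_put_or_lock(&dentry->d_lockref)) {
                      /* takes parent->d_lock safely; may drop and retake
                       * dentry->d_lock on its slow path */
                      parent = lock_parent(dentry);

                      /* recheck: somebody may have grabbed a reference while
                       * lock_parent() had ->d_lock dropped */
                      if (dentry->d_lockref.count != 1) {
                              dentry->d_lockref.count--;
                              spin_unlock(&dentry->d_lock);
                              if (parent)
                                      spin_unlock(&parent->d_lock);
                              break;
                      }

                      inode = dentry->d_inode; /* can't be NULL for a parent dir */
                      if (unlikely(!spin_trylock(&inode->i_lock))) {
                              spin_unlock(&dentry->d_lock);
                              if (parent)
                                      spin_unlock(&parent->d_lock);
                              cpu_relax();
                              continue;       /* retry this dentry */
                      }

                      /* assumed (as in that kernel) to drop the locks taken
                       * above while tearing the dentry down */
                      __dentry_kill(dentry);
                      dentry = parent;
              }
      }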
    • shrink_dentry_list(): take parent's ->d_lock earlier · 046b961b
      Committed by Al Viro
      The cause of the livelocks there is that we are taking ->d_lock on a
      dentry and its parent in the wrong order, forcing us to use trylock on
      the parent's lock.  d_walk() takes them in the right order, and
      unfortunately it's not hard to create a situation where
      shrink_dentry_list() can't make progress because the trylock keeps
      failing, while shrink_dcache_parent() or check_submounts_and_drop()
      keeps calling d_walk(), disrupting the very shrink_dentry_list() it's
      waiting for.
      
      The solution is straightforward: if that trylock fails, unlock the
      dentry itself and take the locks in the right order.  We need to
      stabilize ->d_parent without holding ->d_lock, but that's doable
      using RCU.  And we'd better do that at the very beginning of the
      loop in shrink_dentry_list(), since the checks on refcount, etc.
      would need to be redone anyway.
      
      That deals with half of the problem - killing dentries on the
      shrink list itself.  The other half (dropping their parents) is
      handled in the next commit.
      
      Locking the parent is interesting - it would be easy to do
      rcu_read_lock(), lock whatever we think is the parent, lock the
      dentry itself and check whether the parent is still the right one.
      Except that we need to check that *before* locking the dentry, or
      we risk taking ->d_lock out of order.  Fortunately, once D1 is
      locked, we can check whether D2->d_parent is equal to D1 without
      locking D2; D2->d_parent can start or stop pointing to D1 only under
      D1->d_lock, so taking D1->d_lock is enough.  In other words, the
      right solution is rcu_read_lock / lock what looks like the parent
      right now / check if it's still our parent / rcu_read_unlock / lock
      the child.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      046b961b
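      The protocol in the last paragraph, sketched as code (a reconstruction
      from the description above, not a verbatim copy of lock_parent() from
      fs/dcache.c; lock_parent_sketch is an invented name):

      /* called with dentry->d_lock held; returns with the parent's ->d_lock
       * and dentry->d_lock held, or NULL for a root dentry */
      static struct dentry *lock_parent_sketch(struct dentry *dentry)
      {
              struct dentry *parent = dentry->d_parent;

              if (IS_ROOT(dentry))
                      return NULL;
              if (likely(spin_trylock(&parent->d_lock)))
                      return parent;          /* fast path: trylock worked */

              /* Slow path: drop the child's lock and take both locks in the
               * right order (parent first).  RCU keeps the candidate parent's
               * memory valid while we hold no d_lock at all. */
              spin_unlock(&dentry->d_lock);
              rcu_read_lock();
      again:
              parent = ACCESS_ONCE(dentry->d_parent);
              spin_lock(&parent->d_lock);
              /* ->d_parent can start or stop pointing to this parent only
               * under the parent's ->d_lock, so the check is stable once we
               * hold it - and it must happen *before* locking the child. */
              if (unlikely(parent != dentry->d_parent)) {
                      spin_unlock(&parent->d_lock);
                      goto again;
              }
              rcu_read_unlock();
              /* nested annotation as added later by 9f12600f above */
              spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
              return parent;
      }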
  7. 29 May 2014: 3 commits
  8. 28 May 2014: 4 commits
  9. 24 May 2014: 1 commit
  10. 23 May 2014: 2 commits
    • AFS: Pass an afs_call* to call->async_workfn() instead of a work_struct* · 656f88dd
      Committed by David Howells
      call->async_workfn() can take an afs_call* argument rather than a
      work_struct*, since the functions assigned there are now called from
      afs_async_workfn(), which has to call container_of() anyway.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Nathaniel Wesley Filardo <nwf@cs.jhu.edu>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      656f88dd
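      A small sketch of the resulting shape (illustrative and abridged, not the
      actual fs/afs code; only the fields relevant here are shown):

      #include <linux/workqueue.h>

      struct afs_call {
              struct work_struct async_work;
              /* per-call async handler: now takes the call itself */
              void (*async_workfn)(struct afs_call *call);
              /* ... the real struct has many more fields ... */
      };

      /* The one place that still sees a work_struct: convert once with
       * container_of(), then hand the afs_call to whatever handler is
       * currently installed. */
      static void afs_async_workfn(struct work_struct *work)
      {
              struct afs_call *call = container_of(work, struct afs_call,
                                                   async_work);

              call->async_workfn(call);
      }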
    • AFS: Fix kafs module unloading · 150a6b47
      Committed by Nathaniel Wesley Filardo
      At present, it is not possible to successfully unload the kafs module if there
      are outstanding async outgoing calls (those made with afs_make_call()).  This
      appears to be due to the changes introduced by:
      
      	commit 05949945
      	Author: Tejun Heo <tj@kernel.org>
      	Date:   Fri Mar 7 10:24:50 2014 -0500
      	Subject: afs: don't use PREPARE_WORK
      
      which didn't go far enough.  The problem is due to:
      
       (1) The aforementioned commit introduced a separate handler function pointer
           in the call, call->async_workfn, in addition to the original workqueue
           item, call->async_work, for asynchronous operations, because the
           workqueue subsystem cannot handle the work item's callback pointer
           being changed whilst the item is queued or being processed.
      
       (2) afs_async_workfn() was introduced in that commit to be the callback for
           call->async_work.  Its sole purpose is to run whatever call->async_workfn
           points to.
      
       (3) call->async_workfn is only used from afs_async_workfn(), which is only
           set on async_work by afs_collect_incoming_call() - ie. for incoming
           calls.
      
       (4) call->async_workfn is *not* set by afs_make_call() when outgoing calls
           are made, and call->async_work is set to afs_process_async_call() - and
           not afs_async_workfn().
      
       (5) afs_process_async_call() now changes call->async_workfn rather than
           call->async_work to point to afs_delete_async_call() to clean up, but this
           is only effective for incoming calls because call->async_work does not
           point to afs_async_workfn() for outgoing calls.
      
       (6) Because, for outgoing calls, call->async_work remains pointing to
           afs_process_async_call(), this results in an infinite loop.
      
       Instead, make the workqueue uniformly vector through call->async_workfn,
       via afs_async_workfn(), and simply initialise call->async_workfn to point
       to afs_process_async_call() in afs_make_call().
      Signed-off-by: Nathaniel Wesley Filardo <nwf@cs.jhu.edu>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      150a6b47
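      A hedged sketch of the fix as described above (function names are taken
      from the commit text; the bodies are abridged and illustrative, the real
      afs_make_call() signature has more parameters, and the afs_async_calls
      workqueue is assumed from fs/afs/rxrpc.c of that era):

      static void afs_process_async_call(struct afs_call *call);
      static void afs_delete_async_call(struct afs_call *call);
      static void afs_async_workfn(struct work_struct *work);
      static struct workqueue_struct *afs_async_calls;        /* assumed */

      /* Outgoing calls now use the same vectoring as incoming ones: the work
       * item always runs afs_async_workfn() (see the previous commit), and the
       * per-call handler starts out as afs_process_async_call(). */
      static int afs_make_call_sketch(struct afs_call *call)
      {
              call->async_workfn = afs_process_async_call;
              INIT_WORK(&call->async_work, afs_async_workfn);
              /* ... issue the call over AF_RXRPC (omitted) ... */
              return 0;
      }

      /* Because every call now dispatches through call->async_workfn,
       * retargeting it to the cleanup handler works for outgoing calls too,
       * so module unload no longer spins on work that can never complete. */
      static void afs_process_async_call(struct afs_call *call)
      {
              /* ... deliver replies, detect completion (omitted) ... */
              call->async_workfn = afs_delete_async_call;
              queue_work(afs_async_calls, &call->async_work);
      }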