1. 01 Jun, 2018 — 2 commits
    • NFS: use cond_resched() when restarting walk of delegation list. · 3ca951b6
      Authored by NeilBrown
      In three places we walk the list of delegations for an nfs_client
      until an interesting one is found, then we act on that delegation
      and restart the walk.
      
      New delegations are added to the end of the list and the interesting
      delegations are usually old, so in many cases we won't repeat
      a long walk over and over again, but it is possible - particularly if
      the first server in the list has a large number of uninteresting
      delegations.
      
      In each case the work done on interesting delegations will often
      complete without sleeping, so this could loop many times without
      giving up the CPU.
      
      So add a cond_resched() at an appropriate point to avoid hogging the
      CPU for too long.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
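      The restart-scan pattern the commit describes can be sketched in a
      self-contained userspace program. This is a hypothetical simplification,
      not the kernel code: the list, the "interesting" flag, and sched_yield()
      standing in for cond_resched() are all assumptions for illustration.

      ```c
      #include <assert.h>
      #include <sched.h>
      #include <stdio.h>

      /* Hypothetical sketch: each time an "interesting" entry is found we act
       * on it and restart the walk from the head of the list.  Without a yield
       * point, a long list of uninteresting entries means the loop can hog the
       * CPU, since the per-entry work often completes without sleeping. */
      struct delegation { int interesting; int handled; };

      static int restart_scan(struct delegation *list, int n)
      {
          int handled = 0;
      restart:
          /* Stand-in for cond_resched(): give other tasks a chance to run
           * before repeating a potentially long walk. */
          sched_yield();
          for (int i = 0; i < n; i++) {
              if (list[i].interesting && !list[i].handled) {
                  list[i].handled = 1;  /* "act on" the delegation */
                  handled++;
                  goto restart;         /* walk the list again from the top */
              }
          }
          return handled;
      }

      int main(void)
      {
          struct delegation d[5] = { {0,0}, {1,0}, {0,0}, {1,0}, {1,0} };
          assert(restart_scan(d, 5) == 3);
          puts("ok");
          return 0;
      }
      ```

      The yield sits at the restart point, so it runs once per full rescan
      rather than once per entry, mirroring where the commit places
      cond_resched().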
    • NFS: slight optimization for walking list for delegations · f3893491
      Authored by NeilBrown
      There are 3 places where we walk the list of delegations
      for an nfs_client.
      In each case there are two nested loops, one for nfs_servers
      and one for nfs_delegations.
      
      When we find an interesting delegation we try to get an active
      reference to the server.  If that fails, it is pointless to
      continue to look at the other delegations for that server, as
      we will never be able to get an active reference.
      So instead of continuing in the inner loop, break out
      and continue in the outer loop.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
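      The nested-loop shortcut can be shown with a toy model. This is a
      hypothetical sketch, not the kernel implementation: the server array,
      the `referenced` flag, and the iteration counter are illustrative
      assumptions.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical sketch of the optimization: once we fail to get an
       * active reference on a server, none of its remaining delegations can
       * be processed either, so break to the next server instead of
       * continuing the inner loop. */
      struct server { int referenced; int ndeleg; };

      static int count_inner_iterations(const struct server *srv, int nsrv)
      {
          int iterations = 0;
          for (int s = 0; s < nsrv; s++) {               /* outer: servers */
              for (int d = 0; d < srv[s].ndeleg; d++) {  /* inner: delegations */
                  iterations++;
                  if (!srv[s].referenced)
                      break;  /* no active reference: skip the rest */
                  /* ... process delegation d of server s ... */
              }
          }
          return iterations;
      }

      int main(void)
      {
          /* Second server is unreferenced: only 1 of its 100 delegations
           * is visited before we move on to the next server. */
          struct server srv[2] = { { 1, 3 }, { 0, 100 } };
          assert(count_inner_iterations(srv, 2) == 4);
          puts("ok");
          return 0;
      }
      ```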
  2. 11 Apr, 2018 — 2 commits
  3. 29 Jan, 2018 — 1 commit
  4. 18 Nov, 2017 — 1 commit
  5. 15 Aug, 2017 — 1 commit
  6. 02 Dec, 2016 — 1 commit
  7. 28 Sep, 2016 — 9 commits
  8. 18 May, 2016 — 1 commit
  9. 08 Oct, 2015 — 1 commit
  10. 21 Sep, 2015 — 1 commit
    • NFSv4: Recovery of recalled read delegations is broken · 24311f88
      Authored by Trond Myklebust
      When a read delegation is being recalled, and we're reclaiming the
      cached opens, we need to make sure that we only reclaim read-only
      modes.
      A previous attempt to do this relied on retrieving the delegation
      type from the nfs4_opendata structure. Unfortunately, as Kinglong
      pointed out, this field can only be set when performing reboot recovery.
      
      Furthermore, if we call nfs4_open_recover(), then we end up clobbering
      the state->flags for all modes that we're not recovering...
      
      The fix is to have the delegation recall code pass this information
      to the recovery call, and then refactor the recovery code so that
      nfs4_open_delegation_recall() does not need to call nfs4_open_recover().
      Reported-by: Kinglong Mee <kinglongmee@gmail.com>
      Fixes: 39f897fd ("NFSv4: When returning a delegation, don't...")
      Tested-by: Kinglong Mee <kinglongmee@gmail.com>
      Cc: NeilBrown <neilb@suse.com>
      Cc: stable@vger.kernel.org # v4.2+
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
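      The core of the fix, passing the delegation type down so recovery
      reclaims only the modes that delegation covers, can be modeled with a
      small mode-mask filter. This is a hypothetical sketch under simplified
      assumptions; the flag names mimic kernel fmode constants but the
      function is illustrative, not the real recovery path.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical sketch: the recall path passes the delegation type to
       * the recovery code, which reclaims only the open modes that the
       * (read) delegation actually covers, leaving other state untouched
       * instead of clobbering the flags for modes not being recovered. */
      #define FMODE_READ   0x1
      #define FMODE_WRITE  0x2

      static int modes_to_reclaim(int cached_modes, int delegation_type)
      {
          /* For a read delegation, only read-only opens may be reclaimed. */
          if (delegation_type == FMODE_READ)
              return cached_modes & FMODE_READ;
          return cached_modes;  /* a read/write delegation covers all modes */
      }

      int main(void)
      {
          /* Cached opens include a write mode, but the recalled delegation
           * is read-only: recovery must not touch the write state. */
          assert(modes_to_reclaim(FMODE_READ | FMODE_WRITE, FMODE_READ)
                 == FMODE_READ);
          assert(modes_to_reclaim(FMODE_READ | FMODE_WRITE,
                                  FMODE_READ | FMODE_WRITE)
                 == (FMODE_READ | FMODE_WRITE));
          puts("ok");
          return 0;
      }
      ```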
  11. 08 Sep, 2015 — 2 commits
  12. 28 Mar, 2015 — 1 commit
  13. 13 Mar, 2015 — 1 commit
  14. 03 Mar, 2015 — 4 commits
  15. 02 Mar, 2015 — 1 commit
  16. 14 Feb, 2015 — 1 commit
  17. 25 Jan, 2015 — 1 commit
  18. 17 Jan, 2015 — 3 commits
  19. 13 Nov, 2014 — 2 commits
  20. 13 Jul, 2014 — 1 commit
  21. 03 Mar, 2014 — 1 commit
  22. 22 Aug, 2013 — 1 commit
  23. 29 Jun, 2013 — 1 commit
    • locks: protect most of the file_lock handling with i_lock · 1c8c601a
      Authored by Jeff Layton
      Having a global lock that protects all of this code is a clear
      scalability problem. Instead of doing that, move most of the code to be
      protected by the i_lock instead. The exceptions are the global lists
      that the ->fl_link sits on, and the ->fl_block list.
      
      ->fl_link is what connects these structures to the
      global lists, so we must ensure that we hold those locks when iterating
      over or updating these lists.
      
      Furthermore, sound deadlock detection requires that we hold the
      blocked_list state steady while checking for loops. We also must ensure
      that the search and update to the list are atomic.
      
      For the checking and insertion side of the blocked_list, push the
      acquisition of the global lock into __posix_lock_file and ensure that
      checking and update of the blocked_list is done without dropping the
      lock in between.
      
      On the removal side, when waking up blocked lock waiters, take the
      global lock before walking the blocked list and dequeue the waiters from
      the global list prior to removal from the fl_block list.
      
      With this, deadlock detection should be race free while we minimize
      excessive file_lock_lock thrashing.
      
      Finally, in order to avoid a lock inversion problem when handling
      /proc/locks output we must ensure that manipulations of the fl_block
      list are also protected by the file_lock_lock.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
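      The key invariant, that the deadlock check and the blocked-list insert
      happen in one critical section under the global lock, can be sketched
      with pthreads. This is a hypothetical userspace model, not the kernel
      code: the waiter struct, the toy loop check, and the mutex standing in
      for file_lock_lock are all assumptions for illustration.

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>

      /* Hypothetical sketch of the locking split: per-inode state would be
       * guarded by a per-inode lock (i_lock), while the global blocked list
       * stays under one global lock.  Sound deadlock detection requires
       * checking for loops and inserting the waiter without dropping that
       * lock in between, so the list cannot change mid-check. */
      static pthread_mutex_t file_lock_lock = PTHREAD_MUTEX_INITIALIZER;

      struct waiter { struct waiter *next; int id; };
      static struct waiter *blocked_list;

      static int would_deadlock(int id)
      {
          /* Toy loop check: a waiter with the same id is already queued. */
          for (struct waiter *w = blocked_list; w; w = w->next)
              if (w->id == id)
                  return 1;
          return 0;
      }

      /* Check and insert atomically under the global lock. */
      static int block_on_lock(struct waiter *w)
      {
          int ret = 0;
          pthread_mutex_lock(&file_lock_lock);
          if (would_deadlock(w->id)) {
              ret = -1;               /* -EDEADLK in the real code */
          } else {
              w->next = blocked_list; /* insert while still holding the lock */
              blocked_list = w;
          }
          pthread_mutex_unlock(&file_lock_lock);
          return ret;
      }

      int main(void)
      {
          struct waiter a = { 0, 1 }, b = { 0, 1 };
          assert(block_on_lock(&a) == 0);   /* first waiter queues normally */
          assert(block_on_lock(&b) == -1);  /* loop detected under the lock */
          puts("ok");
          return 0;
      }
      ```

      Splitting the check and insert across two lock acquisitions would let
      another thread change the list in between, which is exactly the race
      the commit closes.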