1. 09 June 2013 — 1 commit
  2. 21 May 2013 — 1 commit
    • NFSv4.1 Fix a pNFS session draining deadlock · 774d5f14
      Authored by Andy Adamson
      On a CB_RECALL the callback service thread flushes the inode using
      filemap_flush prior to scheduling the state manager thread to return the
      delegation. When pNFS is used and I/O has not yet gone to the data server
      servicing the inode, a LAYOUTGET can precede the I/O. Unlike the async
      filemap_flush call, the LAYOUTGET must proceed to completion.
      
      If the state manager starts to recover data while the inode flush is sending
      the LAYOUTGET, a deadlock occurs: the callback service thread holds the
      single callback session slot until the flushing is done, which blocks the
      state manager thread, while the state manager thread has set the session
      draining bit, which puts the inode flush's LAYOUTGET RPC to sleep on the
      fore channel slot table waitq.
      
      Separate the draining of the back channel from the draining of the fore channel
      by moving the NFS4_SESSION_DRAINING bit from session scope into the fore
      and back slot tables.  Drain the back channel first, allowing the LAYOUTGET
      call to proceed (and fail) so that the callback service thread frees the
      callback slot, then drain the fore channel.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
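The split-draining fix above can be modeled in a few lines. This is a hedged userspace sketch, not the kernel code: the names `slot_table`, `session_drain`, and `slot_table_try_take` are hypothetical, and the waits are elided as comments. The point it illustrates is that each slot table now carries its own draining flag, and the back channel drains first so an in-flight fore channel LAYOUTGET can still complete and let the callback thread release its slot.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of one slot table; the draining flag replaces the
 * old session-wide NFS4_SESSION_DRAINING bit. */
struct slot_table {
    bool draining;      /* new RPCs are refused while set */
    int  slots_in_use;  /* outstanding RPCs holding slots */
};

struct session {
    struct slot_table bc;  /* back channel (callback) slot table */
    struct slot_table fc;  /* fore channel slot table */
};

/* An RPC may take a slot only while its own table is not draining. */
static bool slot_table_try_take(struct slot_table *tbl)
{
    if (tbl->draining)
        return false;
    tbl->slots_in_use++;
    return true;
}

/* Drain the back channel first: the fore channel stays open, so the
 * inode flush's LAYOUTGET can proceed (and fail), freeing the single
 * callback slot; only then is the fore channel drained. */
static void session_drain(struct session *s)
{
    s->bc.draining = true;
    /* ... wait for s->bc.slots_in_use to reach 0 ... */
    s->fc.draining = true;
    /* ... wait for s->fc.slots_in_use to reach 0 ... */
}
```

Under the old scheme both channels shared one draining bit, so the LAYOUTGET slept on the fore channel waitq while the callback slot stayed held, closing the loop described above.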
  3. 06 April 2013 — 1 commit
  4. 15 February 2013 — 1 commit
    • NFSv4.1: Fix bulk recall and destroy of layouts · fd9a8d71
      Authored by Trond Myklebust
      The current code in pnfs_destroy_all_layouts() assumes that removing
      the layout from the server->layouts list is sufficient to make it
      invisible to other processes. This ignores the fact that most
      users access the layout through the nfs_inode->layout...
      There is further breakage due to lack of reference counting of the
      layouts, meaning that the whole thing Oopses at the drop of a hat.
      
      The code in initiate_bulk_draining() is almost correct, and can be
      used as a model for pnfs_destroy_all_layouts(), so move that
      code to pnfs.c, and refactor the code to allow us to choose between
      a single filesystem bulk recall, and a recall of all layouts.
      Also note that initiate_bulk_draining() currently calls iput() while
      holding locks. Fix that too.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
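The iput()-under-lock problem mentioned at the end has a standard shape worth spelling out. Below is a minimal userspace sketch, not the pnfs code: `fake_inode`, `fake_iput`, and `bulk_drain` are invented names, and a pthread mutex stands in for the client spinlock. The pattern is to collect the matching entries onto a private list while holding the lock, drop the lock, and only then call the release routine that may sleep.

```c
#include <assert.h>
#include <pthread.h>

#define MAX_LAYOUTS 8

struct fake_inode { int refcount; };

static pthread_mutex_t clp_lock = PTHREAD_MUTEX_INITIALIZER;
static struct fake_inode *layout_list[MAX_LAYOUTS];
static int layout_count;

/* Stands in for iput(): may sleep, so it must never run under clp_lock. */
static void fake_iput(struct fake_inode *inode)
{
    inode->refcount--;
}

/* Bulk drain: collect under the lock, release after dropping it. */
static void bulk_drain(void)
{
    struct fake_inode *todo[MAX_LAYOUTS];
    int n = 0, i;

    pthread_mutex_lock(&clp_lock);
    while (layout_count > 0)
        todo[n++] = layout_list[--layout_count];
    pthread_mutex_unlock(&clp_lock);

    for (i = 0; i < n; i++)
        fake_iput(todo[i]);   /* safe: no locks held here */
}
```

Calling the release routine inside the locked region instead is exactly the bug the commit fixes.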
  5. 06 January 2013 — 1 commit
  6. 06 December 2012 — 7 commits
  7. 05 November 2012 — 1 commit
  8. 29 September 2012 — 3 commits
  9. 06 March 2012 — 2 commits
  10. 03 March 2012 — 1 commit
  11. 02 March 2012 — 1 commit
    • NFSv4.1: Get rid of NFS4CLNT_LAYOUTRECALL · 0cb3284b
      Authored by Trond Myklebust
      The NFS4CLNT_LAYOUTRECALL bit is a long-term impediment to scalability: it
      effectively blocks all other recalls from a given server once any single
      layout recall is requested.
      
      If the recall is for a different file, then we don't care.
      If the recall applies to the same file, then we're in one of two situations:
      Either we are in the case of a replay of an existing request, in which case
      the session is supposed to deal with matters, or we are dealing with a
      completely different request, in which case we should just try to process
      it.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
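The case analysis in that message reduces to a small decision function. The sketch below uses invented names (`classify_recall`, the `recall_action` enum) and is only a model of the reasoning, not kernel code: with the client-wide bit gone, what to do with a recall depends only on whether it targets the same file and whether it replays a request the session already saw.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical outcomes: process the recall ourselves, or let the
 * session's replay cache deal with a duplicate of an existing request. */
enum recall_action { RECALL_PROCESS, RECALL_SESSION_REPLAY };

static enum recall_action classify_recall(bool same_file, bool is_replay)
{
    if (same_file && is_replay)
        return RECALL_SESSION_REPLAY;  /* the session handles replays */
    return RECALL_PROCESS;             /* different file, or a new request */
}
```

Either way, no path needs a global per-client serialization bit.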
  12. 07 February 2012 — 1 commit
  13. 01 February 2012 — 1 commit
  14. 05 January 2012 — 1 commit
  15. 04 August 2011 — 2 commits
    • NFSv4.1: Return NFS4ERR_BADSESSION to callbacks during session resets · 910ac68a
      Authored by Trond Myklebust
      If the client is in the process of resetting the session when it receives
      a callback, then returning NFS4ERR_DELAY may cause a deadlock with the
      DESTROY_SESSION call.
      
      Basically, if the client returns NFS4ERR_DELAY in response to the
      CB_SEQUENCE call, then the server is entitled to believe that the
      client is busy because it is already processing that call. In that
      case, the server is perfectly entitled to respond with
      NFS4ERR_BACK_CHAN_BUSY to any DESTROY_SESSION call.
      
      Fix this by having the client reply with NFS4ERR_BADSESSION in
      response to the callback if it is resetting the session.
      
      Cc: stable@kernel.org [2.6.38+]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
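The reply choice can be sketched as a tiny status function. This is a simplified model, not the kernel code: `cb_sequence_status` is an invented name, though the numeric error values are the ones assigned by RFC 5661 (NFS4ERR_DELAY = 10008, NFS4ERR_BADSESSION = 10052).

```c
#include <assert.h>
#include <stdbool.h>

/* Error codes as assigned by RFC 5661. */
enum { NFS4_OK = 0, NFS4ERR_DELAY = 10008, NFS4ERR_BADSESSION = 10052 };

/* Hypothetical helper: what should CB_SEQUENCE return while the client
 * is resetting the session? */
static int cb_sequence_status(bool session_resetting)
{
    /* NFS4ERR_DELAY would tell the server "busy processing this call",
     * entitling it to bounce DESTROY_SESSION with
     * NFS4ERR_BACK_CHAN_BUSY: a deadlock.  BADSESSION breaks the loop. */
    if (session_resetting)
        return NFS4ERR_BADSESSION;
    return NFS4_OK;
}
```

The server then retries the callback once the new session exists, instead of spinning against a session the client is tearing down.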
    • NFSv4.1: Fix the callback 'highest_used_slotid' behaviour · 55a67399
      Authored by Trond Myklebust
      Currently, there is no guarantee that we will call nfs4_cb_take_slot(),
      even though nfs4_callback_compound() will consistently call
      nfs4_cb_free_slot() provided the cb_process_state has set the 'clp' field.
      The result is that the next call to nfs4_cb_take_slot() can trigger its
      BUG_ON().
      
      This patch fixes the above problem by using the slot id that was taken in
      the CB_SEQUENCE operation as a flag for whether or not we need to call
      nfs4_cb_free_slot().
      It also fixes an atomicity problem: we need to set tbl->highest_used_slotid
      atomically with the check for NFS4_SESSION_DRAINING, otherwise we end up
      racing with the various tests in nfs4_begin_drain_session().
      
      Cc: stable@kernel.org [2.6.38+]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
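Both halves of that fix fit in a short model. The sketch below is hypothetical userspace code (invented names `cb_slot_table`, `cb_take_slot`, `cb_free_slot`; a pthread mutex standing in for the table lock), not the kernel implementation. It shows the slot id doubling as the "did we take a slot?" flag, and the draining check happening under the same lock as the `highest_used_slotid` update so the two are atomic with respect to the drain path.

```c
#include <assert.h>
#include <pthread.h>

#define NO_SLOT (-1)

struct cb_slot_table {
    pthread_mutex_t lock;
    int highest_used_slotid;  /* NO_SLOT when no slot is in use */
    int draining;             /* set by the drain path */
};

/* Returns the slot id taken, or NO_SLOT.  The draining check and the
 * highest_used_slotid update happen under one lock: atomic w.r.t. the
 * tests in the drain path. */
static int cb_take_slot(struct cb_slot_table *tbl)
{
    int slotid = NO_SLOT;

    pthread_mutex_lock(&tbl->lock);
    if (!tbl->draining && tbl->highest_used_slotid == NO_SLOT) {
        tbl->highest_used_slotid = 0;  /* the single callback slot */
        slotid = 0;
    }
    pthread_mutex_unlock(&tbl->lock);
    return slotid;
}

/* Freeing keys off the slot id taken in CB_SEQUENCE, so a compound
 * that never took a slot frees nothing (no BUG_ON on the next take). */
static void cb_free_slot(struct cb_slot_table *tbl, int slotid)
{
    if (slotid == NO_SLOT)
        return;
    pthread_mutex_lock(&tbl->lock);
    tbl->highest_used_slotid = NO_SLOT;
    pthread_mutex_unlock(&tbl->lock);
}
```

The unconditional free in the old code is what let take/free fall out of balance.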
  16. 13 July 2011 — 1 commit
  17. 30 May 2011 — 3 commits
  18. 12 March 2011 — 1 commit
    • pnfs: fix pnfs lock inversion of i_lock and cl_lock · f49f9baa
      Authored by Fred Isaman
      The pnfs code consistently used the lock order i_lock, then cl_lock.
      This conflicts with the NFS delegation code.  Rework the pnfs code
      to avoid taking both locks simultaneously.
      
      Currently the code takes both locks to add/remove the layout on a
      nfs_client list while atomically checking that the list of lsegs is
      empty.  To avoid this, we rely on existing serializations: when a
      layout is initialized with an lseg count of zero, LAYOUTGET's
      openstateid serialization is in effect, making it safe to assume the
      count stays zero unless we change it; and once a layout's lseg count
      drops to zero, it is marked DESTROYED and so stays at zero.
      Signed-off-by: Fred Isaman <iisaman@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
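The invariant that makes single-lock operation safe can be shown in miniature. This is a heavily simplified sketch under invented names (`struct layout`, `add_layout_to_client_list`, `put_last_lseg`), not the pnfs code: because an lseg count of zero cannot change underneath us (openstateid serialization on the way up, the DESTROYED mark on the way down), linking and unlinking the layout on the client list needs only `cl_lock`, never `i_lock` at the same time.

```c
#include <assert.h>
#include <pthread.h>

struct layout {
    int lseg_count;
    int destroyed;
    int on_client_list;
};

static pthread_mutex_t cl_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called with lseg_count == 0, serialized by the open stateid, so the
 * count is stable and no i_lock is needed while we link the layout. */
static void add_layout_to_client_list(struct layout *lo)
{
    pthread_mutex_lock(&cl_lock);
    lo->on_client_list = 1;
    pthread_mutex_unlock(&cl_lock);
}

/* Once the count drops to zero the layout is marked DESTROYED and can
 * never be revived, so unlinking likewise needs only cl_lock. */
static void put_last_lseg(struct layout *lo)
{
    lo->lseg_count--;
    if (lo->lseg_count == 0) {
        lo->destroyed = 1;
        pthread_mutex_lock(&cl_lock);
        lo->on_client_list = 0;
        pthread_mutex_unlock(&cl_lock);
    }
}
```

Since neither path nests `cl_lock` inside `i_lock` (or vice versa), the ABBA inversion with the delegation code disappears.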
  19. 26 January 2011 — 2 commits
  20. 07 January 2011 — 5 commits
  21. 25 October 2010 — 1 commit
  22. 07 August 2010 — 1 commit
  23. 23 June 2010 — 1 commit