1. 20 Jul 2016, 1 commit
  2. 25 Jun 2016, 3 commits
  3. 26 May 2016, 1 commit
  4. 18 May 2016, 8 commits
    • pnfs: make pnfs_layout_process more robust · 1b3c6d07
      Committed by Jeff Layton
      pnfs_layout_process can currently return NULL when layoutgets are
      blocked. Fix it to return -EAGAIN in that case, so that
      pnfs_update_layout can handle it properly.
      
      Also, clean up and simplify the error handling -- eliminate "status"
      and just use "lseg" (a simplified sketch of the new return
      convention follows this entry).
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      1b3c6d07
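      A minimal, self-contained sketch of the return convention described
      above: an ERR_PTR-style error instead of a bare NULL, so the caller
      can tell "retry" apart from "no layout". The helpers and types below
      are simplified stand-ins, not the kernel's definitions.

      #include <errno.h>
      #include <stdio.h>

      /* Simplified stand-ins for the kernel's ERR_PTR helpers. */
      #define MAX_ERRNO 4095
      static inline void *ERR_PTR(long error) { return (void *)error; }
      static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
      static inline int IS_ERR(const void *ptr)
      {
              return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
      }

      struct lseg { int id; };

      static struct lseg the_seg = { .id = 1 };

      /* Hypothetical: return ERR_PTR(-EAGAIN) when layoutgets are blocked,
       * rather than a NULL the caller cannot tell apart from other failures. */
      static struct lseg *layout_process(int layoutgets_blocked)
      {
              if (layoutgets_blocked)
                      return ERR_PTR(-EAGAIN);
              return &the_seg;
      }

      int main(void)
      {
              struct lseg *lseg = layout_process(1);

              if (IS_ERR(lseg))
                      printf("retryable error %ld: redo the layout lookup\n", PTR_ERR(lseg));
              else
                      printf("got lseg %d\n", lseg->id);
              return 0;
      }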
    • pnfs: rework LAYOUTGET retry handling · 183d9e7b
      Committed by Jeff Layton
      There are several problems in the way a stateid is selected for a
      LAYOUTGET operation:
      
      We pick a stateid to use in the RPC prepare op, but that makes
      it difficult to serialize LAYOUTGETs that use the open stateid. That
      serialization is done in pnfs_update_layout, which occurs well before
      the rpc_prepare operation.
      
      Between those two events, the i_lock is dropped and reacquired.
      pnfs_update_layout can find that the list has lsegs in it and not do any
      serialization, but then later pnfs_choose_layoutget_stateid ends up
      choosing the open stateid.
      
      This patch changes the client to select the stateid to use in the
      LAYOUTGET earlier, when we're searching for a usable layout segment.
      This way we can do it all while holding the i_lock the first time, and
      ensure that we serialize any LAYOUTGET call that uses a non-layout
      stateid.
      
      This also means a rework of how LAYOUTGET replies are handled, as we
      must now get the latest stateid if we want to retransmit in response
      to a retryable error.
      
      Most of those errors boil down to the fact that the layout state has
      changed in some fashion. Thus, what we really want to do is to re-search
      for a layout when it fails with a retryable error, so that we can avoid
      reissuing the RPC at all if possible.
      
      While the LAYOUTGET RPC is async, the initiating thread always waits for
      it to complete, so it's effectively synchronous anyway. Currently, when
      we need to retry a LAYOUTGET because of an error, we drive that retry
      via the rpc state machine.
      
      This means that once the call has been submitted, it runs until it
      completes. So, we must move the error handling for this RPC out of the
      rpc_call_done operation and into the caller.
      
      In order to handle errors like NFS4ERR_DELAY properly, we must also
      pass a pointer to the sliding timeout, which is now moved to the stack
      in pnfs_update_layout.
      
      The complicating errors are -NFS4ERR_RECALLCONFLICT and
      -NFS4ERR_LAYOUTTRYLATER, as those involve a timeout after which we
      give up and return NULL to the caller. So, there is some special
      handling for those errors to ensure that the layers driving the
      retries can handle them appropriately (a simplified sketch of the
      new stateid selection follows this entry).
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      183d9e7b
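      A minimal sketch of the stateid-selection idea described above, with
      a pthread mutex standing in for the i_lock and invented structure
      and field names (none of these are the kernel's): the stateid is
      chosen while searching, under a single lock hold, and "first"
      LAYOUTGETs that use the open stateid are serialized.

      #include <pthread.h>
      #include <stdio.h>

      /* Illustrative structures only -- not the kernel's definitions. */
      struct stateid { unsigned int seqid; unsigned char other[12]; };

      struct layout_hdr {
              pthread_mutex_t lock;          /* stand-in for inode->i_lock    */
              int have_lsegs;                /* any layout segments cached?   */
              struct stateid layout_stateid;
              struct stateid open_stateid;
              int first_layoutget_in_flight; /* serialize open-stateid gets   */
      };

      /*
       * Pick the stateid for LAYOUTGET at search time, under one lock hold,
       * instead of later in the RPC prepare op.  Returns 0 and fills *dst,
       * or -1 if another LAYOUTGET using the open stateid is already in
       * flight, in which case the caller should wait and re-search.
       */
      static int choose_layoutget_stateid(struct layout_hdr *lo, struct stateid *dst)
      {
              int ret = 0;

              pthread_mutex_lock(&lo->lock);
              if (lo->have_lsegs) {
                      *dst = lo->layout_stateid;            /* normal case          */
              } else if (!lo->first_layoutget_in_flight) {
                      *dst = lo->open_stateid;              /* first LAYOUTGET      */
                      lo->first_layoutget_in_flight = 1;    /* serialize the others */
              } else {
                      ret = -1;                             /* wait, then re-search */
              }
              pthread_mutex_unlock(&lo->lock);
              return ret;
      }

      int main(void)
      {
              struct layout_hdr lo = { .lock = PTHREAD_MUTEX_INITIALIZER,
                                       .open_stateid = { .seqid = 1 } };
              struct stateid sid;

              if (choose_layoutget_stateid(&lo, &sid) == 0)
                      printf("LAYOUTGET will use stateid seqid %u\n", sid.seqid);
              return 0;
      }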
    • pnfs: lift retry logic from send_layoutget to pnfs_update_layout · 83026d80
      Committed by Jeff Layton
      If we get back something like NFS4ERR_OLD_STATEID, that will be
      translated into -EAGAIN, and the do/while loop in send_layoutget
      will drive the call again.
      
      This is not quite what we want, I think. An error like that is a
      sign that something has changed. That something could have been a
      concurrent LAYOUTGET that would give us a usable lseg.
      
      Lift the retry logic into pnfs_update_layout instead. That allows
      us to redo the layout search, and may spare us from having to issue
      an RPC at all (see the sketch after this entry).
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      83026d80
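      A self-contained sketch of the lifted retry loop (illustrative names,
      not the kernel's): on a retryable error the caller backs off and
      redoes the layout search, which may find a segment installed by a
      concurrent LAYOUTGET and so avoid reissuing the RPC.

      #include <errno.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <unistd.h>

      struct lseg { int id; };

      static struct lseg cached;      /* pretend layout-segment cache */
      static int cache_filled;
      static int attempts;

      static struct lseg *find_cached_lseg(void)
      {
              return cache_filled ? &cached : NULL;
      }

      /* Pretend RPC: the first call fails with a retryable error, the
       * second succeeds and populates the cache. */
      static int send_layoutget_once(void)
      {
              if (attempts++ == 0)
                      return -EAGAIN;
              cached.id = 42;
              cache_filled = 1;
              return 0;
      }

      static struct lseg *update_layout(void)
      {
              unsigned int delay_ms = 100;    /* sliding backoff on the stack */
              struct lseg *lseg;
              int err;

      lookup_again:
              lseg = find_cached_lseg();      /* a concurrent LAYOUTGET may have won */
              if (lseg)
                      return lseg;

              err = send_layoutget_once();
              if (err == -EAGAIN) {
                      /* Layout state changed: back off and redo the search
                       * instead of blindly retransmitting the old RPC. */
                      usleep(delay_ms * 1000);
                      if (delay_ms < 15000)
                              delay_ms *= 2;
                      goto lookup_again;
              }
              if (err)
                      return NULL;            /* non-retryable failure */

              goto lookup_again;              /* success: pick the fresh lseg up */
      }

      int main(void)
      {
              struct lseg *lseg = update_layout();

              printf("lseg id: %d\n", lseg ? lseg->id : -1);
              return 0;
      }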
    • pnfs: fix bad error handling in send_layoutget · d03ab29d
      Committed by Jeff Layton
      Currently, the code will clear the fail bit if we get back a fatal
      error. I don't think that's correct -- we want to clear that bit
      only if we do not get a fatal error (see the sketch after this
      entry).
      
      Fixes: 0bcbf039 (nfs: handle request add failure properly)
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      d03ab29d
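      A tiny sketch of the condition being fixed. The flag name and the
      fatal-error set are made up for illustration; the point is that the
      fail bit is cleared only when the error is NOT fatal.

      #include <errno.h>
      #include <stdio.h>

      #define LO_FAIL_BIT 0x1          /* illustrative "layoutget failed" flag */

      static int lo_flags = LO_FAIL_BIT;

      static int error_is_fatal(int err)
      {
              return err == -EIO || err == -ENOSPC;   /* illustrative set only */
      }

      static void handle_layoutget_error(int err)
      {
              /* The fix: clear the fail bit only for non-fatal errors;
               * a fatal error leaves the bit alone. */
              if (!error_is_fatal(err))
                      lo_flags &= ~LO_FAIL_BIT;
      }

      int main(void)
      {
              handle_layoutget_error(-EIO);
              printf("after fatal error:     fail bit = %d\n", lo_flags & LO_FAIL_BIT);

              handle_layoutget_error(-EAGAIN);
              printf("after non-fatal error: fail bit = %d\n", lo_flags & LO_FAIL_BIT);
              return 0;
      }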
    • pnfs: only tear down lsegs that precede seqid in LAYOUTRETURN args · 6d597e17
      Committed by Jeff Layton
      LAYOUTRETURN is "special" in that servers and clients are expected
      to work with old stateids. When the client sends a LAYOUTRETURN
      carrying an old stateid, the server is expected to tear down only
      the layout segments that were present when that seqid was current.
      Ensure that the client handles its accounting accordingly (see the
      sketch after this entry).
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      6d597e17
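      An illustrative sketch of the accounting, with simplified types that
      are not the kernel's: each segment remembers the seqid under which
      it was granted, and a LAYOUTRETURN carrying an old stateid only
      frees segments that were present when that seqid was current.

      #include <stdio.h>

      struct lseg {
              unsigned int seq;   /* stateid seqid current when this segment was granted */
              int valid;
      };

      /* Free only the segments granted at or before return_seq; newer ones,
       * handed out after that stateid, are left alone.  (Seqid wraparound
       * is ignored here for simplicity.) */
      static void return_segments(struct lseg *segs, int n, unsigned int return_seq)
      {
              for (int i = 0; i < n; i++) {
                      if (segs[i].valid && segs[i].seq <= return_seq) {
                              segs[i].valid = 0;
                              printf("freeing lseg granted under seqid %u\n", segs[i].seq);
                      }
              }
      }

      int main(void)
      {
              struct lseg segs[] = {
                      { .seq = 1, .valid = 1 }, { .seq = 2, .valid = 1 },
                      { .seq = 3, .valid = 1 }, { .seq = 4, .valid = 1 },
              };

              /* LAYOUTRETURN sent with an old stateid whose seqid was 2:
               * the segments from seqids 1 and 2 go, 3 and 4 survive. */
              return_segments(segs, 4, 2);
              return 0;
      }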
    • pnfs: keep track of the return sequence number in pnfs_layout_hdr · 3982a6a2
      Committed by Jeff Layton
      When we want to selectively do a LAYOUTRETURN, we need to specify a
      stateid that represents the most recent layout acquisition that is
      to be returned.
      
      When we mark a layout stateid to be returned, we update the return
      sequence number in the layout header with that value, if it's newer
      than the existing one. Then, when we go to do a LAYOUTRETURN on the
      layout header put, we overwrite the seqid in the stateid with the
      saved one, and then zero it out (see the sketch after this entry).
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      3982a6a2
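      A minimal sketch of the bookkeeping described above, with invented
      field names and seqid wraparound ignored: remember the newest seqid
      marked for return, substitute it into the stateid when the
      LAYOUTRETURN is finally sent, then zero the record.

      #include <stdio.h>

      struct stateid { unsigned int seqid; };

      struct layout_hdr {
              struct stateid stateid;   /* current layout stateid         */
              unsigned int return_seq;  /* newest seqid marked for return */
      };

      /* Record the seqid to return, keeping the newer of the two. */
      static void mark_for_return(struct layout_hdr *lo, unsigned int seq)
      {
              if (seq > lo->return_seq)
                      lo->return_seq = seq;
      }

      /* On the final layout-header put, send LAYOUTRETURN with the saved
       * seqid substituted into the stateid, then zero the record. */
      static void layout_hdr_put(struct layout_hdr *lo)
      {
              struct stateid ret = lo->stateid;

              if (lo->return_seq) {
                      ret.seqid = lo->return_seq;
                      lo->return_seq = 0;
              }
              printf("LAYOUTRETURN carries seqid %u\n", ret.seqid);
      }

      int main(void)
      {
              struct layout_hdr lo = { .stateid = { .seqid = 7 } };

              mark_for_return(&lo, 3);
              mark_for_return(&lo, 5);   /* the newer mark wins      */
              layout_hdr_put(&lo);       /* returns seqid 5, not 7   */
              return 0;
      }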
    • pnfs: record sequence in pnfs_layout_segment when it's created · 66755283
      Committed by Jeff Layton
      In later patches, we're going to teach the client to be more selective
      about how it returns layouts. This means keeping a record of what the
      stateid's seqid was at the time that the server handed out a layout
      segment.
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      66755283
  5. 09 May 2016, 1 commit
  6. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a
      *long* time ago with the promise that one day it would be possible
      to implement the page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Switching globally to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special. They
      are not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle
      using the script below.  For some reason coccinelle doesn't patch
      header files; I've called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the change to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.
      I'll fix them manually in a separate patch.  Comments and
      documentation will also be addressed in a separate patch.  (A small
      C illustration of the resulting change follows this entry, after the
      semantic patch.)
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
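      A small, self-contained illustration of what the conversion amounts
      to in filesystem code (the macros here are local definitions, not
      the kernel's): with PAGE_CACHE_SHIFT equal to PAGE_SHIFT, the old
      PAGE_CACHE_* arithmetic collapses to the plain PAGE_* forms, and
      page_cache_get()/page_cache_release() become get_page()/put_page().

      #include <stdio.h>

      #define PAGE_SHIFT 12
      #define PAGE_SIZE  (1UL << PAGE_SHIFT)
      #define PAGE_MASK  (~(PAGE_SIZE - 1))

      /* Before:  index  = pos >> PAGE_CACHE_SHIFT;
       *          offset = pos & ~PAGE_CACHE_MASK;
       * After the conversion the same computation uses PAGE_SHIFT/PAGE_MASK. */
      static void split_pos(unsigned long long pos,
                            unsigned long *index, unsigned long *offset)
      {
              *index  = (unsigned long)(pos >> PAGE_SHIFT);
              *offset = (unsigned long)(pos & ~PAGE_MASK);
      }

      int main(void)
      {
              unsigned long index, offset;

              split_pos(12345, &index, &offset);
              printf("index=%lu offset=%lu (PAGE_SIZE=%lu)\n", index, offset, PAGE_SIZE);
              return 0;
      }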
  7. 23 Feb 2016, 2 commits
  8. 16 Feb 2016, 2 commits
  9. 28 Jan 2016, 1 commit
  10. 27 Jan 2016, 1 commit
    • pNFS: Fix missing layoutreturn calls · 13c13a6a
      Committed by Trond Myklebust
      The layoutreturn code currently relies on pnfs_put_lseg() to initiate the
      RPC call when conditions are right. A problem arises when we want to
      free the layout segment from inside an inode->i_lock section (e.g. in
      pnfs_clear_request_commit()), since we cannot sleep.
      
      The workaround is to move the actual call to pnfs_send_layoutreturn()
      to pnfs_put_layout_hdr(), which doesn't have this restriction (see
      the sketch after this entry).
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      13c13a6a
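      A simplified, user-space sketch of the reasoning (a pthread mutex
      stands in for inode->i_lock; the names and refcounting are invented):
      the segment put, which may run under the lock, only records that a
      return is needed, and the blocking LAYOUTRETURN is issued later from
      the header put, where sleeping is allowed.

      #include <pthread.h>
      #include <stdio.h>

      struct layout_hdr {
              pthread_mutex_t lock;     /* stand-in for inode->i_lock */
              int refcount;
              int return_needed;
      };

      static void send_layoutreturn(struct layout_hdr *lo)
      {
              (void)lo;
              printf("LAYOUTRETURN RPC sent (this path may sleep)\n");
      }

      /* Called with lo->lock held; must not sleep, so only record intent. */
      static void put_lseg_locked(struct layout_hdr *lo)
      {
              lo->return_needed = 1;
      }

      /* Called without the lock held; safe to issue the blocking RPC here. */
      static void put_layout_hdr(struct layout_hdr *lo)
      {
              int send;

              pthread_mutex_lock(&lo->lock);
              send = (--lo->refcount == 0) && lo->return_needed;
              pthread_mutex_unlock(&lo->lock);

              if (send)
                      send_layoutreturn(lo);
      }

      int main(void)
      {
              struct layout_hdr lo = { .lock = PTHREAD_MUTEX_INITIALIZER, .refcount = 1 };

              pthread_mutex_lock(&lo.lock);
              put_lseg_locked(&lo);     /* e.g. from pnfs_clear_request_commit() */
              pthread_mutex_unlock(&lo.lock);

              put_layout_hdr(&lo);      /* last put: now the RPC can go out */
              return 0;
      }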
  11. 05 Jan 2016, 7 commits
  12. 01 Jan 2016, 1 commit
  13. 29 Dec 2015, 7 commits
  14. 28 Dec 2015, 2 commits
  15. 14 Dec 2015, 1 commit
  16. 26 Nov 2015, 1 commit