1. 05 Nov 2014, 1 commit
  2. 01 Oct 2014, 2 commits
    • NFSv4.1: Fix an NFSv4.1 state renewal regression · d1f456b0
      Authored by Andy Adamson
      Commit 2f60ea6b ("NFSv4: The NFSv4.0 client must send RENEW calls if it
      holds a delegation") set the NFS4_RENEW_TIMEOUT flag in nfs4_renew_state,
      and stopped putting an nfs41_proc_async_sequence call (the NFSv4.1 lease
      renewal heartbeat call) on the wire to renew the NFSv4.1 state when the
      flag was not set.
      
      The NFS4_RENEW_TIMEOUT flag is set when "now" is after the last renewal
      (cl_last_renewal) plus the lease time divided by 3. This is arbitrary and
      sometimes plays out as follows:
      
      In normal operation, the only way a future state renewal call is put on
      the wire is via a call to nfs4_schedule_state_renewal, which schedules an
      nfs4_renew_state workqueue task. nfs4_renew_state determines whether
      NFS4_RENEW_TIMEOUT should be set, and then calls
      nfs41_proc_async_sequence, which only goes on the wire if the
      NFS4_RENEW_TIMEOUT flag is set. The nfs41_proc_async_sequence
      rpc_release function then schedules another state renewal via
      nfs4_schedule_state_renewal (a sketch follows at the end of this entry).
      
      Without this change we can get into a state where, once an application
      stops accessing the NFSv4.1 share, state renewal calls stop because the
      NFS4_RENEW_TIMEOUT flag is _not_ set. The only way to recover from this
      situation is a clientid re-establishment, which happens once the
      application resumes, the server has timed out the lease, and the
      subsequent SEQUENCE operation therefore returns NFS4ERR_BAD_SESSION.
      
      An example application:
      open, lock, write a file.
      
      sleep for 6 * lease (could be less)
      
      unlock, close.
      
      In the above example with NFSv4.1 delegations enabled, without this change,
      there are no OP_SEQUENCE state renewal calls during the sleep, and the
      clientid is recovered due to lease expiration on the close.
      
      This issue does not occur with NFSv4.1 delegations disabled, nor with
      NFSv4.0, with or without delegations enabled.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Link: http://lkml.kernel.org/r/1411486536-23401-1-git-send-email-andros@netapp.com
      Fixes: 2f60ea6b (NFSv4: The NFSv4.0 client must send RENEW calls...)
      Cc: stable@vger.kernel.org # 3.2.x
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
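
      A minimal sketch of the renewal loop described above, assuming the
      pre-patch behaviour (simplified from the kernel's nfs4_renew_state();
      cred lookup, locking, and the delegation check are omitted, so this is
      illustrative rather than verbatim source):

        /* Illustrative sketch only: simplified from nfs4_renew_state(). */
        static void nfs4_renew_state(struct work_struct *work)
        {
                struct nfs_client *clp = container_of(work,
                                struct nfs_client, cl_renewd.work);
                struct rpc_cred *cred = NULL;   /* cred lookup omitted */
                unsigned renew_flags = 0;

                /* The heartbeat is due once a third of the lease has
                 * elapsed since the last renewal (cl_last_renewal). */
                if (time_after(jiffies,
                               clp->cl_last_renewal + clp->cl_lease_time / 3))
                        renew_flags |= NFS4_RENEW_TIMEOUT;

                /* Pre-patch: nfs41_proc_async_sequence() only went on the
                 * wire when NFS4_RENEW_TIMEOUT was set; when it was not,
                 * nothing was sent and no further renewal was scheduled. */
                clp->cl_mvops->state_renewal_ops->sched_state_renewal(
                                clp, cred, renew_flags);
        }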
    • NFS: Implement SEEK · 1c6dcbe5
      Authored by Anna Schumaker
      The SEEK operation is used when an application makes an lseek call with
      either the SEEK_HOLE or SEEK_DATA flags set.  We fall back on
      nfs_file_llseek() if the server does not have SEEK support (a userspace
      example follows at the end of this entry).
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
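
      For context, a minimal userspace illustration of the call this operation
      serves: standard lseek(2) with SEEK_HOLE/SEEK_DATA. This example is
      ours, not part of the patch, and "sparse.dat" is a hypothetical file
      name:

        #define _GNU_SOURCE     /* for SEEK_HOLE / SEEK_DATA on glibc */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = open("sparse.dat", O_RDONLY);
                if (fd < 0)
                        return 1;

                /* On an NFS mount whose server supports SEEK this becomes
                 * an NFSv4.2 SEEK operation; otherwise the client answers
                 * via the generic fallback (nfs_file_llseek). */
                off_t data = lseek(fd, 0, SEEK_DATA);  /* first data */
                off_t hole = lseek(fd, 0, SEEK_HOLE);  /* first hole */

                printf("data at %lld, hole at %lld\n",
                       (long long)data, (long long)hole);
                close(fd);
                return 0;
        }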
  3. 25 Sep 2014, 1 commit
    • NFSv4: use exponential retry on NFS4ERR_DELAY for async requests. · 8478eaa1
      Authored by NeilBrown
      Currently, synchronous NFSv4 requests are retried with an exponential
      timeout (from 1/10 to 15 seconds), but asynchronous requests always use
      a 15-second retry.
      
      Some "async" requests are really synchronous though.  The
      async mechanism is used to allow the request to continue if
      the requesting process is killed.
      In those cases, an exponential retry is appropriate.
      
      For example, if two different clients both open a file and
      get a READ delegation, and one client then unlinks the file
      (while still holding an open file descriptor), that unlink
      will use the "silly-rename" handling, which is async.
      The first rename will result in NFS4ERR_DELAY while the
      delegation is reclaimed from the other client.  The rename
      will not be retried for 15 seconds, causing the unlink to take
      15 seconds rather than 100 msec.
      
      This patch only adds an exponential timeout for async unlink and
      async rename.  Other async calls, such as 'close', are sometimes
      waited for, so they might benefit from an exponential timeout too
      (a sketch of the backoff follows at the end of this entry).
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
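
      To make the numbers above concrete, a hedged sketch of the doubling
      backoff this patch applies (our plain-C illustration with made-up
      names; the kernel's actual bounds are NFS4_POLL_RETRY_MIN and
      NFS4_POLL_RETRY_MAX, i.e. 1/10 and 15 seconds):

        /* Illustrative only, not kernel source: the delay doubles on
         * each NFS4ERR_DELAY, clamped between 0.1s and 15s. */
        #define RETRY_MIN_MS   100      /* 1/10 second */
        #define RETRY_MAX_MS 15000      /* 15 seconds  */

        static long next_retry_delay_ms(long *timeout_ms)
        {
                long delay;

                if (*timeout_ms <= 0)
                        *timeout_ms = RETRY_MIN_MS;
                else if (*timeout_ms > RETRY_MAX_MS)
                        *timeout_ms = RETRY_MAX_MS;
                delay = *timeout_ms;
                *timeout_ms <<= 1;      /* double for the next retry */
                return delay;
        }

      Under this scheme the silly-rename retry above completes after about
      100 msec instead of waiting the full fixed 15 seconds.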
  4. 19 Sep 2014, 1 commit
  5. 13 Sep 2014, 3 commits
  6. 11 Sep 2014, 2 commits
  7. 27 Aug 2014, 2 commits
  8. 05 Aug 2014, 1 commit
  9. 04 Aug 2014, 1 commit
    • NFS: nfs4_do_open should add negative results to the dcache. · 4fa2c54b
      Authored by NeilBrown
      If you have an NFSv4-mounted directory which does not contain 'foo',
      and run:
      
        ls -l foo
        ssh $server touch foo
        cat foo
      
      then the 'cat' will fail (usually, depending a bit on the various
      cache ages).  This is correct, as negative lookups are cached by default.
      However with the same initial conditions:
      
        cat foo
        ssh $server touch foo
        cat foo
      
      will usually succeed.  This is because an "open" does not add a
      negative dentry to the dcache, while a "lookup" does.
      
      This can have negative performance effects.  When "gcc" searches for
      an include file, it will try to "open" the file in every directory in
      the search path.  Without caching of negative "open" results, this
      generates much more traffic to the server than it should (or than
      NFSv3 does).
      
      The root of the problem is that _nfs4_open_and_get_state() will call
      d_add_unique() on a positive result, but not on a negative result.
      Compare with nfs_lookup(), which calls d_materialise_unique() on both
      a positive result and on ENOENT.
      
      This patch adds a call to d_add() in the ENOENT case of
      _nfs4_open_and_get_state() and also calls nfs_set_verifier()
      (a sketch follows at the end of this entry).
      
      With it, many fewer "open" requests for known-non-existent files are
      sent to the server.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
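
      In outline, the fix reads roughly like this (a hedged sketch of the
      ENOENT branch in _nfs4_open_and_get_state(); simplified, not the
      verbatim patch):

        /* Sketch only: simplified ENOENT handling in
         * _nfs4_open_and_get_state(), not the verbatim patch. */
        ret = _nfs4_proc_open(opendata);
        if (ret == -ENOENT) {
                struct dentry *dentry = opendata->dentry;

                /* Cache the negative result, as nfs_lookup() would. */
                if (dentry->d_inode == NULL && d_unhashed(dentry))
                        d_add(dentry, NULL);
                /* Record the parent's change attribute so later
                 * revalidation can trust this negative entry. */
                nfs_set_verifier(dentry,
                        nfs_save_change_attribute(opendata->dir->d_inode));
        }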
  10. 13 Jul 2014, 5 commits
  11. 25 Jun 2014, 2 commits
  12. 07 Jun 2014, 1 commit
  13. 30 May 2014, 1 commit
  14. 29 May 2014, 5 commits
  15. 29 Mar 2014, 1 commit
  16. 18 Mar 2014, 2 commits
  17. 06 Mar 2014, 2 commits
  18. 02 Mar 2014, 1 commit
  19. 20 Feb 2014, 3 commits
  20. 02 Feb 2014, 1 commit
  21. 30 Jan 2014, 2 commits