1. 05 Sep 2013, 1 commit
    • NFSv4: Don't try to recover NFSv4 locks when they are lost. · ef1820f9
      Authored by NeilBrown
      When an NFSv4 client loses contact with the server it can lose any
      locks that it holds.
      
      Currently when it reconnects to the server it simply tries to reclaim
      those locks.  This might succeed even though some other client has
      held and released a lock in the meantime.  So the first client might
      think the file is unchanged, but it isn't.  This isn't good.
      
      If, when recovery happens, the locks cannot be claimed because some
      other client still holds the lock, then we get a message in the kernel
      logs, but the client can still write.  So two clients can both think
      they have a lock and can both write at the same time.  This is equally
      not good.
      
      There was a patch a while ago
        http://comments.gmane.org/gmane.linux.nfs/41917
      
      which tried to address some of this, but it didn't seem to go
      anywhere.  That patch would also send a signal to the process.  That
      might be useful but for now this patch just causes writes to fail.
      
      For NFSv4 (unlike v2/v3) there is a strong link between the lock and
      the write request so we can fairly easily fail any IO if the lock is
      gone.  While some applications might not expect this, it is still
      safer than allowing the write to succeed.
      
      Because this is a fairly big change in behaviour, a module parameter,
      "recover_locks", is introduced which defaults to true (the current
      behaviour) but can be set to "false" to tell the client not to try to
      recover things that were lost.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
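      
      A minimal sketch of how such a parameter could be declared, assuming the name "recover_locks" from the text above; the exact name, file and permission bits in the merged patch may differ:
      
        #include <linux/module.h>
        #include <linux/moduleparam.h>
        #include <linux/types.h>
        
        /* Default to the historical behaviour: try to reclaim lost locks. */
        static bool recover_locks = true;
        module_param(recover_locks, bool, 0644);
        MODULE_PARM_DESC(recover_locks,
                         "If the server reports that a lock might be lost, "
                         "try to recover it (risking data corruption)");
      
      With the flag set to false, the reclaim path would skip recovery and instead mark the lock state as lost so that later IO under that lock fails.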
  2. 09 Jun 2013, 1 commit
  3. 23 Feb 2013, 1 commit
  4. 13 Dec 2012, 1 commit
    • SUNRPC handle EKEYEXPIRED in call_refreshresult · eb96d5c9
      Authored by Andy Adamson
      Currently, when an RPCSEC_GSS context has expired or is non-existent
      and the user's (Kerberos) credentials have also expired or are non-existent,
      the client receives the -EKEYEXPIRED error and tries to refresh the context
      forever.  If an application is performing I/O, or other work against the share,
      the application hangs, and the user is not prompted to refresh/establish their
      credentials. This can result in a denial of service for other users.
      
      Users are expected to manage their Kerberos credential lifetimes to mitigate
      this issue.
      
      Move the -EKEYEXPIRED handling into the RPC layer. Try tk_cred_retry number
      of times to refresh the gss_context, and then return -EACCES to the application.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
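      
      A simplified sketch of that policy, using the SUNRPC state-machine names mentioned above (call_refresh, call_allocate, tk_cred_retry); it illustrates the idea rather than reproducing the exact upstream hunk:
      
        static void call_refreshresult(struct rpc_task *task)
        {
                int status = task->tk_status;
        
                task->tk_status = 0;
                switch (status) {
                case 0:
                        /* Credential refreshed: continue with the RPC call. */
                        task->tk_action = call_allocate;
                        return;
                case -EKEYEXPIRED:
                        if (task->tk_cred_retry) {
                                /* Bounded number of refresh attempts. */
                                task->tk_cred_retry--;
                                task->tk_action = call_refresh;
                                return;
                        }
                        /* Out of retries: fail the call instead of looping. */
                        status = -EACCES;
                        break;
                }
                rpc_exit(task, status);
        }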
  5. 31 Jul 2012, 2 commits
  6. 18 Jul 2012, 1 commit
  7. 14 Jul 2012, 1 commit
    • nfs: clean up ->create in nfs_rpc_ops · 8867fe58
      Authored by Miklos Szeredi
      Don't pass nfs_open_context() to ->create().  Only the NFS4 implementation
      needed that and only because it wanted to return an open file using open
      intents.  That task has been replaced by ->atomic_open so it is not necessary
      anymore to pass the context to the create rpc operation.
      
      Despite nfs4_proc_create apparently being okay with a NULL context, it Oopses
      somewhere down the call chain.  So allocate a context here.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      CC: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
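      
      A hedged sketch of the allocation described above; the helper names follow NFS client conventions, but the exact signatures in the patch may differ:
      
        static int nfs4_proc_create(struct inode *dir, struct dentry *dentry,
                                    struct iattr *sattr, int flags)
        {
                struct nfs_open_context *ctx;
                int status;
        
                /* ->create() no longer hands us a context, so build a
                 * temporary one for the OPEN/CREATE call. */
                ctx = alloc_nfs_open_context(dentry, FMODE_READ);
                if (IS_ERR(ctx))
                        return PTR_ERR(ctx);
        
                status = 0;     /* ... issue the OPEN with ctx here ... */
        
                put_nfs_open_context(ctx);
                return status;
        }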
  8. 29 Jun 2012, 6 commits
  9. 12 Jun 2012, 1 commit
  10. 11 May 2012, 1 commit
    • vfs: make it possible to access the dentry hash/len as one 64-bit entry · 26fe5750
      Authored by Linus Torvalds
      This allows comparing hash and len in one operation on 64-bit
      architectures.  Right now only __d_lookup_rcu() takes advantage of this,
      since that is the case we care most about.
      
      The use of anonymous struct/unions hides the alternate 64-bit approach
      from most users, the exception being a few cases where we initialize a
      'struct qstr' with a static initializer.  This makes the problematic
      cases use a new QSTR_INIT() helper function for that (but initializing
      just the name pointer with a "{ .name = xyzzy }" initializer remains
      valid, as does just copying another qstr structure).
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
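      
      Roughly, the layout and helper look like this (little-endian field order shown; on big-endian the hash/len order flips so the pair still packs into one 64-bit word; treat this as a sketch of the idea, not the verbatim header):
      
        struct qstr {
                union {
                        struct {
                                u32 hash;
                                u32 len;
                        };
                        u64 hash_len;   /* both halves compared in one load */
                };
                const unsigned char *name;
        };
        
        /* Static initializers go through a helper so the union nesting stays
         * hidden from most users. */
        #define QSTR_INIT(n, l) { { { .len = l } }, .name = n }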
  11. 28 Apr 2012, 4 commits
  12. 21 Mar 2012, 4 commits
  13. 07 Dec 2011, 1 commit
  14. 05 Nov 2011, 1 commit
    • nfs: when attempting to open a directory, fall back on normal lookup (try #5) · 1788ea6e
      Authored by Jeff Layton
      commit d953126a changed how nfs_atomic_lookup handles an -EISDIR return
      from an OPEN call. Prior to that patch, that caused the client to fall
      back to doing a normal lookup. When that patch went in, the code began
      returning that error to userspace. The d_revalidate codepath however
      never had the corresponding change, so it was still possible to end up
      with a NULL ctx->state pointer after that.
      
      That patch caused a regression. When we attempt to open a directory that
      does not have a cached dentry, that open now errors out with EISDIR. If
      you attempt the same open with a cached dentry, it will succeed.
      
      Fix this by reverting the change in nfs_atomic_lookup and allowing
      attempts to open directories to fall back to a normal lookup.
      
      Also, add an NFSv4-specific f_ops->open routine that just returns
      -ENOTDIR. This should never be called if things are working properly,
      but if it ever is, then the dprintk may help in debugging.
      
      To facilitate this, a new file_operations field is also added to the
      nfs_rpc_ops struct.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
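      
      A sketch of such a fallback ->open, with an illustrative name and message; it should never run in practice, but the dprintk leaves a trace if it does:
      
        static int nfs4_file_open(struct inode *inode, struct file *filp)
        {
                /* NFSv4 opens are handled during lookup/revalidate, so
                 * reaching this function means something went wrong there. */
                dprintk("NFS: %s called! inode=%p filp=%p\n",
                        __func__, inode, filp);
                return -ENOTDIR;
        }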
  15. 25 Mar 2011, 1 commit
  16. 12 Mar 2011, 1 commit
  17. 05 Jan 2011, 1 commit
  18. 17 Dec 2010, 1 commit
  19. 24 Oct 2010, 1 commit
  20. 18 Sep 2010, 2 commits
  21. 17 Sep 2010, 1 commit
  22. 15 May 2010, 2 commits
  23. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others it was more appropriate
         to add it to an implementation .h or embedding .c file.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
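      
      As a small made-up example of the resulting change, a file that uses slab and gfp facilities now names its own includes instead of inheriting them through percpu.h:
      
        #include <linux/gfp.h>          /* GFP_KERNEL */
        #include <linux/slab.h>         /* kmalloc(), kfree() */
        
        static void *example_buffer_alloc(size_t len)
        {
                /* Previously this compiled only because percpu.h pulled in
                 * slab.h (and hence gfp.h) behind the scenes. */
                return kmalloc(len, GFP_KERNEL);
        }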
  24. 10 Feb 2010, 1 commit
  25. 24 Sep 2009, 1 commit
  26. 20 Mar 2009, 1 commit
    • NFS: Optimise NFS close() · 7fe5c398
      Authored by Trond Myklebust
      Close-to-open cache consistency rules really only require us to flush out
      writes on calls to close(), and require us to revalidate attributes on the
      very last close of the file.
      
      Currently we appear to be doing a lot of extra attribute revalidation
      and cache flushes.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
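      
      A hedged sketch of the rule being enforced (the helper name and the "last close" test here are illustrative): every close() flushes dirty data, but only the final close revalidates attributes:
      
        static void example_nfs_close(struct inode *inode, bool last_close)
        {
                /* Close-to-open: make the server see our writes on close(). */
                nfs_wb_all(inode);
        
                /* Only the last close needs fresh attributes, so the next
                 * open can detect changes made by other clients. */
                if (last_close)
                        nfs_revalidate_inode(NFS_SERVER(inode), inode);
        }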