1. 19 Mar 2009, 3 commits
  2. 08 Jan 2009, 1 commit
  3. 24 Dec 2008, 2 commits
  4. 30 Sep 2008, 3 commits
  5. 10 Jul 2008, 1 commit
    • rpc: bring back cl_chatty · b6b6152c
      Committed by Olga Kornievskaia
      The cl_chatty flag allows us to control whether a given rpc client leaves
      
      	"server X not responding, timed out"
      
      messages in the syslog.  Such messages make sense for ordinary nfs
      clients (where an unresponsive server means applications on the
      mountpoint are probably hanging), but not for the callback client
      (which can fail more commonly, with the only consequence being the
      loss of some optimizations).
      
      cl_chatty was previously removed due to a lack of users; reinstate it,
      and use it for nfsd's callback client (a toy sketch of the flag
      follows below).
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      b6b6152c
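      A minimal user-space C sketch of the flag's effect (the struct and
      function names here are hypothetical stand-ins, not the kernel's
      actual rpc_clnt API): a per-client boolean gates the timeout
      warning, so ordinary mount clients stay chatty while the callback
      client stays quiet.

          #include <stdbool.h>
          #include <stdio.h>

          /* Hypothetical stand-in for struct rpc_clnt. */
          struct toy_rpc_client {
              const char *cl_server;
              bool        cl_chatty;
          };

          static void toy_call_timed_out(struct toy_rpc_client *clnt)
          {
              /* Only a chatty client leaves the syslog message. */
              if (clnt->cl_chatty)
                  fprintf(stderr, "server %s not responding, timed out\n",
                          clnt->cl_server);
              /* ...per-call error handling continues either way... */
          }

          int main(void)
          {
              struct toy_rpc_client nfs_mount = { "srv1", true  };
              struct toy_rpc_client callback  = { "srv1", false };

              toy_call_timed_out(&nfs_mount);  /* logs the warning */
              toy_call_timed_out(&callback);   /* stays silent */
              return 0;
          }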
  6. 19 May 2008, 1 commit
  7. 30 Apr 2008, 1 commit
  8. 24 Apr 2008, 1 commit
    • nfsd: use static memory for callback program and stats · ff7d9756
      Committed by Olga Kornievskaia
      There's no need to dynamically allocate this memory, and doing so may
      create the possibility of races on shutdown of the rpc client.  (We've
      witnessed it only after adding rpcsec_gss support to the server, after
      which the rpc code can send destroy calls that expect to still be able
      to access the rpc_stats structure after it has been destroyed.)
      
      Such races are in theory possible if the module containing this
      "static" memory is removed very quickly after an rpc client is
      destroyed, but we haven't seen that happen.  (The static-lifetime
      idea is sketched below.)
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      ff7d9756
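      The static-lifetime idea, as a toy user-space C sketch (names are
      hypothetical; the real code uses the kernel's rpc_stat and
      rpc_program structures): data declared static lives as long as the
      module itself, so a destroy call that arrives after client shutdown
      still finds valid memory.

          #include <stdio.h>

          /* Hypothetical stand-ins for rpc_stat / rpc_program. */
          struct toy_rpc_stat    { unsigned long destroys; };
          struct toy_rpc_program { const char *name; struct toy_rpc_stat *stats; };

          /* Static storage: never freed, so never a use-after-free. */
          static struct toy_rpc_stat    cb_stats;
          static struct toy_rpc_program cb_program = {
              .name  = "nfs4_callback",
              .stats = &cb_stats,
          };

          /* A destroy call racing with shutdown would dereference freed
           * memory if cb_stats were dynamically allocated; here it can't. */
          static void late_destroy_call(void)
          {
              cb_program.stats->destroys++;
          }

          int main(void)
          {
              late_destroy_call();
              printf("%s: %lu destroy calls\n", cb_program.name,
                     cb_stats.destroys);
              return 0;
          }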
  9. 02 Feb 2008, 3 commits
    • nfsd4: recognize callback channel failure earlier · 404ec117
      Committed by J. Bruce Fields
      When the callback channel fails, we inform the client of that by
      returning a cb_path_down error the next time it tries to renew its
      lease.
      
      If we wait most of a lease period before deciding that a callback has
      failed and that the callback channel is down, then we decrease the
      chances that the client will find out in time to do anything about it.
      
      So, mark the channel down as soon as we recognize that an rpc has
      failed.  However, continue trying to recall delegations anyway, in hopes
      it will come back up.  This will prevent more delegations from being
      given out, and ensure cb_path_down is returned to renew calls earlier,
      while still making the best effort to deliver recalls of existing
      delegations.
      
      Also fix a couple of comments and remove a dprintk that doesn't seem
      likely to be useful.  (The down-marking logic is sketched below.)
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      404ec117
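      The down-marking logic, as a toy C sketch (all names hypothetical;
      the real server tracks this per-client and returns the NFSv4 error
      cb_path_down): the first rpc failure flips the channel state,
      RENEW replies consult it immediately, and recalls keep going
      regardless.

          #include <stdio.h>

          enum cb_state { CB_UP, CB_DOWN };

          static enum cb_state cb_channel = CB_UP;

          /* rpc completion path: mark the channel down on the first
           * failure instead of waiting most of a lease period. */
          static void cb_rpc_failed(void)
          {
              cb_channel = CB_DOWN;
          }

          /* RENEW processing: the client learns of the failure at its
           * very next renewal. */
          static const char *handle_renew(void)
          {
              return cb_channel == CB_DOWN ? "cb_path_down" : "ok";
          }

          /* Recalls are still attempted, in case the channel recovers. */
          static void recall_delegation(void)
          {
              printf("CB_RECALL sent (channel %s)\n",
                     cb_channel == CB_DOWN ? "down, best effort" : "up");
          }

          int main(void)
          {
              cb_rpc_failed();
              printf("RENEW -> %s\n", handle_renew());
              recall_delegation();
              return 0;
          }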
    • nfsd: move callback rpc_client creation into separate thread · 63c86716
      Committed by J. Bruce Fields
      The whole reason for moving this callback-channel probe into a separate
      thread was that (for now) we don't have an easy way to create the
      rpc_client asynchronously.  But I forgot to move the rpc_create() to the
      spawned thread.  Doh!  Fix that.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      63c86716
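      A toy pthreads sketch of the fix (hypothetical names; the kernel
      uses its own kthread machinery and the real rpc_create()): both
      the potentially-blocking client creation and the probe itself run
      in the spawned thread, so the caller never sleeps.

          #include <pthread.h>
          #include <stdio.h>

          /* Stand-in for rpc_create(), which may block while the
           * transport connects. */
          static void *toy_rpc_create(void)
          {
              return (void *)1;
          }

          /* The probe thread: client creation moved in here by the fix. */
          static void *cb_probe_thread(void *arg)
          {
              (void)arg;
              void *clnt = toy_rpc_create();
              if (clnt)
                  printf("probing callback channel\n");
              return NULL;
          }

          int main(void)
          {
              pthread_t t;
              pthread_create(&t, NULL, cb_probe_thread, NULL);
              pthread_join(t, NULL);
              return 0;
          }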
    • nfsd4: probe callback channel only once · 46f8a64b
      Committed by J. Bruce Fields
      Our callback code doesn't actually handle concurrent attempts to probe
      the callback channel.  Some rethinking of the locking may be required.
      However, we can also just move the callback probing to client
      confirmation.  Since a client can be confirmed only once in its
      lifetime, this ensures we probe only once (sketched below).
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      46f8a64b
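      A toy C sketch of probe-at-confirmation (names hypothetical):
      because confirmation happens at most once per client, routing the
      probe through it makes concurrent probes impossible.

          #include <stdbool.h>
          #include <stdio.h>

          struct toy_client { bool confirmed; };

          static void probe_callback(struct toy_client *cl)
          {
              (void)cl;
              printf("probing callback channel\n");
          }

          /* Confirmation is the single place the probe is issued. */
          static void confirm_client(struct toy_client *cl)
          {
              if (cl->confirmed)
                  return;             /* can't happen twice anyway */
              cl->confirmed = true;
              probe_callback(cl);
          }

          int main(void)
          {
              struct toy_client cl = { false };
              confirm_client(&cl);
              confirm_client(&cl);    /* no second probe */
              return 0;
          }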
  10. 10 Oct 2007, 3 commits
  11. 18 Jul 2007, 1 commit
  12. 11 Jul 2007, 2 commits
  13. 22 May 2007, 1 commit
    • Detach sched.h from mm.h · e8edc6e0
      Committed by Alexey Dobriyan
      The first thing mm.h does is include sched.h, solely for the
      can_do_mlock() inline function, which dereferences "current" inside.
      By dealing with can_do_mlock(), mm.h can be detached from sched.h,
      which is good.  See below for why.
      
      This patch
      a) removes unconditional inclusion of sched.h from mm.h
      b) makes can_do_mlock() normal function in mm/mlock.c
      c) exports can_do_mlock() to not break compilation
      d) adds sched.h inclusions back to files that were getting it indirectly.
      e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
         getting them indirectly
      
      Net result is:
      a) mm.h users would get less code to open, read, preprocess, parse, ... if
         they don't need sched.h
      b) sched.h stops being a dependency for a significant number of files:
         on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
         after patch it's only 3744 (-8.3%).
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e8edc6e0
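      Schematically, the move is the standard inline-to-out-of-line
      refactor (file boundaries shown as comments; the function body is
      simplified from memory and may not match the patch verbatim):

          /* include/linux/mm.h, after: declaration only, so mm.h no
           * longer needs sched.h. */
          int can_do_mlock(void);

          /* mm/mlock.c, after: the definition, and its sched.h
           * dependency, live out of line. */
          #include <linux/capability.h>
          #include <linux/sched.h>        /* for current */
          #include <linux/module.h>
          #include <linux/mm.h>

          int can_do_mlock(void)
          {
              if (capable(CAP_IPC_LOCK))
                  return 1;
              if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                  return 1;
              return 0;
          }
          EXPORT_SYMBOL(can_do_mlock);    /* keeps modular users linking */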
  14. 01 May 2007, 1 commit
    • SUNRPC: RPC buffer size estimates are too large · 2bea90d4
      Committed by Chuck Lever
      The RPC buffer size estimation logic in net/sunrpc/clnt.c always
      significantly overestimates the requirements for the buffer size.
      A little instrumentation demonstrated that, in fact, rpc_malloc never
      allocated the buffer from the mempool but almost always called kmalloc.
      
      To compute the size of the RPC buffer more precisely, split p_bufsiz into
      two fields; one for the argument size, and one for the result size.
      
      Then, compute the sum of the exact call and reply header sizes, and split
      the RPC buffer precisely between the two.  That should keep almost all RPC
      buffers within the 2KiB buffer mempool limit.
      
      And, we can finally be rid of RPC_SLACK_SPACE!
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      2bea90d4
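      A simplified sketch of the new sizing (names and header constants
      are illustrative; the real code works in 32-bit XDR words and also
      accounts for per-flavor auth slack, omitted here):

          #include <stdio.h>

          /* Per-procedure sizes, split as the patch describes. */
          struct toy_procinfo {
              unsigned int p_arglen;  /* exact argument size, in words */
              unsigned int p_replen;  /* exact result size, in words */
          };

          static unsigned int toy_bufsize(const struct toy_procinfo *p,
                                          unsigned int callhdr,
                                          unsigned int rephdr)
          {
              unsigned int callsize = (callhdr + p->p_arglen) << 2;
              unsigned int rcvsize  = (rephdr  + p->p_replen) << 2;

              /* One buffer, split exactly between call and reply, in
               * place of the old padded RPC_SLACK_SPACE estimate. */
              return callsize + rcvsize;
          }

          int main(void)
          {
              struct toy_procinfo getattr = { .p_arglen = 32,
                                              .p_replen = 56 };
              printf("buffer: %u bytes\n", toy_bufsize(&getattr, 12, 7));
              return 0;
          }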
  15. 17 Feb 2007, 1 commit
  16. 21 Oct 2006, 2 commits
  17. 17 Oct 2006, 1 commit
  18. 02 Oct 2006, 1 commit
  19. 23 Sep 2006, 1 commit
  20. 01 Jul 2006, 1 commit
  21. 11 Apr 2006, 1 commit
  22. 24 Mar 2006, 1 commit
  23. 21 Mar 2006, 1 commit
  24. 07 Jan 2006, 2 commits
  25. 24 Jun 2005, 3 commits
  26. 23 Jun 2005, 1 commit