  1. 23 Jun 2014: 1 commit
  2. 31 May 2014: 2 commits
  3. 23 May 2014: 1 commit
  4. 04 Jan 2014: 1 commit
  5. 11 Dec 2013: 1 commit
  6. 31 Aug 2013: 1 commit
  7. 24 Jan 2013: 1 commit
  8. 18 Dec 2012: 1 commit
  9. 28 Jul 2012: 1 commit
  10. 01 Jun 2012: 2 commits
  11. 15 Feb 2012: 1 commit
  12. 01 Feb 2012: 2 commits
  13. 25 Oct 2011: 1 commit
  14. 14 Sep 2011: 1 commit
  15. 20 Aug 2011: 1 commit
  16. 18 Jul 2011: 1 commit
    • nfsd: turn on reply cache for NFSv4 · 1091006c
      Committed by J. Bruce Fields
      It's sort of ridiculous that we've never had a working reply cache for
      NFSv4.
      
      On the other hand, we may still not: our current reply cache is likely
      not very good, especially in the TCP case (which is the only case that
      matters for v4).  What we really need here is some serious testing.
      
      Anyway, here's a start.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  17. 15 Jul 2011: 1 commit
  18. 07 Jan 2011: 1 commit
  19. 05 Jan 2011: 1 commit
    • svcrpc: simpler request dropping · 9e701c61
      Committed by J. Bruce Fields
      Currently we use -EAGAIN returns to determine when to drop a deferred
      request.  On its own, that is error-prone, as it makes us treat -EAGAIN
      returns from other functions specially to prevent inadvertent dropping.
      
      So, use a flag on the request instead.
      
      Returning an error on request deferral is still required, to prevent
      further processing, but we no longer need to worry that an error return on
      its own could result in a drop.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
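      A minimal sketch of the flag-based scheme described above, assuming
      the rq_dropme field name from this patch series; the surrounding
      types are abridged and the dispatch-side check is shown as a comment:

          #include <linux/errno.h>
          #include <linux/types.h>

          struct svc_rqst_sketch {
                  /* ... */
                  bool rq_dropme;  /* set when a deferral wants the request dropped */
          };

          /* Deferral still returns an error so that processing stops,
           * but now also marks the drop explicitly: */
          static int svc_defer_sketch(struct svc_rqst_sketch *rqstp)
          {
                  rqstp->rq_dropme = true;
                  return -EAGAIN;
          }

          /* The dispatch path drops only on the flag, so a stray -EAGAIN
           * from some other function can no longer cause an inadvertent
           * drop:
           *
           *      if (rqstp->rq_dropme)
           *              svc_drop(rqstp);
           */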
  20. 16 Dec 2009: 1 commit
  21. 24 Nov 2009: 1 commit
  22. 15 Jul 2009: 1 commit
    • nfsd41: use globals for DRC limits · 4bd9b0f4
      Committed by Andy Adamson
      The version 4.1 DRC memory limit and tracking variables are server-wide and
      session specific.  Replace the struct svc_serv fields with globals.
      Stop using the svc_serv sv_lock.
      
      Add a spinlock to serialize access to the DRC limit management variables which
      change on session creation and deletion (usage counter) or (future)
      administrative action to adjust the total DRC memory limit.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
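      A sketch of the shape of this change; the identifiers below
      (nfsd_drc_max_mem, nfsd_drc_mem_used, nfsd_drc_lock) follow later
      kernel naming and may differ from this commit's exact names:

          #include <linux/spinlock.h>

          /* Server-wide DRC accounting, replacing the old svc_serv fields */
          unsigned int nfsd_drc_max_mem;   /* total DRC memory allowed */
          unsigned int nfsd_drc_mem_used;  /* memory claimed by sessions */
          DEFINE_SPINLOCK(nfsd_drc_lock);  /* serializes the two counters */

          /* Hypothetical helper: session creation reserves its slice under
           * the new lock rather than sv_lock; session deletion gives the
           * reservation back the same way. */
          static bool nfsd_drc_reserve_sketch(unsigned int mem)
          {
                  bool ok = false;

                  spin_lock(&nfsd_drc_lock);
                  if (nfsd_drc_mem_used + mem <= nfsd_drc_max_mem) {
                          nfsd_drc_mem_used += mem;
                          ok = true;
                  }
                  spin_unlock(&nfsd_drc_lock);
                  return ok;
          }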
  23. 18 Jun 2009: 3 commits
  24. 04 Apr 2009: 2 commits
  25. 29 Mar 2009: 2 commits
    • SUNRPC: Remove @family argument from svc_create() and svc_create_pooled() · 49a9072f
      Committed by Chuck Lever
      Since an RPC service listener's protocol family is now specified via
      svc_create_xprt(), it no longer needs to be passed to svc_create() or
      svc_create_pooled().  Remove that argument from the synopsis of those
      functions, and remove the sv_family field from the svc_serv struct.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
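      The shape of the API change, reconstructed from the description
      (2009-era prototypes, abridged; check the commit for the exact
      forms):

          #include <linux/socket.h>  /* sa_family_t */

          /* Before: callers had to pick an address family up front */
          struct svc_serv *svc_create(struct svc_program *prog,
                                      unsigned int bufsize,
                                      sa_family_t family,
                                      void (*shutdown)(struct svc_serv *serv));

          /* After: no family argument, and no sv_family field; each
           * listener chooses its own family via svc_create_xprt() */
          struct svc_serv *svc_create(struct svc_program *prog,
                                      unsigned int bufsize,
                                      void (*shutdown)(struct svc_serv *serv));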
    • SUNRPC: Pass a family argument to svc_register() · 4b62e58c
      Committed by Chuck Lever
      The sv_family field is going away.  Instead of using sv_family, have
      the svc_register() function take a protocol family argument.
      
      Since this argument represents a protocol family, and not an address
      family, it takes an int, as this is what is passed to
      sock_create_kern().  Also make sure svc_register's helpers are
      checking for PF_FOO instead of AF_FOO.  The values of [AP]F_FOO are
      equivalent; this is simply a symbolic change to reflect the semantics
      of the value stored in that variable.
      
      sock_create_kern() should return EPFNOSUPPORT if the passed-in
      protocol family isn't supported, but it uses EAFNOSUPPORT for this
      case.  We will stick with that tradition here, as svc_register()
      is called by the RPC server in the same path as sock_create_kern().
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
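      A sketch of the PF_-based checking the commit asks for; the helper
      below is illustrative rather than the commit's actual code:

          #include <linux/errno.h>
          #include <linux/socket.h>  /* PF_INET, PF_INET6 */

          static int svc_register_family_sketch(const int family)
          {
                  switch (family) {
                  case PF_INET:   /* spelled PF_, not AF_, since this is a */
                  case PF_INET6:  /* protocol family ([AP]F_FOO are equal) */
                          return 0;
                  default:
                          /* follow sock_create_kern()'s tradition of
                           * EAFNOSUPPORT rather than EPFNOSUPPORT */
                          return -EAFNOSUPPORT;
                  }
          }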
  26. 19 Mar 2009: 2 commits
    • knfsd: add file to export stats about nfsd pools · 03cf6c9f
      Committed by Greg Banks
      Add /proc/fs/nfsd/pool_stats to export to userspace various
      statistics about the operation of rpc server thread pools.
      
      This patch is based on a forward-ported version of
      knfsd-add-pool-thread-stats which has been shipping in the SGI
      "Enhanced NFS" product since 2006 and which was previously
      posted:
      
      http://article.gmane.org/gmane.linux.nfs/10375
      
      It has also been updated thus:
      
       * moved EXPORT_SYMBOL() to near the function it exports
       * made the new struct seq_operations const
       * used SEQ_START_TOKEN instead of ((void *)1)
       * merged fix from SGI PV 990526 "sunrpc: use dprintk instead of
         printk in svc_pool_stats_*()" by Harshula Jayasuriya.
       * merged fix from SGI PV 964001 "Crash reading pool_stats before
         nfsds are started".
      Signed-off-by: Greg Banks <gnb@sgi.com>
      Signed-off-by: Harshula Jayasuriya <harshula@sgi.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
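      A minimal seq_file skeleton showing two of the conventions noted
      above, SEQ_START_TOKEN for the header row and a const struct
      seq_operations; the real pool_stats columns and iteration differ:

          #include <linux/seq_file.h>

          static void *pool_stats_start_sketch(struct seq_file *m, loff_t *pos)
          {
                  /* SEQ_START_TOKEN replaces the old ((void *)1) marker */
                  return *pos ? NULL : SEQ_START_TOKEN;
          }

          static void *pool_stats_next_sketch(struct seq_file *m, void *p,
                                              loff_t *pos)
          {
                  (*pos)++;
                  return NULL;  /* the real version walks the svc_pools */
          }

          static void pool_stats_stop_sketch(struct seq_file *m, void *p)
          {
          }

          static int pool_stats_show_sketch(struct seq_file *m, void *p)
          {
                  if (p == SEQ_START_TOKEN)
                          seq_puts(m, "# pool stats header\n");
                  return 0;
          }

          static const struct seq_operations pool_stats_op_sketch = {
                  .start = pool_stats_start_sketch,
                  .next  = pool_stats_next_sketch,
                  .stop  = pool_stats_stop_sketch,
                  .show  = pool_stats_show_sketch,
          };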
    • knfsd: avoid overloading the CPU scheduler with enormous load averages · 59a252ff
      Committed by Greg Banks
      Avoid overloading the CPU scheduler with enormous load averages
      when handling high call-rate NFS loads.  When the knfsd bottom half
      is made aware of an incoming call by the socket layer, it tries to
      choose an nfsd thread and wake it up.  As long as there are idle
      threads, one will be woken up.
      
      If there are a lot of nfsd threads (a sensible configuration when
      the server is disk-bound or is running an HSM), there will be many
      more nfsd threads than CPUs to run them.  Under a high call-rate
      low service-time workload, the result is that almost every nfsd is
      runnable, but only a handful are actually able to run.  This situation
      causes two significant problems:
      
      1. The CPU scheduler takes over 10% of each CPU, which is robbing
         the nfsd threads of valuable CPU time.
      
      2. At a high enough load, the nfsd threads starve userspace threads
         of CPU time, to the point where daemons like portmap and rpc.mountd
         do not schedule for tens of seconds at a time.  Clients attempting
         to mount an NFS filesystem time out at the very first step (opening
         a TCP connection to portmap) because portmap cannot wake up from
         select() and call accept() in time.
      
      Disclaimer: these effects were observed on a SLES9 kernel; modern
      kernels' schedulers may behave more gracefully.
      
      The solution is simple: keep in each svc_pool a counter of the number
      of threads which have been woken but have not yet run, and do not wake
      any more if that count reaches a small, arbitrary threshold.
      
      Testing was on a 4 CPU 4 NIC Altix using 4 IRIX clients, each with 16
      synthetic client threads simulating an rsync (i.e. recursive directory
      listing) workload reading from an i386 RH9 install image (161480
      regular files in 10841 directories) on the server.  That tree is small
      enough to fit in the server's RAM, so no disk traffic was involved.
      This setup gives a sustained call rate in excess of 60000 calls/sec
      before being CPU-bound on the server.  The server was running 128 nfsds.
      
      Profiling showed schedule() taking 6.7% of every CPU, and __wake_up()
      taking 5.2%.  This patch drops those contributions to 3.0% and 2.2%.
      Load average was over 120 before the patch, and 20.9 after.
      
      This patch is a forward-ported version of knfsd-avoid-nfsd-overload
      which has been shipping in the SGI "Enhanced NFS" product since 2006.
      It has been posted before:
      
      http://article.gmane.org/gmane.linux.nfs/10374
      Signed-off-by: Greg Banks <gnb@sgi.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
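      A sketch of the wake throttle; the names sp_nwaking and
      SVC_MAX_WAKING come from the posted series and may not match the
      merged code exactly:

          #include <linux/spinlock.h>
          #include <linux/types.h>

          #define SVC_MAX_WAKING 5  /* the "small, arbitrary threshold" */

          struct svc_pool_sketch {
                  spinlock_t   sp_lock;
                  unsigned int sp_nwaking;  /* woken but not yet on CPU */
          };

          /* Called from the bottom half when a call arrives: wake a
           * thread only while few earlier wakeups are still pending, so
           * the number of runnable-but-not-running nfsds stays bounded. */
          static bool svc_try_wake_sketch(struct svc_pool_sketch *pool)
          {
                  bool wake = false;

                  spin_lock(&pool->sp_lock);
                  if (pool->sp_nwaking < SVC_MAX_WAKING) {
                          pool->sp_nwaking++;  /* dropped once the thread runs */
                          wake = true;
                  }
                  spin_unlock(&pool->sp_lock);
                  return wake;
          }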
  27. 07 Jan 2009: 1 commit
    • sunrpc: add sv_maxconn field to svc_serv (try #3) · c9233eb7
      Committed by Jeff Layton
      svc_check_conn_limits() attempts to prevent denial of service attacks
      by having the service close old connections once it reaches a
      threshold. This threshold is based on the number of threads in the
      service:
      
      	(serv->sv_nrthreads + 3) * 20
      
      Once we reach this threshold, we drop the oldest connections and a
      printk warns the admin that they should increase the number of threads.
      
      Increasing the number of threads isn't an option, however, for services
      like lockd. We don't want to eliminate this check entirely for such
      services but we need some way to increase this limit.
      
      This patch adds a sv_maxconn field to the svc_serv struct. When it's
      set to 0, we use the current method to calculate the max number of
      connections. RPC services can then set this on an as-needed basis.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Acked-by: Neil Brown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
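      The resulting limit calculation, mirroring the description of
      svc_check_conn_limits() above; the wrapper function itself is
      illustrative:

          static unsigned int svc_conn_limit_sketch(const struct svc_serv *serv)
          {
                  /* a nonzero sv_maxconn overrides the thread-based default */
                  if (serv->sv_maxconn)
                          return serv->sv_maxconn;
                  return (serv->sv_nrthreads + 3) * 20;
          }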
  28. 30 Sep 2008: 3 commits
  29. 24 Jun 2008: 2 commits