1. 23 Sep, 2006 3 commits
  2. 04 Aug, 2006 1 commit
  3. 22 Jul, 2006 1 commit
  4. 09 Jun, 2006 1 commit
    • SUNRPC: NFS_ROOT always uses the same XIDs · bf3fcf89
      Committed by Chuck Lever
      The XID generator uses get_random_bytes to generate an initial XID.
      NFS_ROOT starts up before the random driver, though, so get_random_bytes
      doesn't set a random XID for NFS_ROOT.  This causes NFS_ROOT mount points
      to reuse XIDs every time the client is booted.  If the client boots often
      enough, the server will start serving stale replies out of its duplicate
      reply cache (DRC).
      
      Use net_random() instead.
      
      Test plan:
      I/O intensive workloads should perform well and generate no errors.  Traces
      taken during client reboots should show that NFS_ROOT mounts use unique
      XIDs after every reboot.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
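      A minimal sketch of the fix described above, assuming the xprt_init_xid()
      helper and xid field of this era's SUNRPC code; net_random() needs no
      entropy pool, so it works at early boot, unlike get_random_bytes():

          /* Seed the first XID from the net rng, which is usable before
           * the random driver comes up (and thus before NFS_ROOT mounts). */
          static inline void xprt_init_xid(struct rpc_xprt *xprt)
          {
                  xprt->xid = net_random();
          }

          /* Subsequent XIDs simply increment, as before. */
          static inline u32 xprt_alloc_xid(struct rpc_xprt *xprt)
          {
                  return xprt->xid++;
          }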
  5. 21 Mar, 2006 5 commits
  6. 07 Jan, 2006 3 commits
    • SUNRPC: Clean up xprt_destroy() · 0065db32
      Committed by Trond Myklebust
       We should never be calling xprt_destroy() while there are still active
       rpc_tasks. Remove the broken code that attempts to "fix" that case.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
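      A hedged sketch of the resulting teardown path, with field and callback
      names assumed from this era's struct rpc_xprt: since no rpc_tasks may be
      active here, destroy no longer tries to wait for stragglers and simply
      shuts the transport down and frees it:

          int xprt_destroy(struct rpc_xprt *xprt)
          {
                  xprt->shutdown = 1;             /* refuse any new work */
                  del_timer_sync(&xprt->timer);   /* stop the expiry timer */
                  xprt->ops->destroy(xprt);       /* transport-specific cleanup */
                  kfree(xprt);
                  return 0;
          }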
    • SUNRPC: Ensure client closes the socket when server initiates a close · 632e3bdc
      Committed by Trond Myklebust
       If the server decides to close the RPC socket, we currently don't actually
       respond until either another RPC call is scheduled, or until xprt_autoclose()
       gets called by the socket expiry timer (which may be up to 5 minutes
       later).
      
       This patch ensures that xprt_autoclose() is called much sooner if the
       server closes the socket.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
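      A hedged sketch of the mechanism: detect the server's FIN in the TCP
      state-change callback and schedule the transport's cleanup work (which
      runs xprt_autoclose()) right away; the XPRT_CLOSE_WAIT bit and the
      task_cleanup work item are this era's names, shown as assumptions:

          static void xs_tcp_state_change(struct sock *sk)
          {
                  struct rpc_xprt *xprt = sk->sk_user_data;

                  if (!xprt)
                          return;
                  if (sk->sk_state == TCP_CLOSE_WAIT) {
                          /* Server closed its end: ask rpciod to run
                           * xprt_autoclose() now, not minutes from now. */
                          set_bit(XPRT_CLOSE_WAIT, &xprt->state);
                          schedule_work(&xprt->task_cleanup);
                  }
          }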
    • SUNRPC: switchable buffer allocation · 02107148
      Committed by Chuck Lever
       Add RPC client transport switch support for replacing buffer management
       on a per-transport basis.
      
       In the current IPv4 socket transport implementation, RPC buffers are
       allocated as needed for each RPC message that is sent.  Some transport
       implementations may choose to use pre-allocated buffers for encoding,
       sending, receiving, and unmarshalling RPC messages, however.  For
       transports capable of direct data placement, the buffers can be carved
       out of a pre-registered area of memory rather than from a slab cache.
      
       Test-plan:
       Millions of fsx operations.  Performance characterization with "sio" and
       "iozone".  Use oprofile and other tools to look for significant regression
       in CPU utilization.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
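      A hedged sketch of what such a transport-switch hook pair could look
      like; the member names, signatures, and the tk_xprt path are assumptions
      for illustration, not the patch's exact interface:

          struct rpc_xprt_ops {
                  /* ... connect, send_request, and other methods ... */
                  void *  (*buf_alloc)(struct rpc_task *task, size_t size);
                  void    (*buf_free)(struct rpc_task *task);
          };

          /* Callers allocate through the switch, so an RDMA transport can
           * return pre-registered memory while sockets keep using the slab. */
          static void *rpc_allocate(struct rpc_task *task, size_t size)
          {
                  return task->tk_xprt->ops->buf_alloc(task, size);
          }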
  7. 19 Oct, 2005 2 commits
  8. 24 Sep, 2005 20 commits
  9. 08 Jul, 2005 1 commit
  10. 25 Jun, 2005 1 commit
  11. 23 Jun, 2005 2 commits
    • [PATCH] RPC: kick off socket connect operations faster · ae388462
      Committed by Chuck Lever
       Make the socket transport kick the event queue to start socket connects
       immediately.  This should improve responsiveness of applications that are
       sensitive to slow mount operations (like automounters).
      
       We are now also careful to cancel the connect worker before destroying
       the xprt.  This eliminates a race where xprt_destroy can finish before
       the connect worker is even allowed to run.
      
       Test-plan:
       Destructive testing (unplugging the network temporarily).  Connectathon
       with UDP and TCP.  Hard-code impossibly small connect timeout.
      
       Version: Fri, 29 Apr 2005 15:32:01 -0400
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
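      A hedged sketch of both halves of the change, assuming the delayed-work
      connect_worker of this era's socket transport (the real code still
      applies a backoff delay when reconnecting, elided here):

          /* Kick the worker with no delay so the first connect starts
           * immediately instead of waiting for a timer to fire. */
          static void xs_connect(struct rpc_task *task)
          {
                  struct rpc_xprt *xprt = task->tk_xprt;

                  schedule_delayed_work(&xprt->connect_worker, 0);
          }

          /* Never destroy the xprt while a connect may still be queued. */
          static void xs_destroy(struct rpc_xprt *xprt)
          {
                  cancel_delayed_work(&xprt->connect_worker);
                  flush_scheduled_work();
                  /* ... close the socket and free resources ... */
          }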
    • [PATCH] RPC: TCP reconnects are too slow · 20e5ac82
      Committed by Chuck Lever
       When the network layer reports a connection close, the RPC task
       waiting to reconnect should be notified so it can retry immediately
       instead of waiting for the normal connection establishment timeout.
      
       This reverts a change made in 2.6.6 as part of adding client support
       for RPC over TCP socket idle timeouts.
      
       Test-plan:
       Destructive testing with NFS over TCP mounts.
      
       Version: Fri, 29 Apr 2005 15:31:46 -0400
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
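      A hedged sketch of the restored behavior, with the callback and queue
      names assumed from this pre-transport-switch era of the code: when the
      socket reports a close, wake the task sleeping on the pending queue so
      it retries at once:

          static void tcp_state_change(struct sock *sk)
          {
                  struct rpc_xprt *xprt = sk->sk_user_data;

                  if (xprt && sk->sk_state == TCP_CLOSE) {
                          xprt_disconnect(xprt);
                          /* Don't leave the reconnecting task asleep until
                           * the connection-establishment timeout expires. */
                          rpc_wake_up_status(&xprt->pending, -ENOTCONN);
                  }
          }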