1. 15 Feb 2007, 3 commits
  2. 04 Feb 2007, 1 commit
  3. 08 Dec 2006, 1 commit
  4. 06 Dec 2006, 14 commits
  5. 22 Nov 2006, 2 commits
    • WorkStruct: Pass the work_struct pointer instead of context data · 65f27f38
      Committed by David Howells
      Pass the work_struct pointer to the work function rather than context data.
      The work function can use container_of() to work out the data.
      
      For the cases where the container of the work_struct may go away the moment the
      pending bit is cleared, it is made possible to defer the release of the
      structure by deferring the clearing of the pending bit.
      
      To make this work, an extra flag is introduced into the management side of the
      work_struct.  This governs auto-release of the structure upon execution.
      
      Ordinarily, the work queue executor would release the work_struct for further
      scheduling or deallocation by clearing the pending bit prior to jumping to the
      work function.  This means that, unless the driver makes some guarantee itself
      that the work_struct won't go away, the work function may not access anything
      else in the work_struct or its container lest they be deallocated.  This is a
      problem if the auxiliary data is taken away (as done by the last patch).
      
      However, if the pending bit is *not* cleared before jumping to the work
      function, then the work function *may* access the work_struct and its container
      with no problems.  But then the work function must itself release the
      work_struct by calling work_release().
      
      In most cases, automatic release is fine, so this is the default.  Special
      initiators exist for the non-auto-release case (ending in _NAR).
      Signed-off-by: David Howells <dhowells@redhat.com>
      65f27f38
    • WorkStruct: Separate delayable and non-delayable events. · 52bad64d
      Committed by David Howells
      Separate delayable work items from non-delayable work items by splitting them
      into a separate structure (delayed_work), which incorporates a work_struct and
      the timer_list removed from work_struct.
      
      The work_struct struct is huge, and this limits its usefulness.  On a 64-bit
      architecture it's nearly 100 bytes in size.  This reduces that by half for the
      non-delayable type of event.
      Signed-off-by: David Howells <dhowells@redhat.com>
      52bad64d
  6. 21 Oct 2006, 1 commit
  7. 29 Sep 2006, 1 commit
  8. 23 Sep 2006, 7 commits
  9. 04 Aug 2006, 1 commit
  10. 22 Jul 2006, 1 commit
  11. 09 Jun 2006, 1 commit
    • SUNRPC: select privileged port numbers at random · b85d8806
      Committed by Chuck Lever
      Make the RPC client select privileged ephemeral source ports at
      random.  This improves DRC behavior on the server by using the
      same port when reconnecting for the same mount point, but using
      a different port for fresh mounts.
      
      The Linux TCP implementation already does this for nonprivileged
      ports.  Note that TCP sockets in TIME_WAIT will prevent quick reuse
      of a random ephemeral port number by leaving the port INUSE until
      the connection transitions out of TIME_WAIT.
      
      Test plan:
      Connectathon against every known server implementation using multiple
      mount points.  Locking especially.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      b85d8806
  12. 21 Mar 2006, 2 commits
  13. 07 Jan 2006, 3 commits
    • SUNRPC: Ensure client closes the socket when server initiates a close · 632e3bdc
      Committed by Trond Myklebust
       If the server decides to close the RPC socket, we currently don't actually
       respond until either another RPC call is scheduled, or until xprt_autoclose()
       gets called by the socket expiry timer (which may be up to 5 minutes
       later).
      
       This patch ensures that xprt_autoclose() is called much sooner if the
       server closes the socket.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      632e3bdc
    • SUNRPC: transport switch API for setting port number · 92200412
      Committed by Chuck Lever
       At some point, transport endpoint addresses will no longer be IPv4.  To hide
       the structure of the rpc_xprt's address field from ULPs and port mappers,
       add an API for setting the port number during an RPC bind operation.
      
       Test-plan:
       Destructive testing (unplugging the network temporarily).  Connectathon
       with UDP and TCP.  NFSv2/3 and NFSv4 mounting should be carefully checked.
       Probably need to rig a server where certain services aren't running, or
       that returns an error for some typical operation.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      92200412
    • SUNRPC: switchable buffer allocation · 02107148
      Committed by Chuck Lever
       Add RPC client transport switch support for replacing buffer management
       on a per-transport basis.
      
       In the current IPv4 socket transport implementation, RPC buffers are
       allocated as needed for each RPC message that is sent.  Some transport
       implementations may choose to use pre-allocated buffers for encoding,
       sending, receiving, and unmarshalling RPC messages, however.  For
       transports capable of direct data placement, the buffers can be carved
       out of a pre-registered area of memory rather than from a slab cache.
      
       Test-plan:
       Millions of fsx operations.  Performance characterization with "sio" and
       "iozone".  Use oprofile and other tools to look for significant regression
       in CPU utilization.
      Signed-off-by: Chuck Lever <cel@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      02107148
  14. 20 Dec 2005, 1 commit
    • RPC: Do not block on skb allocation · b079fa7b
      Committed by Trond Myklebust
       If we get something like the following,
       [  125.300636]  [<c04086e1>] schedule_timeout+0x54/0xa5
       [  125.305931]  [<c040866e>] io_schedule_timeout+0x29/0x33
       [  125.311495]  [<c02880c4>] blk_congestion_wait+0x70/0x85
       [  125.317058]  [<c014136b>] throttle_vm_writeout+0x69/0x7d
       [  125.322720]  [<c014714d>] shrink_zone+0xe0/0xfa
       [  125.327560]  [<c01471d4>] shrink_caches+0x6d/0x6f
       [  125.332581]  [<c01472a6>] try_to_free_pages+0xd0/0x1b5
       [  125.338056]  [<c013fa4b>] __alloc_pages+0x135/0x2e8
       [  125.343258]  [<c03b74ad>] tcp_sendmsg+0xaa0/0xb78
       [  125.348281]  [<c03d4666>] inet_sendmsg+0x48/0x53
       [  125.353212]  [<c0388716>] sock_sendmsg+0xb8/0xd3
       [  125.358147]  [<c0388773>] kernel_sendmsg+0x42/0x4f
       [  125.363259]  [<c038bc00>] sock_no_sendpage+0x5e/0x77
       [  125.368556]  [<c03ee7af>] xs_tcp_send_request+0x2af/0x375
       then the socket is blocked until memory is reclaimed, and no
       progress can ever be made.
      
       Try to access the emergency pools by using GFP_ATOMIC.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      b079fa7b
  15. 05 Nov 2005, 1 commit