1. 14 Sep 2009, 1 commit
  2. 12 Sep 2009, 1 commit
  3. 10 Aug 2009, 2 commits
    • C
      SUNRPC: Kill RPC_DISPLAY_ALL · c740eff8
      Committed by Chuck Lever
      At some point, I recall that rpc_pipe_fs used RPC_DISPLAY_ALL.
      Currently there are no uses of RPC_DISPLAY_ALL outside the transport
      modules themselves, so we can safely get rid of it.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      c740eff8
    • C
      SUNRPC: Remove duplicate universal address generation · ba809130
      Committed by Chuck Lever
      RPC universal address generation is currently done in several places:
      rpcb_clnt.c, nfs4proc.c, xprtsock.c, and xprtrdma.c.  Remove the
      redundant cases that convert a socket address to a universal
      address.  The nfs4proc.c case takes a pre-formatted presentation
      address string, not a socket address, so we'll leave that one.
      
      Because the new uaddr constructor uses the recently introduced
      rpc_ntop(), it now supports proper "::" shorthanding for IPv6
      addresses.  This allows the kernel to register properly formed
      universal addresses with the local rpcbind service, in _all_ cases.
      
      The kernel can now also send properly formed universal addresses in
      RPCB_GETADDR requests, and properly support link-local addresses when
      encoding and decoding IPv6 addresses.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      ba809130
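The "universal address" this commit constructs is the dotted string used by rpcbind, in which the port number is appended as two extra dot-separated octets after the presentation address. A rough userspace sketch of the IPv4 case follows; the helper name is illustrative (the kernel's constructor is built on rpc_ntop() and also handles IPv6 with "::" shorthand):

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

/* Build an rpcbind universal address ("h1.h2.h3.h4.p1.p2") from an
 * IPv4 socket address.  Illustrative userspace sketch only. */
static int uaddr_from_sockaddr_in(const struct sockaddr_in *sin,
                                  char *buf, size_t buflen)
{
    char host[INET_ADDRSTRLEN];
    unsigned short port = ntohs(sin->sin_port);

    if (!inet_ntop(AF_INET, &sin->sin_addr, host, sizeof(host)))
        return -1;
    /* The port is appended as two dot-separated octets: high, then low. */
    return snprintf(buf, buflen, "%s.%u.%u", host, port >> 8, port & 0xff);
}
```

For example, an NFS server at 192.168.0.1 port 2049 yields the uaddr "192.168.0.1.8.1", since 2049 = 8 * 256 + 1.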
  4. 18 Jun 2009, 4 commits
    • R
      nfs41: Rename rq_received to rq_reply_bytes_recvd · dd2b63d0
      Committed by Ricardo Labiaga
      The 'rq_received' member of 'struct rpc_rqst' is used to track when we
      have received a reply to our request.  With v4.1, the backchannel
      can now accept callback requests over the existing connection.  Rename
      this field to make it clear that it is only used for tracking reply bytes
      and not all bytes received on the connection.
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      dd2b63d0
    • R
      nfs41: Add backchannel processing support to RPC state machine · 55ae1aab
      Committed by Ricardo Labiaga
      Adds rpc_run_bc_task(), which is called by the NFS callback service to
      process backchannel requests.  It performs work similar to rpc_run_task(),
      though it "schedules" the backchannel task to be executed starting at the
      call_transmit state in the RPC state machine.
      
      It also introduces some miscellaneous updates to the argument validation,
      call_transmit, and transport cleanup functions to take into account
      that there are now forechannel and backchannel tasks.
      
      Backchannel requests do not carry an RPC message structure, since the
      payload has already been XDR encoded using the existing NFSv4 callback
      mechanism.
      
      Introduce a new transmit state for the client to reply to backchannel
      requests.  This new state simply reserves the transport and issues the
      reply.  On a connection-related error, it disconnects the transport and
      drops the reply, requiring the forechannel to re-establish the connection
      and the server to retransmit the request, as stated in NFSv4.1 section
      2.9.2 "Client and Server Transport Behavior".
      
      Note: There is no need to loop attempting to reserve the transport.  If EAGAIN
      is returned by xprt_prepare_transmit(), return with tk_status == 0,
      setting tk_action to call_bc_transmit.  rpc_execute() will invoke it again
      after the task is taken off the sleep queue.
      
      [nfs41: rpc_run_bc_task() need not be exported outside RPC module]
      [nfs41: New call_bc_transmit RPC state]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [nfs41: Backchannel: No need to loop in call_bc_transmit()]
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [rpc_count_iostats incorrectly exits early]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Convert rpc_reply_expected() to inline function]
      [Remove unnecessary BUG_ON()]
      [Rename variable]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      55ae1aab
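The no-loop EAGAIN handling described in the note above can be modeled as a single state-machine step: EAGAIN leaves the task in the same transmit state with a zero status so the scheduler re-runs it after wakeup, while a real connection error drops the reply. This is an illustrative userspace model, not the kernel's call_bc_transmit(); the names and return convention are assumptions:

```c
#include <errno.h>

/* Toy model of one backchannel-transmit step in the RPC state machine. */
enum bc_state { BC_TRANSMIT, BC_DONE, BC_DROPPED };

static enum bc_state bc_transmit_step(int reserve_result, int *status)
{
    if (reserve_result == -EAGAIN) {
        *status = 0;              /* retry later: task sleeps, then re-enters
                                   * this same state via the scheduler */
        return BC_TRANSMIT;
    }
    if (reserve_result < 0) {
        *status = reserve_result; /* connection error: drop the reply */
        return BC_DROPPED;
    }
    *status = 0;                  /* transport reserved: issue the reply */
    return BC_DONE;
}
```

The key point mirrored here is that EAGAIN is not an error and does not loop in place; control returns to the scheduler, which invokes the same state again once the task is woken.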
    • R
      nfs41: New backchannel helper routines · fb7a0b9a
      Committed by Ricardo Labiaga
      This patch introduces support to set up the callback xprt on the client side.
      It allocates and destroys the preallocated memory structures used to process
      backchannel requests.
      
      At setup time, xprt_setup_backchannel() is invoked to allocate one or
      more rpc_rqst structures and substructures.  This ensures that they
      are available when an RPC callback arrives.  The rpc_rqst structures
      are maintained in a linked list attached to the rpc_xprt structure.
      We keep track of the number of allocations so that they can be correctly
      removed when the channel is destroyed.
      
      When an RPC callback arrives, xprt_alloc_bc_request() is invoked to
      obtain a preallocated rpc_rqst structure.  The returned rpc_rqst has
      its RPC_BC_PREALLOC_IN_USE bit set in rpc_xprt->bc_flags.  The
      structure is removed from the list since it is now in use, and is
      added back later when its user is done with it.
      
      After the RPC callback replies, the rpc_rqst structure is returned by
      invoking xprt_free_bc_request(), which clears the
      RPC_BC_PREALLOC_IN_USE bit and adds the structure back to the list,
      allowing it to be reused by a subsequent RPC callback request.
      
      To be consistent with the reception of RPC messages, backchannel requests
      should be placed into the 'struct rpc_rqst' rq_rcv_buf, which in turn is
      copied to the 'struct rpc_rqst' rq_private_buf.
      
      [nfs41: Preallocate rpc_rqst receive buffer for handling callbacks]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Update copyright notice and explain page allocation]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      fb7a0b9a
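The preallocated-request lifecycle described above (allocate from a free list and set an in-use bit; clear the bit and return to the list on free) can be sketched as a tiny pool. This is a userspace toy model; the flag and function names mirror the commit text but are not the kernel's actual definitions:

```c
#include <stddef.h>

#define RPC_BC_PREALLOC_IN_USE 1UL

/* One preallocated backchannel request slot on a singly linked free list. */
struct bc_rqst {
    struct bc_rqst *next;
    unsigned long flags;
};

/* Pop a preallocated slot and mark it in use (models xprt_alloc_bc_request). */
static struct bc_rqst *bc_alloc(struct bc_rqst **pool)
{
    struct bc_rqst *req = *pool;
    if (!req)
        return NULL;            /* no preallocated slot available */
    *pool = req->next;          /* remove from the free list */
    req->flags |= RPC_BC_PREALLOC_IN_USE;
    return req;
}

/* Clear the in-use bit and return the slot (models xprt_free_bc_request). */
static void bc_free(struct bc_rqst **pool, struct bc_rqst *req)
{
    req->flags &= ~RPC_BC_PREALLOC_IN_USE;
    req->next = *pool;          /* push back for the next callback */
    *pool = req;
}
```

Preallocating at setup time is what guarantees a slot is available the moment a callback arrives, without allocating in the receive path.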
    • R
      nfs41: client callback structures · 56632b5b
      Committed by Ricardo Labiaga
      Adds a new list of rpc_xprt structures, and a reader/writer lock to
      protect the list.  The list is used to preallocate resources for
      the backchannel during backchannel requests.  Callbacks are not
      expected to cause significant latency, so only one callback will
      be allowed at this time.
      
      It also adds a pointer to the NFS callback service so that
      requests can be directed to it for processing.
      
      New callback members are added to svc_serv.  The NFSv4.1 callback service
      will sleep on the svc_serv->svc_cb_waitq until new callback requests arrive.
      Requests will be queued in svc_serv->svc_cb_list.  This patch adds this
      list, the sleep queue, and a spinlock to svc_serv.
      
      [nfs41: NFSv4.1 callback support]
      Signed-off-by: Ricardo Labiaga <ricardo.labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      56632b5b
  5. 03 May 2009, 1 commit
  6. 20 Mar 2009, 1 commit
    • T
      SUNRPC: Add the equivalent of the linger and linger2 timeouts to RPC sockets · 7d1e8255
      Committed by Trond Myklebust
      This fixes a regression against FreeBSD servers as reported by Tomas
      Kasparek. Apparently when using RPC over a TCP socket, the FreeBSD servers
      don't ever react to the client closing the socket, and so commit
      e06799f9 (SUNRPC: Use shutdown() instead of
      close() when disconnecting a TCP socket) causes the setup to hang forever
      whenever the client attempts to close and then reconnect.
      
      We break the deadlock by adding a 'linger2'-style timeout to the socket,
      after which the client will abort the connection using a TCP 'RST'.
      
      The default timeout is set to 15 seconds. A subsequent patch will put it
      under user control by means of a sysctl.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      7d1e8255
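The kernel patch implements this timeout internally in xprtsock, but the abortive close it falls back to has a well-known userspace analogue: setting SO_LINGER with a zero linger time makes close() send a TCP RST instead of waiting for an orderly FIN handshake that an unresponsive peer may never complete. A minimal sketch:

```c
#include <sys/socket.h>

/* Configure a socket so that close() aborts the connection with a TCP RST
 * rather than lingering on an orderly shutdown. */
static int set_abortive_close(int sockfd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    return setsockopt(sockfd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
}
```

This is only an illustration of the RST mechanism; the commit's actual behavior is a timer that fires after the graceful shutdown() has gone unanswered for the configured interval.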
  7. 12 Mar 2009, 1 commit
  8. 24 Dec 2008, 1 commit
  9. 20 Apr 2008, 2 commits
    • T
      SUNRPC: Don't disconnect more than once if retransmitting NFSv4 requests · 7c1d71cf
      Committed by Trond Myklebust
      NFSv4 requires us to ensure that we break the TCP connection before we're
      allowed to retransmit a request. However in the case where we're
      retransmitting several requests that have been sent on the same
      connection, we need to ensure that we don't interfere with the attempt to
      reconnect and/or break the connection again once it has been established.
      
      We therefore introduce a 'connection' cookie that is bumped every time a
      connection is broken. This allows requests to track if they need to force a
      disconnection.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      7c1d71cf
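The cookie scheme above amounts to a generation counter: the transport bumps it on every disconnect, each request records the value current when it was sent, and a retransmitting request forces a disconnect only if the connection has not already been broken since. A toy model (names are illustrative, not the kernel's actual fields):

```c
/* Generation-counter model of the connection cookie. */
struct toy_xprt { unsigned long connect_cookie; };
struct toy_rqst { unsigned long connect_cookie; };

static void toy_disconnect(struct toy_xprt *xprt)
{
    xprt->connect_cookie++;     /* invalidates requests sent before now */
}

static int toy_should_disconnect(const struct toy_xprt *xprt,
                                 const struct toy_rqst *req)
{
    /* Equal cookies: the request was sent on the current connection, so
     * a retransmit must break it first.  Unequal cookies: someone else
     * already broke the connection; don't break it again. */
    return req->connect_cookie == xprt->connect_cookie;
}
```

This is what prevents several retransmitting requests from each tearing down the freshly re-established connection in turn.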
    • T
      SUNRPC: Fix up xprt_write_space() · b6ddf64f
      Committed by Trond Myklebust
      The rest of the networking layer uses SOCK_ASYNC_NOSPACE to signal whether
      or not we have someone waiting for buffer memory. Convert the SUNRPC layer
      to use the same idiom.
      Remove the unlikely()s in xs_udp_write_space and xs_tcp_write_space. In
      fact, the most common case will be that there is nobody waiting for buffer
      space.
      
      SOCK_NOSPACE is there to tell the TCP layer whether or not the cwnd was
      limited by the application window. Ensure that we follow the same idiom as
      the rest of the networking layer here too.
      
      Finally, ensure that we clear SOCK_ASYNC_NOSPACE once we wake up, so that
      write_space() doesn't keep waking things up on xprt->pending.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      b6ddf64f
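The SOCK_ASYNC_NOSPACE idiom described above is test-and-clear: the sender sets the flag before sleeping on buffer space, and write_space wakes a waiter only if the flag is set, with the flag cleared on wakeup so repeated write_space callbacks do not keep waking the queue. A toy model (the commit clears the flag in the woken task; clearing at wake time here is a simplification):

```c
#include <stdbool.h>

#define SOCK_ASYNC_NOSPACE 0x1U

/* Returns true if a waiter should be woken, clearing the flag so that
 * subsequent write_space callbacks fire no further wakeups until a
 * sender sets the flag again. */
static bool write_space_wake(unsigned int *sk_flags)
{
    if (!(*sk_flags & SOCK_ASYNC_NOSPACE))
        return false;           /* nobody is waiting for buffer space */
    *sk_flags &= ~SOCK_ASYNC_NOSPACE;
    return true;
}
```

As the commit notes, the common case is that nobody is waiting, so the flag test makes most write_space calls a cheap no-op.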
  10. 30 Jan 2008, 6 commits
  11. 10 Oct 2007, 7 commits
  12. 11 Jul 2007, 2 commits
  13. 15 May 2007, 1 commit
  14. 01 May 2007, 3 commits
    • C
      SUNRPC: introduce rpcbind: replacement for in-kernel portmapper · a509050b
      Committed by Chuck Lever
      Introduce a replacement for the in-kernel portmapper client that supports
      all 3 versions of the rpcbind protocol.  This code is not used yet.
      
      Original code by Groupe Bull, updated for the latest kernel with multiple
      bug fixes.
      
      Note that rpcb_clnt.c does not yet support registering via versions 3 and
      4 of the rpcbind protocol.  That is planned for a later patch.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      a509050b
    • C
      SUNRPC: Eliminate side effects from rpc_malloc · c5a4dd8b
      Committed by Chuck Lever
      Currently rpc_malloc sets req->rq_buffer internally.  Make this a more
      generic interface:  return a pointer to the new buffer (or NULL) and
      make the caller set req->rq_buffer and req->rq_bufsize.  This looks much
      more like kmalloc and eliminates the side effects.
      
      To fix a potential deadlock, this patch also replaces GFP_NOFS with
      GFP_NOWAIT in rpc_malloc.  This prevents async RPCs from sleeping outside
      the RPC's task scheduler while allocating their buffer.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      c5a4dd8b
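The interface change above is the classic removal of an out-parameter side effect: the allocator just returns a pointer (or NULL), like kmalloc, and the caller stores it into the request. A sketch with illustrative names (the struct and helpers are not the kernel's):

```c
#include <stdlib.h>

struct demo_rqst {
    void  *rq_buffer;
    size_t rq_bufsize;
};

/* Side-effect-free allocator: returns a buffer or NULL, touches nothing. */
static void *demo_rpc_malloc(size_t size)
{
    return malloc(size);
}

/* The caller, not the allocator, records the buffer in the request. */
static int demo_setup_buffer(struct demo_rqst *req, size_t size)
{
    void *buf = demo_rpc_malloc(size);
    if (!buf)
        return -1;
    req->rq_buffer = buf;
    req->rq_bufsize = size;
    return 0;
}
```

Separating allocation from assignment keeps the allocator reusable and makes the ownership of rq_buffer explicit at the call site.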
    • C
      SUNRPC: RPC buffer size estimates are too large · 2bea90d4
      Committed by Chuck Lever
      The RPC buffer size estimation logic in net/sunrpc/clnt.c always
      significantly overestimates the requirements for the buffer size.
      A little instrumentation demonstrated that in fact rpc_malloc was never
      allocating the buffer from the mempool, but almost always called kmalloc.
      
      To compute the size of the RPC buffer more precisely, split p_bufsiz into
      two fields: one for the argument size, and one for the result size.
      
      Then, compute the sum of the exact call and reply header sizes, and split
      the RPC buffer precisely between the two.  That should keep almost all RPC
      buffers within the 2KiB buffer mempool limit.
      
      And, we can finally be rid of RPC_SLACK_SPACE!
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      2bea90d4
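The arithmetic the commit describes is simply the exact sum of the two halves: the send side needs the call header plus the encoded arguments, the receive side the reply header plus the decoded results, and the buffer is split at that boundary. A sketch with illustrative names and header constants (not the kernel's):

```c
#include <stddef.h>

struct buf_split {
    size_t sndsize;   /* call header + encoded arguments */
    size_t rcvsize;   /* reply header + decoded results  */
};

/* Compute the exact send/receive split of one RPC buffer. */
static struct buf_split rpc_buf_sizes(size_t call_hdr, size_t argsize,
                                      size_t reply_hdr, size_t ressize)
{
    struct buf_split s = {
        .sndsize = call_hdr + argsize,
        .rcvsize = reply_hdr + ressize,
    };
    return s;
}
```

Because the total is the exact requirement rather than a padded estimate, a typical request now fits inside the 2KiB mempool buffer the commit mentions.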
  15. 06 Dec 2006, 7 commits