1. 02 Oct, 2013 1 commit
  2. 26 Apr, 2013 1 commit
    • SUNRPC: allow disabling idle timeout · 33d90ac0
      Committed by J. Bruce Fields
      In the gss-proxy case we don't want to have to reconnect at random--we
      want to connect only on gss-proxy startup when we can steal gss-proxy's
      context to do the connect in the right namespace.
      
      So, provide a flag that allows the rpc_create caller to turn off the
      idle timeout.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
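      
      For illustration, a minimal sketch of how an rpc_create() caller could ask
      for this behavior.  The flag name RPC_CLNT_CREATE_NO_IDLE_TIMEOUT follows
      this commit; everything else below (function name, arguments, values) is a
      placeholder, not code from the gss-proxy series.
      
          #include <linux/sunrpc/clnt.h>
          #include <net/net_namespace.h>
          
          /* Sketch only: create a client whose transport never idles out. */
          static struct rpc_clnt *example_create_noidle_client(struct sockaddr *addr,
                                                               size_t addrlen,
                                                               const struct rpc_program *prog)
          {
                  struct rpc_create_args args = {
                          .net            = &init_net,
                          .protocol       = XPRT_TRANSPORT_TCP,
                          .address        = addr,
                          .addrsize       = addrlen,
                          .servername     = "gss-proxy",          /* placeholder */
                          .program        = prog,
                          .version        = 1,                    /* placeholder */
                          .authflavor     = RPC_AUTH_NULL,
                          /* keep the connection up: no idle disconnect */
                          .flags          = RPC_CLNT_CREATE_NO_IDLE_TIMEOUT,
                  };
          
                  return rpc_create(&args);
          }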
  3. 15 Apr, 2013 2 commits
  4. 01 Feb, 2013 2 commits
  5. 29 Sep, 2012 1 commit
  6. 07 Sep, 2012 1 commit
    • SUNRPC: Fix a UDP transport regression · f39c1bfb
      Committed by Trond Myklebust
      Commit 43cedbf0 (SUNRPC: Ensure that
      we grab the XPRT_LOCK before calling xprt_alloc_slot) is causing
      hangs in the case of NFS over UDP mounts.
      
      Since neither the UDP nor the RDMA transport mechanism uses dynamic slot
      allocation, we can skip grabbing the socket lock for those transports.
      Add a new rpc_xprt_op to allow switching between the TCP and UDP/RDMA
      cases.
      
      Note that the NFSv4.1 back channel assigns the slot directly
      through rpc_run_bc_task, so we can ignore that case.
      Reported-by: Dick Streefland <dick.streefland@altium.nl>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org [>= 3.1]
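      
      A sketch of the shape of the fix: slot allocation becomes a per-transport
      operation, so TCP keeps taking XPRT_LOCK around its dynamic allocation while
      UDP and RDMA use the plain allocator.  The op and helper names below follow
      this commit, but the ops tables are abridged and shown only as an
      illustration.
      
          /* TCP uses dynamic slots, so allocation stays under XPRT_LOCK. */
          static struct rpc_xprt_ops xs_tcp_ops = {
                  .alloc_slot     = xprt_lock_and_alloc_slot,
                  /* ... remaining ops unchanged ... */
          };
          
          /* UDP (and RDMA) use a static slot table: skip the lock entirely. */
          static struct rpc_xprt_ops xs_udp_ops = {
                  .alloc_slot     = xprt_alloc_slot,
                  /* ... remaining ops unchanged ... */
          };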
  7. 01 Aug, 2012 1 commit
    • nfs: enable swap on NFS · a564b8f0
      Committed by Mel Gorman
      Implement the new swapfile a_ops for NFS and hook up ->direct_IO.  This
      will set the NFS socket to SOCK_MEMALLOC, run socket reconnect under
      PF_MEMALLOC, and reset SOCK_MEMALLOC before engaging the protocol
      ->connect() method.
      
      PF_MEMALLOC should allow the allocation of struct socket and related
      objects, and the early (re)setting of SOCK_MEMALLOC should allow us to
      receive the packets required for the TCP connection buildup.
      
      [jlayton@redhat.com: Restore PF_MEMALLOC task flags in all cases]
      [dfeng@redhat.com: Fix handling of multiple swap files]
      [a.p.zijlstra@chello.nl: Original patch]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Xiaotian Feng <dfeng@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
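      
      A hedged sketch of the reconnect pattern described above; the helper name
      and signature are simplified stand-ins, not the code added by this patch.
      
          #include <linux/net.h>
          #include <linux/sched.h>
          #include <net/sock.h>
          
          /* Sketch: (re)connect a transport socket that may back a swapfile. */
          static int example_swap_connect(struct socket *sock,
                                          struct sockaddr *addr, int addrlen)
          {
                  unsigned long pflags = current->flags & PF_MEMALLOC;
                  int ret;
          
                  current->flags |= PF_MEMALLOC;  /* socket allocs may dip into reserves */
                  sk_set_memalloc(sock->sk);      /* (re)set SOCK_MEMALLOC before
                                                   * ->connect() so handshake packets
                                                   * are not dropped */
                  ret = kernel_connect(sock, addr, addrlen, O_NONBLOCK);
          
                  if (!pflags)                    /* restore the caller's PF_MEMALLOC state */
                          current->flags &= ~PF_MEMALLOC;
                  return ret;
          }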
  8. 03 Mar, 2012 1 commit
  9. 17 Feb, 2012 1 commit
    • SUNRPC: add sending, pending queue and max slot to xprt stats · 15a45206
      Committed by Andy Adamson
      With static RPC slots, the xprt backlog queue stats were useful in showing
      when the transport (TCP) was starved by lack of RPC slots. The new dynamic
      RPC slot code, commit d9ba131d, always
      provides an RPC slot and so only uses the xprt backlog queue when the
      tcp_max_slot_table_entries value has been hit or when an allocation error
      occurs. All requests are now placed on the xprt sending or pending queues,
      which need to be monitored for debugging.
      
      The max_slot stat shows the maximum number of dynamic RPC slots reached,
      which is useful when debugging performance issues.
      
      Add the new fields at the end of the mountstats xprt stanza so that existing
      mountstats tools still output the previous values correctly and simply
      ignore the new fields. Bump NFS_IOSTATS_VERS.
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
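      
      A sketch of the compatibility trick: the new counters are appended after the
      existing fields in the transport's seq_file output, so a mountstats that only
      parses the older fields keeps working.  The stat field names follow the
      commit; the function body is abridged and the format string is illustrative.
      
          #include <linux/seq_file.h>
          #include <linux/sunrpc/xprt.h>
          
          static void example_xprt_print_stats(struct rpc_xprt *xprt,
                                               struct seq_file *seq)
          {
                  /* ... the pre-existing fields are printed first, unchanged ... */
                  seq_printf(seq, " %lu %llu %llu\n",
                             xprt->stat.max_slots,   /* high-water mark of RPC slots */
                             xprt->stat.sending_u,   /* cumulative sending queue utilization */
                             xprt->stat.pending_u);  /* cumulative pending queue utilization */
          }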
  10. 07 Feb, 2012 1 commit
  11. 18 Jul, 2011 4 commits
  12. 15 Jul, 2011 1 commit
  13. 28 May, 2011 1 commit
    • SUNRPC: Support for RPC over AF_LOCAL transports · 176e21ee
      Committed by Chuck Lever
      TI-RPC introduces the capability of performing RPC over AF_LOCAL
      sockets.  It uses this mainly for registering and unregistering
      local RPC services securely with the local rpcbind, but we could
      also conceivably use it as a generic upcall mechanism.
      
      This patch provides a client-side only implementation for the moment.
      We might also consider a server-side implementation to provide
      AF_LOCAL access to NLM (for statd downcalls, and the like).
      
      Autobinding is not supported on kernel AF_LOCAL transports at this
      time.  Kernel ULPs must specify the pathname of the remote endpoint
      when an AF_LOCAL transport is created.  rpcbind supports registering
      services available via AF_LOCAL, so the kernel could handle it with
      some adjustment to ->rpcbind and ->set_port.  But we don't need this
      feature for doing upcalls via well-known named sockets.
      
      This has not been tested with ULPs that move a substantial amount of
      data.  Thus, I can't attest to how robust the write_space and
      congestion management logic is.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
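      
      A sketch of what a kernel caller might pass to rpc_create() for an AF_LOCAL
      transport.  XPRT_TRANSPORT_LOCAL is the identifier this patch introduces; the
      pathname, servername, and flags below are placeholders.
      
          #include <linux/un.h>
          #include <linux/sunrpc/clnt.h>
          #include <net/net_namespace.h>
          
          /* Sketch: client for an RPC service on a well-known named socket. */
          static struct rpc_clnt *example_local_client(const struct rpc_program *prog,
                                                       u32 version)
          {
                  struct sockaddr_un sun = {
                          .sun_family = AF_LOCAL,
                          .sun_path   = "/var/run/example.sock",  /* placeholder */
                  };
                  struct rpc_create_args args = {
                          .net            = &init_net,
                          .protocol       = XPRT_TRANSPORT_LOCAL,
                          .address        = (struct sockaddr *)&sun,
                          .addrsize       = sizeof(sun),
                          .servername     = "localhost",          /* placeholder */
                          .program        = prog,
                          .version        = version,
                          .authflavor     = RPC_AUTH_NULL,
                          .flags          = RPC_CLNT_CREATE_NOPING,  /* placeholder */
                  };
          
                  return rpc_create(&args);
          }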
  14. 18 Mar, 2011 1 commit
  15. 12 Jan, 2011 1 commit
  16. 02 Oct, 2010 4 commits
  17. 04 Aug, 2010 1 commit
  18. 15 May, 2010 4 commits
  19. 14 Sep, 2009 1 commit
  20. 12 Sep, 2009 1 commit
  21. 10 Aug, 2009 2 commits
    • SUNRPC: Kill RPC_DISPLAY_ALL · c740eff8
      Committed by Chuck Lever
      At some point, I recall that rpc_pipe_fs used RPC_DISPLAY_ALL.
      Currently there are no uses of RPC_DISPLAY_ALL outside the transport
      modules themselves, so we can safely get rid of it.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • SUNRPC: Remove duplicate universal address generation · ba809130
      Committed by Chuck Lever
      RPC universal address generation is currently done in several places:
      rpcb_clnt.c, nfs4proc.c, xprtsock.c, and xprtrdma.c.  Remove the
      redundant cases that convert a socket address to a universal
      address.  The nfs4proc.c case takes a pre-formatted presentation
      address string, not a socket address, so we'll leave that one.
      
      Because the new uaddr constructor uses the recently introduced
      rpc_ntop(), it now supports proper "::" shorthanding for IPv6
      addresses.  This allows the kernel to register properly formed
      universal addresses with the local rpcbind service, in _all_ cases.
      
      The kernel can now also send properly formed universal addresses in
      RPCB_GETADDR requests, and support link-local addresses properly when
      encoding and decoding IPv6 addresses.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
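      
      For reference, a universal address is the presentation address with the port
      appended as two decimal octets, e.g. port 2049 on 192.0.2.10 becomes
      "192.0.2.10.8.1".  A hedged, IPv4-only sketch of that conversion follows; the
      consolidated in-kernel helper built on rpc_ntop() also handles IPv6 and its
      scope IDs.
      
          #include <linux/kernel.h>
          #include <linux/in.h>
          
          /* Sketch only: build an IPv4 universal address string. */
          static int example_sockaddr2uaddr(const struct sockaddr_in *sin,
                                            char *buf, size_t buflen)
          {
                  unsigned int port = ntohs(sin->sin_port);
          
                  /* presentation address, then the port split into two octets */
                  return snprintf(buf, buflen, "%pI4.%u.%u",
                                  &sin->sin_addr.s_addr,
                                  port >> 8, port & 0xff);
          }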
  22. 18 Jun, 2009 4 commits
    • nfs41: Rename rq_received to rq_reply_bytes_recvd · dd2b63d0
      Committed by Ricardo Labiaga
      The 'rq_received' member of 'struct rpc_rqst' is used to track when we
      have received a reply to our request.  With v4.1, the backchannel
      can now accept callback requests over the existing connection.  Rename
      this field to make it clear that it is only used for tracking reply bytes
      and not all bytes received on the connection.
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
    • nfs41: Add backchannel processing support to RPC state machine · 55ae1aab
      Committed by Ricardo Labiaga
      Adds rpc_run_bc_task(), which is called by the NFS callback service to
      process backchannel requests.  It performs similar work to rpc_run_task(),
      but "schedules" the backchannel task to be executed starting at the
      call_transmit state in the RPC state machine.
      
      It also introduces some miscellaneous updates to the argument validation,
      call_transmit, and transport cleanup functions to take into account
      that there are now forechannel and backchannel tasks.
      
      Backchannel requests do not carry an RPC message structure, since the
      payload has already been XDR encoded using the existing NFSv4 callback
      mechanism.
      
      Introduce a new transmit state for the client to reply to backchannel
      requests.  This new state simply reserves the transport and issues the
      reply.  In case of a connection-related error, it disconnects the transport
      and drops the reply.  It requires the forechannel to re-establish the
      connection and the server to retransmit the request, as stated in NFSv4.1
      section 2.9.2, "Client and Server Transport Behavior".
      
      Note: There is no need to loop attempting to reserve the transport.  If EAGAIN
      is returned by xprt_prepare_transmit(), return with tk_status == 0,
      setting tk_action to call_bc_transmit.  rpc_execute() will invoke it again
      after the task is taken off the sleep queue.
      
      [nfs41: rpc_run_bc_task() need not be exported outside RPC module]
      [nfs41: New call_bc_transmit RPC state]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [nfs41: Backchannel: No need to loop in call_bc_transmit()]
      Signed-off-by: Andy Adamson <andros@netapp.com>
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [rpc_count_iostats incorrectly exits early]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Convert rpc_reply_expected() to inline function]
      [Remove unnecessary BUG_ON()]
      [Rename variable]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
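      
      A sketch of the no-loop retry logic from the note above.  The xprt helper
      calls are shown in simplified form and the error handling is abridged; this
      is an illustration, not the function as merged.
      
          #include <linux/sunrpc/sched.h>
          #include <linux/sunrpc/xprt.h>
          
          static void example_bc_transmit(struct rpc_task *task)
          {
                  /* Reserve the transport; the reply is already XDR-encoded. */
                  if (xprt_prepare_transmit(task) == -EAGAIN) {
                          task->tk_status = 0;
                          /* Stay in this state; rpc_execute() calls us again
                           * once the task is taken off the sleep queue. */
                          task->tk_action = example_bc_transmit;
                          return;
                  }
          
                  xprt_transmit(task);
                  if (task->tk_status < 0) {
                          /* Connection error: drop the reply and disconnect.  The
                           * forechannel re-establishes the connection and the
                           * server retransmits the callback request. */
                          xprt_conditional_disconnect(task->tk_rqstp->rq_xprt,
                                          task->tk_rqstp->rq_connect_cookie);
                  }
                  task->tk_action = rpc_exit_task;
          }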
    • nfs41: New backchannel helper routines · fb7a0b9a
      Committed by Ricardo Labiaga
      This patch introduces support to set up the callback xprt on the client side.
      It allocates/destroys the preallocated memory structures used to process
      backchannel requests.
      
      At setup time, xprt_setup_backchannel() is invoked to allocate one or
      more rpc_rqst structures and substructures.  This ensures that they
      are available when an RPC callback arrives.  The rpc_rqst structures
      are maintained in a linked list attached to the rpc_xprt structure.
      We keep track of the number of allocations so that they can be correctly
      removed when the channel is destroyed.
      
      When an RPC callback arrives, xprt_alloc_bc_request() is invoked to
      obtain a preallocated rpc_rqst structure.  The rpc_rqst is returned,
      and the RPC_BC_PREALLOC_IN_USE bit is set in rpc_xprt->bc_flags.
      The structure is removed from the list since it is now in use, and
      it will be added back later when its user is done with it.
      
      After the reply to the RPC callback has been sent, the rpc_rqst structure
      is returned by invoking xprt_free_bc_request().  This clears the
      RPC_BC_PREALLOC_IN_USE bit and adds the structure back to the list,
      allowing it to be reused by a subsequent RPC callback request.
      
      To be consistent with the reception of RPC messages, the backchannel requests
      should be placed into the 'struct rpc_rqst' rq_rcv_buf, which is in turn
      copied to the 'struct rpc_rqst' rq_private_buf.
      
      [nfs41: Preallocate rpc_rqst receive buffer for handling callbacks]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
      [Update copyright notice and explain page allocation]
      Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
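      
      A hedged sketch of the allocate/return cycle for the preallocated requests.
      The list and lock field names (bc_pa_list, bc_pa_lock, rq_bc_pa_list) are
      assumptions made for the illustration, and the in-use flag handling is left
      out.
      
          #include <linux/list.h>
          #include <linux/spinlock.h>
          #include <linux/sunrpc/xprt.h>
          
          static struct rpc_rqst *example_alloc_bc_request(struct rpc_xprt *xprt)
          {
                  struct rpc_rqst *req = NULL;
          
                  spin_lock(&xprt->bc_pa_lock);
                  if (!list_empty(&xprt->bc_pa_list)) {
                          req = list_first_entry(&xprt->bc_pa_list,
                                                 struct rpc_rqst, rq_bc_pa_list);
                          list_del(&req->rq_bc_pa_list);  /* now in use */
                  }
                  spin_unlock(&xprt->bc_pa_lock);
                  return req;
          }
          
          static void example_free_bc_request(struct rpc_xprt *xprt,
                                              struct rpc_rqst *req)
          {
                  spin_lock(&xprt->bc_pa_lock);
                  /* back on the list, reusable by the next callback request */
                  list_add(&req->rq_bc_pa_list, &xprt->bc_pa_list);
                  spin_unlock(&xprt->bc_pa_lock);
          }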
    • nfs41: client callback structures · 56632b5b
      Committed by Ricardo Labiaga
      Adds a new list of rpc_xprt structures and a readers/writers lock to
      protect the list.  The list is used to preallocate resources for
      the backchannel during backchannel requests.  Callbacks are not
      expected to cause significant latency, so only one callback will
      be allowed at this time.
      
      It also adds a pointer to the NFS callback service so that
      requests can be directed to it for processing.
      
      New callback members are added to svc_serv. The NFSv4.1 callback service will
      sleep on svc_serv->svc_cb_waitq until new callback requests arrive.
      The requests will be queued in svc_serv->svc_cb_list. This patch adds this
      list, the sleep queue, and a spinlock to svc_serv.
      
      [nfs41: NFSv4.1 callback support]
      Signed-off-by: Ricardo Labiaga <ricardo.labiaga@netapp.com>
      Signed-off-by: Benny Halevy <bhalevy@panasas.com>
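      
      A sketch of the svc_serv additions described above.  The waitqueue and list
      names come from the text; the spinlock name is an assumption, and the
      pre-existing members are elided.
      
          struct svc_serv {
                  /* ... existing members elided ... */
          #if defined(CONFIG_NFS_V4_1)
                  struct list_head        svc_cb_list;   /* queued callback requests   */
                  spinlock_t              svc_cb_lock;   /* protects svc_cb_list       */
                                                         /* (assumed name)             */
                  wait_queue_head_t       svc_cb_waitq;  /* callback service sleeps    */
                                                         /* here until work arrives    */
          #endif
          };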
  23. 03 May, 2009 1 commit
  24. 20 Mar, 2009 1 commit
    • SUNRPC: Add the equivalent of the linger and linger2 timeouts to RPC sockets · 7d1e8255
      Committed by Trond Myklebust
      This fixes a regression against FreeBSD servers as reported by Tomas
      Kasparek. Apparently when using RPC over a TCP socket, the FreeBSD servers
      don't ever react to the client closing the socket, and so commit
      e06799f9 (SUNRPC: Use shutdown() instead of
      close() when disconnecting a TCP socket) causes the setup to hang forever
      whenever the client attempts to close and then reconnect.
      
      We break the deadlock by adding a 'linger2' style timeout to the socket,
      after which the client will abort the connection using a TCP 'RST'.
      
      The default timeout is set to 15 seconds. A subsequent patch will put it
      under user control by means of a sysctl.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
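      
      For reference, the classic way to make closing a TCP socket send an RST
      instead of a FIN is a zero-length SO_LINGER, sketched below.  This only
      illustrates the abort-with-RST behavior; it is not the timer-driven
      mechanism this patch adds to the transport.
      
          #include <linux/net.h>
          #include <linux/socket.h>
          
          /* Sketch: force an abortive close (TCP RST) on a kernel socket. */
          static void example_abort_connection(struct socket *sock)
          {
                  struct linger lng = {
                          .l_onoff  = 1,
                          .l_linger = 0,  /* zero linger => RST on close */
                  };
          
                  kernel_setsockopt(sock, SOL_SOCKET, SO_LINGER,
                                    (char *)&lng, sizeof(lng));
                  sock_release(sock);
          }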
  25. 12 Mar, 2009 1 commit