1. 07 Oct 2015, 1 commit
  2. 25 Sep 2015, 1 commit
  3. 31 Aug 2015, 1 commit
  4. 06 Aug 2015, 5 commits
  5. 13 Jun 2015, 12 commits
  6. 19 May 2015, 1 commit
  7. 31 Mar 2015, 13 commits
  8. 07 Mar 2015, 1 commit
  9. 30 Jan 2015, 5 commits
    • xprtrdma: Clean up after adding regbuf management · df515ca7
      Committed by Chuck Lever
      rpcrdma_{de}register_internal() are used only in verbs.c now.
      
      MAX_RPCRDMAHDR is no longer used and can be removed.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Allocate zero pad separately from rpcrdma_buffer · c05fbb5a
      Committed by Chuck Lever
      Use the new rpcrdma_alloc_regbuf() API to shrink the amount of
      contiguous memory needed for a buffer pool by moving the zero
      pad buffer into a regbuf.
      
      This is for consistency with the other uses of internally
      registered memory.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
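
      As a hedged illustration of the regbuf idea named in this entry, the
      userspace sketch below models one internally registered buffer that
      owns its own allocation, rather than being carved out of a larger
      contiguous pool. The struct layout, the alloc_regbuf() helper, and
      the mocked registration step are assumptions for illustration; the
      kernel's rpcrdma_alloc_regbuf() differs.

      /*
       * Toy model of a regbuf: a self-contained allocation with its
       * own (mocked) registration handle. Illustrative only.
       */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      struct regbuf {
          size_t len;       /* usable length of data[] */
          unsigned rkey;    /* stand-in for an RDMA registration handle */
          char data[];      /* the registered memory itself */
      };

      static struct regbuf *alloc_regbuf(size_t len)
      {
          struct regbuf *rb = malloc(sizeof(*rb) + len);

          if (!rb)
              return NULL;
          rb->len = len;
          rb->rkey = 0xdead;    /* real code registers with the adapter */
          memset(rb->data, 0, len);
          return rb;
      }

      int main(void)
      {
          /* The zero pad becomes its own small regbuf instead of a
           * slice of one big rpcrdma_buffer allocation. */
          struct regbuf *zero_pad = alloc_regbuf(64);

          if (!zero_pad)
              return 1;
          printf("zero pad: %zu bytes, rkey 0x%x\n",
                 zero_pad->len, zero_pad->rkey);
          free(zero_pad);
          return 0;
      }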
    • xprtrdma: Allocate RPC/RDMA receive buffer separately from struct rpcrdma_rep · 6b1184cd
      Committed by Chuck Lever
      The rr_base field is currently the buffer where RPC replies land.
      
      An RPC/RDMA reply header lands in this buffer. In some cases an RPC
      reply header also lands in this buffer, just after the RPC/RDMA
      header.
      
      The inline threshold is an agreed-on size limit for RDMA SEND
      operations that pass between client and server. The sum of the
      RPC/RDMA reply header size and the RPC reply header size must be
      less than this threshold.
      
      The largest RDMA RECV that the client should have to handle is the
      size of the inline threshold. The receive buffer should thus be the
      size of the inline threshold, and not related to RPCRDMA_MAX_SEGS.
      
      RPC replies received via RDMA WRITE (long replies) are caught in
      rq_rcv_buf, which is the second half of the RPC send buffer. That is,
      such replies are not involved in any way with rr_base.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
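
      To make the sizing rule above concrete, here is a minimal userspace
      sketch: the reply buffer is allocated at the negotiated inline
      threshold, with no dependency on the segment count. The create_rep()
      name, the struct layout, and the 1024-byte example threshold are
      assumptions, not the kernel's actual code or defaults.

      #include <stdio.h>
      #include <stdlib.h>

      struct rep {
          size_t buflen;    /* size of the receive buffer */
          char *buf;        /* where an inline RDMA RECV would land */
      };

      /* Size the reply buffer by the inline threshold; nothing here
       * depends on RPCRDMA_MAX_SEGS. */
      static struct rep *create_rep(size_t inline_rsize)
      {
          struct rep *rep = malloc(sizeof(*rep));

          if (!rep)
              return NULL;
          rep->buflen = inline_rsize;    /* largest RECV to handle */
          rep->buf = calloc(1, inline_rsize);
          if (!rep->buf) {
              free(rep);
              return NULL;
          }
          return rep;
      }

      int main(void)
      {
          struct rep *rep = create_rep(1024);    /* example threshold */

          if (!rep)
              return 1;
          printf("reply buffer: %zu bytes\n", rep->buflen);
          free(rep->buf);
          free(rep);
          return 0;
      }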
    • xprtrdma: Allocate RPC/RDMA send buffer separately from struct rpcrdma_req · 85275c87
      Committed by Chuck Lever
      The rl_base field is currently the buffer where each RPC/RDMA call
      header is built.
      
      The inline threshold is an agreed-on size limit for RDMA SEND
      operations that pass between client and server. The sum of the
      RPC/RDMA header size and the RPC header size must be less than or
      equal to this threshold.
      
      Increasing the r/wsize maximum will require MAX_SEGS to grow
      significantly, but the inline threshold size won't change, since
      both sides have already agreed on it. In particular, the server's
      inline threshold does not change.
      
      Since an RPC/RDMA header can never be larger than the inline
      threshold, make all RPC/RDMA header buffers the size of the
      inline threshold.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
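
      The invariant this entry relies on can be stated in a few lines of
      C. The sketch below checks that an RPC/RDMA header plus the RPC
      header fits within the agreed inline threshold, which is why sizing
      every header buffer at the threshold is always sufficient. The
      function name and the example sizes are illustrative assumptions.

      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      /* True when both headers fit within the inline threshold. */
      static bool fits_inline(size_t rpcrdma_hdr_len, size_t rpc_hdr_len,
                              size_t inline_threshold)
      {
          return rpcrdma_hdr_len + rpc_hdr_len <= inline_threshold;
      }

      int main(void)
      {
          size_t threshold = 1024;    /* example value only */

          /* Example header sizes; real headers are variable-length. */
          assert(fits_inline(28, 160, threshold));
          printf("every header buffer can be %zu bytes\n", threshold);
          return 0;
      }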
    • xprtrdma: Allocate RPC send buffer separately from struct rpcrdma_req · 0ca77dc3
      Committed by Chuck Lever
      Because internal memory registration is an expensive and synchronous
      operation, xprtrdma pre-registers send and receive buffers at mount
      time, and then re-uses them for each RPC.
      
      A "hardway" allocation is a memory allocation and registration that
      replaces a send buffer during the processing of an RPC. Hardway must
      be done if the RPC send buffer is too small to accommodate an RPC's
      call and reply headers.
      
      For xprtrdma, each RPC send buffer is currently part of struct
      rpcrdma_req so that xprt_rdma_free(), which is passed nothing but
      the address of an RPC send buffer, can find its matching struct
      rpcrdma_req and rpcrdma_rep quickly via container_of / offsetof.
      
      That means that hardway currently has to replace a whole rpcrdma_req
      when it replaces an RPC send buffer. This is often a fairly hefty
      chunk of contiguous memory due to the size of the rl_segments array
      and the fact that both the send and receive buffers are part of
      struct rpcrdma_req.
      
      Some obscure re-use of fields in rpcrdma_req is done so that
      xprt_rdma_free() can detect replaced rpcrdma_req structs, and
      restore the original.
      
      This commit breaks apart the RPC send buffer and struct rpcrdma_req
      so that increasing the size of the rl_segments array does not change
      the alignment of each RPC send buffer. (Increasing rl_segments is
      needed to bump up the maximum r/wsize for NFS/RDMA).
      
      This change opens up some interesting possibilities for improving
      the design of xprt_rdma_allocate().
      
      xprt_rdma_allocate() is now the one place where RPC send buffers
      are allocated or re-allocated, and they are now always left in place
      by xprt_rdma_free().
      
      A large re-allocation that includes both the rl_segments array and
      the RPC send buffer is no longer needed. Send buffer re-allocation
      becomes quite rare. Good send buffer alignment is guaranteed no
      matter what the size of the rl_segments array is.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
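
      The container_of/offsetof recovery that this patch moves away from
      is easy to demonstrate in isolation. The toy struct below stands in
      for rpcrdma_req and is not the kernel's layout; only the
      pointer-arithmetic trick itself is the point.

      #include <stddef.h>
      #include <stdio.h>

      #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

      struct toy_req {
          int id;
          char sendbuf[128];    /* embedded RPC send buffer */
      };

      int main(void)
      {
          struct toy_req req = { .id = 7 };
          char *buf = req.sendbuf;    /* all the free path is handed */
          struct toy_req *owner =
              container_of(buf, struct toy_req, sendbuf);

          printf("recovered req id = %d\n", owner->id);    /* prints 7 */
          return 0;
      }

      Once the send buffer is decoupled from rpcrdma_req, a replaced
      buffer no longer implies a replaced struct, which is what retires
      the field-reuse workaround described above.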