1. 18 Nov 2017: 18 commits
  2. 17 Oct 2017: 3 commits
  3. 25 Sep 2017: 1 commit
  4. 06 Sep 2017: 4 commits
    • xprtrdma: Use xprt_pin_rqst in rpcrdma_reply_handler · 9590d083
      Committed by Chuck Lever
      Adopt xprt_pin_rqst to eliminate contention between Call-side
      users of rb_lock and rpcrdma_reply_handler's use of rb_lock.
      
      This replaces the mechanism introduced in 431af645 ("xprtrdma:
      Fix client lock-up after application signal fires").
      
      Use recv_lock to quickly find the completing rqst, pin it, then
      drop the lock. At that point invalidation and pull-up of the Reply
      XDR can be done. Both are often expensive operations.
      
      Finally, take recv_lock again to signal completion to the RPC
      layer; the lock also protects adjustment of "cwnd".
      
      This greatly reduces the amount of time a lock is held by the
      reply handler. Comparing lock_stat results shows a marked decrease
      in contention on rb_lock and recv_lock.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      [trond.myklebust@primarydata.com: Remove call to rpcrdma_buffer_put() from
         the "out_norqst:" path in rpcrdma_reply_handler.]
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
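      A minimal sketch of the pin/work/unpin pattern described above. It uses
      a userspace mutex and a plain pin counter in place of the kernel's
      recv_lock and xprt_pin_rqst()/xprt_unpin_rqst(); all names and helpers
      below are illustrative assumptions, not the actual xprtrdma code.

          /* Sketch: hold the lock only to find and pin, and again to complete. */
          #include <pthread.h>
          #include <stdbool.h>
          #include <stddef.h>

          struct rqst {
                  unsigned int xid;
                  int pin_count;      /* nonzero while a handler uses this rqst */
                  bool complete;
          };

          static pthread_mutex_t recv_lock = PTHREAD_MUTEX_INITIALIZER;
          static struct rqst slot = { .xid = 42 };

          static struct rqst *lookup_rqst(unsigned int xid)
          {
                  return slot.xid == xid ? &slot : NULL;
          }

          /* Placeholder for MR invalidation and Reply XDR pull-up: the
           * expensive work that now runs with no lock held. */
          static void invalidate_and_pull_up(struct rqst *rqst)
          {
                  (void)rqst;
          }

          static void reply_handler(unsigned int xid)
          {
                  struct rqst *rqst;

                  /* 1. Briefly hold recv_lock to find and pin the rqst. */
                  pthread_mutex_lock(&recv_lock);
                  rqst = lookup_rqst(xid);
                  if (rqst)
                          rqst->pin_count++;      /* cf. xprt_pin_rqst() */
                  pthread_mutex_unlock(&recv_lock);
                  if (!rqst)
                          return;

                  /* 2. Do the expensive work unlocked. */
                  invalidate_and_pull_up(rqst);

                  /* 3. Retake recv_lock only to unpin, signal completion,
                   * and (in the real code) adjust cwnd. */
                  pthread_mutex_lock(&recv_lock);
                  rqst->pin_count--;              /* cf. xprt_unpin_rqst() */
                  rqst->complete = true;
                  pthread_mutex_unlock(&recv_lock);
          }

          int main(void)
          {
                  reply_handler(42);
                  return slot.complete ? 0 : 1;
          }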
    • svcrdma: Estimate Send Queue depth properly · 26fb2254
      Committed by Chuck Lever
      The rdma_rw API adjusts max_send_wr upwards during the
      rdma_create_qp() call. If the ULP actually wants to take advantage
      of these extra resources, it must increase the size of its send
      completion queue (created before rdma_create_qp is called) and
      increase its send queue accounting limit.
      
      Use the new rdma_rw_mr_factor API to figure out the correct value
      to use for the Send Queue and Send Completion Queue depths.
      
      Also ensure that the chosen Send Queue depth for a newly created
      transport does not overrun the QP WR limit of the underlying device.
      
      Lastly, there's no longer a need to carry the Send Queue depth in
      struct svcxprt_rdma, since the value is used only in the
      svc_rdma_accept() path.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
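      A sketch of the depth arithmetic described above. rdma_rw_mr_factor()
      is the kernel API the commit names; the stub factor, the request
      count, and the variable names below are illustrative assumptions.

          /* Sketch: size the Send Queue for rdma_rw's extra work requests. */
          #include <stdio.h>

          #define MAXPAGES 259    /* stand-in for RPCSVC_MAXPAGES */

          /* Stand-in for the kernel's rdma_rw_mr_factor(): how many R/W
           * contexts the rdma_rw API may consume per MAXPAGES-page request. */
          static unsigned int rw_mr_factor(unsigned int maxpages)
          {
                  (void)maxpages;
                  return 2;       /* assumed; the real value is device-specific */
          }

          int main(void)
          {
                  unsigned int max_requests = 32;         /* credit limit */
                  unsigned int rq_depth = max_requests;   /* one Recv WR each */

                  /* Send depth: a reply per credit plus the R/W contexts
                   * rdma_create_qp() may add on rdma_rw's behalf. The Send
                   * Completion Queue must be sized to this same depth. */
                  unsigned int ctxts = rw_mr_factor(MAXPAGES) * max_requests;
                  unsigned int sq_depth = rq_depth + ctxts;

                  printf("rq_depth=%u sq_depth=%u\n", rq_depth, sq_depth);
                  return 0;
          }

      Both depths must still be capped at the device's WR limit; the next
      entry sketches that check.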
    • svcrdma: Limit RQ depth · 5a25bfd2
      Committed by Chuck Lever
      Ensure that the chosen Receive Queue depth for a newly created
      transport does not overrun the QP WR limit of the underlying device.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
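      The guard this commit describes, sketched under the same assumptions
      as above (max_qp_wr stands for the device attribute
      dev->attrs.max_qp_wr; the helper name is illustrative).

          /* Sketch: cap a requested queue depth at the device's WR limit. */
          #include <stdio.h>

          static unsigned int limit_depth(unsigned int wanted,
                                          unsigned int max_qp_wr)
          {
                  if (wanted > max_qp_wr) {
                          fprintf(stderr, "depth %u exceeds device limit %u\n",
                                  wanted, max_qp_wr);
                          return max_qp_wr;
                  }
                  return wanted;
          }

          int main(void)
          {
                  printf("%u\n", limit_depth(8192, 4096));  /* prints 4096 */
                  return 0;
          }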
    • svcrdma: Populate tail iovec when receiving · 193bcb7b
      Committed by Chuck Lever
      So that NFS WRITE payloads can eventually be placed directly into a
      file's page cache, enable the RPC-over-RDMA transport to present
      these payloads in the xdr_buf's page list, while placing trailing
      content (such as a GETATTR operation) in the xdr_buf's tail.
      
      After this change, the RPC-over-RDMA transport's "copy tail" hack,
      added by commit a97c331f ("svcrdma: Handle additional inline content"),
      is no longer needed and can be removed.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
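      A simplified model of the xdr_buf layout the commit describes: the
      WRITE payload sits in the page list, aligned so it can later move
      into the page cache, while trailing inline content such as an encoded
      GETATTR lands in the tail. The struct below is a trimmed-down
      illustration, not the kernel's actual struct xdr_buf.

          /* Sketch: payload in pages, trailing bytes in the tail kvec. */
          #include <stddef.h>

          struct kvec { void *iov_base; size_t iov_len; };

          struct xdr_buf_model {
                  struct kvec head;   /* transport and RPC headers */
                  void **pages;       /* WRITE payload, page-aligned */
                  size_t page_len;
                  struct kvec tail;   /* inline content after the payload */
          };

          /* Point the page list at the payload and the tail at whatever
           * follows it, instead of copying the trailing bytes ("copy
           * tail") into the head. */
          static void fill_recv_buf(struct xdr_buf_model *buf,
                                    void **payload_pages, size_t payload_len,
                                    void *trailing, size_t trailing_len)
          {
                  buf->pages = payload_pages;
                  buf->page_len = payload_len;
                  buf->tail.iov_base = trailing;
                  buf->tail.iov_len = trailing_len;
          }

          int main(void)
          {
                  static char getattr[16];     /* pretend trailing GETATTR */
                  void *pages[1] = { 0 };
                  struct xdr_buf_model buf = { 0 };

                  fill_recv_buf(&buf, pages, 4096, getattr, sizeof(getattr));
                  return buf.tail.iov_len == sizeof(getattr) ? 0 : 1;
          }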
  5. 25 Aug 2017: 2 commits
  6. 23 Aug 2017: 1 commit
  7. 19 Aug 2017: 1 commit
  8. 16 Aug 2017: 2 commits
  9. 12 Aug 2017: 5 commits
  10. 08 Aug 2017: 3 commits