1. 14 May, 2016: 2 commits
  2. 02 March, 2016: 6 commits
  3. 20 January, 2016: 2 commits
  4. 29 October, 2015: 1 commit
  5. 12 October, 2015: 1 commit
  6. 08 October, 2015: 1 commit
    • IB: split struct ib_send_wr · e622f2f4
      Committed by Christoph Hellwig
      This patch splits up struct ib_send_wr so that all non-trivial verbs
      use their own structure which embeds struct ib_send_wr.  This dramatically
      shrinks the size of a WR for the most common operations:
      
      sizeof(struct ib_send_wr) (old):	96
      
      sizeof(struct ib_send_wr):		48
      sizeof(struct ib_rdma_wr):		64
      sizeof(struct ib_atomic_wr):		96
      sizeof(struct ib_ud_wr):		88
      sizeof(struct ib_fast_reg_wr):		88
      sizeof(struct ib_bind_mw_wr):		96
      sizeof(struct ib_sig_handover_wr):	80
      
      And with Sagi's pending MR rework the fast registration WR will also be
      down to a reasonable size:
      
      sizeof(struct ib_fastreg_wr):		64
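      
      For illustration, a minimal sketch of the embedding pattern this patch
      introduces (abridged; the upstream structs carry more fields): each
      per-verb WR embeds struct ib_send_wr as its first member, and code
      holding the generic pointer recovers the containing structure with
      container_of():
      
      struct ib_rdma_wr {
              struct ib_send_wr wr;          /* embedded generic WR */
              u64               remote_addr; /* verb-specific fields follow */
              u32               rkey;
      };
      
      /* recover the RDMA WR from a generic struct ib_send_wr pointer */
      static inline struct ib_rdma_wr *rdma_wr(struct ib_send_wr *wr)
      {
              return container_of(wr, struct ib_rdma_wr, wr);
      }
      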
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt]
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc]
      Tested-by: Haggai Eran <haggaie@mellanox.com>
      Tested-by: Sagi Grimberg <sagig@mellanox.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
  7. 30 September, 2015: 1 commit
    • svcrdma: handle rdma read with a non-zero initial page offset · c91aed98
      Committed by Steve Wise
      The server rdma_read_chunk_lcl() and rdma_read_chunk_frmr() functions
      were not taking the initial page_offset into account when determining
      the rdma read length.  This resulted in a read whose starting address
      and length exceeded the base/bounds of the frmr.
      
      The server gets an async error from the rdma device and kills the
      connection, and the client then reconnects and resends.  This repeats
      indefinitely, and the application hangs.
      
      Apparently most workloads don't trigger this bug, but one test hit it
      every time: building the Linux kernel on a 16-core node with 'make -j
      16 O=/mnt/0', where /mnt/0 is a ramdisk mounted via NFSRDMA.
      
      This bug seems to be tripped only by devices with small fastreg page
      list depths.  I didn't see it with mlx4, for instance.
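      
      A sketch of the kind of fix involved (the variable names here are
      illustrative, not the literal patch): the bytes reachable through the
      mapped pages begin at *page_offset, so it must be subtracted before
      clamping the read length:
      
      /* clamp the rdma read length to what the mapping can actually hold:
       * 'pages' pages minus the offset into the first page */
      read = min_t(int, (pages << PAGE_SHIFT) - *page_offset, rs_length);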
      
      Fixes: 0bf48289 ('svcrdma: refactor marshalling logic')
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Chuck Lever <chuck.lever@oracle.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  8. 29 August, 2015: 1 commit
  9. 05 June, 2015: 1 commit
  10. 19 May, 2015: 2 commits
  11. 16 January, 2015: 8 commits
  12. 23 July, 2014: 1 commit
  13. 07 June, 2014: 2 commits
  14. 29 March, 2014: 1 commit
  15. 18 December, 2012: 1 commit
  16. 18 February, 2012: 1 commit
  17. 19 October, 2010: 2 commits
  18. 03 May, 2010: 1 commit
    • sunrpc: centralise most calls to svc_xprt_received · b48fa6b9
      Committed by Neil Brown
      svc_xprt_received must be called when ->xpo_recvfrom has finished
      receiving a message, so that the XPT_BUSY flag will be cleared and,
      if necessary, the transport requeued for further work.
      
      This call is currently made in each ->xpo_recvfrom function, often
      from multiple different points.  In each case it is the earliest point
      on a particular path where it is known that the protection provided by
      XPT_BUSY is no longer needed.
      
      However there are (still) some error paths which do not call
      svc_xprt_received, and requiring each ->xpo_recvfrom to make the call
      does not encourage robustness.
      
      So: move the svc_xprt_received call so that it is made just after the
      call to ->xpo_recvfrom(), and remove it from the various ->xpo_recvfrom
      methods.
      
      This means that it may not be called at the earliest possible instant,
      but this is unlikely to be a measurable performance issue.
      
      Note that there are still other calls to svc_xprt_received as it is
      also needed when an xprt is newly created.
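      
      A minimal sketch of the centralised call site, assuming the generic
      receive loop of that era (details simplified):
      
      /* in the generic svc_recv() path, not in each transport */
      len = xprt->xpt_ops->xpo_recvfrom(rqstp);
      
      /* the receive is now done on every path, success or error: clear
       * XPT_BUSY and requeue the transport if more work is pending */
      svc_xprt_received(xprt);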
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
  19. 30 November, 2009: 1 commit
  20. 16 June, 2009: 1 commit
  21. 26 April, 2009: 1 commit
  22. 15 December, 2008: 1 commit
  23. 07 October, 2008: 1 commit
    • svcrdma: Modify the RPC recv path to use FRMR when available · 146b6df6
      Committed by Tom Tucker
      RPCRDMA requests that specify a read-list are fetched with RDMA_READ. Using
      an FRMR to map the data sink improves NFSRDMA security on transports that
      place the RDMA_READ data sink LKEY on the wire because the valid lifetime
      of the MR is only the duration of the RDMA_READ. The LKEY is invalidated
      when the last RDMA_READ WR completes.
      
      Mapping the data sink also allows very large amounts of data to be
      fetched with a single WR, so if the client is also using FRMR, the entire
      RPC read-list can be fetched with a single WR.
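      
      A conceptual sketch of the FRMR read sequence, using the ib_send_wr
      union layout of that era; identifiers such as frmr, nents, len, sge
      and client_rkey are illustrative, and several fields (e.g. iova_start
      and send flags) are omitted for brevity:
      
      struct ib_send_wr fr_wr, read_wr, *bad_wr;
      
      /* 1. map the local data sink with a fast-register WR */
      memset(&fr_wr, 0, sizeof(fr_wr));
      fr_wr.opcode = IB_WR_FAST_REG_MR;
      fr_wr.wr.fast_reg.page_list = frmr->page_list;
      fr_wr.wr.fast_reg.page_list_len = nents;
      fr_wr.wr.fast_reg.length = len;
      fr_wr.wr.fast_reg.access_flags = IB_ACCESS_LOCAL_WRITE;
      fr_wr.wr.fast_reg.rkey = frmr->mr->lkey;
      fr_wr.next = &read_wr;
      
      /* 2. fetch the client's data; the local key is invalidated when
       * the read completes, bounding the MR's valid lifetime to the
       * duration of the RDMA_READ */
      memset(&read_wr, 0, sizeof(read_wr));
      read_wr.opcode = IB_WR_RDMA_READ_WITH_INV;
      read_wr.ex.invalidate_rkey = frmr->mr->lkey;
      read_wr.wr.rdma.rkey = client_rkey;
      read_wr.wr.rdma.remote_addr = client_addr;
      read_wr.sg_list = sge;
      read_wr.num_sge = num_sge;
      
      ret = ib_post_send(qp, &fr_wr, &bad_wr);
      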
      Signed-off-by: Tom Tucker <tom@opengridcomputing.com>