1. 15 Oct, 2016 1 commit
  2. 03 Apr, 2015 1 commit
  3. 09 Jul, 2013 1 commit
  4. 16 Feb, 2012 1 commit
  5. 11 Aug, 2011 1 commit
  6. 26 Jun, 2010 1 commit
  7. 04 Jun, 2009 1 commit
  8. 14 Mar, 2009 2 commits
  9. 16 Dec, 2008 1 commit
  10. 14 Oct, 2008 2 commits
  11. 15 Jul, 2008 1 commit
    • RDMA/cxgb3: Fixes for zero STag · 4ab928f6
      Authored by Steve Wise
      Handling the zero STag in receive work requests requires some extra
      logic in the driver:
      
       - Only set the QP_PRIV bit for kernel mode QPs.
      
      - Add a zero STag build function for recv wrs. The uP needs a PBL
        allocated and passed down in the recv WR so it can construct a HW
        PBL for the zero STag S/G entries.  Note: we need to place a few
        restrictions on zero STag usage because of this (see the sketch after
        this list):
      
        1) all SGEs in a recv WR must either be zero STag or not.  No mixing.
      
        2) an individual SGE length cannot exceed 128MB for a zero-STag SGE.
           This should be OK since it's not really practical to allocate
           such a large chunk of pinned contiguous DMA mapped memory.
      
      - Add an optimized non-zero-STag recv wr format for kernel users.
        This is needed to optimize both zero and non-zero STag cracking in
        the recv path for kernel users.
      
       - Remove the iwch_ prefix from the static build functions.
      
       - Bump required FW version.
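      
       A minimal sketch of the two zero STag restrictions above, assuming a
       zero STag SGE can be recognised by an lkey of 0; the helper name, the
       macro, and where the check would sit are assumptions for illustration,
       not the driver's actual build functions:
      
           #include <linux/errno.h>
           #include <rdma/ib_verbs.h>
      
           #define ZERO_STAG_MAX_SGE_LEN (128 * 1024 * 1024)  /* 128MB per SGE */
      
           /* Reject recv WRs that mix zero-STag and normal SGEs, or whose
            * zero-STag SGEs exceed the per-SGE length limit. */
           static int validate_zero_stag_recv(const struct ib_recv_wr *wr)
           {
                   int i, nr_zero = 0;
      
                   for (i = 0; i < wr->num_sge; i++) {
                           if (wr->sg_list[i].lkey == 0) {  /* zero STag SGE (assumption) */
                                   nr_zero++;
                                   if (wr->sg_list[i].length > ZERO_STAG_MAX_SGE_LEN)
                                           return -EINVAL;  /* restriction 2 */
                           }
                   }
      
                   /* restriction 1: all SGEs are zero STag, or none of them are */
                   if (nr_zero && nr_zero != wr->num_sge)
                           return -EINVAL;
      
                   return 0;
           }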
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
  12. 30 Apr, 2008 1 commit
    • RDMA/cxgb3: Support peer-2-peer connection setup · f8b0dfd1
      Authored by Steve Wise
      Open MPI, Intel MPI and other applications don't respect the iWARP
      requirement that the client (active) side of the connection send the
      first RDMA message.  This class of application connection setup is
      called peer-to-peer.  Typically, once the connection is set up, _both_
      sides want to send data.
      
      This patch adds support for peer-to-peer over the Chelsio RNIC by
      enforcing this iWARP requirement in the driver itself as part of RDMA
      connection setup.
      
      Connection setup is extended, when the peer2peer module option is 1,
      such that the MPA initiator will send a 0B Read (the RTR) just after
      connection setup.  The MPA responder will suspend SQ processing until
      the RTR message is received and replied to.
      
      In the longer term, this will be handled in a standardized way by
      enhancing the MPA negotiation so peers can indicate whether they
      want/need the RTR and what type of RTR (0B read, 0B write, or 0B send)
      should be sent.  This will be done by standardizing a few bits of the
      private data in order to negotiate all this.  However, this patch
      enables peer-to-peer applications now and allows most of the required
      firmware and driver changes to be done and tested now.
      
      Design:
      
       - Add a module option, peer2peer, to enable this mode.
      
       - New firmware support for peer-to-peer mode:
      
      	- a new bit in the rdma_init WR to tell it to do peer-2-peer
      	  and what form of RTR message to send or expect.
      
      	- process _all_ preposted recvs before moving the connection
      	  into rdma mode.
      
      	- passive side: defer completing the rdma_init WR until all
      	  pre-posted recvs are processed.  Suspend SQ processing until
      	  the RTR is received.
      
      	- active side: expect and process the 0B read WR on offload TX
      	  queue. Defer completing the rdma_init WR until all
      	  pre-posted recvs are processed.  Suspend SQ processing until
      	  the 0B read WR is processed from the offload TX queue.
      
       - If peer2peer is set, driver posts a 0B read request on the offload TX
         queue just after posting the rdma_init WR to the offload TX queue
         (see the sketch after this list).
      
       - Add CQ poll logic to ignore unsolicited read responses.
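      
       A minimal sketch of the active-side flow above.  The peer2peer module
       parameter mirrors the design note; the endpoint struct and the
       post_rdma_init()/post_zb_read() helpers are hypothetical stand-ins for
       the driver's real offload TX queue machinery:
      
           #include <linux/module.h>
      
           static int peer2peer;
           module_param(peer2peer, int, 0644);
           MODULE_PARM_DESC(peer2peer, "send/expect a 0B read RTR at connection setup (default=0)");
      
           struct p2p_ep { int unused; };  /* placeholder for the real endpoint */
      
           static int post_rdma_init(struct p2p_ep *ep, int p2p_mode)
           {
                   /* real driver: build the rdma_init WR, setting the new
                    * peer-2-peer bit and RTR type when p2p_mode is set */
                   return 0;
           }
      
           static int post_zb_read(struct p2p_ep *ep)
           {
                   /* real driver: build a 0B read request WR and post it on
                    * the offload TX queue */
                   return 0;
           }
      
           /* Active side: post the rdma_init WR, then the RTR (a 0B read). */
           static int rdma_init_and_rtr(struct p2p_ep *ep)
           {
                   int err = post_rdma_init(ep, peer2peer);
      
                   if (!err && peer2peer)
                           err = post_zb_read(ep);
      
                   return err;
           }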
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  13. 29 Jan, 2008 1 commit
  14. 11 Oct, 2007 1 commit
  15. 10 Jul, 2007 1 commit
    • RDMA/cxgb3: Streaming -> RDMA mode transition fixes · de3d3530
      Authored by Steve Wise
      Due to a HW issue, our current scheme to transition the connection from
      streaming to rdma mode is broken on the passive side.  The firmware
      and driver now support a new transition scheme for the passive side:
      
       - driver posts rdma_init_wr (now including the initial receive seqno)
       - driver posts last streaming message via TX_DATA message (MPA start
         response)
       - uP atomically sends the last streaming message and transitions the
         tcb to rdma mode.
       - driver waits for wr_ack indicating the last streaming message was
         ACKed (see the sketch below).
      
      NOTE: This change also bumps the required firmware version to 4.3.
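      
       A minimal sketch of the passive-side ordering above; every name here is
       hypothetical, and the real driver runs this from its connection state
       machine rather than a blocking wait:
      
           #include <linux/completion.h>
           #include <linux/types.h>
      
           struct xfer_ep {
                   /* initialised at endpoint setup, completed by the wr_ack handler */
                   struct completion last_streaming_acked;
           };
      
           static int post_rdma_init_wr(struct xfer_ep *ep, u32 initial_rcv_seqno)
           {
                   /* real driver: build the rdma_init WR, now carrying the
                    * initial receive sequence number, and post it to the uP */
                   return 0;
           }
      
           static int post_mpa_start_response(struct xfer_ep *ep)
           {
                   /* real driver: send the last streaming-mode message (the MPA
                    * start response) as a TX_DATA message; the uP sends it and
                    * atomically moves the tcb into rdma mode */
                   return 0;
           }
      
           static int passive_streaming_to_rdma(struct xfer_ep *ep, u32 rcv_seqno)
           {
                   int err = post_rdma_init_wr(ep, rcv_seqno);
      
                   if (!err)
                           err = post_mpa_start_response(ep);
                   if (err)
                           return err;
      
                   /* wait for the wr_ack showing the last streaming message was ACKed */
                   wait_for_completion(&ep->last_streaming_acked);
                   return 0;
           }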
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  16. 09 Jul, 2007 1 commit
  17. 07 May, 2007 1 commit
  18. 04 Apr, 2007 1 commit
  19. 03 Mar, 2007 1 commit
  20. 27 Feb, 2007 1 commit
  21. 06 Feb, 2007 2 commits