1. 23 Sep 2006, 1 commit
      IB/mad: Add support for dual-sided RMPP transfers. · 75ab1344
      Committed by Sean Hefty
      The implementation assumes that any RMPP request that requires a
      response uses DS RMPP.  Based on the RMPP start-up scenarios defined
      by the spec, this should be a valid assumption.  That is, there is no
      start-up scenario defined where an RMPP request is followed by a
      non-RMPP response.  By having this assumption we avoid any API
      changes.
      
      In order for a node that supports DS RMPP to communicate with one that
      does not, RMPP responses assume a new window size of 1 if a DS ACK has
      not been received.  (By DS ACK, I'm referring to the turn-around ACK
      after the final ACK of the request.)  This is a slight spec deviation,
      but is necessary to allow communication with nodes that do not
      generate the DS ACK.  It also handles the case when a response is sent
      after the request state has been discarded.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  2. 27 Jun 2006, 1 commit
  3. 13 May 2006, 1 commit
      IB: refcount race fixes · 1b52fa98
      Committed by Sean Hefty
      Fix race condition during destruction calls to avoid possibility of
      accessing object after it has been freed.  Instead of waking up a wait
      queue directly, which is susceptible to a race where the object is
      freed between the reference count going to 0 and the wake_up(), use a
      completion to wait in the function doing the freeing.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  4. 30 Mar 2006, 2 commits
  5. 21 Mar 2006, 1 commit
      IB/umad: Add support for large RMPP transfers · f36e1793
      Committed by Jack Morgenstein
      Add support for sending and receiving large RMPP transfers.  The old
      code supports transfers only as large as a single contiguous kernel
      memory allocation.  This patch uses a linked list of memory buffers
      when sending and receiving data, avoiding the need for contiguous
      pages on larger transfers.
      
        Receive side: copy the arriving MADs in chunks instead of coalescing
        to one large buffer in kernel space.
      
        Send side: split a multipacket MAD buffer into a list of segments
        (multipacket_list) and send these using a gather list of size 2.
        Also, save pointer to last sent segment, and retrieve requested
        segments by walking list starting at last sent segment. Finally,
        save pointer to last-acked segment.  When retrying, retrieve
        segments for resending relative to this pointer.  When updating last
        ack, start at this pointer.
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  6. 26 Oct 2005, 2 commits
      [IB] simplify mad_rmpp.c:alloc_response_msg() · 7cc656ef
      Committed by Roland Dreier
      Change alloc_response_msg() in mad_rmpp.c to return the struct
      it allocates directly (or an error code a la ERR_PTR), rather than
      returning a status and passing the struct back in a pointer param.
      This simplifies the code and gets rid of warnings like
      
          drivers/infiniband/core/mad_rmpp.c: In function 'nack_recv':
          drivers/infiniband/core/mad_rmpp.c:192: warning: 'msg' may be used uninitialized in this function
      
      with newer versions of gcc.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Committed by Sean Hefty
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  7. 22 Sep 2005, 2 commits
  8. 08 Sep 2005, 1 commit
      [PATCH] IB: RMPP fixes · b5dcbf47
      Committed by Hal Rosenstock
      - Fix payload length of middle RMPP sent segments. Middle payload
        lengths should be 0 on the send side.
      
        (This is perhaps a compliance issue, and should not be an interop
        issue, as middle payload lengths are supposed to be ignored on
        receive.)
      
      - Fix length in first segment of multipacket sends
      
        (This is a compliance issue but does not affect at least
        OpenIB-to-OpenIB RMPP transfers.)
      Signed-off-by: Hal Rosenstock <halr@voltaire.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  9. 27 Aug 2005, 2 commits
  10. 28 Jul 2005, 1 commit