1. 23 Sep, 2006 (1 commit)
  2. 18 Jun, 2006 (1 commit)
  3. 13 May, 2006 (1 commit)
    • IB: refcount race fixes · 1b52fa98
      Authored by Sean Hefty
      Fix race condition during destruction calls to avoid possibility of
      accessing object after it has been freed.  Instead of waking up a wait
      queue directly, which is susceptible to a race where the object is
      freed between the reference count going to 0 and the wake_up(), use a
      completion to wait in the function doing the freeing.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
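
      The fix's pattern is worth a sketch. Below is a minimal, hypothetical
      illustration (struct obj, obj_destroy() and friends are invented
      names, not the actual IB core code): the kref release callback only
      signals a completion, and the thread tearing the object down waits
      on that completion before freeing, so the free can never race the
      final reference drop.

      #include <linux/completion.h>
      #include <linux/kref.h>
      #include <linux/slab.h>

      /* Hypothetical refcounted object, not the actual IB struct. */
      struct obj {
              struct kref ref;
              struct completion released;
      };

      static struct obj *obj_alloc(void)
      {
              struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

              if (o) {
                      kref_init(&o->ref);
                      init_completion(&o->released);
              }
              return o;
      }

      /* Runs when the last reference drops: signal only, never free. */
      static void obj_release(struct kref *ref)
      {
              struct obj *o = container_of(ref, struct obj, ref);

              complete(&o->released);
      }

      /* Drop our reference, then sleep until every other holder has
       * dropped theirs.  Waking a wait queue instead can race: the
       * object may be freed between the count reaching 0 and the
       * wake_up().  wait_for_completion() closes that window. */
      static void obj_destroy(struct obj *o)
      {
              kref_put(&o->ref, obj_release);
              wait_for_completion(&o->released);
              kfree(o);
      }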
  4. 30 Mar, 2006 (1 commit)
  5. 21 Mar, 2006 (1 commit)
    • IB/umad: Add support for large RMPP transfers · f36e1793
      Authored by Jack Morgenstein
      Add support for sending and receiving large RMPP transfers.  The old
      code supports transfers only as large as a single contiguous kernel
      memory allocation.  This patch uses a linked list of memory buffers
      when sending and receiving data, avoiding the need for contiguous
      pages on larger transfers.
      
        Receive side: copy the arriving MADs in chunks instead of coalescing
        to one large buffer in kernel space.
      
        Send side: split a multipacket MAD buffer into a list of segments
        (multipacket_list) and send these using a gather list of size 2.
        Also, save a pointer to the last sent segment, and retrieve
        requested segments by walking the list starting at that segment.
        Finally, save a pointer to the last-acked segment; when retrying,
        retrieve segments for resending relative to this pointer, and when
        updating the last ack, start the walk there.
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
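
      A hedged sketch of the send-side bookkeeping described above, using
      invented names (mad_segment, send_state, find_segment(); the real
      structures live in the kernel's MAD code): segments sit on an
      ordered list, and two cursors (last sent, last acked) let lookups
      walk forward from a known position instead of rescanning from the
      head on every send or retry.

      #include <linux/list.h>
      #include <linux/types.h>

      /* Hypothetical segment buffer; in the real scheme one gather entry
       * carries the MAD header and a second carries the segment data. */
      struct mad_segment {
              struct list_head list;
              int num;                /* 1-based RMPP segment number */
              u8 data[256];           /* this segment's payload chunk */
      };

      struct send_state {
              struct list_head segments;      /* all segments, in order */
              struct mad_segment *last_sent;  /* cursor for normal sends */
              struct mad_segment *last_ack;   /* resends restart here */
      };

      /* Find segment 'num' by walking forward from a cursor, which must
       * point at a live entry.  Callers pass state->last_sent when
       * fetching the next segment to send, and state->last_ack when
       * resending after a retry or advancing the acked mark. */
      static struct mad_segment *find_segment(struct send_state *state,
                                              struct mad_segment *cursor,
                                              int num)
      {
              struct mad_segment *seg = cursor;

              list_for_each_entry_from(seg, &state->segments, list)
                      if (seg->num == num)
                              return seg;
              return NULL;
      }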
  6. 26 Oct, 2005 (1 commit)
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Authored by Sean Hefty
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
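
      The discipline this fix enforces is general DMA API practice, and a
      minimal sketch makes the ordering concrete.  build_payload() and
      post_send() are hypothetical placeholders for the consumer's fill
      code and the hardware doorbell; only the map/touch ordering is the
      point.

      #include <linux/dma-mapping.h>
      #include <linux/errno.h>

      void build_payload(void *buf, size_t len);    /* hypothetical */
      void post_send(dma_addr_t addr, size_t len);  /* hypothetical */

      static int map_and_send(struct device *dev, void *buf, size_t len)
      {
              dma_addr_t addr;

              /* 1. Finish every CPU write to the payload first ... */
              build_payload(buf, len);

              /* 2. ... because on a non-cache-coherent machine the
               * mapping step is what flushes the CPU cache; writes made
               * after this point are invisible to the device. */
              addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
              if (dma_mapping_error(dev, addr))
                      return -ENOMEM;

              /* 3. Hand the bus address to the hardware.  The buffer is
               * off limits until dma_unmap_single() at send completion. */
              post_send(addr, len);
              return 0;
      }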
  7. 27 Aug, 2005 (2 commits)
  8. 28 Jul, 2005 (5 commits)
  9. 17 Apr, 2005 (2 commits)