  1. 25 Jul, 2006 (1 commit)
  2. 30 Mar, 2006 (1 commit)
  3. 21 Mar, 2006 (1 commit)
    • IB/umad: Add support for large RMPP transfers · f36e1793
      Jack Morgenstein committed
      Add support for sending and receiving large RMPP transfers.  The old
      code supports transfers only as large as a single contiguous kernel
      memory allocation.  This patch uses a linked list of memory buffers when
      sending and receiving data to avoid needing contiguous pages for
      larger transfers.
      
        Receive side: copy the arriving MADs in chunks instead of coalescing
        them into one large buffer in kernel space.
      
        Send side: split a multipacket MAD buffer into a list of segments
        (multipacket_list) and send these using a gather list of size 2.
        Also, save a pointer to the last sent segment, and retrieve
        requested segments by walking the list starting at that segment.
        Finally, save a pointer to the last-acked segment; when retrying,
        retrieve the segments to resend relative to this pointer, and when
        updating the last ack, start the walk there.  (A sketch of this
        layout follows the entry.)
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
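      A minimal C sketch of the send-side layout this describes, for
      orientation only: apart from multipacket_list, which the message
      itself names, the struct and field names below are illustrative
      assumptions, not necessarily those in the actual patch.

          #include <linux/list.h>
          #include <linux/types.h>

          /* One chunk of a multipacket (RMPP) MAD.  The payload for a
           * segment lives directly after this header in one allocation,
           * so no single allocation has to cover the whole transfer. */
          struct rmpp_segment {
                  struct list_head list;  /* links into multipacket_list */
                  u32 num;                /* RMPP segment number, 1-based */
                  u8 data[0];             /* this segment's payload chunk */
          };

          /* Per-send state: the segment list plus the two resume points
           * the message describes (last sent and last acked). */
          struct mad_send_state {
                  struct list_head multipacket_list;  /* all segments */
                  struct rmpp_segment *cur_seg;       /* last segment sent */
                  struct rmpp_segment *last_ack_seg;  /* last segment acked */
          };

      Sending a segment would then post a gather list of two entries: one
      for the common MAD header, one for that segment's data chunk.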
  4. 10 Dec, 2005 (1 commit)
  5. 29 Nov, 2005 (1 commit)
  6. 19 Nov, 2005 (1 commit)
    • IB/umad: make sure write()s have sufficient data · eabc7793
      Roland Dreier committed
      Make sure that userspace passes in enough data when sending a MAD.  We
      always copy at least sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR
      bytes from userspace, so anything less is definitely invalid.  Also,
      if the length is less than this limit, it's possible for the second
      copy_from_user() to get a negative length and trigger a BUG().
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
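      A sketch of the check this describes, with the surrounding function
      body abbreviated; the exact error value is an assumption (-EINVAL
      being the conventional choice for a malformed write):

          static ssize_t ib_umad_write(struct file *filp,
                                       const char __user *buf,
                                       size_t count, loff_t *pos)
          {
                  /* Anything shorter than the userspace MAD header plus
                   * the RMPP header cannot be a valid send.  Rejecting it
                   * here also keeps the length of the second
                   * copy_from_user(), computed by subtracting those
                   * header sizes from count, from going negative. */
                  if (count < sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR)
                          return -EINVAL;

                  /* ... copy the header, then the payload, from buf ... */

                  return count;
          }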
  7. 11 Nov, 2005 (3 commits)
  8. 07 Nov, 2005 (1 commit)
  9. 04 Nov, 2005 (1 commit)
  10. 29 Oct, 2005 (2 commits)
  11. 28 Oct, 2005 (2 commits)
  12. 26 Oct, 2005 (1 commit)
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Sean Hefty committed
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
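      A sketch of the consumer-side pattern after this change.  The call
      shapes match my understanding of the MAD API of this era, but treat
      them as assumptions; agent, remote_qpn, pkey_index, hdr_len,
      data_len, and out_mad are placeholders the caller would supply.

          struct ib_mad_send_buf *msg;
          int ret;

          /* Allocate the send buffer through the MAD layer instead of
           * rolling our own DMA-mapped buffer. */
          msg = ib_create_send_mad(agent, remote_qpn, pkey_index,
                                   0 /* rmpp_active */, hdr_len, data_len,
                                   GFP_KERNEL);
          if (IS_ERR(msg))
                  return PTR_ERR(msg);

          /* Build the MAD while the buffer is still CPU-owned; the MAD
           * layer maps it for DMA only when the send is posted, so we
           * must not touch msg->mad after ib_post_send_mad() succeeds. */
          memcpy(msg->mad, out_mad, hdr_len + data_len);

          ret = ib_post_send_mad(msg, NULL);
          if (ret)
                  ib_free_send_mad(msg);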
  13. 21 Oct, 2005 (2 commits)
  14. 20 Sep, 2005 (1 commit)
  15. 27 Aug, 2005 (3 commits)
  16. 28 Jul, 2005 (2 commits)
  17. 26 May, 2005 (1 commit)
  18. 17 Apr, 2005 (2 commits)