1. 04 Nov 2005, 2 commits
  2. 03 Nov 2005, 4 commits
  3. 02 Nov 2005, 1 commit
  4. 31 Oct 2005, 4 commits
    • [IPoIB] cleanups: fix comment, remove useless variables · 3bc12e75
      Roland Dreier authored
      Minor cleanups: fix a misleading comment, and get rid of attr_mask
      variables that are only used to hold constants (just use the constants
      directly).
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
    • [IB] mthca: Avoid SRQ free WQE list corruption · e5b251a2
      Roland Dreier authored
      Fix wqe_to_link() to use a structure field that we know is definitely
      always unused for receive work requests, so that it really avoids the
      free list corruption bug that the comment claims it does.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
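The trick this commit repairs can be sketched in userspace C. SRQ implementations commonly thread the free list through the WQEs themselves, so the link pointer must live in a field the hardware is guaranteed never to touch for receive requests; if it overlaps a field the hardware writes, a completing receive corrupts the list. The struct and field names below are purely illustrative, not mthca's actual layout:

```c
#include <assert.h>

/* Hypothetical receive WQE layout; field names are illustrative only. */
struct rwqe {
    int hw_owned_flags;   /* written by "hardware" when the WQE completes */
    int reserved;         /* never touched for receive work requests */
};

#define NWQE 4
static struct rwqe wq[NWQE];
static int free_head = -1;

/* Thread the free list through a field that receives definitely never use. */
static int *wqe_to_link(int i) { return &wq[i].reserved; }

static void wqe_free(int i)
{
    *wqe_to_link(i) = free_head;
    free_head = i;
}

static int wqe_alloc(void)
{
    int i = free_head;
    if (i >= 0)
        free_head = *wqe_to_link(i);
    return i;  /* -1 when the free list is empty */
}
```

The fix amounts to choosing the right field for `wqe_to_link()`; the list mechanics themselves stay the same.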
    • [IB] uverbs: Avoid NULL pointer deref on CQ async event · 7162a3e0
      Roland Dreier authored
      Userspace CQs that have no completion event channel attached end up
      with their cq_context set to NULL.  However, asynchronous events like
      "CQ overrun" can still occur on such CQs, so add a uverbs_file member
      to struct ib_ucq_object that we can follow to deliver these events.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
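A minimal sketch of the failure mode and the fix, using simplified structs (these are not the exact uverbs definitions): when `cq_context` doubles as the route to the events file, a CQ with no completion channel has nothing to follow; a dedicated, always-valid pointer removes the NULL case.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the uverbs structures. */
struct uverbs_file { int events_delivered; };

struct ucq_object {
    void *cq_context;            /* NULL when no completion channel attached */
    struct uverbs_file *file;    /* always valid: used for async events */
};

/* Old scheme: follow cq_context, which may be NULL (event lost / oops). */
static int deliver_async_via_context(struct ucq_object *cq)
{
    if (!cq->cq_context)
        return -1;
    ((struct uverbs_file *)cq->cq_context)->events_delivered++;
    return 0;
}

/* Fixed scheme: follow the dedicated uverbs_file member instead. */
static int deliver_async_via_file(struct ucq_object *cq)
{
    cq->file->events_delivered++;
    return 0;
}
```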
    • [PATCH] fix missing includes · 4e57b681
      Tim Schmielau authored
      I recently picked up my older work to remove unnecessary #includes of
      sched.h, starting from a patch by Dave Jones to not include sched.h
      from module.h. This reduces the number of indirect includes of sched.h
      by ~300. Another ~400 pointless direct includes can be removed after
      this disentangling (patch to follow later).
      However, quite a few indirect includes need to be fixed up for this.
      
      In order to feed the patches through -mm with as little disturbance as
      possible, I've split out the fixes I accumulated up to now (complete for
      i386 and x86_64, more archs to follow later) and post them before the real
      patch.  This way this large part of the patch is kept simple with only
      adding #includes, and all hunks are independent of each other.  So if any
      hunk rejects or gets in the way of other patches, just drop it.  My scripts
      will pick it up again in the next round.
      Signed-off-by: Tim Schmielau <tim@physik3.uni-rostock.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 30 Oct 2005, 2 commits
  6. 29 Oct 2005, 6 commits
  7. 28 Oct 2005, 4 commits
  8. 26 Oct 2005, 3 commits
    • [IB] simplify mad_rmpp.c:alloc_response_msg() · 7cc656ef
      Roland Dreier authored
      Change alloc_response_msg() in mad_rmpp.c to return the struct
      it allocates directly (or an error code a la ERR_PTR), rather than
      returning a status and passing the struct back in a pointer param.
      This simplifies the code and gets rid of warnings like
      
          drivers/infiniband/core/mad_rmpp.c: In function 'nack_recv':
          drivers/infiniband/core/mad_rmpp.c:192: warning: 'msg' may be used uninitialized in this function
      
      with newer versions of gcc.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
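The "return the pointer or an error code a la ERR_PTR" idiom the commit adopts can be shown in a self-contained userspace sketch (the `ERR_PTR`/`IS_ERR`/`PTR_ERR` helpers below re-implement the kernel's `<linux/err.h>` macros; `alloc_msg` is a hypothetical stand-in for `alloc_response_msg`):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Userspace re-implementation of the kernel's ERR_PTR helpers:
 * errno values are encoded in the top 4095 addresses of pointer space. */
static inline void *ERR_PTR(long err)     { return (void *)(intptr_t)err; }
static inline long  PTR_ERR(const void *p){ return (long)(intptr_t)p; }
static inline int   IS_ERR(const void *p) { return (uintptr_t)p >= (uintptr_t)-4095; }

struct msg { int len; };

/* Return the allocation directly, or an encoded errno, instead of an
 * int status plus an out-parameter (the style the commit removes).
 * The caller's local pointer is now assigned on every path, so gcc can
 * no longer warn that it "may be used uninitialized". */
static struct msg *alloc_msg(int len)
{
    struct msg *m;

    if (len <= 0)
        return ERR_PTR(-EINVAL);
    m = malloc(sizeof(*m));
    if (!m)
        return ERR_PTR(-ENOMEM);
    m->len = len;
    return m;
}
```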
    • [IB] mthca: correct modify QP attribute masks for UC · 547e3090
      Roland Dreier authored
      The UC transport does not support RDMA reads or atomic operations, so
      we shouldn't require or even allow the consumer to set attributes
      relating to these operations for UC QPs.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
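The shape of such a check is a per-transport table of permitted attribute-mask bits, with UC omitting the RDMA-read/atomic ones. The bit names and values below are made up for the sketch (they are not the real `IB_QP_*` enum):

```c
/* Illustrative attribute-mask bits; values are hypothetical. */
enum {
    QP_ATTR_PATH_MTU           = 1 << 0,
    QP_ATTR_RQ_PSN             = 1 << 1,
    QP_ATTR_MAX_DEST_RD_ATOMIC = 1 << 2,  /* RDMA-read/atomic related */
    QP_ATTR_MIN_RNR_TIMER      = 1 << 3,  /* RC-only */
};

enum transport { TRANSPORT_RC, TRANSPORT_UC };

/* Attributes a consumer may set, per transport: UC lacks RDMA reads
 * and atomics, so those bits are simply absent from its mask. */
static int allowed_mask(enum transport t)
{
    switch (t) {
    case TRANSPORT_RC:
        return QP_ATTR_PATH_MTU | QP_ATTR_RQ_PSN |
               QP_ATTR_MAX_DEST_RD_ATOMIC | QP_ATTR_MIN_RNR_TIMER;
    case TRANSPORT_UC:
        return QP_ATTR_PATH_MTU | QP_ATTR_RQ_PSN;
    }
    return 0;
}

/* Reject a modify-QP call that sets attributes the transport lacks. */
static int modify_qp_check(enum transport t, int attr_mask)
{
    return (attr_mask & ~allowed_mask(t)) ? -1 : 0;
}
```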
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Sean Hefty authored
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
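The ordering rule can be illustrated with a toy model of a non-cache-coherent system: "mapping" a buffer snapshots the CPU's view into the device-visible copy, so any CPU write after the map is invisible to the device. This is a simulation of the concept, not the kernel DMA API:

```c
#include <string.h>

#define BUFSZ 16
static char cpu_buf[BUFSZ];      /* what the CPU (cache) sees */
static char device_view[BUFSZ];  /* what the DMA device sees  */

/* Toy dma_map: flush the CPU view to the device-visible copy. */
static void dma_map(void)
{
    memcpy(device_view, cpu_buf, BUFSZ);
}

/* Buggy order (what the old MAD layer effectively did): map the send
 * buffer, then keep modifying it.  The device never sees the update. */
static void send_buggy(const char *payload)
{
    cpu_buf[0] = '\0';
    dma_map();
    strcpy(cpu_buf, payload);    /* too late: already mapped */
}

/* Fixed order: finish all modifications, then map just before the send. */
static void send_fixed(const char *payload)
{
    strcpy(cpu_buf, payload);
    dma_map();
}
```

On a cache-coherent machine both orders happen to work, which is why the bug only showed up on hardware like the PowerPC 440SPe mentioned above.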
  9. 25 Oct 2005, 2 commits
  10. 24 Oct 2005, 1 commit
  11. 23 Oct 2005, 1 commit
  12. 21 Oct 2005, 3 commits
  13. 19 Oct 2005, 5 commits
  14. 18 Oct 2005, 2 commits