1. 23 Sep 2006, 11 commits
  2. 01 Sep 2006, 1 commit
    • IB/mthca: Use IRQ safe locks to protect allocation bitmaps · 5a4e6dcc
      Roland Dreier authored
      It is supposed to be OK to call mthca_create_ah() and mthca_destroy_ah()
      from any context.  However, for mem-full HCAs, these functions use the
      mthca_alloc() and mthca_free() bitmap helpers, and those helpers use
      non-IRQ-safe spin_lock() internally.  Lockdep correctly warns that
      this could lead to a deadlock.  Fix this by changing mthca_alloc() and
      mthca_free() to use spin_lock_irqsave().
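      The change is essentially the one sketched below: take the bitmap lock
      with interrupts disabled.  (This is a simplified sketch of mthca_alloc();
      it omits the top/mask bookkeeping of the real helper, and mthca_free()
      gets the same spin_lock_irqsave() treatment.)
      
      	u32 mthca_alloc(struct mthca_alloc *alloc)
      	{
      		unsigned long flags;
      		u32 obj;
      
      		spin_lock_irqsave(&alloc->lock, flags);		/* was spin_lock() */
      
      		obj = find_next_zero_bit(alloc->table, alloc->max, alloc->last);
      		if (obj >= alloc->max)
      			obj = find_first_zero_bit(alloc->table, alloc->max);
      
      		if (obj < alloc->max)
      			set_bit(obj, alloc->table);
      		else
      			obj = -1;
      
      		spin_unlock_irqrestore(&alloc->lock, flags);	/* was spin_unlock() */
      
      		return obj;
      	}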
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      5a4e6dcc
  3. 24 Aug 2006, 1 commit
  4. 19 Aug 2006, 1 commit
    • IB/mthca: No userspace SRQs if HCA doesn't have SRQ support · 5beba532
      Roland Dreier authored
      Leave all SRQ methods out of the device's uverbs_cmd_mask if the
      device doesn't have SRQ support (because of ancient firmware) so that
      we don't allow userspace to call the driver's create_srq method.  This
      fixes a userspace-triggerable oops caused by ib_uverbs_create_srq()
      following the device's ->create_srq function pointer, which will be
      NULL if the device doesn't support SRQs.
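      Concretely, the SRQ verbs are only added to uverbs_cmd_mask when the HCA
      actually supports SRQs, roughly along the lines of the sketch below (the
      MTHCA_FLAG_SRQ test mirrors how the rest of the driver gates its SRQ
      code):
      
      	if (dev->mthca_flags & MTHCA_FLAG_SRQ)
      		dev->ib_dev.uverbs_cmd_mask |=
      			(1ull << IB_USER_VERBS_CMD_CREATE_SRQ)  |
      			(1ull << IB_USER_VERBS_CMD_MODIFY_SRQ)  |
      			(1ull << IB_USER_VERBS_CMD_QUERY_SRQ)   |
      			(1ull << IB_USER_VERBS_CMD_DESTROY_SRQ);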
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      5beba532
  5. 11 Aug 2006, 2 commits
  6. 04 Aug 2006, 2 commits
  7. 25 Jul 2006, 1 commit
  8. 24 Jul 2006, 1 commit
  9. 15 Jul 2006, 2 commits
  10. 05 Jul 2006, 1 commit
    • [PATCH] mthca: initialize send and receive queue locks separately · a46f9484
      Zach Brown authored
      
      lockdep identifies a lock by the call site of its initialization.  By
      initializing the send and receive queue locks in mthca_wq_init() we confuse
      lockdep.  It warns that the ordered acquisition of both locks in
      mthca_modify_qp() is recursive acquisition of a single lock:
      
        =============================================
        [ INFO: possible recursive locking detected ]
        ---------------------------------------------
        modprobe/1192 is trying to acquire lock:
         (&wq->lock){....}, at: [<f892b4db>] mthca_modify_qp+0x60/0xa7b [ib_mthca]
        but task is already holding lock:
         (&wq->lock){....}, at: [<f892b4ce>] mthca_modify_qp+0x53/0xa7b [ib_mthca]
      
      Initializing the locks separately in mthca_alloc_qp_common() stops the
      warning and will let lockdep enforce proper ordering on paths that acquire
      both locks.
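      The fix amounts to moving the spin_lock_init() calls out of the shared
      helper, along the lines of the sketch below (surrounding setup code
      elided):
      
      	/* before: one shared init site gives both locks the same lockdep class */
      	static void mthca_wq_init(struct mthca_wq *wq)
      	{
      		spin_lock_init(&wq->lock);	/* removed by this patch */
      		/* ... head/tail index resets unchanged ... */
      	}
      
      	/* after: separate init sites in mthca_alloc_qp_common(), so lockdep
      	 * sees qp->sq.lock and qp->rq.lock as distinct lock classes */
      	spin_lock_init(&qp->sq.lock);
      	spin_lock_init(&qp->rq.lock);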
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
      Cc: Roland Dreier <rolandd@cisco.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a46f9484
  11. 03 Jul 2006, 1 commit
  12. 01 Jul 2006, 1 commit
  13. 28 Jun 2006, 1 commit
  14. 18 Jun 2006, 8 commits
  15. 25 May 2006, 1 commit
  16. 19 May 2006, 1 commit
  17. 17 May 2006, 1 commit
    • IB/mthca: Make fw_cmd_doorbell default to 0 · 1db76c14
      Roland Dreier authored
      Setting fw_cmd_doorbell allows FW commands to be queued using posted
      writes instead of requiring polling on a "go" bit, so it should be a
      performance boost.  However, the option causes problems with at least
      some device/firmware combinations, so set the default to 0 until we
      understand what's going on better.
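      For reference, the knob is a module parameter, so the change amounts to
      flipping its initial value (a sketch; the description string here is
      paraphrased):
      
      	static int fw_cmd_doorbell = 0;		/* was 1 */
      	module_param(fw_cmd_doorbell, int, 0644);
      	MODULE_PARM_DESC(fw_cmd_doorbell,
      			 "Post FW commands through doorbell page (default 0: "
      			 "write the command register and poll the go bit)");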
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      1db76c14
  18. 11 May 2006, 1 commit
  19. 10 May 2006, 1 commit
    • IB/mthca: Fix race in reference counting · a3285aa4
      Roland Dreier authored
      Fix races in destroying various objects.  If a destroy routine
      waits for an object to become free by doing
      
      	wait_event(obj->wait, !atomic_read(&obj->refcount));
      	/* now clean up and destroy the object */
      
      and another place drops a reference to the object by doing
      
      	if (atomic_dec_and_test(&obj->refcount))
      		wake_up(&obj->wait);
      
      then this is susceptible to a race where the wait_event() and final
      freeing of the object occur between the atomic_dec_and_test() and the
      wake_up().  And this is a use-after-free, since wake_up() will be
      called on part of the already-freed object.
      
      Fix this in mthca by replacing the atomic_t refcounts with plain old
      integers protected by a spinlock.  This makes it possible to do the
      decrement of the reference count and the wake_up() so that it appears
      as a single atomic operation to the code waiting on the wait queue.
      
      While touching this code, also simplify mthca_cq_clean(): the CQ being
      cleaned cannot go away, because it still has a QP attached to it.  So
      there's no reason to be paranoid and look up the CQ by number; it's
      perfectly safe to use the pointer that the callers already have.
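      The resulting pattern looks roughly like the sketch below (the driver
      wraps the locked read in a small per-object-type helper; the names here
      are illustrative, not the driver's actual identifiers):
      
      	/* dropping a reference: decrement and wake_up appear atomic to waiters */
      	spin_lock_irq(&table->lock);
      	if (!--obj->refcount)
      		wake_up(&obj->wait);
      	spin_unlock_irq(&table->lock);
      
      	/* waiting for the last reference: read the count under the same lock */
      	static int get_refcount(struct obj_table *table, struct obj *obj)
      	{
      		int c;
      
      		spin_lock_irq(&table->lock);
      		c = obj->refcount;
      		spin_unlock_irq(&table->lock);
      
      		return c;
      	}
      
      	wait_event(obj->wait, !get_refcount(table, obj));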
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      a3285aa4
  20. 02 May 2006, 1 commit