1. 01 Oct 2012, 1 commit
  2. 20 Jul 2012, 1 commit
    • IB/qib: Reduce sdma_lock contention · 551ace12
      Authored by Mike Marciniszyn
      Profiling has shown that the sdma_lock has become a performance
      bottleneck. The situations include:
       - RDMA reads when krcvqs > 1
       - post sends from multiple threads
      
      For RDMA reads, the current global qib_wq mechanism runs on all CPUs
      and contends for the sdma_lock when multiple RDMA read requests are
      fielded on different CPUs. For post sends, the direct call to
      qib_do_send() from multiple threads causes the contention.
      
      Since the sdma mechanism is per port, this fix converts the existing
      workqueue to a per-port single-threaded workqueue, reducing the lock
      contention in the RDMA read case and in any other case where the QP
      is scheduled via the workqueue mechanism from more than one CPU.
      
      For the post send case, this patch modifies the post send code to test
      for a non-empty sdma engine.  If the sdma engine is not idle, the (now
      single-threaded) workqueue is used to trigger the send engine instead
      of the direct call to qib_do_send(); a sketch of this pattern follows
      this entry.
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
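      Below is a minimal sketch of the per-port ordered (single-threaded)
      workqueue pattern this commit describes, assuming hypothetical names
      (my_port, my_send_worker, my_post_send) rather than the qib driver's
      actual identifiers; the post-send fallback is a simplification of the
      real path.

```c
/*
 * Sketch only: one ordered workqueue per port serializes send-engine
 * work so at most one CPU at a time contends for that port's sdma_lock.
 * All identifiers here are illustrative, not the actual qib symbols.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_port {
	struct workqueue_struct *wq;	/* one ordered workqueue per port */
	struct work_struct send_work;
};

static void my_send_worker(struct work_struct *work)
{
	struct my_port *port = container_of(work, struct my_port, send_work);

	/* Drive the send engine here (taking sdma_lock as needed). */
	(void)port;
}

static int my_port_init(struct my_port *port, int port_num)
{
	/* An ordered workqueue executes at most one work item at a time. */
	port->wq = alloc_ordered_workqueue("my_port%d", 0, port_num);
	if (!port->wq)
		return -ENOMEM;
	INIT_WORK(&port->send_work, my_send_worker);
	return 0;
}

/* Post-send path: call directly when idle, defer to the queue when busy. */
static void my_post_send(struct my_port *port, bool sdma_idle)
{
	if (sdma_idle)
		my_send_worker(&port->send_work);	/* direct call */
	else
		queue_work(port->wq, &port->send_work);
}
```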
  3. 18 Jul 2012, 1 commit
  4. 09 Jul 2012, 2 commits
    • IB/qib: RCU locking for MR validation · 8aac4cc3
      Authored by Mike Marciniszyn
      Profiling indicates that MR validation locking is expensive.  The MR
      table is largely read-only and is a suitable candidate for RCU locking.
      
      The patch uses RCU locking during validation, eliminating one
      lock/unlock pair on that path (see the sketch after this entry).
      Reviewed-by: Mike Heinz <michael.william.heinz@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
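      An illustrative read-path sketch of RCU-based MR validation, assuming
      a simple lkey table; my_lkey_table, my_mregion, and my_validate_mr are
      hypothetical stand-ins for the driver's structures. Readers pin the MR
      with a reference before leaving the RCU read-side critical section.

```c
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

struct my_mregion {
	u32 lkey;
	atomic_t refcount;
};

struct my_lkey_table {
	u32 max;			/* table size, illustrative */
	struct my_mregion __rcu **table;
};

static struct my_mregion *my_validate_mr(struct my_lkey_table *t, u32 lkey)
{
	struct my_mregion *mr;

	rcu_read_lock();		/* no spinlock on the read path */
	mr = rcu_dereference(t->table[lkey % t->max]);
	if (mr && mr->lkey == lkey)
		atomic_inc(&mr->refcount);	/* pin before unlock */
	else
		mr = NULL;
	rcu_read_unlock();
	return mr;			/* caller drops the reference when done */
}
```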
    • IB/qib: Avoid returning EBUSY from MR deregister · 6a82649f
      Authored by Mike Marciniszyn
      A timing issue can occur in which qib_mr_dereg() returns -EBUSY if the
      MR use count is not zero.
      
      This can occur if the MR is de-registered while RDMA read response
      packets are still being progressed from the SDMA ring.  The suspicion
      is that the peer sent an RDMA read request whose response data has
      already been copied across to it.  The peer sees the completion of its
      request and tells the responder that the MR is no longer needed.  The
      responder then tries to de-register the MR while some responses remain
      in the SDMA ring, still holding the MR use count.
      
      The code now uses a get/put paradigm to track MR use counts and
      coordinates with the MR de-registration process using a completion
      that fires when the count reaches zero (a sketch of this pattern
      follows this entry).  A timeout on the wait is in place to catch
      other EBUSY issues.
      
      The reference count protocol is as follows:
      - The reference returned to the user counts as 1.
      - A reference from the lk_table or the qib_ibdev counts as 1.
      - Transient I/O operations increase/decrease the count as necessary.
      
      A lot of code duplication has been folded into the new routines
      init_qib_mregion() and deinit_qib_mregion().  Additionally, explicit
      initialization of fields to zero is now handled by kzalloc().
      
      Also, the duplicated 'while.*num_sge' code that decrements reference
      counts has been consolidated into qib_put_ss().
      Reviewed-by: Ramkrishna Vepa <ramkrishna.vepa@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
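      A hedged sketch of the get/put-plus-completion pattern the commit
      describes; my_mr, its helpers, and the 5-second timeout are
      illustrative choices, not the driver's actual symbols or tuning.

```c
#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/jiffies.h>

struct my_mr {
	atomic_t refcount;		/* 1 for the user, +1 per in-flight I/O */
	struct completion comp;		/* completed when refcount hits zero */
};

static void my_mr_init(struct my_mr *mr)
{
	atomic_set(&mr->refcount, 1);	/* the reference returned to the user */
	init_completion(&mr->comp);
}

static void my_mr_get(struct my_mr *mr)
{
	atomic_inc(&mr->refcount);	/* e.g. taken per SDMA descriptor */
}

static void my_mr_put(struct my_mr *mr)
{
	if (atomic_dec_and_test(&mr->refcount))
		complete(&mr->comp);	/* wake a waiting deregister */
}

static int my_mr_dereg(struct my_mr *mr)
{
	my_mr_put(mr);			/* drop the user's reference */
	/* Wait for transient I/O instead of failing immediately; time out
	 * so a stuck reference still surfaces as -EBUSY. */
	if (!wait_for_completion_timeout(&mr->comp, msecs_to_jiffies(5000)))
		return -EBUSY;
	return 0;
}
```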
  5. 04 Jan 2012, 1 commit
  6. 01 Nov 2011, 1 commit
  7. 22 Oct 2011, 1 commit
    • IB/qib: Use RCU for qpn lookup · af061a64
      Authored by Mike Marciniszyn
      The heavyweight spinlock in qib_lookup_qpn() is replaced with RCU.
      The hash list itself is now indexed via the jhash functions instead
      of a mod operation (see the sketch after this entry).
      
      The changes should benefit multiple receive contexts on different
      processors by not contending for the lock just to read the hash
      structures.
      
      The patch also adds a lookaside_qp pointer and a lookaside_qpn to the
      context.  The interrupt handler tests the current packet's qpn against
      lookaside_qpn when the lookaside_qp pointer is non-NULL; the pointer
      is set to NULL when the interrupt handler exits.
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@qlogic.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
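      A sketch of an RCU-protected QPN lookup hashed with jhash, in the
      spirit of the commit above; my_qp, qp_hash, and hash_seed are
      hypothetical names, and a real driver would also take a reference on
      the QP before rcu_read_unlock().

```c
#include <linux/jhash.h>
#include <linux/rculist.h>
#include <linux/types.h>

#define MY_QP_HASH_SIZE 256

struct my_qp {
	struct hlist_node node;
	u32 qpn;
};

static struct hlist_head qp_hash[MY_QP_HASH_SIZE];
static u32 hash_seed;			/* seeded once at init time */

static struct my_qp *my_lookup_qpn(u32 qpn)
{
	struct my_qp *qp;
	u32 bucket = jhash_1word(qpn, hash_seed) % MY_QP_HASH_SIZE;

	rcu_read_lock();		/* replaces the heavyweight spinlock */
	hlist_for_each_entry_rcu(qp, &qp_hash[bucket], node) {
		if (qp->qpn == qpn)
			break;		/* qp is NULL if no match is found */
	}
	rcu_read_unlock();
	return qp;
}
```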
  8. 04 Aug 2010, 1 commit
  9. 24 May 2010, 1 commit