1. 22 Jun 2013, 1 commit
  2. 20 Jul 2012, 1 commit
    • IB/qib: Reduce sdma_lock contention · 551ace12
      Mike Marciniszyn authored
      Profiling has shown that sdma_lock is proving a bottleneck for
      performance. The situations include:
       - RDMA reads when krcvqs > 1
       - post sends from multiple threads
      
For RDMA reads, the current global qib_wq mechanism runs on all CPUs
and contends for the sdma_lock when multiple RDMA read requests are
fielded on different CPUs. For post sends, the direct call to
qib_do_send() from multiple threads causes the contention.
      
      Since the sdma mechanism is per port, this fix converts the existing
      workqueue to a per port single thread workqueue to reduce the lock
      contention in the RDMA read case, and for any other case where the QP
      is scheduled via the workqueue mechanism from more than 1 CPU.
      
For the post send case, this patch modifies the post send code to test
for a non-empty sdma engine.  If the sdma engine is not idle, the (now
single-threaded) workqueue will be used to trigger the send engine
instead of the direct call to qib_do_send().
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  3. 18 Jul 2012, 1 commit
  4. 11 Jul 2012, 1 commit
  5. 09 Jul 2012, 2 commits
    • IB/qib: RCU locking for MR validation · 8aac4cc3
      Mike Marciniszyn authored
      Profiling indicates that MR validation locking is expensive.  The MR
      table is largely read-only and is a suitable candidate for RCU locking.
      
The patch uses RCU read-side locking during validation, eliminating
one lock/unlock pair per validation.
      Reviewed-by: Mike Heinz <michael.william.heinz@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • IB/qib: Avoid returning EBUSY from MR deregister · 6a82649f
      Mike Marciniszyn authored
      A timing issue can occur where qib_mr_dereg can return -EBUSY if the
      MR use count is not zero.
      
      This can occur if the MR is de-registered while RDMA read response
      packets are being progressed from the SDMA ring.  The suspicion is
      that the peer sent an RDMA read request, which has already been copied
      across to the peer.  The peer sees the completion of his request and
      then communicates to the responder that the MR is not needed any
      longer.  The responder tries to de-register the MR, catching some
      responses remaining in the SDMA ring holding the MR use count.
      
      The code now uses a get/put paradigm to track MR use counts and
      coordinates with the MR de-registration process using a completion
      when the count has reached zero.  A timeout on the delay is in place
      to catch other EBUSY issues.
      
      The reference count protocol is as follows:
- The reference returned to the user counts as 1.
- A reference from the lk_table or the qib_ibdev counts as 1.
- Transient I/O operations increase/decrease the count as necessary.
      
      A lot of code duplication has been folded into the new routines
      init_qib_mregion() and deinit_qib_mregion().  Additionally, explicit
      initialization of fields to zero is now handled by kzalloc().
      
Also, the duplicated 'while.*num_sge' code that decrements reference
counts has been consolidated into qib_put_ss().
      Reviewed-by: Ramkrishna Vepa <ramkrishna.vepa@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  6. 15 May 2012, 1 commit
  7. 22 Oct 2011, 3 commits
  8. 17 Jan 2011, 1 commit
    • RDMA: Update workqueue usage · f0626710
      Tejun Heo authored
      * ib_wq is added, which is used as the common workqueue for infiniband
        instead of the system workqueue.  All system workqueue usages
        including flush_scheduled_work() callers are converted to use and
        flush ib_wq.
      
      * cancel_delayed_work() + flush_scheduled_work() converted to
        cancel_delayed_work_sync().
      
      * qib_wq is removed and ib_wq is used instead.
      
      This is to prepare for deprecation of flush_scheduled_work().
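The second bullet above follows a standard kernel conversion pattern; a non-compilable sketch, with the work-item name (`mem_timer`) purely illustrative:

```c
/* Before: two calls, where flush_scheduled_work() flushes the
 * entire system workqueue just to synchronize one work item. */
cancel_delayed_work(&dev->mem_timer);
flush_scheduled_work();

/* After: one call that cancels and waits for only this item. */
cancel_delayed_work_sync(&dev->mem_timer);

/* Queueing likewise moves from the removed qib_wq to the
 * shared InfiniBand workqueue. */
queue_work(ib_wq, &qp->s_work);
```

`cancel_delayed_work_sync()` is both cheaper and safer: it cannot deadlock on unrelated work items that happen to share the system workqueue.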
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  9. 11 Jan 2011, 2 commits
  10. 24 May 2010, 1 commit