1. 20 July 2012, 2 commits
    • IB/qib: Reduce sdma_lock contention · 551ace12
      Authored by Mike Marciniszyn
      Profiling has shown that sdma_lock has become a performance
      bottleneck. The situations include:
       - RDMA reads when krcvqs > 1
       - post sends from multiple threads
      
      For RDMA reads, the current global qib_wq mechanism runs on all CPUs
      and contends for the sdma_lock when multiple RDMA read requests are
      fielded on different CPUs. For post sends, the direct call to
      qib_do_send() from multiple threads causes the contention.
      
      Since the sdma mechanism is per port, this fix converts the existing
      workqueue to a per port single thread workqueue to reduce the lock
      contention in the RDMA read case, and for any other case where the QP
      is scheduled via the workqueue mechanism from more than 1 CPU.
      
      For the post send case, this patch modifies the post send code to
      test for a non-empty sdma engine. If the sdma engine is not idle,
      the (now single-threaded) workqueue is used to trigger the send
      engine instead of the direct call to qib_do_send() (see the sketch
      after this entry).
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
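      A minimal C sketch of the approach described above, assuming a
      per-port ordered workqueue and a descriptor-count busy check; the
      structure, field, and function names (demo_port, port_wq,
      demo_sdma_busy(), and so on) are illustrative stand-ins, not the
      actual qib driver identifiers:

```c
/*
 * Hedged sketch (not the actual qib code) of the two changes described
 * above: a per-port single-threaded workqueue and a post-send path that
 * defers to that workqueue when the sdma engine is busy.
 */
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct demo_port {
	struct workqueue_struct *port_wq;  /* one ordered workqueue per port */
	spinlock_t sdma_lock;              /* the contended lock */
	unsigned int sdma_descq_added;     /* descriptors posted to the ring */
	unsigned int sdma_descq_removed;   /* descriptors completed */
};

struct demo_qp {
	struct demo_port *port;
	struct work_struct send_work;      /* runs the send engine */
};

/* Work handler: would take sdma_lock and feed the hardware ring. */
static void demo_do_send(struct work_struct *work)
{
}

/* One single-threaded (ordered) workqueue per port: at most one sender
 * scheduled through the workqueue runs at a time, so work items queued
 * from different CPUs no longer pile up on sdma_lock. */
static int demo_init_port(struct demo_port *ppd, int unit, int port)
{
	spin_lock_init(&ppd->sdma_lock);
	ppd->port_wq = alloc_ordered_workqueue("qib%d_%d", WQ_MEM_RECLAIM,
					       unit, port);
	return ppd->port_wq ? 0 : -ENOMEM;
}

static void demo_init_qp(struct demo_qp *qp, struct demo_port *ppd)
{
	qp->port = ppd;
	INIT_WORK(&qp->send_work, demo_do_send);
}

/* Busy when descriptors are still outstanding on the sdma ring. */
static bool demo_sdma_busy(struct demo_port *ppd)
{
	return ppd->sdma_descq_added != ppd->sdma_descq_removed;
}

/* Post-send path: call the send engine directly only when sdma is idle;
 * otherwise let the single workqueue thread trigger it, instead of having
 * every posting thread contend on sdma_lock via a direct call. */
static void demo_post_send(struct demo_qp *qp)
{
	struct demo_port *ppd = qp->port;

	if (demo_sdma_busy(ppd))
		queue_work(ppd->port_wq, &qp->send_work);
	else
		demo_do_send(&qp->send_work);
}
```

      An ordered (single-threaded) workqueue trades parallelism for fewer
      lock acquisitions; since each port's sdma ring is serialized by
      sdma_lock anyway, extra worker threads were only adding contention.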
    • IB/qib: Fix an incorrect log message · f3331f88
      Authored by Betty Dall
      There is a cut-and-paste typo in the function qib_pci_slot_reset():
      it prints that the "link_reset" function was called rather than the
      "slot_reset" function, which makes the message misleading (a brief
      sketch of a corrected handler follows this entry).
      Signed-off-by: Betty Dall <betty.dall@hp.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
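      A brief sketch of the kind of fix described, assuming the generic
      PCI error-recovery callback shape; the message text, return value,
      and names here are illustrative, not copied from the driver:

```c
/*
 * Hedged illustration: a slot_reset handler whose log message should name
 * "slot_reset", not "link_reset" copied from the neighbouring handler.
 */
#include <linux/pci.h>

static pci_ers_result_t demo_pci_slot_reset(struct pci_dev *pdev)
{
	/* Before the fix the message said "link_reset", which was misleading. */
	dev_info(&pdev->dev, "slot_reset function called, ignored\n");
	return PCI_ERS_RESULT_CAN_RECOVER;
}

static struct pci_error_handlers demo_err_handlers = {
	.slot_reset = demo_pci_slot_reset,
};
```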
  2. 18 July 2012, 1 commit
  3. 11 July 2012, 1 commit
  4. 09 July 2012, 3 commits
  5. 01 July 2012, 9 commits
  6. 30 June 2012, 9 commits
  7. 29 June 2012, 15 commits