1. 29 Aug 2015 (2 commits)
  2. 12 May 2015 (1 commit)
  3. 06 May 2015 (1 commit)
  4. 05 May 2015 (3 commits)
  5. 16 Jan 2015 (2 commits)
  6. 13 Jan 2015 (2 commits)
  7. 16 Dec 2014 (2 commits)
    • RDMA/cxgb4: Handle NET_XMIT return codes · e6b11163
      Hariprasad S authored
      cxgb4_create_server() and cxgb4_create_server6() return NET_XMIT_*
      values or a negative errno; iw_cxgb4 needs to handle both cases
      correctly, as sketched below.
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
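      A minimal sketch of the normalization this implies, assuming the
      standard NET_XMIT_* conventions from <linux/netdevice.h>; the helper
      name normalize_xmit_err() is hypothetical, not the driver's actual
      code:

          #include <linux/netdevice.h>    /* NET_XMIT_*, net_xmit_errno() */

          /* Map a value that may be either a NET_XMIT_* code (>= 0) or a
           * negative errno into a plain errno.  Hypothetical helper.
           */
          static int normalize_xmit_err(int ret)
          {
                  if (ret < 0)                    /* already a negative errno */
                          return ret;
                  if (ret == NET_XMIT_SUCCESS)
                          return 0;
                  /* NET_XMIT_CN is treated as success; other NET_XMIT_*
                   * codes map to -ENOBUFS.
                   */
                  return net_xmit_errno(ret);
          }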
    • RDMA/cxgb4: Fix locking issue in process_mpa_request · 10be6b48
      Hariprasad Shenai authored
      Fix the following lockdep report:
      
          =============================================
          [ INFO: possible recursive locking detected ]
          3.17.0+ #3 Tainted: G            E
          ---------------------------------------------
          kworker/u64:3/299 is trying to acquire lock:
           (&epc->mutex){+.+.+.}, at: [<ffffffffa074e07a>]
          process_mpa_request+0x1aa/0x3e0 [iw_cxgb4]
      
          but task is already holding lock:
           (&epc->mutex){+.+.+.}, at: [<ffffffffa074e34e>] rx_data+0x9e/0x1f0 [iw_cxgb4]
      
          other info that might help us debug this:
           Possible unsafe locking scenario:
      
                 CPU0
                 ----
            lock(&epc->mutex);
            lock(&epc->mutex);
      
           *** DEADLOCK ***
      
           May be due to missing lock nesting notation
      
          3 locks held by kworker/u64:3/299:
           #0:  ("%s""iw_cxgb4"){.+.+.+}, at: [<ffffffff8106f14d>]
          process_one_work+0x13d/0x4d0
           #1:  (skb_work){+.+.+.}, at: [<ffffffff8106f14d>] process_one_work+0x13d/0x4d0
           #2:  (&epc->mutex){+.+.+.}, at: [<ffffffffa074e34e>] rx_data+0x9e/0x1f0
          [iw_cxgb4]
      
          stack backtrace:
          CPU: 2 PID: 299 Comm: kworker/u64:3 Tainted: G            E  3.17.0+ #3
          Hardware name: Dell Inc. PowerEdge T110/0X744K, BIOS 1.2.1 01/28/2010
          Workqueue: iw_cxgb4 process_work [iw_cxgb4]
           ffff8800b91593d0 ffff8800b8a2f9f8 ffffffff815df107 0000000000000001
           ffff8800b9158750 ffff8800b8a2fa28 ffffffff8109f0e2 ffff8800bb768a00
           ffff8800b91593d0 ffff8800b9158750 0000000000000000 ffff8800b8a2fa88
          Call Trace:
           [<ffffffff815df107>] dump_stack+0x49/0x62
           [<ffffffff8109f0e2>] print_deadlock_bug+0xf2/0x100
           [<ffffffff810a0f04>] validate_chain+0x454/0x700
           [<ffffffff810a1574>] __lock_acquire+0x3c4/0x580
           [<ffffffffa074e07a>] ? process_mpa_request+0x1aa/0x3e0 [iw_cxgb4]
           [<ffffffff810a17cc>] lock_acquire+0x9c/0x110
           [<ffffffffa074e07a>] ? process_mpa_request+0x1aa/0x3e0 [iw_cxgb4]
           [<ffffffff815e111b>] mutex_lock_nested+0x4b/0x360
           [<ffffffffa074e07a>] ? process_mpa_request+0x1aa/0x3e0 [iw_cxgb4]
           [<ffffffff810c181a>] ? del_timer_sync+0xaa/0xd0
           [<ffffffff810c1770>] ? try_to_del_timer_sync+0x70/0x70
           [<ffffffffa074e07a>] process_mpa_request+0x1aa/0x3e0 [iw_cxgb4]
           [<ffffffffa074a3ec>] ? update_rx_credits+0xec/0x140 [iw_cxgb4]
           [<ffffffffa074e381>] rx_data+0xd1/0x1f0 [iw_cxgb4]
           [<ffffffff8109ff23>] ? mark_held_locks+0x73/0xa0
           [<ffffffff815e4b90>] ? _raw_spin_unlock_irqrestore+0x40/0x70
           [<ffffffff810a020d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
           [<ffffffff810a02dd>] ? trace_hardirqs_on+0xd/0x10
           [<ffffffffa074c931>] process_work+0x51/0x80 [iw_cxgb4]
           [<ffffffff8106f1c8>] process_one_work+0x1b8/0x4d0
           [<ffffffff8106f14d>] ? process_one_work+0x13d/0x4d0
           [<ffffffff8106f600>] worker_thread+0x120/0x3c0
           [<ffffffff8106f4e0>] ? process_one_work+0x4d0/0x4d0
           [<ffffffff81074a0e>] kthread+0xde/0x100
           [<ffffffff815e4b40>] ? _raw_spin_unlock_irq+0x30/0x40
           [<ffffffff81074930>] ? __init_kthread_worker+0x70/0x70
           [<ffffffff815e512c>] ret_from_fork+0x7c/0xb0
           [<ffffffff81074930>] ? __init_kthread_worker+0x70/0x70
      
      Based on original work by Steve Wise <swise@opengridcomputing.com>.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
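      The report shows process_mpa_request() re-acquiring the epc->mutex
      that rx_data() already holds.  One conventional kernel idiom for
      this, shown here only as a hedged sketch (the __locked split is an
      assumption, not the literal patch), is to let callers that already
      hold the mutex invoke a core that merely asserts it:

          static void __process_mpa_request_locked(struct c4iw_ep *ep,
                                                   struct sk_buff *skb)
          {
                  lockdep_assert_held(&ep->com.mutex);  /* caller holds it */
                  /* ... parse and handle the MPA start request ... */
          }

          static void process_mpa_request(struct c4iw_ep *ep,
                                          struct sk_buff *skb)
          {
                  mutex_lock(&ep->com.mutex);
                  __process_mpa_request_locked(ep, skb);
                  mutex_unlock(&ep->com.mutex);
          }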
  8. 23 Nov 2014 (2 commits)
  9. 14 Nov 2014 (1 commit)
  10. 11 Nov 2014 (1 commit)
  11. 14 Oct 2014 (3 commits)
  12. 22 Jul 2014 (1 commit)
  13. 16 Jul 2014 (1 commit)
    • cxgb4/iw_cxgb4: use firmware ord/ird resource limits · 4c2c5763
      Hariprasad Shenai authored
      Advertise a larger max read queue depth for QPs, and gather the
      resource limits from FW, using them to avoid exhausting all the
      resources.
      
      Design:
      
      cxgb4:
      
      Obtain the max_ordird_qp and max_ird_adapter device params from FW
      at init time and pass them up to the ULDs when they attach.  If these
      parameters are not available, due to older firmware, then hard-code
      the values based on the known values for older firmware.
      iw_cxgb4:
      
      Fix c4iw_query_device() to report the correct values based on
      adapter parameters.  ibv_query_device() will always return:
      
      max_qp_rd_atom = max_qp_init_rd_atom = min(module_max, max_ordird_qp)
      max_res_rd_atom = max_ird_adapter
      
      Bump up the per-QP max module option to 32, allowing the user to
      increase it up to the device max of max_ordird_qp.  32 seems to be
      sufficient to maximize throughput for streaming read benchmarks.
      
      Fail connection setup if the negotiated IRD would exhaust the
      available adapter IRD resources: the driver tracks the amount of IRD
      resource in use and will not send an RI_WR/INIT to FW that would
      reduce the available IRD resources below zero (see the accounting
      sketch after the sign-offs).
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
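      The adapter-wide IRD accounting can be sketched with an atomic
      counter; this is illustrative only, and names such as ird_pool are
      assumptions rather than the driver's actual fields:

          #include <linux/atomic.h>
          #include <linux/errno.h>

          struct ird_pool {
                  atomic_t used;  /* IRD currently committed to QPs */
                  int      max;   /* max_ird_adapter reported by FW */
          };

          /* Claim 'ird' units for a new connection; fail connection setup
           * rather than let the adapter-wide resource go below zero.
           */
          static int ird_pool_claim(struct ird_pool *p, int ird)
          {
                  if (atomic_add_return(ird, &p->used) > p->max) {
                          atomic_sub(ird, &p->used);      /* roll back */
                          return -ENOMEM;
                  }
                  return 0;
          }

          static void ird_pool_release(struct ird_pool *p, int ird)
          {
                  atomic_sub(ird, &p->used);
          }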
  14. 14 Jul 2014 (1 commit)
  15. 09 Jul 2014 (2 commits)
  16. 02 Jul 2014 (1 commit)
  17. 11 Jun 2014 (3 commits)
  18. 20 May 2014 (1 commit)
  19. 29 Apr 2014 (2 commits)
    • RDMA/cxgb4: Force T5 connections to use TAHOE congestion control · 92e5011a
      Steve Wise authored
      This is required to work around a T5 HW issue.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • RDMA/cxgb4: Fix endpoint mutex deadlocks · cc18b939
      Steve Wise authored
      When the CM calls c4iw_modify_rc_qp() with the endpoint mutex held,
      it must do so with internal == 1.  rx_data() and process_mpa_reply()
      are not doing this, which causes a deadlock: c4iw_modify_rc_qp()
      might call c4iw_ep_disconnect() in some !internal cases, and
      c4iw_ep_disconnect() acquires the endpoint mutex.  The design was
      intended to do the disconnect only for !internal calls.
      
      Change rx_data(), FPDU_MODE case, to call c4iw_modify_rc_qp() with
      internal == 1, and then disconnect only after releasing the mutex.
      
      Change process_mpa_reply() to call c4iw_modify_rc_qp(TERMINATE) with
      internal == 1 and set a new attr flag telling it to send a TERMINATE
      message.  Previously this was implied by !internal.
      
      Change process_mpa_reply() to return whether the caller should
      disconnect after releasing the endpoint mutex.  Now rx_data() will do
      the disconnect in the cases where process_mpa_reply() wants to
      disconnect after the TERMINATE is sent (see the sketch after the
      sign-offs).
      
      Change c4iw_modify_rc_qp() RTS->TERM to only disconnect if !internal,
      and to send a TERMINATE message if attrs->send_term is 1.
      
      Change abort_connection() to not acquire the ep mutex when setting
      the state, and make all calls to abort_connection() happen with the
      mutex held.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
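      The "disconnect only after dropping the mutex" flow can be sketched
      as follows; this is a simplified illustration, and everything beyond
      the function names quoted in the message is an assumption:

          static void rx_data_fpdu_mode(struct c4iw_ep *ep,
                                        struct sk_buff *skb)
          {
                  int disconnect;

                  mutex_lock(&ep->com.mutex);
                  /* Runs c4iw_modify_rc_qp() with internal == 1, so it
                   * will not itself disconnect (and re-take the endpoint
                   * mutex).  Returns whether to disconnect once we drop
                   * the lock.
                   */
                  disconnect = process_mpa_reply(ep, skb);
                  mutex_unlock(&ep->com.mutex);

                  if (disconnect)
                          c4iw_ep_disconnect(ep, 0 /* !abrupt */, GFP_KERNEL);
          }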
  20. 12 Apr 2014 (1 commit)
    • RDMA/cxgb4: Endpoint timeout fixes · b33bd0cb
      Steve Wise authored
      1) Timed-out endpoint processing can be starved: if there is a
         continual flow of CPL messages into the driver, the endpoint
         timeout processing never gets a chance to run.  This condition
         exposed the other bugs below.
      
      Solution: In process_work(), call process_timedout_eps() after each CPL
      is processed.
      
      2) Connection events can be processed even though the endpoint is on
         the timeout list.  If the endpoint is scheduled for timeout
         processing, then we must ignore MPA Start Requests and Replies.
      
      Solution: Change stop_ep_timer() to return 1 if the ep has already been
      queued for timeout processing.  All the callers of stop_ep_timer() need
      to check this and act accordingly.  There are just a few cases where
      the caller needs to do something different if stop_ep_timer() returns 1:
      
      1) in process_mpa_reply(), ignore the reply; process_timeout() will
         abort the connection.

      2) in process_mpa_request(), ignore the request; process_timeout()
         will abort the connection.
      
      It is ok for callers of stop_ep_timer() to abort the connection since
      that will leave the state in ABORTING or DEAD, and process_timeout()
      now ignores timeouts when the ep is in these states.
      
      3) Double insertion on the timeout list.  Since the endpoint timers
         are used for connection setup and teardown, we need to guard
         against the possibility that an endpoint is already on the
         timeout list.  This is a rare condition, seen only under heavy
         load and in the presence of the two bugs above.
      
      Solution: In ep_timeout(), don't queue the endpoint if it is already
      on the queue (both guards are sketched after the sign-offs below).
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
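      A hedged sketch of the two guards; it follows the description above,
      but the flag, lock, and list names are assumptions about the
      driver's internals:

          /* Returns 1 if the ep has already been queued for timeout
           * processing, in which case the caller must ignore the MPA
           * start request or reply.
           */
          static int stop_ep_timer(struct c4iw_ep *ep)
          {
                  del_timer_sync(&ep->timer);
                  if (!test_and_set_bit(TIMEOUT, &ep->com.flags))
                          return 0;       /* timer stopped normally */
                  return 1;               /* already queued for timeout work */
          }

          static void ep_timeout(unsigned long arg)   /* timer callback */
          {
                  struct c4iw_ep *ep = (struct c4iw_ep *)arg;
                  int kickit = 0;

                  spin_lock(&timeout_lock);
                  /* Guard against double insertion on the timeout list. */
                  if (!test_and_set_bit(TIMEOUT, &ep->com.flags)) {
                          list_add_tail(&ep->entry, &timeout_list);
                          kickit = 1;
                  }
                  spin_unlock(&timeout_lock);
                  if (kickit)
                          queue_work(workq, &skb_work);
          }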
  21. 02 Apr 2014 (3 commits)
  22. 25 Mar 2014 (3 commits)
  23. 21 Mar 2014 (1 commit)