1. 12 Jun 2015, 1 commit
  2. 08 Apr 2015, 1 commit
  3. 02 Apr 2015, 1 commit
  4. 30 Mar 2015, 1 commit
    • cxgb4: Allocate dynamic mem. for egress and ingress queue maps · 4b8e27a8
      Committed by Hariprasad Shenai
      QIDs (egress/ingress) returned by firmware in the FW_*_CMD.alloc command
      can be anywhere in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END.
      For example, on the first load the eqids can run from 100 to 300, and on
      the next load from 301 to 500 (assuming eq_start is 100 and eq_end is 1000).

      The driver assumed they would always start at EQ(IQFLINT)_START and run
      through MAX_EGRQ(INGQ), which caused a stack overflow and a subsequent crash.

      Fix this by dynamically allocating memory of size (x_END - x_START + 1) for
      these structures, as sketched below.
      
      Based on original work by Santosh Rastapur <santosh@chelsio.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
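      A minimal sketch of that allocation pattern, with illustrative names
      (adapter_sketch, egr_start, egr_end and egr_map are hypothetical, not the
      driver's actual identifiers): the map is sized from the firmware-reported
      range and indexed relative to the range start, instead of being a fixed
      MAX_EGRQ-sized array indexed by absolute QID.

      #include <linux/errno.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      struct sge_txq;                    /* opaque per-queue state in this sketch */

      struct adapter_sketch {
              u16 egr_start;             /* EQ_START reported by firmware */
              u16 egr_end;               /* EQ_END reported by firmware */
              struct sge_txq **egr_map;  /* one slot per possible egress QID */
      };

      static int alloc_egr_map(struct adapter_sketch *adap)
      {
              unsigned int qsize = adap->egr_end - adap->egr_start + 1;

              adap->egr_map = kcalloc(qsize, sizeof(*adap->egr_map), GFP_KERNEL);
              if (!adap->egr_map)
                      return -ENOMEM;
              return 0;
      }

      /* Look up a queue by absolute QID: index relative to egr_start. */
      static struct sge_txq *egr_lookup(struct adapter_sketch *adap,
                                        unsigned int qid)
      {
              return adap->egr_map[qid - adap->egr_start];
      }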
  5. 25 Mar 2015, 1 commit
  6. 06 Mar 2015, 1 commit
  7. 28 Feb 2015, 1 commit
  8. 10 Feb 2015, 1 commit
  9. 08 Feb 2015, 3 commits
  10. 05 Feb 2015, 1 commit
    • cxgb4: Add low latency socket busy_poll support · 3a336cb1
      Committed by Hariprasad Shenai
      cxgb_busy_poll, which backs ndo_busy_poll, is called by a socket waiting
      for data.

      With busy_poll enabled, latency improves, as observed by collecting netperf
      TCP_RR numbers (a userspace example of opting a socket into busy polling is
      sketched below).
      Below are the latency numbers, with and without busy-poll, in a switched
      environment for a particular message size:
      netperf command: netperf -4 -H <ip> -l 30 -t TCP_RR -- -r1,1
      Latency without busy-poll: ~16.25 us
      Latency with busy-poll   : ~8.79 us
      
      Based on original work by Kumar Sanghvi <kumaras@chelsio.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
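      Busy polling also has to be opted into at runtime: globally via the
      net.core.busy_read / net.core.busy_poll sysctls, or per socket with the
      SO_BUSY_POLL socket option (setting it typically requires CAP_NET_ADMIN).
      The patch itself only adds the driver hook; the following is a minimal
      userspace sketch of the per-socket option, where the 50-microsecond budget
      is an arbitrary illustrative value.

      #include <stdio.h>
      #include <sys/socket.h>

      #ifndef SO_BUSY_POLL
      #define SO_BUSY_POLL 46   /* value from include/uapi/asm-generic/socket.h */
      #endif

      /* Ask the kernel to busy-poll the device queue for up to 'usecs'
       * microseconds on blocking receives for this socket.
       */
      static int enable_busy_poll(int sockfd, int usecs)
      {
              if (setsockopt(sockfd, SOL_SOCKET, SO_BUSY_POLL,
                             &usecs, sizeof(usecs)) < 0) {
                      perror("setsockopt(SO_BUSY_POLL)");
                      return -1;
              }
              return 0;
      }

      /* Example: enable_busy_poll(fd, 50); then recv() on fd as usual. */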
  11. 28 Jan 2015, 1 commit
  12. 27 Jan 2015, 4 commits
  13. 25 Jan 2015, 2 commits
  14. 16 Jan 2015, 1 commit
  15. 09 Jan 2015, 3 commits
  16. 13 Dec 2014, 1 commit
  17. 11 Dec 2014, 1 commit
  18. 10 Dec 2014, 3 commits
  19. 23 Nov 2014, 1 commit
  20. 11 Nov 2014, 2 commits
  21. 15 Oct 2014, 1 commit
  22. 10 Oct 2014, 1 commit
  23. 29 Sep 2014, 2 commits
  24. 22 Aug 2014, 1 commit
  25. 08 Aug 2014, 2 commits
  26. 05 Aug 2014, 1 commit
  27. 16 Jul 2014, 1 commit
    • cxgb4/iw_cxgb4: use firmware ord/ird resource limits · 4c2c5763
      Committed by Hariprasad Shenai
      Advertise a larger max read queue depth for QPs, and gather the resource
      limits from FW and use them to avoid exhausting all the resources.
      
      Design:
      
      cxgb4:
      
      Obtain the max_ordird_qp and max_ird_adapter device params from FW
      at init time and pass them up to the ULDs when they attach.  If these
      parameters are not available, due to older firmware, then hard-code
      the values based on the known values for older firmware.
      iw_cxgb4:
      
      Fix c4iw_query_device() to report the correct values based on the adapter
      parameters (a userspace query sketch follows this entry).
      ibv_query_device() will then always return:
      
      max_qp_rd_atom = max_qp_init_rd_atom = min(module_max, max_ordird_qp)
      max_res_rd_atom = max_ird_adapter
      
      Bump up the per-QP max module option to 32, allowing the user to increase
      it up to the device max of max_ordird_qp.  32 seems to be sufficient to
      maximize throughput for streaming read benchmarks.
      
      Fail connection setup if the negotiated IRD would exhaust the available
      adapter IRD resources: the driver tracks the amount of IRD resource in use
      and will not send an RI_WR/INIT to FW that would drive the available IRD
      resources below zero.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
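      For context on the ibv_query_device() values quoted above, here is a
      minimal libibverbs sketch, not part of the patch: query_rd_atom() is a
      hypothetical helper that reads the advertised limits from an already-opened
      struct ibv_context and clamps an application-requested depth to the device
      limit, mirroring the min(module_max, max_ordird_qp) rule above.

      #include <stdio.h>
      #include <infiniband/verbs.h>

      /* Read the advertised read-queue depths and clamp a requested per-QP
       * depth to the device limit (max_qp_rd_atom).
       */
      static int query_rd_atom(struct ibv_context *ctx, int requested_depth)
      {
              struct ibv_device_attr attr;

              if (ibv_query_device(ctx, &attr))
                      return -1;

              printf("max_qp_rd_atom      = %d\n", attr.max_qp_rd_atom);
              printf("max_qp_init_rd_atom = %d\n", attr.max_qp_init_rd_atom);
              printf("max_res_rd_atom     = %d\n", attr.max_res_rd_atom);

              /* effective per-QP depth = min(requested, device limit) */
              return requested_depth < attr.max_qp_rd_atom ?
                     requested_depth : attr.max_qp_rd_atom;
      }

      The clamped value is what an application would then pass as max_rd_atomic /
      max_dest_rd_atomic when transitioning its QP with ibv_modify_qp().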