1. 22 Mar 2016, 2 commits
  2. 24 Dec 2015, 6 commits
  3. 09 Dec 2015, 4 commits
  4. 19 Nov 2015, 2 commits
    • net: provide generic busy polling to all NAPI drivers · 93d05d4a
      Eric Dumazet authored
      NAPI drivers no longer need to observe a particular protocol
      to benefit from busy polling (CONFIG_NET_RX_BUSY_POLL=y)
      
      napi_hash_add() and napi_hash_del() are now called automatically
      by the core networking stack, from netif_napi_add() and
      netif_napi_del() respectively.
      
      This patch depends on free_netdev() and netif_napi_del() being
      called from process context, which seems to be the norm.
      
      Drivers might still prefer to call napi_hash_del() on their own,
      since they can combine all the RCU grace periods into a single one
      (they know their NAPI structures' lifetimes), while the core
      networking stack has no idea whether such combining is possible.
      
      Once this patch proves not to bring serious regressions, we will
      clean up drivers to either remove napi_hash_del() or provide the
      appropriate combining of RCU grace periods.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
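
      As an illustration only (the mydrv_* names below are placeholders,
      not from any in-tree driver), a driver's RX queue setup and teardown
      now needs nothing beyond the usual netif_napi_add()/netif_napi_del()
      calls; the busy-polling hash is maintained by the core:

      #include <linux/netdevice.h>

      /* Hypothetical per-queue context of a NAPI driver. */
      struct mydrv_rxq {
              struct napi_struct napi;
              struct net_device *netdev;
      };

      static void mydrv_setup_rxq(struct net_device *dev, struct mydrv_rxq *q,
                                  int (*poll)(struct napi_struct *, int))
      {
              q->netdev = dev;
              /* netif_napi_add() now hashes the NAPI context for busy
               * polling, so no explicit napi_hash_add() is needed here. */
              netif_napi_add(dev, &q->napi, poll, NAPI_POLL_WEIGHT);
      }

      static void mydrv_free_rxq(struct mydrv_rxq *q)
      {
              /* netif_napi_del() now calls napi_hash_del(); a driver may
               * still unhash earlier itself if it wants to combine RCU
               * grace periods, as noted above. */
              netif_napi_del(&q->napi);
      }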
    • net: move skb_mark_napi_id() into core networking stack · 93f93a44
      Eric Dumazet authored
      We would like to automatically provide busy polling support
      to all NAPI drivers, without them having to implement anything.
      
      skb_mark_napi_id() can be called from napi_gro_receive() and
      napi_get_frags().
      
      A few drivers still call skb_mark_napi_id() themselves because they
      use netif_receive_skb(). They should eventually call
      napi_gro_receive() instead; I will leave this to the driver
      maintainers.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
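
      For illustration (mydrv_rx_one is a hypothetical helper, not from
      any in-tree driver), an RX completion path of this shape gets the
      NAPI id marking for free by handing skbs to napi_gro_receive():

      #include <linux/etherdevice.h>
      #include <linux/netdevice.h>

      /* Because napi_gro_receive() now calls skb_mark_napi_id() itself,
       * the driver does not touch the busy-poll plumbing at all; a driver
       * still using netif_receive_skb() would have to keep its own
       * skb_mark_napi_id() call. */
      static void mydrv_rx_one(struct napi_struct *napi,
                               struct net_device *dev, struct sk_buff *skb)
      {
              skb->protocol = eth_type_trans(skb, dev);
              napi_gro_receive(napi, skb);
      }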
  5. 29 Sep 2015, 1 commit
  6. 10 Sep 2015, 1 commit
  7. 13 Aug 2015, 1 commit
  8. 21 Jul 2015, 1 commit
  9. 12 Jun 2015, 1 commit
  10. 02 Jun 2015, 1 commit
  11. 28 May 2015, 2 commits
  12. 13 May 2015, 1 commit
  13. 06 May 2015, 6 commits
  14. 15 Apr 2015, 5 commits
  15. 10 Apr 2015, 1 commit
  16. 30 Mar 2015, 1 commit
    • cxgb4: Allocate dynamic mem. for egress and ingress queue maps · 4b8e27a8
      Hariprasad Shenai authored
      QIDs (egress/ingress) returned by firmware in the FW_*_CMD.alloc
      command can be anywhere in the range from EQ(IQFLINT)_START to
      EQ(IQFLINT)_END. For example, on the first load the eqids can run
      from 100 to 300; on the next load they can run from 301 to 500
      (assuming eq_start is 100 and eq_end is 1000).
      
      The driver assumed they would always start at EQ(IQFLINT)_START and
      run up to MAX_EGRQ(INGQ). This was causing a stack overflow and a
      subsequent crash.
      
      Fix this by dynamically allocating memory, of size
      (x_END - x_START + 1), for these structures.
      
      Based on original work by Santosh Rastapur <santosh@chelsio.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
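
      The sizing idea, as a rough sketch (the my_* structure and helper
      names are illustrative, not the actual cxgb4 code): allocate the map
      for exactly the firmware-provided ID window and index it relative to
      the window start, so any window works:

      #include <linux/errno.h>
      #include <linux/slab.h>

      struct my_eq_map {
              unsigned int egr_start;   /* EQ_START reported by firmware */
              unsigned int egr_sz;      /* egr_end - egr_start + 1 */
              void **egrmap;            /* entry = egrmap[qid - egr_start] */
      };

      static int my_alloc_eq_map(struct my_eq_map *m,
                                 unsigned int egr_start, unsigned int egr_end)
      {
              m->egr_start = egr_start;
              m->egr_sz = egr_end - egr_start + 1;
              m->egrmap = kcalloc(m->egr_sz, sizeof(*m->egrmap), GFP_KERNEL);
              return m->egrmap ? 0 : -ENOMEM;
      }

      static void *my_eq_lookup(struct my_eq_map *m, unsigned int qid)
      {
              /* qid can fall anywhere in [egr_start, egr_end] */
              return m->egrmap[qid - m->egr_start];
      }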
  17. 25 Mar 2015, 1 commit
  18. 05 Feb 2015, 1 commit
    • cxgb4: Add low latency socket busy_poll support · 3a336cb1
      Hariprasad Shenai authored
      cxgb_busy_poll, which implements ndo_busy_poll, is called by a
      socket waiting for data.
      
      With busy_poll enabled, an improvement is seen in latency, as
      observed by collecting netperf TCP_RR numbers.
      Below are the latencies, with and without busy-poll, in a switched
      environment for a particular message size:
      netperf command: netperf -4 -H <ip> -l 30 -t TCP_RR -- -r1,1
      Latency without busy-poll: ~16.25 us
      Latency with busy-poll   : ~08.79 us
      
      Based on original work by Kumar Sanghvi <kumaras@chelsio.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
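
      A rough sketch of the ndo_busy_poll pattern of this kernel
      generation (the my_* lock and RX helpers are placeholders, not the
      cxgb4 functions): the polling socket calls the hook, which tries to
      take the queue away from the softirq poller and processes a small
      RX budget synchronously:

      #include <linux/netdevice.h>
      #include <net/busy_poll.h>

      struct my_rxq {
              struct napi_struct napi;
              /* driver lock state for busy polling would live here */
      };

      /* my_poll_lock_busy(), my_poll_unlock_busy() and my_process_rx()
       * are hypothetical driver helpers. */
      static int my_busy_poll(struct napi_struct *napi)
      {
              struct my_rxq *q = container_of(napi, struct my_rxq, napi);
              int work_done;

              /* Give up if the softirq NAPI poller owns the queue. */
              if (!my_poll_lock_busy(q))
                      return LL_FLUSH_BUSY;

              work_done = my_process_rx(q, 4);  /* small busy-poll budget */

              my_poll_unlock_busy(q);
              return work_done;
      }

      /* Hooked up via struct net_device_ops, only under
       * CONFIG_NET_RX_BUSY_POLL:
       *         .ndo_busy_poll = my_busy_poll,
       * Busy polling is then enabled per socket with SO_BUSY_POLL or
       * globally via the net.core.busy_read and net.core.busy_poll
       * sysctls. */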
  19. 14 Jan 2015, 2 commits