1. 14 Nov 2014, 2 commits
  2. 12 Nov 2014, 2 commits
  3. 11 Nov 2014, 2 commits
    • mlx4: restore conditional call to napi_complete_done() · 2e1af7d7
      Committed by Eric Dumazet
      After commit 1a288172 ("mlx4: use napi_complete_done()") we ended up
      calling napi_complete_done() even when the NAPI poll had consumed all of
      its budget.
      
      This added extra interrupt pressure; this patch restores the proper
      behavior.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Fixes: 1a288172 ("mlx4: use napi_complete_done()")
      Signed-off-by: David S. Miller <davem@davemloft.net>
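      A minimal sketch of the pattern this fix restores, using a hypothetical
      poll routine and helper names (not the actual mlx4 code): the driver calls
      napi_complete_done() only when the poll used less than its budget; when the
      budget was exhausted it returns without completing, staying in polling mode.

      #include <linux/netdevice.h>

      /* Hypothetical NAPI poll handler illustrating the restored pattern. */
      static int example_napi_poll(struct napi_struct *napi, int budget)
      {
              int work_done = example_rx(napi, budget);   /* hypothetical RX processing */

              if (work_done < budget) {
                      /* Budget not exhausted: complete NAPI so the core can
                       * re-enable interrupts (possibly after gro_flush_timeout).
                       */
                      napi_complete_done(napi, work_done);
                      example_arm_irq(napi);              /* hypothetical IRQ re-arm */
              }
              /* work_done == budget: do not complete; stay on the poll list. */
              return work_done;
      }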
    • mlx4: use napi_complete_done() · 1a288172
      Committed by Eric Dumazet
      To enable gro_flush_timeout, a driver has to use napi_complete_done()
      instead of napi_complete().
      
      Tested:
       Ran 200 netperf TCP_STREAM from A to B (10GbE mlx4 link, 8 RX queues)
      
      Without this feature, we send back about 305,000 ACKs per second.
      
      The GRO aggregation ratio is low (811/305 ≈ 2.65 segments per GRO packet).
      
      Setting a timer of 2000 nsec is enough to increase GRO packet sizes
      and reduce the number of ACK packets (811/19.2 ≈ 42).
      
      The receiver performs fewer calls into the upper stacks and fewer wakeups.
      This also reduces CPU usage on the sender, as it receives fewer ACK
      packets.
      
      Note that reducing the number of wakeups increases CPU efficiency, but can
      decrease QPS, as applications won't have the chance to warm up CPU caches
      by doing a partial read of RPC requests/answers if they fit in one skb.
      
      B:~# sar -n DEV 1 10 | grep eth0 | tail -1
      Average:         eth0 811269.80 305732.30 1199462.57  19705.72      0.00      0.00      0.50
      
      B:~# echo 2000 >/sys/class/net/eth0/gro_flush_timeout
      
      B:~# sar -n DEV 1 10 | grep eth0 | tail -1
      Average:         eth0 811577.30  19230.80 1199916.51   1239.80      0.00      0.00      0.50
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
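      A minimal sketch of the driver-side change described above, using a
      hypothetical poll routine (not the actual mlx4 code): the only difference
      from napi_complete() is that the amount of work done is reported, and, per
      this commit, using napi_complete_done() is what enables gro_flush_timeout
      so larger GRO packets can be built.

      #include <linux/netdevice.h>

      /* Hypothetical poll routine: napi_complete() replaced by napi_complete_done(). */
      static int example_napi_poll(struct napi_struct *napi, int budget)
      {
              int work_done = example_rx(napi, budget);   /* hypothetical RX processing */

              if (work_done < budget) {
                      /* Was: napi_complete(napi);
                       * Using napi_complete_done() lets the stack hold GRO packets
                       * for up to gro_flush_timeout ns before flushing them upstream.
                       */
                      napi_complete_done(napi, work_done);
              }
              return work_done;
      }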
  4. 07 Nov 2014, 2 commits
  5. 04 Nov 2014, 5 commits
  6. 31 Oct 2014, 3 commits
  7. 29 Oct 2014, 13 commits
  8. 27 Oct 2014, 2 commits
    • net/mlx4_core: Call synchronize_irq() before freeing EQ buffer · bf1bac5b
      Committed by Eli Cohen
      After moving EQ ownership back to software (effectively destroying it), call
      synchronize_irq() to ensure that any handler routines running on other CPU
      cores finish execution. Only then free the EQ buffer.
      The same thing is done when we destroy a CQ, which is one of the sources
      generating interrupts. For a CQ we want to avoid completion handlers running
      on a CQ that has already been destroyed; here we do the same to avoid handling
      asynchronous events after the EQ has been destroyed and its buffers freed.
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
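      A minimal sketch of the ordering this patch enforces, using hypothetical
      device/EQ types and helper names (not the actual mlx4_core functions): hand
      the EQ back to software so hardware stops generating events, wait for any
      in-flight interrupt handler with synchronize_irq(), and only then free the
      EQ buffer.

      #include <linux/interrupt.h>

      /* Hypothetical EQ teardown illustrating the required ordering. */
      static void example_destroy_eq(struct example_dev *dev, struct example_eq *eq)
      {
              /* 1. Move EQ ownership back to software; hardware stops
               *    generating events for this EQ.
               */
              example_hw2sw_eq(dev, eq);          /* hypothetical firmware command */

              /* 2. Wait for any EQ interrupt handler still running on another
               *    CPU; it may still be reading the EQ buffer.
               */
              synchronize_irq(eq->irq);           /* eq->irq: hypothetical field */

              /* 3. Only now is it safe to free the EQ buffer. */
              example_free_eq_buf(dev, eq);       /* hypothetical */
      }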
    • net/mlx5_core: Call synchronize_irq() before freeing EQ buffer · 96e4be06
      Committed by Eli Cohen
      After destroying the EQ (the object responsible for generating interrupts),
      call synchronize_irq() to ensure that any handler routines running on other
      CPU cores finish execution. Only then free the EQ buffer. This patch solves
      a very rare panic on driver unload.
      The same thing is done when we destroy a CQ, which is one of the sources
      generating interrupts. For a CQ we want to avoid completion handlers running
      on a CQ that has already been destroyed; here we do the same to avoid handling
      asynchronous events after the EQ has been destroyed and its buffers freed.
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
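      The ordering here mirrors the mlx4_core sketch above: issue the EQ destroy
      command to firmware, call synchronize_irq() on the EQ's interrupt line, and
      only then free the EQ buffer.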
  9. 11 Oct 2014, 1 commit
  10. 09 Oct 2014, 1 commit
  11. 08 Oct 2014, 1 commit
  12. 06 Oct 2014, 6 commits