1. 19 July 2012, 1 commit
    • net/mlx4_en: Add accelerated RFS support · 1eb8c695
      Authored by Amir Vadai
      Use the RFS infrastructure and flow steering in HW to keep the CPU
      affinity of rx interrupts and the application aligned, per TCP stream.
      
      A flow steering filter is added to the HW whenever the RFS ndo
      callback is invoked by the core networking code.
      
      Because the invocation takes place in interrupt context, the actual
      HW setup is done from a workqueue. Whenever a new filter is added,
      the driver also checks existing filters for expiry.
      
      Since there is a window between the point where the core RFS code
      invokes the ndo callback and the point where the HW is configured
      from workqueue context, the 2nd, 3rd, etc. packets of that stream
      cause the net core to invoke the callback again and again.
      
      To prevent inefficient double configuration of the HW, the filters
      are kept in a database indexed by a hash function for fast access
      (a minimal sketch of this pattern follows this entry).
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
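      The pattern described in this commit message (an ndo_rx_flow_steer callback that
      runs in atomic context, defers the actual HW programming to a workqueue, and
      deduplicates requests through a hash-indexed filter database) can be sketched
      roughly as below. This is a minimal sketch, not the mlx4_en implementation: every
      my_* name is a hypothetical placeholder, and only the generic kernel primitives
      (the ndo_rx_flow_steer signature, hashtable.h, workqueues) are real.

      /* Minimal sketch of the deferred-configuration pattern; all my_* names
       * are hypothetical. priv->filters must have been hash_init()'d and
       * priv->filters_lock spin_lock_init()'d at probe time.
       */
      #include <linux/hashtable.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>
      #include <linux/workqueue.h>

      struct my_priv {
              DECLARE_HASHTABLE(filters, 8);  /* filter database, keyed by flow_id */
              spinlock_t filters_lock;
      };

      struct my_filter {
              struct hlist_node node;         /* entry in the filter database */
              struct work_struct work;        /* defers HW setup out of IRQ context */
              u32 flow_id;
              u16 rxq_index;
              struct my_priv *priv;
      };

      /* Hypothetical helper: in a real driver this would issue the (possibly
       * sleeping) firmware command that attaches the steering rule in HW. */
      static void my_attach_filter_to_hw(struct my_priv *priv, u32 flow_id, u16 rxq_index)
      {
              /* firmware command goes here */
      }

      static void my_filter_work(struct work_struct *work)
      {
              struct my_filter *f = container_of(work, struct my_filter, work);

              /* Process context: safe to talk to firmware. A real driver would
               * also expire stale filters around the time new ones are added. */
              my_attach_filter_to_hw(f->priv, f->flow_id, f->rxq_index);
      }

      /* ndo_rx_flow_steer-style callback: invoked by the core RFS code in
       * atomic context, so only bookkeeping happens here. */
      static int my_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
                                  u16 rxq_index, u32 flow_id)
      {
              struct my_priv *priv = netdev_priv(dev);
              struct my_filter *f;

              spin_lock(&priv->filters_lock);
              /* The 2nd, 3rd, ... packets of a stream reach this path before the
               * workqueue has run; if the flow is already in the database, do not
               * queue another HW configuration for it. */
              hash_for_each_possible(priv->filters, f, node, flow_id) {
                      if (f->flow_id == flow_id && f->rxq_index == rxq_index) {
                              spin_unlock(&priv->filters_lock);
                              return flow_id;
                      }
              }

              f = kzalloc(sizeof(*f), GFP_ATOMIC);
              if (!f) {
                      spin_unlock(&priv->filters_lock);
                      return -ENOMEM;
              }
              f->flow_id = flow_id;
              f->rxq_index = rxq_index;
              f->priv = priv;
              INIT_WORK(&f->work, my_filter_work);
              hash_add(priv->filters, &f->node, flow_id);
              spin_unlock(&priv->filters_lock);

              queue_work(system_wq, &f->work);
              return flow_id;
      }

      The split matters because installing a steering rule typically involves a firmware
      command that may sleep, which is not allowed in the atomic context the RFS callback
      runs in; the hash lookup is what keeps the repeated callback invocations cheap.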
  2. 17 July 2012, 1 commit
  3. 08 July 2012, 5 commits
  4. 26 June 2012, 2 commits
  5. 18 May 2012, 1 commit
    • net/mlx4_en: num cores tx rings for every UP · bc6a4744
      Authored by Amir Vadai
      Change the TX ring scheme such that the number of rings for untagged
      packets and for tagged packets (per each of the vlan priorities) is
      the same, unlike the current situation where tagged traffic gets one
      ring per priority while untagged traffic gets as many rings as there
      are cores.
      
      Queue selection is done as follows:
      
      If the mqprio qdisc operates on the interface, such that the core
      networking code has invoked the device's setup_tc ndo callback, a
      mapping of skb->priority => queue set is forced for both tagged and
      untagged traffic.
      
      Otherwise, the egress map skb->priority => user priority is used for
      tagged traffic, and all untagged traffic is sent through the tx rings
      of UP 0 (a minimal sketch of the resulting queue selection follows
      this entry).
      
      The patch follows the conclusion reached while discussing this issue
      with John Fastabend in this thread:
      http://comments.gmane.org/gmane.linux.network/229877
      
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Liran Liss <liranl@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
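      The queue selection just described (the mqprio/setup_tc mapping when a TC
      configuration is active, the VLAN user priority for tagged traffic, UP 0 for
      everything else, and a per-core ring picked inside the chosen UP's set) can be
      sketched as follows. This is a minimal sketch under stated assumptions:
      MY_RINGS_PER_UP and my_select_queue are hypothetical placeholders, and the VLAN
      helpers appear under their present-day names (skb_vlan_tag_present /
      skb_vlan_tag_get) rather than the ones in the 2012 tree.

      /* Minimal sketch of UP-based TX queue selection; my_* names and
       * MY_RINGS_PER_UP are hypothetical. */
      #include <linux/if_vlan.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      #define MY_RINGS_PER_UP 8       /* e.g. one ring per core, for every UP */

      static u16 my_select_queue(struct net_device *dev, struct sk_buff *skb)
      {
              u16 up = 0;

              if (netdev_get_num_tc(dev)) {
                      /* mqprio is active: honor the skb->priority => TC mapping set
                       * up through the setup_tc ndo, for tagged and untagged alike. */
                      up = netdev_get_prio_tc_map(dev, skb->priority);
              } else if (skb_vlan_tag_present(skb)) {
                      /* Tagged traffic: take the user priority from the VLAN tag. */
                      up = (skb_vlan_tag_get(skb) >> VLAN_PRIO_SHIFT) & 0x7;
              }
              /* Untagged traffic without mqprio falls through with up == 0. */

              /* Spread flows of the same UP across its per-core rings. */
              return up * MY_RINGS_PER_UP + skb_get_hash(skb) % MY_RINGS_PER_UP;
      }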
  6. 24 April 2012, 3 commits
  7. 05 April 2012, 3 commits
  8. 07 March 2012, 1 commit
    • net/mlx4_en: Saving mem access on data path · ebf8c9aa
      Authored by Yevgeny Petrilin
      Localized pdev->dev and switched to dma_map instead of pci_map.
      There are multiple map/unmap operations on the data path; those are
      optimized by saving redundant pointer accesses. These places were
      identified as hot spots when running kernel profiling during some
      benchmarks. The fixes had the most impact when testing packet rate
      with small packets, shaving several percent off CPU load and in some
      cases making the difference between reaching wire speed and being
      CPU bound (a minimal sketch follows this entry).
      Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
      Signed-off-by: David S. Miller <davem@davemloft.net>
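      The two micro-optimizations described above (caching &pdev->dev once instead of
      following priv->pdev->dev on every packet, and calling the generic DMA API directly
      instead of the pci_map_* wrappers) could look roughly like this. A minimal sketch,
      assuming hypothetical my_* structures rather than mlx4_en's real ones.

      /* Minimal sketch; my_* names are hypothetical placeholders. */
      #include <linux/dma-mapping.h>
      #include <linux/pci.h>

      struct my_priv {
              struct pci_dev *pdev;
              struct device *ddev;    /* cached &pdev->dev, set once at init */
      };

      static void my_init(struct my_priv *priv, struct pci_dev *pdev)
      {
              priv->pdev = pdev;
              priv->ddev = &pdev->dev;        /* save the extra pointer hop later */
      }

      static dma_addr_t my_map_rx_frag(struct my_priv *priv, void *buf, size_t len)
      {
              /* Hot path: no priv->pdev->dev dereference, and dma_map_single()
               * instead of the legacy pci_map_single() wrapper. */
              return dma_map_single(priv->ddev, buf, len, DMA_FROM_DEVICE);
      }

      static void my_unmap_rx_frag(struct my_priv *priv, dma_addr_t dma, size_t len)
      {
              dma_unmap_single(priv->ddev, dma, len, DMA_FROM_DEVICE);
      }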
  9. 07 February 2012, 1 commit
  10. 01 February 2012, 1 commit
  11. 23 January 2012, 2 commits
  12. 20 December 2011, 1 commit
  13. 14 December 2011, 1 commit
  14. 09 December 2011, 1 commit
  15. 28 November 2011, 1 commit
  16. 19 October 2011, 1 commit
  17. 10 October 2011, 4 commits
  18. 18 August 2011, 1 commit
  19. 11 August 2011, 1 commit
  20. 22 July 2011, 1 commit
  21. 19 July 2011, 1 commit
  22. 16 April 2011, 1 commit
  23. 31 March 2011, 1 commit
  24. 28 March 2011, 1 commit
  25. 24 March 2011, 3 commits