1. 19 Oct 2021, 3 commits
  2. 16 Oct 2021, 2 commits
  3. 13 Oct 2021, 1 commit
  4. 20 Aug 2021, 1 commit
  5. 27 Jul 2021, 1 commit
    • net/mlx5e: Block LRO if firmware asks for tunneled LRO · 26ab7b38
      Maxim Mikityanskiy committed
      This commit cleans up the LRO configuration.
      
      LRO is a parameter of an RQ, but its state is changed by modifying a TIR
      related to the RQ.
      
      Current status: the driver does not support LRO for tunneled packets;
      inner TIRs may enable LRO on creation, but the LRO state of inner TIRs
      isn't changed in mlx5e_modify_tirs_lro(). This is inconsistent, but as
      long as the firmware doesn't declare support for tunneled LRO, it works,
      because the same RQs are shared between the inner and outer TIRs.
      
      This commit makes two fixes:
      
      1. If the firmware has the tunneled LRO capability, LRO is blocked
      altogether: since the same RQs are shared between inner and outer TIRs,
      it is not possible to block LRO for inner TIRs only, and the driver
      would not be able to handle tunneled LRO traffic (the user-visible
      effect is illustrated after this list).
      
      2. mlx5e_modify_tirs_lro() is patched to modify LRO state for all TIRs,
      including inner ones, because all TIRs related to an RQ should agree on
      their LRO state.
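
      A hedged illustration of the user-visible effect (the interface name
      eth0 is an assumption and the output is illustrative, not taken from
      this commit): when LRO is blocked, the driver does not advertise the
      feature at all, so it is reported as fixed off and cannot be enabled
      from userspace.

        $ ethtool -k eth0 | grep large-receive-offload
        large-receive-offload: off [fixed]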
      
      Fixes: 7b3722fa ("net/mlx5e: Support RSS for GRE tunneled packets")
      Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
  6. 25 Jul 2021, 1 commit
  7. 18 Jul 2021, 1 commit
  8. 03 Jul 2021, 1 commit
  9. 26 Jun 2021, 1 commit
  10. 22 Jun 2021, 1 commit
  11. 15 Jun 2021, 1 commit
  12. 10 Jun 2021, 1 commit
  13. 02 Jun 2021, 1 commit
  14. 20 Apr 2021, 3 commits
  15. 17 Apr 2021, 1 commit
  16. 14 Apr 2021, 1 commit
  17. 07 Apr 2021, 3 commits
  18. 04 Apr 2021, 1 commit
  19. 13 Mar 2021, 1 commit
  20. 17 Feb 2021, 2 commits
  21. 16 Feb 2021, 1 commit
  22. 23 Jan 2021, 1 commit
    • net/mlx5e: Support HTB offload · 214baf22
      Maxim Mikityanskiy committed
      This commit adds support for HTB offload in the mlx5e driver.
      
      Performance:
      
        NIC: Mellanox ConnectX-6 Dx
        CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores with HT)
      
        100 Gbit/s line rate, 500 UDP streams @ ~200 Mbit/s each
        48 traffic classes, flower used for steering
        No shaping (rate limits set to 4 Gbit/s per TC) - checking for max
        throughput.
      
        Baseline: 98.7 Gbps, 8.25 Mpps
        HTB: 6.7 Gbps, 0.56 Mpps
        HTB offload: 95.6 Gbps, 8.00 Mpps
      
      Limitations:
      
      1. 256 leaf nodes, 3 levels of depth.
      
      2. Granularity for ceil is 1 Mbit/s. Rates are converted to weights, and
      the bandwidth is split among the siblings according to these weights.
      Other parameters for classes are not supported.
      
      Ethtool statistics support for QoS SQs is also added. The counters are
      called qos_txN_*, where N is the QoS queue number (starting from 0; the
      numbering is separate from the normal SQs), and * is the counter name
      (the counters are the same as for the normal SQs).
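
      A minimal usage sketch, assuming an iproute2 build with the HTB
      "offload" flag; the device name eth0, handles, ports, and rates are
      illustrative, and the flower steering line only mirrors the "flower
      used for steering" note above:

        # Replace the root qdisc with an offloaded HTB instance.
        $ tc qdisc replace dev eth0 root handle 1: htb offload
        # Add a rate-limited leaf class (ceil granularity is 1 Mbit/s).
        $ tc class add dev eth0 parent 1: classid 1:10 htb rate 200mbit ceil 4gbit
        # Steer a UDP stream into the class with a flower filter.
        $ tc filter add dev eth0 parent 1: protocol ip flower ip_proto udp dst_port 5001 classid 1:10
        # Read the per-QoS-SQ counters described above.
        $ ethtool -S eth0 | grep qos_tx
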
      Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  23. 19 Jan 2021, 1 commit
  24. 14 Jan 2021, 1 commit
  25. 08 Jan 2021, 1 commit
  26. 18 Dec 2020, 1 commit
  27. 04 Dec 2020, 1 commit
  28. 27 Nov 2020, 5 commits