  1. 21 September 2016 (10 commits)
  2. 20 September 2016 (4 commits)
    • mlx4: add missed recycle opportunity for XDP_TX on TX failure · 5737f6c9
      Committed by Jesper Dangaard Brouer
      Correct drop handling for XDP_TX on TX failure, which was recently added
      in commit 95357907 ("mlx4: fix XDP_TX is acting like XDP_PASS on TX
      ring full").

      That change missed an opportunity to recycle the RX page instead of
      going through the page allocator, as the regular XDP_DROP action does.
      This patch seizes that opportunity by routing TX failures through the
      XDP_DROP case (a minimal sketch follows this entry).
      
      Fixes: 95357907 ("mlx4: fix XDP_TX is acting like XDP_PASS on TX ring full")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
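
      A minimal, self-contained C sketch of the dispatch pattern described
      above. The helper names (xmit_frame, rx_recycle_page, free_rx_page) are
      hypothetical stand-ins, not the mlx4 driver's actual functions; the point
      is only that the TX-failure path falls through into the XDP_DROP
      handling, so the already-mapped RX page is recycled rather than returned
      to the page allocator.

        #include <stdbool.h>
        #include <stdio.h>

        enum xdp_action { XDP_DROP, XDP_PASS, XDP_TX };

        /* Stubs standing in for driver internals (hypothetical). */
        static bool xmit_frame(void *page)      { (void)page; return false; } /* pretend the TX ring is full */
        static bool rx_recycle_page(void *page) { printf("recycled %p\n", page); return true; }
        static void free_rx_page(void *page)    { printf("freed %p\n", page); }

        static void handle_rx_action(enum xdp_action act, void *page)
        {
            switch (act) {
            case XDP_TX:
                if (xmit_frame(page))
                    return;                 /* page is now owned by the TX ring */
                /* TX failed (e.g. ring full): handle it exactly like XDP_DROP,
                 * which is the recycle opportunity this commit adds. */
                /* fall through */
            case XDP_DROP:
                if (rx_recycle_page(page))  /* keep the DMA-mapped page in the RX ring */
                    return;
                free_rx_page(page);         /* no room to recycle: give it back to the allocator */
                return;
            case XDP_PASS:
            default:
                return;                     /* hand the packet to the stack (omitted) */
            }
        }

        int main(void)
        {
            int dummy;
            handle_rx_action(XDP_TX, &dummy); /* TX fails, page is recycled via the drop path */
            return 0;
        }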
    • mlxsw: spectrum: Fix sparse warnings · 1a9234e6
      Committed by Ido Schimmel
      drivers/net/ethernet/mellanox/mlxsw//spectrum.c:251:28: warning: symbol
      'mlxsw_sp_span_entry_find' was not declared. Should it be static?
      drivers/net/ethernet/mellanox/mlxsw//spectrum.c:265:28: warning: symbol
      'mlxsw_sp_span_entry_get' was not declared. Should it be static?
      drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56: warning: mixing
      different enum types
      drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56:     int enum
      mlxsw_sp_span_type  versus
      drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56:     int enum
      mlxsw_reg_mpar_i_e
      ...
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32: warning:
      mixing different enum types
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32:     int
      enum mlxsw_reg_sbxx_dir  versus
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32:     int
      enum devlink_sb_pool_type
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39: warning:
      mixing different enum types
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39:     int
      enum mlxsw_reg_sbpr_mode  versus
      drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39:     int
      enum devlink_sb_threshold_type
      ...
      drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54: warning:
      mixing different enum types
      drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54:     int
      enum mlxsw_sp_l3proto  versus
      drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54:     int
      enum mlxsw_reg_ralxx_protocol
      ...
      drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:1749:6: warning:
      symbol 'mlxsw_sp_fib_entry_put' was not declared. Should it be static?
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
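
      The two kinds of warnings listed above have standard fixes: mark
      file-local symbols static, and convert between enums explicitly instead
      of mixing them. A small illustration with made-up names (not the mlxsw
      types or functions):

        #include <stdio.h>

        enum sw_span_type { SW_SPAN_INGRESS, SW_SPAN_EGRESS }; /* driver-level enum */
        enum hw_span_type { HW_SPAN_INGRESS, HW_SPAN_EGRESS }; /* register-level enum */

        /* "symbol ... was not declared. Should it be static?": a function used
         * only in this translation unit gets the static keyword. */
        static int span_entry_find(int local_port)
        {
            return local_port % 2;
        }

        /* "mixing different enum types": convert explicitly instead of passing
         * one enum where a different one is expected. */
        static enum hw_span_type span_type_to_hw(enum sw_span_type type)
        {
            return type == SW_SPAN_EGRESS ? HW_SPAN_EGRESS : HW_SPAN_INGRESS;
        }

        int main(void)
        {
            printf("entry=%d hw=%d\n", span_entry_find(8),
                   (int)span_type_to_hw(SW_SPAN_EGRESS));
            return 0;
        }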
    • mlxsw: Change the RX LAG hash function from XOR to CRC · 18c2d2c1
      Committed by Elad Raz
      Change the RX LAG hash function from XOR to CRC in order to get a better
      distribution of the traffic across the LAG members (a generic
      illustration of the difference follows this entry).
      Signed-off-by: Elad Raz <eladr@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
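
      A generic, self-contained illustration of the motivation (this is not the
      Spectrum hardware hash, and the CRC polynomial here is arbitrary): an XOR
      fold is symmetric and cancels correlated bits, so, for example, the two
      directions of the same conversation, with source and destination swapped,
      always collide, while a CRC is order-sensitive and mixes the bits better.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static uint8_t xor_fold(const uint8_t *key, size_t len)
        {
            uint8_t h = 0;
            while (len--)
                h ^= *key++;
            return h;
        }

        static uint8_t crc8(const uint8_t *key, size_t len) /* CRC-8, poly 0x07 */
        {
            uint8_t crc = 0;
            while (len--) {
                crc ^= *key++;
                for (int i = 0; i < 8; i++)
                    crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                       : (uint8_t)(crc << 1);
            }
            return crc;
        }

        int main(void)
        {
            /* dst MAC followed by src MAC, then the same pair swapped */
            uint8_t ab[12] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55,
                               0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb };
            uint8_t ba[12];

            memcpy(ba, ab + 6, 6);
            memcpy(ba + 6, ab, 6);

            printf("xor: %02x vs %02x\n", xor_fold(ab, 12), xor_fold(ba, 12)); /* always equal */
            printf("crc: %02x vs %02x\n", crc8(ab, 12), crc8(ba, 12));         /* order-sensitive */
            return 0;
        }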
    • net/mlx5: clean function declarations in eswitch.c up · 766a0e97
      Committed by Baoyou Xie
      We get two warnings when building the kernel with W=1:
      drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:463:5: warning: no previous prototype for 'esw_offloads_init' [-Wmissing-prototypes]
      drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:521:6: warning: no previous prototype for 'esw_offloads_cleanup' [-Wmissing-prototypes]
      
      In fact, both functions are declared in
      drivers/net/ethernet/mellanox/mlx5/core/eswitch.c, but they should be
      declared in a header file so that they can be referenced from other files.

      So this patch moves the declarations into
      drivers/net/ethernet/mellanox/mlx5/core/eswitch.h (an illustrative
      layout follows this entry).
      Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
      Acked-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
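
      An illustrative layout of the fix, with simplified signatures (the real
      prototypes and struct contents live in the mlx5 sources): declaring the
      functions once in the shared header gives the definitions a previous
      prototype, which silences -Wmissing-prototypes for every file that
      includes it.

        /* eswitch.h (shared header) */
        #ifndef ESWITCH_H
        #define ESWITCH_H

        struct mlx5_eswitch;                       /* opaque to callers here */

        int  esw_offloads_init(struct mlx5_eswitch *esw);
        void esw_offloads_cleanup(struct mlx5_eswitch *esw);

        #endif /* ESWITCH_H */

        /* eswitch_offloads.c (definitions) */
        #include "eswitch.h"                       /* prototype now precedes the definition */

        int esw_offloads_init(struct mlx5_eswitch *esw)
        {
            (void)esw;
            return 0;
        }

        void esw_offloads_cleanup(struct mlx5_eswitch *esw)
        {
            (void)esw;
        }

        /* eswitch.c (the caller) simply #includes "eswitch.h" instead of
         * carrying its own local declarations of the two functions. */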
  3. 19 September 2016 (2 commits)
  4. 17 September 2016 (3 commits)
    • net/mlx5e: Implement RX mapped page cache for page recycle · 4415a031
      Committed by Tariq Toukan
      Instead of reallocating and mapping pages for the RX data path, recycle
      already-used pages in a per-ring cache (a conceptual sketch follows this
      entry).
      
      Performance tests:
      The following results were measured on a freshly booted system,
      giving optimal baseline performance, as high-order pages are yet to
      be fragmented and depleted.
      
      We ran pktgen single-stream benchmarks, with iptables-raw-drop; numbers
      are packets per second:
      
      Single stride, 64 bytes:
      * 4,739,057 - baseline
      * 4,749,550 - order0 no cache
      * 4,786,899 - order0 with cache
      1% gain
      
      Larger packets, no page cross, 1024 bytes:
      * 3,982,361 - baseline
      * 3,845,682 - order0 no cache
      * 4,127,852 - order0 with cache
      3.7% gain
      
      Larger packets, every 3rd packet crosses a page, 1500 bytes:
      * 3,731,189 - baseline
      * 3,579,414 - order0 no cache
      * 3,931,708 - order0 with cache
      5.4% gain
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
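
      A conceptual sketch of such a per-ring cache (structure and names are
      illustrative, not the mlx5e code): put() stashes a still-DMA-mapped page
      instead of unmapping and freeing it, and get() hands it back instead of
      allocating and mapping a new one, so the fast path skips both the page
      allocator and the DMA-mapping work.

        #include <stdbool.h>

        #define RING_CACHE_SIZE 128

        struct mapped_page {
            void               *page;      /* would be a struct page * in the kernel */
            unsigned long long  dma_addr;  /* DMA mapping stays valid while cached */
        };

        struct rx_page_cache {
            struct mapped_page slots[RING_CACHE_SIZE];
            unsigned int       count;
        };

        /* Called when the RX path is done with a page. */
        static bool cache_put(struct rx_page_cache *c, struct mapped_page p)
        {
            if (c->count == RING_CACHE_SIZE)
                return false;              /* cache full: caller unmaps and frees */
            c->slots[c->count++] = p;
            return true;
        }

        /* Called when refilling the RX ring. */
        static bool cache_get(struct rx_page_cache *c, struct mapped_page *out)
        {
            if (c->count == 0)
                return false;              /* cache empty: caller allocates and maps */
            *out = c->slots[--c->count];
            return true;
        }

      In practice a page can only be recycled once the driver is its sole owner
      again (the stack has released its references); otherwise it is unmapped
      and freed normally, so the cache is an opportunistic fast path rather
      than a guarantee.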
    • net/mlx5e: Introduce API for RX mapped pages · a5a0c590
      Committed by Tariq Toukan
      Manage the allocation and deallocation of DMA-mapped RX pages only
      through dedicated API functions (a minimal sketch follows this entry).
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
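
      A minimal sketch of what such a dedicated API could look like, using
      standard kernel DMA primitives but hypothetical wrapper names and a
      simplified policy: every mapped RX page is obtained and released through
      exactly these two helpers, so the mapping strategy (and later the page
      cache) lives in one place.

        #include <linux/dma-mapping.h>
        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/mm.h>

        struct rx_mapped_page {
            struct page *page;
            dma_addr_t   addr;
        };

        static int rx_mapped_page_alloc(struct device *dev, struct rx_mapped_page *mp)
        {
            mp->page = alloc_page(GFP_ATOMIC);
            if (!mp->page)
                return -ENOMEM;

            mp->addr = dma_map_page(dev, mp->page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, mp->addr)) {
                put_page(mp->page);
                return -ENOMEM;
            }
            return 0;
        }

        static void rx_mapped_page_release(struct device *dev, struct rx_mapped_page *mp)
        {
            dma_unmap_page(dev, mp->addr, PAGE_SIZE, DMA_FROM_DEVICE);
            put_page(mp->page);
        }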
    • net/mlx5e: Single flow order-0 pages for Striding RQ · 7e426671
      Committed by Tariq Toukan
      To improve the memory consumption scheme, we drop the flow that allocates
      and splits high-order pages for Striding RQ, and stay with a single
      Striding RQ flow that uses order-0 pages.
      
      Moving to fragmented memory allows the use of larger MPWQEs,
      which reduces the number of UMR posts and filler CQEs.
      
      Moving to a single flow allows several optimizations that improve
      performance, especially on production servers, where we would fall back
      to order-0 allocations anyway:
      - inline functions that were previously called via function pointers;
      - improve the UMR post process.
      
      This patch alone is expected to give a slight performance reduction.
      However, the new memory scheme makes it possible to use a page cache of
      a fair size that does not inflate the memory footprint, which recovers
      the reduction and can even give a performance gain (a conceptual sketch
      of the order-0 scheme follows this entry).
      
      Performance tests:
      The following results were measured on a freshly booted system,
      giving optimal baseline performance, as high-order pages are yet to
      be fragmented and depleted.
      
      We ran pktgen single-stream benchmarks, with iptables-raw-drop; numbers
      are packets per second:
      
      Single stride, 64 bytes:
      * 4,739,057 - baseline
      * 4,749,550 - this patch
      no reduction
      
      Larger packets, no page cross, 1024 bytes:
      * 3,982,361 - baseline
      * 3,845,682 - this patch
      3.5% reduction
      
      Larger packets, every 3rd packet crosses a page, 1500 bytes:
      * 3,731,189 - baseline
      * 3,579,414 - this patch
      4% reduction
      
      Fixes: 461017cb ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
      Fixes: bc77b240 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
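
      A conceptual, userspace-only sketch of the memory scheme (all names
      invented, DMA mapping faked): a multi-packet WQE that previously required
      one physically contiguous high-order allocation is instead backed by an
      array of order-0 pages, and their addresses are written into a
      translation table that a single UMR post publishes to the device, so the
      hardware still sees one contiguous stride buffer.

        #include <stdint.h>
        #include <stdlib.h>

        #define PAGE_SZ        4096
        #define PAGES_PER_WQE  16          /* e.g. a 64KB MPWQE built from 4KB pages */

        struct mpwqe_info {
            void               *pages[PAGES_PER_WQE];
            unsigned long long  mtt[PAGES_PER_WQE];  /* entries posted via UMR */
        };

        /* Stand-in for DMA mapping; a real driver would map each page. */
        static unsigned long long fake_dma_map(void *page)
        {
            return (unsigned long long)(uintptr_t)page;
        }

        static int mpwqe_alloc_order0(struct mpwqe_info *wi)
        {
            int i;

            for (i = 0; i < PAGES_PER_WQE; i++) {
                wi->pages[i] = aligned_alloc(PAGE_SZ, PAGE_SZ); /* order-0 page stand-in */
                if (!wi->pages[i])
                    goto err_free;
                wi->mtt[i] = fake_dma_map(wi->pages[i]);        /* one translation entry per page */
            }
            /* A single UMR post would now publish wi->mtt[] to the HW,
             * replacing the old "allocate and split one high-order page" flow. */
            return 0;

        err_free:
            while (--i >= 0)
                free(wi->pages[i]);
            return -1;
        }

      Order-0 allocations cannot fail due to memory fragmentation the way
      high-order ones can, which is what keeps this scheme stable on
      long-running servers, and the fixed per-WQE page array is what the page
      cache introduced in 4415a031 above recycles.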
  5. 14 September 2016 (5 commits)
  6. 12 September 2016 (4 commits)
  7. 11 September 2016 (11 commits)
  8. 10 September 2016 (1 commit)