1. 27 Mar 2017 (7 commits)
    • net/mlx5e: Introduce switch channels · 55c2503d
      Committed by Saeed Mahameed
      Add fail-safe helper functions that allow switching to new channels on
      the fly. In simple words:
      
      make_new_config(new_params)
      {
          new_channels = open_channels(new_params);
          if (!new_channels)
               return "Failed, but current channels are still active :)"
      
          switch_channels(new_channels);
      
          return "SUCCESS";
      }
      
      Demonstrate mlx5e_switch_priv_channels usage in the set-channels
      ethtool callback and make it fail-safe using the new switch-channels
      mechanism (a schematic sketch follows this entry).
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
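      Below is a minimal userspace C sketch of this fail-safe switch pattern;
      the channels type and the helpers are simplified stand-ins, not the
      driver's real definitions:

      #include <stdio.h>
      #include <stdlib.h>

      /* Simplified stand-in for the driver's channel set. */
      struct channels {
          int num;    /* number of channels */
      };

      /* "Open" is the heavy operation that may fail. */
      static struct channels *open_channels(int num)
      {
          struct channels *c = malloc(sizeof(*c));

          if (!c)
              return NULL;
          c->num = num;
          return c;
      }

      /* Switching is fast and cannot fail: the old set is released only
       * after the new one has been fully created. */
      static void switch_channels(struct channels **active,
                                  struct channels *new_chs)
      {
          free(*active);      /* close the old channels */
          *active = new_chs;  /* start using the new set */
      }

      static int make_new_config(struct channels **active, int new_num)
      {
          struct channels *new_chs = open_channels(new_num);

          if (!new_chs)
              return -1;      /* failed; current channels stay active */
          switch_channels(active, new_chs);
          return 0;           /* success */
      }

      int main(void)
      {
          struct channels *active = open_channels(4);

          if (!active)
              return 1;
          if (make_new_config(&active, 8))
              printf("failed, still running %d channels\n", active->num);
          else
              printf("now running %d channels\n", active->num);
          free(active);
          return 0;
      }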
    • net/mlx5e: CQ and RQ don't need priv pointer · a43b25da
      Committed by Saeed Mahameed
      Remove the mlx5e_priv pointer from the CQ and RQ structs; it was
      needed only to reach the mdev pointer through priv.

      Instead, pass mdev directly where needed (see the sketch after this
      entry).
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
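      A schematic C sketch of the refactor, using hypothetical simplified
      types rather than the real driver structs:

      struct mdev { int dummy; };  /* stand-in for the device handle */

      /* Before: the RQ kept a priv pointer solely to reach priv->mdev. */
      struct rq_old {
          struct priv *priv;  /* only ever used for priv->mdev */
      };

      /* After: the struct holds, and setup functions receive, mdev
       * directly, so the priv pointer can be dropped. */
      struct rq_new {
          struct mdev *mdev;  /* no detour through priv */
      };

      static void rq_setup(struct rq_new *rq, struct mdev *mdev)
      {
          rq->mdev = mdev;    /* mdev is passed in where needed */
      }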
    • net/mlx5e: Isolate open_channels from priv->params · 6a9764ef
      Committed by Saeed Mahameed
      In order to have a clean separation between the channel resource
      creation flows and the currently active mlx5e netdev parameters, make
      sure each resource creation function does not access priv->params and
      works only on a fresh set of parameters.

      For this, add a new mlx5e_params field to the mlx5e_channels structure
      and pass it down to mlx5e_open_{cq,rq,sq} and so on (a sketch follows
      this entry).
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
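      A hedged sketch of that separation, with simplified stand-ins for
      mlx5e_params and mlx5e_channels:

      /* Simplified stand-ins; the real driver structs hold much more. */
      struct params {
          int num_channels;
          int rq_size;
      };

      struct channels {
          struct params params;  /* private snapshot: creation flows read
                                  * this, never priv->params */
          /* ... the RQs/SQs/CQs would live here ... */
      };

      /* Creation works only on the fresh parameter set it is given. */
      static int open_channels(struct channels *chs,
                               const struct params *new_params)
      {
          chs->params = *new_params;  /* snapshot the requested config */
          /* open the CQs/RQs/SQs using chs->params only */
          return 0;
      }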
    • net/mlx5e: Split open/close channels to stages · acc6c595
      Committed by Saeed Mahameed
      As a foundation for the safe config flow, introduce a simple, clear
      API (open, then activate): "open" handles the heavy, failure-prone
      creation operations, while "activate" is fast and fail-safe and
      enables the newly created channels.

      For this, split the RQ/TXQ-SQ and channel open/close flows into
      open => activate and deactivate => close (a sketch of the four-stage
      API follows this entry).

      This will simplify fail-safe configuration changes in downstream
      patches, as follows:
      
      make_new_config(new_params)
      {
           old_channels = current_active_channels;
           new_channels = create_channels(new_params);
           if (!new_channels)
                    return "Failed, but current channels still active :)"
           deactivate_channels(old_channels); /* Can't fail */
           activate_channels(new_channels); /* Can't fail */
           close_channels(old_channels);
           current_active_channels = new_channels;
      
           return "SUCCESS";
      }
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
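      A sketch of the four stages under the same simplified assumptions;
      only the open stage can fail:

      struct channels {
          int num;
          int active;
      };

      /* Heavy, may fail: allocate and create the HW resources. */
      static int open_channels(struct channels *chs, int num)
      {
          chs->num = num;
          chs->active = 0;
          return 0;           /* or a negative errno on failure */
      }

      /* Fast, cannot fail: expose the channels to the stack. */
      static void activate_channels(struct channels *chs)
      {
          chs->active = 1;
      }

      /* Fast, cannot fail: stop traffic; resources stay allocated. */
      static void deactivate_channels(struct channels *chs)
      {
          chs->active = 0;
      }

      /* Heavy teardown, called only on quiesced channels. */
      static void close_channels(struct channels *chs)
      {
          chs->num = 0;
      }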
    • net/mlx5e: Refactor refresh TIRs · b676f653
      Committed by Saeed Mahameed
      Rename mlx5e_refresh_tirs_self_loopback to mlx5e_refresh_tirs, as it
      will be used in downstream (safe config flow) patches, and make it
      fail-safe on mlx5e_open.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
    • net/mlx5e: Redirect RQT refactoring · a5f97fee
      Committed by Saeed Mahameed
      RQ tables (indirection tables) are created once, on netdev creation,
      and at that stage they all point to the drop RQ, so there is no need
      to use mlx5e_fill_{direct,indir}_rqt_rqns to fill them in the
      create-RQT procedure.

      Instead of having separate flows to redirect the direct and indirect
      RQ tables to the currently active channels' receive queues (RQs),
      unify the two flows by introducing the mlx5e_redirect_rqt function and
      the redirect_rqt_param struct. Combined, they provide one generic
      logic to fill in the RQ table's RQ numbers regardless of the table's
      purpose (direct/indirect); a sketch of the idea follows this entry.

      Demonstrate the usage with mlx5e_redirect_rqts_to_channels, which will
      be called on mlx5e_open, and mlx5e_redirect_rqts_to_drop, which will
      be called on mlx5e_close.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
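      A sketch of how one parameter struct can cover both cases; the field
      names are illustrative, not the driver's exact layout:

      /* One fill routine serves both direct (single RQN) and
       * indirect (RSS table) redirections. */
      struct redirect_rqt_param {
          int is_rss;                  /* indirect table or direct RQ */
          union {
              unsigned int rqn;        /* direct: the one RQ number */
              struct {
                  unsigned int *rqns;  /* indirect: the channels' RQs */
                  int num;
              } rss;
          };
      };

      static unsigned int rqt_get_rqn(const struct redirect_rqt_param *p,
                                      int slot)
      {
          if (p->is_rss)
              return p->rss.rqns[slot % p->rss.num];
          return p->rqn;  /* every slot points at the same RQ */
      }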
    • net/mlx5e: Introduce mlx5e_channels · ff9c852f
      Committed by Saeed Mahameed
      Add a dedicated "channels" object that serves as the holder for the
      channel resources (RQs/SQs/etc.) and helps separate channel operations
      from parameter operations. In the downstream fail-safe configuration
      flow, we will create a new mlx5e_channels instance with the newly
      requested parameters and switch to the new channels on the fly.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  2. 25 Mar 2017 (8 commits)
  3. 23 Mar 2017 (1 commit)
    • net/mlx5e: Avoid supporting udp tunnel port ndo for VF reps · 1ad9a00a
      Committed by Paul Blakey
      The udp tunnel port ndo was added to allow the TC offloading code to
      identify vxlan encap/decap rules for offloading.

      The VF reps effectively belong to the same mlx5 PCI device as the PF.
      Since the kernel invokes the (say) delete ndo for each netdev, the FW
      reported errors on the duplicate vxlan dst port deletes when the port
      was removed from the system.

      Fix that by letting only the PF carry out the registration. Since the
      PF serves as the uplink device, the VF reps look the port up there to
      decide whether they may offload it (a schematic sketch follows this
      entry).
      
      Tested:
       <SETUP VFS>
       <SETUP switchdev mode to have representors>
       ip link add vxlan1 type vxlan id 44 dev ens5f0 dstport 9999
       ip link set vxlan1 up
       ip link del dev vxlan1
      
      Fixes: 4a25730e ("net/mlx5e: Add ndo_udp_tunnel_add to VF representors")
      Signed-off-by: Paul Blakey <paulb@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
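      A schematic C sketch of the idea: only the uplink (PF) netdev
      registers ports, and the representors consult the PF's table; all
      names here are hypothetical:

      #include <stdbool.h>

      #define MAX_PORTS 16

      /* Per-uplink vxlan port table, owned by the PF alone. */
      struct uplink {
          unsigned short ports[MAX_PORTS];
          int count;
      };

      /* Only the PF handles the udp_tunnel add/del ndo, so the FW sees
       * each dst port added and deleted exactly once. */
      static void pf_add_port(struct uplink *up, unsigned short port)
      {
          if (up->count < MAX_PORTS)
              up->ports[up->count++] = port;
      }

      /* VF reps do not register; they just look the port up on the
       * uplink to decide whether an encap rule may be offloaded. */
      static bool rep_port_offloadable(const struct uplink *up,
                                       unsigned short port)
      {
          for (int i = 0; i < up->count; i++)
              if (up->ports[i] == port)
                  return true;
          return false;
      }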
  4. 23 Feb 2017 (1 commit)
    • net/mlx5e: Update MPWQE stride size when modifying CQE compress state · 6dc4b54e
      Committed by Saeed Mahameed
      When the admin enables/disables cqe compression, the mpwqe stride
      size must be updated:
          CQE compress ON  ==> stride size = 256B
          CQE compress OFF ==> stride size = 64B
      
      This is already done on driver load via mlx5e_set_rq_type_params; all
      that is needed is to call it on any admin change of the cqe
      compression state, whether via priv flags or when changing the
      timestamping state (which is mutually exclusive with cqe compression).
      A sketch of the rule follows this entry.

      This bug causes no functional damage; it only makes cqe compression
      occur less often, since on ConnectX-4 Lx CQE compression is performed
      only on packets smaller than the stride size.
      
      Tested:
       ethtool --set-priv-flags ethxx rx_cqe_compress on
       pktgen with  64 < pkt size < 256 and netperf TCP_STREAM (IPv4/IPv6)
       verify the counters in `ethtool -S ethxx | grep compress` advance
       more often (rapidly)
      
      Fixes: 7219ab34 ("net/mlx5e: CQE compression")
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Cc: kernel-team@fb.com
      Signed-off-by: David S. Miller <davem@davemloft.net>
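      A minimal sketch of the stride-size rule; the helper mirrors the
      commit's mlx5e_set_rq_type_params in spirit, but the body and names
      are illustrative:

      #include <stdbool.h>

      #define STRIDE_CQE_COMPRESS_ON  256  /* bytes */
      #define STRIDE_CQE_COMPRESS_OFF 64   /* bytes */

      struct rq_params {
          int mpwqe_stride_size;
      };

      /* Must run not only at driver load but on every admin toggle of
       * cqe compression (priv flags) or of the timestamping state. */
      static void set_rq_type_params(struct rq_params *p, bool cqe_compress)
      {
          p->mpwqe_stride_size = cqe_compress ? STRIDE_CQE_COMPRESS_ON
                                              : STRIDE_CQE_COMPRESS_OFF;
      }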
  5. 07 Feb 2017 (3 commits)
    • net/mlx5e: Bring back bfreg uar map dedicated pointer · 8ca967ab
      Committed by Saeed Mahameed
      The 4K UAR series modified the mlx5e driver to use the new bfreg API
      and mistakenly removed sq->uar_map, the dedicated iomem data-path
      pointer, which was meant to be read from the xmit path for cache
      locality.

      Fix that by restoring the pointer to the SQ struct (a schematic sketch
      follows this entry).
      
      Fixes: 7309cb4ad71e ("IB/mlx5: Support 4k UAR for libmlx5")
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
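      A schematic sketch of why the dedicated pointer matters: the xmit
      path reads the mapping straight from the hot SQ struct instead of
      chasing an extra indirection; the types are simplified:

      struct bfreg {
          void *map;            /* iomem mapping owned by the bfreg */
      };

      struct sq {
          void *uar_map;        /* dedicated copy read in the xmit path */
          struct bfreg *bfreg;  /* cold: setup/teardown only */
      };

      static void sq_init(struct sq *sq, struct bfreg *bfreg)
      {
          sq->bfreg = bfreg;
          sq->uar_map = bfreg->map;  /* cache for data-path locality */
      }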
    • net/mlx5e: XDP Tx, no inline copy on ConnectX-5 · b70149dd
      Committed by Saeed Mahameed
      ConnectX-5 and later HW generations report min inline mode ==
      MLX5_INLINE_MODE_NONE, which means the driver is not required to copy
      packet headers into the inline fields of the TX WQE.

      Avoid the copy to the inline segment in the XDP TX routine when the HW
      inline mode doesn't require it (a sketch follows this entry).

      This improves CPU utilization and boosts XDP TX performance.
      
      Tested with xdp2 single flow:
      CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
      HCA: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
      
      Before: 7.4Mpps
      After:  7.8Mpps
      Improvement: 5%
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
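      A hedged sketch of the conditional copy; MLX5_INLINE_MODE_NONE is the
      mode named by the commit, while the enum, sizes, and helper here are
      simplified:

      #include <string.h>

      enum inline_mode {
          INLINE_MODE_NONE,     /* ConnectX-5+: no header copy needed */
          INLINE_MODE_L2,       /* older HW: inline the L2 headers */
      };

      #define INLINE_HDR_SZ 18  /* illustrative inline segment size */

      struct wqe {
          unsigned char inline_hdr[INLINE_HDR_SZ];
      };

      /* Copy headers into the WQE inline segment only when the HW's
       * minimum inline mode requires it. */
      static void xdp_build_wqe(struct wqe *wqe, const unsigned char *pkt,
                                enum inline_mode min_inline)
      {
          if (min_inline != INLINE_MODE_NONE)
              memcpy(wqe->inline_hdr, pkt, INLINE_HDR_SZ);
          /* otherwise post the WQE with the DMA address only */
      }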
    • net/mlx5: Configure cache line size for start and end padding · f32f5bd2
      Committed by Daniel Jurgens
      There is a hardware feature that pads the start or end of a DMA to be
      cache line aligned, to avoid RMWs on the last cache line. The default
      cache line size setting for this feature is 64B. This change
      configures the hardware to use 128B alignment on systems with 128B
      cache lines.

      In addition, lower-bound the MPWRQ stride by the HCA cache line in
      mlx5e: the MPWRQ stride should be at least the HCA cache line. The
      current default is 64B, and when the HCA_CAP.cache_line_128byte
      capability is set, the MPWRQ RX stride is automatically aligned to
      128B (a sketch of the rule follows this entry).
      Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
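      A small sketch of the lower-bound rule under the stated assumptions
      (64B default, 128B when the cache_line_128byte capability is set):

      /* The MPWRQ RX stride must be at least the HCA cache line. */
      static int mpwrq_stride(int requested, int cache_line_128byte_cap)
      {
          int hca_cacheline = cache_line_128byte_cap ? 128 : 64;

          return requested > hca_cacheline ? requested : hca_cacheline;
      }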
  6. 30 Jan 2017 (2 commits)
  7. 25 Jan 2017 (2 commits)
  8. 20 Jan 2017 (2 commits)
  9. 19 Jan 2017 (1 commit)
    • net/mlx5e: Support bpf_xdp_adjust_head() · d8bec2b2
      Committed by Martin KaFai Lau
      This patch adds bpf_xdp_adjust_head() support to mlx5e.

      1. rx_headroom is added to struct mlx5e_rq. It uses
         an existing 4 byte hole in the struct.
      2. The adjusted data length is checked against
         MLX5E_XDP_MIN_INLINE and MLX5E_SW2HW_MTU(rq->netdev->mtu),
         as sketched after this entry.
      3. The macro MLX5E_SW2HW_MTU is moved from en_main.c to en.h.
         MLX5E_HW2SW_MTU is also moved to en.h for symmetry, though
         that is not strictly required.
      
      v2:
      - Keep the xdp specific logic in mlx5e_xdp_handle()
      - Update dma_len after the sanity checks in mlx5e_xmit_xdp_frame()
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
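      A schematic sketch of that sanity check; MLX5E_XDP_MIN_INLINE and
      MLX5E_SW2HW_MTU are the commit's names, used here with illustrative
      definitions:

      #include <stdbool.h>

      #define HARD_HDR_LEN 14          /* illustrative L2 header length */
      #define MLX5E_XDP_MIN_INLINE 18  /* illustrative */
      #define MLX5E_SW2HW_MTU(mtu) ((mtu) + HARD_HDR_LEN)

      /* After the program has run bpf_xdp_adjust_head(), validate the
       * adjusted data length before building the XDP TX descriptor. */
      static bool xdp_len_ok(unsigned int data_len, unsigned int netdev_mtu)
      {
          if (data_len < MLX5E_XDP_MIN_INLINE)
              return false;  /* too short to inline the headers */
          if (data_len > MLX5E_SW2HW_MTU(netdev_mtu))
              return false;  /* exceeds the HW frame size */
          return true;
      }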
  10. 10 Jan 2017 (2 commits)
  11. 07 Dec 2016 (1 commit)
  12. 02 Dec 2016 (2 commits)
  13. 29 Nov 2016 (7 commits)
  14. 25 Nov 2016 (1 commit)