1. 07 Feb 2019, 1 commit
  2. 26 Jan 2019, 1 commit
  3. 23 Jan 2019, 1 commit
  4. 30 Nov 2018, 1 commit
  5. 28 Nov 2018, 2 commits
  6. 20 Sep 2018, 1 commit
  7. 10 Aug 2018, 2 commits
    • qede: Ingress tc flower offload (drop action) support. · 2ce9c93e
      Committed by Manish Chopra
      The main motive of this patch is to put the driver's
      tc offload infrastructure in place.

      With these changes, tc can offload the supported flow
      profiles (4-tuple, src-ip, dst-ip, L4 port) for the drop
      action. The dropped-flows statistic is a single global counter
      covering all flows offloaded with the drop action, and is reported
      in ethtool statistics as "gft_filter_drop".
      
      Examples -
      
      tc qdisc add dev p4p1 ingress
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp dst_ip 192.168.40.200 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto udp src_ip 192.168.40.100 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp src_ip 192.168.40.100 dst_ip 192.168.40.200 \
      	src_port 453 dst_port 876 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp dst_port 98 action drop
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qed/qede: Multi CoS support. · 5e7baf0f
      Committed by Manish Chopra
      This patch adds support for tc mqprio offload. Using this,
      the different traffic classes on the adapter can be utilized
      based on the configured priority-to-TC map.
      
      For example -
      
      tc qdisc add dev eth0 root mqprio num_tc 4 map 0 1 2 3
      
      This will cause SKBs with priority 0,1,2,3 to be transmitted
      over the hardware queues of TCs 0,1,2,3 respectively.
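
      On the driver side, accepting such an offload typically boils down to the
      generic pattern sketched below. This is an illustrative sketch only, not
      qede's actual implementation; the function and parameter names are
      hypothetical.

      #include <linux/netdevice.h>

      /* Reflect an mqprio offload request on the netdevice: expose num_tc
       * traffic classes and give each one a contiguous range of hardware
       * queues.
       */
      static int sketch_setup_mqprio(struct net_device *ndev, u8 num_tc,
                                     u16 queues_per_tc)
      {
              int tc;

              if (!num_tc) {
                      /* mqprio removed - collapse back to a single class */
                      netdev_reset_tc(ndev);
                      return 0;
              }

              netdev_set_num_tc(ndev, num_tc);
              for (tc = 0; tc < num_tc; tc++)
                      /* TC 'tc' owns queues [tc * queues_per_tc, +queues_per_tc) */
                      netdev_set_tc_queue(ndev, tc, queues_per_tc,
                                          tc * queues_per_tc);

              return 0;
      }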
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 01 Jun 2018, 1 commit
  9. 26 May 2018, 1 commit
  10. 18 May 2018, 1 commit
    • qede: Add build_skb() support. · 8a863397
      Committed by Manish Chopra
      This patch makes use of build_skb() throughout the driver's receive
      data path [HW GRO flow and non-HW GRO flow]. With this, the driver can
      build the skb directly from the page segments which are already mapped
      to the hardware, instead of allocating a new skb via netdev_alloc_skb()
      and memcpy'ing the data, which is quite costly.

      This really improves performance (same or slightly better rx throughput)
      in terms of CPU utilization, which is reduced significantly [almost by
      half] in the non-HW GRO flow, where for every incoming MTU-sized packet
      the driver had to allocate an skb, memcpy headers, etc. Additionally, in
      that flow it also gets rid of a bunch of additional overheads
      [eth_get_headlen() etc.] used to split headers and data in the skb.
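
      As a rough illustration of the technique (a minimal sketch, not the
      driver's exact code; the helper name and parameters are hypothetical):

      #include <linux/skbuff.h>
      #include <linux/mm.h>

      /* Wrap an skb around the page buffer the hardware has already written
       * into, instead of allocating a new skb and copying.  buf_size covers
       * the data area plus tailroom for the skb_shared_info.
       */
      static struct sk_buff *sketch_build_rx_skb(struct page *page,
                                                 unsigned int buf_size,
                                                 unsigned int pad,
                                                 unsigned int pkt_len)
      {
              struct sk_buff *skb;

              skb = build_skb(page_address(page), buf_size);  /* no data copy */
              if (unlikely(!skb))
                      return NULL;

              skb_reserve(skb, pad);          /* skip the placement offset */
              skb_put(skb, pkt_len);          /* expose the received payload */
              return skb;
      }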
      
      Tested with:
      system: 2 sockets, 4 cores per socket, hyperthreading, 2x4x2=16 cores
      iperf [server]: iperf -s
      iperf [client]: iperf -c <server_ip> -t 500 -i 10 -P 32
      
      HW GRO off - w/o build_skb(), throughput: 36.8 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.59    0.00   32.93    0.00    0.00   43.07    0.00    0.00   23.42
      
      HW GRO off - with build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.70    0.00   31.70    0.00    0.00   25.68    0.00    0.00   41.92
      
      HW GRO on - w/o build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.86    0.00   24.14    0.00    0.00    6.59    0.00    0.00   68.41
      
      HW GRO on - with build_skb(), throughput: 37.5 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.87    0.00   23.75    0.00    0.00    6.19    0.00    0.00   69.19
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 06 Jan 2018, 1 commit
    • xdp/qede: setup xdp_rxq_info and intro xdp_rxq_info_is_reg · c0124f32
      Committed by Jesper Dangaard Brouer
      The driver code qede_free_fp_array() depends on kfree() being safe to
      call with a NULL pointer. This stems from the qede_alloc_fp_array()
      function, which (kz)allocs memory for either fp->txq or fp->rxq.
      This also simplifies the error handling code in case of memory
      allocation failures, but xdp_rxq_info_unreg needs to know the
      difference.

      Introduce xdp_rxq_info_is_reg() to handle the case where a memory
      allocation fails: the failure path can be detected by seeing that the
      xdp_rxq_info was not yet registered, which only happens after a
      successful allocation in qede_init_fp().
      
      Driver hook points for xdp_rxq_info:
       * reg  : qede_init_fp
       * unreg: qede_free_fp_array
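
      A minimal sketch of this reg/unreg pairing is shown below (illustrative
      only; the struct and helper names are hypothetical, and
      xdp_rxq_info_reg() is shown in the three-argument form of this kernel
      era):

      #include <linux/netdevice.h>
      #include <net/xdp.h>

      struct sketch_rxq {                     /* hypothetical per-queue context */
              struct xdp_rxq_info xdp_rxq;
              /* ... buffers, producer/consumer indices, ... */
      };

      static void sketch_init_rxq(struct net_device *dev,
                                  struct sketch_rxq *rxq, u16 queue_id)
      {
              /* Register once the queue is fully set up; return value handling
               * is omitted, mirroring the V2 note that qede_init_fp() is void.
               */
              xdp_rxq_info_reg(&rxq->xdp_rxq, dev, queue_id);
      }

      static void sketch_free_rxq(struct sketch_rxq *rxq)
      {
              /* On an allocation-failure path the queue may never have been
               * registered - only unregister what was actually registered.
               */
              if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
                      xdp_rxq_info_unreg(&rxq->xdp_rxq);
      }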
      
      Tested on actual hardware with samples/bpf program.
      
      V2: The driver has no proper error path for a failed XDP RX-queue info
      registration, as qede_init_fp() is a void function.
      
      Cc: everest-linux-l2@cavium.com
      Cc: Ariel Elior <Ariel.Elior@cavium.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  12. 03 Jan 2018, 1 commit
  13. 19 Dec 2017, 1 commit
  14. 05 Nov 2017, 1 commit
  15. 27 Jul 2017, 2 commits
  16. 21 Jun 2017, 2 commits
  17. 22 May 2017, 1 commit
    • qede: Don't use an internal MAC field · 492a1d98
      Committed by Mintz, Yuval
      The driver maintains its primary MAC in a private field which
      gets updated when ndo_dev_set_mac() gets called.

      However, there are flows where the primary MAC of the device can change
      without said NDO being called [e.g., a bond device in TLB mode
      configuring its slaves' addresses], resulting in a configuration where
      there's a mismatch between what's apparent to the user [the netdevice's
      value] and what's configured in the HW [the private value].

      As we don't have any real motivation for maintaining this
      private field, simply remove it and start using the netdevice's
      field instead.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  18. 03 May 2017, 1 commit
  19. 25 Apr 2017, 2 commits
  20. 18 Apr 2017, 1 commit
  21. 07 Apr 2017, 2 commits
    • qede: Add support for ingress headroom · 15ed8a47
      Committed by Mintz, Yuval
      The driver currently doesn't support any headroom; the only 'available'
      space it has at the head of the buffer is due to the placement
      offset.
      In order to allow [later] support of XDP headroom adjustment,
      modify the ingress flow to properly handle a scenario where
      the packets would have such headroom.
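
      For illustration, such headroom is typically exposed to XDP roughly as
      sketched below (a generic sketch, not qede's actual code; the helper
      name and parameters are hypothetical, and struct xdp_buff is shown as
      laid out in this kernel era):

      #include <linux/filter.h>       /* struct xdp_buff */
      #include <linux/mm.h>

      /* With headroom reserved in front of the packet, set up the xdp_buff
       * so a program can later pull headers into that space via
       * bpf_xdp_adjust_head().
       */
      static void sketch_fill_xdp_buff(struct xdp_buff *xdp, struct page *page,
                                       unsigned int rx_offset,
                                       unsigned int headroom,
                                       unsigned int pkt_len)
      {
              void *hard_start = page_address(page) + rx_offset;

              xdp->data_hard_start = hard_start;      /* buffer start, incl. headroom */
              xdp->data = hard_start + headroom;      /* actual packet start */
              xdp->data_end = xdp->data + pkt_len;
      }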
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qede: Correct XDP forward unmapping · 89e1afc4
      Committed by Mintz, Yuval
      The driver currently uses dma_unmap_single() with the address it
      passed to the device for the purpose of forwarding, but the XDP
      transmission buffer was originally a page allocated for the rx-queue.
      The address placed on the transmission ring is likely to differ from
      the original mapped address due to the placement offset.
      
      This difference is going to get even bigger once we support headroom.
      
      Cache the original mapped address of the page, and use it for unmapping
      of the buffer when completion arrives for the XDP forwarded packet.
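
      A minimal sketch of the idea (illustrative only; the struct and helper
      names are hypothetical, not the driver's actual code):

      #include <linux/dma-mapping.h>
      #include <linux/gfp.h>

      struct sketch_xdp_tx_buf {
              struct page *page;
              dma_addr_t   mapping;   /* original RX-time mapping of the page */
      };

      static void sketch_xdp_tx_complete(struct device *dev,
                                         struct sketch_xdp_tx_buf *buf)
      {
              /* Unmap with the cached original address - not the offset
               * address that was placed on the TX ring - then free the page.
               * BIDIRECTIONAL matches RX buffers mapped for possible XDP TX.
               */
              dma_unmap_page(dev, buf->mapping, PAGE_SIZE, DMA_BIDIRECTIONAL);
              __free_page(buf->page);
      }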
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 15 Mar 2017, 1 commit
  23. 14 Mar 2017, 1 commit
  24. 16 Feb 2017, 1 commit
  25. 02 Jan 2017, 6 commits
  26. 01 Dec 2016, 4 commits
    • qede: Add support for XDP_TX · cb6aeb07
      Committed by Mintz, Yuval
      Add support for forwarding via XDP. Once an eBPF program is attached,
      the driver allocates & configures a designated transmission queue
      meant solely for forwarding packets. Said queue shares the
      receive-queue's interrupt line, and has its own Tx statistics.

      Infrastructure changes required for this [spread out through the code]:
       - Determine the DMA direction of the receive buffers based on the
         presence of the eBPF program.
       - Turn the sw Tx ring into a union, as regular/XDP queues have different
         needs for releasing resources after completion [a regular queue
         requires the SKB, XDP requires the transmitted page]; see the sketch
         below.
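
      A minimal sketch of such a union-based software ring entry (hypothetical
      names, not the actual qede definition):

      #include <linux/skbuff.h>
      #include <linux/dma-mapping.h>

      /* One software TX descriptor: what must be remembered for completion
       * differs between a regular queue and an XDP forwarding queue.
       */
      struct sketch_sw_tx_bd {
              union {
                      struct sk_buff *skb;    /* regular TX: free the skb */
                      struct page    *page;   /* XDP TX: recycle the page */
              };
              dma_addr_t mapping;             /* address used for unmapping */
      };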
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qede: Add basic XDP support · 496e0517
      Committed by Mintz, Yuval
      Add support for the ndo_xdp callback. This patch supports the XDP_PASS,
      XDP_DROP and XDP_ABORTED actions.

      This also adds a per-Rx-queue statistic which counts the number of
      packets that didn't reach the stack [due to XDP].
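
      Handling those actions in the receive path generally follows the pattern
      sketched below (a generic XDP sketch, not qede's exact code; the helper
      and counter names are hypothetical):

      #include <linux/filter.h>

      /* Run the attached program on the frame and act on its verdict.  Any
       * verdict other than XDP_PASS keeps the packet away from the stack and
       * bumps a per-queue counter.
       */
      static bool sketch_run_xdp(struct bpf_prog *prog, struct xdp_buff *xdp,
                                 u64 *xdp_no_pass_cnt)
      {
              u32 act = bpf_prog_run_xdp(prog, xdp);

              switch (act) {
              case XDP_PASS:
                      return true;            /* hand the packet to the stack */
              case XDP_ABORTED:
              case XDP_DROP:
              default:
                      (*xdp_no_pass_cnt)++;   /* "didn't reach the stack" */
                      return false;           /* recycle the RX buffer */
              }
      }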
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qede: Better utilize the qede_[rt]x_queue · 9eb22357
      Committed by Mintz, Yuval
      Improve the cacheline usage of both queues by reordering their fields -
      this reduces the cachelines required for egress datapath processing
      from 3 to 2 and those required by ingress datapath processing by 2.

      It also changes a couple of datapath-related functions that currently
      require either the fastpath or the qede_dev, changing them to be based
      on the tx/rx queue instead.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qed*: Handle-based L2-queues. · 3da7a37a
      Committed by Mintz, Yuval
      The driver needs to maintain several FW/HW indices for each one of
      its queues. Currently, that mapping is done by qed, which uses
      rx/tx arrays of so-called hw-cids, populating them whenever a new
      queue is opened and clearing them upon destruction of said queues.
      
      This maintenance is far from ideal - there's no real reason why
      QED needs to maintain such a data-structure. It becomes even worse
      when considering the fact that the PF's queues and its child VFs' queues
      are all mapped into the same data-structure.
      As a by-product, the set of parameters an interface needs to supply for
      queue APIs is non-trivial, and some of the variables in the API
      structures have different meaning depending on their exact place
      in the configuration flow.
      
      This patch re-organizes the way L2 queues are configured and maintained.
      In short:
        - Required parameters for queue init are now well-defined.
        - Qed would allocate a queue-cid based on parameters.
          Upon initialization success, it would return a handle to caller.
        - Queue-handle would be maintained by entity requesting queue-init,
          not necessarily qed.
        - All further queue-APIs [update, destroy] would use the opaque
          handle as reference for the queue instead of various indices.
      
      The possible owners of such handles:
        - PF queues [qede] - complete handles based on provided configuration.
        - VF queues [qede] - fw-context-less handles, containing only relative
          information; Only the PF-side would need the absolute indices
          for configuration, so they're omitted here.
        - VF queues [qed, PF-side] - complete handles based on VF initialization.
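
      The resulting calling convention can be sketched roughly as follows (the
      names are hypothetical and only illustrate the opaque-handle pattern,
      not the actual qed/qede API):

      #include <linux/types.h>

      struct sketch_dev;                      /* core device context (opaque here) */

      struct sketch_queue_params {
              u8  queue_id;                   /* relative queue index */
              u16 sb_id;                      /* status block to attach to */
              /* ... */
      };

      /* The core allocates the queue context and returns an opaque handle;
       * update/stop operations identify the queue only through that handle.
       */
      void *sketch_q_start(struct sketch_dev *dev,
                           struct sketch_queue_params *params);
      int sketch_q_update(struct sketch_dev *dev, void *handle, u16 new_prod);
      int sketch_q_stop(struct sketch_dev *dev, void *handle);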
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>