1. 24 Apr 2020, 1 commit
  2. 19 Feb 2020, 1 commit
  3. 10 Dec 2019, 1 commit
  4. 01 Sep 2019, 2 commits
  5. 10 Jul 2019, 1 commit
  6. 29 May 2019, 1 commit
  7. 21 Mar 2019, 1 commit
    •
      net: remove 'fallback' argument from dev->ndo_select_queue() · a350ecce
      Authored by Paolo Abeni
      After the previous patch, all the callers of ndo_select_queue()
      provide as a 'fallback' argument netdev_pick_tx.
      The only exceptions are nested calls to ndo_select_queue(),
      which pass down the 'fallback' available in the current scope
      - still netdev_pick_tx.
      
      We can drop that argument and replace the fallback() invocation with
      netdev_pick_tx(). This avoids an indirect call per xmit packet
      in some scenarios (TCP SYN, unconnected UDP, generic XDP, pktgen)
      with device drivers implementing such an ndo. It also cleans up
      the code a bit.
      
      Tested with ixgbe and CONFIG_FCOE=m
      
      With pktgen using queue xmit:
      threads		vanilla 	patched
      		(kpps)		(kpps)
      1		2334		2428
      2		4166		4278
      4		7895		8100
      
       v1 -> v2:
       - rebased after helper's name change
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
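The signature change described in this commit can be sketched in miniature. The types and function below are simplified stand-ins for illustration, not the real kernel definitions: before the patch the ndo received a `fallback` function pointer and invoked it indirectly; after it, the driver calls netdev_pick_tx() directly.

```c
#include <assert.h>

/* Simplified stand-ins for kernel types; not the real definitions. */
struct sk_buff { unsigned int hash; };
struct net_device { unsigned int real_num_tx_queues; };

/* Core helper that every caller previously passed in as 'fallback'. */
static unsigned short netdev_pick_tx(struct net_device *dev,
                                     struct sk_buff *skb)
{
    return (unsigned short)(skb->hash % dev->real_num_tx_queues);
}

/* After the patch: the driver's select_queue calls netdev_pick_tx()
 * directly instead of through a passed-in callback, saving one
 * indirect call per transmitted packet. */
static unsigned short driver_select_queue(struct net_device *dev,
                                          struct sk_buff *skb)
{
    /* Driver-specific steering would go here; fall through to the core. */
    return netdev_pick_tx(dev, skb);
}
```

The measurable win comes purely from removing the indirection, since every caller was passing the same helper anyway.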
  8. 07 Feb 2019, 2 commits
  9. 29 Jan 2019, 1 commit
  10. 26 Jan 2019, 1 commit
  11. 23 Jan 2019, 1 commit
  12. 30 Nov 2018, 1 commit
  13. 28 Nov 2018, 2 commits
  14. 20 Sep 2018, 1 commit
  15. 10 Aug 2018, 2 commits
    •
      qede: Ingress tc flower offload (drop action) support. · 2ce9c93e
      Authored by Manish Chopra
      The main motive of this patch is to put the driver's
      tc offload infrastructure in place.
      
      With these changes tc can offload various supported flow
      profiles (4 tuples, src-ip, dst-ip, l4 port) for the drop
      action. The dropped-flows statistic is a global counter across
      all flows offloaded with the drop action, and is exposed in
      ethtool statistics as the common "gft_filter_drop".
      
      Examples -
      
      tc qdisc add dev p4p1 ingress
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp dst_ip 192.168.40.200 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto udp src_ip 192.168.40.100 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp src_ip 192.168.40.100 dst_ip 192.168.40.200 \
      	src_port 453 dst_port 876 action drop
      tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
      	skip_sw ip_proto tcp dst_port 98 action drop
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      qed/qede: Multi CoS support. · 5e7baf0f
      Authored by Manish Chopra
      This patch adds support for tc mqprio offload; with it,
      the adapter's different traffic classes can be utilized
      based on the configured priority-to-TC map.
      
      For example -
      
      tc qdisc add dev eth0 root mqprio num_tc 4 map 0 1 2 3
      
      This will cause SKBs with priority 0,1,2,3 to be transmitted
      over the hardware queues of tc 0,1,2,3 respectively.
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
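The priority-to-TC mapping that mqprio configures can be modeled as a simple lookup table: `map 0 1 2 3` from the example above fills the first four slots. The struct and function names below are illustrative stand-ins, not the driver's actual code.

```c
#include <assert.h>

/* Illustrative model of an mqprio priority -> traffic-class map. */
#define MAX_PRIO 16

struct tc_map {
    unsigned char num_tc;                 /* number of traffic classes */
    unsigned char prio_tc_map[MAX_PRIO];  /* skb priority -> TC index */
};

/* Resolve an skb priority to a hardware traffic class. */
static unsigned char prio_to_tc(const struct tc_map *m, unsigned int prio)
{
    if (prio >= MAX_PRIO)
        prio = 0;   /* out-of-range priorities fall back to TC 0 */
    return m->prio_tc_map[prio];
}
```

With the `map 0 1 2 3` example, an skb at priority 2 resolves to TC 2 and is queued on that class's hardware queues.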
  16. 01 Jun 2018, 1 commit
  17. 26 May 2018, 1 commit
  18. 18 May 2018, 1 commit
    •
      qede: Add build_skb() support. · 8a863397
      Authored by Manish Chopra
      This patch makes use of build_skb() throughout the driver's receive
      data path [HW GRO flow and non-HW GRO flow]. With this, the driver can
      build the skb directly from the page segments which are already mapped
      to the hardware, instead of allocating a new SKB via netdev_alloc_skb()
      and memcpying the data, which is quite costly.
      
      This really improves performance (keeping the same or a slight gain
      in rx throughput) in terms of CPU utilization, which is significantly
      reduced [almost halved] in the non-HW GRO flow, where for every incoming
      MTU-sized packet the driver had to allocate an skb, memcpy headers, etc.
      Additionally, that flow also gets rid of a bunch of further overheads
      [eth_get_headlen() etc.] needed to split headers and data in the skb.
      
      Tested with:
      system: 2 sockets, 4 cores per socket, hyperthreading, 2x4x2=16 cores
      iperf [server]: iperf -s
      iperf [client]: iperf -c <server_ip> -t 500 -i 10 -P 32
      
      HW GRO off – w/o build_skb(), throughput: 36.8 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.59    0.00   32.93    0.00    0.00   43.07    0.00    0.00   23.42
      
      HW GRO off - with build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.70    0.00   31.70    0.00    0.00   25.68    0.00    0.00   41.92
      
      HW GRO on - w/o build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.86    0.00   24.14    0.00    0.00    6.59    0.00    0.00   68.41
      
      HW GRO on - with build_skb(), throughput: 37.5 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.87    0.00   23.75    0.00    0.00    6.19    0.00    0.00   69.19
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
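The difference between the two receive paths can be sketched in miniature: the old path allocates a fresh buffer and copies the received bytes into it, while the build_skb()-style path wraps the already-mapped page directly. The `mini_skb` type and both functions are simplified stand-ins for illustration, not the driver's code.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for an skb: a data pointer plus an ownership flag. */
struct mini_skb {
    unsigned char *data;
    int owns_copy;   /* 1 if data was memcpy'd into a fresh buffer */
};

/* Old path: allocate a new buffer and copy the received bytes into it
 * (models netdev_alloc_skb() + memcpy). */
static struct mini_skb alloc_and_copy(const unsigned char *rx_page, size_t len)
{
    struct mini_skb skb = { malloc(len), 1 };
    memcpy(skb.data, rx_page, len);
    return skb;
}

/* build_skb()-style path: point the skb straight at the mapped page,
 * no allocation and no copy on the hot path. */
static struct mini_skb build_from_page(unsigned char *rx_page)
{
    struct mini_skb skb = { rx_page, 0 };
    return skb;
}
```

The CPU savings in the numbers above come from eliminating the per-packet allocate-and-copy step, which dominates the non-HW GRO flow.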
  19. 06 Jan 2018, 1 commit
    •
      xdp/qede: setup xdp_rxq_info and intro xdp_rxq_info_is_reg · c0124f32
      Authored by Jesper Dangaard Brouer
      The driver function qede_free_fp_array() depends on kfree() accepting
      a NULL pointer. This stems from the qede_alloc_fp_array()
      function, which (kz)allocs memory for either fp->txq or fp->rxq.
      This also simplifies the error handling code in case of memory
      allocation failures, but xdp_rxq_info_unreg needs to know the difference.
      
      Introduce xdp_rxq_info_is_reg() to handle the case where a memory
      allocation fails; the failure path is detected by seeing that the
      xdp_rxq_info was not yet registered, which first happens after a
      successful allocation in qede_init_fp().
      
      Driver hook points for xdp_rxq_info:
       * reg  : qede_init_fp
       * unreg: qede_free_fp_array
      
      Tested on actual hardware with samples/bpf program.
      
      V2: The driver has no proper error path for a failed XDP RX-queue info
      registration, as qede_init_fp() is a void function.
      
      Cc: everest-linux-l2@cavium.com
      Cc: Ariel Elior <Ariel.Elior@cavium.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
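The registration-state check this commit introduces can be modeled with a small state machine: teardown only unregisters what was actually registered, which is what lets the free path stay simple on allocation failure. The enum values and functions below are a simplified model, not the kernel's actual definitions.

```c
#include <assert.h>

/* Simplified model of the xdp_rxq_info registration state machine. */
enum xdp_rxq_reg_state {
    REG_STATE_NEW,          /* allocated but never registered */
    REG_STATE_REGISTERED,   /* registered in qede_init_fp() */
    REG_STATE_UNREGISTERED, /* torn down */
};

struct xdp_rxq_info { enum xdp_rxq_reg_state reg_state; };

/* The new predicate: true only after a successful registration. */
static int xdp_rxq_info_is_reg(const struct xdp_rxq_info *xdp_rxq)
{
    return xdp_rxq->reg_state == REG_STATE_REGISTERED;
}

/* Error-path teardown (models qede_free_fp_array()): skip the unregister
 * for queues whose allocation failed before registration. */
static void free_rxq(struct xdp_rxq_info *xdp_rxq)
{
    if (xdp_rxq_info_is_reg(xdp_rxq))
        xdp_rxq->reg_state = REG_STATE_UNREGISTERED;
}
```

A queue that never reached the registered state passes through free_rxq() untouched, mirroring how kfree(NULL) is a no-op in the allocation path.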
  20. 03 Jan 2018, 1 commit
  21. 19 Dec 2017, 1 commit
  22. 05 Nov 2017, 1 commit
  23. 27 Jul 2017, 2 commits
  24. 21 Jun 2017, 2 commits
  25. 22 May 2017, 1 commit
    •
      qede: Don't use an internal MAC field · 492a1d98
      Authored by Yuval Mintz
      The driver maintains its primary MAC in a private field which
      gets updated when ndo_dev_set_mac() gets called.
      
      However, there are flows where the primary MAC of the device can change
      without said NDO being called [e.g., a bond device in TLB mode configuring
      its slaves' addresses], resulting in a configuration where there's a
      mismatch between what's apparent to the user [the netdevice's value]
      and what's configured in the HW [the private value].
      
      As we don't have any real motivation to maintain this
      private field, simply remove it and start using the netdevice's
      field instead.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 03 May 2017, 1 commit
  27. 25 Apr 2017, 2 commits
  28. 18 Apr 2017, 1 commit
  29. 07 Apr 2017, 2 commits
    •
      qede: Add support for ingress headroom · 15ed8a47
      Authored by Yuval Mintz
      The driver currently doesn't support any headroom; the only 'available'
      space it has at the head of the buffer is due to the placement
      offset.
      In order to allow [later] support of XDP headroom adjustment,
      modify the ingress flow to properly handle a scenario where
      packets arrive with such headroom.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
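The buffer-layout change can be sketched arithmetically: packet data no longer starts at a fixed placement offset within the page, but at placement offset plus whatever headroom is in effect. The constant below is a made-up value for the example, not taken from the driver.

```c
#include <assert.h>

/* Hypothetical HW placement offset; illustrative value only. */
#define PLACEMENT_OFFSET 2

/* Address of packet data within a mapped rx page, given the headroom
 * in effect (0 today; non-zero once XDP headroom adjustment lands). */
static unsigned char *rx_data_start(unsigned char *page_addr,
                                    unsigned int headroom)
{
    return page_addr + PLACEMENT_OFFSET + headroom;
}
```

With headroom of 0 this degenerates to the current behavior, which is why the ingress flow can be converted ahead of the XDP feature itself.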
    •
      qede: Correct XDP forward unmapping · 89e1afc4
      Authored by Yuval Mintz
      The driver currently uses dma_unmap_single() with the address it
      passed to the device for the purpose of forwarding, but the XDP
      transmission buffer was originally a page allocated for the rx-queue.
      The mapped address is likely to differ from the original mapped
      address due to the placement offset.
      
      This difference is going to get even bigger once we support headroom.
      
      Cache the original mapped address of the page, and use it for unmapping
      the buffer when the completion arrives for the XDP-forwarded packet.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
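The fix described above can be sketched with a small descriptor that carries both addresses: the offset address handed to the device for transmit, and the original page mapping cached for the later unmap. The struct and function names are illustrative stand-ins, not the driver's code.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified XDP-forward buffer descriptor: the transmit address may be
 * offset within the page, so the original page mapping is cached
 * separately for unmapping on completion. */
struct xdp_tx_buf {
    uintptr_t tx_addr;       /* page mapping + placement offset (+ headroom) */
    uintptr_t page_mapping;  /* original DMA mapping of the rx page */
};

static void fill_tx_buf(struct xdp_tx_buf *buf, uintptr_t page_mapping,
                        unsigned int offset)
{
    buf->page_mapping = page_mapping;   /* cached for the later unmap */
    buf->tx_addr = page_mapping + offset;
}

/* On TX completion, unmap using the cached page mapping, not the
 * (possibly offset) address that was handed to the device. */
static uintptr_t addr_to_unmap(const struct xdp_tx_buf *buf)
{
    return buf->page_mapping;
}
```

Unmapping tx_addr instead would pass a pointer dma_unmap_single() never mapped, and the discrepancy only grows once headroom adjustment is added on top of the placement offset.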
  30. 15 Mar 2017, 1 commit
  31. 14 Mar 2017, 1 commit
  32. 16 Feb 2017, 1 commit