1. 23 May 2018, 1 commit
  2. 18 May 2018, 1 commit
    • qede: Add build_skb() support. · 8a863397
      Manish Chopra authored
      This patch makes use of build_skb() throughout the driver's receive
      data path [both the HW GRO and non-HW-GRO flows]. With this, the driver
      can build the skb directly from the page segments that are already
      mapped to the hardware, instead of allocating a new SKB via
      netdev_alloc_skb() and memcpying the data, which is quite costly.
      
      This really improves performance (rx throughput stays the same or gains
      slightly) in terms of CPU utilization, which is reduced significantly
      [almost halved] in the non-HW-GRO flow, where the driver previously had
      to allocate an skb, memcpy the headers, etc. for every incoming
      MTU-sized packet. Additionally, that flow also gets rid of a bunch of
      extra overheads [eth_get_headlen() etc.] needed to split headers and
      data in the skb.
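      As a rough, hedged sketch of the idea (the helper and parameter names
      below are illustrative, not the actual qede code): the page fragment is
      already DMA-mapped when the NIC fills it, so the driver only syncs it
      and wraps it with build_skb() instead of allocating and copying.
      
      #include <linux/skbuff.h>
      #include <linux/dma-mapping.h>
      #include <linux/mm.h>
      
      /* Sketch only: 'pad' is the HW placement offset, 'len' the packet length.
       * The rx buffer is assumed to be sized so that its tail still leaves room
       * for SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), as build_skb() needs.
       */
      static struct sk_buff *rx_wrap_page(struct device *dev, struct page *page,
                                          dma_addr_t mapping, u16 pad, u16 len)
      {
              void *va = page_address(page);
              struct sk_buff *skb;
      
              /* Page was mapped when posted to the NIC; just sync the payload. */
              dma_sync_single_for_cpu(dev, mapping + pad, len, DMA_FROM_DEVICE);
      
              /* Build the skb around the existing buffer instead of
               * netdev_alloc_skb() + memcpy().
               */
              skb = build_skb(va, PAGE_SIZE);
              if (unlikely(!skb))
                      return NULL;
      
              skb_reserve(skb, pad);  /* skip placement offset / headroom */
              skb_put(skb, len);      /* payload is already in place */
              return skb;
      }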
      
      Tested with:
      system: 2 sockets, 4 cores per socket, hyperthreading, 2x4x2=16 cores
      iperf [server]: iperf -s
      iperf [client]: iperf -c <server_ip> -t 500 -i 10 -P 32
      
      HW GRO off - w/o build_skb(), throughput: 36.8 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.59    0.00   32.93    0.00    0.00   43.07    0.00    0.00   23.42
      
      HW GRO off - with build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.70    0.00   31.70    0.00    0.00   25.68    0.00    0.00   41.92
      
      HW GRO on - w/o build_skb(), throughput: 36.9 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.86    0.00   24.14    0.00    0.00    6.59    0.00    0.00   68.41
      
      HW GRO on - with build_skb(), throughput: 37.5 Gbits/sec
      
      Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
      Average:     all    0.87    0.00   23.75    0.00    0.00    6.19    0.00    0.00   69.19
      Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
      Signed-off-by: Manish Chopra <manish.chopra@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 14 May 2018, 1 commit
  4. 08 May 2018, 1 commit
  5. 22 Mar 2018, 1 commit
  6. 17 Mar 2018, 1 commit
  7. 06 Jan 2018, 1 commit
    • xdp/qede: setup xdp_rxq_info and intro xdp_rxq_info_is_reg · c0124f32
      Jesper Dangaard Brouer authored
      The driver code in qede_free_fp_array() depends on kfree() being safe
      to call with a NULL pointer. This stems from qede_alloc_fp_array(),
      which (kz)allocs memory for either fp->txq or fp->rxq. This also
      simplifies the error handling in case of memory allocation failures,
      but xdp_rxq_info_unreg() needs to know the difference.
      
      Introduce xdp_rxq_info_is_reg() to handle the case where a memory
      allocation fails: that failure path can be detected by seeing that the
      xdp_rxq_info was not yet registered, which only happens after a
      successful allocation in qede_init_fp().
      
      Driver hook points for xdp_rxq_info:
       * reg  : qede_init_fp
       * unreg: qede_free_fp_array
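      A hedged sketch of that registration pattern follows; the surrounding
      struct and function names are illustrative stand-ins, and the
      3-argument xdp_rxq_info_reg() shown is the signature of that era
      (later kernels added a napi_id argument).
      
      #include <linux/netdevice.h>
      #include <net/xdp.h>
      
      struct my_rxq {                         /* stand-in for qede_rx_queue */
              struct xdp_rxq_info xdp_rxq;
              /* ... */
      };
      
      /* Register while setting up the fastpath (qede hook: qede_init_fp()).
       * qede_init_fp() is void, so a registration failure can only be warned about.
       */
      static void fp_init(struct net_device *dev, struct my_rxq *rxq, u16 qid)
      {
              WARN_ON(xdp_rxq_info_reg(&rxq->xdp_rxq, dev, qid) < 0);
      }
      
      /* Teardown also runs for half-initialised queues (qede hook:
       * qede_free_fp_array()); xdp_rxq_info_is_reg() distinguishes the
       * allocation-failure path from a normally registered queue.
       */
      static void fp_free(struct my_rxq *rxq)
      {
              if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
                      xdp_rxq_info_unreg(&rxq->xdp_rxq);
      }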
      
      Tested on actual hardware with samples/bpf program.
      
      V2: The driver has no proper error path for a failed XDP RX-queue info
      registration, as qede_init_fp() is a void function.
      
      Cc: everest-linux-l2@cavium.com
      Cc: Ariel Elior <Ariel.Elior@cavium.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  8. 03 Jan 2018, 1 commit
  9. 19 Dec 2017, 1 commit
  10. 03 Dec 2017, 1 commit
  11. 05 Nov 2017, 1 commit
  12. 27 Jul 2017, 1 commit
  13. 21 Jun 2017, 3 commits
  14. 05 Jun 2017, 2 commits
  15. 25 May 2017, 2 commits
  16. 22 May 2017, 5 commits
  17. 09 May 2017, 2 commits
  18. 28 Apr 2017, 1 commit
  19. 25 Apr 2017, 3 commits
  20. 18 Apr 2017, 1 commit
  21. 07 Apr 2017, 3 commits
    • qede: Support XDP adjustment of headers · 059eeb07
      Mintz, Yuval authored
      In case an XDP program is attached, reserve XDP_PACKET_HEADROOM
      bytes at the beginning of the packet for the program to play with.
      
      Modify the XDP logic in the driver to fill in the missing bits and
      re-calculate the offsets and length after the program has finished
      running, so they properly reflect the current state of the packet.
      
      We can then go and remove the limitation of not supporting XDP
      programs where xdp_adjust_head is set.
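      A minimal sketch of that post-run fixup, assuming the xdp_buff API of
      that era (data / data_end / data_hard_start); the wrapper function and
      its parameters are hypothetical, not the driver's actual code.
      
      #include <linux/filter.h>
      #include <linux/mm.h>
      #include <net/xdp.h>
      
      /* Run the attached program and re-derive offset/length afterwards. */
      static u32 run_xdp_prog(struct bpf_prog *prog, struct page *page,
                              u16 *data_offset, u16 *len)
      {
              struct xdp_buff xdp;
              u32 act;
      
              xdp.data_hard_start = page_address(page);
              xdp.data = xdp.data_hard_start + *data_offset; /* past XDP_PACKET_HEADROOM */
              xdp.data_end = xdp.data + *len;
      
              act = bpf_prog_run_xdp(prog, &xdp);
      
              /* bpf_xdp_adjust_head() may have moved the packet start, so the
               * cached offset and length can no longer be trusted; recompute them.
               */
              *data_offset = xdp.data - xdp.data_hard_start;
              *len = xdp.data_end - xdp.data;
              return act;
      }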
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qede: Add support for ingress headroom · 15ed8a47
      Mintz, Yuval authored
      The driver currently doesn't support any headroom; the only 'available'
      space it has at the head of the buffer is due to the placement offset.
      In order to allow [later] support of XDP headroom adjustment, modify
      the ingress flow to properly handle packets that carry such headroom.
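      As a rough illustration of the layout change (only XDP_PACKET_HEADROOM
      is a real kernel constant here; the helper and its parameters are
      illustrative):
      
      #include <linux/bpf.h>  /* XDP_PACKET_HEADROOM */
      
      /* Offset of the packet inside each rx page: the HW placement offset
       * alone normally, plus extra headroom once an XDP program is attached.
       */
      static inline u16 rx_data_offset(bool xdp_attached, u16 placement_offset)
      {
              return placement_offset + (xdp_attached ? XDP_PACKET_HEADROOM : 0);
      }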
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qede: Correct XDP forward unmapping · 89e1afc4
      Mintz, Yuval authored
      The driver currently uses dma_unmap_single() with the address it
      passed to the device for the purpose of forwarding, but the XDP
      transmission buffer was originally a page allocated for the rx-queue.
      The address used for transmission is likely to differ from the page's
      original mapped address due to the placement offset.
      
      This difference is going to get even bigger once we support headroom.
      
      Cache the original mapped address of the page, and use it to unmap
      the buffer when the completion arrives for the XDP-forwarded packet.
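      A hedged sketch of that fix, with an illustrative per-packet
      bookkeeping struct and helper names (not the real qede ones):
      
      #include <linux/dma-mapping.h>
      #include <linux/mm.h>
      
      struct xdp_fwd_buf {                    /* illustrative bookkeeping entry */
              struct page *page;
              dma_addr_t page_mapping;        /* the page's original mapping */
      };
      
      /* At forward time: the TX descriptor is given 'page_mapping + offset',
       * but the address we must later unmap with is the page's own mapping.
       */
      static void xdp_fwd_record(struct xdp_fwd_buf *buf, struct page *page,
                                 dma_addr_t page_mapping)
      {
              buf->page = page;
              buf->page_mapping = page_mapping;
      }
      
      /* On TX completion: unmap with the cached address and release the page. */
      static void xdp_fwd_complete(struct device *dev, struct xdp_fwd_buf *buf)
      {
              dma_unmap_page(dev, buf->page_mapping, PAGE_SIZE, DMA_BIDIRECTIONAL);
              __free_page(buf->page);
      }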
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 15 Mar 2017, 1 commit
  23. 21 Feb 2017, 3 commits
  24. 16 Feb 2017, 1 commit
  25. 09 Jan 2017, 1 commit