1. 27 July 2017, 1 commit
  2. 21 June 2017, 3 commits
  3. 05 June 2017, 2 commits
  4. 25 May 2017, 2 commits
  5. 22 May 2017, 5 commits
  6. 09 May 2017, 2 commits
  7. 28 April 2017, 1 commit
  8. 25 April 2017, 3 commits
  9. 18 April 2017, 1 commit
  10. 07 April 2017, 3 commits
    •
      qede: Support XDP adjustment of headers · 059eeb07
      Mintz, Yuval committed
      In case an XDP program is attached, reserve XDP_PACKET_HEADROOM
      bytes at the beginning of the packet for the program to play
      with.
      
      Modify the XDP logic in the driver to fill in the missing bits
      and re-calculate offsets and length after the program has finished
      running to properly reflect the current status of the packet.
      
      We can then remove the limitation of rejecting XDP programs that
      have xdp_adjust_head set.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      059eeb07
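      A minimal sketch (not the qede patch itself; helper and field names are assumed) of the pattern this commit describes: headroom is reserved in front of the frame, the XDP program runs, and the driver then re-derives its data offset and length from the xdp_buff, since bpf_xdp_adjust_head() may have moved the start of the packet.

      #include <linux/filter.h>

      /* Hypothetical helper: run @prog on a received frame living in a page at
       * @page_addr, with the payload starting @*data_offset bytes in
       * (XDP_PACKET_HEADROOM has already been reserved in front of it).
       * The caller is expected to hold rcu_read_lock().
       */
      static u32 sketch_run_xdp(struct bpf_prog *prog, void *page_addr,
                                u16 *data_offset, u16 *len)
      {
              struct xdp_buff xdp;
              u32 act;

              xdp.data_hard_start = page_addr;
              xdp.data = page_addr + *data_offset;
              xdp.data_end = xdp.data + *len;

              act = bpf_prog_run_xdp(prog, &xdp);

              /* The program may have moved xdp.data via bpf_xdp_adjust_head();
               * fold that back into the driver's offset and length.
               */
              *data_offset = xdp.data - xdp.data_hard_start;
              *len = xdp.data_end - xdp.data;

              return act;
      }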
    •
      qede: Add support for ingress headroom · 15ed8a47
      Mintz, Yuval committed
      The driver currently doesn't support any headroom; the only 'available'
      space it has at the head of the buffer is due to the placement
      offset.
      In order to allow [later] support of XDP adjustment of headroom,
      modify the ingress flow to properly handle a scenario where
      the packets do have headroom.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      15ed8a47
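      A minimal sketch, with assumed structure and field names, of the offset handling this commit introduces: the start of packet data on ingress becomes the per-queue headroom plus the hardware placement offset, rather than the placement offset alone.

      #include <linux/types.h>

      /* Hypothetical per-rx-queue state; rx_headroom is 0 without XDP and
       * becomes XDP_PACKET_HEADROOM once a program is attached.
       */
      struct sketch_rx_queue {
              u16 rx_headroom;
              u16 placement_offset;
      };

      static inline void *sketch_rx_data(const struct sketch_rx_queue *rxq,
                                         void *buf_start)
      {
              /* the headroom now sits in front of the placed packet */
              return buf_start + rxq->rx_headroom + rxq->placement_offset;
      }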
    •
      qede: Correct XDP forward unmapping · 89e1afc4
      Mintz, Yuval committed
      The driver is currently using dma_unmap_single() with the address it
      passed to the device for the purpose of forwarding, but the XDP
      transmission buffer was originally a page allocated for the rx-queue.
      The address handed to the device is likely to differ from the original
      mapped address due to the placement offset.
      
      This difference is going to get even bigger once we support headroom.
      
      Cache the original mapped address of the page, and use it for unmapping
      of the buffer when completion arrives for the XDP forwarded packet.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      89e1afc4
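      A minimal sketch (hypothetical structure and helper names) of the fix described here: the DMA address the page was originally mapped with is cached alongside the buffer, and the completion path unmaps with that cached address instead of the offset address that was handed to the device.

      #include <linux/dma-mapping.h>
      #include <linux/mm.h>

      struct sketch_xdp_tx_buf {
              struct page *page;
              dma_addr_t mapping;     /* original mapping of the whole page */
      };

      static void sketch_xdp_tx_complete(struct device *dev,
                                         struct sketch_xdp_tx_buf *buf)
      {
              /* unmap with the cached page mapping, not the mapping + placement
               * offset (+ headroom) that was written into the Tx descriptor
               */
              dma_unmap_page(dev, buf->mapping, PAGE_SIZE, DMA_BIDIRECTIONAL);
              __free_page(buf->page);
      }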
  11. 15 March 2017, 1 commit
  12. 21 February 2017, 3 commits
  13. 16 February 2017, 1 commit
  14. 09 January 2017, 1 commit
  15. 02 January 2017, 7 commits
  16. 09 December 2016, 1 commit
  17. 04 December 2016, 1 commit
  18. 03 December 2016, 1 commit
    •
      bpf, xdp: drop rcu_read_lock from bpf_prog_run_xdp and move to caller · 366cbf2f
      Daniel Borkmann committed
      After 326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"),
      the rcu_read_lock() in bpf_prog_run_xdp() is superfluous, since callers
      need to hold rcu_read_lock() already to make sure BPF program doesn't
      get released in the background.
      
      Thus, drop it from bpf_prog_run_xdp(), as it can otherwise be misleading.
      Keeping bpf_prog_run_xdp() is still useful, as it allows for grepping
      in XDP-supported drivers and keeps the typecheck on the context intact.
      For mlx4, this means we don't have a double rcu_read_lock() anymore. nfp can
      just make use of bpf_prog_run_xdp(), too. For qede, just move rcu_read_lock()
      out of the helper. When the driver gets atomic replace support, this will
      move to call-sites eventually.
      
      mlx5 needs actual fixing as it has the same issue as described already in
      326fe02d ("net/mlx4_en: protect ring->xdp_prog with rcu_read_lock"),
      that is, we're under RCU bh at this time, BPF programs are released via
      call_rcu(), and call_rcu() != call_rcu_bh(), so we need to properly mark
      read side as programs can get xchg()'ed in mlx5e_xdp_set() without queue
      reset.
      
      Fixes: 86994156 ("net/mlx5e: XDP fast RX drop bpf programs support")
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      366cbf2f
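      A minimal sketch of the calling convention this commit settles on: rcu_read_lock() is taken by the driver around program lookup and execution, while bpf_prog_run_xdp() itself does no locking (the helper and field names here are assumed, not taken from any particular driver).

      #include <linux/filter.h>
      #include <linux/rcupdate.h>

      static u32 sketch_driver_run_xdp(struct bpf_prog __rcu **prog_ptr,
                                       struct xdp_buff *xdp)
      {
              struct bpf_prog *prog;
              u32 act = XDP_PASS;

              rcu_read_lock();        /* pins the program; it is freed via call_rcu() */
              prog = rcu_dereference(*prog_ptr);
              if (prog)
                      act = bpf_prog_run_xdp(prog, xdp);
              rcu_read_unlock();

              return act;
      }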
  19. 01 December 2016, 1 commit
    •
      qede: Add support for XDP_TX · cb6aeb07
      Mintz, Yuval committed
      Add support for forwarding via XDP. Once an eBPF program is attached,
      the driver would allocate & configure a designated transmission queue
      meant solely for forwarding packets. Said queue would share the
      receive-queue's interrupt line, and would have its own Tx statistics.
      
      Infrastructure changes required for this [spread-out through the code]:
       - Determine the DMA direction of the receive buffers based on the presence
      of the eBPF program.
       - Turn the sw Tx ring into a union, as regular/XDP queues have different
      needs for releasing resources after completion [regular requires the SKB,
      XDP requires the transmitted page]; see the sketch below.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cb6aeb07
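      A minimal sketch, with assumed names, of the second infrastructure change listed above: each software Tx ring entry records either the SKB of a regular packet or the page of an XDP-forwarded frame, since that is what the completion handler has to release.

      #include <linux/skbuff.h>

      struct sketch_sw_tx_bd {
              union {
                      struct sk_buff *skb;    /* regular Tx queue: free the SKB */
                      struct page *page;      /* XDP forwarding queue: free/recycle the page */
              };
              dma_addr_t mapping;
      };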