1. 08 Jan 2018 (19 commits)
  2. 07 Jan 2018 (3 commits)
    • Merge branch 'bpf-stacktrace-map-next-key-support' · 9be99bad
      Committed by Daniel Borkmann
      Yonghong Song says:
      
      ====================
      The patch set implements the bpf syscall command BPF_MAP_GET_NEXT_KEY
      for the stacktrace map. Patch #1 is the core implementation
      and Patch #2 adds a bpf test in the tools/testing/selftests/bpf
      directory. Please see the individual patch comments for details.
      
      Changelog:
        v1 -> v2:
         - For an invalid key (non-NULL key pointer), set next_key to the first valid key.
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • tools/bpf: add a bpf selftest for stacktrace · 3ced9b60
      Committed by Yonghong Song
      Added a bpf selftest for stacktrace in test_progs under the tools directory.
      The test populates a hashtable map and a stacktrace map
      at the same time with the same key, the stackid.
      User space then compares both maps, using the BPF_MAP_LOOKUP_ELEM
      and BPF_MAP_GET_NEXT_KEY commands, to ensure that both have
      the same set of keys.
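      
      A minimal sketch of the BPF-program side of such a test (map and attach
      point names here are illustrative, not the literal selftest code; it uses
      the bpf_map_def style of the selftests of this era):
      
      #include <linux/bpf.h>
      #include <linux/perf_event.h>  /* PERF_MAX_STACK_DEPTH */
      #include "bpf_helpers.h"       /* selftests helper header */
      
      struct bpf_map_def SEC("maps") stackmap = {
              .type = BPF_MAP_TYPE_STACK_TRACE,
              .key_size = sizeof(__u32),
              .value_size = sizeof(__u64) * PERF_MAX_STACK_DEPTH,
              .max_entries = 128,
      };
      
      struct bpf_map_def SEC("maps") stackid_hmap = {
              .type = BPF_MAP_TYPE_HASH,
              .key_size = sizeof(__u32),
              .value_size = sizeof(__u32),
              .max_entries = 128,
      };
      
      SEC("tracepoint/sched/sched_switch")   /* illustrative attach point */
      int oncpu(void *ctx)
      {
              __u32 val = 0;
              int key = bpf_get_stackid(ctx, &stackmap, 0);
      
              /* the same stackid keys both maps, so user space can compare */
              if (key >= 0)
                      bpf_map_update_elem(&stackid_hmap, &key, &val, 0);
              return 0;
      }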
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: implement syscall command BPF_MAP_GET_NEXT_KEY for stacktrace map · 16f07c55
      Committed by Yonghong Song
      Currently, the bpf syscall command BPF_MAP_GET_NEXT_KEY is not
      supported for the stacktrace map. However, there are use cases where
      user space wants to enumerate all stacktrace map entries, and there
      the BPF_MAP_GET_NEXT_KEY command is really helpful.
      In addition, if user space wants to delete all map entries
      in order to save memory without closing the
      map file descriptor, BPF_MAP_GET_NEXT_KEY may help improve
      performance when map entries are sparsely populated.
      
      The implementation follows the behavior of the
      BPF_MAP_GET_NEXT_KEY implementation in hashtab. If the user provides
      a NULL key pointer or an invalid key, the first key is returned.
      Otherwise, the first valid key after the input parameter "key"
      is returned, or -ENOENT if no valid key can be found.
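      
      A minimal user-space sketch of the enumeration this enables, using the
      bpf_map_get_next_key() wrapper from tools/lib/bpf (map_fd is assumed to
      be the fd of an open stacktrace map):
      
      #include <errno.h>
      #include <bpf/bpf.h>   /* bpf_map_get_next_key(), tools/lib/bpf */
      
      static int count_keys(int map_fd)
      {
              __u32 key, next_key;
              int n = 0, ret;
      
              /* a NULL (or any invalid) key yields the first valid key */
              ret = bpf_map_get_next_key(map_fd, NULL, &key);
              while (ret == 0) {
                      n++;
                      ret = bpf_map_get_next_key(map_fd, &key, &next_key);
                      key = next_key;
              }
              /* loop exits with errno == ENOENT when enumeration is done */
              return errno == ENOENT ? n : -1;
      }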
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  3. 06 Jan 2018 (18 commits)
    • Merge branch 'xdp_rxq_info' · 11d16edb
      Committed by Alexei Starovoitov
      Jesper Dangaard Brouer says:
      
      ====================
      V4:
      * Added reviewers/acks to patches
      * Fix patch desc in i40e that got out-of-sync with code
      * Add SPDX license headers for the two new files added in patch 14
      
      V3:
      * Fixed bug in virtio_net driver
      * Removed export of xdp_rxq_info_init()
      
      V2:
      * Changed API exposed to drivers
        - Removed invocation of "init" in drivers, and only call "reg"
          (Suggested by Saeed)
        - Allow "reg" to fail and handle this in drivers
          (Suggested by David Ahern)
      * Removed the SINKQ qtype; instead allow registering as "unused"
      * Also fixed some drivers during testing on actual HW (noted in patches)
      
      There is a need for XDP to know more about the RX-queue a given XDP
      frame has arrived on, for both the XDP bpf-prog and the kernel side.
      
      Instead of extending struct xdp_buff each time new info is needed,
      this patchset takes a different approach.  Struct xdp_buff is only
      extended with a pointer to a struct xdp_rxq_info (allowing easier
      extension later).  This xdp_rxq_info contains information related
      to how the driver has set up the individual RX-queues.  This is
      read-mostly information, and all xdp_buff frames (in the driver's
      napi_poll) point to the same xdp_rxq_info (per RX-queue).
      
      We stress this data/cache-line is for read-mostly info.  This is NOT
      for dynamic per-packet info; use the data_meta for such use-cases.
      
      This patchset starts out small, and only exposes the ingress_ifindex and
      the RX-queue index to the XDP/BPF program. Access to tangible info like
      the ingress ifindex and RX queue index is fairly easy to comprehend.
      The other future use-cases could allow XDP frames to be recycled back
      to the originating device driver, by providing info on RX device and
      queue number.
      
      As XDP doesn't have driver feature flags, and eBPF code (due to
      bpf tail-calls) cannot determine which XDP driver invoked it, this
      patchset has to update every driver that supports XDP.
      
      For driver developers (review individual driver patches!):
      
      The xdp_rxq_info is tied to the driver's RX-ring(s). Whenever an RX-ring
      modification requires (temporarily) stopping RX frames, the
      xdp_rxq_info should (likely) also be unregistered and re-registered,
      especially if the pages in the ring are reallocated. Make sure ethtool
      set_channels does the right thing. When replacing the XDP prog,
      re-register the xdp_rxq_info if and only if the RX-ring needs to be
      changed.
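      
      A rough sketch of the per-driver pattern, under the API introduced in
      this series (the driver struct and function names are hypothetical):
      
      #include <net/xdp.h>
      
      struct drv_rx_ring {                  /* hypothetical driver ring */
              struct xdp_rxq_info xdp_rxq;
              struct net_device *netdev;
              u32 queue_index;
      };
      
      static int drv_setup_rx_ring(struct drv_rx_ring *ring)
      {
              /* register once per RX-queue at ring init; "reg" may fail */
              return xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
                                      ring->queue_index);
      }
      
      static void drv_free_rx_ring(struct drv_rx_ring *ring)
      {
              /* unregister whenever the ring (and its pages) go away */
              xdp_rxq_info_unreg(&ring->xdp_rxq);
      }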
      
      I'm Cc'ing the individual driver patches to the registered maintainers.
      
      Testing:
      
      I've only tested the NIC drivers I have hardware for.  The general
      test procedure is to (DUT = Device Under Test):
       (1) run pktgen script pktgen_sample04_many_flows.sh       (against DUT)
       (2) run samples/bpf program xdp_rxq_info --dev $DEV       (on DUT)
       (3) runtime modify number of NIC queues via ethtool -L    (on DUT)
       (4) runtime modify number of NIC ring-size via ethtool -G (on DUT)
      
      Patch based on git tree bpf-next (at commit fb982666):
       https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • samples/bpf: program demonstrating access to xdp_rxq_info · 0fca931a
      Committed by Jesper Dangaard Brouer
      This sample program can be used for monitoring and reporting how many
      packets per second (pps) are received per NIC RX-queue index and which
      CPU processed the packet. In itself it is a useful tool for quickly
      identifying RSS imbalance issues; see below.
      
      The default XDP action is XDP_PASS, in order to provide a monitor
      mode. For benchmarking purposes it is possible to specify other XDP
      actions via the cmdline option --action.
      
      The output below shows an imbalanced RSS case where most RXQs deliver
      to CPU-0 while CPU-2 only gets packets from a single RXQ.  At the
      CPU level the two CPUs process approximately the same
      amount, BUT at the rx_queue_index level it is clear that
      RXQ-2 receives much better service than the other RXQs, which all share CPU-0.
      
      Running XDP on dev:i40e1 (ifindex:3) action:XDP_PASS
      XDP stats       CPU     pps         issue-pps
      XDP-RX CPU      0       900,473     0
      XDP-RX CPU      2       906,921     0
      XDP-RX CPU      total   1,807,395
      
      RXQ stats       RXQ:CPU pps         issue-pps
      rx_queue_index    0:0   180,098     0
      rx_queue_index    0:sum 180,098
      rx_queue_index    1:0   180,098     0
      rx_queue_index    1:sum 180,098
      rx_queue_index    2:2   906,921     0
      rx_queue_index    2:sum 906,921
      rx_queue_index    3:0   180,098     0
      rx_queue_index    3:sum 180,098
      rx_queue_index    4:0   180,082     0
      rx_queue_index    4:sum 180,082
      rx_queue_index    5:0   180,093     0
      rx_queue_index    5:sum 180,093
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: finally expose xdp_rxq_info to XDP bpf-programs · 02dd3291
      Committed by Jesper Dangaard Brouer
      Now all XDP drivers have been updated to set up xdp_rxq_info and assign
      it to xdp_buff->rxq.  Thus, it is now safe to enable access to some
      of the xdp_rxq_info struct members.
      
      This patch extends xdp_md and exposes UAPI to userspace for
      ingress_ifindex and rx_queue_index.  Access happens via bpf
      instruction rewrite, which loads data directly from struct xdp_rxq_info.
      
      * ingress_ifindex maps to xdp_rxq_info->dev->ifindex
      * rx_queue_index  maps to xdp_rxq_info->queue_index
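      
      A minimal XDP program sketch reading the two new fields (illustrative;
      the instruction rewrite turns these into direct loads):
      
      #include <linux/bpf.h>
      #include <linux/types.h>
      #include "bpf_helpers.h"
      
      SEC("xdp")
      int xdp_rxq_demo(struct xdp_md *ctx)
      {
              char fmt[] = "ifindex=%u rxq=%u\n";
              __u32 ifindex = ctx->ingress_ifindex; /* rxq->dev->ifindex */
              __u32 rxq     = ctx->rx_queue_index;  /* rxq->queue_index  */
      
              bpf_trace_printk(fmt, sizeof(fmt), ifindex, rxq);
              return XDP_PASS;
      }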
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • xdp: generic XDP handling of xdp_rxq_info · e817f856
      Committed by Jesper Dangaard Brouer
      Hook points for xdp_rxq_info:
       * reg  : netif_alloc_rx_queues
       * unreg: netif_free_rx_queues
      
      The net_device has some members (num_rx_queues + real_num_rx_queues)
      and a data area (dev->_rx, with struct netdev_rx_queue entries) that were
      primarily used for exporting information about RPS (CONFIG_RPS) queues
      to sysfs (CONFIG_SYSFS).
      
      For generic XDP, extend struct netdev_rx_queue with the xdp_rxq_info,
      and remove some of the CONFIG_SYSFS ifdefs.
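      
      Roughly, the resulting struct shape (a sketch from the description
      above; see include/linux/netdevice.h in the actual patch for the real
      layout):
      
      struct netdev_rx_queue {
      #ifdef CONFIG_RPS
              struct rps_map __rcu            *rps_map;
              struct rps_dev_flow_table __rcu *rps_flow_table;
      #endif
              struct kobject                  kobj;  /* no longer sysfs-only */
              struct net_device               *dev;
              struct xdp_rxq_info             xdp_rxq; /* new: generic XDP */
      } ____cacheline_aligned_in_smp;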
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • virtio_net: setup xdp_rxq_info · 754b8a21
      Committed by Jesper Dangaard Brouer
      The virtio_net driver doesn't dynamically change the RX-ring queue
      layout and backing pages, but instead rejects XDP setup if all the
      conditions for XDP are not met.  Thus, the xdp_rxq_info also remains
      fairly static.  This allows us to simply add the reg/unreg calls to the
      net_device open/close functions.
      
      Driver hook points for xdp_rxq_info:
       * reg  : virtnet_open
       * unreg: virtnet_close
      
      V3:
       - bugfix, also setup xdp.rxq in receive_mergeable()
       - Tested bpf-sample prog inside guest on a virtio_net device
      
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: virtualization@lists.linux-foundation.org
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • tun: setup xdp_rxq_info · 8bf5c4ee
      Committed by Jesper Dangaard Brouer
      Driver hook points for xdp_rxq_info:
       * reg  : tun_attach
       * unreg: __tun_detach
      
      I've done some manual testing of this tun driver, but I would
      appreciate good review and someone else running their use-case tests,
      as I'm not 100% sure I understand the tfile->detached semantics.
      
      V2: Removed the skb_array_cleanup() call from V1 by request from Jason Wang.
      
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • thunderx: setup xdp_rxq_info · 27e95e36
      Committed by Jesper Dangaard Brouer
      This driver uses a bool scheme for "enable"/"disable" when setting up
      different resources.  Thus, the hook points for xdp_rxq_info is done
      in the same function call nicvf_rcv_queue_config().  This is activated
      through enable/disable via nicvf_config_data_transfer(), which is tied
      into nicvf_stop()/nicvf_open().
      
      Extend the driver packet-handler call path nicvf_rcv_pkt_handler() with
      a pointer to the given struct rcv_queue, in order to access the
      xdp_rxq_info data area (in nicvf_xdp_rx()).
      
      V2: The driver has no proper error path for a failed XDP RX-queue info
      reg, as nicvf_rcv_queue_config() is a void function.
      
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Sunil Goutham <sgoutham@cavium.com>
      Cc: Robert Richter <rric@kernel.org>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • nfp: setup xdp_rxq_info · 7f1c684a
      Committed by Jesper Dangaard Brouer
      Driver hook points for xdp_rxq_info:
       * reg  : nfp_net_rx_ring_alloc
       * unreg: nfp_net_rx_ring_free
      
      In struct nfp_net_rx_ring, member @size was moved into a hole on 64-bit.
      Thus, the size remains the same after adding member @xdp_rxq.
      
      Cc: oss-drivers@netronome.com
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Cc: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bnxt_en: setup xdp_rxq_info · 96a8604f
      Committed by Jesper Dangaard Brouer
      Driver hook points for xdp_rxq_info:
       * reg  : bnxt_alloc_rx_rings
       * unreg: bnxt_free_rx_rings
      
      This driver should be updated to re-register when changing the
      allocation mode of the RX rings.
      
      Tested on actual hardware.
      
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: Michael Chan <michael.chan@broadcom.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • mlx4: setup xdp_rxq_info · ae75415d
      Committed by Jesper Dangaard Brouer
      Driver hook points for xdp_rxq_info:
       * reg  : mlx4_en_create_rx_ring
       * unreg: mlx4_en_destroy_rx_ring
      
      Tested on actual hardware.
      
      Cc: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • xdp/qede: setup xdp_rxq_info and intro xdp_rxq_info_is_reg · c0124f32
      Committed by Jesper Dangaard Brouer
      The driver code qede_free_fp_array() depends on kfree() accepting a
      NULL pointer. This stems from the qede_alloc_fp_array()
      function, which (kz)allocs memory for either fp->txq or fp->rxq.
      This also simplifies the error handling code in case of memory
      allocation failures, but xdp_rxq_info_unreg needs to know the difference.
      
      Introduce xdp_rxq_info_is_reg() to handle the case where a memory
      allocation fails: the failure path is detected by seeing that the
      xdp_rxq_info was not registered yet, which first happens after
      successful allocation in qede_init_fp().
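      
      A sketch of the resulting free-path logic (illustrative, not the
      literal qede code):
      
      struct drv_rxq {                      /* hypothetical driver struct */
              struct xdp_rxq_info xdp_rxq;
              /* ... rx descriptors, pages, etc. ... */
      };
      
      static void drv_free_rxq(struct drv_rxq *rxq)
      {
              if (!rxq)
                      return;               /* mirrors kfree(NULL) tolerance */
      
              /* only unreg if the alloc+init path actually got that far */
              if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
                      xdp_rxq_info_unreg(&rxq->xdp_rxq);
              kfree(rxq);
      }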
      
      Driver hook points for xdp_rxq_info:
       * reg  : qede_init_fp
       * unreg: qede_free_fp_array
      
      Tested on actual hardware with samples/bpf program.
      
      V2: The driver has no proper error path for a failed XDP RX-queue info
      reg, as qede_init_fp() is a void function.
      
      Cc: everest-linux-l2@cavium.com
      Cc: Ariel Elior <Ariel.Elior@cavium.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • ixgbe: setup xdp_rxq_info · 99ffc5ad
      Committed by Jesper Dangaard Brouer
      Driver hook points for xdp_rxq_info:
       * reg  : ixgbe_setup_rx_resources()
       * unreg: ixgbe_free_rx_resources()
      
      Tested on actual hardware.
      
      V2: Fix ixgbe_set_ringparam, clear xdp_rxq_info in temp_ring
      
      Cc: intel-wired-lan@lists.osuosl.org
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Alexander Duyck <alexander.duyck@gmail.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • i40e: setup xdp_rxq_info · 87128824
      Committed by Jesper Dangaard Brouer
      The i40e driver has a special "FDIR" RX-ring (I40E_VSI_FDIR) which is
      a sideband channel for configuring/updating the flow director tables.
      This (i40e_vsi_)type does not invoke XDP-ebpf code.
      
      As suggested by Björn (V2): Instead of treating the I40E_VSI_FDIR
      RX-ring as a special case, reverse the logic and only select RX-rings
      of type I40E_VSI_MAIN to register xdp_rxq_info for (see the sketch
      after the hook points below).
      
      Driver hook points for xdp_rxq_info:
       * reg  : i40e_setup_rx_descriptors (via i40e_vsi_setup_rx_resources)
       * unreg: i40e_free_rx_resources    (via i40e_vsi_free_rx_resources)
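      
      A sketch of the selection logic at the registration site (field names
      follow i40e conventions; see the actual i40e_setup_rx_descriptors()
      for the real code):
      
      /* inside the ring setup path, illustrative fragment */
      int err = 0;
      
      /* only main VSI rings run XDP; skip I40E_VSI_FDIR and friends */
      if (rx_ring->vsi->type == I40E_VSI_MAIN)
              err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
                                     rx_ring->queue_index);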
      
      Tested on actual hardware with samples/bpf program.
      
      V2: Fixed bug in i40e_set_ringparam (memset zero) + match on I40E_VSI_MAIN.
      V4: Update patch desc that got out-of-sync with code.
      
      Cc: intel-wired-lan@lists.osuosl.org
      Cc: Björn Töpel <bjorn.topel@intel.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Paul Menzel <pmenzel@molgen.mpg.de>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • xdp/mlx5: setup xdp_rxq_info · 0ddf5432
      Committed by Jesper Dangaard Brouer
      The mlx5 driver has a special drop-RQ queue (one per interface) that
      simply drops all incoming traffic. It helps the driver keep other HW
      objects (flow steering) alive upon down/up operations.  It is
      temporarily pointed to by flow steering objects during the interface
      setup, and when the interface is down. It lacks many fields that are set
      in a regular RQ (for example, its state is never switched to
      MLX5_RQC_STATE_RDY). (Thanks to Tariq Toukan for the explanation.)
      
      The XDP RX-queue info for this drop-RQ is marked as unused, which
      allows us to use the same takedown/free code path as for other RX-queues.
      
      Driver hook points for xdp_rxq_info:
       * reg   : mlx5e_alloc_rq()
       * unused: mlx5e_alloc_drop_rq()
       * unreg : mlx5e_free_rq()
      
      Tested on actual hardware with samples/bpf program.
      
      Cc: Saeed Mahameed <saeedm@mellanox.com>
      Cc: Matan Barak <matanb@mellanox.com>
      Cc: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • xdp: base API for new XDP rx-queue info concept · aecd67b6
      Committed by Jesper Dangaard Brouer
      This patch only introduces the core data structures and API functions.
      All XDP-enabled drivers must use the API before this info can be used.
      
      There is a need for XDP to know more about the RX-queue a given XDP
      frame has arrived on, for both the XDP bpf-prog and the kernel side.
      
      Instead of extending xdp_buff each time new info is needed, the patch
      creates a separate read-mostly struct xdp_rxq_info that contains this
      info.  We stress this data/cache-line is for read-only info.  This is
      NOT for dynamic per-packet info; use the data_meta for such use-cases.
      
      The performance advantage is that this info can be set up at RX-ring
      init time, instead of updating N members in xdp_buff.  A possible
      (driver level) micro optimization is that the xdp_buff->rxq assignment
      could be done once per XDP/NAPI loop.  The extra pointer deref only
      happens for programs needing access to this info (thus, no slowdown to
      existing use-cases).
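      
      The core of the concept, sketched from this description (the reg_state
      member is internal bookkeeping; see include/net/xdp.h in the patch):
      
      /* read-mostly, one instance per RX-queue, set up at ring init */
      struct xdp_rxq_info {
              struct net_device *dev;
              u32 queue_index;
              u32 reg_state;
      } ____cacheline_aligned;   /* perf critical, avoid false-sharing */
      
      /* xdp_buff only gains one pointer, shared by all frames of a ring */
      struct xdp_buff {
              void *data;
              void *data_end;
              void *data_meta;
              void *data_hard_start;
              struct xdp_rxq_info *rxq;
      };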
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • nfp: add basic multicast filtering · d0adb51e
      Committed by Jakub Kicinski
      We currently always pass all multicast traffic through.
      Only set L2MC when actually needed.  Since the driver
      was not making use of the capability to filter out mcast
      frames, some FW projects don't implement it any more.
      Don't warn users if the capability is not present (as we
      do for the promisc flag).  The lack of the L2MC capability is
      assumed to mean all multicast traffic goes through.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'rds-use-RCU-between-work-enqueue-and-connection-teardown' · ad521763
      Committed by David S. Miller
      Sowmini Varadhan says:
      
      ====================
      rds: use RCU between work-enqueue and connection teardown
      
      This patchset follows up on the root-cause mentioned in
      https://www.spinics.net/lists/netdev/msg472849.html
      
      Patch1 implements some code refactoring that was suggested
      as an enhancement in http://patchwork.ozlabs.org/patch/843157/
      It replaces the c_destroy_in_prog bit in rds_connection with
      an atomically managed flag in rds_conn_path.
      
      Patch2 builds on Patch1 and uses RCU to make sure that
      work is only enqueued if the connection destroy is not already
      in progress: the test-flag-and-enqueue is done under rcu_read_lock,
      while destroy first sets the flag, uses synchronize_rcu to
      wait for existing reader threads to complete, and then starts
      all the work-cancellation.
      
      Since I have not been able to reproduce the original stack traces
      reported by syzbot, and these are fixes for a race condition that
      are based on code inspection, I am not marking these as reported-by
      at this time.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rds: use RCU to synchronize work-enqueue with connection teardown · 3db6e0d1
      Committed by Sowmini Varadhan
      rds_sendmsg() can enqueue work on cp_send_w from process context, but
      it should not enqueue this work if connection teardown has commenced
      (else we risk enqueuing work after rds_conn_path_destroy() has assumed
      that all work has been cancelled/flushed).
      
      Similarly, some other functions like rds_cong_queue_updates
      and rds_tcp_data_ready are called in softirq context, and may end
      up enqueuing work on rds_wq after rds_conn_path_destroy() has assumed
      that all workqs are quiesced.
      
      Check the RDS_DESTROY_PENDING bit and use rcu synchronization to avoid
      all these races.
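      
      A simplified sketch of the pattern (the real call sites are
      rds_sendmsg(), rds_cong_queue_updates(), rds_tcp_data_ready(), etc.):
      
      /* enqueue side -- process or softirq context */
      static void sketch_queue_send_work(struct rds_conn_path *cp)
      {
              rcu_read_lock();
              if (!test_bit(RDS_DESTROY_PENDING, &cp->cp_flags))
                      queue_delayed_work(rds_wq, &cp->cp_send_w, 0);
              rcu_read_unlock();
      }
      
      /* teardown side -- rds_conn_path_destroy() */
      static void sketch_conn_path_destroy(struct rds_conn_path *cp)
      {
              set_bit(RDS_DESTROY_PENDING, &cp->cp_flags);
              synchronize_rcu();  /* wait out in-flight enqueue attempts */
              cancel_delayed_work_sync(&cp->cp_send_w);
              /* ... remaining teardown, now safe from late enqueues ... */
      }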
      Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>