1. 24 November 2022, 1 commit
  2. 28 October 2022, 1 commit
    • net: enetc: survive memory pressure without crashing · 84ce1ca3
      Committed by Vladimir Oltean
      Under memory pressure, enetc_refill_rx_ring() may fail, and when called
      during the enetc_open() -> enetc_setup_rxbdr() procedure, this is not
      checked for.
      
      An extreme case of memory pressure will result in exactly zero buffers
      being allocated for the RX ring, and in such a case it is expected that
      hardware drops all RX packets due to lack of buffers.
      
      This does not happen, because the reset-default value of the consumer
      and producer index is 0, which makes the ENETC think that all buffers
      have been initialized and that it owns them (when in reality none were).
      
      The hardware guide explains this best:
      
      | Configure the receive ring producer index register RBaPIR with a value
      | of 0. The producer index is initially configured by software but owned
      | by hardware after the ring has been enabled. Hardware increments the
      | index when a frame is received which may consume one or more BDs.
      | Hardware is not allowed to increment the producer index to match the
      | consumer index since it is used to indicate an empty condition. The ring
      | can hold at most RBLENR[LENGTH]-1 received BDs.
      |
      | Configure the receive ring consumer index register RBaCIR. The
      | consumer index is owned by software and updated during operation of
      | the BD ring by software, to indicate that any receive data occupied
      | in the BD has been processed and it has been prepared for new data.
      | - If consumer index and producer index are initialized to the same
      |   value, it indicates that all BDs in the ring have been prepared and
      |   hardware owns all of the entries.
      | - If consumer index is initialized to producer index plus N, it would
      |   indicate N BDs have been prepared. Note that hardware cannot start if
      |   only a single buffer is prepared due to the restrictions described in
      |   (2).
      | - Software may write consumer index to match producer index anytime
      |   while the ring is operational to indicate all received BDs prior have
      |   been processed and new BDs prepared for hardware.
      
      Normally, the value of rx_ring->rcir (consumer index) is brought in sync
      with the rx_ring->next_to_use software index, but this only happens if
      page allocation ever succeeded.
      
      When PI==CI==0, the hardware appears to receive frames and write them to
      DMA address 0x0 (?!), then set the READY bit in the BD.
      
      The enetc_clean_rx_ring() function (and its XDP derivative) is naturally
      not prepared to handle such a condition. It will attempt to process
      those frames using the rx_swbd structure associated with index i of the
      RX ring, but that structure is not fully initialized (enetc_new_page()
      does all of that). So what happens next is undefined behavior.
      
      To operate using no buffer, we must initialize the CI to PI + 1, which
      will block the hardware from advancing the CI any further, and drop
      everything.
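
      As a sketch of the fix in the ring setup path (the accessor and
      register names below mirror the driver's conventions, but treat them
      as assumptions):

        /* Prime the indices so that PI != CI even if
         * enetc_refill_rx_ring() could not allocate a single buffer.
         * With CI == PI + 1, hardware sees an empty ring and drops all
         * RX frames instead of DMA-writing them to address 0x0.
         */
        enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0); /* producer, owned by HW */
        enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, 1); /* consumer = PI + 1 */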
      
      The issue was seen while adding support for zero-copy AF_XDP sockets,
      where buffer memory comes from user space, which can even decide to
      supply no buffers at all (example: "xdpsock --txonly"). However, the bug
      is present also with the network stack code, even though it would take a
      very determined person to trigger a page allocation failure at the
      perfect time (a series of ifup/ifdown under memory pressure should
      eventually reproduce it given enough retries).
      
      Fixes: d4fd0404 ("enetc: Introduce basic PF and VF ENETC ethernet drivers")
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Link: https://lore.kernel.org/r/20221027182925.3256653-1-vladimir.oltean@nxp.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  3. 30 September 2022, 1 commit
  4. 29 September 2022, 1 commit
  5. 21 September 2022, 2 commits
    • net: enetc: deny offload of tc-based TSN features on VF interfaces · 5641c751
      Committed by Vladimir Oltean
      TSN features on the ENETC (taprio, cbs, gate, police) are configured
      through a mix of command BD ring messages and port registers:
      enetc_port_rd(), enetc_port_wr().
      
      Port registers are a region of the ENETC memory map that is accessible
      only from the PCIe Physical Function; they are not accessible from the
      Virtual Functions.
      
      Moreover, attempting to access these registers crashes the kernel:
      
      $ echo 1 > /sys/bus/pci/devices/0000\:00\:00.0/sriov_numvfs
      pci 0000:00:01.0: [1957:ef00] type 00 class 0x020001
      fsl_enetc_vf 0000:00:01.0: Adding to iommu group 15
      fsl_enetc_vf 0000:00:01.0: enabling device (0000 -> 0002)
      fsl_enetc_vf 0000:00:01.0 eno0vf0: renamed from eth0
      $ tc qdisc replace dev eno0vf0 root taprio num_tc 8 map 0 1 2 3 4 5 6 7 \
      	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 0 \
      	sched-entry S 0x7f 900000 sched-entry S 0x80 100000 flags 0x2
      Unable to handle kernel paging request at virtual address ffff800009551a08
      Internal error: Oops: 96000007 [#1] PREEMPT SMP
      pc : enetc_setup_tc_taprio+0x170/0x47c
      lr : enetc_setup_tc_taprio+0x16c/0x47c
      Call trace:
       enetc_setup_tc_taprio+0x170/0x47c
       enetc_setup_tc+0x38/0x2dc
       taprio_change+0x43c/0x970
       taprio_init+0x188/0x1e0
       qdisc_create+0x114/0x470
       tc_modify_qdisc+0x1fc/0x6c0
       rtnetlink_rcv_msg+0x12c/0x390
      
      Split enetc_setup_tc() into separate functions for the PF and for the
      VF drivers. Also remove enetc_qos.o from being included into
      enetc-vf.ko, since it serves absolutely no purpose there.
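
      A sketch of the resulting split, assuming function names derived
      from the commit description:

        static int enetc_pf_setup_tc(struct net_device *ndev,
                                     enum tc_setup_type type, void *type_data)
        {
                switch (type) {
                case TC_SETUP_QDISC_MQPRIO:
                        return enetc_setup_tc_mqprio(ndev, type_data);
                case TC_SETUP_QDISC_TAPRIO:
                        return enetc_setup_tc_taprio(ndev, type_data);
                case TC_SETUP_QDISC_CBS:
                        return enetc_setup_tc_cbs(ndev, type_data);
                default:
                        return -EOPNOTSUPP;
                }
        }

        static int enetc_vf_setup_tc(struct net_device *ndev,
                                     enum tc_setup_type type, void *type_data)
        {
                switch (type) {
                case TC_SETUP_QDISC_MQPRIO:
                        return enetc_setup_tc_mqprio(ndev, type_data);
                default:
                        /* TSN offloads need port registers, which the
                         * VF cannot access
                         */
                        return -EOPNOTSUPP;
                }
        }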
      
      Fixes: 34c6adf1 ("enetc: Configure the Time-Aware Scheduler via tc-taprio offload")
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Link: https://lore.kernel.org/r/20220916133209.3351399-2-vladimir.oltean@nxp.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: enetc: move enetc_set_psfp() out of the common enetc_set_features() · fed38e64
      Committed by Vladimir Oltean
      The VF netdev driver shouldn't respond to changes in the NETIF_F_HW_TC
      flag; only PFs should. Moreover, TSN-specific code should go to
      enetc_qos.c, which should not be included in the VF driver.
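
      A sketch of the resulting PF-only handling (function names are
      assumptions based on the commit description):

        static int enetc_pf_set_features(struct net_device *ndev,
                                         netdev_features_t features)
        {
                netdev_features_t changed = ndev->features ^ features;
                int err;

                if (changed & NETIF_F_HW_TC) {
                        err = enetc_set_psfp(ndev,
                                             !!(features & NETIF_F_HW_TC));
                        if (err)
                                return err;
                }

                enetc_set_features(ndev, features); /* common PF/VF part */

                return 0;
        }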
      
      Fixes: 79e49982 ("net: enetc: add hw tc hw offload features for PSPF capability")
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Link: https://lore.kernel.org/r/20220916133209.3351399-1-vladimir.oltean@nxp.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  6. 12 May 2022, 1 commit
    • net: enetc: count the tc-taprio window drops · 285e8ded
      Committed by Po Liu
      The enetc scheduler for IEEE 802.1Qbv has 2 options (depending on
      PTGCR[TG_DROP_DISABLE]) when we attempt to send an oversized packet
      which will never fit in its allotted time slot for its traffic class:
      either block the entire port due to head-of-line blocking, or drop the
      packet and set a bit in the writeback format of the transmit buffer
      descriptor, allowing other packets to be sent.
      
      We obviously choose the second option in the driver, but we do not
      detect the drop condition, so from the perspective of the network stack,
      the packet is sent and no error counter is incremented.
      
      This change checks the writeback of the TX BD when tc-taprio is enabled,
      and increments a specific ethtool statistics counter and a generic
      "tx_dropped" counter in ndo_get_stats64.
      Signed-off-by: Po Liu <Po.Liu@nxp.com>
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  7. 10 January 2022, 1 commit
  8. 14 December 2021, 1 commit
  9. 22 October 2021, 2 commits
  10. 14 October 2021, 1 commit
  11. 13 October 2021, 1 commit
  12. 08 October 2021, 2 commits
  13. 19 September 2021, 2 commits
    • enetc: Fix uninitialized struct dim_sample field usage · 9f7afa05
      Committed by Claudiu Manoil
      The only struct dim_sample member that does not get initialized by
      dim_update_sample() is comp_ctr. (There is a dedicated API to
      initialize comp_ctr, dim_update_sample_with_comps(), and it is
      currently used only for RDMA.) comp_ctr is used to compute
      curr_stats->cmps and curr_stats->cpe_ratio (see dim_calc_stats()),
      which in turn are consumed by the rdma_dim_*() API. Therefore,
      functionally, the net_dim*() API consumers are not affected.
      Nevertheless, fix the computation of statistics based on an
      uninitialized variable, even if the statistics in question are not
      used at the moment.
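
      The fix itself is a one-liner; a sketch, with the driver's field
      names assumed:

        struct dim_sample dim_sample = {}; /* zero-init also covers
                                            * comp_ctr */

        dim_update_sample(v->comp_cnt, v->rx_ring.stats.packets,
                          v->rx_ring.stats.bytes, &dim_sample);
        net_dim(&v->rx_dim, dim_sample);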
      
      Fixes: ae0e6a5d ("enetc: Add adaptive interrupt coalescing")
      Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • enetc: Fix illegal access when reading affinity_hint · 7237a494
      Committed by Claudiu Manoil
      irq_set_affinity_hint() stores a reference to the cpumask_t
      parameter in the irq descriptor, and that reference can be accessed
      later from irq_affinity_hint_proc_show(). Since the cpu_mask
      parameter passed to irq_set_affinity_hint() has only temporary
      storage (it is on the stack), later accesses to it are illegal.
      Thus, reads from the corresponding procfs affinity_hint file can
      result in a paging request oops.
      
      The issue is fixed by the get_cpu_mask() helper, which provides
      permanent storage for the cpumask_t parameter.
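
      A sketch of the change (the CPU derivation is illustrative):

        int cpu = i % num_online_cpus();

        /* before: irq_set_affinity_hint(irq, &cpu_mask), where cpu_mask
         * was an on-stack cpumask_t that became dangling on return
         */
        irq_set_affinity_hint(irq, get_cpu_mask(cpu)); /* permanent */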
      
      Fixes: d4fd0404 ("enetc: Introduce basic PF and VF ENETC ethernet drivers")
      Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 16 September 2021, 1 commit
  15. 24 April 2021, 1 commit
    • enetc: fix locking for one-step timestamping packet transfer · 7ce9c3d3
      Committed by Yangbo Lu
      The previous patch, which added support for PTP Sync packet one-step
      timestamping, described its packet handling logic as below in the
      commit message:
      
      - Transmit the packet immediately if no other one-step packet is in
        transfer, or queue it to the skb queue if there is already one in
        transfer. test_and_set_bit_lock() is used here to lock and check
        state.
      - Start a work item when the transfer completes on hardware, to
        release the bit lock and to send one skb from the skb queue, if
        any.
      
      The description was correct, but there was a mistake in the
      implementation: the locking (test_and_set_bit_lock()) should be done
      in enetc_start_xmit(), which may be called by the worker, rather
      than in enetc_xmit(). Otherwise, the worker calling
      enetc_start_xmit() after the bit lock is released cannot take the
      lock again for the transfer.
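
      A sketch of the corrected placement; enetc_skb_is_onestep() and
      enetc_map_and_xmit() are illustrative names:

        static netdev_tx_t enetc_start_xmit(struct sk_buff *skb,
                                            struct net_device *ndev)
        {
                struct enetc_ndev_priv *priv = netdev_priv(ndev);

                /* both the stack and the worker pass through here, so
                 * the worker can win the lock once it is released
                 */
                if (enetc_skb_is_onestep(skb) &&
                    test_and_set_bit_lock(ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS,
                                          &priv->flags)) {
                        skb_queue_tail(&priv->tx_skbs, skb);
                        return NETDEV_TX_OK;
                }

                return enetc_map_and_xmit(priv, skb);
        }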
      
      Fixes: 7294380c ("enetc: support PTP Sync packet one-step timestamping")
      Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
      Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 17 April 2021, 9 commits
    • net: enetc: apply the MDIO workaround for XDP_REDIRECT too · 24e39309
      Committed by Vladimir Oltean
      Described in fd5736bf ("enetc: Workaround for MDIO register access
      issue") is a workaround for a hardware bug that requires a register
      access of the MDIO controller to never happen concurrently with a
      register access of a port PF. To avoid that, a mutual exclusion scheme
      with rwlocks was implemented - the port PF accessors are the 'read'
      side, and the MDIO accessors are the 'write' side.
      
      When we do XDP_REDIRECT between two ENETC interfaces, all is fine
      because the MDIO lock is already taken from the NAPI poll loop.
      
      But when the ingress interface is not ENETC, just the egress is, the
      MDIO lock is not taken, so we might access the port PF registers
      concurrently with MDIO, which will make the link flap due to wrong
      values returned from the PHY.
      
      To avoid this, let's just slap an enetc_lock_mdio/enetc_unlock_mdio
      at the beginning and end of enetc_xdp_xmit. The fact that the MDIO
      lock is designed as an rwlock is important here, because the read
      side is reentrant (that is one of the main reasons why we chose it).
      Usually, the way we benefit from its reentrancy is by running the
      data path concurrently on both CPUs, but in this case, we benefit
      from the reentrancy by taking the lock even when it is already taken
      (and that is the situation where ENETC is both the ingress and the
      egress interface for XDP_REDIRECT, which was fine before and still
      is fine now).
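
      A sketch; enetc_lock_mdio()/enetc_unlock_mdio() are the driver's
      existing helpers, while the inner transmit helper is illustrative:

        static int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
                                  struct xdp_frame **frames, u32 flags)
        {
                int xdp_tx_frm_cnt;

                enetc_lock_mdio(); /* read side, reentrant */
                xdp_tx_frm_cnt = enetc_xdp_xmit_frames(ndev, num_frames,
                                                       frames, flags);
                enetc_unlock_mdio();

                return xdp_tx_frm_cnt;
        }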
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: fix buffer leaks with XDP_TX enqueue rejections · 92ff9a6e
      Committed by Vladimir Oltean
      If the TX ring is congested, enetc_xdp_tx() returns false for the
      current XDP frame (represented as an array of software BDs).
      
      This array of software TX BDs is constructed in enetc_rx_swbd_to_xdp_tx_swbd
      from software BDs freshly cleaned from the RX ring. The issue is that we
      scrub the RX software BDs too soon, more precisely before we know that
      we can enqueue the TX BDs successfully into the TX ring.
      
      If we can't enqueue them (and enetc_xdp_tx returns false), we call
      enetc_xdp_drop which attempts to recycle the buffers held by the RX
      software BDs. But because we scrubbed those RX BDs already, two things
      happen:
      
      (a) we leak their memory
      (b) we populate the RX software BD ring with an all-zero rx_swbd
          structure, which makes the buffer refill path allocate more memory.
      
      enetc_refill_rx_ring
      -> if (unlikely(!rx_swbd->page))
         -> enetc_new_page
      
      That is a recipe for fast OOM.
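
      A sketch of the corrected ordering (the scrub helper name is
      illustrative):

        if (enetc_xdp_tx(tx_ring, xdp_tx_arr, xdp_tx_bd_cnt)) {
                /* enqueue accepted: only now detach the pages from
                 * the RX software BDs
                 */
                enetc_rx_swbd_scrub_range(rx_ring, orig_i, i);
                tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
        } else {
                /* enqueue rejected: the rx_swbd entries are intact,
                 * so their buffers can be recycled, not leaked
                 */
                enetc_xdp_drop(rx_ring, orig_i, i);
                tx_ring->stats.xdp_tx_drops++;
        }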
      
      Fixes: 7ed2bc80 ("net: enetc: add support for XDP_TX")
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: handle the invalid XDP action the same way as XDP_DROP · 975acc83
      Committed by Vladimir Oltean
      When the XDP program returns an invalid action, we should free the RX
      buffer.
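
      A sketch of the dispatch, with the invalid verdict funneled into
      the drop path (variable names are illustrative;
      bpf_warn_invalid_xdp_action() took a single argument in kernels of
      this era):

        switch (xdp_act) {
        default:
                bpf_warn_invalid_xdp_action(xdp_act);
                fallthrough;
        case XDP_ABORTED:
                trace_xdp_exception(rx_ring->ndev, prog, xdp_act);
                fallthrough;
        case XDP_DROP:
                enetc_xdp_drop(rx_ring, orig_i, i); /* recycle buffers */
                break;
        /* XDP_PASS, XDP_TX, XDP_REDIRECT elided */
        }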
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: use dedicated TX rings for XDP · 7eab503b
      Committed by Vladimir Oltean
      It is possible for one CPU to perform TX hashing (see netdev_pick_tx)
      between the 8 ENETC TX rings, and the TX hashing to select TX queue 1.
      
      At the same time, it is possible for the other CPU to already use TX
      ring 1 for XDP (either XDP_TX or XDP_REDIRECT). Since there is no mutual
      exclusion between XDP and the network stack, we run into an issue
      because the ENETC TX procedure is not reentrant.
      
      The obvious approach would be to just make XDP take the lock of the
      network stack's TX queue corresponding to the ring it's about to enqueue
      in.
      
      For XDP_REDIRECT, this is quite straightforward, a lock at the beginning
      and end of enetc_xdp_xmit() should do the trick.
      
      But for XDP_TX, it's a bit more complicated. For one, we do TX batching
      all by ourselves for frames with the XDP_TX verdict. This is something
      we would like to keep the way it is, for performance reasons. But
      batching means that the network stack's lock should be kept from the
      first enqueued XDP_TX frame and until we ring the doorbell. That is
      mostly fine, except for cases when in the same NAPI loop we have mixed
      XDP_TX and XDP_REDIRECT frames. So if enetc_xdp_xmit() gets called while
      we are holding the lock from the RX NAPI, then bam, deadlock. The naive
      answer could be 'just flush the XDP_TX frames first, then release the
      network stack's TX queue lock, then call xdp_do_flush_map()'. But even
      xdp_do_redirect() is capable of flushing the batched XDP_REDIRECT
      frames, so unless we unlock/relock the TX queue around xdp_do_redirect(),
      there simply isn't any clean way to protect XDP_TX from concurrent
      network stack .ndo_start_xmit() on another CPU.
      
      So we need to take a different approach, and that is to reserve two
      rings for the sole use of XDP. We leave TX rings
      0..ndev->real_num_tx_queues-1 to be handled by the network stack, and we
      pick them from the end of the priv->tx_ring array.
      
      We make an effort to keep the mapping done by enetc_alloc_msix() which
      decides which CPU handles the TX completions of which TX ring in its
      NAPI poll. So the XDP TX ring of CPU 0 is handled by TX ring 6, and the
      XDP TX ring of CPU 1 is handled by TX ring 7.
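
      A sketch of the ring selection under these assumptions (two CPUs,
      eight TX rings; field names are illustrative):

        static struct enetc_bdr *enetc_xdp_tx_ring(struct enetc_ndev_priv *priv)
        {
                int cpu = smp_processor_id();
                int index = priv->num_tx_rings - num_possible_cpus() + cpu;

                /* CPU 0 -> ring 6, CPU 1 -> ring 7; rings 0..5 remain
                 * with the network stack (real_num_tx_queues)
                 */
                return priv->tx_ring[index];
        }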
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: remove unneeded xdp_do_flush_map() · a6369fe6
      Committed by Vladimir Oltean
      xdp_do_redirect already contains:
      -> dev_map_enqueue
         -> __xdp_enqueue
            -> bq_enqueue
               -> bq_xmit_all // if we have more than 16 frames
      
      So the logic from enetc will never be hit, because ENETC_DEFAULT_TX_WORK
      is 128.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: stop XDP NAPI processing when build_skb() fails · 8f50d8bb
      Committed by Vladimir Oltean
      When the code path below fails:
      
      enetc_clean_rx_ring_xdp // XDP_PASS
      -> enetc_build_skb
         -> enetc_map_rx_buff_to_skb
            -> build_skb
      
      enetc_clean_rx_ring_xdp will 'break', but that 'break' instruction
      isn't strong enough to actually break out of the NAPI poll loop,
      just out of the switch/case statement for XDP actions. So we
      increment rx_frm_cnt and move on to the next frames, minding our own
      business.
      
      Instead let's do what the skb NAPI poll function does, and break the
      loop now, waiting for the memory pressure to go away. Otherwise the next
      calls to build_skb() are likely to fail too.
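
      A sketch of the change inside the XDP action switch (the helper
      signature is illustrative):

        case XDP_PASS:
                skb = enetc_build_skb(rx_ring, bd_status, &rxbd, &i,
                                      &cleaned_cnt, ENETC_RXB_DMA_SIZE_XDP);
                if (unlikely(!skb))
                        goto out; /* leave the NAPI poll loop entirely;
                                   * retry when the pressure eases */

                napi_gro_receive(napi, skb);
                break;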
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: recycle buffers for frames with RX errors · 672f9a21
      Committed by Vladimir Oltean
      When receiving a frame with errors, currently we do nothing with it (we
      don't construct an skb or an xdp_buff), we just exit the NAPI poll loop.
      
      Let's put the buffer back into the RX ring (similar to XDP_DROP).
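
      A sketch, assuming a hypothetical helper that walks the error
      frame's BDs and calls enetc_put_rx_buff() on each:

        if (unlikely(bd_status & ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
                /* put every buffer of the frame back into the RX
                 * ring, just as XDP_DROP does
                 */
                enetc_put_rx_buffers_of_frame(rx_ring, &i, &cleaned_cnt);
                rx_ring->ndev->stats.rx_dropped++;
                continue;
        }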
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: rename the buffer reuse helpers · 6b04830d
      Committed by Vladimir Oltean
      enetc_put_xdp_buff has nothing to do with XDP, frankly, it is just a
      helper to populate the recycle end of the shadow RX BD ring
      (next_to_alloc) with a given buffer.
      
      On the other hand, enetc_put_rx_buff plays more tricks than its name
      would suggest.
      
      So let's rename enetc_put_rx_buff into enetc_flip_rx_buff to reflect the
      half-page buffer reuse tricks that it employs, and enetc_put_xdp_buff
      into enetc_put_rx_buff which suggests a more garden-variety operation.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: remove redundant clearing of skb/xdp_frame pointer in TX conf path · e9e49ae8
      Committed by Vladimir Oltean
      Later in enetc_clean_tx_ring we have:
      
      		/* Scrub the swbd here so we don't have to do that
      		 * when we reuse it during xmit
      		 */
      		memset(tx_swbd, 0, sizeof(*tx_swbd));
      
      So these assignments are unnecessary.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 16 April 2021, 1 commit
  18. 13 April 2021, 2 commits
    • enetc: support PTP Sync packet one-step timestamping · 7294380c
      Committed by Yangbo Lu
      This patch adds support for PTP Sync packet one-step timestamping.
      Since the ENETC single-step register has to be configured
      dynamically per packet (for the correctionField offset and the UDP
      checksum update), the current one-step timestamping packet can only
      be sent once the previous one has completed transmitting on
      hardware. So, on TX, this patch handles one-step timestamping
      packets as below:
      
      - Transmit the packet immediately if no other one-step packet is in
        transfer, or queue it to the skb queue if there is already one in
        transfer. test_and_set_bit_lock() is used here to lock and check
        state.
      - Start a work item when the transfer completes on hardware, to
        release the bit lock and to send one skb from the skb queue, if
        any.
      
      The configuration for one-step timestamping on ENETC before
      transmitting is:
      
      - Set the one-step timestamping flag in the extension BD.
      - Write the 30-bit current timestamp into the tstamp field of the
        extension BD.
      - Update the PTP Sync packet's originTimestamp field with the
        current timestamp.
      - Configure the single-step register for the correctionField offset
        and the UDP checksum update.
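
      A sketch of the completion worker described above (names are
      illustrative):

        static void enetc_tx_onestep_tstamp(struct work_struct *work)
        {
                struct enetc_ndev_priv *priv;
                struct sk_buff *skb;

                priv = container_of(work, struct enetc_ndev_priv,
                                    tx_onestep_tstamp);

                /* transfer done: release the bit lock taken at xmit */
                clear_bit_unlock(ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS,
                                 &priv->flags);

                /* send the next queued one-step skb, if any; it
                 * re-enters the xmit path and retakes the lock
                 */
                skb = skb_dequeue(&priv->tx_skbs);
                if (skb)
                        enetc_start_xmit(skb, priv->ndev);
        }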
      Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
      Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • enetc: mark TX timestamp type per skb · f768e751
      Committed by Yangbo Lu
      Mark the TX timestamp type per skb, in skb->cb[0], instead of in a
      global variable shared by all skbs. This is preparation for one-step
      timestamp support.
      
      With one-step timestamping enabled, there will be both one-step and
      two-step PTP messages to transfer, and an skb queue is needed for
      one-step PTP messages, to make sure the current message only starts
      to be sent after the previous one has completed on hardware. (The
      ENETC single-step register has to be configured dynamically per
      message.) So, marking the TX timestamp type per skb is required.
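
      A sketch of the per-skb marking (flag values are illustrative):

        #define ENETC_F_TX_TSTAMP               BIT(0) /* two-step */
        #define ENETC_F_TX_ONESTEP_SYNC_TSTAMP  BIT(1) /* one-step, later */

        /* at xmit time, instead of reading a global variable: */
        if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
                skb->cb[0] = ENETC_F_TX_TSTAMP;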
      Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
      Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 10 April 2021, 3 commits
  20. 01 April 2021, 6 commits
    • net: enetc: add support for XDP_REDIRECT · 9d2b68cc
      Committed by Vladimir Oltean
      The driver implementation of the XDP_REDIRECT action reuses parts
      from XDP_TX, most notably the enetc_xdp_tx function which transmits
      an array of TX software BDs. Only this time, the buffers don't have
      DMA mappings, so we need to create them.
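
      A sketch of the mapping step; the tx_swbd field names are
      assumptions:

        /* unlike XDP_TX, whose RX pages are already DMA-mapped,
         * redirected xdp_frames come from foreign memory: map it
         */
        tx_swbd->dma = dma_map_single(tx_ring->dev, xdp_frame->data,
                                      xdp_frame->len, DMA_TO_DEVICE);
        if (unlikely(dma_mapping_error(tx_ring->dev, tx_swbd->dma)))
                return -ENOMEM;

        tx_swbd->len = xdp_frame->len;
        tx_swbd->is_xdp_redirect = true;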
      
      When a BPF program reaches the XDP_REDIRECT verdict for a frame, we can
      employ the same buffer reuse strategy as for the normal processing path
      and for XDP_PASS: we can flip to the other page half and seed that to
      the RX ring.
      
      Note that scatter/gather support is there, but disabled due to lack
      of multi-buffer support in XDP (which is added by this series):
      https://patchwork.kernel.org/project/netdevbpf/cover/cover.1616179034.git.lorenzo@kernel.org/
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: add support for XDP_TX · 7ed2bc80
      Committed by Vladimir Oltean
      For reflecting packets back into the interface they came from, we create
      an array of TX software BDs derived from the RX software BDs. Therefore,
      we need to extend the TX software BD structure to contain most of the
      stuff that's already present in the RX software BD structure, for
      reasons that will become evident in a moment.
      
      For a frame with the XDP_TX verdict, we don't reuse any buffer right
      away as we do for XDP_DROP (the same page half) or XDP_PASS (the other
      page half, same as the skb code path).
      
      Because the buffer transfers ownership from the RX ring to the TX ring,
      reusing any page half right away is very dangerous. So what we can do is
      we can recycle the same page half as soon as TX is complete.
      
      The code path is:
      enetc_poll
      -> enetc_clean_rx_ring_xdp
         -> enetc_xdp_tx
         -> enetc_refill_rx_ring
      (time passes, another MSI interrupt is raised)
      enetc_poll
      -> enetc_clean_tx_ring
         -> enetc_recycle_xdp_tx_buff
      
      But that creates a problem, because there is a potentially large time
      window between enetc_xdp_tx and enetc_recycle_xdp_tx_buff, period in
      which we'll have less and less RX buffers.
      
      Basically, when the ship starts sinking, the knee-jerk reaction is to
      let enetc_refill_rx_ring do what it does for the standard skb code path
      (refill every 16 consumed buffers), but that turns out to be very
      inefficient. The problem is that we have no rx_swbd->page at our
      disposal from the enetc_reuse_page path, so enetc_refill_rx_ring would
      have to call enetc_new_page for every buffer that we refill (if we
      choose to refill at this early stage). Very inefficient, it only makes
      the problem worse, because page allocation is an expensive process, and
      CPU time is exactly what we're lacking.
      
      Additionally, there is an even bigger problem: if we let
      enetc_refill_rx_ring top up the ring's buffers again from the RX
      path, remember that the buffers sent for transmission haven't
      disappeared anywhere. They will eventually be sent and processed in
      enetc_clean_tx_ring, and an attempt will be made to recycle them.
      But surprise: the RX ring is already full of new buffers, because we
      were premature in deciding that we should refill. So not only did we
      take the expensive decision of allocating new pages, but now we must
      throw away perfectly good and reusable buffers.
      
      So what we do is we implement an elastic refill mechanism, which keeps
      track of the number of in-flight XDP_TX buffer descriptors. We top up
      the RX ring only up to the total ring capacity minus the number of BDs
      that are in flight (because we know that those BDs will return to us
      eventually).
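
      A sketch of the elastic refill; the counter's exact home in the
      ring structure is an assumption:

        /* on XDP_TX enqueue */
        rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;

        /* on TX confirmation, in enetc_recycle_xdp_tx_buff() */
        rx_ring->xdp.xdp_tx_in_flight--;

        /* refill: top up only to capacity minus what will come back */
        enetc_refill_rx_ring(rx_ring,
                             enetc_bd_unused(rx_ring) -
                             rx_ring->xdp.xdp_tx_in_flight);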
      
      The enetc driver manages 1 RX ring per CPU, and the default TX ring
      management is the same. So we do XDP_TX towards the TX ring of the same
      index, because it is affined to the same CPU. This will probably not
      produce great results when we have a tc-taprio/tc-mqprio qdisc on the
      interface, because in that case, the number of TX rings might be
      greater, but I didn't add any checks for that yet (mostly because I
      didn't know what checks to add).
      
      It should also be noted that we need to change the DMA mapping direction
      for RX buffers, since they may now be reflected into the TX ring of the
      same device. We choose to use DMA_BIDIRECTIONAL instead of unmapping and
      remapping as DMA_TO_DEVICE, because performance is better this way.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: add support for XDP_DROP and XDP_PASS · d1b15102
      Committed by Vladimir Oltean
      For the RX ring, enetc uses an allocation scheme based on pages split
      into two buffers, which is already very efficient in terms of preventing
      reallocations / maximizing reuse, so I see no reason why I would change
      that.
      
       +--------+--------+--------+--------+--------+--------+--------+
       |        |        |        |        |        |        |        |
       | half B | half B | half B | half B | half B | half B | half B |
       |        |        |        |        |        |        |        |
       +--------+--------+--------+--------+--------+--------+--------+
       |        |        |        |        |        |        |        |
       | half A | half A | half A | half A | half A | half A | half A | RX ring
       |        |        |        |        |        |        |        |
       +--------+--------+--------+--------+--------+--------+--------+
           ^                                                     ^
           |                                                     |
       next_to_clean                                       next_to_alloc
                                                            next_to_use
      
                         +--------+--------+--------+--------+--------+
                         |        |        |        |        |        |
                         | half B | half B | half B | half B | half B |
                         |        |        |        |        |        |
       +--------+--------+--------+--------+--------+--------+--------+
       |        |        |        |        |        |        |        |
       | half B | half B | half A | half A | half A | half A | half A | RX ring
       |        |        |        |        |        |        |        |
       +--------+--------+--------+--------+--------+--------+--------+
       |        |        |   ^                                   ^
       | half A | half A |   |                                   |
       |        |        | next_to_clean                   next_to_use
       +--------+--------+
                    ^
                    |
               next_to_alloc
      
      Then, when enetc_refill_rx_ring is called, whose purpose is to
      advance next_to_use, it sees that it can take buffers up to
      next_to_alloc, and it says "oh, hey, rx_swbd->page isn't NULL, I
      don't need to allocate one!".
      
      The only problem is that for default PAGE_SIZE values of 4096, buffer
      sizes are 2048 bytes. While this is enough for normal skb allocations at
      an MTU of 1500 bytes, for XDP it isn't, because the XDP headroom is 256
      bytes, and including skb_shared_info and alignment, we end up being able
      to make use of only 1472 bytes, which is insufficient for the default
      MTU.
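
      The arithmetic, assuming 64-byte cachelines (where the aligned
      struct skb_shared_info occupies 320 bytes):

        2048  half-page buffer (PAGE_SIZE of 4096, split in two)
      -  256  XDP headroom
      -  320  SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
      ------
        1472  usable bytes, less than a 1500-byte MTU frame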
      
      To solve that problem, we implement scatter/gather processing in the
      driver, because we would really like to keep the existing allocation
      scheme. A packet of 1500 bytes is received in a buffer of 1472 bytes and
      another one of 28 bytes.
      
      Because the headroom required by XDP is different (and much larger) than
      the one required by the network stack, whenever a BPF program is added
      or deleted on the port, we drain the existing RX buffers and seed new
      ones with the required headroom. We also keep the required headroom in
      rx_ring->buffer_offset.
      
      The simplest way to implement XDP_PASS, where an skb must be
      created, is to create an xdp_buff based on the next_to_clean RX BDs,
      but not clear those BDs from the RX ring yet, just keep the original
      index at which the BDs for this frame started. Then, if the verdict
      is XDP_PASS, instead of converting the xdp_buff to an skb, we replay
      a call to enetc_build_skb (just as in the normal enetc_clean_rx_ring
      case), starting from the original BD index.
      
      We would also like to be minimally invasive to the regular RX data path,
      and not check whether there is a BPF program attached to the ring on
      every packet. So we create a separate RX ring processing function for
      XDP.
      
      Because we only install/remove the BPF program while the interface is
      down, we forgo the rcu_read_lock() in enetc_clean_rx_ring, since there
      shouldn't be any circumstance in which we are processing packets and
      there is a potentially freed BPF program attached to the RX ring.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: move up enetc_reuse_page and enetc_page_reusable · 65d0cbb4
      Committed by Vladimir Oltean
      For XDP_TX, we need to call enetc_reuse_page from enetc_clean_tx_ring,
      so we need to avoid a forward declaration.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: clean the TX software BD on the TX confirmation path · 1ee8d6f3
      Committed by Vladimir Oltean
      With the future introduction of some new fields into enetc_tx_swbd,
      such as is_xdp_tx, is_xdp_redirect etc., we need not only to set
      these bits to true from the XDP_TX/XDP_REDIRECT code paths, but also
      to reset them to false from the old code paths.
      
      This is because TX software buffer descriptors are kept in a ring
      that is a shadow of the hardware TX ring, so these structures keep
      getting reused, and there is always the possibility that when a
      software BD is reused (after we ran a full circle through the TX
      ring), the old user of the tx_swbd had set is_xdp_tx = true, and now
      we are sending a regular skb, which would need to set is_xdp_tx =
      false.
      
      To be minimally invasive to the old code paths, let's just scrub the
      software TX BD in the TX confirmation path (enetc_clean_tx_ring), once
      we know that nobody uses this software TX BD (tx_ring->next_to_clean
      hasn't yet been updated, and the TX paths check enetc_bd_unused which
      tells them if there's any more space in the TX ring for a new enqueue).
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: enetc: add a dedicated is_eof bit in the TX software BD · d504498d
      Committed by Vladimir Oltean
      In the transmit path, if we have a scatter/gather frame, it is put into
      multiple software buffer descriptors, the last of which has the skb
      pointer populated (which is necessary for rearming the TX MSI vector and
      for collecting the two-step TX timestamp from the TX confirmation path).
      
      At the moment, this is sufficient, but with XDP_TX, we'll need to
      service TX software buffer descriptors that don't have an skb pointer,
      however they might be final nonetheless. So add a dedicated bit for
      final software BDs that we populate and check explicitly. Also, we keep
      looking just for an skb when doing TX timestamping, because we don't
      want/need that for XDP.
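
      A sketch of the new field and its use (surrounding fields are
      elided):

        struct enetc_tx_swbd {
                struct sk_buff *skb; /* NULL for XDP frames */
                /* ... */
                u8 is_eof:1;         /* last BD of a frame, skb or not */
        };

        /* at enqueue time, on the frame's final BD: */
        tx_swbd->is_eof = 1;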
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>