1. 24 Apr 2020: 8 commits
    • net/mlx4_en: use napi_complete_done() in TX completion · cf4058db
      Committed by Eric Dumazet
      In order to benefit from the new napi_defer_hard_irqs feature,
      we need to use the napi_complete_done() variant in this driver.
      
      The RX path already uses it; this patch converts the TX completion side.
      
      mlx4_en_process_tx_cq() now returns the number of retired packets
      instead of a boolean, so that mlx4_en_poll_tx_cq() can pass this
      value to napi_complete_done() (a generic sketch of the pattern
      follows this entry).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cf4058db
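
      A minimal sketch of the TX-side pattern described above, assuming
      hypothetical my_* names rather than the driver's actual functions:

      #include <linux/netdevice.h>

      struct my_cq {
              struct napi_struct napi;
              /* ... hardware completion-queue state ... */
      };

      static void my_arm_cq(struct my_cq *cq)
      {
              /* ... ring the NIC doorbell to re-enable the interrupt ... */
      }

      /* Returns the number of retired packets instead of a boolean. */
      static int my_process_tx_cq(struct my_cq *cq, int budget)
      {
              int done = 0;

              /* ... reap TX completions, free skbs, done++ per packet ... */
              return done;
      }

      static int my_poll_tx_cq(struct napi_struct *napi, int budget)
      {
              struct my_cq *cq = container_of(napi, struct my_cq, napi);
              int done = my_process_tx_cq(cq, budget);

              /* Report the real work count so napi_complete_done() can apply
               * the gro_flush_timeout / napi_defer_hard_irqs policy.
               */
              if (done < budget && napi_complete_done(napi, done))
                      my_arm_cq(cq);

              return done < budget ? done : budget;
      }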
    • net: napi: use READ_ONCE()/WRITE_ONCE() · 7e417a66
      Committed by Eric Dumazet
      gro_flush_timeout and napi_defer_hard_irqs can be read from
      napi_complete_done() while other CPUs write the values, without
      explicit synchronization.

      Use READ_ONCE()/WRITE_ONCE() to annotate the races (sketched below).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7e417a66
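
      A minimal sketch of the annotation pattern, using an illustrative pair
      of accessors rather than the exact kernel diff:

      #include <linux/netdevice.h>

      /* Reader side (e.g. the napi_complete_done() path): the field may be
       * written concurrently from sysfs on another CPU, so annotate the
       * lockless load to avoid tearing and to document the race.
       */
      static unsigned long get_flush_timeout(const struct net_device *dev)
      {
              return READ_ONCE(dev->gro_flush_timeout);
      }

      /* Writer side (e.g. the sysfs store handler): annotate the store too. */
      static void set_flush_timeout(struct net_device *dev, unsigned long val)
      {
              WRITE_ONCE(dev->gro_flush_timeout, val);
      }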
    • net: napi: add hard irqs deferral feature · 6f8b12d6
      Committed by Eric Dumazet
      Back in commit 3b47d303 ("net: gro: add a per device gro flush timer")
      we added the ability to arm a high-resolution timer, used to keep
      incomplete packets in the GRO engine a bit longer in the hope that
      further frames could be merged into them.
      
      Since then, we added the napi_complete_done() interface, and commit
      364b6055 ("net: busy-poll: return busypolling status to drivers")
      allowed drivers to avoid re-arming NIC interrupts if we made a promise
      that their NAPI poll() handler would be called in the near future.
      
      This infrastructure can now be leveraged thanks to a new device
      parameter, which allows arming the NAPI hrtimer instead of re-arming
      the device hard IRQ.
      
      We have noticed that on some servers with 32 RX queues or more, the
      chit-chat between the NIC and the host caused by IRQ delivery and
      re-arming could hurt throughput by ~20% on a 100Gbit NIC.
      
      In contrast, hrtimers use local (per-CPU) resources and may have a
      lower cost.
      
      The new tunable, named napi_defer_hard_irqs, is placed in the same
      sysfs hierarchy as gro_flush_timeout (/sys/class/net/ethX/).
      
      By default, both gro_flush_timeout and napi_defer_hard_irqs are zero.
      
      This patch does not change the prior behavior of gro_flush_timeout
      when used alone: NIC hard IRQs are re-armed as before.
      
      One concrete usage could be:
      
      echo 20000 >/sys/class/net/eth1/gro_flush_timeout
      echo 10 >/sys/class/net/eth1/napi_defer_hard_irqs
      
      If at least one packet is retired, the per-NAPI counter is reset to 10
      (napi_defer_hard_irqs), ensuring at least 10 periodic scans of the
      queue before the hard IRQ is re-armed.
      
      On busy queues this should avoid NIC hard IRQs entirely, whereas
      before this patch IRQ avoidance was only possible when napi->poll()
      exhausted its budget and did not call napi_complete_done(). A
      simplified sketch of this decision follows this entry.
      
      This feature can also be used to work around suboptimal NIC IRQ
      coalescing strategies.
      
      Having the ability to insert XX usec delays between napi->poll()
      invocations can increase cache efficiency, since batch sizes grow.
      
      It also keeps the serving CPUs from staying idle for too long,
      reducing tail latencies.
      Co-developed-by: Luigi Rizzo <lrizzo@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6f8b12d6
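
      A simplified, paraphrased sketch of the deferral decision made in
      napi_complete_done(); this is not the literal upstream code, and field
      names such as defer_hard_irqs_count follow this patch's description:

      #include <linux/hrtimer.h>
      #include <linux/ktime.h>
      #include <linux/netdevice.h>

      static bool sketch_napi_complete_done(struct napi_struct *n, int work_done)
      {
              unsigned long timeout = 0;

              if (work_done)  /* queue was busy: re-charge the deferral budget */
                      n->defer_hard_irqs_count =
                              READ_ONCE(n->dev->napi_defer_hard_irqs);

              if (n->defer_hard_irqs_count > 0) {
                      n->defer_hard_irqs_count--;
                      timeout = READ_ONCE(n->dev->gro_flush_timeout);
              }

              if (timeout) {
                      /* Re-arm the NAPI hrtimer instead of the device hard IRQ. */
                      hrtimer_start(&n->timer, ns_to_ktime(timeout),
                                    HRTIMER_MODE_REL_PINNED);
                      return false;   /* driver must not re-arm its IRQ */
              }
              return true;            /* driver may re-arm the hard IRQ */
      }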
    • Merge branch 'qed-aer' · e6acd2b6
      Committed by David S. Miller
      Sudarsana Reddy Kalluru says:
      
      ====================
      qed*: Add support for PCIe Advanced Error Recovery.
      
      The patch series adds qed/qede driver changes for PCIe Advanced Error
      Recovery (AER) support.
      Patch (1) adds qed changes to enable the device to send error messages
      to the root port when errors are detected.
      Patch (2) adds qede support for handling the detected errors (AERs).
      
      Changes from previous version:
      -------------------------------
      v2: use pci_num_vf() instead of caching the value in edev.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e6acd2b6
    • qede: Add support for handling the pcie errors. · 731815e7
      Committed by Sudarsana Reddy Kalluru
      Error recovery is handled by the management firmware (MFW) with the
      help of the qed/qede drivers. Upon detecting an error, the driver
      informs the MFW, which in turn starts a recovery process. The MFW then
      sends an ERROR_RECOVERY notification to the driver, which performs the
      required cleanup/recovery on the driver side (a generic sketch of the
      AER hook follows this entry).
      Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
      Acked-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      731815e7
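
      A generic sketch of the AER hook involved, with hypothetical my_*
      callbacks and the callback signature of that kernel era; the real qede
      handlers carry the driver-specific cleanup described above:

      #include <linux/pci.h>

      /* Called by the PCI/AER core when an error is detected on the device. */
      static pci_ers_result_t my_io_error_detected(struct pci_dev *pdev,
                                                   enum pci_channel_state state)
      {
              /* Quiesce I/O, notify the management firmware, then tell the
               * core how to proceed with recovery.
               */
              return PCI_ERS_RESULT_CAN_RECOVER;
      }

      static const struct pci_error_handlers my_err_handlers = {
              .error_detected = my_io_error_detected,
      };

      /* Hooked up from the driver's struct pci_driver:
       *      .err_handler = &my_err_handlers,
       */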
    • qed: Enable device error reporting capability. · 2196d831
      Committed by Sudarsana Reddy Kalluru
      The patch enables the device to send error messages to the root port
      when an error is detected (see the sketch after this entry).
      Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
      Signed-off-by: Ariel Elior <aelior@marvell.com>
      Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2196d831
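
      A minimal sketch of opting a device into PCIe error reporting at probe
      time; pci_enable_pcie_error_reporting() is the kernel API of that era,
      while the surrounding helper is illustrative:

      #include <linux/aer.h>
      #include <linux/pci.h>

      static void my_enable_aer(struct pci_dev *pdev)
      {
              /* Ask the device to forward detected errors to the root port. */
              int rc = pci_enable_pcie_error_reporting(pdev);

              if (rc)
                      dev_warn(&pdev->dev, "AER enabling failed: %d\n", rc);
      }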
    • net: dsa: add GRO support via gro_cells · e131a563
      Committed by Alexander Lobakin
      The gro_cells library is used by various encapsulating netdevices,
      such as geneve, macsec and vxlan, to speed up processing of
      decapsulated traffic. The CPU tag is a sort of "encapsulation", so we
      can use the same mechanism to greatly improve overall DSA performance
      (a minimal sketch of the gro_cells hookup follows this entry).
      skbs are passed to the GRO layer after the CPU tags are removed, so no
      new packet offload types are needed, unlike the first GRO-over-DSA
      variant I proposed in [1].
      
      The size of struct gro_cells is sizeof(void *), so the hot struct
      dsa_slave_priv becomes only 4/8 bytes bigger, and all critical fields
      remain in one 32-byte cacheline.
      Another positive side effect is that drivers for network devices that
      can be shipped as CPU ports of DSA-driven switches can now use
      napi_gro_frags() to pass skbs to the kernel. Packets built that way
      are completely non-linear and would likely be dropped without GRO.
      
      This was tested on a soon-to-be-mainlined Ethernet driver that uses
      napi_gro_frags(); the overall performance was on par with the variant
      from [1], and sometimes even better thanks to the minimal overhead.
      Tuning net.core.gro_normal_batch may help push it to the limit on
      particular setups and platforms.
      
      iperf3 IPoE VLAN NAT TCP forwarding (port1.218 -> port0) setup
      on 1.2 GHz MIPS board:
      
      5.7-rc2 baseline:
      
      [ID]  Interval         Transfer     Bitrate        Retr
      [ 5]  0.00-120.01 sec  9.00 GBytes  644 Mbits/sec  413  sender
      [ 5]  0.00-120.00 sec  8.99 GBytes  644 Mbits/sec       receiver
      
      Iface      RX packets  TX packets
      eth0       7097731     7097702
      port0      426050      6671829
      port1      6671681     425862
      port1.218  6671677     425851
      
      With this patch:
      
      [ID]  Interval         Transfer     Bitrate        Retr
      [ 5]  0.00-120.01 sec  12.2 GBytes  870 Mbits/sec  122  sender
      [ 5]  0.00-120.00 sec  12.2 GBytes  870 Mbits/sec       receiver
      
      Iface      RX packets  TX packets
      eth0       9474792     9474777
      port0      455200      353288
      port1      9019592     455035
      port1.218  353144      455024
      
      v2:
       - Add some performance examples in the commit message;
       - No functional changes.
      
      [1] https://lore.kernel.org/netdev/20191230143028.27313-1-alobakin@dlink.ru/
      Signed-off-by: Alexander Lobakin <bloodyreaper@yandex.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e131a563
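
      For reference, a minimal sketch of the gro_cells hookup on a receive
      path; the my_port_* names are illustrative, while the actual patch
      embeds a struct gro_cells in struct dsa_slave_priv:

      #include <linux/netdevice.h>
      #include <net/gro_cells.h>

      struct my_port_priv {
              struct gro_cells gcells;
      };

      /* Setup/teardown, e.g. when the port netdevice is created/destroyed. */
      static int my_port_init(struct my_port_priv *p, struct net_device *dev)
      {
              return gro_cells_init(&p->gcells, dev);
      }

      static void my_port_uninit(struct my_port_priv *p)
      {
              gro_cells_destroy(&p->gcells);
      }

      /* RX hot path: after the CPU tag has been stripped, hand the skb to
       * gro_cells instead of calling netif_receive_skb() directly, so it
       * gets a chance to be merged by GRO.
       */
      static void my_port_rx(struct my_port_priv *p, struct sk_buff *skb)
      {
              gro_cells_receive(&p->gcells, skb);
      }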
    • ipv6: Honor all IPv6 PIO Valid Lifetime values · b75326c2
      Committed by Fernando Gont
      RFC 4862, Section 5.5.3 e) prevents received Router Advertisements
      from reducing the Valid Lifetime of configured addresses to less than
      two hours, thus preventing hosts from reacting to information provided
      by a router that has positive knowledge that a prefix has become
      invalid.
      
      This patch makes hosts honor all Valid Lifetime values, as per
      draft-gont-6man-slaac-renum-06, Section 4.2 (a pseudocode paraphrase
      of the dropped rule follows this entry). This is meant to help
      mitigate the problem discussed in draft-ietf-v6ops-slaac-renum.
      
      Note: Attacks aiming at disabling an advertised prefix via a Valid
      Lifetime of 0 are not really more harmful than other attacks
      that can be performed via forged RA messages, such as those
      aiming at completely disabling a next-hop router via an RA that
      advertises a Router Lifetime of 0, or performing a Denial of
      Service (DoS) attack by advertising illegitimate prefixes via
      forged PIOs.  In scenarios where RA-based attacks are of concern,
      proper mitigations such as RA-Guard [RFC6105] [RFC7113] should
      be implemented.
      Signed-off-by: Fernando Gont <fgont@si6networks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b75326c2
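
      For context, a pseudocode paraphrase of the RFC 4862, Section 5.5.3 e)
      clamping rule that this patch drops (illustrative only, not the actual
      addrconf.c diff); lifetimes are in seconds:

      #include <linux/types.h>

      #define TWO_HOURS       (2 * 60 * 60)

      static u32 old_rfc4862_valid_lft(u32 received_lft, u32 remaining_lft)
      {
              if (received_lft > TWO_HOURS || received_lft > remaining_lft)
                      return received_lft;    /* accept the advertised value */
              if (remaining_lft <= TWO_HOURS)
                      return remaining_lft;   /* ignore the unauthenticated PIO */
              return TWO_HOURS;               /* clamp the reduction to 2 hours */
      }

      /* After this patch, the advertised value is honored directly:
       *      valid_lft = received_lft;
       */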
  2. 23 Apr 2020: 32 commits