1. 29 Dec 2021, 1 commit
  2. 21 Dec 2021, 1 commit
  3. 16 Dec 2021, 1 commit
  4. 25 Nov 2021, 1 commit
  5. 17 Nov 2021, 1 commit
  6. 30 Oct 2021, 1 commit
    • igb: unbreak I2C bit-banging on i350 · a97f8783
      Committed by Jan Kundrát
      The driver tried to use Linux's native software I2C bus master
      (i2c-algo-bits) to expose the I2C interface that talks to the SFP
      cage(s) to userspace. As-is, however, the physical SCL/SDA pins
      were not moving at all, staying at logical 1 all the time.
      
      The main culprit was the I2CPARAMS register where igb was not setting
      the I2CBB_EN bit. That meant that all the careful signal bit-banging was
      actually not being propagated to the chip pads (I verified this with a
      scope).
      
      The bit-banging was not correct either, because I2C is supposed to be an
      open-collector bus, and the code was driving both lines via a totem
      pole. The code was also trying to do operations which did not make any
      sense with the i2c-algo-bits, namely manipulating both SDA and SCL from
      igb_set_i2c_data (which is only supposed to set SDA). I'm not sure if
      that was meant as an optimization, or was just flat out wrong, but given
      that the i2c-algo-bits is set up to work with a totally dumb GPIO-ish
      implementation underneath, there's no need for this code to be smart.
      
      The open-drain vs. totem-pole issue is fixed by the usual trick: a
      logical 0 is produced by switching the pad to regular output mode and
      driving it low, while a logical 1 is produced by configuring the IO
      pad as an input (thus floating) and letting the mandatory pull-up
      resistors do the rest. Anything else is actually wrong on I2C, where
      all devices are supposed to have an open-drain connection to the bus.
      
      The missing I2CBB_EN is set (along with a safe initial value of the
      GPIOs) just before registering this software I2C bus.
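
      For illustration, here is a minimal sketch of how the pieces fit
      together. The pad helpers (igb_sfp_sda_release() and friends) are
      hypothetical stand-ins for the actual I2CPARAMS manipulation; only
      struct i2c_algo_bit_data and i2c_bit_add_bus() are the real kernel
      API, and the adapter members mirror the igb driver's layout:

          #include <linux/i2c.h>
          #include <linux/i2c-algo-bit.h>

          /* Open-drain emulation: a logical 1 is produced by tristating the
           * pad and letting the external pull-up raise the line, a logical 0
           * by enabling the output driver with the pad driven low.
           */
          static void igb_sfp_setsda(void *data, int state)
          {
                  struct igb_adapter *adapter = data;

                  if (state)
                          igb_sfp_sda_release(adapter);   /* hypothetical: pad -> input */
                  else
                          igb_sfp_sda_drive_low(adapter); /* hypothetical: pad -> output, 0 */
          }

          static int igb_sfp_getsda(void *data)
          {
                  struct igb_adapter *adapter = data;

                  return igb_sfp_sda_read(adapter);       /* hypothetical: sample the pad */
          }

          /* SCL gets an analogous pair of callbacks (not shown); i2c-algo-bits
           * then does all the protocol work through these dumb GPIO-style hooks.
           */
          static const struct i2c_algo_bit_data igb_sfp_i2c_algo = {
                  .setsda  = igb_sfp_setsda,
                  .setscl  = igb_sfp_setscl,
                  .getsda  = igb_sfp_getsda,
                  .getscl  = igb_sfp_getscl,
                  .udelay  = 5,
                  .timeout = 20,
          };

          static int igb_sfp_i2c_register(struct igb_adapter *adapter)
          {
                  /* Enable bit-banging (I2CBB_EN) with the GPIOs parked in a
                   * safe state before handing the bus to the I2C core.
                   */
                  igb_sfp_enable_bitbang(adapter);        /* hypothetical */
                  adapter->i2c_algo = igb_sfp_i2c_algo;
                  adapter->i2c_algo.data = adapter;       /* passed to the callbacks above */
                  adapter->i2c_adap.algo_data = &adapter->i2c_algo;
                  return i2c_bit_add_bus(&adapter->i2c_adap);
          }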
      
      The chip datasheet mentions HW-implemented I2C transactions (SFP EEPROM
      reads and writes) as well, but I'm not touching these for simplicity.
      
      Tested on a LR-Link LRES2203PF-2SFP (which is an almost-miniPCIe form
      factor card, a cable, and a module with two SFP cages). There was one
      casualty, an old broken SFP we had lying around, which was used to
      solder some thin wires as a DIY I2C breakout. Thanks for your service.
      With this patch in place, I can `i2cdump -y 3 0x51 c` and read back
      data that makes sense. Yay.
      Signed-off-by: Jan Kundrát <jan.kundrat@cesnet.cz>
      See-also: https://www.spinics.net/lists/netdev/msg490554.html
      Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Tony Brelinski <tony.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  7. 05 Oct 2021, 1 commit
  8. 28 Jul 2021, 1 commit
    • dev_ioctl: split out ndo_eth_ioctl · a7605370
      Committed by Arnd Bergmann
      Most users of ndo_do_ioctl are ethernet drivers that implement
      the MII commands SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG, or hardware
      timestamping with SIOCSHWTSTAMP/SIOCGHWTSTAMP.
      
      Separate these from the few drivers that use ndo_do_ioctl to
      implement SIOCBOND, SIOCBR and SIOCWANDEV commands.
      
      This is a purely cosmetic change intended to help readers find
      their way through the implementation.
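
      For a typical ethernet driver the conversion is mechanical. A minimal
      sketch of what an adopter's net_device_ops ends up looking like (igb
      is used here only as a familiar example; .ndo_eth_ioctl is the new
      member added by this series):

          static const struct net_device_ops igb_netdev_ops = {
                  .ndo_open       = igb_open,
                  .ndo_stop       = igb_close,
                  .ndo_start_xmit = igb_xmit_frame,
                  /* was .ndo_do_ioctl = igb_ioctl; the handler is unchanged
                   * and still serves SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG and
                   * SIOCSHWTSTAMP/SIOCGHWTSTAMP.
                   */
                  .ndo_eth_ioctl  = igb_ioctl,
          };

      dev_ioctl() then routes the MII and hardware-timestamping requests to
      the new hook, while the SIOCBOND, SIOCBR and SIOCWANDEV commands keep
      going through the remaining ndo_do_ioctl implementations.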
      
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Jay Vosburgh <j.vosburgh@gmail.com>
      Cc: Veaceslav Falico <vfalico@gmail.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: Andrew Lunn <andrew@lunn.ch>
      Cc: Vivien Didelot <vivien.didelot@gmail.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Vladimir Oltean <olteanv@gmail.com>
      Cc: Leon Romanovsky <leon@kernel.org>
      Cc: linux-rdma@vger.kernel.org
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Jason Gunthorpe <jgg@nvidia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 02 Jul 2021, 4 commits
  10. 25 Jun 2021, 1 commit
    • intel: Remove rcu_read_lock() around XDP program invocation · 49589b23
      Committed by Toke Høiland-Jørgensen
      The Intel drivers all have rcu_read_lock()/rcu_read_unlock() pairs around
      XDP program invocations. However, the actual lifetime of the objects
      referenced by the XDP program invocation is longer, all the way through to
      the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
      small. This turns out to be harmless because it all happens in a single
      NAPI poll cycle (and thus under local_bh_disable()), but it makes the
      rcu_read_lock() misleading.
      
      Rather than extend the scope of the rcu_read_lock(), just get rid of it
      entirely. With the addition of RCU annotations to the XDP_REDIRECT map
      types that take bh execution into account, lockdep even understands this to
      be safe, so there's really no reason to keep it around.
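
      The per-driver change is essentially a deletion. A hedged sketch of the
      resulting Rx-path pattern (field and variable names are illustrative,
      not the exact igb/i40e code):

          /* NAPI poll context: bottom halves are already disabled, and the
           * redirected frames stay valid until xdp_do_flush() at the end of
           * the poll cycle, so no extra rcu_read_lock() is taken here.
           */
          xdp_prog = READ_ONCE(rx_ring->xdp_prog);
          if (xdp_prog) {
                  u32 act = bpf_prog_run_xdp(xdp_prog, &xdp);

                  switch (act) {
                  case XDP_PASS:
                          break;
                  case XDP_REDIRECT:
                          err = xdp_do_redirect(rx_ring->netdev, &xdp, xdp_prog);
                          break;
                  default:
                          /* XDP_TX / XDP_DROP / XDP_ABORTED handled as before */
                          break;
                  }
          }
          /* ... once per poll cycle, after the loop: xdp_do_flush(); */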
      Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> # i40e
      Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
      Cc: intel-wired-lan@lists.osuosl.org
      Link: https://lore.kernel.org/bpf/20210624160609.292325-12-toke@redhat.com
  11. 03 Jun 2021, 2 commits
  12. 27 May 2021, 3 commits
  13. 17 Apr 2021, 1 commit
    • igb: Redistribute memory for transmit packet buffers when in Qav mode · 26b67f5a
      Committed by Ederson de Souza
      i210 has a total of 24KB of transmit packet buffer. When in Qav mode,
      this buffer is divided into four pieces, one for each Tx queue.
      Currently, 8KB are given to each of the two SR queues and 4KB are given
      to each of the two SP queues.
      
      However, it was noticed that such a distribution can make best effort
      traffic (which would usually go to the SP queues when Qav is enabled, as
      the SR queues would be used by ETF or CBS qdiscs for TSN-aware traffic)
      perform poorly. Measuring with iperf3, best effort throughput dropped by
      more than a third (from 935Mbps to 578Mbps) even with no TSN traffic
      competing.
      
      This patch redistributes the 24KB equally: 6KB for each queue. In tests,
      there was no notable reduction in best effort throughput when no TSN
      traffic was competing.
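
      As a sketch, the arithmetic behind the new split looks like this
      (igb_write_txpb_kb() and the macro names are hypothetical stand-ins
      for the actual write to the i210 Tx packet buffer size register):

          /* i210: 24KB of Tx packet buffer shared by 4 queues in Qav mode.
           * Old split: 8KB + 8KB (SR queues) + 4KB + 4KB (SP queues).
           * New split: 24KB / 4 = 6KB per queue, so best effort traffic on
           * the SP queues no longer runs short of buffer space.
           */
          #define IGB_TX_PB_TOTAL_KB      24
          #define IGB_QAV_TX_QUEUES        4

          static void igb_qav_split_tx_pb(struct igb_adapter *adapter)
          {
                  unsigned int kb = IGB_TX_PB_TOTAL_KB / IGB_QAV_TX_QUEUES; /* 6 */
                  int i;

                  for (i = 0; i < IGB_QAV_TX_QUEUES; i++)
                          igb_write_txpb_kb(adapter, i, kb);  /* hypothetical */
          }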
      
      Below, more details about the data collected:
      
      All experiments were run using the following qdisc setup:
      
      qdisc taprio 100: root refcnt 9 tc 4 map 3 3 3 2 3 0 0 3 3 3 3 3 3 3 3 3
          queues offset 0 count 1 offset 1 count 1 offset 2 count 1 offset 3 count 1
          clockid TAI base-time 0 cycle-time 10000000 cycle-time-extension 0
          index 0 cmd S gatemask 0xf interval 10000000
      
      qdisc etf 8045: parent 100:1 clockid TAI delta 1000000 offload on
          deadline_mode off skip_sock_check off
      
      TSN traffic, when enabled, had these characteristics:
       Packet size: 1500 bytes
       Transmission interval: 125us
      
      ----------------------------------
      Without this patch:
      ----------------------------------
      - TCP data:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.35 GBytes   578 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.07 GBytes   460 Mbits/sec    1
      
      - TCP data limiting iperf3 buffer size to 4K:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.35 GBytes   579 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.08 GBytes   462 Mbits/sec    0
      
      - TCP data limiting iperf3 buffer size to 192 bytes (smallest size without
       serious performance degradation):
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.34 GBytes   577 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.07 GBytes   461 Mbits/sec    1
      
      - UDP data at 1000Mbit/sec:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.36 GBytes   586 Mbits/sec  0.000 ms  0/1011407 (0%)
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.05 GBytes   451 Mbits/sec  0.000 ms  0/778672 (0%)
      
      ----------------------------------
      With this patch:
      ----------------------------------
      
      - TCP data:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   932 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   646 Mbits/sec    1
      
      - TCP data limiting iperf3 buffer size to 4K:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   931 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   645 Mbits/sec    0
      
      - TCP data limiting iperf3 buffer size to 192 bytes (smallest size without
       serious performance degradation):
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   932 Mbits/sec    1
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   645 Mbits/sec    0
      
      - UDP data at 1000Mbit/sec:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  2.23 GBytes   956 Mbits/sec  0.000 ms  0/1650226 (0%)
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.51 GBytes   649 Mbits/sec  0.000 ms  0/1120264 (0%)
      Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  14. 24 Mar 2021, 1 commit
  15. 19 Mar 2021, 1 commit
  16. 18 Mar 2021, 1 commit
  17. 16 Mar 2021, 1 commit
  18. 13 Mar 2021, 1 commit
    • igb: avoid premature Rx buffer reuse · 98dfb02a
      Committed by Li RongQing
      igb needs a fix similar to commit 75aab4e1 ("i40e: avoid
      premature Rx buffer reuse").
      
      The page recycle code incorrectly relied on the assumption that a
      page fragment cannot be freed inside xdp_do_redirect(). Because of
      this assumption, page fragments that are still in use by the stack
      or by an XDP redirect can be reused and overwritten.
      
      To avoid this, store the page count prior to invoking xdp_do_redirect().
      
      Longer explanation:
      
      Intel NICs have a recycle mechanism. The main idea is that a page is
      split into two parts. One part is owned by the driver, one part might
      be owned by someone else, such as the stack.
      
      t0: Page is allocated, and put on the Rx ring
                    +---------------
      used by NIC ->| upper buffer
      (rx_buffer)   +---------------
                    | lower buffer
                    +---------------
        page count  == USHRT_MAX
        rx_buffer->pagecnt_bias == USHRT_MAX
      
      t1: Buffer is received, and passed to the stack (e.g.)
                    +---------------
                    | upper buff (skb)
                    +---------------
      used by NIC ->| lower buffer
      (rx_buffer)   +---------------
        page count  == USHRT_MAX
        rx_buffer->pagecnt_bias == USHRT_MAX - 1
      
      t2: Buffer is received, and redirected
                    +---------------
                    | upper buff (skb)
                    +---------------
      used by NIC ->| lower buffer
      (rx_buffer)   +---------------
      
      Now, prior calling xdp_do_redirect():
        page count  == USHRT_MAX
        rx_buffer->pagecnt_bias == USHRT_MAX - 2
      
      This means that the buffer *cannot* be flipped/reused, because the skb
      is still using it.
      
      The problem arises when xdp_do_redirect() actually frees the
      segment. Then we get:
        page count  == USHRT_MAX - 1
        rx_buffer->pagecnt_bias == USHRT_MAX - 2
      
      From a recycle perspective, the buffer can be flipped and reused,
      which means that the skb data area is passed to the Rx HW ring!
      
      To work around this, the page count is stored prior to calling
      xdp_do_redirect().
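
      A hedged sketch of the resulting pattern (helper names are modelled on
      the equivalent i40e fix; the exact igb signatures may differ):

          /* Snapshot page_count() before the XDP program gets a chance to
           * hand the fragment to xdp_do_redirect(), which may free it and
           * make the live count look "recyclable".
           */
          static unsigned int igb_rx_page_count(struct igb_rx_buffer *rx_buffer)
          {
                  return page_count(rx_buffer->page);
          }

          /* In the Rx clean loop: */
          int rx_buf_pgcnt = igb_rx_page_count(rx_buffer);    /* before XDP runs */

          skb = igb_run_xdp(adapter, rx_ring, &xdp);          /* may redirect + free */

          /* The reuse decision compares pagecnt_bias against the snapshot
           * rather than the live page count, so a fragment freed inside
           * xdp_do_redirect() is no longer flipped back onto the Rx ring.
           */
          if (igb_can_reuse_rx_page(rx_buffer, rx_buf_pgcnt))
                  igb_reuse_rx_page(rx_ring, rx_buffer);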
      
      Fixes: 9cbc948b ("igb: add XDP support")
      Signed-off-by: Li RongQing <lirongqing@baidu.com>
      Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
      Tested-by: Vishakha Jambekar <vishakha.jambekar@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  19. 11 Mar 2021, 1 commit
  20. 05 Feb 2021, 1 commit
  21. 04 Feb 2021, 3 commits
  22. 20 Jan 2021, 1 commit
  23. 09 Jan 2021, 2 commits
  24. 10 Dec 2020, 6 commits
  25. 01 Dec 2020, 1 commit
  26. 30 Sep 2020, 1 commit