1. 24 Apr 2021: 2 commits
  2. 23 Apr 2021: 12 commits
  3. 17 Apr 2021: 6 commits
    • igc: Expose LPI counters · 1feaf60f
      Sasha Neftin committed
      Expose the EEE Tx and Rx low power idle counters via ethtool.
      An EEE Tx or Rx LPI event occurs when the transmitter or the receiver
      enters the EEE (IEEE 802.3az) LPI state.
      ethtool --statistics <iface>
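      As a rough sketch of how such counters might reach ethtool (the stats
      fields, the TLPIC/RLPIC register names and the rd32() helper follow
      igc conventions but are assumptions here, not the literal patch):

          /* Accumulate the clear-on-read LPI event counters into the
           * adapter's running totals, which ethtool then reports.
           */
          static void igc_update_lpi_counters(struct igc_adapter *adapter)
          {
                  struct igc_hw *hw = &adapter->hw;

                  adapter->stats.tlpic += rd32(IGC_TLPIC); /* Tx LPI events */
                  adapter->stats.rlpic += rd32(IGC_RLPIC); /* Rx LPI events */
          }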
      Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
      Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • igc: Fix overwrites return value · b3d4f405
      Sasha Neftin committed
      drivers/net/ethernet/intel/igc/igc_i225.c:235 igc_write_nvm_srwr()
      warn: loop overwrites return value 'ret_val'
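      A minimal sketch of the pattern the checker flags and of the fix; the
      igc_write_nvm_word() helper is hypothetical, standing in for the real
      EEWR write-and-poll sequence:

          /* Before: a later iteration's result overwrites an earlier error. */
          for (i = 0; i < words; i++)
                  ret_val = igc_write_nvm_word(hw, offset + i, data[i]);

          /* After: stop at the first failure so ret_val keeps that error. */
          for (i = 0; i < words; i++) {
                  ret_val = igc_write_nvm_word(hw, offset + i, data[i]);
                  if (ret_val)
                          break;
          }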
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
      Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • igc: enable auxiliary PHC functions for the i225 · 87938851
      Ederson de Souza committed
      The i225 device offers a number of special PTP Hardware Clock features
      on its Software Defined Pins (SDPs), much like the i210, which served
      as the inspiration for this patch. The patch enables two functions:
      time stamping external events and generating periodic output signals.

      The assignment of PHC functions to the four SDPs can be freely chosen
      by the user.
      
      For external event time stamping, when the level of an SDP configured
      as an input changes, an interrupt is generated and the kernel
      Precision Time Protocol (PTP) subsystem is informed.
      
      For the periodic output signals, the i225 is configured to generate them
      (so the SDP level will change periodically) and the driver also has to
      keep updating the time of the next level change. However, this work is
      not necessary for some frequencies as the i225 takes care of them
      (namely, anything with a half-cycle of 500ms, 250ms, 125ms or < 70ms).
      
      While the i225 allows up to four timers to be used as the time source
      for the external events or output signals, this patch uses only one of
      those timers. The main reason is to keep things simple, as it is not
      clear how the extra timers would be exposed to users. Note that a NIC
      can currently expose only a single PTP device.
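      A minimal user-space sketch of driving both functions through the PHC
      character device; the /dev/ptp0 path and the channel indices are
      assumptions for illustration (the testptp tool exercises the same
      ioctls):

          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <time.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/ptp_clock.h>

          #define CLOCKFD 3
          #define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

          int main(void)
          {
                  struct ptp_extts_request extts;
                  struct ptp_perout_request perout;
                  struct ptp_extts_event event;
                  struct timespec ts;
                  int fd = open("/dev/ptp0", O_RDWR);

                  if (fd < 0) {
                          perror("open");
                          return 1;
                  }

                  /* Timestamp rising edges on channel 0 (an SDP set as input). */
                  memset(&extts, 0, sizeof(extts));
                  extts.index = 0;
                  extts.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;
                  if (ioctl(fd, PTP_EXTTS_REQUEST, &extts))
                          perror("PTP_EXTTS_REQUEST");

                  /* Emit a 1 Hz square wave on channel 1 (an SDP set as
                   * output), starting on a whole second in the near future.
                   */
                  clock_gettime(FD_TO_CLOCKID(fd), &ts);
                  memset(&perout, 0, sizeof(perout));
                  perout.index = 1;
                  perout.start.sec = ts.tv_sec + 2;
                  perout.period.sec = 1;
                  if (ioctl(fd, PTP_PEROUT_REQUEST, &perout))
                          perror("PTP_PEROUT_REQUEST");

                  /* Each external event arrives as a ptp_extts_event record. */
                  if (read(fd, &event, sizeof(event)) == sizeof(event))
                          printf("channel %u event at %lld.%09u\n",
                                 event.index, (long long)event.t.sec,
                                 event.t.nsec);

                  close(fd);
                  return 0;
          }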
      Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
      Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • igc: Enable internal i225 PPS · 64433e5b
      Ederson de Souza committed
      The i225 device can produce one interrupt on the full second, much
      like the i210, from which this patch takes its inspiration.
      
      This patch sets up the full-second interrupt on the i225; on receiving
      it, the driver sends a PPS event to the kernel PTP (Precision Time
      Protocol) subsystem.
      
      The PTP subsystem exposes the PPS events via ioctl and sysfs, and one
      can use the `testptp` tool (tools/testing/selftests/ptp) to check that
      the events are being generated.
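      For reference, connecting a PHC's PPS events to the system PPS
      machinery is a single ioctl from user space (fd being an open
      descriptor on the PHC device, e.g. /dev/ptp0, which is an assumption
      here):

          int enable = 1;

          if (ioctl(fd, PTP_ENABLE_PPS, enable))
                  perror("PTP_ENABLE_PPS");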
      Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
      Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • igb: Add double-check MTA_REGISTER for i210 and i211 · 1d3cb90c
      Grzegorz Siwik committed
      Add a new function which checks whether an MTA_REGISTER entry is
      filled correctly and, if not, writes to the same register again.
      There is a possibility that the i210 and i211 do not accept the
      MTA_REGISTER settings, especially when many multicast addresses are
      added and removed in a short time. Without this patch, the multicast
      settings may not always be set correctly in hardware.
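      A rough sketch of the double-check idea; the helper name, retry bound
      and shadow-copy argument are assumptions, while array_rd32/array_wr32
      and wrfl() follow igb conventions:

          /* Re-read each multicast table array entry and rewrite any value
           * that did not stick; bound the retries so this cannot loop forever.
           */
          static void igb_mta_double_check(struct e1000_hw *hw,
                                           const u32 *shadow, u32 count)
          {
                  u32 i, retries;

                  for (i = 0; i < count; i++) {
                          for (retries = 0; retries < 10; retries++) {
                                  if (array_rd32(E1000_MTA, i) == shadow[i])
                                          break;
                                  array_wr32(E1000_MTA, i, shadow[i]);
                                  wrfl(); /* flush the posted write */
                          }
                  }
          }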
      Signed-off-by: Grzegorz Siwik <grzegorz.siwik@intel.com>
      Tested-by: Dave Switzer <david.switzer@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • igb: Redistribute memory for transmit packet buffers when in Qav mode · 26b67f5a
      Ederson de Souza committed
      i210 has a total of 24KB of transmit packet buffer. When in Qav mode,
      this buffer is divided into four pieces, one for each Tx queue.
      Currently, 8KB are given to each of the two SR queues and 4KB are given
      to each of the two SP queues.
      
      However, it was noticed that such distribution can make best effort
      traffic (which would usually go to the SP queues when Qav is enabled, as
      the SR queues would be used by ETF or CBS qdiscs for TSN-aware traffic)
      perform poorly. Measured with iperf3, best effort throughput dropped
      by over a third (from 935Mbps to 578Mbps) even with no TSN traffic
      competing.
      
      This patch redistributes the 24KB equally: 6KB to each queue. In
      tests, there was no notable reduction in best effort traffic
      performance when no TSN traffic was competing.
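      As a hedged illustration, the per-queue sizes are packed into the Tx
      packet buffer size register as KB-granular fields; the macro name and
      the 6-bit field layout are assumptions based on i210-family
      conventions, not the literal diff:

          /* Four per-queue size fields, each expressed in KB:
           * 6KB + 6KB + 6KB + 6KB = the i210's 24KB Tx packet buffer.
           */
          #define TXPBSIZE_EQUAL_SPLIT  ((6 << 0) | (6 << 6) | \
                                         (6 << 12) | (6 << 18))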
      
      Below, more details about the data collected:
      
      All experiments were run using the following qdisc setup:
      
      qdisc taprio 100: root refcnt 9 tc 4 map 3 3 3 2 3 0 0 3 3 3 3 3 3 3 3 3
          queues offset 0 count 1 offset 1 count 1 offset 2 count 1 offset 3 count 1
          clockid TAI base-time 0 cycle-time 10000000 cycle-time-extension 0
          index 0 cmd S gatemask 0xf interval 10000000
      
      qdisc etf 8045: parent 100:1 clockid TAI delta 1000000 offload on
          deadline_mode off skip_sock_check off
      
      TSN traffic, when enabled, had these characteristics:
       Packet size: 1500 bytes
       Transmission interval: 125us
      
      ----------------------------------
      Without this patch:
      ----------------------------------
      - TCP data:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.35 GBytes   578 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.07 GBytes   460 Mbits/sec    1
      
      - TCP data limiting iperf3 buffer size to 4K:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.35 GBytes   579 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.08 GBytes   462 Mbits/sec    0
      
      - TCP data limiting iperf3 buffer size to 192 bytes (smallest size without
       serious performance degradation):
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.34 GBytes   577 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.07 GBytes   461 Mbits/sec    1
      
      - UDP data at 1000Mbit/sec:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.36 GBytes   586 Mbits/sec  0.000 ms  0/1011407 (0%)
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.05 GBytes   451 Mbits/sec  0.000 ms  0/778672 (0%)
      
      ----------------------------------
      With this patch:
      ----------------------------------
      
      - TCP data:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   932 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   646 Mbits/sec    1
      
      - TCP data limiting iperf3 buffer size to 4K:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   931 Mbits/sec    0
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   645 Mbits/sec    0
      
      - TCP data limiting iperf3 buffer size to 192 bytes (smallest size without
       serious performance degradation):
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  2.17 GBytes   932 Mbits/sec    1
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Retr
              [  5]   0.00-20.00  sec  1.50 GBytes   645 Mbits/sec    0
      
      - UDP data at 1000Mbit/sec:
          - No TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  2.23 GBytes   956 Mbits/sec  0.000 ms  0/1650226 (0%)
      
          - With TSN traffic:
              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
              [  5]   0.00-20.00  sec  1.51 GBytes   649 Mbits/sec  0.000 ms  0/1120264 (0%)
      Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  4. 16 Apr 2021: 1 commit
    • i40e: fix the panic when running bpf in xdpdrv mode · 4e39a072
      Jason Xing committed
      Fix this panic by adding more rules for calculating @rss_size_max,
      the value used to allocate queues when a bpf program is loaded; an
      over-large value makes that allocation fail and then triggers a NULL
      pointer dereference on vsi->rx_rings. Prior to this fix, the driver
      did not take the number of online cpus into account and would
      allocate 256 queues on a machine with only 32 cpus actually online.
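      The essence of the added rule is a clamp against the online CPU
      count; a sketch using standard kernel helpers (the exact placement in
      i40e_sw_init() is an assumption):

          /* Never ask for more RSS queues than online cpus can use; round
           * up to a power of two so the bound is a valid RSS table size.
           */
          pf->rss_size_max = min_t(int, pf->rss_size_max,
                                   roundup_pow_of_two(num_online_cpus()));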
      
      Once the bpf load begins, the log shows "failed to get tracking for
      256 queues for VSI 0 err -12" followed by "setup of MAIN VSI failed".

      The key information from the crash log is attached here.
      
      BUG: unable to handle kernel NULL pointer dereference at
      0000000000000000
      RIP: 0010:i40e_xdp+0xdd/0x1b0 [i40e]
      Call Trace:
      [2160294.717292]  ? i40e_reconfig_rss_queues+0x170/0x170 [i40e]
      [2160294.717666]  dev_xdp_install+0x4f/0x70
      [2160294.718036]  dev_change_xdp_fd+0x11f/0x230
      [2160294.718380]  ? dev_disable_lro+0xe0/0xe0
      [2160294.718705]  do_setlink+0xac7/0xe70
      [2160294.719035]  ? __nla_parse+0xed/0x120
      [2160294.719365]  rtnl_newlink+0x73b/0x860
      
      Fixes: 41c445ff ("i40e: main driver core")
      Co-developed-by: Shujin Li <lishujin@kuaishou.com>
      Signed-off-by: Shujin Li <lishujin@kuaishou.com>
      Signed-off-by: Jason Xing <xingwanli@kuaishou.com>
      Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 15 Apr 2021: 15 commits
  6. 14 Apr 2021: 4 commits