1. 10 Feb 2022 (2 commits)
  2. 15 Dec 2021 (3 commits)
  3. 30 Oct 2021 (1 commit)
  4. 21 Oct 2021 (2 commits)
  5. 20 Oct 2021 (2 commits)
    • ice: update dim usage and moderation · d8eb7ad5
      Authored by Jesse Brandeburg
      The driver was showing unreliable latency in single-threaded
      ping-pong tests. This was root-caused to the DIM algorithm landing
      on too slow an interrupt rate, which caused high latency; the
      problem was especially visible when queues were being switched
      frequently by the scheduler, as happens on default setups today.
      
      To improve this, the upper rate limit for interrupts is moved to
      4 microseconds, which means that no vector can generate more than
      250,000 interrupts per second; the old configuration allowed up to
      100,000. The driver previously tried to program the rate limit too
      frequently, and if the receive and transmit sides were both active
      on the same vector, INTRL would be set incorrectly; this redesign
      fixes that issue as a side effect.
      
      From now on the driver operates with a slightly changed DIM table
      that puts more emphasis on latency sensitivity, with more
      low-latency entries than high-latency ones (high being >= 64
      microseconds).
      
      The driver also resets the DIM algorithm state with a new stats set when
      there is no work done and the data becomes stale (older than 1 second),
      for the respective receive or transmit portion of the interrupt.
      
      Add a new helper for setting the rate limit; it will see further
      use in a follow-up patch.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Gurucharan G <gurucharanx.g@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      d8eb7ad5
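      As a quick illustration of the arithmetic behind the figures above,
      the tiny stand-alone C program below converts a rate-limit interval
      in microseconds into the corresponding maximum interrupt rate. The
      function name is invented for this sketch and is not part of the
      driver.

      #include <stdio.h>

      /* Illustrative only: a rate limit of N microseconds allows at most
       * one interrupt per N microseconds on a given vector. */
      static unsigned int max_irqs_per_sec(unsigned int rate_limit_usecs)
      {
              return 1000000u / rate_limit_usecs;
      }

      int main(void)
      {
              printf("4 usec limit  -> %u irq/s\n", max_irqs_per_sec(4));  /* 250000 */
              printf("10 usec limit -> %u irq/s\n", max_irqs_per_sec(10)); /* 100000 */
              return 0;
      }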
    • ice: Add support for VF rate limiting · 4ecc8633
      Authored by Brett Creeley
      Implement ndo_set_vf_rate to support setting of min_tx_rate and
      max_tx_rate; set the appropriate bandwidth in the scheduler for the
      node representing the specified VF VSI.
      Co-developed-by: Tarun Singh <tarun.k.singh@intel.com>
      Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      4ecc8633
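      A minimal user-space model of the min/max Tx rate bookkeeping this
      commit describes; the struct and helper names are hypothetical, not
      the driver's, and the scheduler programming is only indicated by a
      comment. Rates are in Mbps, as with ndo_set_vf_rate.

      #include <stdio.h>

      struct vf_cfg {
              int min_tx_rate;   /* committed bandwidth, Mbps (0 = none) */
              int max_tx_rate;   /* bandwidth ceiling,   Mbps (0 = none) */
      };

      static int set_vf_rate(struct vf_cfg *vf, int min_tx_rate, int max_tx_rate)
      {
              /* Reject a committed rate above the ceiling. */
              if (max_tx_rate && min_tx_rate > max_tx_rate)
                      return -1;
              vf->min_tx_rate = min_tx_rate;
              vf->max_tx_rate = max_tx_rate;
              /* A real driver would now program the scheduler node for the
               * VF's VSI with these values. */
              return 0;
      }

      int main(void)
      {
              struct vf_cfg vf = { 0, 0 };

              printf("set 100/1000:  %d\n", set_vf_rate(&vf, 100, 1000));  /* 0 */
              printf("set 2000/1000: %d\n", set_vf_rate(&vf, 2000, 1000)); /* -1 */
              return 0;
      }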
  6. 15 Oct 2021 (1 commit)
    • ice: split ice_ring onto Tx/Rx separate structs · e72bba21
      Authored by Maciej Fijalkowski
      While it was convenient to have a generic ring structure that served
      both the Tx and Rx sides, upcoming commits are going to introduce
      several Tx-specific fields, so to avoid penalizing the Rx side,
      split the ring into new ice_tx_ring and ice_rx_ring structs.
      
      The Rx ring could have kept the old ice_ring struct, which would
      reduce code churn within this patch, but that would make things
      asymmetric.
      
      Make a union out of the ring container within ice_q_vector so that
      it is possible to iterate over the newly introduced ice_tx_ring.
      
      Remove @size, as it is only accessed from the control path and can
      easily be calculated.
      
      Change definitions of ice_update_ring_stats and
      ice_fetch_u64_stats_per_ring so that they are ring agnostic and can be
      used for both Rx and Tx rings.
      
      The Rx and Tx ring structs are 256 and 192 bytes, respectively; in
      the Rx ring, xdp_rxq_info occupies its own cacheline, which accounts
      for most of the difference.
      Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Tested-by: Gurucharan G <gurucharanx.g@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      e72bba21
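      The rough shape of the split can be sketched as below: separate Tx
      and Rx ring structs plus a union in the per-vector ring container so
      either kind can be iterated from the same slot. The field sets are
      illustrative, not the actual driver layout.

      #include <stdint.h>

      struct ice_tx_ring {
              struct ice_tx_ring *next;   /* next Tx ring on this vector */
              void *desc;                 /* descriptor ring memory */
              uint16_t next_to_use;
              uint16_t next_to_clean;
              /* Tx-only fields would live here. */
      };

      struct ice_rx_ring {
              struct ice_rx_ring *next;   /* next Rx ring on this vector */
              void *desc;
              uint16_t next_to_use;
              uint16_t next_to_clean;
              /* Rx-only fields (e.g. XDP state) would live here. */
      };

      struct ice_ring_container {
              union {                     /* one container, two ring kinds */
                      struct ice_tx_ring *tx_ring;
                      struct ice_rx_ring *rx_ring;
              };
              /* per-container ITR / stats state */
      };

      int main(void)
      {
              struct ice_tx_ring txr = { 0 };
              struct ice_ring_container c = { .tx_ring = &txr };

              /* Walk the (single-entry) Tx list hanging off the container. */
              for (struct ice_tx_ring *r = c.tx_ring; r; r = r->next)
                      ;
              return 0;
      }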
  7. 14 Oct 2021 (1 commit)
  8. 08 Oct 2021 (1 commit)
  9. 29 Sep 2021 (1 commit)
  10. 11 Jun 2021 (1 commit)
    • ice: enable receive hardware timestamping · 77a78115
      Authored by Jacob Keller
      Add SIOCGHWTSTAMP and SIOCSHWTSTAMP ioctl handlers to respond to
      requests to enable timestamping support. If the request is for enabling
      Rx timestamps, set a bit in the Rx descriptors to indicate that receive
      timestamps should be reported.
      
      Hardware captures receive timestamps in the PHY which only captures part
      of the timer, and reports only 40 bits into the Rx descriptor. The upper
      32 bits represent the contents of GLTSYN_TIME_L at the point of packet
      reception, while the lower 8 bits represent the upper 8 bits of
      GLTSYN_TIME_0.
      
      The networking and PTP stack expect 64 bit timestamps in nanoseconds. To
      support this, implement some logic to extend the timestamps by using the
      full PHC time.
      
      If the Rx timestamp was captured prior to the PHC time, then the real
      timestamp is
      
        PHC - (lower_32_bits(PHC) - timestamp)
      
      If the Rx timestamp was captured after the PHC time, then the real
      timestamp is
      
        PHC + (timestamp - lower_32_bits(PHC))
      
      These calculations are correct as long as neither the PHC timestamp nor
      the Rx timestamps are more than 2^32-1 nanoseconds old. Further, we can
      detect when the Rx timestamp is before or after the PHC as long as the
      PHC timestamp is no more than 2^31-1 nanoseconds old.
      
      In that case, we calculate the delta between the lower 32 bits of the
      PHC and the Rx timestamp. If it's larger than 2^31-1 then the Rx
      timestamp must have been captured in the past. If it's smaller, then the
      Rx timestamp must have been captured after PHC time.
      
      Add an ice_ptp_extend_32b_ts function that relies on a cached copy of
      the PHC time and implements this algorithm to calculate the proper
      upper 32 bits of the Rx timestamps.
      
      Cache the PHC time periodically in all of the Rx rings. This enables
      each Rx ring to simply call the extension function with a recent copy of
      the PHC time. By ensuring that the PHC time is kept up to date
      periodically, we ensure this algorithm doesn't use stale data and
      produce incorrect results.
      
      To cache the time, introduce a kthread worker and a kthread work
      item to periodically store the Rx time. It might seem like we
      should use the .do_aux_work
      interface of the PTP clock. This doesn't work because all PFs must cache
      this time, but only one PF owns the PTP clock device.
      
      Thus, the ice driver will manage its own kthread instead of relying on
      the PTP do_aux_work handler.
      
      With this change, the driver can now report Rx timestamps on all
      incoming packets.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      77a78115
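      The extension math described above can be modelled in stand-alone C
      as below. The function name is borrowed from the commit text, but
      the body is only a sketch: phc_time is a recent 64-bit PHC reading
      in nanoseconds and in_tstamp stands for the 32 low-order timestamp
      bits recovered from the descriptor.

      #include <stdint.h>
      #include <stdio.h>

      static uint64_t extend_32b_ts(uint64_t phc_time, uint32_t in_tstamp)
      {
              uint32_t phc_low = (uint32_t)phc_time;  /* lower_32_bits(PHC) */
              uint32_t delta = in_tstamp - phc_low;

              if (delta > INT32_MAX) {
                      /* Timestamp was captured before the cached PHC value. */
                      delta = phc_low - in_tstamp;
                      return phc_time - delta;
              }
              /* Timestamp was captured after the cached PHC value. */
              return phc_time + delta;
      }

      int main(void)
      {
              uint64_t phc = 0x123456789ULL;

              /* One timestamp just before and one just after the PHC reading. */
              printf("%llu\n", (unsigned long long)extend_32b_ts(phc, (uint32_t)phc - 100));
              printf("%llu\n", (unsigned long long)extend_32b_ts(phc, (uint32_t)phc + 100));
              return 0;
      }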
  11. 07 Jun 2021 (2 commits)
    • ice: wait for reset before reporting devlink info · 1c08052e
      Authored by Jacob Keller
      Requesting device firmware information while the device is busy
      cleaning up after a reset can result in an unexpected failure.
      
      This occurs because the command is attempting to access the device
      AdminQ while it is down. Resolve this by having the command wait
      until the reset is complete. To do this, introduce a
      reset_wait_queue and an associated helper function, ice_wait_for_reset.
      
      This helper will use the wait queue to sleep until the driver is done
      rebuilding. Use of a wait queue is preferred because the potential sleep
      duration can be several seconds.
      
      To ensure that the thread wakes up properly, a new wake_up call is
      added in all code paths that clear the reset state bits associated
      with the driver rebuild flow.
      
      Using this ensures that tools can request device information without
      worrying about whether the driver is cleaning up from a reset.
      Specifically, it is expected that a flash update could result in
      a device reset, and it is better to delay the response for information
      until the reset is complete rather than exit with an immediate failure.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      1c08052e
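      The waiting pattern can be modelled in user space as below; a POSIX
      condition variable stands in for the kernel wait queue, and all
      names are hypothetical. The waiter sleeps until the rebuild path
      clears the in-reset flag and wakes it, which is the shape of the
      helper described above.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t reset_done = PTHREAD_COND_INITIALIZER;
      static bool in_reset = true;

      /* Analogue of the helper: block until the rebuild finishes. */
      static void wait_for_reset(void)
      {
              pthread_mutex_lock(&lock);
              while (in_reset)
                      pthread_cond_wait(&reset_done, &lock);
              pthread_mutex_unlock(&lock);
      }

      /* Analogue of the rebuild path clearing reset state and waking waiters. */
      static void *rebuild_thread(void *arg)
      {
              (void)arg;
              sleep(1);                      /* pretend the rebuild takes a while */
              pthread_mutex_lock(&lock);
              in_reset = false;
              pthread_cond_broadcast(&reset_done);
              pthread_mutex_unlock(&lock);
              return NULL;
      }

      int main(void)
      {
              pthread_t t;

              pthread_create(&t, NULL, rebuild_thread, NULL);
              wait_for_reset();              /* device info would be reported after this */
              puts("reset complete, safe to report device info");
              pthread_join(t, NULL);
              return 0;
      }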
    • ice: Refactor VIRTCHNL_OP_CONFIG_VSI_QUEUES handling · 7ad15440
      Authored by Brett Creeley
      Currently, when a VF requests queue configuration via
      VIRTCHNL_OP_CONFIG_VSI_QUEUES, the PF driver expects this message to
      be sent only once and always assumes that the queues being
      configured start from 0. This is incorrect and causes issues when
      a VF sends this message for multiple queue blocks. Fix this by
      using the queue_id specified in the virtchnl message and by allowing
      individual Rx and/or Tx queues to be configured.
      
      Also, reduce the duplicated for loops for configuring the queues by
      moving all the logic into a single for loop.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      7ad15440
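      The fix can be illustrated with the stand-alone model below: each
      queue-pair entry carries its own queue_id, which is honoured instead
      of assuming the block starts at 0, and Rx and Tx are configured in a
      single loop. The types and names are invented for the sketch.

      #include <stdio.h>

      struct qpair_cfg {
              unsigned int queue_id;
              int has_rx;   /* nonzero if this entry carries an Rx config */
              int has_tx;   /* nonzero if this entry carries a Tx config */
      };

      static int config_vsi_queues(const struct qpair_cfg *qs, unsigned int n,
                                   unsigned int num_vsi_queues)
      {
              for (unsigned int i = 0; i < n; i++) {
                      unsigned int qid = qs[i].queue_id;   /* not simply i */

                      if (qid >= num_vsi_queues)
                              return -1;                   /* reject bad ids */
                      if (qs[i].has_rx)
                              printf("configure Rx queue %u\n", qid);
                      if (qs[i].has_tx)
                              printf("configure Tx queue %u\n", qid);
              }
              return 0;
      }

      int main(void)
      {
              /* Two separate requests from the VF: queues 0-1, then 2-3. */
              struct qpair_cfg first[]  = { {0, 1, 1}, {1, 1, 1} };
              struct qpair_cfg second[] = { {2, 1, 1}, {3, 1, 1} };

              config_vsi_queues(first, 2, 4);
              config_vsi_queues(second, 2, 4);
              return 0;
      }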
  12. 29 May 2021 (1 commit)
  13. 15 Apr 2021 (2 commits)
  14. 08 Apr 2021 (1 commit)
  15. 25 Sep 2020 (1 commit)
    • ice: fix memory leak if register_netdev fails · 135f4b9e
      Authored by Jacob Keller
      The ice_setup_pf_sw function can cause a memory leak if register_netdev
      fails, due to accidentally failing to free the VSI rings. Fix the memory
      leak by using ice_vsi_release, ensuring we actually go through the full
      teardown process.
      
      This should be safe even if the netdevice is not registered because we
      will have set the netdev pointer to NULL, ensuring ice_vsi_release won't
      call unregister_netdev.
      
      An alternative fix would be moving management of the PF VSI netdev
      into the main VSI setup code. This is complicated and likely
      requires a significant refactor of how we manage VSIs.
      
      Fixes: 3a858ba3 ("ice: Add support for VSI allocation and deallocation")
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      135f4b9e
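      The error-path idea can be sketched as below: when registration
      fails, run the full release path instead of freeing only part of the
      resources, and let a NULL netdev pointer skip the unregister step.
      The names and structure here are invented for illustration.

      #include <stdio.h>
      #include <stdlib.h>

      struct vsi {
              void *rings;
              void *netdev;   /* stays NULL until registration succeeds */
      };

      static void vsi_release(struct vsi *v)
      {
              if (v->netdev) {
                      /* unregister_netdev() would run here */
              }
              free(v->rings);         /* rings are always torn down */
              v->rings = NULL;
      }

      static int setup_pf_sw(struct vsi *v)
      {
              int err;

              v->rings = malloc(128);
              if (!v->rings)
                      return -1;

              err = -1;               /* pretend register_netdev() failed */
              if (err) {
                      v->netdev = NULL;
                      vsi_release(v); /* full teardown, so no leak */
                      return err;
              }
              return 0;
      }

      int main(void)
      {
              struct vsi v = { NULL, NULL };

              printf("setup: %d\n", setup_pf_sw(&v));
              return 0;
      }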
  16. 24 Jul 2020 (1 commit)
  17. 31 May 2020 (1 commit)
  18. 23 May 2020 (1 commit)
  19. 22 May 2020 (1 commit)
  20. 11 Mar 2020 (1 commit)
  21. 16 Feb 2020 (2 commits)
    • ice: Add support to enable/disable all Rx queues before waiting · 13a6233b
      Authored by Brett Creeley
      Currently when we enable/disable all Rx queues we do the following
      sequence for each Rx queue and then move to the next queue.
      
      1. Enable/Disable the Rx queue via register write.
      2. Read the configuration register to determine if the Rx queue was
      enabled/disabled successfully.
      
      In some cases enabling/disabling queue 0 fails because of step 2 above.
      Fix this by doing step 1 for all of the Rx queues and then step 2 for
      all of the Rx queues.
      
      Also, there are cases where we enable/disable a single queue (e.g.
      SR-IOV and XDP), so add a new function that does steps 1 and 2 above
      with a read flush in between.
      
      This change also requires the ability to enable/disable a single Rx
      queue both with and without waiting for the change to propagate
      through hardware; handle this by adding a boolean wait flag to the
      necessary functions.
      
      Also, add the keywords "one" and "all" to distinguish between
      enabling/disabling a single Rx queue and all Rx queues respectively.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      13a6233b
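      The two-pass pattern can be shown with the small model below: every
      queue gets its register write first, and only then is each queue
      checked for the new state, rather than writing and waiting one queue
      at a time. Register access is faked and all names are invented.

      #include <stdbool.h>
      #include <stdio.h>

      #define NUM_QUEUES 4

      static bool qena[NUM_QUEUES];           /* stands in for the queue-enable register */

      static void rx_queue_request(int q, bool enable)
      {
              qena[q] = enable;               /* step 1: register write */
      }

      static bool rx_queue_settled(int q, bool enable)
      {
              return qena[q] == enable;       /* step 2: read back the state */
      }

      static int rx_all_queues(bool enable)
      {
              int q;

              for (q = 0; q < NUM_QUEUES; q++)        /* pass 1: all writes */
                      rx_queue_request(q, enable);

              for (q = 0; q < NUM_QUEUES; q++)        /* pass 2: all waits */
                      if (!rx_queue_settled(q, enable))
                              return -1;
              return 0;
      }

      int main(void)
      {
              printf("enable all:  %d\n", rx_all_queues(true));
              printf("disable all: %d\n", rx_all_queues(false));
              return 0;
      }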
    • ice: Add initial support for QinQ · 42f3efef
      Authored by Brett Creeley
      Allow support for S-Tag + C-Tag VLAN traffic by disabling pruning when
      there are no 0x8100 VLAN interfaces currently created on top of the PF.
      When a 0x8100 VLAN interface is configured, enable pruning and only
      support single and double C-Tag VLAN traffic. If all of the 0x8100
      interfaces that were created on top of the PF are removed via
      ethtool -K <iface> rx-vlan-filter off or via ip tools, then disable
      pruning and allow S-Tag + C-Tag traffic again.
      
      Add a VLAN 0 filter by default for the PF. This is needed because
      a bridge sets the default_pvid to 1 and sends that request down to
      ice_vlan_rx_add_vid(), but the request to add VLAN 0 never arrives
      via the 8021q module, which would cause all untagged traffic to be
      dropped.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      42f3efef
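      The pruning policy amounts to the small rule modelled below: prune
      only while at least one 0x8100 VLAN interface exists on top of the
      PF, otherwise leave pruning off so S-Tag + C-Tag frames pass. The
      counter and helper names are invented for this sketch.

      #include <stdbool.h>
      #include <stdio.h>

      static int num_8021q_vlans;     /* 0x8100 VLAN interfaces on the PF */

      static bool pruning_enabled(void)
      {
              return num_8021q_vlans > 0;
      }

      static void vlan_added(void)   { num_8021q_vlans++; }
      static void vlan_removed(void) { if (num_8021q_vlans) num_8021q_vlans--; }

      int main(void)
      {
              printf("no VLANs -> pruning %d\n", pruning_enabled());   /* 0 */
              vlan_added();
              printf("one VLAN -> pruning %d\n", pruning_enabled());   /* 1 */
              vlan_removed();
              printf("removed  -> pruning %d\n", pruning_enabled());   /* 0 */
              return 0;
      }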
  22. 13 Feb 2020 (1 commit)
  23. 04 Jan 2020 (1 commit)
    • ice: Add code to keep track of current dflt_vsi · fc0f39bc
      Authored by Brett Creeley
      We can't have more than one default VSI, so prevent another VSI
      from overwriting the current dflt_vsi. This is achieved by adding
      the following functions:
      
      ice_is_dflt_vsi_in_use()
      - Used to check if the default VSI is already being used.
      
      ice_is_vsi_dflt_vsi()
      - Used to check if the VSI passed in is in fact the default VSI.
      
      ice_set_dflt_vsi()
      - Used to set the default VSI via a switch rule.
      
      ice_clear_dflt_vsi()
      - Used to clear the default VSI via a switch rule.
      
      Also, there was no need to introduce any locking because all mailbox
      events and synchronization of switch filters for the PF happen in the
      service task.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      fc0f39bc
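      The bookkeeping the four helpers provide can be sketched as below;
      the switch-rule programming is stubbed out with comments and the
      names only loosely mirror the commit text.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct vsi { int id; };

      static struct vsi *dflt_vsi;    /* at most one default VSI at a time */

      static bool is_dflt_vsi_in_use(void)       { return dflt_vsi != NULL; }
      static bool is_vsi_dflt_vsi(struct vsi *v) { return dflt_vsi == v; }

      static int set_dflt_vsi(struct vsi *v)
      {
              /* Refuse to overwrite an existing default VSI. */
              if (is_dflt_vsi_in_use() && !is_vsi_dflt_vsi(v))
                      return -1;
              /* A switch rule directing unmatched traffic to v would be added here. */
              dflt_vsi = v;
              return 0;
      }

      static int clear_dflt_vsi(struct vsi *v)
      {
              if (!is_vsi_dflt_vsi(v))
                      return -1;
              /* The switch rule would be removed here. */
              dflt_vsi = NULL;
              return 0;
      }

      int main(void)
      {
              struct vsi a = { 1 }, b = { 2 };

              printf("set a:   %d\n", set_dflt_vsi(&a));   /* 0 */
              printf("set b:   %d\n", set_dflt_vsi(&b));   /* -1 */
              printf("clear a: %d\n", clear_dflt_vsi(&a)); /* 0 */
              return 0;
      }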
  24. 23 Nov 2019 (1 commit)
  25. 09 Nov 2019 (2 commits)
  26. 05 Nov 2019 (3 commits)
  27. 13 Sep 2019 (2 commits)
  28. 27 Aug 2019 (1 commit)