1. 14 Oct 2021, 1 commit
  2. 08 Oct 2021, 1 commit
  3. 29 Sep 2021, 1 commit
  4. 11 Jun 2021, 1 commit
    • ice: enable receive hardware timestamping · 77a78115
      Committed by Jacob Keller
      Add SIOCGHWTSTAMP and SIOCSHWTSTAMP ioctl handlers to respond to
      requests to enable timestamping support. If the request is for enabling
      Rx timestamps, set a bit in the Rx descriptors to indicate that receive
      timestamps should be reported.
      
      Hardware captures receive timestamps in the PHY, which holds only part
      of the timer and reports only 40 bits into the Rx descriptor. The upper
      32 bits represent the contents of GLTSYN_TIME_L at the point of packet
      reception, while the lower 8 bits represent the upper 8 bits of
      GLTSYN_TIME_0.
      
      The networking and PTP stacks expect 64-bit timestamps in nanoseconds. To
      support this, implement some logic to extend the timestamps by using the
      full PHC time.
      
      If the Rx timestamp was captured prior to the PHC time, then the real
      timestamp is
      
        PHC - (lower_32_bits(PHC) - timestamp)
      
      If the Rx timestamp was captured after the PHC time, then the real
      timestamp is
      
        PHC + (timestamp - lower_32_bits(PHC))
      
      These calculations are correct as long as neither the PHC timestamp nor
      the Rx timestamps are more than 2^32-1 nanoseconds old. Further, we can
      detect when the Rx timestamp is before or after the PHC as long as the
      PHC timestamp is no more than 2^31-1 nanoseconds old.
      
      To detect this, we calculate the delta between the lower 32 bits of the
      PHC time and the Rx timestamp. If it's larger than 2^31-1, then the Rx
      timestamp must have been captured in the past. If it's smaller, then the
      Rx timestamp must have been captured after the PHC time.
      
      Add an ice_ptp_extend_32b_ts function that relies on a cached copy of
      the PHC time and implements this algorithm to calculate the proper upper
      32 bits of the Rx timestamps.
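
      The extension scheme described above can be sketched in a few lines.
      This is a minimal, self-contained illustration, not the driver's actual
      ice_ptp_extend_32b_ts; the function name and types are simplified:

```c
#include <stdint.h>

/* Sketch of the 32-bit extension scheme: cached_phc is a recent 64-bit
 * PHC time in nanoseconds; in_tstamp is the lower 32 bits of the
 * captured Rx timestamp. */
static uint64_t extend_32b_ts(uint64_t cached_phc, uint32_t in_tstamp)
{
	uint32_t phc_lo = (uint32_t)cached_phc;
	uint32_t delta = in_tstamp - phc_lo;

	if (delta > INT32_MAX) {
		/* Wrapped: the capture happened before the cached PHC
		 * time, so take the distance the other way and subtract. */
		delta = phc_lo - in_tstamp;
		return cached_phc - delta;
	}
	/* Capture happened at or after the cached PHC time. */
	return cached_phc + delta;
}
```

      The unsigned wrap-around of the 32-bit subtraction is what makes the
      "larger than 2^31-1" comparison distinguish past from future captures.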
      
      Cache the PHC time periodically in all of the Rx rings. This enables
      each Rx ring to simply call the extension function with a recent copy of
      the PHC time. By ensuring that the PHC time is kept up to date
      periodically, we ensure this algorithm doesn't use stale data or
      produce incorrect results.
      
      To cache the time, introduce a kworker and a kwork item to periodically
      store the Rx time. It might seem like we should use the .do_aux_work
      interface of the PTP clock. This doesn't work because all PFs must cache
      this time, but only one PF owns the PTP clock device.
      
      Thus, the ice driver will manage its own kthread instead of relying on
      the PTP do_aux_work handler.
      
      With this change, the driver can now report Rx timestamps on all
      incoming packets.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  5. 07 Jun 2021, 2 commits
    • ice: wait for reset before reporting devlink info · 1c08052e
      Committed by Jacob Keller
      Requesting device firmware information while the device is busy cleaning
      up after a reset can result in an unexpected failure.
      
      This occurs because the command is attempting to access the device
      AdminQ while it is down. Resolve this by having the command wait until
      the reset is complete. To do this, introduce a reset_wait_queue and an
      associated helper function, ice_wait_for_reset.
      
      This helper will use the wait queue to sleep until the driver is done
      rebuilding. Use of a wait queue is preferred because the potential sleep
      duration can be several seconds.
      
      To ensure that the thread wakes up properly, a new wake_up call is added
      during all code paths which clear the reset state bits associated with
      the driver rebuild flow.
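
      The sleep/wake shape described above can be sketched in userspace with
      POSIX primitives. This is an analogy only: the driver uses the kernel's
      wait-queue API (wait_event/wake_up), and every name here is
      illustrative:

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace analogue of the reset_wait_queue pattern: waiters sleep
 * until the rebuild state clears; the rebuild path wakes them after
 * clearing it. */
struct reset_ctx {
	pthread_mutex_t lock;
	pthread_cond_t done;
	bool resetting;
};

/* Analogue of ice_wait_for_reset(): sleep until the reset completes. */
static void wait_for_reset(struct reset_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	while (ctx->resetting)
		pthread_cond_wait(&ctx->done, &ctx->lock);
	pthread_mutex_unlock(&ctx->lock);
}

/* Analogue of the rebuild paths: clear the state, then wake waiters. */
static void finish_reset(struct reset_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	ctx->resetting = false;
	pthread_cond_broadcast(&ctx->done);
	pthread_mutex_unlock(&ctx->lock);
}
```

      A condition variable (like a wait queue) is appropriate here precisely
      because the sleep can last several seconds; busy-polling would not be.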
      
      Using this ensures that tools can request device information without
      worrying about whether the driver is cleaning up from a reset.
      Specifically, it is expected that a flash update could result in
      a device reset, and it is better to delay the response for information
      until the reset is complete rather than exit with an immediate failure.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: Refactor VIRTCHNL_OP_CONFIG_VSI_QUEUES handling · 7ad15440
      Committed by Brett Creeley
      Currently, when a VF requests queue configuration via
      VIRTCHNL_OP_CONFIG_VSI_QUEUES, the PF driver expects that this message
      will only be sent once and always assumes the queues being configured
      start from 0. This is incorrect and causes issues when a VF tries to
      send this message for multiple queue blocks. Fix this by using the
      queue_id specified in the virtchnl message and allowing individual Rx
      and/or Tx queues to be configured.
      
      Also, reduce the duplicated for loops for configuring the queues by
      moving all the logic into a single for loop.
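
      Schematically, the fix looks like the following single loop. The
      structure and names here are hypothetical stand-ins for the virtchnl
      message contents, not the driver's actual types:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical queue-pair entry, standing in for one element of the
 * VIRTCHNL_OP_CONFIG_VSI_QUEUES message. */
struct qpair_info {
	uint16_t queue_id;	/* queue id carried in the message */
	bool cfg_rx;
	bool cfg_tx;
};

/* One loop over the message entries: each entry names its own queue,
 * so messages covering different queue blocks configure the right
 * queues instead of always starting from 0. */
static int config_queues(bool *rx_cfgd, bool *tx_cfgd, size_t num_queues,
			 const struct qpair_info *pairs, size_t num_pairs)
{
	for (size_t i = 0; i < num_pairs; i++) {
		uint16_t q = pairs[i].queue_id;

		if (q >= num_queues)
			return -1;	/* reject out-of-range queue ids */
		if (pairs[i].cfg_rx)
			rx_cfgd[q] = true;
		if (pairs[i].cfg_tx)
			tx_cfgd[q] = true;
	}
	return 0;
}
```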
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  6. 29 May 2021, 1 commit
  7. 15 Apr 2021, 2 commits
  8. 08 Apr 2021, 1 commit
  9. 25 Sep 2020, 1 commit
    • ice: fix memory leak if register_netdev_fails · 135f4b9e
      Committed by Jacob Keller
      The ice_setup_pf_sw function can cause a memory leak if register_netdev
      fails, due to accidentally failing to free the VSI rings. Fix the memory
      leak by using ice_vsi_release, ensuring we actually go through the full
      teardown process.
      
      This should be safe even if the netdevice is not registered because we
      will have set the netdev pointer to NULL, ensuring ice_vsi_release won't
      call unregister_netdev.
      
      An alternative fix would be moving management of the PF VSI netdev into
      the main VSI setup code, but that is complicated and would likely
      require a significant refactor of how we manage VSIs.
      
      Fixes: 3a858ba3 ("ice: Add support for VSI allocation and deallocation")
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  10. 24 Jul 2020, 1 commit
  11. 31 May 2020, 1 commit
  12. 23 May 2020, 1 commit
  13. 22 May 2020, 1 commit
  14. 11 Mar 2020, 1 commit
  15. 16 Feb 2020, 2 commits
    • ice: Add support to enable/disable all Rx queues before waiting · 13a6233b
      Committed by Brett Creeley
      Currently, when we enable/disable all Rx queues, we do the following
      sequence for each Rx queue before moving on to the next queue.
      
      1. Enable/Disable the Rx queue via register write.
      2. Read the configuration register to determine if the Rx queue was
      enabled/disabled successfully.
      
      In some cases enabling/disabling queue 0 fails because of step 2 above.
      Fix this by doing step 1 for all of the Rx queues and then step 2 for
      all of the Rx queues.
      
      Also, there are cases where we enable/disable a single queue (e.g.
      SR-IOV and XDP), so add a new function that does steps 1 and 2 above
      with a read flush in between.
      
      This change also requires that a single Rx queue can be
      enabled/disabled with and without waiting for the change to propagate
      through hardware. Handle this by adding a boolean wait flag to the
      necessary functions.
      
      Also, add the keywords "one" and "all" to distinguish between
      enabling/disabling a single Rx queue and all Rx queues respectively.
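
      The two-pass sequence can be sketched as follows, with a simulated
      register array and made-up bit names; the real driver writes and polls
      per-queue control registers with timeouts:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define QENA_REQ  0x1u	/* made-up request bit */
#define QENA_STAT 0x4u	/* made-up status bit */

/* Step 1 for every queue: request enable/disable via register write. */
static void qena_req_all(uint32_t *regs, size_t n, bool ena)
{
	for (size_t i = 0; i < n; i++)
		regs[i] = ena ? (regs[i] | QENA_REQ) : (regs[i] & ~QENA_REQ);
}

/* Stand-in for hardware: the status bit follows the request bit after
 * some propagation delay. */
static void hw_propagate(uint32_t *regs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		regs[i] = (regs[i] & QENA_REQ) ? (regs[i] | QENA_STAT)
					       : (regs[i] & ~QENA_STAT);
}

/* Step 2 for every queue: confirm the status matches the request.
 * Running this only after all requests were issued gives each queue
 * time to settle, which is the point of the fix described above. */
static bool qena_verify_all(const uint32_t *regs, size_t n, bool ena)
{
	for (size_t i = 0; i < n; i++)
		if (((regs[i] & QENA_STAT) != 0) != ena)
			return false;
	return true;
}
```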
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: Add initial support for QinQ · 42f3efef
      Committed by Brett Creeley
      Allow S-Tag + C-Tag VLAN traffic by disabling pruning when there are
      no 0x8100 VLAN interfaces currently created on top of the PF. When a
      0x8100 VLAN interface is configured, enable pruning and only support
      single and double C-Tag VLAN traffic. If all of the 0x8100 interfaces
      that were created on top of the PF are removed via
      ethtool -K <iface> rx-vlan-filter off or via ip tools, then disable
      pruning and allow S-Tag + C-Tag traffic again.
      
      Add a VLAN 0 filter by default for the PF. This is needed because a
      bridge sets the default_pvid to 1 and sends the request down to
      ice_vlan_rx_add_vid(), so we never get a request to add VLAN 0 via the
      8021q module, which causes all untagged traffic to be dropped.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  16. 13 Feb 2020, 1 commit
  17. 04 Jan 2020, 1 commit
    • ice: Add code to keep track of current dflt_vsi · fc0f39bc
      Committed by Brett Creeley
      We can't have more than one default VSI, so prevent another VSI from
      overwriting the current dflt_vsi. This is achieved by adding the
      following functions:
      
      ice_is_dflt_vsi_in_use()
      - Used to check if the default VSI is already being used.
      
      ice_is_vsi_dflt_vsi()
      - Used to check if VSI passed in is in fact the default VSI.
      
      ice_set_dflt_vsi()
      - Used to set the default VSI via a switch rule.
      
      ice_clear_dflt_vsi()
      - Used to clear the default VSI via a switch rule.
      
      Also, there was no need to introduce any locking because all mailbox
      events and synchronization of switch filters for the PF happen in the
      service task.
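
      The bookkeeping these helpers provide reduces to the following sketch.
      The types are simplified stand-ins, and the real helpers also program
      or remove the corresponding switch rule:

```c
#include <stddef.h>

/* Simplified stand-ins for the driver's switch and VSI structures. */
struct vsi { int idx; };
struct sw  { struct vsi *dflt_vsi; };

static int is_dflt_vsi_in_use(const struct sw *sw)
{
	return sw->dflt_vsi != NULL;
}

static int is_vsi_dflt_vsi(const struct sw *sw, const struct vsi *vsi)
{
	return sw->dflt_vsi == vsi;
}

/* Refuse to overwrite an existing default VSI; the real function would
 * also add the default-VSI switch rule on success. */
static int set_dflt_vsi(struct sw *sw, struct vsi *vsi)
{
	if (is_vsi_dflt_vsi(sw, vsi))
		return 0;		/* already the default */
	if (is_dflt_vsi_in_use(sw))
		return -16;		/* -EBUSY: only one default VSI */
	sw->dflt_vsi = vsi;
	return 0;
}

static int clear_dflt_vsi(struct sw *sw)
{
	sw->dflt_vsi = NULL;		/* real code also removes the rule */
	return 0;
}
```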
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  18. 23 Nov 2019, 1 commit
  19. 09 Nov 2019, 2 commits
  20. 05 Nov 2019, 3 commits
  21. 13 Sep 2019, 2 commits
  22. 27 Aug 2019, 2 commits
  23. 24 Aug 2019, 1 commit
    • ice: Fix issues updating VSI MAC filters · bbb968e8
      Committed by Akeem G Abodunrin
      A VSI, especially a VF VSI, could request to add or remove a filter
      for another VSI; the driver should guard against such requests and
      disallow them. However, instead of returning an error for such a
      malicious request, the driver can simply return success.
      
      In addition, we are not tracking the number of MAC filters configured
      per VF correctly, and this leads to issues updating VF MAC filters
      whenever they are removed and re-configured by bringing the VF
      interface down and up. Also, since a VF could send a request to update
      multiple MAC filters at once, the driver should program those filters
      individually in the switch in order to determine which action resulted
      in an error and communicate that accordingly to the VF.
      
      So, with these changes, we now track the number of filters added right
      from when VF resource allocation is done, and can properly add filters
      for both trusted and non-trusted VFs, without MAC filter mismatch
      issues in the switch.

      Also refactor the code so that the driver can use the new function to
      add or remove MAC filters.
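
      The per-filter programming idea reads roughly like this sketch. The
      callback type, helper name, and the example callback are all
      hypothetical; the real driver invokes its switch-filter routine per
      address:

```c
#include <stddef.h>

/* Hypothetical per-filter add callback, standing in for the switch
 * programming routine; returns 0 on success, negative on error. */
typedef int (*add_mac_fn)(const unsigned char mac[6], void *ctx);

/* Program each requested MAC filter individually so a failure can be
 * attributed to a specific address and reported back to the VF,
 * instead of submitting the whole list as one opaque batch. */
static int add_mac_filters(const unsigned char (*macs)[6], size_t n,
			   add_mac_fn add, void *ctx, size_t *failed_idx)
{
	for (size_t i = 0; i < n; i++) {
		int err = add(macs[i], ctx);

		if (err) {
			*failed_idx = i;	/* which address failed */
			return err;
		}
	}
	return 0;
}

/* Example callback, purely for illustration: reject addresses with the
 * locally-administered bit set. */
static int example_add(const unsigned char mac[6], void *ctx)
{
	(void)ctx;
	return (mac[0] & 0x02) ? -1 : 0;
}
```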
      Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  24. 31 May 2019, 1 commit
  25. 30 May 2019, 2 commits
  26. 29 May 2019, 1 commit
  27. 02 May 2019, 1 commit
  28. 18 Apr 2019, 1 commit
    • ice: Add code for DCB initialization part 3/4 · 7b9ffc76
      Committed by Anirudh Venkataramanan
      This patch adds a new function ice_pf_dcb_cfg (and related helpers)
      which applies the DCB configuration obtained from the firmware. As
      part of this, VSIs/netdevs are updated with traffic class information.
      
      This patch requires a bit of a refactor of existing code.
      
      1. For a MIB change event, the associated VSI is closed and brought up
         again. The gap between closing and opening the VSI can cause a race
         condition. Fix this by grabbing the rtnl_lock prior to closing the
         VSI, and releasing it only after the VSI has been re-opened during a
         MIB change event.
      
      2. ice_sched_query_elem is used in ice_sched.c and with this patch, in
         ice_dcb.c as well. However, ice_dcb.c is not built when CONFIG_DCB is
         unset. This results in namespace warnings (ice_sched.o: Externally
         defined symbols with no external references) when CONFIG_DCB is unset.
         To avoid this, move ice_sched_query_elem from ice_sched.c to
         ice_common.c.
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  29. 27 Mar 2019, 1 commit
  30. 22 Mar 2019, 1 commit
  31. 16 Jan 2019, 1 commit