1. 09 Feb 2021: 2 commits
  2. 10 Oct 2020: 1 commit
  3. 01 Aug 2020: 2 commits
  4. 29 Jul 2020: 1 commit
  5. 24 Jul 2020: 1 commit
  6. 31 May 2020: 3 commits
  7. 29 May 2020: 10 commits
  8. 28 May 2020: 1 commit
  9. 23 May 2020: 4 commits
  10. 22 May 2020: 4 commits
  11. 11 Mar 2020: 4 commits
  12. 20 Feb 2020: 3 commits
    • ice: add backslash-n to strings · af23635a
      Authored by Jesse Brandeburg
      Several strings were found without line feeds; fix them by adding
      a line feed, as is typical.  Without this
      lotsofmessagescanbejumbledtogether.
      
      This patch has known checkpatch warnings from long lines
      for the NL_* messages, because checkpatch doesn't know
      how to ignore them.
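      A minimal userspace sketch of the failure mode (illustrative message
      text, not the driver's actual strings): without a trailing \n,
      consecutive writes to a log stream land on one line.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: concatenate two log messages the way a console
 * stream would, to show why the trailing \n matters. */
static void emit_two(char *buf, size_t len, const char *a, const char *b)
{
	snprintf(buf, len, "%s%s", a, b);
}

/* Without newlines, two messages run together on one line. */
static int messages_jumbled(void)
{
	char buf[32];

	emit_two(buf, sizeof(buf), "VF reset", "link down");
	return strcmp(buf, "VF resetlink down") == 0;
}

/* With trailing newlines, each message gets its own line. */
static int messages_separated(void)
{
	char buf[32];

	emit_two(buf, sizeof(buf), "VF reset\n", "link down\n");
	return strcmp(buf, "VF reset\nlink down\n") == 0;
}
```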
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: update malicious driver detection event handling · 9d5c5a52
      Authored by Paul Greenwalt
      Update the PF's VF MDD event message to rate limit to once per second
      and report the total Rx|Tx event count. Add support to print pending
      MDD events that occur during the rate-limit window. The use of
      net_ratelimit did not allow for per-VF Rx|Tx granularity.
      
      Additional PF MDD log messages are guarded by netif_msg_[rx|tx]_err().
      
      Since VF RX MDD events disable the queue, add ethtool private flag
      mdd-auto-reset-vf to configure VF reset to re-enable the queue.
      
      Disable anti-spoof detection interrupt to prevent spurious events
      during a function reset.
      
      To avoid a race condition, do not make PF MDD register reads
      conditional on the global MDD result.
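      A sketch of the per-VF rate limiting described above (names, the
      seconds-based time source, and the demo timeline are illustrative,
      not driver code): each VF keeps its own last-print timestamp plus a
      count of events suppressed inside the one-second window.

```c
/* Per-VF MDD bookkeeping: net_ratelimit is global, so per-VF Rx|Tx
 * granularity needs per-VF state like this (mocked here). */
struct vf_mdd {
	long last_printed;	/* seconds from some monotonic clock */
	unsigned int events;	/* total events seen */
	unsigned int pending;	/* events suppressed since last print */
};

/* Returns 1 when the caller should emit a log line now. */
static int mdd_should_print(struct vf_mdd *vf, long now)
{
	vf->events++;
	if (now - vf->last_printed < 1) {	/* inside the 1 s window */
		vf->pending++;
		return 0;
	}
	vf->last_printed = now;
	return 1;
}

/* Tiny driver: three events at t = 0, 0, 1; count the printed lines. */
static unsigned int mdd_demo(void)
{
	struct vf_mdd vf = { -1, 0, 0 };
	long times[3] = { 0, 0, 1 };
	unsigned int printed = 0;
	int i;

	for (i = 0; i < 3; i++)
		if (mdd_should_print(&vf, times[i]))
			printed++;
	return printed;	/* first and third events print; second is pending */
}
```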
      Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: Wait for VF to be reset/ready before configuration · c54d209c
      Authored by Brett Creeley
      The configuration/command below is failing when the VF in the xml
      file is already bound to the host iavf driver.
      
      pci_0000_af_0_0.xml:
      
      <interface type='hostdev' managed='yes'>
      <source>
      <address type='pci' domain='0x0000' bus='0xaf' slot='0x0' function='0x0'/>
      </source>
      <mac address='00:de:ad:00:11:01'/>
      </interface>
      
      > virsh attach-device domain_name pci_0000_af_0_0.xml
      error: Failed to attach device from pci_0000_af_0_0.xml
      error: Cannot set interface MAC/vlanid to 00:de:ad:00:11:01/0 for
      	ifname ens1f1 vf 0: Device or resource busy
      
      This is failing because the VF has not been completely removed/reset
      after being unbound (via the virsh command above) from the host iavf
      driver and ice_set_vf_mac() checks if the VF is disabled before waiting
      for the reset to finish.
      
      Fix this by waiting for the VF remove/reset process to happen before
      checking if the VF is disabled. Also, since many functions for VF
      administration on the PF were more or less calling the same 3 functions
      (ice_wait_on_vf_reset(), ice_is_vf_disabled(), and ice_check_vf_init())
      move these into the helper function ice_check_vf_ready_for_cfg(). Then
      call this function in any flow that attempts to configure/query a VF
      from the PF.
      
      Lastly, increase the maximum wait time in ice_wait_on_vf_reset() to
      800ms, and modify/add the #define(s) that determine the wait time.
      This was done for robustness because in rare/stress cases VF removal can
      take a max of ~800ms and previously the wait was a max of ~300ms.
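      A sketch of the consolidated helper described above: every PF-side VF
      configure/query flow funnels through one readiness check. The three
      conditions and their order follow the commit message; the VF state is
      passed as plain flags here rather than read from a real VF structure,
      and the errno macros are redefined locally for illustration.

```c
/* Local stand-ins for the kernel errno values. */
#define EBUSY_ERR  16
#define EINVAL_ERR 22
#define EAGAIN_ERR 11

/* Sketch of ice_check_vf_ready_for_cfg(): wait for reset, then check
 * disabled, then check init, in that order. */
static int check_vf_ready_for_cfg(int reset_done, int disabled, int initialized)
{
	if (!reset_done)	/* ice_wait_on_vf_reset() timed out (~800 ms max) */
		return -EBUSY_ERR;
	if (disabled)		/* ice_is_vf_disabled() */
		return -EINVAL_ERR;
	if (!initialized)	/* ice_check_vf_init() */
		return -EAGAIN_ERR;
	return 0;
}
```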
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  13. 16 2月, 2020 4 次提交
    • ice: Fix virtchnl_queue_select bitmap validation · 24e2e2a0
      Authored by Brett Creeley
      Currently in ice_vc_ena_qs_msg() we are incorrectly validating the
      virtchnl queue select bitmaps. The virtchnl_queue_select rx_queues and
      tx_queues bitmaps are being compared against ICE_MAX_BASE_QS_PER_VF,
      but the problem is that these bitmaps can have a value greater than
      ICE_MAX_BASE_QS_PER_VF. Fix this by comparing the bitmaps against
      BIT(ICE_MAX_BASE_QS_PER_VF).
      
      Also, add the function ice_vc_validate_vqs_bitmaps() that checks to see
      if both virtchnl_queue_select bitmaps are empty along with checking that
      the bitmaps only have valid bits set. This function can then be used in
      both the queue enable and disable flows.
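      An illustrative version of the fixed check (MAX_QS stands in for
      ICE_MAX_BASE_QS_PER_VF; the function name mirrors but is not copied
      from the driver): the bitmap is a bit mask, so e.g. the value 17
      (queues 0 and 4 set) exceeds the queue count numerically yet is
      perfectly valid, which is exactly what the old comparison got wrong.

```c
#include <stdint.h>

#define MAX_QS 16u	/* stand-in for ICE_MAX_BASE_QS_PER_VF */

/* Valid only when non-empty and no bit at or above MAX_QS is set. */
static int validate_vqs_bitmap(uint32_t qmap)
{
	/* The buggy check compared qmap against MAX_QS itself; the fix
	 * compares against BIT(MAX_QS), i.e. qmap < (1u << MAX_QS). */
	return qmap != 0 && qmap < (1u << MAX_QS);
}
```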
      
      Arkady Gilinksky's patch on the intel-wired-lan mailing list
      ("i40e/iavf: Fix msg interface between VF and PF") made me
      aware of this issue.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: Fix and refactor Rx queue disable for VFs · e1fe6926
      Authored by Brett Creeley
      Currently when a VF driver sends the PF a request to disable Rx queues
      we will disable them one at a time, even if the VF driver sent us a
      batch of queues to disable. This is causing issues where the Rx queue
      disable times out with LFC enabled. This can be improved by detecting
      when the VF is trying to disable all of its queues.
      
      Also remove the variable num_qs_ena from the ice_vf structure as it was
      only used to see if there were no Rx and no Tx queues active. Instead
      add a function that checks if both the vf->rxq_ena and vf->txq_ena
      bitmaps are empty.
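      The replacement for the removed num_qs_ena counter can be sketched as
      a check that both enable bitmaps are empty (the 16-bit width and the
      function name are assumptions for illustration):

```c
#include <stdint.h>

/* A VF has no active queues exactly when both the Rx and Tx enable
 * bitmaps (vf->rxq_ena / vf->txq_ena in the driver) are empty. */
static int vf_has_no_qs_ena(uint16_t rxq_ena, uint16_t txq_ena)
{
	return rxq_ena == 0 && txq_ena == 0;
}
```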
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: Handle LAN overflow event for VF queues · 2309ae38
      Authored by Brett Creeley
      Currently we are not handling LAN overflow events. There can be cases
      where LAN overflow events occur on VF queues, especially with Link Flow
      Control (LFC) enabled on the controlling PF. In order to recover from
      the LAN overflow event caused by a VF we need to determine if the queue
      belongs to a VF and reset that VF accordingly.
      
      The struct ice_aqc_event_lan_overflow returns a copy of the GLDCB_RTCTQ
      register, which tells us what the queue index is in the global/device
      space. The global queue index needs to first be converted to a PF space
      queue index and then it can be used to find if a VF owns it.
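      The lookup described above can be sketched as two translations: global
      queue index to PF-space index, then PF-space index to an owning VF.
      All constants, names, and the contiguous-range layout below are
      assumptions for illustration, not the ice driver's actual mapping.

```c
#define PF_FIRST_GLOBAL_Q 64u	/* first global queue owned by this PF */
#define PF_NUM_Q          32u	/* queues owned by this PF */
#define VF_FIRST_PF_Q     16u	/* VF queues start here in PF space */
#define QS_PER_VF          4u	/* queues assigned to each VF */

/* Map a device-global queue index (as reported via GLDCB_RTCTQ) to the
 * id of the VF that owns it, or -1 if no VF of this PF owns the queue. */
static int global_q_to_vf_id(unsigned int global_q)
{
	unsigned int pf_q;

	if (global_q < PF_FIRST_GLOBAL_Q ||
	    global_q >= PF_FIRST_GLOBAL_Q + PF_NUM_Q)
		return -1;			/* not this PF's queue */
	pf_q = global_q - PF_FIRST_GLOBAL_Q;	/* global -> PF space */
	if (pf_q < VF_FIRST_PF_Q)
		return -1;			/* PF-owned, no VF to reset */
	return (int)((pf_q - VF_FIRST_PF_Q) / QS_PER_VF);
}
```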
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: Add support to enable/disable all Rx queues before waiting · 13a6233b
      Authored by Brett Creeley
      Currently when we enable/disable all Rx queues we do the following
      sequence for each Rx queue and then move to the next queue.
      
      1. Enable/Disable the Rx queue via register write.
      2. Read the configuration register to determine if the Rx queue was
      enabled/disabled successfully.
      
      In some cases enabling/disabling queue 0 fails because of step 2 above.
      Fix this by doing step 1 for all of the Rx queues and then step 2 for
      all of the Rx queues.
      
      Also, there are cases where we enable/disable a single queue (i.e.
      SR-IOV and XDP) so add a new function that does step 1 and 2 above with
      a read flush in between.
      
      This change also required a single Rx queue to be enabled/disabled with
      and without waiting for the change to propagate through hardware. Fix
      this by adding a boolean wait flag to the necessary functions.
      
      Also, add the keywords "one" and "all" to distinguish between
      enabling/disabling a single Rx queue and all Rx queues respectively.
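      The reworked sequence can be sketched as two passes over mocked
      per-queue registers (the register array and helper names are made up;
      real hardware needs the polling because the write takes time to
      propagate): step 1 for all queues first, then step 2 for all queues,
      instead of write-then-poll for each queue in turn.

```c
#define NUM_Q 4

static int qrx_ctrl[NUM_Q];	/* stand-in for per-queue control registers */

static void write_q_ena(int q, int ena) { qrx_ctrl[q] = ena; }
static int poll_q_ena(int q)            { return qrx_ctrl[q]; }

/* Enable all Rx queues: request every enable up front, then confirm
 * each one, rather than pairing the write and the poll per queue. */
static int ena_all_rx_queues(void)
{
	int q;

	for (q = 0; q < NUM_Q; q++)	/* step 1: request enable */
		write_q_ena(q, 1);
	for (q = 0; q < NUM_Q; q++)	/* step 2: confirm it took effect */
		if (!poll_q_ena(q))
			return 0;
	return 1;
}
```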
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>