1. 11 March 2020 (3 commits)
    • ice: Fix removing driver while bare-metal VFs pass traffic · f844d521
      Authored by Brett Creeley
      Currently, if there are bare-metal VFs passing traffic and the ice
      driver is removed, there is a possibility of VFs triggering a Tx timeout
      right before iavf_remove(). This is causing iavf_close() to not be
      called because there is a check in the beginning of iavf_remove() that
      bails out early if (adapter->state < IAVF_DOWN_PENDING). As a result,
      some resources never get cleaned up. Specifically, free_irq() is never
      called for the data interrupts, which causes the following BUG_ON() to
      trigger:
      
      pci_disable_msix()
      	free_msi_irqs()
      		...
      		BUG_ON(irq_has_action(entry->irq + i));
      		...
      
      To prevent the Tx timeout from occurring on the VF during driver
      unload, a few changes are needed in both the ice and iavf drivers.
      
      [1] Don't disable all active VF Tx/Rx queues prior to calling
      pci_disable_sriov.
      
      [2] Call ice_free_vfs() before disabling the service task.
      
      [3] Disable VF resets when the ice driver is being unloaded by setting
      the pf->state flag __ICE_VF_RESETS_DISABLED.
      
      Changing [1] and [2] allow each VF driver's remove flow to successfully
      send VIRTCHNL requests, which includes queue disable. This prevents
      unexpected Tx timeouts because the PF driver is no longer forcefully
      disabling queues.
      
      Due to [1] and [2] there is a possibility that the PF driver will get a
      VFLR or reset request over VIRTCHNL from a VF during PF driver unload.
      Prevent that by doing [3].
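
      The unload ordering in [1]-[3] can be sketched as a small state model.
      This is an illustrative stand-in, not the actual ice driver code; the
      flag and helper names below are assumptions based on the description
      above.

```c
#include <assert.h>
#include <stdbool.h>

/* illustrative stand-ins for pf->state flags */
enum { ICE_VF_RESETS_DISABLED = 1u << 0, ICE_SERVICE_DIS = 1u << 1 };

struct pf { unsigned int state; int num_vfs; };

/* [3] a VFLR/VIRTCHNL reset request is refused once unload has begun */
bool ice_vf_reset_allowed(const struct pf *pf)
{
	return !(pf->state & ICE_VF_RESETS_DISABLED);
}

/* unload flow: block resets first, then free VFs ([1]+[2] let each VF
 * driver disable its own queues via VIRTCHNL), then stop the service
 * task */
void ice_remove_sketch(struct pf *pf)
{
	pf->state |= ICE_VF_RESETS_DISABLED;
	pf->num_vfs = 0;              /* ice_free_vfs() stand-in */
	pf->state |= ICE_SERVICE_DIS; /* service task disabled last */
}

/* exercise the model: a reset attempted after unload must be refused */
bool unload_refuses_reset(void)
{
	struct pf pf = { .state = 0, .num_vfs = 2 };

	if (!ice_vf_reset_allowed(&pf)) /* resets allowed before unload */
		return false;
	ice_remove_sketch(&pf);
	return !ice_vf_reset_allowed(&pf) && pf.num_vfs == 0 &&
	       (pf.state & ICE_SERVICE_DIS);
}
```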
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      f844d521
    • ice: Improve clarity of prints and variables · 46c276ce
      Authored by Brett Creeley
      Currently when the device runs out of MSI-X interrupts a cryptic and
      unhelpful message is printed. This will cause confusion when hitting this
      case. Fix this by clarifying the error message for both the SR-IOV and
      non-SR-IOV use cases.
      
      Also, make a few minor changes to increase clarity of variables.
      1. Rename the per-VF MSI-X and queue-pair variables in the PF structure.
      2. Use ICE_NONQ_VECS_VF when determining pf->num_msix_per_vf instead of
      the magic number "1". This vector is reserved for the OICR.
      
      All of the resource tracking functions were moved to avoid adding
      any forward declaration function prototypes.
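
      Point [2] amounts to adding a named constant rather than a bare "1"; a
      minimal sketch, where the vector split is an assumption for
      illustration:

```c
#include <assert.h>

#define ICE_NONQ_VECS_VF 1 /* one vector reserved for the OICR */

/* total MSI-X per VF = data (queue) vectors + the reserved OICR vector */
int ice_num_msix_per_vf(int num_data_vectors)
{
	return num_data_vectors + ICE_NONQ_VECS_VF;
}
```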
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      46c276ce
    • ice: allow bigger VFs · 0ca469fb
      Authored by Mitch Williams
      Unlike the XL710 series, 800-series hardware can allocate more than 4
      MSI-X vectors per VF. This patch enables that functionality. We
      dynamically allocate vectors and queues depending on how many VFs are
      enabled. Allocating the maximum number of VFs replicates XL710
      behavior, with 4 queues and 4 vectors per VF, while allocating a
      smaller number of VFs gives each VF 16 queues and 16 vectors.
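
      The dynamic sizing can be sketched as dividing a vector pool among the
      enabled VFs. The pool size and tier thresholds here are assumptions
      for illustration, not the hardware's exact numbers:

```c
#include <assert.h>

/* pick per-VF MSI-X: fewer VFs -> bigger VFs; max VFs -> XL710-like */
int vectors_per_vf(int num_vfs, int vector_pool)
{
	int per_vf;

	if (num_vfs <= 0)
		return 0;
	per_vf = vector_pool / num_vfs;

	if (per_vf >= 16)
		return 16; /* small VF count: 16 vectors (and queues) each */
	if (per_vf >= 4)
		return 4;  /* large VF count: 4 vectors, like the XL710 */
	return 0;          /* not enough vectors for this many VFs */
}
```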
      Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      0ca469fb
  2. 20 February 2020 (3 commits)
    • ice: add backslash-n to strings · af23635a
      Authored by Jesse Brandeburg
      Several strings were found without line feeds; fix them by adding a
      line feed, as is typical.  Without this
      lotsofmessagescanbejumbledtogether.
      
      This patch has known checkpatch warnings from long lines
      for the NL_* messages, because checkpatch doesn't know
      how to ignore them.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      af23635a
    • ice: update malicious driver detection event handling · 9d5c5a52
      Authored by Paul Greenwalt
      Update the PF's VF MDD event message to rate limit to once per second
      and report the total Rx|Tx event count. Add support to print pending
      MDD events that occur during the rate limit. The use of net_ratelimit did
      not allow for per VF Rx|Tx granularity.
      
      Additional PF MDD log messages are guarded by netif_msg_[rx|tx]_err().
      
      Since VF RX MDD events disable the queue, add ethtool private flag
      mdd-auto-reset-vf to configure VF reset to re-enable the queue.
      
      Disable anti-spoof detection interrupt to prevent spurious events
      during a function reset.
      
      To avoid race condition do not make PF MDD register reads conditional
      on global MDD result.
      Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      9d5c5a52
    • ice: Wait for VF to be reset/ready before configuration · c54d209c
      Authored by Brett Creeley
      The configuration/command below fails when the VF in the XML file is
      already bound to the host iavf driver.
      
      pci_0000_af_0_0.xml:
      
      <interface type='hostdev' managed='yes'>
        <source>
          <address type='pci' domain='0x0000' bus='0xaf' slot='0x0' function='0x0'/>
        </source>
        <mac address='00:de:ad:00:11:01'/>
      </interface>
      
      > virsh attach-device domain_name pci_0000_af_0_0.xml
      error: Failed to attach device from pci_0000_af_0_0.xml
      error: Cannot set interface MAC/vlanid to 00:de:ad:00:11:01/0 for
      	ifname ens1f1 vf 0: Device or resource busy
      
      This is failing because the VF has not been completely removed/reset
      after being unbound (via the virsh command above) from the host iavf
      driver and ice_set_vf_mac() checks if the VF is disabled before waiting
      for the reset to finish.
      
      Fix this by waiting for the VF remove/reset process to happen before
      checking if the VF is disabled. Also, since many functions for VF
      administration on the PF were more or less calling the same 3 functions
      (ice_wait_on_vf_reset(), ice_is_vf_disabled(), and ice_check_vf_init())
      move these into the helper function ice_check_vf_ready_for_cfg(). Then
      call this function in any flow that attempts to configure/query a VF
      from the PF.
      
      Lastly, increase the maximum wait time in ice_wait_on_vf_reset() to
      800ms, and modify/add the #define(s) that determine the wait time.
      This was done for robustness because in rare/stress cases VF removal can
      take a max of ~800ms and previously the wait was a max of ~300ms.
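
      The longer wait can be sketched as a bounded polling loop. The
      40 x 20ms split shown is one way to reach 800ms and is an assumption
      for the example (the driver's actual #define values may differ); the
      sleep is modeled as a counter rather than a real msleep():

```c
#include <assert.h>

#define ICE_MAX_VF_RESET_TRIES    40
#define ICE_MAX_VF_RESET_SLEEP_MS 20 /* 40 * 20ms = 800ms max wait */

/* model: the VF leaves reset after polls_until_done polls; return the
 * total simulated milliseconds slept before succeeding or giving up */
int ice_wait_on_vf_reset_sketch(int polls_until_done)
{
	int i, ms = 0;

	for (i = 0; i < ICE_MAX_VF_RESET_TRIES; i++) {
		if (i >= polls_until_done)
			return ms;                 /* VF came out of reset */
		ms += ICE_MAX_VF_RESET_SLEEP_MS;   /* msleep() stand-in */
	}
	return ms;                                 /* timed out after ~800ms */
}
```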
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      c54d209c
  3. 16 February 2020 (9 commits)
    • ice: Fix virtchnl_queue_select bitmap validation · 24e2e2a0
      Authored by Brett Creeley
      Currently in ice_vc_ena_qs_msg() we are incorrectly validating the
      virtchnl queue select bitmaps. The virtchnl_queue_select rx_queues and
      tx_queues bitmaps are being compared against ICE_MAX_BASE_QS_PER_VF, but
      the problem is that these bitmaps can have a value greater than
      ICE_MAX_BASE_QS_PER_VF. Fix this by comparing the bitmaps against
      BIT(ICE_MAX_BASE_QS_PER_VF).
      
      Also, add the function ice_vc_validate_vqs_bitmaps() that checks to see
      if both virtchnl_queue_select bitmaps are empty along with checking that
      the bitmaps only have valid bits set. This function can then be used in
      both the queue enable and disable flows.
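
      A minimal sketch of the described check follows; the per-VF queue
      limit value here is chosen for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ICE_MAX_BASE_QS_PER_VF 16 /* illustrative limit */

struct virtchnl_queue_select {
	uint32_t rx_queues; /* bitmap of Rx queues to act on */
	uint32_t tx_queues; /* bitmap of Tx queues to act on */
};

bool ice_vc_validate_vqs_bitmaps(const struct virtchnl_queue_select *vqs)
{
	/* reject two empty bitmaps: nothing to enable/disable */
	if (!vqs->rx_queues && !vqs->tx_queues)
		return false;

	/* compare against BIT(max), not max itself: any value >= BIT(max)
	 * means a bit at or above the per-VF queue limit is set */
	if (vqs->rx_queues >= (uint32_t)1 << ICE_MAX_BASE_QS_PER_VF ||
	    vqs->tx_queues >= (uint32_t)1 << ICE_MAX_BASE_QS_PER_VF)
		return false;

	return true;
}
```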
      
      Arkady Gilinksky's patch on the intel-wired-lan mailing list
      ("i40e/iavf: Fix msg interface between VF and PF") made me
      aware of this issue.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      24e2e2a0
    • ice: Fix and refactor Rx queue disable for VFs · e1fe6926
      Authored by Brett Creeley
      Currently when a VF driver sends the PF a request to disable Rx queues
      we will disable them one at a time, even if the VF driver sent us a
      batch of queues to disable. This is causing issues where the Rx queue
      disable times out with LFC enabled. This can be improved by detecting
      when the VF is trying to disable all of its queues.
      
      Also remove the variable num_qs_ena from the ice_vf structure as it was
      only used to see if there were no Rx and no Tx queues active. Instead
      add a function that checks if both the vf->rxq_ena and vf->txq_ena
      bitmaps are empty.
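
      The replacement for num_qs_ena can be sketched as a simple predicate
      over the two enable bitmaps; the struct below is a reduced stand-in,
      not the real ice_vf layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* reduced stand-in for the VF structure: num_qs_ena is gone, only the
 * two enable bitmaps remain */
struct ice_vf {
	uint32_t rxq_ena; /* bitmap of enabled Rx queues */
	uint32_t txq_ena; /* bitmap of enabled Tx queues */
};

/* true when the VF has no Rx and no Tx queues enabled */
bool ice_vf_has_no_qs_ena(const struct ice_vf *vf)
{
	return !vf->rxq_ena && !vf->txq_ena;
}
```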
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      e1fe6926
    • ice: Handle LAN overflow event for VF queues · 2309ae38
      Authored by Brett Creeley
      Currently we are not handling LAN overflow events. There can be cases
      where LAN overflow events occur on VF queues, especially with Link Flow
      Control (LFC) enabled on the controlling PF. In order to recover from
      the LAN overflow event caused by a VF we need to determine if the queue
      belongs to a VF and reset that VF accordingly.
      
      The struct ice_aqc_event_lan_overflow returns a copy of the GLDCB_RTCTQ
      register, which tells us what the queue index is in the global/device
      space. The global queue index needs to first be converted to a PF space
      queue index and then it can be used to find if a VF owns it.
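
      The index conversion and owner lookup can be sketched as below. The
      base offsets and per-VF queue counts are illustrative parameters, not
      values taken from the GLDCB_RTCTQ register layout:

```c
#include <assert.h>

/* map a global/device queue index to its owning VF, or -1 if the queue
 * belongs to the PF or no VF owns it */
int ice_vf_from_global_q(int global_q, int pf_q_base, int first_vf_q,
			 int qs_per_vf, int num_vfs)
{
	int pf_q = global_q - pf_q_base; /* device space -> PF space */
	int vf_id;

	if (pf_q < first_vf_q)
		return -1;               /* queue belongs to the PF */

	vf_id = (pf_q - first_vf_q) / qs_per_vf;
	return vf_id < num_vfs ? vf_id : -1; /* reset this VF if found */
}
```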
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      2309ae38
    • ice: Add support to enable/disable all Rx queues before waiting · 13a6233b
      Authored by Brett Creeley
      Currently when we enable/disable all Rx queues we do the following
      sequence for each Rx queue and then move to the next queue.
      
      1. Enable/Disable the Rx queue via register write.
      2. Read the configuration register to determine if the Rx queue was
      enabled/disabled successfully.
      
      In some cases enabling/disabling queue 0 fails because of step 2 above.
      Fix this by doing step 1 for all of the Rx queues and then step 2 for
      all of the Rx queues.
      
      Also, there are cases where we enable/disable a single queue (i.e.
      SR-IOV and XDP) so add a new function that does step 1 and 2 above with
      a read flush in between.
      
      This change also required a single Rx queue to be enabled/disabled with
      and without waiting for the change to propagate through hardware. Fix
      this by adding a boolean wait flag to the necessary functions.
      
      Also, add the keywords "one" and "all" to distinguish between
      enabling/disabling a single Rx queue and all Rx queues respectively.
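
      The reordered two-pass flow can be sketched with an array standing in
      for the per-queue control registers; the queue count and bit layout
      are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

#define NQ 4 /* illustrative queue count */

/* step 1 for all queues first, then step 2 for all queues, instead of
 * interleaving write+verify per queue */
bool ice_vsi_ctrl_all_rx_rings(unsigned int *qrx_ctrl, bool ena)
{
	int q;

	/* step 1 for all queues: write the enable bit */
	for (q = 0; q < NQ; q++) {
		if (ena)
			qrx_ctrl[q] |= 1u;
		else
			qrx_ctrl[q] &= ~1u;
	}

	/* step 2 for all queues: confirm the requested state was reached */
	for (q = 0; q < NQ; q++)
		if (((qrx_ctrl[q] & 1u) != 0) != ena)
			return false;

	return true;
}
```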
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      13a6233b
    • ice: Only allow tagged bcast/mcast traffic for VF in port VLAN · 72634bc2
      Authored by Brett Creeley
      Currently the VF can see other's broadcast and multicast traffic because
      it always has a VLAN filter for VLAN 0. Fix this by removing/adding the
      VF's VLAN 0 filter when a port VLAN is added/removed respectively.
      
      This required a few changes.
      
      1. Move where we add VLAN 0 by default for the VF into
      ice_alloc_vsi_res() because this is when we determine if a port VLAN is
      present for load and reset.
      
      2. Moved where we kill the old port VLAN filter in
      ice_set_vf_port_vlan() to the very end of the function because it allows
      us to save the old port VLAN configuration upon any failure case.
      
      3. During adding/removing of a port VLAN via ice_set_vf_port_vlan() we
      also need to remove/add the VLAN 0 filter rule respectively.
      
      4. Improve log messages.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      72634bc2
    • ice: Fix Port VLAN priority bits · 61c9ce86
      Authored by Brett Creeley
      Currently when configuring a port VLAN for a VF we are only shifting
      the QoS bits by 12, but the 802.1Q PCP field starts at bit 13
      (VLAN_PRIO_SHIFT). Fix this by getting rid of the ICE-specific VLAN
      defines and using the kernel VLAN defines instead.
      
      Also, don't assign a value to vlanprio until the VLAN ID and QoS
      parameters have been validated.
      
      Also, there are many places we do (le16_to_cpu(vsi->info.pvid) &
      VLAN_VID_MASK). Instead do (vf->port_vlan_info & VLAN_VID_MASK) because
      we always save what's stored in vsi->info.pvid to vf->port_vlan_info in
      the CPU's endianness.
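
      A minimal sketch of building the combined field with the kernel's
      VLAN defines (reproduced here so the example is self-contained); the
      function itself is illustrative, not the driver's exact API:

```c
#include <assert.h>

/* kernel VLAN defines, reproduced for a self-contained example */
#define VLAN_VID_MASK   0x0fff
#define VLAN_PRIO_SHIFT 13 /* PCP bits start at bit 13, not 12 */

/* validate VLAN ID and QoS first, then build the combined field;
 * returns -1 on invalid input */
int ice_build_vlanprio(int vid, int qos)
{
	if (vid < 0 || vid > VLAN_VID_MASK || qos < 0 || qos > 7)
		return -1;
	return vid | (qos << VLAN_PRIO_SHIFT);
}
```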
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      61c9ce86
    • ice: Add helper to determine if VF link is up · 0b6c6a8b
      Authored by Brett Creeley
      The check for vf->link_up is incorrect because this field is only valid if
      vf->link_forced is true. Fix this by adding the helper ice_is_vf_link_up()
      to determine if the VF's link is up.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      0b6c6a8b
    • ice: Refactor port vlan configuration for the VF · b093841f
      Authored by Brett Creeley
      Currently ice_vsi_manage_pvid() calls
      ice_vsi_[set|kill]_pvid_fill_ctxt() when enabling/disabling a port VLAN
      on a VSI respectively. These two functions have some duplication so just
      move their unique pieces inline in ice_vsi_manage_pvid() and then the
      duplicate code can be reused for both the enabling/disabling paths.
      
      Before this patch the info.pvid field was not being written
      correctly via ice_vsi_kill_pvid_fill_ctxt() so it was being hard coded
      to 0 in ice_set_vf_port_vlan(). Fix this by setting the info.pvid field
      to 0 before calling ice_vsi_update() in ice_vsi_manage_pvid().
      
      We currently use vf->port_vlan_id to keep track of the port VLAN
      ID and QoS, which is a bit misleading. Fix this by renaming it to
      vf->port_vlan_info. Also change the name of the argument for
      ice_vsi_manage_pvid() from vid to pvid_info.
      
      In ice_vsi_manage_pvid() only save the fields that were modified
      in the VSI properties structure on success instead of the entire thing.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      b093841f
    • ice: Add initial support for QinQ · 42f3efef
      Authored by Brett Creeley
      Allow support for S-Tag + C-Tag VLAN traffic by disabling pruning when
      there are no 0x8100 VLAN interfaces currently created on top of the PF.
      When an 0x8100 VLAN interface is configured, enable pruning and only
      support single and double C-Tag VLAN traffic. If all of the 0x8100
      interfaces that were created on top of the PF are removed via
      ethtool -K <iface> rx-vlan-filter off or via ip tools, then disable
      pruning and allow S-Tag + C-Tag traffic again.
      
      Add VLAN 0 filter by default for the PF. This is because a bridge
      sets the default_pvid to 1, sends the request down to
      ice_vlan_rx_add_vid(), and we never get the request to add VLAN 0 via
      the 8021q module which causes all untagged traffic to be dropped.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      42f3efef
  4. 13 February 2020 (3 commits)
  5. 04 January 2020 (4 commits)
    • ice: Enable ip link show on the PF to display VF unicast MAC(s) · ed4c068d
      Authored by Brett Creeley
      Currently when there are SR-IOV VF(s) and the user does "ip link show <pf
      interface>" the VF unicast MAC addresses all show 00:00:00:00:00:00
      if the unicast MAC was set via VIRTCHNL (i.e. not administratively set
      by the host PF).
      
      This is misleading to the host administrator. Fix this by setting the
      VF's dflt_lan_addr.addr when the VF's unicast MAC address is
      configured via VIRTCHNL. There are a couple cases where we don't allow
      the dflt_lan_addr.addr field to be written. First, if the VF's
      pf_set_mac field is true and the VF is not trusted, then we don't allow
      the dflt_lan_addr.addr to be modified. Second, if the
      dflt_lan_addr.addr has already been set (i.e. via VIRTCHNL).
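
      The two guard conditions can be sketched as a predicate over a
      reduced stand-in for the VF structure; the field names follow the
      text, but the struct and helper are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* reduced stand-in for the VF structure */
struct ice_vf {
	bool pf_set_mac;                /* MAC administratively set by PF */
	bool trusted;
	unsigned char dflt_lan_addr[6]; /* all-zero when unset */
};

bool is_zero_ether_addr_sketch(const unsigned char *a)
{
	int i;

	for (i = 0; i < 6; i++)
		if (a[i])
			return false;
	return true;
}

/* may a VIRTCHNL request update dflt_lan_addr.addr? */
bool ice_can_update_dflt_mac(const struct ice_vf *vf)
{
	/* case 1: host set the MAC and the VF is untrusted */
	if (vf->pf_set_mac && !vf->trusted)
		return false;
	/* case 2: already set, e.g. by an earlier VIRTCHNL request */
	if (!is_zero_ether_addr_sketch(vf->dflt_lan_addr))
		return false;
	return true;
}
```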
      
      Also a small refactor was done to separate the flow for add and delete
      MAC addresses in order to simplify the logic for error conditions
      and set/clear the VF's dflt_lan_addr.addr field.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      ed4c068d
    • ice: Fix VF link state when it's IFLA_VF_LINK_STATE_AUTO · 26a91525
      Authored by Brett Creeley
      Currently the flow for ice_set_vf_link_state() is not configuring link
      the same as all other VF link configuration flows. Fix this by only
      setting the necessary VF members in ice_set_vf_link_state() and then
      call ice_vc_notify_link_state() to actually configure link for the
      VF. This made ice_set_pfe_link_forced() unnecessary, so it was
      deleted. Also, this commonizes the link flows for the VF to all call
      ice_vc_notify_link_state().
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      26a91525
    • ice: Add ice_for_each_vf() macro · 005881bc
      Authored by Brett Creeley
      Currently we do "for (i = 0; i < pf->num_alloc_vfs; i++)" all over the
      place. Many other places use macros to contain this repeated for loop,
      so create the macro ice_for_each_vf(pf, i) that does the same thing.
      
      There were a couple places we were using one loop variable and a VF
      iterator, which were changed to using a local variable within the
      ice_for_each_vf() macro.
      
      Also in ice_alloc_vfs() we were setting pf->num_alloc_vfs after doing
      "for (i = 0; i < num_alloc_vfs; i++)". Instead assign pf->num_alloc_vfs
      right after allocating memory for the pf->vf array.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      005881bc
    • ice: Fix VF spoofchk · cd6d6b83
      Authored by Brett Creeley
      There are many things wrong with the function
      ice_set_vf_spoofchk().
      
      1. The VSI being modified is the PF VSI, not the VF VSI.
      2. We are enabling Rx VLAN pruning instead of Tx VLAN anti-spoof.
      3. The spoofchk setting for each VF is not initialized correctly
         or re-initialized correctly on reset.
      
      To fix [1] we need to make sure we are modifying the VF VSI.
      This is done by using the vf->lan_vsi_idx to index into the PF's
      VSI array.
      
      To fix [2] replace setting Rx VLAN pruning in ice_set_vf_spoofchk()
      with setting Tx VLAN anti-spoof.
      
      To fix [3] we need to make sure the initial VSI settings match what
      is done in ice_set_vf_spoofchk() for spoofchk=on. Also make sure
      this also works for VF reset. This was done by modifying ice_vsi_init()
      to account for the current spoofchk state of the VF VSI.
      
      Because of these changes, Tx VLAN anti-spoof needs to be removed
      from ice_cfg_vlan_pruning(). This is okay for the VF because this
      is now controlled from the admin enabling/disabling spoofchk. For the
      PF, Tx VLAN anti-spoof should not be set. This change requires us to
      call ice_set_vf_spoofchk() when configuring promiscuous mode for the
      VF, which in turn requires moving ice_set_vf_spoofchk() to avoid
      adding a forward declaration prototype.
      
      Also, add VLAN 0 by default when allocating a VF since the PF is unaware
      if the guest OS is running the 8021q module. Without this, MDD events will
      trigger on untagged traffic because spoofcheck is enabled by default. Due
      to this change, ignore add/delete messages for VLAN 0 from VIRTCHNL since
      this is added/deleted during VF initialization/teardown respectively and
      should not be modified.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      cd6d6b83
  6. 23 November 2019 (8 commits)
  7. 09 November 2019 (2 commits)
  8. 07 November 2019 (3 commits)
  9. 05 November 2019 (1 commit)
  10. 13 September 2019 (1 commit)
    • ice: Enable DDP package download · 462acf6a
      Authored by Tony Nguyen
      Attempt to request an optional device-specific DDP package file
      (one with the PCIe Device Serial Number in its name so that different DDP
      package files can be used on different devices). If the optional package
      file exists, download it to the device. If not, download the default
      package file.
      
      Log an appropriate message based on whether or not a DDP package
      file exists and the return code from the attempt to download it to the
      device.  If the download fails and there is not already a package file on
      the device, go into "Safe Mode" where some features are not supported.
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      462acf6a
  11. 05 September 2019 (3 commits)