1. 29 May 2020 (10 commits)
  2. 28 May 2020 (1 commit)
  3. 23 May 2020 (4 commits)
  4. 22 May 2020 (4 commits)
  5. 11 Mar 2020 (4 commits)
  6. 20 Feb 2020 (3 commits)
    •
      ice: add backslash-n to strings · af23635a
      Authored by Jesse Brandeburg
      There were several strings found without line feeds, fix
      them by adding a line feed, as is typical.  Without this
      lotsofmessagescanbejumbledtogether.
      
      This patch has known checkpatch warnings from long lines
      for the NL_* messages, because checkpatch doesn't know
      how to ignore them.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      af23635a
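The check the patch is effectively applying can be illustrated in userspace; the helper name below is hypothetical and not part of the driver:

```c
#include <stdbool.h>
#include <string.h>

/* Return true if a log format string is newline-terminated.
 * Without the trailing '\n', consecutive printk-style messages
 * from the driver run together on one console line.
 * Hypothetical helper, for illustration only. */
bool msg_has_newline(const char *fmt)
{
    size_t len = strlen(fmt);

    return len > 0 && fmt[len - 1] == '\n';
}
```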
    •
      ice: update malicious driver detection event handling · 9d5c5a52
      Authored by Paul Greenwalt
      Update the PF's VF MDD event message to rate limit to once per second and
      report the total Rx|Tx event count. Add support to print pending
      MDD events that occur during the rate limit. The use of net_ratelimit did
      not allow for per-VF Rx|Tx granularity.
      
      Additional PF MDD log messages are guarded by netif_msg_[rx|tx]_err().
      
      Since VF RX MDD events disable the queue, add ethtool private flag
      mdd-auto-reset-vf to configure VF reset to re-enable the queue.
      
      Disable anti-spoof detection interrupt to prevent spurious events
      during a function reset.
      
      To avoid race condition do not make PF MDD register reads conditional
      on global MDD result.
      Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      9d5c5a52
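The per-VF rate limiting described above can be sketched as follows; the struct, field, and function names are illustrative stand-ins (the driver's real state lives in its VF structures), not the patch's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-VF rate limiter: allow one MDD log message per
 * second per VF, counting events suppressed in between so the next
 * allowed message can report the running total. */
struct vf_mdd_state {
    uint64_t last_printed;   /* time of last printed message, seconds */
    unsigned int rx_events;  /* total Rx MDD events seen */
    unsigned int pending;    /* events suppressed since last print */
};

/* Record one event at time 'now' (seconds); return true if a message
 * should be printed now, false if it must be held as pending. */
bool vf_mdd_event(struct vf_mdd_state *vf, uint64_t now)
{
    vf->rx_events++;
    if (now - vf->last_printed >= 1) {
        vf->last_printed = now;
        vf->pending = 0;
        return true;
    }
    vf->pending++;
    return false;
}
```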
    •
      ice: Wait for VF to be reset/ready before configuration · c54d209c
      Authored by Brett Creeley
      The configuration/command below is failing when the VF in the xml
      file is already bound to the host iavf driver.
      
      pci_0000_af_0_0.xml:
      
      <interface type='hostdev' managed='yes'>
      <source>
      <address type='pci' domain='0x0000' bus='0xaf' slot='0x0' function='0x0'/>
      </source>
      <mac address='00:de:ad:00:11:01'/>
      </interface>
      
      > virsh attach-device domain_name pci_0000_af_0_0.xml
      error: Failed to attach device from pci_0000_af_0_0.xml
      error: Cannot set interface MAC/vlanid to 00:de:ad:00:11:01/0 for
      	ifname ens1f1 vf 0: Device or resource busy
      
      This is failing because the VF has not been completely removed/reset
      after being unbound (via the virsh command above) from the host iavf
      driver and ice_set_vf_mac() checks if the VF is disabled before waiting
      for the reset to finish.
      
      Fix this by waiting for the VF remove/reset process to happen before
      checking if the VF is disabled. Also, since many functions for VF
      administration on the PF were more or less calling the same 3 functions
      (ice_wait_on_vf_reset(), ice_is_vf_disabled(), and ice_check_vf_init())
      move these into the helper function ice_check_vf_ready_for_cfg(). Then
      call this function in any flow that attempts to configure/query a VF
      from the PF.
      
      Lastly, increase the maximum wait time in ice_wait_on_vf_reset() to
      800ms, and modify/add the #define(s) that determine the wait time.
      This was done for robustness because in rare/stress cases VF removal can
      take a max of ~800ms and previously the wait was a max of ~300ms.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      c54d209c
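The consolidated readiness check described above can be sketched like this; the three predicates are stand-ins for ice_wait_on_vf_reset(), ice_is_vf_disabled(), and ice_check_vf_init(), and the poll/timeout values only mirror the commit's ~800 ms bound (not the actual #defines):

```c
#include <stdbool.h>

#define VF_RESET_POLL_MS     10
#define VF_RESET_MAX_WAIT_MS 800
#define VF_RESET_MAX_POLLS   (VF_RESET_MAX_WAIT_MS / VF_RESET_POLL_MS)

/* Simulation stand-in for the driver's per-VF state. */
struct vf_state {
    bool in_reset;
    bool disabled;
    bool initialized;
    int resets_remaining; /* simulation: polls until reset completes */
};

/* Simulated poll: one tick of waiting for the VF reset to finish. */
static bool vf_reset_done(struct vf_state *vf)
{
    if (vf->resets_remaining > 0) {
        vf->resets_remaining--;
        return false;
    }
    vf->in_reset = false;
    return true;
}

/* Return 0 if the VF may be configured, negative error otherwise.
 * Order matters: wait for the reset *before* the disabled check. */
int check_vf_ready_for_cfg(struct vf_state *vf)
{
    int i;

    for (i = 0; i < VF_RESET_MAX_POLLS; i++) {
        if (vf_reset_done(vf))
            break;
        /* real code sleeps VF_RESET_POLL_MS here */
    }
    if (vf->in_reset)
        return -1; /* would be -EBUSY in the kernel */
    if (vf->disabled)
        return -2;
    if (!vf->initialized)
        return -3;
    return 0;
}
```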
  7. 16 Feb 2020 (9 commits)
    •
      ice: Fix virtchnl_queue_select bitmap validation · 24e2e2a0
      Authored by Brett Creeley
      Currently in ice_vc_ena_qs_msg() we are incorrectly validating the
      virtchnl queue select bitmaps. The virtchnl_queue_select rx_queues and
      tx_queues bitmaps are being compared against ICE_MAX_BASE_QS_PER_VF, but
      the problem is that these bitmaps can have a value greater than
      ICE_MAX_BASE_QS_PER_VF. Fix this by comparing the bitmaps against
      BIT(ICE_MAX_BASE_QS_PER_VF).
      
      Also, add the function ice_vc_validate_vqs_bitmaps() that checks to see
      if both virtchnl_queue_select bitmaps are empty along with checking that
      the bitmaps only have valid bits set. This function can then be used in
      both the queue enable and disable flows.
      
      Arkady Gilinksky's patch on the intel-wired-lan mailing list
      ("i40e/iavf: Fix msg interface between VF and PF") made me
      aware of this issue.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      24e2e2a0
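The validation described above can be sketched in a few lines; the function and constant names are illustrative, not the driver's:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_QS_PER_VF 16 /* stand-in for ICE_MAX_BASE_QS_PER_VF */

/* A queue-select bitmap is valid only if something is selected and no
 * bit at or above MAX_QS_PER_VF is set. Comparing the raw bitmap value
 * against MAX_QS_PER_VF is the bug: the correct upper bound on the
 * *value* is BIT(MAX_QS_PER_VF), i.e. 1 << MAX_QS_PER_VF. */
bool validate_vqs_bitmaps(uint32_t rx_queues, uint32_t tx_queues)
{
    const uint32_t limit = (uint32_t)1 << MAX_QS_PER_VF; /* BIT(n) */

    if (rx_queues == 0 && tx_queues == 0)
        return false; /* both empty: nothing selected */
    if (rx_queues >= limit || tx_queues >= limit)
        return false; /* a bit set beyond the VF's queues */
    return true;
}
```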
    •
      ice: Fix and refactor Rx queue disable for VFs · e1fe6926
      Authored by Brett Creeley
      Currently when a VF driver sends the PF a request to disable Rx queues
      we will disable them one at a time, even if the VF driver sent us a
      batch of queues to disable. This is causing issues where the Rx queue
      disable times out with LFC enabled. This can be improved by detecting
      when the VF is trying to disable all of its queues.
      
      Also remove the variable num_qs_ena from the ice_vf structure as it was
      only used to see if there were no Rx and no Tx queues active. Instead
      add a function that checks if both the vf->rxq_ena and vf->txq_ena
      bitmaps are empty.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      e1fe6926
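The two checks described above (no active queues; request covers all enabled queues) can be sketched as bitmap tests; the struct and names below are illustrative stand-ins for the driver's per-VF state:

```c
#include <stdbool.h>

/* Stand-in for the driver's per-queue enable bitmaps. */
struct vf_queues {
    unsigned long rxq_ena; /* bit i set => Rx queue i enabled */
    unsigned long txq_ena; /* bit i set => Tx queue i enabled */
};

/* Replacement for a num_qs_ena counter: a VF has no active queues
 * exactly when both enable bitmaps are empty. */
bool vf_has_no_active_queues(const struct vf_queues *vf)
{
    return vf->rxq_ena == 0 && vf->txq_ena == 0;
}

/* Detecting "disable all": the request covers every enabled queue,
 * so the queues can be torn down as a batch instead of one by one. */
bool vf_disabling_all(const struct vf_queues *vf, unsigned long req_rx,
                      unsigned long req_tx)
{
    return (vf->rxq_ena & ~req_rx) == 0 && (vf->txq_ena & ~req_tx) == 0;
}
```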
    •
      ice: Handle LAN overflow event for VF queues · 2309ae38
      Authored by Brett Creeley
      Currently we are not handling LAN overflow events. There can be cases
      where LAN overflow events occur on VF queues, especially with Link Flow
      Control (LFC) enabled on the controlling PF. In order to recover from
      the LAN overflow event caused by a VF we need to determine if the queue
      belongs to a VF and reset that VF accordingly.
      
      The struct ice_aqc_event_lan_overflow returns a copy of the GLDCB_RTCTQ
      register, which tells us what the queue index is in the global/device
      space. The global queue index needs to first be converted to a PF space
      queue index and then it can be used to find if a VF owns it.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      2309ae38
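The index conversion described above can be sketched as follows; the base offset and the contiguous per-VF queue layout are simplifying assumptions for illustration (the driver walks its actual queue mappings):

```c
#define PF_FIRST_GLOBAL_Q 64 /* illustrative PF queue base in device space */
#define Q_PER_VF          4  /* illustrative queues per VF */

/* Map a global (device-space) queue index, as reported in the
 * GLDCB_RTCTQ copy, to the owning VF id; return -1 if the queue
 * does not belong to a VF. */
int vf_from_global_q(int global_q, int num_vfs)
{
    int pf_q = global_q - PF_FIRST_GLOBAL_Q; /* global -> PF space */

    if (pf_q < 0 || pf_q >= num_vfs * Q_PER_VF)
        return -1;
    return pf_q / Q_PER_VF;
}
```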
    •
      ice: Add support to enable/disable all Rx queues before waiting · 13a6233b
      Authored by Brett Creeley
      Currently when we enable/disable all Rx queues we do the following
      sequence for each Rx queue and then move to the next queue.
      
      1. Enable/Disable the Rx queue via register write.
      2. Read the configuration register to determine if the Rx queue was
      enabled/disabled successfully.
      
      In some cases enabling/disabling queue 0 fails because of step 2 above.
      Fix this by doing step 1 for all of the Rx queues and then step 2 for
      all of the Rx queues.
      
      Also, there are cases where we enable/disable a single queue (i.e.
      SR-IOV and XDP) so add a new function that does step 1 and 2 above with
      a read flush in between.
      
      This change also required a single Rx queue to be enabled/disabled with
      and without waiting for the change to propagate through hardware. Fix
      this by adding a boolean wait flag to the necessary functions.
      
      Also, add the keywords "one" and "all" to distinguish between
      enabling/disabling a single Rx queue and all Rx queues respectively.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      13a6233b
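The reordered sequence above (step 1 for all queues, then step 2 for all queues) can be sketched as a two-pass loop; the register array is a simulation stand-in, not the hardware's QRX registers:

```c
#include <stdbool.h>

#define NUM_RXQ 4

unsigned int qrx_ctrl[NUM_RXQ]; /* simulated per-queue control register */

/* Step 1: request the enable/disable via a register write. */
void rxq_write_ena(int q, bool ena)
{
    qrx_ctrl[q] = ena ? 1u : 0u;
}

/* Step 2: read back to confirm the change took effect. */
bool rxq_poll_ena(int q, bool ena)
{
    return qrx_ctrl[q] == (ena ? 1u : 0u);
}

/* Enable/disable all Rx queues: do all the writes first, then all
 * the polls, rather than write-then-poll per queue. */
bool ctrl_all_rx_rings(bool ena)
{
    int q;

    for (q = 0; q < NUM_RXQ; q++)
        rxq_write_ena(q, ena);
    for (q = 0; q < NUM_RXQ; q++)
        if (!rxq_poll_ena(q, ena))
            return false;
    return true;
}
```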
    •
      ice: Only allow tagged bcast/mcast traffic for VF in port VLAN · 72634bc2
      Authored by Brett Creeley
      Currently the VF can see other's broadcast and multicast traffic because
      it always has a VLAN filter for VLAN 0. Fix this by removing/adding the
      VF's VLAN 0 filter when a port VLAN is added/removed respectively.
      
      This required a few changes.
      
      1. Move where we add VLAN 0 by default for the VF into
      ice_alloc_vsi_res() because this is when we determine if a port VLAN is
      present for load and reset.
      
      2. Move where we kill the old port VLAN filter in
      ice_set_vf_port_vlan() to the very end of the function because it allows
      us to save the old port VLAN configuration upon any failure case.
      
      3. During adding/removing of a port VLAN via ice_set_vf_port_vlan() we
      also need to remove/add the VLAN 0 filter rule respectively.
      
      4. Improve log messages.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      72634bc2
    •
      ice: Fix Port VLAN priority bits · 61c9ce86
      Authored by Brett Creeley
      Currently when configuring a port VLAN for a VF we are only shifting the
      QoS bits by 12. This is incorrect. Fix this by getting rid of the ICE
      specific VLAN defines and use the kernel VLAN defines instead.
      
      Also, don't assign a value to vlanprio until the VLAN ID and QoS
      parameters have been validated.
      
      Also, there are many places we do (le16_to_cpu(vsi->info.pvid) &
      VLAN_VID_MASK). Instead do (vf->port_vlan_info & VLAN_VID_MASK) because
      we always save what's stored in vsi->info.pvid to vf->port_vlan_info in
      the CPU's endianness.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      61c9ce86
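The fixed field layout can be sketched with the kernel's 802.1Q defines (values reproduced here so the example is self-contained); the helper name is illustrative. The original bug shifted QoS by 12, but the priority field starts at bit 13:

```c
#include <stdint.h>

/* Kernel 802.1Q TCI layout (from linux/if_vlan.h):
 * bits 0-11 VLAN ID, bit 12 CFI/DEI, bits 13-15 priority (QoS). */
#define VLAN_VID_MASK   0x0fff
#define VLAN_PRIO_SHIFT 13
#define VLAN_PRIO_MASK  0xe000

/* Build the combined VLAN-ID + priority value for a port VLAN. */
uint16_t build_vlanprio(uint16_t vlan_id, uint8_t qos)
{
    return (vlan_id & VLAN_VID_MASK) |
           (((uint16_t)qos << VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK);
}
```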
    •
      ice: Add helper to determine if VF link is up · 0b6c6a8b
      Authored by Brett Creeley
      The check for vf->link_up is incorrect because this field is only valid if
      vf->link_forced is true. Fix this by adding the helper ice_is_vf_link_up()
      to determine if the VF's link is up.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      0b6c6a8b
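The helper's logic can be sketched as follows; the field names follow the commit, while the struct itself and the PF-link fallback field are simulation stand-ins:

```c
#include <stdbool.h>

/* Stand-in for the relevant per-VF link state. */
struct vf_link {
    bool link_forced; /* admin has forced a link state for this VF */
    bool link_up;     /* valid only when link_forced is true */
    bool pf_link_up;  /* physical link state of the parent PF */
};

/* vf->link_up is only meaningful when vf->link_forced is set;
 * otherwise the VF's link follows the PF's actual link state. */
bool is_vf_link_up(const struct vf_link *vf)
{
    if (vf->link_forced)
        return vf->link_up;
    return vf->pf_link_up;
}
```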
    •
      ice: Refactor port vlan configuration for the VF · b093841f
      Authored by Brett Creeley
      Currently ice_vsi_manage_pvid() calls
      ice_vsi_[set|kill]_pvid_fill_ctxt() when enabling/disabling a port VLAN
      on a VSI respectively. These two functions have some duplication so just
      move their unique pieces inline in ice_vsi_manage_pvid() and then the
      duplicate code can be reused for both the enabling/disabling paths.
      
      Before this patch the info.pvid field was not being written
      correctly via ice_vsi_kill_pvid_fill_ctxt() so it was being hard coded
      to 0 in ice_set_vf_port_vlan(). Fix this by setting the info.pvid field
      to 0 before calling ice_vsi_update() in ice_vsi_manage_pvid().
      
      We currently use vf->port_vlan_id to keep track of the port VLAN
      ID and QoS, which is a bit misleading. Fix this by renaming it to
      vf->port_vlan_info. Also change the name of the argument for
      ice_vsi_manage_pvid() from vid to pvid_info.
      
      In ice_vsi_manage_pvid() only save the fields that were modified
      in the VSI properties structure on success instead of the entire thing.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      b093841f
    •
      ice: Add initial support for QinQ · 42f3efef
      Authored by Brett Creeley
      Add support for S-Tag + C-Tag VLAN traffic by disabling pruning when
      there are no 0x8100 VLAN interfaces currently created on top of the PF.
      When an 0x8100 VLAN interface is configured, enable pruning and only
      support single and double C-Tag VLAN traffic. If all of the 0x8100
      interfaces that were created on top of the PF are removed via
      ethtool -K <iface> rx-vlan-filter off or via ip tools, then disable
      pruning and allow S-Tag + C-Tag traffic again.
      
      Add VLAN 0 filter by default for the PF. This is because a bridge
      sets the default_pvid to 1, sends the request down to
      ice_vlan_rx_add_vid(), and we never get the request to add VLAN 0 via
      the 8021q module which causes all untagged traffic to be dropped.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      42f3efef
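The pruning policy described above (pruning on exactly while at least one 0x8100 VLAN interface exists on the PF; off otherwise, so S-Tag + C-Tag traffic passes) can be sketched with a simple counter; names are illustrative, not the driver's:

```c
#include <stdbool.h>

/* Stand-in for the PF-level VLAN bookkeeping. */
struct pf_vlan_state {
    int num_8021q_vlans; /* 0x8100 interfaces created on the PF */
    bool pruning_ena;    /* VLAN pruning currently enabled */
};

/* Called when an 0x8100 VLAN interface is added on the PF. */
void pf_vlan_added(struct pf_vlan_state *pf)
{
    pf->num_8021q_vlans++;
    pf->pruning_ena = true;
}

/* Called when an 0x8100 VLAN interface is removed (ethtool or ip);
 * pruning stays on until the last one goes away. */
void pf_vlan_removed(struct pf_vlan_state *pf)
{
    if (pf->num_8021q_vlans > 0)
        pf->num_8021q_vlans--;
    pf->pruning_ena = pf->num_8021q_vlans > 0;
}
```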
  8. 13 Feb 2020 (3 commits)
  9. 04 Jan 2020 (2 commits)