1. 10 October 2020, 2 commits
    • ice: refactor devlink_port to be per-VSI · 48d40025
      Jacob Keller authored
      Currently, the devlink_port structure is stored within the ice_pf. This
      made sense because we create a single devlink_port for each PF. This
      setup does not mesh with the abstractions in the driver very well, and
      led to a flow where we accidentally call devlink_port_unregister twice
      during error cleanup.
      
      In particular, if devlink_port_register or devlink_port_unregister are
      called twice, this leads to a kernel panic. This appears to occur during
      some possible flows while cleaning up from a failure during driver
      probe.
      
      If register_netdev fails, then we will call devlink_port_unregister in
      ice_cfg_netdev as it cleans up. Later, we again call
      devlink_port_unregister since we assume that we must cleanup the port
      that is associated with the PF structure.
      
      This occurs because we clean up the devlink_port for the main PF even
      though it was not allocated there. We allocated the port within a per-VSI
      function for managing the main netdev, but did not release the port when
      cleaning up that VSI; the allocation and destruction were not aligned.
      
      Instead of attempting to manage the devlink_port as part of the PF
      structure, manage it as part of the PF VSI. Doing this has advantages,
      as we can match the de-allocation of the devlink_port with the
      unregister_netdev associated with the main PF VSI.
      
      Moving the port to the VSI is preferable as it paves the way for
      handling devlink ports allocated for other purposes such as SR-IOV VFs.
      
      Since we're changing up how we allocate the devlink_port, also change
      the indexing. Originally, we indexed the port using the PF id number.
      This came from an old goal of sharing a devlink for each physical
      function. Managing devlink instances across multiple function drivers is
      not workable. Instead, let's set the port number to the logical port
      number returned by firmware and set the index using the VSI index
      (sometimes referred to as VSI handle).
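      
      Below is a minimal sketch of the pairing this enables, with the
      devlink_port embedded in the VSI and guarded by a "registered" flag so
      unregister can never run twice; the struct members and helper names are
      illustrative, not the driver's exact API.
      
          #include <net/devlink.h>
      
          /* illustrative: keep the port inside the VSI so its lifetime matches
           * the PF VSI's netdev
           */
          struct ice_vsi_sketch {
              struct devlink_port devlink_port;
              bool devlink_port_registered;
              u16 idx;    /* VSI index, a.k.a. VSI handle */
          };
      
          static int ice_vsi_devlink_port_register(struct devlink *devlink,
                                                   struct ice_vsi_sketch *vsi)
          {
              int err;
      
              /* index the port by the VSI handle rather than the PF id */
              err = devlink_port_register(devlink, &vsi->devlink_port, vsi->idx);
              if (err)
                  return err;
      
              vsi->devlink_port_registered = true;
              return 0;
          }
      
          static void ice_vsi_devlink_port_unregister(struct ice_vsi_sketch *vsi)
          {
              /* safe on any unwind path: unregisters at most once */
              if (!vsi->devlink_port_registered)
                  return;
              devlink_port_unregister(&vsi->devlink_port);
              vsi->devlink_port_registered = false;
          }
      
      Calling the unregister helper from the same place that tears down the PF
      VSI's netdev keeps allocation and destruction aligned.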
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      48d40025
    • ice: remove repeated words · ac382a09
      Bruce Allan authored
      A new test in checkpatch detects repeated words; clean up all pre-existing
      occurrences of those now.
      Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      ac382a09
  2. 30 September 2020, 1 commit
  3. 29 September 2020, 1 commit
  4. 25 September 2020, 2 commits
  5. 01 September 2020, 1 commit
  6. 01 August 2020, 3 commits
  7. 29 July 2020, 7 commits
    • ice: cleanup VSI on probe fail · 78116e97
      Marcin Szycik authored
      As part of ice_setup_pf_sw() a PF VSI is set up; release the VSI in case of
      failure.
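      
      A hedged sketch of that unwind; the label name and surrounding steps are
      illustrative of the ice_setup_pf_sw() flow rather than its exact code.
      
          static int ice_setup_pf_sw_sketch(struct ice_pf *pf)
          {
              struct ice_vsi *vsi;
              int err;
      
              vsi = ice_pf_vsi_setup(pf, pf->hw.port_info);
              if (!vsi)
                  return -ENOMEM;
      
              err = ice_cfg_netdev(vsi);
              if (err)
                  goto unroll_vsi_setup;
      
              /* ... remaining PF software setup ... */
              return 0;
      
          unroll_vsi_setup:
              /* release the PF VSI created above instead of leaking it */
              ice_vsi_release(vsi);
              return err;
          }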
      Signed-off-by: Marcin Szycik <marcin.szycik@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      78116e97
    • ice: Allow all VLANs in safe mode · cd1f56f4
      Brett Creeley authored
      Currently the PF VSI's context parameters are left in a bad state when
      going into safe mode. This is causing VLAN traffic to not pass. Fix this
      by configuring the PF VSI to allow all VLAN tagged traffic.
      
      Also, remove redundant comment explaining the safe mode flow in
      ice_probe().
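      
      A rough sketch of the idea, assuming the fix is a VSI-context update sent
      through the AdminQ; the flag macro, the update helper, and the field
      names below are assumptions rather than the exact ice definitions.
      
          static int ice_vsi_allow_all_vlans_sketch(struct ice_vsi *vsi)
          {
              struct ice_hw *hw = &vsi->back->hw;
              struct ice_vsi_ctx *ctxt;
              int err;
      
              ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
              if (!ctxt)
                  return -ENOMEM;
      
              ctxt->info = vsi->info;
              /* assumed macro: let all VLAN tagged traffic through the PF VSI */
              ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
              ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
      
              /* assumed helper: push the updated context to firmware */
              err = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
              if (!err)
                  vsi->info.vlan_flags = ctxt->info.vlan_flags;
      
              kfree(ctxt);
              return err;
          }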
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      cd1f56f4
    • ice: restore VF MSI-X state during PCI reset · a54a0b24
      Nick Nunley authored
      During a PCI FLR the MSI-X Enable flag in the VF PCI MSI-X capability
      register will be cleared. This can lead to issues when a VF is
      assigned to a VM, because in that case the VF driver receives no
      indication of the PF PCI error/reset. Additionally, it is incapable of
      restoring the cleared flag in the hypervisor configuration space
      without fully reinitializing the driver interrupt functionality.
      
      Since the VF driver is unable to easily resolve this condition on its own,
      restore the VF MSI-X flag during the PF PCI reset handling.
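      
      A minimal sketch of the restore step using standard PCI config accessors;
      how the PF locates each VF's pci_dev and where this is hooked into the
      PCI reset handling are left out.
      
          #include <linux/pci.h>
      
          static void ice_restore_vf_msix_sketch(struct pci_dev *vfdev)
          {
              u16 flags;
      
              if (!vfdev->msix_cap)
                  return;
      
              pci_read_config_word(vfdev, vfdev->msix_cap + PCI_MSIX_FLAGS,
                                   &flags);
              if (!(flags & PCI_MSIX_FLAGS_ENABLE)) {
                  /* FLR cleared MSI-X Enable; the VF driver in the guest cannot
                   * see the PF reset, so re-assert the bit on its behalf
                   */
                  flags |= PCI_MSIX_FLAGS_ENABLE;
                  pci_write_config_word(vfdev,
                                        vfdev->msix_cap + PCI_MSIX_FLAGS, flags);
              }
          }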
      Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      a54a0b24
    • ice: fix link event handling timing · 0ce6c34a
      Dave Ertman authored
      When the driver experiences a link event (especially link up),
      multiple events can be generated. Some of these are link-fault
      events that still have a state of DOWN set.  The problem happens
      when the link comes UP while the PF driver is handling one of the
      LINK DOWN events.  The link status is updated and is now seen as
      UP, so when the actual LINK UP event arrives, the port information
      has already been updated to UP, even though none of the UP
      activities have been completed.
      
      After the link information has been updated in the link
      handler and evaluated for MEDIA PRESENT, if the state
      of the link has been changed to UP, treat the DOWN event
      as an UP event since the link is now UP.
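      
      A sketch of that check, assuming the handler refreshes the port info via
      the driver's link-update helper; names are approximate.
      
          static void ice_link_event_check_sketch(struct ice_port_info *pi,
                                                  bool *link_up)
          {
              /* refresh the port's link info from firmware first */
              if (ice_update_link_info(pi))
                  return;
      
              /* the event may have been queued as DOWN before the link came
               * back up; if the refreshed state says UP, handle it as an UP
               * event so the normal link-up work is not skipped
               */
              if (pi->phy.link_info.link_info & ICE_AQ_LINK_UP)
                  *link_up = true;
          }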
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      0ce6c34a
    • ice: Fix link broken after GLOBR reset · b767ca65
      Dave Ertman authored
      After a GLOBR (global reset), link handling was broken: a
      link-up condition was being seen as link down.
      
      The problem was that the rebuild process was updating
      the port_info link status without doing any of the
      other things that need to be done when link changes.
      
      This was causing the port_info struct to have current
      "UP" information so that any further UP interrupts
      were skipped as redundant.
      
      The rebuild flow should *not* be updating the port_info
      struct link information, so eliminate this and leave
      it to the link event handling code.
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      b767ca65
    • ice: Implement LFC workaround · 7d9c9b79
      Dave Ertman authored
      There is a bug where the LFC settings are not being preserved
      through a link event.  The registers in question are the ones
      that are touched (and restored) when a set_local_mib AQ command
      is performed.
      
      On a link-up event, make sure that a set_local_mib is being
      performed.
      
      Move the function ice_aq_set_lldp_mib() from the DCB specific
      ice_dcb.c to ice_common.c so that the driver always has access
      to this AQ command.
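      
      A rough sketch of the link-up hook; ice_fill_local_mib_sketch() is a
      hypothetical stand-in for assembling the local MIB TLVs, and the
      mib_type value and buffer size are assumptions.  The ice_aq_set_lldp_mib()
      call is the AQ command this change moves into ice_common.c.
      
          static void ice_replay_local_mib_sketch(struct ice_hw *hw)
          {
              u8 mib_buf[512];    /* assumed large enough for the LLDPDU */
              u16 len;
      
              /* hypothetical helper: serialize the current local DCBX/ETS
               * configuration into LLDP MIB TLVs
               */
              len = ice_fill_local_mib_sketch(hw, mib_buf, sizeof(mib_buf));
              if (!len)
                  return;
      
              /* mib_type 0 assumed to select the locally administered MIB */
              ice_aq_set_lldp_mib(hw, 0, mib_buf, len, NULL);
          }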
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      7d9c9b79
    • ice: implement device flash update via devlink · d69ea414
      Jacob Keller authored
      Use the newly added pldmfw library to implement device flash update for
      the Intel ice networking device driver. This support uses the devlink
      flash update interface.
      
      The main parts of the flash include the Option ROM, the netlist module,
      and the main NVM data. The PLDM firmware file contains modules for each
      of these components.
      
      Using the pldmfw library, the provided firmware file will be scanned for
      the three major components, "fw.undi" for the Option ROM, "fw.mgmt" for
      the main NVM module containing the primary device firmware, and
      "fw.netlist" containing the netlist module.
      
      The flash is separated into two banks, the active bank containing the
      running firmware, and the inactive bank which we use for update. Each
      module is updated in a staged process. First, the inactive bank is
      erased, preparing the device for update. Second, the contents of the
      component are copied to the inactive portion of the flash. After all
      components are updated, the driver signals the device to switch the
      active bank during the next EMP reset (which would usually occur during
      the next reboot).
      
      Although the firmware AdminQ interface does report an immediate status
      for each command, the NVM erase and NVM write commands receive status
      asynchronously. The driver must not continue writing until previous
      erase and write commands have finished. The real status of the NVM
      commands is returned over the receive AdminQ. Implement a simple
      interface that uses a wait queue so that the main update thread can
      sleep until the completion status is reported by firmware. For erasing
      the inactive banks, this can take quite a while in practice.
      
      To help visualize the process to the devlink application and other
      applications based on the devlink netlink interface, status is reported
      via devlink_flash_update_status_notify(). While we do report status
      after each 4k block when writing, there is no real status we can report
      during erasing. We simply must wait for the complete module erasure to
      finish.
      
      With this implementation, basic flash update for the ice hardware is
      supported.
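      
      A minimal sketch of the completion-wait pattern described above, with an
      illustrative context struct; the real driver's field names and timeouts
      differ.
      
          #include <linux/wait.h>
          #include <linux/jiffies.h>
      
          struct ice_fw_update_wait_sketch {
              wait_queue_head_t wq;   /* init_waitqueue_head() before use */
              bool done;
              int err;
          };
      
          /* AdminQ receive path: the async NVM erase/write completion arrived */
          static void ice_fw_update_done_sketch(struct ice_fw_update_wait_sketch *ctx,
                                                int err)
          {
              ctx->err = err;
              ctx->done = true;
              wake_up(&ctx->wq);
          }
      
          /* flash update thread: sleep until firmware reports completion */
          static int ice_fw_update_wait_for_done(struct ice_fw_update_wait_sketch *ctx,
                                                 unsigned long timeout_ms)
          {
              long ret;
      
              ret = wait_event_interruptible_timeout(ctx->wq, ctx->done,
                                                     msecs_to_jiffies(timeout_ms));
              if (ret < 0)
                  return ret;             /* interrupted */
              if (!ret)
                  return -ETIMEDOUT;      /* firmware never answered */
              return ctx->err;
          }
      
      Between stages, devlink_flash_update_status_notify() can report progress
      (status message, component name, bytes done, total) so userspace tooling
      can display it.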
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d69ea414
  8. 26 July 2020, 1 commit
  9. 24 July 2020, 5 commits
  10. 08 July 2020, 1 commit
  11. 26 June 2020, 1 commit
    • net/intel: remove driver versions from Intel drivers · 34a2a3b8
      Jeff Kirsher authored
      As with other networking drivers, remove the unnecessary driver version
      from the Intel drivers. The ethtool driver information and module version
      will then report the kernel version instead.
      
      For ixgbe, i40e and ice drivers, the driver passes the driver version to
      the firmware to confirm that we are up and running.  So we now pass the
      value of UTS_RELEASE to the firmware.  This AdminQ call is required per
      the HAS document.  The device then sends an indication to the BMC that the
      PF driver is present. This is done using Host NC Driver Status Indication
      in NC-SI Get Link command or via the Host Network Controller Driver Status
      Change AEN.
      
      What the BMC may do with this information is implementation-dependent, but
      this is a standard NC-SI 1.1 command we honor per the HAS.
      
      CC: Bruce Allan <bruce.w.allan@intel.com>
      CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
      CC: Alek Loktionov <aleksandr.loktionov@intel.com>
      CC: Kevin Liedtke <kevin.d.liedtke@intel.com>
      CC: Aaron Rowden <aaron.f.rowden@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      34a2a3b8
  12. 19 June 2020, 1 commit
  13. 31 May 2020, 2 commits
  14. 29 May 2020, 4 commits
    • ice: Refactor VF reset · 12bb018c
      Brett Creeley authored
      Currently VF VSIs are being reset twice during a PFR or greater. This is
      causing reset, specifically resetting all VFs, to take too long. This is
      causing various issues with VF drivers not being able to gracefully
      handle the VF reset timeout. Fix this by refactoring how VF reset is
      handled for the case mentioned previously and for the VFR/VFLR case.
      
      The refactor was done by doing the following:
      
      1. Removing the call to ice_vsi_rebuild_by_type for
         ICE_VSI_VF VSI, which was causing the initial VSI rebuild.
      
      2. Adding functions for pre/post VSI rebuild functions that can be called
         in both the reset all VFs case and reset individual VF case.
      
      3. Adding VSI rebuild functions that are specific for the reset all VFs
         case and adding functions that are specific for the reset individual
         VF case.
      
      4. Calling the pre-rebuild function, then the specific VSI rebuild
         function based on the reset type, and then calling the post-rebuild
         function to handle VF resets.
      
      This patch series makes some assumptions about how VSIs are handled by
      FW during reset:
      
      1. During a PFR or greater all VSIs in FW will be cleared.
      2. During a VFR/VFLR the VSI rebuild responsibility is in the hands of
         the PF software.
      3. There is code in the ice_reset_all_vfs() case to amortize operations
         if possible. This was left intact.
      4. PF software should not replay VSI-based filters other than those that
         were host configured, PF software configured, or the VF's default/LAA
         MAC. Replaying anything else is the VF driver's job after it has been
         reset.
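      
      A sketch of the resulting split; the helper names stand in for the shared
      pre/post rebuild functions and the per-path VSI rebuild functions
      described above.
      
          /* reset-all-VFs path (PFR or greater): FW already cleared the VSIs */
          static void ice_reset_all_vfs_sketch(struct ice_pf *pf)
          {
              struct ice_vf *vf;
              int i;
      
              ice_for_each_vf(pf, i) {
                  vf = &pf->vf[i];
                  ice_vf_pre_vsi_rebuild(vf);
                  ice_vf_rebuild_vsi(vf);
                  ice_vf_post_vsi_rebuild(vf);
              }
          }
      
          /* single-VF path (VFR/VFLR): the PF owns the full VSI rebuild */
          static void ice_reset_vf_sketch(struct ice_vf *vf)
          {
              ice_vf_pre_vsi_rebuild(vf);
              ice_vf_rebuild_vsi_with_release(vf);
              ice_vf_post_vsi_rebuild(vf);
          }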
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      12bb018c
    • ice: fix kernel BUG if register_netdev fails · c2b313b7
      Jacob Keller authored
      If register_netdev() fails, the driver will attempt to clean up the
      q_vectors and inadvertently trigger a kernel BUG due to a NULL pointer
      dereference.
      
      This occurs because cleaning up q_vectors attempts to call
      netif_napi_del on napi_structs which were never initialized.
      
      Resolve this by releasing the netdev in ice_cfg_netdev and setting
      vsi->netdev to NULL. This ensures that after ice_cfg_netdev fails the
      state is rewound to match as if ice_cfg_netdev was never called.
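      
      A sketch of the rewind, with the intermediate netdev setup elided; the
      flow mirrors the description above rather than the exact function.
      
          static int ice_cfg_netdev_sketch(struct ice_vsi *vsi)
          {
              struct net_device *netdev;
              int err;
      
              netdev = alloc_etherdev_mqs(sizeof(struct ice_netdev_priv),
                                          vsi->alloc_txq, vsi->alloc_rxq);
              if (!netdev)
                  return -ENOMEM;
      
              vsi->netdev = netdev;
              /* ... ops, features, MAC address setup ... */
      
              err = register_netdev(netdev);
              if (err)
                  goto err_free_netdev;
      
              return 0;
      
          err_free_netdev:
              free_netdev(netdev);
              /* rewind so later cleanup behaves as if this never ran and does
               * not call netif_napi_del() on napi structs that were never added
               */
              vsi->netdev = NULL;
              return err;
          }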
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      c2b313b7
    • ice: fix potential double free in probe unrolling · bc3a0241
      Jacob Keller authored
      If ice_init_interrupt_scheme fails, ice_probe will jump to clearing the
      interrupt scheme. This can lead to static analysis tools such as the
      compiler sanitizers complaining about double-free problems.
      
      Since ice_init_interrupt_scheme already unrolls internally on failure,
      there is no need to call ice_clear_interrupt_scheme when it fails. Add
      a new unroll label and use that instead.
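      
      A sketch of the label change; the steps and label names are illustrative
      of ice_probe()'s goto unwind chain, not its exact contents.
      
          static int ice_probe_unroll_sketch(struct ice_pf *pf)
          {
              int err;
      
              err = ice_init_pf(pf);
              if (err)
                  return err;
      
              err = ice_init_interrupt_scheme(pf);
              if (err)
                  /* it unwinds internally on failure: skip the clear step */
                  goto err_init_pf_unroll;
      
              err = ice_setup_pf_sw(pf);
              if (err)
                  goto err_clear_interrupt_scheme;
      
              return 0;
      
          err_clear_interrupt_scheme:
              ice_clear_interrupt_scheme(pf);   /* only after a successful init */
          err_init_pf_unroll:
              ice_deinit_pf(pf);
              return err;
          }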
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      bc3a0241
    • ice: Poll for reset completion when DDP load fails · 9918f2d2
      Anirudh Venkataramanan authored
      There are certain cases where the DDP load fails and the FW issues a
      core reset. For these cases, wait for the reset to complete before
      proceeding with the rest of the driver init.
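      
      A hedged sketch of that wait; ice_reset_is_done_sketch() is a hypothetical
      predicate for "firmware reports the core reset finished", and the poll
      interval and timeout are illustrative.
      
          #include <linux/delay.h>
      
          static int ice_wait_for_reset_sketch(struct ice_hw *hw)
          {
              unsigned int retries = 100;
      
              while (retries--) {
                  /* hypothetical: reads the device's reset status registers */
                  if (ice_reset_is_done_sketch(hw))
                      return 0;
                  msleep(100);    /* ~10 seconds total before giving up */
              }
              return -EBUSY;
          }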
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      9918f2d2
  15. 28 May 2020, 6 commits
  16. 23 May 2020, 2 commits