1. 06 Feb 2021, 3 commits
    • ice: remove dead code · 12aae8f1
      Authored by Bruce Allan
      The check for a NULL pf pointer is moot since the earlier declaration and
      assignment of struct device *dev already de-referenced the pointer.  Also,
      the only caller of ice_set_dflt_mib() already ensures pf is not NULL.
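
      A minimal sketch of the dead code in question (simplified; the real
      function is ice_set_dflt_mib(), and ice_pf_to_dev() is the existing
      helper that dereferences pf):

          static void ice_set_dflt_mib(struct ice_pf *pf)
          {
                  struct device *dev = ice_pf_to_dev(pf); /* dereferences pf */

                  /* moot: a NULL pf would already have crashed above, and the
                   * only caller guarantees pf is valid, so drop this check
                   */
                  if (!pf) {
                          dev_dbg(dev, "pf pointer is NULL\n");
                          return;
                  }
                  /* ... rest of the function unchanged ... */
          }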
      
      Cc: Dave Ertman <david.m.ertman@intel.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: Replace one-element array with flexible-array member · e94c0df9
      Authored by Gustavo A. R. Silva
      There is a regular need in the kernel to provide a way to declare having
      a dynamically sized set of trailing elements in a structure. Kernel code
      should always use “flexible array members”[1] for these cases. The older
      style of one-element or zero-length arrays should no longer be used[2].
      
      Refactor the code to use a flexible-array member in
      struct ice_res_tracker instead of a one-element array, and use the
      struct_size() helper to calculate the size for the allocations.
      
      Also, notice that the code below suggests that, currently, two too many
      bytes are being allocated with devm_kzalloc(), as the total number of
      entries (pf->irq_tracker->num_entries) for pf->irq_tracker->list[] is
      _vectors_ and sizeof(*pf->irq_tracker) also includes the size of the
      one-element array _list_ in struct ice_res_tracker.
      
      drivers/net/ethernet/intel/ice/ice_main.c:3511:
      3511         /* populate SW interrupts pool with number of OS granted IRQs. */
      3512         pf->num_avail_sw_msix = (u16)vectors;
      3513         pf->irq_tracker->num_entries = (u16)vectors;
      3514         pf->irq_tracker->end = pf->irq_tracker->num_entries;
      
      With this change, the right amount of dynamic memory is now allocated
      because, contrary to one-element arrays which occupy at least as much
      space as a single object of the type, flexible-array members don't
      occupy such space in the containing structure.
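
      A sketch of the before/after allocation pattern (field names follow
      struct ice_res_tracker; the snippet is illustrative, not the exact
      driver diff):

          /* before: one-element array; sizeof(*tracker) already contains
           * list[1], so this over-allocates by two bytes
           */
          struct ice_res_tracker {
                  u16 num_entries;
                  u16 end;
                  u16 list[1];
          };

          tracker = devm_kzalloc(dev, sizeof(*tracker) +
                                 sizeof(u16) * vectors, GFP_KERNEL);

          /* after: flexible-array member; struct_size() allocates exactly the
           * header plus 'vectors' trailing entries
           */
          struct ice_res_tracker {
                  u16 num_entries;
                  u16 end;
                  u16 list[];
          };

          tracker = devm_kzalloc(dev, struct_size(tracker, list, vectors),
                                 GFP_KERNEL);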
      
      [1] https://en.wikipedia.org/wiki/Flexible_array_member
      [2] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays
      Build-tested-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: display stored netlist versions via devlink info · e120a9ab
      Authored by Jacob Keller
      Add a function to read the inactive netlist bank for version
      information. To support this, refactor how we read the netlist version
      data. Instead of using the firmware AQ interface with a module ID, read
      from the flash as a flat NVM, using ice_read_flash_module.
      
      This change requires a slight adjustment to the offset values used, as
      reading from the flat NVM includes the type field (which was stripped by
      firmware previously). Clean up the macro names and move them to
      ice_type.h. For clarity in how we calculate the offsets and so that
      programmers can easily map the offset value to the data sheet, use
      a wrapper macro to account for the offset adjustments.
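
      The wrapper-macro idea can be sketched as follows (macro names and the
      size of the adjustment are illustrative, not the exact definitions moved
      into ice_type.h):

          /* Data-sheet offsets are relative to the module as firmware exposes
           * it.  Reading the module as a flat NVM also returns the leading
           * type field, so shift every documented offset by that amount.
           */
          #define ICE_NETLIST_TYPE_FIELD_LEN      1
          #define ICE_NETLIST_OFFSET(n)           ((n) + ICE_NETLIST_TYPE_FIELD_LEN)

          #define ICE_NETLIST_VER_LOW             ICE_NETLIST_OFFSET(0x0004)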
      
      Use the newly added ice_get_inactive_netlist_ver function to extract the
      version data from the pending netlist module update. Add the stored
      variants of "fw.netlist", and "fw.netlist.build" to the info version map
      array.
      
      With this change, we now report the "fw.netlist" and "fw.netlist.build"
      versions into the stored section of the devlink info report. As with the
      main NVM module versions, if there is no pending update, we report the
      currently active values as stored.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  2. 27 Jan 2021, 2 commits
    • ice: Fix MSI-X vector fallback logic · f3fe97f6
      Authored by Brett Creeley
      The current MSI-X enablement logic tries to enable best-case MSI-X
      vectors and if that fails we only support a bare-minimum set. This
      includes a single MSI-X for 1 Tx and 1 Rx queue and a single MSI-X
      for the OICR interrupt. Unfortunately, the driver fails to load when we
      don't get as many MSI-X as requested for a couple reasons.
      
      First, the code to allocate MSI-X in the driver tries to allocate
      num_online_cpus() MSI-X for LAN traffic without caring about the number
      of MSI-X actually enabled/requested from the kernel for LAN traffic.
      So, when calling ice_get_res() for the PF VSI, it returns failure
      because the number of available vectors is less than requested. Fix
      this by not allowing the PF VSI to allocate more than
      pf->num_lan_msix MSI-X vectors and pf->num_lan_msix Rx/Tx queues.
      Limiting the number of queues is done because we don't want more than
      1 Tx/Rx queue per interrupt due to performance concerns.
      
      Second, the driver assigns pf->num_lan_msix = 2, to account for LAN
      traffic and the OICR. However, pf->num_lan_msix is only meant for LAN
      MSI-X. This is causing a failure when the PF VSI tries to
      allocate/reserve the minimum pf->num_lan_msix because the OICR MSI-X has
      already been reserved, so there may not be enough MSI-X vectors left.
      Fix this by setting pf->num_lan_msix = 1 for the failure case. Then the
      ICE_MIN_MSIX accounts for the LAN MSI-X and the OICR MSI-X needed for
      the failure case.
      
      Update the related defines used in ice_ena_msix_range() to align with
      the above behavior and remove the unused RDMA defines because RDMA is
      currently not supported. Also, remove the now incorrect comment.
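
      A condensed sketch of the fallback accounting (constant names and
      control flow are simplified; see ice_ena_msix_range() for the real
      logic):

          #define ICE_MIN_LAN_MSIX        1       /* one Tx/Rx queue pair */
          #define ICE_OICR_MSIX           1       /* "other interrupt cause" */
          #define ICE_MIN_MSIX            (ICE_MIN_LAN_MSIX + ICE_OICR_MSIX)

          v_actual = pci_enable_msix_range(pf->pdev, pf->msix_entries,
                                           ICE_MIN_MSIX, v_wanted);
          if (v_actual < v_wanted) {
                  /* fallback: only the LAN vector counts toward num_lan_msix;
                   * the OICR vector is accounted for by ICE_MIN_MSIX itself
                   */
                  pf->num_lan_msix = ICE_MIN_LAN_MSIX;
          }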
      
      Fixes: 152b978a ("ice: Rework ice_ena_msix_range")
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: update dev_addr in ice_set_mac_address even if HW filter exists · 13ed5e8a
      Authored by Nick Nunley
      Fix the driver to copy the MAC address configured in ndo_set_mac_address
      into dev_addr, even if the MAC filter already exists in HW. In some
      situations (e.g. bonding) the netdev's dev_addr could have been modified
      outside of the driver, with no change to the HW filter, so the driver
      cannot assume that they match.
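
      A hedged sketch of the shape of the fix (error handling condensed and
      return conventions simplified relative to ice_set_mac_address()):

          err = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
          if (err == -EEXIST) {
                  /* filter already programmed; nothing to change in HW */
                  netdev_dbg(netdev, "filter for %pM already exists\n", mac);
          } else if (err) {
                  return -EADDRNOTAVAIL;
          }

          /* the HW filter can match while dev_addr does not (e.g. bonding
           * rewrote it), so sync dev_addr with the requested MAC regardless
           */
          ether_addr_copy(netdev->dev_addr, mac);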
      
      Fixes: 757976ab ("ice: Fix check for removing/adding mac filters")
      Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
      Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  3. 08 Jan 2021, 1 commit
  4. 10 Dec 2020, 2 commits
  5. 10 Oct 2020, 3 commits
    • ice: add additional debug logging for firmware update · 1e8249cc
      Authored by Jacob Keller
      While debugging a recent failure to update the flash of an ice device,
      I found it helpful to add additional logging which helped determine the
      root cause of the problem being a timeout issue.
      
      Add some extra dev_dbg() logging messages which can be enabled using the
      dynamic debug facility, including one for ice_aq_wait_for_event that
      will use jiffies to capture a rough estimate of how long we waited for
      the completion of a firmware command.
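
      The jiffies-based measurement can be pictured roughly like this (names
      and message text are illustrative):

          unsigned long start = jiffies;

          /* sleep until the firmware completion arrives on the receive AQ */
          ret = wait_event_interruptible_timeout(pf->aq_wait_queue,
                                                 task.state != ICE_AQ_TASK_WAITING,
                                                 timeout);

          dev_dbg(dev, "Waited %u msecs for AQ event (opcode 0x%04x)\n",
                  jiffies_to_msecs(jiffies - start), opcode);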
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • ice: refactor devlink_port to be per-VSI · 48d40025
      Authored by Jacob Keller
      Currently, the devlink_port structure is stored within the ice_pf. This
      made sense because we create a single devlink_port for each PF. This
      setup does not mesh with the abstractions in the driver very well, and
      led to a flow where we accidentally call devlink_port_unregister twice
      during error cleanup.
      
      In particular, if devlink_port_register or devlink_port_unregister are
      called twice, this leads to a kernel panic. This appears to occur during
      some possible flows while cleaning up from a failure during driver
      probe.
      
      If register_netdev fails, then we will call devlink_port_unregister in
      ice_cfg_netdev as it cleans up. Later, we again call
      devlink_port_unregister since we assume that we must clean up the port
      that is associated with the PF structure.
      
      This occurs because we cleanup the devlink_port for the main PF even
      though it was not allocated. We allocated the port within a per-VSI
      function for managing the main netdev, but did not release the port when
      cleaning up that VSI; the allocation and destruction were not aligned.
      
      Instead of attempting to manage the devlink_port as part of the PF
      structure, manage it as part of the PF VSI. Doing this has advantages,
      as we can match the de-allocation of the devlink_port with the
      unregister_netdev associated with the main PF VSI.
      
      Moving the port to the VSI is preferable as it paves the way for
      handling devlink ports allocated for other purposes such as SR-IOV VFs.
      
      Since we're changing up how we allocate the devlink_port, also change
      the indexing. Originally, we indexed the port using the PF id number.
      This came from an old goal of sharing a devlink for each physical
      function. Managing devlink instances across multiple function drivers is
      not workable. Instead, let's set the port number to the logical port
      number returned by firmware and set the index using the VSI index
      (sometimes referred to as VSI handle).
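
      A condensed sketch of the per-VSI registration (error handling omitted;
      the attribute setup follows the generic devlink API rather than the
      exact driver code):

          struct devlink *devlink = priv_to_devlink(pf);
          struct devlink_port_attrs attrs = {};
          int err;

          attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
          attrs.phys.port_number = pf->hw.port_info->lport; /* logical port */
          devlink_port_attrs_set(&vsi->devlink_port, &attrs);

          /* index by the VSI handle so unregister pairs naturally with the
           * teardown of the PF VSI's netdev
           */
          err = devlink_port_register(devlink, &vsi->devlink_port, vsi->idx);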
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • ice: remove repeated words · ac382a09
      Authored by Bruce Allan
      A new test in checkpatch detects repeated words; clean up all pre-existing
      occurrences of those now.
      Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  6. 30 Sep 2020, 1 commit
  7. 29 Sep 2020, 1 commit
  8. 25 Sep 2020, 2 commits
  9. 01 Sep 2020, 1 commit
  10. 01 Aug 2020, 3 commits
  11. 29 Jul 2020, 7 commits
    • ice: cleanup VSI on probe fail · 78116e97
      Authored by Marcin Szycik
      As part of ice_setup_pf_sw() a PF VSI is setup; release the VSI in case of
      failure.
      Signed-off-by: Marcin Szycik <marcin.szycik@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: Allow all VLANs in safe mode · cd1f56f4
      Authored by Brett Creeley
      Currently the PF VSI's context parameters are left in a bad state when
      going into safe mode. This is causing VLAN traffic to not pass. Fix this
      by configuring the PF VSI to allow all VLAN tagged traffic.
      
      Also, remove redundant comment explaining the safe mode flow in
      ice_probe().
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: restore VF MSI-X state during PCI reset · a54a0b24
      Authored by Nick Nunley
      During a PCI FLR the MSI-X Enable flag in the VF PCI MSI-X capability
      register will be cleared. This can lead to issues when a VF is
      assigned to a VM because in these cases the VF driver receives no
      indication of the PF PCI error/reset and additionally it is incapable
      of restoring the cleared flag in the hypervisor configuration space
      without fully reinitializing the driver interrupt functionality.
      
      Since the VF driver is unable to easily resolve this condition on its own,
      restore the VF MSI-X flag during the PF PCI reset handling.
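
      A hedged sketch using the standard PCI config-space helpers (the
      function name is illustrative):

          static void ice_vf_restore_msix_state(struct pci_dev *vfdev)
          {
                  int pos = pci_find_capability(vfdev, PCI_CAP_ID_MSIX);
                  u16 ctrl;

                  if (!pos)
                          return;

                  /* FLR clears MSI-X Enable; set it again so an assigned VF
                   * keeps its interrupts without a full guest driver reinit
                   */
                  pci_read_config_word(vfdev, pos + PCI_MSIX_FLAGS, &ctrl);
                  ctrl |= PCI_MSIX_FLAGS_ENABLE;
                  pci_write_config_word(vfdev, pos + PCI_MSIX_FLAGS, ctrl);
          }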
      Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: fix link event handling timing · 0ce6c34a
      Authored by Dave Ertman
      When the driver experiences a link event (especially link up)
      there can be multiple events generated. Some of these are
      link fault and still have a state of DOWN set.  The problem
      happens when the link comes UP while the PF driver is handling
      one of the LINK DOWN events.  The status of the link is updated
      and is now seen as UP, so when the actual LINK UP event comes,
      the port information has already been updated to be seen as UP,
      even though none of the UP activities have been completed.
      
      After the link information has been updated in the link
      handler and evaluated for MEDIA PRESENT, if the state
      of the link has been changed to UP, treat the DOWN event
      as an UP event since the link is now UP.
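
      The resulting check amounts to roughly this (condensed; variable names
      are illustrative):

          /* link_up carries the state from the event being handled;
           * pi->phy.link_info.link_up was just refreshed from firmware
           */
          if (!link_up && pi->phy.link_info.link_up) {
                  /* the link came up while a DOWN event was in flight:
                   * treat it as UP so none of the bring-up work is skipped
                   */
                  link_up = true;
          }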
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: Fix link broken after GLOBR reset · b767ca65
      Authored by Dave Ertman
      After a GLOBR, the link was broken so that a link
      up situation was being seen as a link down.
      
      The problem was that the rebuild process was updating
      the port_info link status without doing any of the
      other things that need to be done when link changes.
      
      This was causing the port_info struct to have current
      "UP" information so that any further UP interrupts
      were skipped as redundant.
      
      The rebuild flow should *not* be updating the port_info
      struct link information, so eliminate this and leave
      it to the link event handling code.
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: Implement LFC workaround · 7d9c9b79
      Authored by Dave Ertman
      There is a bug where the LFC settings are not being preserved
      through a link event.  The registers in question are the ones
      that are touched (and restored) when a set_local_mib AQ command
      is performed.
      
      On a link-up event, make sure that a set_local_mib is being
      performed.
      
      Move the function ice_aq_set_lldp_mib() from the DCB specific
      ice_dcb.c to ice_common.c so that the driver always has access
      to this AQ command.
      Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    • ice: implement device flash update via devlink · d69ea414
      Authored by Jacob Keller
      Use the newly added pldmfw library to implement device flash update for
      the Intel ice networking device driver. This support uses the devlink
      flash update interface.
      
      The main parts of the flash include the Option ROM, the netlist module,
      and the main NVM data. The PLDM firmware file contains modules for each
      of these components.
      
      Using the pldmfw library, the provided firmware file will be scanned for
      the three major components, "fw.undi" for the Option ROM, "fw.mgmt" for
      the main NVM module containing the primary device firmware, and
      "fw.netlist" containing the netlist module.
      
      The flash is separated into two banks, the active bank containing the
      running firmware, and the inactive bank which we use for update. Each
      module is updated in a staged process. First, the inactive bank is
      erased, preparing the device for update. Second, the contents of the
      component are copied to the inactive portion of the flash. After all
      components are updated, the driver signals the device to switch the
      active bank during the next EMP reset (which would usually occur during
      the next reboot).
      
      Although the firmware AdminQ interface does report an immediate status
      for each command, the NVM erase and NVM write commands receive status
      asynchronously. The driver must not continue writing until previous
      erase and write commands have finished. The real status of the NVM
      commands is returned over the receive AdminQ. Implement a simple
      interface that uses a wait queue so that the main update thread can
      sleep until the completion status is reported by firmware. For erasing
      the inactive banks, this can take quite a while in practice.
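
      The wait-queue pattern can be sketched roughly like this (structure and
      field names are illustrative):

          struct ice_aq_task {
                  struct hlist_node entry;
                  u16 opcode;
                  enum ice_aq_task_state state;
                  struct ice_rq_event_info *event;
          };

          /* update thread: sleep until the ARQ handler marks the task done */
          ret = wait_event_interruptible_timeout(pf->aq_wait_queue,
                                                 task->state != ICE_AQ_TASK_WAITING,
                                                 timeout);

          /* ARQ handler: on a matching completion, record it and wake waiters */
          task->state = ICE_AQ_TASK_COMPLETE;
          wake_up(&pf->aq_wait_queue);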
      
      To help visualize the process to the devlink application and other
      applications based on the devlink netlink interface, status is reported
      via the devlink_flash_update_status_notify. While we do report status
      after each 4k block when writing, there is no real status we can report
      during erasing. We simply must wait for the complete module erasure to
      finish.
      
      With this implementation, basic flash update for the ice hardware is
      supported.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 26 Jul 2020, 1 commit
  13. 24 Jul 2020, 5 commits
  14. 08 Jul 2020, 1 commit
  15. 26 Jun 2020, 1 commit
    • net/intel: remove driver versions from Intel drivers · 34a2a3b8
      Authored by Jeff Kirsher
      As with other networking drivers, remove the unnecessary driver version
      from the Intel drivers. The ethtool driver information and module version
      will then report the kernel version instead.
      
      For ixgbe, i40e and ice drivers, the driver passes the driver version to
      the firmware to confirm that we are up and running.  So we now pass the
      value of UTS_RELEASE to the firmware.  This adminq call is required per
      the HAS document.  The Device then sends an indication to the BMC that the
      PF driver is present. This is done using Host NC Driver Status Indication
      in NC-SI Get Link command or via the Host Network Controller Driver Status
      Change AEN.
      
      What the BMC may do with this information is implementation-dependent, but
      this is a standard NC-SI 1.1 command we honor per the HAS.
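
      A hedged sketch of handing the kernel release string to firmware through
      the existing driver-version AdminQ call (field layout simplified):

          #include <generated/utsrelease.h>

          struct ice_driver_ver dv = {};

          /* firmware only needs to see that *a* driver version is present;
           * report the kernel release now that the driver has no version
           */
          strscpy(dv.driver_string, UTS_RELEASE, sizeof(dv.driver_string));
          ice_aq_send_driver_ver(&pf->hw, &dv, NULL);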
      
      CC: Bruce Allan <bruce.w.allan@intel.com>
      CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
      CC: Alek Loktionov <aleksandr.loktionov@intel.com>
      CC: Kevin Liedtke <kevin.d.liedtke@intel.com>
      CC: Aaron Rowden <aaron.f.rowden@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
  16. 19 Jun 2020, 1 commit
  17. 31 May 2020, 2 commits
  18. 29 May 2020, 3 commits
    • ice: Refactor VF reset · 12bb018c
      Authored by Brett Creeley
      Currently VF VSIs are being reset twice during a PFR or greater. This is
      causing reset, specifically resetting all VFs, to take too long. This is
      causing various issues with VF drivers not being able to gracefully
      handle the VF reset timeout. Fix this by refactoring how VF reset is
      handled for the case mentioned previously and for the VFR/VFLR case.
      
      The refactor was done by doing the following:
      
      1. Removing the call to ice_vsi_rebuild_by_type for
         ICE_VSI_VF VSI, which was causing the initial VSI rebuild.
      
      2. Adding functions for pre/post VSI rebuild functions that can be called
         in both the reset all VFs case and reset individual VF case.
      
      3. Adding VSI rebuild functions that are specific for the reset all VFs
         case and adding functions that are specific for the reset individual
         VF case.
      
      4. Calling the pre-rebuild function, then the specific VSI rebuild
         function based on the reset type, and then calling the post-rebuild
         function to handle VF resets.
      
      This patch series makes some assumptions about how VSIs are handled by
      FW during reset:
      
      1. During a PFR or greater all VSI in FW will be cleared.
      2. During a VFR/VFLR the VSI rebuild responsibility is in the hands of
         the PF software.
      3. There is code in the ice_reset_all_vfs() case to amortize operations
         if possible. This was left intact.
      4. PF software should not be replaying VSI based filters that were added
         other than host configured, PF software configured, or the VF's
         default/LAA MAC. This is the VF driver's job after it has been reset.
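
      A condensed sketch of the pre/post rebuild flow described above
      (function names reflect the intent and may not match the exact symbols
      added by the series):

          ice_vf_pre_vsi_rebuild(vf);               /* common pre-reset work */

          if (reset_all_vfs)
                  ice_vf_rebuild_vsi_with_release(vf);  /* PFR+: VSIs cleared in FW */
          else
                  ice_vf_rebuild_vsi(vf);               /* VFR/VFLR: PF rebuilds VSI */

          ice_vf_post_vsi_rebuild(vf);              /* common post-reset bring-up */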
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: fix kernel BUG if register_netdev fails · c2b313b7
      Authored by Jacob Keller
      If register_netdev() fails, the driver will attempt to clean up the
      q_vectors and inadvertently trigger a kernel BUG due to a NULL pointer
      dereference.
      
      This occurs because cleaning up q_vectors attempts to call
      netif_napi_del on napi_structs which were never initialized.
      
      Resolve this by releasing the netdev in ice_cfg_netdev and setting
      vsi->netdev to NULL. This ensures that after ice_cfg_netdev fails the
      state is rewound to match as if ice_cfg_netdev was never called.
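
      A minimal sketch of the unwinding (simplified from ice_cfg_netdev()'s
      error path):

          err = register_netdev(vsi->netdev);
          if (err) {
                  /* rewind state so later q_vector cleanup sees a NULL
                   * vsi->netdev and skips netif_napi_del() on napi structs
                   * that were never initialized
                   */
                  free_netdev(vsi->netdev);
                  vsi->netdev = NULL;
                  return err;
          }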
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    • ice: fix potential double free in probe unrolling · bc3a0241
      Authored by Jacob Keller
      If ice_init_interrupt_scheme fails, ice_probe will jump to clearing up
      the interrupts. This can lead to some static analysis tools such as the
      compiler sanitizers complaining about double free problems.
      
      Since ice_init_interrupt_scheme already unrolls internally on failure,
      there is no need to call ice_clear_interrupt_scheme when it fails. Add
      a new unroll label and use that instead.
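
      The unroll pattern in question, sketched (the label name is
      illustrative):

          err = ice_init_interrupt_scheme(pf);
          if (err) {
                  dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
                  /* ice_init_interrupt_scheme() already unwound itself, so
                   * skip the label that calls ice_clear_interrupt_scheme()
                   */
                  goto err_init_vsi_unroll;
          }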
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>