1. 24 Jul 2020 · 2 commits
  2. 02 Jul 2020 · 1 commit
  3. 26 Jun 2020 · 1 commit
    • net/intel: remove driver versions from Intel drivers · 34a2a3b8
      Jeff Kirsher authored
      As with other networking drivers, remove the unnecessary driver version
      from the Intel drivers. The ethtool driver information and module version
      will then report the kernel version instead.
      
      For the ixgbe, i40e and ice drivers, the driver passes a driver version to
      the firmware to confirm that it is up and running, so we now pass the
      value of UTS_RELEASE to the firmware (see the sketch after this entry).
      This adminq call is required per the HAS document. The device then sends
      an indication to the BMC that the PF driver is present. This is done using
      the Host NC Driver Status Indication in the NC-SI Get Link command or via
      the Host Network Controller Driver Status Change AEN.
      
      What the BMC may do with this information is implementation-dependent, but
      this is a standard NC-SI 1.1 command we honor per the HAS.
      
      CC: Bruce Allan <bruce.w.allan@intel.com>
      CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
      CC: Alek Loktionov <aleksandr.loktionov@intel.com>
      CC: Kevin Liedtke <kevin.d.liedtke@intel.com>
      CC: Aaron Rowden <aaron.f.rowden@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      34a2a3b8
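
      A minimal sketch of the version handoff described above, assuming an
      adminq helper named ice_aq_send_driver_ver() and a struct ice_driver_ver
      with a driver_string field (these names are illustrative, not the exact
      upstream definitions):

      #include <generated/utsrelease.h>  /* UTS_RELEASE: running kernel release string */

      /* Report the kernel release (instead of a private driver version) to the
       * firmware so it can flag "PF driver present" to the BMC over NC-SI. */
      static void ice_send_version(struct ice_pf *pf)
      {
              struct ice_driver_ver dv = {};

              strscpy(dv.driver_string, UTS_RELEASE, sizeof(dv.driver_string));
              ice_aq_send_driver_ver(&pf->hw, &dv, NULL);
      }
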
  4. 23 May 2020 · 5 commits
  5. 22 May 2020 · 5 commits
  6. 27 Mar 2020 · 1 commit
  7. 21 Mar 2020 · 1 commit
    • ice: enable initial devlink support · 1adf7ead
      Jacob Keller authored
      Begin implementing support for the devlink interface with the ice
      driver.
      
      The pf structure is currently memory-managed through devres, via
      a devm_alloc. To mimic this behavior, after allocating the devlink
      pointer, use devm_add_action to add a teardown action that releases the
      devlink memory on exit (see the sketch after this entry).
      
      The ice hardware is a multi-function PCIe device. Thus, each physical
      function will get its own devlink instance. This means that each
      function will be treated independently, with its own parameters and
      configuration. This is done because the ice driver loads a separate
      instance for each function.
      
      Due to this, the implementation does not enable devlink to manage
      device-wide resources or configuration, as each physical function will
      be treated independently. This is done for simplicity, as managing
      a devlink instance across multiple driver instances would significantly
      increase the complexity for minimal gain.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      1adf7ead
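
      A sketch of the devres-managed lifetime described above. devlink_alloc(),
      devlink_free(), devlink_priv() and devm_add_action_or_reset() are standard
      kernel APIs; ice_devlink_ops and embedding struct ice_pf as the devlink
      private area are assumptions for illustration:

      static void ice_devlink_free(void *devlink)
      {
              devlink_free((struct devlink *)devlink);
      }

      /* Allocate the devlink instance (with the PF as its private area) and tie
       * its release to the device's devres list so it is freed automatically
       * when the driver detaches. */
      static struct ice_pf *ice_allocate_pf(struct device *dev)
      {
              struct devlink *devlink;

              devlink = devlink_alloc(&ice_devlink_ops, sizeof(struct ice_pf));
              if (!devlink)
                      return NULL;

              if (devm_add_action_or_reset(dev, ice_devlink_free, devlink))
                      return NULL;

              return devlink_priv(devlink);
      }
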
  8. 11 Mar 2020 · 4 commits
  9. 20 Feb 2020 · 1 commit
  10. 04 Jan 2020 · 3 commits
    • ice: Add a boundary check in ice_xsk_umem() · 65bb559b
      Krzysztof Kazimierczak authored
      In ice_xsk_umem(), the variable qid, which is later used as an array
      index, is not validated against the array bounds. Because of that,
      a calling function might receive an invalid address, which causes
      a general protection fault when dereferenced.
      
      To address this, add a boundary check to see if qid is greater than the
      size of the UMEM array (see the sketch after this entry). Also, don't
      let the user change vsi->num_xsk_umems just by trying to set up a second
      UMEM if its value is already set (i.e. a UMEM region has already been
      allocated for this VSI).
      
      While at it, make sure that the ring->zca.free pointer is always zeroed
      out if there is no UMEM on a specified ring.
      Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      65bb559b
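
      A sketch of the boundary check, assuming the qid and num_xsk_umems names
      used above (the exact struct layout and the XDP-ring index adjustment are
      omitted for brevity and are assumptions):

      static struct xdp_umem *ice_xsk_umem(struct ice_ring *ring)
      {
              struct ice_vsi *vsi = ring->vsi;
              u16 qid = ring->q_index;

              /* Reject out-of-range queue ids instead of reading past the
               * array, and bail out if no UMEM array was ever allocated
               * for this VSI. */
              if (!vsi->xsk_umems || qid >= vsi->num_xsk_umems)
                      return NULL;

              return vsi->xsk_umems[qid];
      }
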
    • ice: Add code to keep track of current dflt_vsi · fc0f39bc
      Brett Creeley authored
      We can't have more than one default VSI, so prevent another VSI from
      overwriting the current dflt_vsi. This was achieved by adding the
      following functions (sketched after this entry):
      
      ice_is_dflt_vsi_in_use()
      - Used to check if the default VSI is already being used.
      
      ice_is_vsi_dflt_vsi()
      - Used to check if VSI passed in is in fact the default VSI.
      
      ice_set_dflt_vsi()
      - Used to set the default VSI via a switch rule.
      
      ice_clear_dflt_vsi()
      - Used to clear the default VSI via a switch rule.
      
      Also, there was no need to introduce any locking because all mailbox
      events and synchronization of switch filters for the PF happen in the
      service task.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      fc0f39bc
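
      A sketch of three of the helpers listed above; the struct ice_sw field
      dflt_vsi and the rule-programming helper ice_cfg_dflt_vsi() are
      assumptions for illustration:

      bool ice_is_dflt_vsi_in_use(struct ice_sw *sw)
      {
              return sw->dflt_vsi != NULL;
      }

      bool ice_is_vsi_dflt_vsi(struct ice_sw *sw, struct ice_vsi *vsi)
      {
              return sw->dflt_vsi == vsi;
      }

      int ice_set_dflt_vsi(struct ice_sw *sw, struct ice_vsi *vsi)
      {
              /* Only one default VSI is allowed; refuse to overwrite it. */
              if (ice_is_dflt_vsi_in_use(sw))
                      return -EEXIST;

              /* Program the switch rule, then record the new default VSI. */
              if (ice_cfg_dflt_vsi(vsi, true))
                      return -EIO;

              sw->dflt_vsi = vsi;
              return 0;
      }
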
    • ice: Fix VF spoofchk · cd6d6b83
      Brett Creeley authored
      There are many things wrong with the function
      ice_set_vf_spoofchk().
      
      1. The VSI being modified is the PF VSI, not the VF VSI.
      2. We are enabling Rx VLAN pruning instead of Tx VLAN anti-spoof.
      3. The spoofchk setting for each VF is not initialized correctly
         or re-initialized correctly on reset.
      
      To fix [1] we need to make sure we are modifying the VF VSI. This is
      done by using vf->lan_vsi_idx to index into the PF's VSI array (see the
      sketch after this entry).
      
      To fix [2] replace setting Rx VLAN pruning in ice_set_vf_spoofchk()
      with setting Tx VLAN anti-spoof.
      
      To fix [3] we need to make sure the initial VSI settings match what
      is done in ice_set_vf_spoofchk() for spoofchk=on, and make sure this
      also holds across VF reset. This was done by modifying ice_vsi_init()
      to account for the current spoofchk state of the VF VSI.
      
      Because of these changes, Tx VLAN anti-spoof needs to be removed
      from ice_cfg_vlan_pruning(). This is okay for the VF because this
      is now controlled from the admin enabling/disabling spoofchk. For the
      PF, Tx VLAN anti-spoof should not be set. This change requires us to
      call ice_set_vf_spoofchk() when configuring promiscuous mode for the VF,
      which in turn requires moving ice_set_vf_spoofchk() within the file to
      avoid a forward declaration.
      
      Also, add VLAN 0 by default when allocating a VF since the PF is unaware
      if the guest OS is running the 8021q module. Without this, MDD events will
      trigger on untagged traffic because spoofcheck is enabled by default. Due
      to this change, ignore add/delete messages for VLAN 0 from VIRTCHNL since
      this is added/deleted during VF initialization/teardown respectively and
      should not be modified.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      cd6d6b83
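
      A sketch of fixes [1] and [2]: look up the VF's own VSI through
      vf->lan_vsi_idx and toggle Tx VLAN anti-spoof on it. The helpers
      ice_netdev_to_pf() and ice_vsi_ctrl_tx_antispoof(), and the num_alloc_vfs
      field, are hypothetical placeholders:

      int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena)
      {
              struct ice_pf *pf = ice_netdev_to_pf(netdev);
              struct ice_vsi *vf_vsi;
              struct ice_vf *vf;

              if (vf_id >= pf->num_alloc_vfs)
                      return -EINVAL;
              vf = &pf->vf[vf_id];

              /* Fix [1]: operate on the VF's VSI, not the PF VSI. */
              vf_vsi = pf->vsi[vf->lan_vsi_idx];
              if (!vf_vsi)
                      return -ENOENT;

              /* Fix [2]: program Tx VLAN anti-spoof rather than Rx VLAN pruning. */
              return ice_vsi_ctrl_tx_antispoof(vf_vsi, ena);
      }
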
  11. 23 Nov 2019 · 2 commits
  12. 09 Nov 2019 · 2 commits
  13. 05 Nov 2019 · 4 commits
  14. 13 Sep 2019 · 3 commits
  15. 05 Sep 2019 · 4 commits
  16. 27 Aug 2019 · 1 commit
    • ice: Alloc queue management bitmaps and arrays dynamically · 78b5713a
      Anirudh Venkataramanan authored
      The total number of queues available on the device is divided between
      the physical functions (PFs) in the firmware and provided to the driver
      when it gets function capabilities from the firmware. Thus each PF knows
      how many Tx/Rx queues it has. These queues are then doled out to
      different VSIs (for LAN traffic, SR-IOV VF traffic, etc.).
      
      To track usage of these queues at the PF level, the driver uses two
      bitmaps avail_txqs and avail_rxqs. At the VSI level (i.e. struct ice_vsi
      instances) the driver uses two arrays txq_map and rxq_map, to track
      ownership of VSIs' queues in avail_txqs and avail_rxqs respectively.
      
      The aforementioned bitmaps and arrays should be allocated dynamically,
      because the number of queues supported by a PF is only available once
      function capabilities have been queried. The current static allocation
      consumes way more memory than required.
      
      This patch removes the DECLARE_BITMAP for avail_txqs and avail_rxqs
      and instead uses bitmap_zalloc to allocate the bitmaps during init (see
      the sketch after this entry). Similarly, txq_map and rxq_map are now
      allocated in ice_vsi_alloc_arrays. As a result, the ICE_MAX_TXQS and
      ICE_MAX_RXQS defines are no longer needed. Also, as txq_map and rxq_map
      are now allocated and freed dynamically, some code reordering was
      required in ice_vsi_rebuild for correct functioning.
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      78b5713a
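
      A sketch of the dynamic bitmap allocation described above, assuming the
      per-PF queue counts (max_pf_txqs/max_pf_rxqs here) come from the queried
      function capabilities; the field names are illustrative:

      static int ice_init_avail_qmaps(struct ice_pf *pf)
      {
              /* Size the bitmaps from the firmware-reported queue counts
               * instead of compile-time ICE_MAX_TXQS/ICE_MAX_RXQS maximums. */
              pf->avail_txqs = bitmap_zalloc(pf->max_pf_txqs, GFP_KERNEL);
              if (!pf->avail_txqs)
                      return -ENOMEM;

              pf->avail_rxqs = bitmap_zalloc(pf->max_pf_rxqs, GFP_KERNEL);
              if (!pf->avail_rxqs) {
                      bitmap_free(pf->avail_txqs);
                      pf->avail_txqs = NULL;
                      return -ENOMEM;
              }

              return 0;
      }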