1. 05 Sep 2019, 3 commits
  2. 27 Aug 2019, 1 commit
    • ice: Alloc queue management bitmaps and arrays dynamically · 78b5713a
      Authored by Anirudh Venkataramanan
      The total number of queues available on the device is divided between
      the physical functions (PFs) by the firmware, and each PF learns its
      share when it queries function capabilities. Thus each PF knows how
      many Tx/Rx queues it has. These queues are then doled out to different
      VSIs (for LAN traffic, SR-IOV VF traffic, etc.).
      
      To track usage of these queues at the PF level, the driver uses two
      bitmaps, avail_txqs and avail_rxqs. At the VSI level (i.e. struct ice_vsi
      instances), the driver uses two arrays, txq_map and rxq_map, to track
      which entries in avail_txqs and avail_rxqs each VSI owns.
      
      These bitmaps and arrays should be allocated dynamically, because the
      number of queues supported by a PF is only known once function
      capabilities have been queried. The current static allocation consumes
      far more memory than required.
      
      This patch removes the DECLARE_BITMAP for avail_txqs and avail_rxqs
      and instead uses bitmap_zalloc to allocate the bitmaps during init.
      Similarly, txq_map and rxq_map are now allocated in ice_vsi_alloc_arrays.
      As a result, the ICE_MAX_TXQS and ICE_MAX_RXQS defines are no longer
      needed. Also, because txq_map and rxq_map are now allocated and freed
      dynamically, some code reordering was required in ice_vsi_rebuild for
      correct operation. (A minimal allocation sketch follows this entry.)
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
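
      The following is a minimal sketch of the allocation pattern described
      above, not the driver's verbatim code; the field names max_pf_txqs,
      max_pf_rxqs, avail_txqs, and avail_rxqs on the PF structure are assumed
      here for illustration.

      #include <linux/bitmap.h>
      #include <linux/errno.h>
      #include <linux/gfp.h>

      /* Sketch: size the queue-tracking bitmaps from the queried function
       * capabilities instead of compile-time ICE_MAX_TXQS/ICE_MAX_RXQS limits.
       */
      static int example_init_avail_qs(struct ice_pf *pf)
      {
              pf->avail_txqs = bitmap_zalloc(pf->max_pf_txqs, GFP_KERNEL);
              if (!pf->avail_txqs)
                      return -ENOMEM;

              pf->avail_rxqs = bitmap_zalloc(pf->max_pf_rxqs, GFP_KERNEL);
              if (!pf->avail_rxqs) {
                      bitmap_free(pf->avail_txqs);
                      pf->avail_txqs = NULL;
                      return -ENOMEM;
              }

              return 0;
      }

      The matching teardown would call bitmap_free on both pointers once the
      PF is being torn down, mirroring the alloc/free pairing the commit adds.
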
  3. 24 Aug 2019, 2 commits
  4. 21 Aug 2019, 3 commits
  5. 01 Aug 2019, 2 commits
  6. 31 May 2019, 1 commit
  7. 29 May 2019, 3 commits
    • ice: Refactor interrupt tracking · cbe66bfe
      Authored by Brett Creeley
      Currently we have two MSI-x (IRQ) trackers: one for OS-requested MSI-x
      entries (sw_irq_tracker) and one for hardware MSI-x vectors
      (hw_irq_tracker). Generally the sw_irq_tracker has fewer entries than the
      hw_irq_tracker, because the hw_irq_tracker holds entries equal to the
      maximum MSI-x allowed per PF while the sw_irq_tracker holds only the
      non-SR-IOV portion of the vectors (the IRQs actually granted by the
      kernel). All of the non-SR-IOV parts of the driver (i.e. LAN queues,
      RDMA queues, OICR, etc.) take at least one entry from each tracker,
      while SR-IOV takes entries only from the hw_irq_tracker. This approach
      has a few issues that show up during any kind of device reconfiguration
      (i.e. ethtool -L, SR-IOV, etc.). One is that whenever the driver creates
      an ice_q_vector and associates it with a LAN queue pair, it grabs one
      entry from the hw_irq_tracker and one from the sw_irq_tracker. If those
      indices do not match, a Tx timeout occurs; the resulting reset realigns
      the indices and traffic resumes. The mismatched indices come from the
      trackers not being the same size and/or the search_hint in the two
      trackers not being equal. Another reason for the refactor is the
      co-existence of other features with SR-IOV: if SR-IOV is enabled and its
      interrupts are taken from the end of the sw_irq_tracker, other features
      can no longer use that space because the hardware has already given the
      remaining interrupts to SR-IOV.
      
      This patch reworks how we track MSI-x vectors by removing the
      hw_irq_tracker completely; the MSI-x resources needed for SR-IOV are now
      determined all at once rather than per VF. This is possible because,
      when creating VFs, we know how many VFs are wanted and how many MSI-x
      vectors each one needs. It also lets us hand out MSI-x resources from
      the end of the PF's allowed MSI-x vectors, so we are less likely to
      consume entries needed for other features (i.e. RDMA, L2 Offload, etc.).
      
      This patch also reworks the ice_res_tracker structure by removing the
      search_hint and adding a new member, "end". Instead of keeping a
      search_hint, we now always search from 0. The new "end" member is used
      to adjust the end of the ice_res_tracker (specifically the
      sw_irq_tracker) at runtime based on the MSI-x vectors needed by SR-IOV.
      In the normal case, the end of the ice_res_tracker is equal to its
      num_entries.
      
      The sriov_base_vector member was added to the PF structure. It
      represents the starting MSI-x index of all the MSI-x vectors needed by
      the SR-IOV VFs. Depending on how many MSI-x vectors are needed, SR-IOV
      may have to take resources from the sw_irq_tracker. This is done by
      setting sw_irq_tracker->end equal to pf->sriov_base_vector; when all
      SR-IOV VFs are removed, sw_irq_tracker->end is reset back to
      sw_irq_tracker->num_entries. The sriov_base_vector, along with the VF's
      number of MSI-x vectors (pf->num_vf_msix), the vf_id, and the base MSI-x
      index on the PF (pf->hw.func_caps.common_cap.msix_vector_first_id), is
      used to calculate each VF's first absolute HW MSI-x index, which is
      written to the VPINT_ALLOC[_PCI] and GLINT_VECT2FUNC registers to
      program the VF's MSI-x PCI configuration bits. The sriov_base_vector is
      also used along with the VF's num_vf_msix, vf_id, and q_vector->v_idx to
      determine the MSI-x register index within the PF's space (used for
      writing to GLINT_DYN_CTL). (See the index sketch after this entry.)
      
      The interrupt changes remove all references to hw_base_vector,
      hw_oicr_idx, and hw_irq_tracker; only the sw_base_vector, sw_oicr_idx,
      and sw_irq_tracker variables remain. These are then renamed by dropping
      the "sw_" prefix to avoid confusion about their use.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
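
      The following is a minimal sketch of the index arithmetic described
      above, using only the field names quoted in the message
      (sriov_base_vector, num_vf_msix, vf_id, msix_vector_first_id); it is
      illustrative, not the driver's verbatim code, and struct ice_pf /
      struct ice_vf are assumed to carry those fields.

      #include <linux/types.h>

      /* Sketch: derive a VF's first MSI-X vector, PF-relative and absolute. */
      static int example_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
      {
              /* SR-IOV vectors are carved out starting at sriov_base_vector,
               * i.e. from the end of the PF's allowed MSI-X space
               */
              return pf->sriov_base_vector + vf->vf_id * pf->num_vf_msix;
      }

      static u32 example_vf_first_abs_msix(struct ice_pf *pf, struct ice_vf *vf)
      {
              /* device-absolute index used when programming VPINT_ALLOC[_PCI]
               * and GLINT_VECT2FUNC for this VF
               */
              return pf->hw.func_caps.common_cap.msix_vector_first_id +
                     example_vf_first_vector_idx(pf, vf);
      }

      While VFs exist, the commit above caps the shared tracker with
      sw_irq_tracker->end = pf->sriov_base_vector, and restores it to
      num_entries once the VFs are removed.
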
    • ice: Add handler for ethtool selftest · 0e674aeb
      Authored by Anirudh Venkataramanan
      This patch adds a handler for the ethtool selftest. The selftest covers
      link, interrupt, EEPROM, register, and packet loopback tests. (See the
      handler sketch after this entry.)
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
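
      The following is a minimal sketch of the general shape of an ethtool
      .self_test handler; the ex_test_* helpers and EX_TEST_* indices are
      hypothetical placeholders for the five tests listed above, not the ice
      driver's actual functions.

      #include <linux/ethtool.h>
      #include <linux/netdevice.h>

      enum { EX_TEST_REG, EX_TEST_EEPROM, EX_TEST_INTR, EX_TEST_LOOP,
             EX_TEST_LINK, EX_TEST_MAX };

      /* Hypothetical per-item tests; a real driver exercises hardware here
       * and returns non-zero on failure.
       */
      static u64 ex_test_reg(struct net_device *netdev)    { return 0; }
      static u64 ex_test_eeprom(struct net_device *netdev) { return 0; }
      static u64 ex_test_intr(struct net_device *netdev)   { return 0; }
      static u64 ex_test_loop(struct net_device *netdev)   { return 0; }
      static u64 ex_test_link(struct net_device *netdev)   { return 0; }

      /* Wired up via the .self_test member of struct ethtool_ops. */
      static void example_self_test(struct net_device *netdev,
                                    struct ethtool_test *eth_test, u64 *data)
      {
              data[EX_TEST_REG]    = ex_test_reg(netdev);
              data[EX_TEST_EEPROM] = ex_test_eeprom(netdev);
              data[EX_TEST_INTR]   = ex_test_intr(netdev);
              data[EX_TEST_LOOP]   = ex_test_loop(netdev);
              data[EX_TEST_LINK]   = ex_test_link(netdev);

              /* any non-zero entry marks that test, and the whole run, failed */
              if (data[EX_TEST_REG] || data[EX_TEST_EEPROM] ||
                  data[EX_TEST_INTR] || data[EX_TEST_LOOP] ||
                  data[EX_TEST_LINK])
                      eth_test->flags |= ETH_TEST_FL_FAILED;
      }

      From userspace such a selftest is typically triggered with
      ethtool -t <iface>.
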
    • ice: Set minimum default Rx descriptor count to 512 · 1aec6e1b
      Authored by Brett Creeley
      Currently we set the default number of Rx descriptors per queue to the
      system's page size divided by the number of bytes per descriptor. On
      systems with a 4K page size this results in 128 Rx descriptors per
      queue, which causes more dropped packets than desired in the default
      configuration. Fix this by setting the minimum default Rx descriptor
      count per queue to 512. (See the sizing sketch after this entry.)
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
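
      The following is a small, self-contained sketch of the arithmetic
      described above, assuming 4K pages and the 32-byte Rx descriptor size
      implied by the 128-descriptor figure; the constant names are
      illustrative, not the driver's.

      #include <stdio.h>

      #define EX_PAGE_SIZE         4096u /* assumed 4K system pages */
      #define EX_RX_DESC_SIZE        32u /* implied bytes per descriptor (4096 / 128) */
      #define EX_DFLT_MIN_RX_DESC   512u /* new per-queue floor from this commit */

      int main(void)
      {
              unsigned int by_page = EX_PAGE_SIZE / EX_RX_DESC_SIZE; /* old default: 128 */
              unsigned int dflt = by_page > EX_DFLT_MIN_RX_DESC ?
                                  by_page : EX_DFLT_MIN_RX_DESC;     /* now: 512 */

              printf("page-derived default: %u, default with floor: %u\n",
                     by_page, dflt);
              return 0;
      }
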
  8. 24 May 2019, 3 commits
  9. 05 May 2019, 2 commits
  10. 02 May 2019, 2 commits
  11. 18 Apr 2019, 5 commits
  12. 27 Mar 2019, 2 commits
  13. 22 Mar 2019, 2 commits
  14. 20 Mar 2019, 1 commit
  15. 26 Feb 2019, 1 commit
  16. 16 Jan 2019, 4 commits
  17. 21 Nov 2018, 2 commits
  18. 14 Nov 2018, 1 commit