1. 21 August 2019, 1 commit
    • ice: Restructure VFs initialization flows · d82dd83d
      Authored by Akeem G Abodunrin
      This patch restructures how VFs are configured and resources are
      allocated. Instead of freeing resources that were never allocated, and
      resetting empty VFs that were never created, the new flow simply
      allocates resources for the number of requested VFs based on
      availability.
      
      During VF initialization, the global interrupt is disabled and rearmed
      after MSI-X vectors are acquired for the VFs. This allows immediate
      mailbox communication, instead of delaying it until later, which caused
      VF-to-PF communication to fall back to polling instead of actual
      interrupts. The issue manifested when creating a higher number of VFs
      (128 VFs) per PF.
      Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      d82dd83d
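      The sizing decision described above ("allocate for the number of
      requested VFs based on availability") can be sketched as a small helper.
      This is a hypothetical simplification, not the driver's actual code;
      the function name and parameters are illustrative.

      ```c
      #include <assert.h>

      /* Hypothetical sketch of the restructured flow: decide up front how
       * many VFs can actually be backed by the available MSI-X vectors,
       * instead of creating VFs first and then freeing resources that were
       * never allocated. */
      static int vfs_to_allocate(int requested_vfs, int avail_msix,
                                 int msix_per_vf)
      {
          /* Guard against division by zero when no per-VF vectors exist. */
          int supportable = msix_per_vf ? avail_msix / msix_per_vf : 0;

          return requested_vfs < supportable ? requested_vfs : supportable;
      }
      ```

      For example, a request for 128 VFs against 256 available vectors at 4
      vectors per VF would be trimmed to 64 VFs up front, rather than failing
      partway through creation.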
  2. 01 August 2019, 2 commits
  3. 31 May 2019, 2 commits
  4. 30 May 2019, 2 commits
    • ice: Add support for virtchnl_vector_map.[rxq|txq]_map · 047e52c0
      Authored by Anirudh Venkataramanan
      Add support for virtchnl_vector_map.[rxq|txq]_map, using the bitmap to
      associate the indicated queues with the specified vector. This support
      is needed because the Windows AVF driver calls
      VIRTCHNL_OP_CONFIG_IRQ_MAP for each vector and uses the bitmap to
      indicate the associated queues.
      
      Updated ice_vc_dis_qs_msg to not subtract one from
      virtchnl_irq_map_info.num_vectors, and changed the VSI vector index to
      the vector id. This change supports the Windows AVF driver, which maps
      one vector at a time and sets num_vectors to one. The vector_id is used
      to index the vector array.
      
      Add a check for vector_id zero, returning VIRTCHNL_STATUS_ERR_PARAM if
      vector_id is zero and there are rings associated with that vector;
      vector_id zero is reserved for the OICR.
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      047e52c0
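      The bitmap walk and the vector_id-zero guard can be sketched as below.
      This is a simplified, hypothetical model (a 16-bit map and flat arrays
      standing in for the driver's ring structures), not the actual ice code.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define VIRTCHNL_STATUS_SUCCESS   0
      #define VIRTCHNL_STATUS_ERR_PARAM (-5)  /* illustrative value */

      /* Simplified view of virtchnl_vector_map: the rxq_map/txq_map bitmaps
       * mark which queues the given vector_id services. */
      struct vector_map {
          uint16_t vector_id;
          uint16_t rxq_map;
          uint16_t txq_map;
      };

      /* Walk the bitmaps and record the vector for every queue whose bit is
       * set. vector_id 0 is reserved for the OICR, so reject it if any ring
       * is mapped to it. */
      static int map_queues_to_vector(const struct vector_map *vm,
                                      uint16_t vec_of_rxq[16],
                                      uint16_t vec_of_txq[16])
      {
          if (vm->vector_id == 0 && (vm->rxq_map || vm->txq_map))
              return VIRTCHNL_STATUS_ERR_PARAM;

          for (int q = 0; q < 16; q++) {
              if (vm->rxq_map & (1u << q))
                  vec_of_rxq[q] = vm->vector_id;
              if (vm->txq_map & (1u << q))
                  vec_of_txq[q] = vm->vector_id;
          }
          return VIRTCHNL_STATUS_SUCCESS;
      }
      ```

      Because each call maps one vector's queues from its own bitmap, a
      driver that issues VIRTCHNL_OP_CONFIG_IRQ_MAP once per vector with
      num_vectors set to one (as the Windows AVF driver does) works the same
      as one that maps all vectors in a single call.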
    • ice: Use GLINT_DYN_CTL to disable VF's interrupts · 72ecb896
      Authored by Brett Creeley
      Currently in ice_free_vf_res() we write to the VFINT_DYN_CTLN register
      in the PF's function space to disable all of a VF's interrupts. This is
      incorrect because that register is only for use in the VF's function
      space, which becomes obvious when seeing that the valid indices for the
      VFINT_DYN_CTLN register are 0-63, the maximum number of interrupts for
      a VF (not including the OICR interrupt). Fix this by writing to the
      GLINT_DYN_CTL register for each VF. We can do this because we keep
      track of each VF's first_vector_idx inside the PF's function space,
      along with the number of interrupts given to each VF.
      
      Also, in ice_free_vfs() we were disabling Rx/Tx queues after calling
      pci_disable_sriov(). Part of disabling the Tx queues causes the PF
      driver to trigger a software interrupt, which runs the VF's napi
      routine. This doesn't currently work because pci_disable_sriov() causes
      iavf_remove() to be called, which disables interrupts. Fix this by
      disabling the Rx/Tx queues prior to pci_disable_sriov().
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      72ecb896
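      The index arithmetic behind the fix can be sketched as follows: since
      the PF tracks each VF's first_vector_idx and vector count, the PF-space
      GLINT_DYN_CTL index for any VF vector is a simple offset. This is a
      hypothetical simplification; struct and function names are
      illustrative.

      ```c
      #include <assert.h>

      /* Simplified model: each VF owns a contiguous block of the PF's MSI-X
       * vectors starting at first_vector_idx. Disabling a VF's interrupts
       * from the PF means writing GLINT_DYN_CTL (a PF-space register indexed
       * by absolute PF vector) for each of the VF's vectors, rather than
       * VFINT_DYN_CTLN, which is only valid inside the VF's own function
       * space (indices 0-63). */
      struct vf_info {
          int first_vector_idx;  /* first PF-space vector owned by this VF */
          int num_msix;          /* number of vectors granted to this VF */
      };

      /* Return the PF-space GLINT_DYN_CTL index for the VF's v_idx-th
       * vector, or -1 if v_idx is out of range for this VF. */
      static int glint_dyn_ctl_index(const struct vf_info *vf, int v_idx)
      {
          if (v_idx < 0 || v_idx >= vf->num_msix)
              return -1;
          return vf->first_vector_idx + v_idx;
      }
      ```

      A loop over v_idx from 0 to num_msix - 1 then disables every vector the
      VF owns, entirely from PF-space registers.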
  5. 29 May 2019, 1 commit
    • ice: Refactor interrupt tracking · cbe66bfe
      Authored by Brett Creeley
      Currently we have two MSI-X (IRQ) trackers: one for OS-requested MSI-X
      entries (sw_irq_tracker) and one for hardware MSI-X vectors
      (hw_irq_tracker). Generally the sw_irq_tracker has fewer entries than
      the hw_irq_tracker, because the hw_irq_tracker has entries equal to the
      maximum allowed MSI-X per PF, while the sw_irq_tracker mainly covers
      the minimum (the non-SR-IOV portion of the vectors, i.e. kernel-granted
      IRQs). All of the non-SR-IOV portions of the driver (i.e. LAN queues,
      RDMA queues, OICR, etc.) take at least one of each type of tracker
      resource; SR-IOV only grabs entries from the hw_irq_tracker. There are
      a few issues with this approach that surface during any kind of device
      reconfiguration (i.e. ethtool -L, SR-IOV, etc.). One is that any time
      the driver creates an ice_q_vector and associates it with a LAN queue
      pair, it grabs and uses one entry from the hw_irq_tracker and one from
      the sw_irq_tracker. If the indices on these do not match, it causes a
      Tx timeout, which causes a reset; afterwards the indices match up again
      and traffic resumes. The mismatched indices come from the trackers not
      being the same size and/or the search_hint in the two trackers not
      being equal. Another reason for the refactor is the co-existence of
      features with SR-IOV: if SR-IOV is enabled and the interrupts are taken
      from the end of the sw_irq_tracker, then other features can no longer
      use this space, because the hardware has now given the remaining
      interrupts to SR-IOV.
      
      This patch reworks how we track MSI-X vectors by removing the
      hw_irq_tracker completely and determining the MSI-X resources needed
      for SR-IOV all at once rather than per VF. This can be done because
      when creating VFs we know how many are wanted and how many MSI-X
      vectors each VF needs. It also allows us to start using MSI-X resources
      from the end of the PF's allowed MSI-X vectors, so we are less likely
      to use entries needed for other features (i.e. RDMA, L2 Offload, etc.).
      
      This patch also reworks the ice_res_tracker structure by removing the
      search_hint and adding a new member, "end". Instead of using a
      search_hint, we always search from 0. The new member, "end", is used to
      manipulate the end of the ice_res_tracker (specifically the
      sw_irq_tracker) at runtime, based on the MSI-X vectors needed by
      SR-IOV. In the normal case, the end of the ice_res_tracker equals its
      num_entries.
      
      The sriov_base_vector member was added to the PF structure. It
      represents the starting MSI-X index of all the MSI-X vectors needed for
      all SR-IOV VFs. Depending on how many MSI-X vectors are needed, SR-IOV
      may have to take resources from the sw_irq_tracker. This is done by
      setting sw_irq_tracker->end equal to pf->sriov_base_vector. When all
      SR-IOV VFs are removed, sw_irq_tracker->end is reset back to
      sw_irq_tracker->num_entries. The sriov_base_vector, along with the VF's
      number of MSI-X vectors (pf->num_vf_msix), vf_id, and the base MSI-X
      index on the PF (pf->hw.func_caps.common_cap.msix_vector_first_id), is
      used to calculate the first absolute hardware MSI-X index for each VF,
      which is written to the VPINT_ALLOC[_PCI] and GLINT_VECT2FUNC registers
      to program the VF's MSI-X PCI configuration bits. The sriov_base_vector
      is also used along with the VF's num_vf_msix, vf_id, and
      q_vector->v_idx to determine the MSI-X register index (used for writing
      to GLINT_DYN_CTL) within the PF's space.
      
      The interrupt changes removed all references to hw_base_vector,
      hw_oicr_idx, and hw_irq_tracker; only the sw_base_vector, sw_oicr_idx,
      and sw_irq_tracker variables remain. These were then renamed by
      dropping the "sw_" prefix, to avoid confusion between the variables and
      their use.
      Signed-off-by: Brett Creeley <brett.creeley@intel.com>
      Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      cbe66bfe
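      The reworked tracker described above (no search_hint, always search
      from 0, a movable "end" that fences off the tail for SR-IOV) can be
      sketched in miniature. This is a hypothetical simplification; the
      struct layout, macro, and helper names are illustrative, not the actual
      ice code.

      ```c
      #include <assert.h>

      #define TRACKER_MAX 16  /* illustrative capacity */

      /* Simplified ice_res_tracker: searches always start at index 0, and
       * "end" can be pulled back below num_entries so that the tail
       * [end, num_entries) is reserved for SR-IOV. Normally end ==
       * num_entries. */
      struct res_tracker {
          int num_entries;
          int end;                 /* normally == num_entries */
          int list[TRACKER_MAX];   /* 0 = free, nonzero = owner id */
      };

      /* Claim the first free entry below "end"; return its index, or -1 if
       * everything below "end" is taken. */
      static int tracker_get(struct res_tracker *t, int owner)
      {
          for (int i = 0; i < t->end; i++) {
              if (!t->list[i]) {
                  t->list[i] = owner;
                  return i;
              }
          }
          return -1;
      }

      /* Reserve the tail [base, num_entries) for SR-IOV by lowering "end",
       * mirroring sw_irq_tracker->end = pf->sriov_base_vector. */
      static void tracker_set_sriov_base(struct res_tracker *t, int base)
      {
          t->end = base;
      }

      /* Give the tail back once all VFs are removed. */
      static void tracker_clear_sriov(struct res_tracker *t)
      {
          t->end = t->num_entries;
      }
      ```

      With a single tracker and no search_hint, two lookups for the same
      ice_q_vector can no longer return mismatched indices, which is the Tx
      timeout scenario the refactor eliminates.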
  6. 24 May 2019, 3 commits
  7. 05 May 2019, 2 commits
  8. 02 May 2019, 3 commits
  9. 18 April 2019, 2 commits
  10. 27 March 2019, 4 commits
  11. 26 March 2019, 1 commit
  12. 25 March 2019, 1 commit
  13. 22 March 2019, 5 commits
  14. 20 March 2019, 4 commits
  15. 26 February 2019, 3 commits
  16. 16 January 2019, 1 commit
  17. 21 November 2018, 1 commit
  18. 14 November 2018, 1 commit
  19. 07 November 2018, 1 commit