1. 25 Aug, 2021 1 commit
  2. 06 Jul, 2021 1 commit
  3. 17 Jun, 2021 1 commit
  4. 01 May, 2021 1 commit
  5. 29 Apr, 2021 2 commits
  6. 04 Apr, 2021 1 commit
    • PCI/IOV: Add sysfs MSI-X vector assignment interface · c3d5c2d9
      Committed by Leon Romanovsky
      A typical cloud provider SR-IOV use case is to create many VFs for use by
      guest VMs. The VFs may not be assigned to a VM until a customer requests a
      VM of a certain size, e.g., number of CPUs. A VF may need MSI-X vectors
      proportional to the number of CPUs in the VM, but there is no standard way
      to change the number of MSI-X vectors supported by a VF.
      
      Some Mellanox ConnectX devices support dynamic assignment of MSI-X vectors
      to SR-IOV VFs. This can be done by the PF driver after VFs are enabled,
      and it can be done without affecting VFs that are already in use. The
      hardware supports a limited pool of MSI-X vectors that can be assigned to
      the PF or to individual VFs.  This is device-specific behavior that
      requires support in the PF driver.
      
      Add a read-only "sriov_vf_total_msix" sysfs file for the PF and a writable
      "sriov_vf_msix_count" file for each VF. Management software may use these
      to learn how many MSI-X vectors are available and to dynamically assign
      them to VFs before the VFs are passed through to a VM.
      
      If the PF driver implements the ->sriov_get_vf_total_msix() callback,
      "sriov_vf_total_msix" contains the total number of MSI-X vectors available
      for distribution among VFs.
      
      If no driver is bound to the VF, writing "N" to "sriov_vf_msix_count" uses
      the PF driver ->sriov_set_msix_vec_count() callback to assign "N" MSI-X
      vectors to the VF.  When a VF driver subsequently reads the MSI-X Message
      Control register, it will see the new Table Size "N".
      
      Link: https://lore.kernel.org/linux-pci/20210314124256.70253-2-leon@kernel.org
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
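
      As a minimal sketch (not part of the patch), this is how management
      software might drive the two sysfs files described above, assuming a
      hypothetical PF at 0000:01:00.0 and one of its VFs at 0000:01:00.2;
      only the file names "sriov_vf_total_msix" and "sriov_vf_msix_count"
      come from the commit itself:

      #include <stdio.h>

      int main(void)
      {
              const char *pf = "/sys/bus/pci/devices/0000:01:00.0"; /* hypothetical PF */
              const char *vf = "/sys/bus/pci/devices/0000:01:00.2"; /* hypothetical VF */
              char path[256];
              int total = 0;
              FILE *f;

              /* Pool size reported via the PF driver's ->sriov_get_vf_total_msix() */
              snprintf(path, sizeof(path), "%s/sriov_vf_total_msix", pf);
              f = fopen(path, "r");
              if (!f || fscanf(f, "%d", &total) != 1)
                      return 1;
              fclose(f);
              printf("PF has %d MSI-X vectors available for VFs\n", total);

              /* Assign 8 vectors to the (driverless) VF before giving it to a VM;
               * the kernel forwards this to ->sriov_set_msix_vec_count() */
              snprintf(path, sizeof(path), "%s/sriov_vf_msix_count", vf);
              f = fopen(path, "w");
              if (!f || fprintf(f, "8\n") < 0)
                      return 1;
              fclose(f);
              return 0;
      }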
  7. 12 Mar, 2021 1 commit
  8. 28 Jan, 2021 1 commit
  9. 15 Jan, 2021 1 commit
  10. 11 Dec, 2020 2 commits
  11. 06 Dec, 2020 2 commits
  12. 05 Dec, 2020 4 commits
  13. 21 Nov, 2020 2 commits
  14. 01 Oct, 2020 2 commits
  15. 30 Sep, 2020 1 commit
  16. 04 Aug, 2020 1 commit
  17. 23 Jul, 2020 1 commit
  18. 11 Jul, 2020 1 commit
  19. 08 Jul, 2020 1 commit
  20. 29 Mar, 2020 6 commits
  21. 11 Mar, 2020 3 commits
    • PCI: Add PCIE_LNKCAP2_SLS2SPEED() macro · 757bfaa2
      Committed by Yicong Yang
      Add PCIE_LNKCAP2_SLS2SPEED macro for transforming raw Link Capabilities 2
      values to the pci_bus_speed. This is next to PCIE_SPEED2MBS_ENC() to make
      it easier to update both places when adding support for new speeds.
      
      Link: https://lore.kernel.org/r/1581937984-40353-10-git-send-email-yangyicong@hisilicon.com
      Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
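
      As a rough sketch of the shape of such a macro (the exact definition in
      the patch may differ), it picks the highest bit set in the Link
      Capabilities 2 Supported Link Speeds Vector and returns the matching
      pci_bus_speed value; PCI_EXP_LNKCAP2_SLS_* and PCIE_SPEED_* are existing
      kernel definitions:

      #define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
              ((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
               (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
               (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB  ? PCIE_SPEED_8_0GT  : \
               (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB  ? PCIE_SPEED_5_0GT  : \
               (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB  ? PCIE_SPEED_2_5GT  : \
               PCI_SPEED_UNKNOWN)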
    • PCI: Use pci_speed_string() for all PCI/PCI-X/PCIe strings · 6348a34d
      Committed by Bjorn Helgaas
      Previously some PCI speed strings came from pci_speed_string(), some came
      from the PCIe-specific PCIE_SPEED2STR(), and some came from a PCIe-specific
      switch statement.  These methods were inconsistent:
      
        pci_speed_string()     PCIE_SPEED2STR()     switch
        ------------------     ----------------     ------
        33 MHz PCI
        ...
        2.5 GT/s PCIe          2.5 GT/s             2.5 GT/s
        5.0 GT/s PCIe          5 GT/s               5 GT/s
        8.0 GT/s PCIe          8 GT/s               8 GT/s
        16.0 GT/s PCIe         16 GT/s              16 GT/s
        32.0 GT/s PCIe         32 GT/s              32 GT/s
      
      Standardize on pci_speed_string() as the single source of these strings.
      
      Note that this adds ".0" and "PCIe" to some messages, including sysfs
      "max_link_speed" files, a brcmstb "link up" message, and the link status
      dmesg logging, e.g.,
      
        nvme 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
      
      I think it's better to standardize on a single version of the speed text.
      Previously we had strings like this:
      
        /sys/bus/pci/slots/0/cur_bus_speed: 8.0 GT/s PCIe
        /sys/bus/pci/slots/0/max_bus_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8 GT/s
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8 GT/s
      
      This changes the latter two to match the slots files:
      
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8.0 GT/s PCIe
      
      Based-on-patch by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
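
      To illustrate the consolidation (the function and variable names below
      are assumed, not lines from the actual patch), a sysfs show routine that
      used to format the speed with the PCIe-only PCIE_SPEED2STR() can now
      call the common helper:

      #include <linux/pci.h>

      static ssize_t max_link_speed_show(struct device *dev,
                                         struct device_attribute *attr, char *buf)
      {
              struct pci_dev *pdev = to_pci_dev(dev);

              /* before: sprintf(buf, "%s\n", PCIE_SPEED2STR(pcie_get_speed_cap(pdev))); */
              return sprintf(buf, "%s\n",
                             pci_speed_string(pcie_get_speed_cap(pdev)));
      }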
    • PCI: Add pci_speed_string() · e56faff5
      Committed by Bjorn Helgaas
      Add pci_speed_string() to return a text description of the supplied bus or
      link speed.  The slot code previously used the private
      pci_bus_speed_strings[] array for this purpose, but adding this interface
      will enable us to consolidate similar code elsewhere.
      
      Export pcie_link_speed[] and pci_speed_string() so they can be used by
      modules.
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
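
      A short sketch of the interface this adds; the prototype is inferred
      from the commit message, and the logging helper below is purely
      illustrative:

      #include <linux/pci.h>

      /* Assumed prototype: text description of an enum pci_bus_speed value. */
      const char *pci_speed_string(enum pci_bus_speed speed);

      /* With pcie_link_speed[] and pci_speed_string() exported, a module can
       * translate the Link Status "current link speed" field into text: */
      static void log_link_speed(struct pci_dev *pdev)
      {
              u16 linksta;

              pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &linksta);
              pci_info(pdev, "current link speed: %s\n",
                       pci_speed_string(pcie_link_speed[linksta & PCI_EXP_LNKSTA_CLS]));
      }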
  22. 29 Feb, 2020 1 commit
  23. 19 Dec, 2019 1 commit
  24. 22 Nov, 2019 2 commits