1. 04 Nov 2022, 1 commit
  2. 05 Oct 2022, 1 commit
  3. 27 Sep 2022, 1 commit
  4. 13 Sep 2022, 2 commits
  5. 12 Jul 2022, 1 commit
  6. 25 Apr 2022, 1 commit
  7. 20 Oct 2021, 1 commit
    • PCI: Re-enable Downstream Port LTR after reset or hotplug · e1b0d0bb
      Authored by Mingchuang Qiao
      Per PCIe r5.0, sec 7.5.3.16, Downstream Ports must disable LTR if the link
      goes down (the Port goes DL_Down status).  This is a problem because the
      Downstream Port's dev->ltr_path is still set, so we think LTR is still
      enabled, and we enable LTR in the Endpoint.  When it sends LTR messages,
      they cause Unsupported Request errors at the Downstream Port.
      
      This happens in the reset path, where we may enable LTR in
      pci_restore_pcie_state() even though the Downstream Port disabled LTR
      because the reset caused a link down event.
      
      It also happens in the hot-remove and hot-add path, where we may enable LTR
      in pci_configure_ltr() even though the Downstream Port disabled LTR when
      the hot-remove took the link down.
      
      In these two scenarios, check the upstream bridge and restore its LTR
      enable if appropriate.
      
      The Unsupported Request may be logged by AER as follows:
      
        pcieport 0000:00:1d.0: AER: Uncorrected (Non-Fatal) error received: id=00e8
        pcieport 0000:00:1d.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, id=00e8(Requester ID)
        pcieport 0000:00:1d.0:   device [8086:9d18] error status/mask=00100000/00010000
        pcieport 0000:00:1d.0:    [20] Unsupported Request    (First)
      
      In addition, if LTR is not configured correctly, the link cannot enter the
      L1.2 state, which prevents some machines from entering the S0ix low power
      state.
      
      [bhelgaas: commit log]
      Link: https://lore.kernel.org/r/20211012075614.54576-1-mingchuang.qiao@mediatek.com
      Reported-by: Utkarsh H Patel <utkarsh.h.patel@intel.com>
      Signed-off-by: Mingchuang Qiao <mingchuang.qiao@mediatek.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
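      The patch itself is not shown on this page, but the fix described above
      can be sketched roughly as follows (an illustration of the idea, not the
      verbatim diff; the helper name pci_bridge_reconfigure_ltr() and its call
      sites in the reset and hotplug paths are assumed from the description):

          /* Restore the upstream bridge's LTR enable before LTR is turned
           * back on in the device below it.  Relies on bridge->ltr_path,
           * which is only maintained when CONFIG_PCIEASPM is enabled.
           */
          static void pci_bridge_reconfigure_ltr(struct pci_dev *dev)
          {
                  struct pci_dev *bridge = pci_upstream_bridge(dev);
                  u16 ctl;

                  if (!bridge || !bridge->ltr_path)
                          return;

                  pcie_capability_read_word(bridge, PCI_EXP_DEVCTL2, &ctl);
                  if (!(ctl & PCI_EXP_DEVCTL2_LTR_EN)) {
                          pci_dbg(bridge, "re-enabling LTR\n");
                          pcie_capability_set_word(bridge, PCI_EXP_DEVCTL2,
                                                   PCI_EXP_DEVCTL2_LTR_EN);
                  }
          }

      Calling such a helper before re-enabling LTR in pci_restore_pcie_state()
      and pci_configure_ltr() would cover both scenarios listed above.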
  8. 05 Oct 2021, 1 commit
  9. 30 Sep 2021, 1 commit
    • PCI: ACPI: PM: Do not use pci_platform_pm_ops for ACPI · d97c5d4c
      Authored by Rafael J. Wysocki
      Using struct pci_platform_pm_ops for ACPI adds unnecessary
      indirection to the interactions between the PCI core and ACPI PM,
      which is also subject to retpolines.
      
      Moreover, it is not particularly clear from the current code that,
      as far as PCI PM is concerned, "platform" really means just ACPI
      except for the special cases when Intel MID PCI PM is used or when
      ACPI support is disabled (through the kernel config or command line,
      or because there are no usable ACPI tables on the system).
      
      To address the above, rework the PCI PM code to invoke ACPI PM
      functions directly as needed and drop the acpi_pci_platform_pm
      object that is not necessary any more.
      
      Accordingly, update some of the ACPI PM functions in question to do
      extra checks in case ACPI support is disabled (which previously was
      taken care of by not setting the pci_platform_pm_ops pointer in those
      cases).

      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Ferry Toth <fntoth@gmail.com>
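      A rough sketch of what "invoke ACPI PM functions directly" and the extra
      ACPI-disabled checks can look like (illustrative only; the exact helper
      names and their placement in drivers/pci/pci.c and pci-acpi.c are
      assumptions based on the description above):

          /* pci.c: call the ACPI helper directly instead of dereferencing
           * a pci_platform_pm_ops function pointer.
           */
          static inline bool platform_pci_power_manageable(struct pci_dev *dev)
          {
                  if (pci_use_mid_pm())
                          return true;

                  return acpi_pci_power_manageable(dev);
          }

          /* pci-acpi.c: the helper copes with "no usable ACPI" on its own by
           * checking for an ACPI companion device, so nothing needs to be
           * conditionally registered to keep it from running on non-ACPI
           * systems.
           */
          bool acpi_pci_power_manageable(struct pci_dev *dev)
          {
                  struct acpi_device *adev = ACPI_COMPANION(&dev->dev);

                  return adev && acpi_device_power_manageable(adev);
          }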
  10. 27 Sep 2021, 1 commit
    • PCI: PM: Do not use pci_platform_pm_ops for Intel MID PM · d5b0d883
      Authored by Rafael J. Wysocki
      There are only two users of struct pci_platform_pm_ops in the tree,
      one of which is Intel MID PM and the other one is ACPI.  They are
      mutually exclusive and the MID PM should take precedence when they
      both are enabled, but whether or not this really is the case hinges
      on the specific ordering of arch_initcall() calls made by them.
      
      The struct pci_platform_pm_ops abstraction is not really necessary
      for just these two users, but it adds complexity and overhead because
      of retpolines involved in using all of the function pointers in there.
      It also makes following the code a bit more difficult than it would
      be otherwise.
      
      Moreover, Intel MID PCI PM doesn't even implement the majority of the
      function pointers in struct pci_platform_pm_ops in a meaningful way,
      so switch over the PCI core to calling the relevant MID PM routines,
      mid_pci_set_power_state() and mid_pci_get_power_state(), directly as
      needed and drop mid_pci_platform_pm.

      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Ferry Toth <fntoth@gmail.com>
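      The direct-call pattern this describes can be sketched as follows (a
      simplified illustration; showing the non-MID fallback as direct ACPI
      calls is an assumption, since at the time of this commit that path
      still went through pci_platform_pm):

          /* If the platform uses Intel MID PCI PM, call the MID routines
           * directly; pci_use_mid_pm() reports whether MID PCI PM applies
           * on this system, so other platforms skip these calls entirely.
           */
          static int platform_pci_set_power_state(struct pci_dev *dev,
                                                  pci_power_t state)
          {
                  if (pci_use_mid_pm())
                          return mid_pci_set_power_state(dev, state);

                  return acpi_pci_set_power_state(dev, state);
          }

          static pci_power_t platform_pci_get_power_state(struct pci_dev *dev)
          {
                  if (pci_use_mid_pm())
                          return mid_pci_get_power_state(dev);

                  return acpi_pci_get_power_state(dev);
          }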
  11. 25 Aug 2021, 1 commit
  12. 21 Aug 2021, 1 commit
  13. 19 Aug 2021, 4 commits
  14. 18 Aug 2021, 1 commit
  15. 06 Jul 2021, 1 commit
  16. 17 Jun 2021, 1 commit
  17. 01 May 2021, 1 commit
  18. 29 Apr 2021, 2 commits
  19. 04 Apr 2021, 1 commit
    • PCI/IOV: Add sysfs MSI-X vector assignment interface · c3d5c2d9
      Authored by Leon Romanovsky
      A typical cloud provider SR-IOV use case is to create many VFs for use by
      guest VMs. The VFs may not be assigned to a VM until a customer requests a
      VM of a certain size, e.g., number of CPUs. A VF may need MSI-X vectors
      proportional to the number of CPUs in the VM, but there is no standard way
      to change the number of MSI-X vectors supported by a VF.
      
      Some Mellanox ConnectX devices support dynamic assignment of MSI-X vectors
      to SR-IOV VFs. This can be done by the PF driver after VFs are enabled,
      and it can be done without affecting VFs that are already in use. The
      hardware supports a limited pool of MSI-X vectors that can be assigned to
      the PF or to individual VFs.  This is device-specific behavior that
      requires support in the PF driver.
      
      Add a read-only "sriov_vf_total_msix" sysfs file for the PF and a writable
      "sriov_vf_msix_count" file for each VF. Management software may use these
      to learn how many MSI-X vectors are available and to dynamically assign
      them to VFs before the VFs are passed through to a VM.
      
      If the PF driver implements the ->sriov_get_vf_total_msix() callback,
      "sriov_vf_total_msix" contains the total number of MSI-X vectors available
      for distribution among VFs.
      
      If no driver is bound to the VF, writing "N" to "sriov_vf_msix_count" uses
      the PF driver ->sriov_set_msix_vec_count() callback to assign "N" MSI-X
      vectors to the VF.  When a VF driver subsequently reads the MSI-X Message
      Control register, it will see the new Table Size "N".
      
      Link: https://lore.kernel.org/linux-pci/20210314124256.70253-2-leon@kernel.org
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
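      For a PF driver, opting into this interface means filling in the two new
      struct pci_driver callbacks.  The sketch below is hypothetical: the
      driver name, ID table, probe routine, and the my_pf_* device helpers are
      made up for illustration; only the callback hooks themselves come from
      the interface described above.

          /* Report the size of the device's MSI-X vector pool; sysfs exposes
           * this via the PF's read-only sriov_vf_total_msix file.
           */
          static u32 my_pf_get_vf_total_msix(struct pci_dev *pf)
          {
                  return my_pf_query_msix_pool(pf);          /* hypothetical */
          }

          /* Resize one VF's MSI-X table; invoked when management software
           * writes N to that VF's sriov_vf_msix_count file while no driver
           * is bound to the VF.
           */
          static int my_pf_set_msix_vec_count(struct pci_dev *vf, int count)
          {
                  return my_pf_assign_msix_to_vf(vf, count); /* hypothetical */
          }

          static struct pci_driver my_pf_driver = {
                  .name                     = "my_pf",
                  .id_table                 = my_pf_ids,
                  .probe                    = my_pf_probe,
                  .sriov_get_vf_total_msix  = my_pf_get_vf_total_msix,
                  .sriov_set_msix_vec_count = my_pf_set_msix_vec_count,
          };

      Management software can then read sriov_vf_total_msix under the PF's
      sysfs directory and write the desired vector count to each VF's
      sriov_vf_msix_count before passing the VF through to a VM.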
  20. 12 Mar 2021, 1 commit
  21. 28 Jan 2021, 1 commit
  22. 15 Jan 2021, 1 commit
  23. 11 Dec 2020, 2 commits
  24. 06 Dec 2020, 2 commits
  25. 05 Dec 2020, 4 commits
  26. 21 Nov 2020, 2 commits
  27. 01 Oct 2020, 2 commits
  28. 30 Sep 2020, 1 commit