1. 16 May 2020, 2 commits
    • PCI/PM: Assume ports without DLL Link Active train links in 100 ms · ec411e02
      Authored by Mika Westerberg
      Kai-Heng Feng reported that it takes a long time (> 1 s) to resume
      Thunderbolt-connected devices from both runtime suspend and system sleep
      (s2idle).
      
      This was because some Downstream Ports that support > 5 GT/s do not also
      support Data Link Layer Link Active reporting.  Per PCIe r5.0 sec 6.6.1:
      
        With a Downstream Port that supports Link speeds greater than 5.0 GT/s,
        software must wait a minimum of 100 ms after Link training completes
        before sending a Configuration Request to the device immediately below
        that Port. Software can determine when Link training completes by polling
        the Data Link Layer Link Active bit or by setting up an associated
        interrupt (see Section 6.7.3.3).
      
      Sec 7.5.3.6 requires such Ports to support DLL Link Active reporting, but
      at least the Intel JHL6240 Thunderbolt 3 Bridge [8086:15c0] and the Intel
      JHL7540 Thunderbolt 3 Bridge [8086:15ea] do not.
      
      Previously we tried to wait for Link training to complete, but since there
      was no DLL Link Active reporting, all we could do was wait the worst-case
      1000 ms, then another 100 ms.
      
      Instead of using the supported speeds to determine whether to wait for Link
      training, check whether the port supports DLL Link Active reporting.  The
      Ports in question do not, so we'll wait only the 100 ms required for Ports
      that support Link speeds <= 5 GT/s.
      
      This of course assumes these Ports always train the Link within 100 ms even
      if they are operating at > 5 GT/s, which is not required by the spec.
      
      [bhelgaas: commit log, comment]
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
      Link: https://lore.kernel.org/r/20200514133043.27429-1-mika.westerberg@linux.intel.com
      Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      ec411e02
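      A minimal sketch of the decision path described in the commit above, written as standalone C rather than the kernel's actual pci_bridge_wait_for_secondary_bus()/pcie_wait_for_link_delay() code; the struct, field, and helper names below (dsp, dll_link_active, wait_for_dll_link_active, sleep_ms) are illustrative stand-ins for the DLL Link Active Reporting capability bit and the Link Status polling the commit relies on:

        #include <stdbool.h>
        #include <unistd.h>

        /* Hypothetical model of the Downstream Port state this commit cares about. */
        struct dsp {
                bool link_active_reporting;     /* DLL Link Active Reporting Capable */
                bool dll_link_active;           /* current DLL Link Active bit */
        };

        static void sleep_ms(unsigned int ms)
        {
                usleep(ms * 1000);
        }

        /* Stand-in for polling the DLL Link Active bit in the Link Status register. */
        static bool wait_for_dll_link_active(struct dsp *port, unsigned int timeout_ms)
        {
                unsigned int waited = 0;

                while (!port->dll_link_active && waited < timeout_ms) {
                        sleep_ms(10);
                        waited += 10;
                }
                return port->dll_link_active;
        }

        /*
         * Wait before sending the first Configuration Request to the device
         * below @port.  @delay_ms is at least the 100 ms from PCIe r5.0 sec 6.6.1.
         */
        static void wait_for_downstream_device(struct dsp *port, unsigned int delay_ms)
        {
                if (!port->link_active_reporting) {
                        /*
                         * Link training cannot be observed; assume the Link
                         * trains within the 100 ms wait that Ports limited to
                         * 5 GT/s require, and wait only that delay.
                         */
                        sleep_ms(delay_ms);
                        return;
                }

                /* Poll DLL Link Active (worst case 1000 ms), then wait the delay. */
                if (wait_for_dll_link_active(port, 1000))
                        sleep_ms(delay_ms);
        }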
    • PCI/PM: Adjust pcie_wait_for_link_delay() for caller delay · f044baaf
      Authored by Bjorn Helgaas
      The caller of pcie_wait_for_link_delay() specifies the time to wait after
      the link becomes active.  When the downstream port doesn't support link
      active reporting, obviously we can't tell when the link becomes active, so
      we waited the worst-case time (1000 ms) plus 100 ms, ignoring the delay
      from the caller.
      
      Instead, wait for 1000 ms + the delay from the caller.
      
      Fixes: 4827d638 ("PCI/PM: Add pcie_wait_for_link_delay()")
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      f044baaf
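      Illustratively, the change amounts to folding the caller's delay into the no-reporting branch instead of adding a hard-coded 100 ms; a simplified stand-in (not the real pcie_wait_for_link_delay()) could compute the sleep like this:

        #include <stdbool.h>

        /*
         * Sketch only: how long to sleep when the Downstream Port gives us no
         * DLL Link Active reporting.  "timeout_ms" is the worst-case Link
         * training time (1000 ms); "delay_ms" is whatever the caller asked to
         * wait after the Link becomes active.
         */
        static unsigned int link_wait_ms(bool link_active_reporting,
                                         unsigned int timeout_ms,
                                         unsigned int delay_ms)
        {
                if (!link_active_reporting)
                        return timeout_ms + delay_ms;   /* was timeout_ms + 100 */

                /* With reporting, poll DLL Link Active, then sleep only delay_ms. */
                return delay_ms;
        }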
  2. 25 Apr 2020, 1 commit
  3. 29 Mar 2020, 1 commit
  4. 11 Mar 2020, 2 commits
    • PCI: Add PCIE_LNKCAP2_SLS2SPEED() macro · 757bfaa2
      Authored by Yicong Yang
      Add PCIE_LNKCAP2_SLS2SPEED macro for transforming raw Link Capabilities 2
      values to the pci_bus_speed. This is next to PCIE_SPEED2MBS_ENC() to make
      it easier to update both places when adding support for new speeds.
      
      Link: https://lore.kernel.org/r/1581937984-40353-10-git-send-email-yangyicong@hisilicon.com
      Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      757bfaa2
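      A self-contained approximation of such a macro (the kernel's version uses the PCI_EXP_LNKCAP2_SLS_* masks from include/uapi/linux/pci_regs.h and enum pci_bus_speed; the names below are local simplifications): it picks the highest bit set in the Supported Link Speeds Vector of Link Capabilities 2.

        /* Speed values standing in for the kernel's enum pci_bus_speed. */
        enum pcie_speed {
                PCIE_SPEED_UNKNOWN,
                PCIE_SPEED_2_5GT,
                PCIE_SPEED_5_0GT,
                PCIE_SPEED_8_0GT,
                PCIE_SPEED_16_0GT,
                PCIE_SPEED_32_0GT,
        };

        /* Supported Link Speeds Vector bits in Link Capabilities 2 (bits 5:1). */
        #define LNKCAP2_SLS_2_5GB       0x02
        #define LNKCAP2_SLS_5_0GB       0x04
        #define LNKCAP2_SLS_8_0GB       0x08
        #define LNKCAP2_SLS_16_0GB      0x10
        #define LNKCAP2_SLS_32_0GB      0x20

        /* Pick the highest supported speed out of a raw Link Capabilities 2 value. */
        #define LNKCAP2_SLS2SPEED(lnkcap2)                              \
                ((lnkcap2) & LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT :   \
                 (lnkcap2) & LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT :   \
                 (lnkcap2) & LNKCAP2_SLS_8_0GB  ? PCIE_SPEED_8_0GT  :   \
                 (lnkcap2) & LNKCAP2_SLS_5_0GB  ? PCIE_SPEED_5_0GT  :   \
                 (lnkcap2) & LNKCAP2_SLS_2_5GB  ? PCIE_SPEED_2_5GT  :   \
                 PCIE_SPEED_UNKNOWN)

      Such a macro would be applied to the value read from a device's Link Capabilities 2 register, e.g. LNKCAP2_SLS2SPEED(lnkcap2) after a config read.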
    • PCI: Use pci_speed_string() for all PCI/PCI-X/PCIe strings · 6348a34d
      Authored by Bjorn Helgaas
      Previously some PCI speed strings came from pci_speed_string(), some came
      from the PCIe-specific PCIE_SPEED2STR(), and some came from a PCIe-specific
      switch statement.  These methods were inconsistent:
      
        pci_speed_string()     PCIE_SPEED2STR()     switch
        ------------------     ----------------     ------
        33 MHz PCI
        ...
        2.5 GT/s PCIe          2.5 GT/s             2.5 GT/s
        5.0 GT/s PCIe          5 GT/s               5 GT/s
        8.0 GT/s PCIe          8 GT/s               8 GT/s
        16.0 GT/s PCIe         16 GT/s              16 GT/s
        32.0 GT/s PCIe         32 GT/s              32 GT/s
      
      Standardize on pci_speed_string() as the single source of these strings.
      
      Note that this adds ".0" and "PCIe" to some messages, including sysfs
      "max_link_speed" files, a brcmstb "link up" message, and the link status
      dmesg logging, e.g.,
      
        nvme 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
      
      I think it's better to standardize on a single version of the speed text.
      Previously we had strings like this:
      
        /sys/bus/pci/slots/0/cur_bus_speed: 8.0 GT/s PCIe
        /sys/bus/pci/slots/0/max_bus_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8 GT/s
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8 GT/s
      
      This changes the latter two to match the slots files:
      
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8.0 GT/s PCIe
      
      Based-on-patch-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      6348a34d
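      A minimal sketch of the single-lookup approach this commit standardizes on (the real pci_speed_string() lives in the PCI core and covers the full PCI, PCI-X, and PCIe speed sets; the enum and table below are a trimmed, illustrative subset):

        /* Trimmed stand-in for the kernel's enum pci_bus_speed. */
        enum bus_speed {
                SPEED_33MHz_PCI,
                SPEED_PCIE_2_5GT,
                SPEED_PCIE_5_0GT,
                SPEED_PCIE_8_0GT,
                SPEED_PCIE_16_0GT,
                SPEED_PCIE_32_0GT,
                SPEED_MAX
        };

        /* Single table so sysfs, dmesg, and hotplug all print the same string. */
        static const char *const speed_strings[SPEED_MAX] = {
                [SPEED_33MHz_PCI]       = "33 MHz PCI",
                [SPEED_PCIE_2_5GT]      = "2.5 GT/s PCIe",
                [SPEED_PCIE_5_0GT]      = "5.0 GT/s PCIe",
                [SPEED_PCIE_8_0GT]      = "8.0 GT/s PCIe",
                [SPEED_PCIE_16_0GT]     = "16.0 GT/s PCIe",
                [SPEED_PCIE_32_0GT]     = "32.0 GT/s PCIe",
        };

        static const char *speed_string(enum bus_speed speed)
        {
                if (speed < SPEED_MAX && speed_strings[speed])
                        return speed_strings[speed];
                return "Unknown";
        }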
  5. 06 Mar 2020, 1 commit
  6. 05 Mar 2020, 1 commit
  7. 25 Jan 2020, 1 commit
  8. 14 Jan 2020, 1 commit
  9. 06 Jan 2020, 1 commit
  10. 23 Dec 2019, 1 commit
  11. 19 Dec 2019, 2 commits
  12. 21 Nov 2019, 14 commits
  13. 23 Oct 2019, 1 commit
  14. 21 Oct 2019, 1 commit
  15. 19 Oct 2019, 1 commit
  16. 16 Oct 2019, 1 commit
  17. 14 Oct 2019, 1 commit
  18. 07 Sep 2019, 1 commit
  19. 06 Sep 2019, 4 commits
  20. 09 Aug 2019, 2 commits