1. 11 December 2020 (2 commits)
  2. 31 October 2020 (1 commit)
  3. 01 October 2020 (3 commits)
  4. 30 September 2020 (1 commit)
  5. 18 September 2020 (2 commits)
    • PCI: Simplify pci_dev_reset_slot_function() · 10791141
      Lukas Wunner committed
      pci_dev_reset_slot_function() refuses to reset a hotplug slot if it is
      shared by multiple pci_devs.  That's the case if and only if the slot is
      occupied by a multifunction device.
      
      Simplify the function to check the device's multifunction flag instead
      of iterating over the devices on the bus.  (Iterating over the devices
      requires holding pci_bus_sem, which the function erroneously does not
      acquire.)
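
      For illustration, a minimal sketch of what the simplified check could look
      like (not necessarily the exact code of this commit; pci_reset_hotplug_slot()
      is assumed to be the existing slot-reset helper):

        static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
        {
                /* Refuse unless the device owns its hotplug slot exclusively. */
                if (dev->multifunction || dev->subordinate || !dev->slot)
                        return -ENOTTY;

                return pci_reset_hotplug_slot(dev->slot->hotplug, probe);
        }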
      
      Link: https://lore.kernel.org/r/c6aab5af096f7b1b3db57f6335cebba8f0fcca89.1595330431.git.lukas@wunner.de
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      10791141
    • PCI: pciehp: Reduce noisiness on hot removal · 8a614499
      Lukas Wunner committed
      When a PCIe card is hot-removed, the Presence Detect State and Data Link
      Layer Link Active bits often do not clear simultaneously.  I've seen delays
      of up to 244 msec between the two events with Thunderbolt.
      
      After pciehp has brought down the slot in response to the first event, the
      other bit may still be set.  It's not discernible whether it's set because
      a new card is already in the slot or if it will soon clear.  So pciehp
      tries to bring up the slot and in the latter case fails with a bunch of
      messages, some of them at KERN_ERR severity.  If the slot is no longer
      occupied, the messages are false positives and annoy users.
      
      Stuart Hayes reports the following splat on hot removal:
      
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Link Up
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Timeout waiting for Presence Detect
        KERN_ERR  pcieport 0000:3c:06.0: pciehp: link training error: status 0x0001
        KERN_ERR  pcieport 0000:3c:06.0: pciehp: Failed to check link status
      
      Dongdong Liu complains about a similar splat:
      
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Link Down
        KERN_INFO iommu: Removing device 0000:87:00.0 from group 12
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Card present
        KERN_INFO pcieport 0000:80:10.0: Data Link Layer Link Active not set in 1000 msec
        KERN_ERR  pciehp 0000:80:10.0:pcie004: Failed to check link status
      
      Users are particularly irritated to see a bringup attempt even though the
      slot was explicitly brought down via sysfs.  In a perfect world, we could
      avoid this by setting Link Disable on slot bringdown and re-enabling it
      upon a Presence Detect State change.  In reality however, there are broken
      hotplug ports which hardwire Presence Detect to zero, see 80696f99
      ("PCI: pciehp: Tolerate Presence Detect hardwired to zero").  Conversely,
      PCIe r1.0 hotplug ports hardwire Link Active to zero because Link Active
      Reporting wasn't specified before PCIe r1.1.  On unplug, some ports first
      clear Presence then Link (see Stuart Hayes' splat) whereas others use the
      inverse order (see Dongdong Liu's splat).  To top it off, there are hotplug
      ports which flap the Presence and Link bits on slot bringup, see
      6c35a1ac ("PCI: pciehp: Tolerate initially unstable link").
      
      pciehp is designed to work with all of these variants.  Surplus attempts at
      slot bringup are a lesser evil than not being able to bring up slots at
      all.  Although we could try to perfect the behavior for specific hotplug
      controllers, we'd risk breaking others or increasing code complexity.
      
      But we can certainly minimize annoyance by emitting only a single message
      with KERN_INFO severity if bringup is unsuccessful:
      
      * Drop the "Timeout waiting for Presence Detect" message in
        pcie_wait_for_presence().  The sole caller of that function,
        pciehp_check_link_status(), ignores the timeout and carries on.  It emits
        error messages of its own and I don't think this particular message adds
        much value.
      
      * There's a single error condition in pciehp_check_link_status() which
        does not emit a message.  Adding one allows dropping the "Failed to check
        link status" message emitted by board_added() if
        pciehp_check_link_status() returns a non-zero integer.
      
      * Tone down all messages in pciehp_check_link_status() to KERN_INFO
        severity and rephrase them to look as innocuous as possible.  To this
        end, move the message emitted by pcie_wait_for_link_delay() to its
        callers.
      
      As a result, Stuart Hayes' splat becomes:
      
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Link Up
        KERN_INFO pcieport 0000:3c:06.0: pciehp: Slot(180): Cannot train link: status 0x0001
      
      Dongdong Liu's splat becomes:
      
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): Card present
        KERN_INFO pciehp 0000:80:10.0:pcie004: Slot(36): No link
      
      The messages now merely serve as information that presence or link bits
      were set a little longer than expected.  Bringup failures which are not
      false positives are still reported, albeit no longer at KERN_ERR severity.
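
      For example (a sketch only, using pciehp's existing ctrl_err()/ctrl_info()
      helpers; the exact message text may differ), the tone-down amounts to
      replacing something like

        ctrl_err(ctrl, "link training error: status %#06x\n", lnk_status);

      with

        ctrl_info(ctrl, "Slot(%s): Cannot train link: status %#06x\n",
                  slot_name(ctrl), lnk_status);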
      
      Link: https://lore.kernel.org/linux-pci/20200310182100.102987-1-stuart.w.hayes@gmail.com/
      Link: https://lore.kernel.org/linux-pci/1547649064-19019-1-git-send-email-liudongdong3@huawei.com/
      Link: https://lore.kernel.org/r/b45e46fd8a6aa6930aaac9d7718c2e4b787a4e5e.1595935071.git.lukas@wunner.de
      Reported-by: Stuart Hayes <stuart.w.hayes@gmail.com>
      Reported-by: Dongdong Liu <liudongdong3@huawei.com>
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      8a614499
  6. 17 September 2020 (1 commit)
    • PCI/ACS: Enable Translation Blocking for external devices · 76fc8e85
      Rajat Jain committed
      Translation Blocking is a required feature for Downstream Ports (Root
      Ports or Switch Downstream Ports) that implement ACS.  When enabled, the
      Port checks the Address Type (AT) of each upstream Memory Request it
      receives.
      
      The default AT (00b) means "untranslated" and the IOMMU can decide whether
      to treat the address as I/O virtual or physical.
      
      If AT is not the default, i.e., if the Memory Request contains an
      already-translated (physical) address, the Port blocks the request and
      reports an ACS error.
      
      When enabling ACS, enable Translation Blocking for external-facing ports
      and untrusted (external) devices.  This is to help prevent attacks from
      external devices that initiate DMA with physical addresses that bypass the
      IOMMU.
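
      In code, this boils down to one extra bit when programming the ACS Control
      register (a sketch assuming the usual "cap"/"ctrl" locals of the ACS setup
      path; PCI_ACS_TB comes from include/uapi/linux/pci_regs.h):

        /* Block already-translated (AT != 00b) requests from untrusted ports */
        if (dev->external_facing || dev->untrusted)
                ctrl |= (cap & PCI_ACS_TB);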
      
      [bhelgaas: commit log, simplify setting bit and drop warning; TB is
      required for Downstream Ports with ACS, so we should never see the warning]
      Link: https://lore.kernel.org/r/20200707224604.3737893-4-rajatja@google.com
      Signed-off-by: Rajat Jain <rajatja@google.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      76fc8e85
  7. 02 September 2020 (1 commit)
  8. 01 September 2020 (1 commit)
  9. 24 August 2020 (1 commit)
  10. 23 July 2020 (1 commit)
  11. 22 July 2020 (1 commit)
  12. 11 July 2020 (2 commits)
  13. 27 June 2020 (1 commit)
    • PCI: Convert PCIe capability PCIBIOS errors to errno · d20df83b
      Bolarinwa Olayemi Saheed committed
      The PCI config accessors (pci_read_config_word(), et al) return
      PCIBIOS_SUCCESSFUL (zero) or positive error values like
      PCIBIOS_FUNC_NOT_SUPPORTED.
      
      The PCIe capability accessors (pcie_capability_read_word(), et al)
      similarly return PCIBIOS errors, but some callers assume they return
      generic errno values like -EINVAL.
      
      For example, the Myri-10G probe function returns a positive PCIBIOS error
      if the pcie_capability_clear_and_set_word() in pcie_set_readrq() fails:
      
        myri10ge_probe
          status = pcie_set_readrq
            return pcie_capability_clear_and_set_word
          if (status)
            return status
      
      A positive return from a PCI driver probe function would cause a "Driver
      probe function unexpectedly returned" warning from local_pci_probe()
      instead of the desired probe failure.
      
      Convert PCIBIOS errors to generic errno for all callers of:
      
        pcie_capability_read_word
        pcie_capability_read_dword
        pcie_capability_write_word
        pcie_capability_write_dword
        pcie_capability_set_word
        pcie_capability_set_dword
        pcie_capability_clear_word
        pcie_capability_clear_dword
        pcie_capability_clear_and_set_word
        pcie_capability_clear_and_set_dword
      
      that check the return code for anything other than zero.
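
      The conversion in such callers typically follows this pattern (a sketch
      using pcibios_err_to_errno() from <linux/pci.h>):

        u16 lnksta;
        int ret;

        ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
        if (ret)
                return pcibios_err_to_errno(ret);  /* PCIBIOS_* -> -E* */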
      
      [bhelgaas: commit log, squash together]
      Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
      Link: https://lore.kernel.org/r/20200615073225.24061-1-refactormyself@gmail.com
      Signed-off-by: Bolarinwa Olayemi Saheed <refactormyself@gmail.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      d20df83b
  14. 16 May 2020 (2 commits)
    • PCI/PM: Assume ports without DLL Link Active train links in 100 ms · ec411e02
      Mika Westerberg committed
      Kai-Heng Feng reported that it takes a long time (> 1 s) to resume
      Thunderbolt-connected devices from both runtime suspend and system sleep
      (s2idle).
      
      This was because some Downstream Ports that support > 5 GT/s do not also
      support Data Link Layer Link Active reporting.  Per PCIe r5.0 sec 6.6.1:
      
        With a Downstream Port that supports Link speeds greater than 5.0 GT/s,
        software must wait a minimum of 100 ms after Link training completes
        before sending a Configuration Request to the device immediately below
        that Port. Software can determine when Link training completes by polling
        the Data Link Layer Link Active bit or by setting up an associated
        interrupt (see Section 6.7.3.3).
      
      Sec 7.5.3.6 requires such Ports to support DLL Link Active reporting, but
      at least the Intel JHL6240 Thunderbolt 3 Bridge [8086:15c0] and the Intel
      JHL7540 Thunderbolt 3 Bridge [8086:15ea] do not.
      
      Previously we tried to wait for Link training to complete, but since there
      was no DLL Link Active reporting, all we could do was wait the worst-case
      1000 ms, then another 100 ms.
      
      Instead of using the supported speeds to determine whether to wait for Link
      training, check whether the port supports DLL Link Active reporting.  The
      Ports in question do not, so we'll wait only the 100 ms required for Ports
      that support Link speeds <= 5 GT/s.
      
      This of course assumes these Ports always train the Link within 100 ms even
      if they are operating at > 5 GT/s, which is not required by the spec.
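
      A sketch of the resulting logic (simplified; "delay" is the 100 ms required
      by sec 6.6.1, and the helper names follow the functions mentioned below):

        if (!dev->link_active_reporting) {
                /* Can't observe training; assume 100 ms is enough. */
                msleep(delay);
                return;
        }

        /* DLL Link Active is reported: wait for training, then the delay. */
        pcie_wait_for_link_delay(dev, true, delay);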
      
      [bhelgaas: commit log, comment]
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
      Link: https://lore.kernel.org/r/20200514133043.27429-1-mika.westerberg@linux.intel.com
      Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      ec411e02
    • PCI/PM: Adjust pcie_wait_for_link_delay() for caller delay · f044baaf
      Bjorn Helgaas committed
      The caller of pcie_wait_for_link_delay() specifies the time to wait after
      the link becomes active.  When the downstream port doesn't support link
      active reporting, obviously we can't tell when the link becomes active, so
      we waited the worst-case time (1000 ms) plus 100 ms, ignoring the delay
      from the caller.
      
      Instead, wait for 1000 ms + the delay from the caller.
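
      In code the fix is essentially (a sketch; "timeout" is the 1000 ms worst
      case and "delay" comes from the caller):

        msleep(timeout + delay);   /* previously a fixed 1000 ms + 100 ms */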
      
      Fixes: 4827d638 ("PCI/PM: Add pcie_wait_for_link_delay()")
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      f044baaf
  15. 15 May 2020 (1 commit)
  16. 12 May 2020 (1 commit)
    • PCI: Replace zero-length array with flexible-array · 914a1951
      Gustavo A. R. Silva committed
      The current codebase makes use of the zero-length array language extension
      to the C90 standard, but the preferred mechanism to declare variable-length
      types such as these is a flexible array member [1][2], introduced in C99:
      
        struct foo {
          int stuff;
          struct boo array[];
        };
      
      By making use of the mechanism above, we will get a compiler warning in
      case the flexible array does not occur last in the structure, which will
      help us prevent some kind of undefined behavior bugs from being
      inadvertently introduced[3] to the codebase from now on.
      
      Also, notice that dynamic memory allocations won't be affected by this
      change:
      
        Flexible array members have incomplete type, and so the sizeof operator
        may not be applied. As a quirk of the original implementation of
        zero-length arrays, sizeof evaluates to zero. [1]
      
      sizeof(flexible-array-member) triggers a warning because flexible array
      members have incomplete type [1]. There are some instances of code in which
      the sizeof() operator is being incorrectly/erroneously applied to
      zero-length arrays, and the result is zero. Such instances may be hiding
      some bugs. So, this work (flexible-array member conversions) will also help
      to get completely rid of those sorts of issues.
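
      A typical allocation of such a structure then looks like this (a sketch
      reusing the struct foo above; struct_size() is from <linux/overflow.h>):

        struct foo *p;

        /* sizeof(struct foo) + n * sizeof(struct boo), with overflow checking */
        p = kzalloc(struct_size(p, array, n), GFP_KERNEL);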
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      
      Link: https://lore.kernel.org/r/20200507190544.GA15633@embeddedor
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      914a1951
  17. 25 April 2020 (1 commit)
  18. 29 March 2020 (1 commit)
  19. 11 March 2020 (2 commits)
    • PCI: Add PCIE_LNKCAP2_SLS2SPEED() macro · 757bfaa2
      Yicong Yang committed
      Add PCIE_LNKCAP2_SLS2SPEED macro for transforming raw Link Capabilities 2
      values to the pci_bus_speed. This is next to PCIE_SPEED2MBS_ENC() to make
      it easier to update both places when adding support for new speeds.
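
      A sketch of such a mapping (register fields from include/uapi/linux/pci_regs.h,
      speed values from the pci_bus_speed enum; the real macro may differ in detail):

        #define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
                ((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB  ? PCIE_SPEED_8_0GT  : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB  ? PCIE_SPEED_5_0GT  : \
                 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB  ? PCIE_SPEED_2_5GT  : \
                 PCI_SPEED_UNKNOWN)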
      
      Link: https://lore.kernel.org/r/1581937984-40353-10-git-send-email-yangyicong@hisilicon.com
      Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      757bfaa2
    • PCI: Use pci_speed_string() for all PCI/PCI-X/PCIe strings · 6348a34d
      Bjorn Helgaas committed
      Previously some PCI speed strings came from pci_speed_string(), some came
      from the PCIe-specific PCIE_SPEED2STR(), and some came from a PCIe-specific
      switch statement.  These methods were inconsistent:
      
        pci_speed_string()     PCIE_SPEED2STR()     switch
        ------------------     ----------------     ------
        33 MHz PCI
        ...
        2.5 GT/s PCIe          2.5 GT/s             2.5 GT/s
        5.0 GT/s PCIe          5 GT/s               5 GT/s
        8.0 GT/s PCIe          8 GT/s               8 GT/s
        16.0 GT/s PCIe         16 GT/s              16 GT/s
        32.0 GT/s PCIe         32 GT/s              32 GT/s
      
      Standardize on pci_speed_string() as the single source of these strings.
      
      Note that this adds ".0" and "PCIe" to some messages, including sysfs
      "max_link_speed" files, a brcmstb "link up" message, and the link status
      dmesg logging, e.g.,
      
        nvme 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
      
      I think it's better to standardize on a single version of the speed text.
      Previously we had strings like this:
      
        /sys/bus/pci/slots/0/cur_bus_speed: 8.0 GT/s PCIe
        /sys/bus/pci/slots/0/max_bus_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8 GT/s
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8 GT/s
      
      This changes the latter two to match the slots files:
      
        /sys/devices/pci0000:00/0000:00:1c.0/current_link_speed: 8.0 GT/s PCIe
        /sys/devices/pci0000:00/0000:00:1c.0/max_link_speed: 8.0 GT/s PCIe
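
      With a single source of strings, a sysfs attribute can be reduced to a
      plain lookup, roughly like this (a sketch, not the exact sysfs code):

        static ssize_t max_link_speed_show(struct device *dev,
                                           struct device_attribute *attr,
                                           char *buf)
        {
                struct pci_dev *pdev = to_pci_dev(dev);

                return sprintf(buf, "%s\n",
                               pci_speed_string(pcie_get_speed_cap(pdev)));
        }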
      
      Based-on-patch by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      6348a34d
  20. 06 March 2020 (1 commit)
  21. 05 March 2020 (1 commit)
  22. 25 January 2020 (1 commit)
  23. 14 January 2020 (1 commit)
  24. 06 January 2020 (1 commit)
  25. 23 December 2019 (1 commit)
  26. 19 December 2019 (2 commits)
  27. 21 November 2019 (6 commits)
    • PCI: Remove unused includes and superfluous struct declaration · bbd8810d
      Krzysztof Wilczynski committed
      Remove <linux/pci.h> and <linux/msi.h> from being included directly as
      part of include/linux/of_pci.h, and remove the superfluous declaration of
      struct of_phandle_args.
      
      Move users of include <linux/of_pci.h> to include <linux/pci.h> and
      <linux/msi.h> directly rather than rely on both being included transitively
      through <linux/of_pci.h>.
      
      Link: https://lore.kernel.org/r/20190903113059.2901-1-kw@linux.com
      Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rob Herring <robh@kernel.org>
      bbd8810d
    • PCI/PM: Move pci_dev_wait() definition earlier · bae26849
      Vidya Sagar committed
      Move the definition of pci_dev_wait() above pci_power_up() so that it can
      be called from the latter. This is a pure code move with no functional
      change.
      
      Link: https://lore.kernel.org/r/20191120051743.23124-1-vidyas@nvidia.com
      Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      bae26849
    • PCI/PM: Add missing link delays required by the PCIe spec · ad9001f2
      Mika Westerberg committed
      Currently Linux does not follow the PCIe spec regarding the required
      delays after reset. A concrete example is a Thunderbolt add-in card that
      consists of a PCIe switch and two PCIe endpoints:
      
        +-1b.0-[01-6b]----00.0-[02-6b]--+-00.0-[03]----00.0 TBT controller
                                        +-01.0-[04-36]-- DS hotplug port
                                        +-02.0-[37]----00.0 xHCI controller
                                        \-04.0-[38-6b]-- DS hotplug port
      
      The root port (1b.0) and the PCIe switch downstream ports are all PCIe
      Gen3, so they support 8 GT/s link speeds.
      
      We wait for the PCIe hierarchy to enter D3cold (runtime):
      
        pcieport 0000:00:1b.0: power state changed by ACPI to D3cold
      
      When it wakes up from D3cold, according to PCIe r5.0 sec 5.8 the PCIe
      switch is put into reset and its power is re-applied. This means that we
      must follow the rules in PCIe r5.0 sec 6.6.1.
      
      For the PCIe Gen3 ports we are dealing with here, the following applies:
      
        With a Downstream Port that supports Link speeds greater than 5.0 GT/s,
        software must wait a minimum of 100 ms after Link training completes
        before sending a Configuration Request to the device immediately below
        that Port. Software can determine when Link training completes by polling
        the Data Link Layer Link Active bit or by setting up an associated
        interrupt (see Section 6.7.3.3).
      
      Translating this into the above topology we would need to do this (DLLLA
      stands for Data Link Layer Link Active):
      
        0000:00:1b.0: wait for 100 ms after DLLLA is set before access to 0000:01:00.0
        0000:02:00.0: wait for 100 ms after DLLLA is set before access to 0000:03:00.0
        0000:02:02.0: wait for 100 ms after DLLLA is set before access to 0000:37:00.0
      
      I've instrumented the kernel with some additional logging so we can see the
      actual delays performed:
      
        pcieport 0000:00:1b.0: power state changed by ACPI to D0
        pcieport 0000:00:1b.0: waiting for D3cold delay of 100 ms
        pcieport 0000:00:1b.0: waiting for D3hot delay of 10 ms
        pcieport 0000:02:01.0: waiting for D3hot delay of 10 ms
        pcieport 0000:02:04.0: waiting for D3hot delay of 10 ms
      
      For the switch upstream port (01:00.0, reachable through the 00:1b.0 root
      port) we wait for 100 ms but do not take the DLLLA requirement into
      account. We then wait 10 ms for the D3hot -> D0 transition of the root
      port and the two downstream hotplug ports. This means that we deviate
      from what the spec requires.
      
      Performing the same check for system sleep (s2idle) transitions, things
      turn out to be even worse: none of the mandatory delays are performed. If
      this were S3 instead of s2idle, then according to PCI FW spec 3.2 section
      4.6.8 there is a specific _DSM that allows the OS to skip the delays, but
      this platform does not provide the _DSM and does not go to S3 anyway, so
      no firmware is involved that could already handle these delays.
      
      On this particular platform these delays are not actually needed, because
      there is an additional delay as part of the ACPI power resource that is
      used to turn on power to the hierarchy. But since that additional delay
      is not required by any standard (PCIe, ACPI), it is not present on Intel
      Ice Lake, for example, where missing the mandatory delays causes pciehp
      to start tearing down the stack too early (links are not yet trained).
      Below is an example of what it looks like when this happens:
      
        pcieport 0000:83:04.0: pciehp: Slot(4): Card not present
        pcieport 0000:87:04.0: PME# disabled
        pcieport 0000:83:04.0: pciehp: pciehp_unconfigure_device: domain:bus:dev = 0000:86:00
        pcieport 0000:86:00.0: Refused to change power state, currently in D3
        pcieport 0000:86:00.0: restoring config space at offset 0x3c (was 0xffffffff, writing 0x201ff)
        pcieport 0000:86:00.0: restoring config space at offset 0x38 (was 0xffffffff, writing 0x0)
        ...
      
      There is also one reported case (see the bugzilla link below) where the
      missing delay causes the xHCI on a Titan Ridge controller to fail to
      runtime resume when a USB-C dock is plugged in. This does not involve
      pciehp; instead the PCI core fails to runtime resume the xHCI device:
      
        pcieport 0000:04:02.0: restoring config space at offset 0xc (was 0x10000, writing 0x10020)
        pcieport 0000:04:02.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100406)
        xhci_hcd 0000:39:00.0: Refused to change power state, currently in D3
        xhci_hcd 0000:39:00.0: restoring config space at offset 0x3c (was 0xffffffff, writing 0x1ff)
        xhci_hcd 0000:39:00.0: restoring config space at offset 0x38 (was 0xffffffff, writing 0x0)
        ...
      
      Add a new function pci_bridge_wait_for_secondary_bus() that is called on
      PCI core resume and runtime resume paths accordingly if the bridge entered
      D3cold (and thus went through reset).
      
      This is the second attempt to add the missing delays. The previous
      solution in c2bf1fc2 ("PCI: Add missing link delays required by the PCIe
      spec") was reverted because of two issues it caused:
      
        1. One system became unresponsive after S3 resume due to the PME service
           spinning in pcie_pme_work_fn(). The root port in question reports that
           the xHCI sent PME, but the xHCI device itself does not have PME status
           set. The PME status bit is never cleared in the root port, resulting
           in an indefinite loop in pcie_pme_work_fn().

        2. Resume slows down if the root/downstream port does not support Data
           Link Layer Active Reporting, because pcie_wait_for_link_delay() waits
           1100 ms in that case.
      
      This version should avoid the above issues because we restrict the delay to
      happen only if the port went into D3cold.
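
      A sketch of that gating on the resume path (simplified; the real hook is
      the new pci_bridge_wait_for_secondary_bus(), invoked from the PCI core's
      resume paths):

        pci_power_t prev_state = pci_dev->current_state;

        /* ...power up the bridge and restore its config space... */

        /* Only bridges coming out of D3cold went through reset and need the
         * PCIe r5.0 sec 6.6.1 delays before their secondary bus is touched.
         */
        if (prev_state == PCI_D3cold)
                pci_bridge_wait_for_secondary_bus(pci_dev);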
      
      Link: https://lore.kernel.org/linux-pci/SL2P216MB01878BBCD75F21D882AEEA2880C60@SL2P216MB0187.KORP216.PROD.OUTLOOK.COM/
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=203885
      Link: https://lore.kernel.org/r/20191112091617.70282-3-mika.westerberg@linux.intel.com
      Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      ad9001f2
    • PCI/PM: Add pcie_wait_for_link_delay() · 4827d638
      Mika Westerberg committed
      Add pcie_wait_for_link_delay().  Similar to pcie_wait_for_link() but allows
      passing custom activation delay in milliseconds.
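
      For reference, a sketch of what such a helper looks like (close to
      pcie_wait_for_link(), with the post-training delay as a parameter; details
      simplified):

        static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
                                             int delay)
        {
                int timeout = 1000;
                u16 lnk_status;
                bool ret;

                for (;;) {
                        pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
                        ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
                        if (ret == active || timeout <= 0)
                                break;
                        msleep(10);
                        timeout -= 10;
                }

                /* Link reached the requested state: honour the caller's delay. */
                if (active && ret)
                        msleep(delay);

                return ret == active;
        }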
      
      Link: https://lore.kernel.org/r/20191112091617.70282-2-mika.westerberg@linux.intel.com
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      4827d638
    • PCI/PM: Return error when changing power state from D3cold · 327ccbbc
      Bjorn Helgaas committed
      pci_raw_set_power_state() uses the Power Management capability to change a
      device's power state.  The capability is in config space, which is
      accessible in D0, D1, D2, and D3hot, but not in D3cold.
      
      If we call pci_raw_set_power_state() on a device that's in D3cold, config
      reads fail and return ~0 data, which we erroneously interpreted as "the
      device is in D3hot", leading to messages like this:
      
        pcieport 0000:03:00.0: Refused to change power state, currently in D3
      
      The PCI_PM_CTRL register has several RsvdP fields, so ~0 is never a valid
      register value. If we get that value, print a more informative message
      and return an error.
      
      Changing the power state of a device from D3cold must be done by a platform
      power management method or some other non-config space mechanism.
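
      A sketch of the added check (pmcsr is the value read from PCI_PM_CTRL; the
      exact message wording may differ):

        if (pmcsr == (u16)~0) {
                pci_err(dev, "can't change power state from %s to %s (config space inaccessible)\n",
                        pci_power_name(dev->current_state),
                        pci_power_name(state));
                return -EIO;
        }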
      
      Link: https://lore.kernel.org/r/20190822200551.129039-4-helgaas@kernel.org
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      327ccbbc
    • PCI/PM: Decode D3cold power state correctly · e43f15ea
      Bjorn Helgaas committed
      Use pci_power_name() to print pci_power_t correctly.  This changes:
      
        "state 0" or "D0"   to   "D0"
        "state 1" or "D1"   to   "D1"
        "state 2" or "D2"   to   "D2"
        "state 3" or "D3"   to   "D3hot"
        "state 4" or "D4"   to   "D3cold"
      
      Changes dmesg logging only, no other functional change intended.
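
      For example, a message that used to print the raw state number can use
      pci_power_name() instead (a sketch; the exact call sites vary):

        pci_info(dev, "Refused to change power state, currently in %s\n",
                 pci_power_name(dev->current_state));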
      
      Link: https://lore.kernel.org/r/20190822200551.129039-3-helgaas@kernel.org
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      e43f15ea