1. 06 September 2019, 2 commits
  2. 09 July 2019, 1 commit
  3. 27 June 2019, 1 commit
    • PCI: PM/ACPI: Refresh all stale power state data in pci_pm_complete() · b51033e0
      Committed by Rafael J. Wysocki
      In pci_pm_complete() there are checks to decide whether or not to
      resume devices that were left in runtime suspend during the preceding
      system-wide transition into a sleep state.  They involve checking the
      current power state of the device and comparing it with the power
      state it was put into before the preceding system-wide transition,
      but the platform component of the device's power state is not handled
      correctly there.
      
      Namely, on platforms with ACPI, the device power state information
      needs to be updated with care, so that the reference counters of the
      power resources used by the device (if any) are set to ensure that
      its refreshed power state will be maintained going forward.
      
      To that end, introduce a new ->refresh_state() platform PM callback
      for PCI devices that asks the platform to refresh the device power
      state data and to ensure that the corresponding power state will be
      maintained going forward.  Make it invoke acpi_device_update_power()
      (for devices with ACPI PM) on platforms with ACPI, and make
      pci_pm_complete() use it through a new pci_refresh_power_state()
      wrapper function.
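      
      A minimal sketch of what that wiring could look like (the hook table and
      pci_update_current_state() are internal PCI-core details; names other
      than acpi_device_update_power() are illustrative, not the exact kernel
      code):
      
        /* Sketch only: the new platform hook and the wrapper built on it. */
        #include <linux/pci.h>
        #include <linux/acpi.h>
        
        /* ACPI side: let ACPI re-read the device power state and fix up the
         * reference counts of any power resources the device uses. */
        static void acpi_pci_refresh_power_state(struct pci_dev *dev)
        {
                struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
        
                if (adev && acpi_device_power_manageable(adev))
                        acpi_device_update_power(adev, NULL);
        }
        
        /* Wrapper used by pci_pm_complete(): refresh the platform's view
         * first, then re-read PMCSR into dev->current_state. */
        static void pci_refresh_power_state_sketch(struct pci_dev *dev)
        {
                acpi_pci_refresh_power_state(dev); /* via ->refresh_state() */
                pci_update_current_state(dev, dev->current_state);
        }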
      
      Fixes: a0d2a959 (PCI: Avoid unnecessary resume after direct-complete)
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  4. 18 June 2019, 2 commits
    • PCI: Do not poll for PME if the device is in D3cold · 000dd531
      Committed by Mika Westerberg
      PME polling does not take into account that a device that is directly
      connected to the host bridge may go into D3cold as well. This leads to a
      situation where the PME poll thread reads from the config space of a
      device that is in D3cold and gets incorrect information because the
      config space is not accessible.
      
      Here is an example from Intel Ice Lake system where two PCIe root ports
      are in D3cold (I've instrumented the kernel to log the PMCSR register
      contents):
      
        [   62.971442] pcieport 0000:00:07.1: Check PME status, PMCSR=0xffff
        [   62.971504] pcieport 0000:00:07.0: Check PME status, PMCSR=0xffff
      
      Since 0xffff is interpreted as meaning that PME is pending, the root ports will
      be runtime resumed. This repeats over and over again essentially
      blocking all runtime power management.
      
      Prevent this from happening by checking whether the device is in D3cold
      before its PME status is read.
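      
      The guard itself is small; a sketch of the idea (the real check lives in
      the PME polling worker in drivers/pci/pci.c):
      
        #include <linux/pci.h>
        
        /*
         * Sketch: a device is only safe to poll if neither it nor the bridge
         * above it is in D3cold; otherwise its config space reads back as
         * all ones (PMCSR=0xffff), which looks like "PME pending".
         */
        static bool pme_poll_is_safe(struct pci_dev *pdev)
        {
                struct pci_dev *bridge = pdev->bus->self;
        
                if (pdev->current_state == PCI_D3cold)
                        return false;
                if (bridge && bridge->current_state != PCI_D0)
                        return false;
                return true;
        }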
      
      Fixes: 71a83bd7 ("PCI/PM: add runtime PM support to PCIe port")
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Reviewed-by: Lukas Wunner <lukas@wunner.de>
      Cc: 3.6+ <stable@vger.kernel.org> # v3.6+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • PCI: Add missing link delays required by the PCIe spec · c2bf1fc2
      Committed by Mika Westerberg
      Currently Linux does not follow the PCIe spec regarding the required delays
      after reset. A concrete example is a Thunderbolt add-in-card that
      consists of a PCIe switch and two PCIe endpoints:
      
        +-1b.0-[01-6b]----00.0-[02-6b]--+-00.0-[03]----00.0 TBT controller
                                        +-01.0-[04-36]-- DS hotplug port
                                        +-02.0-[37]----00.0 xHCI controller
                                        \-04.0-[38-6b]-- DS hotplug port
      
      The root port (1b.0) and the PCIe switch downstream ports are all PCIe
      gen3 so they support 8GT/s link speeds.
      
      We wait for the PCIe hierarchy to enter D3cold (runtime):
      
        pcieport 0000:00:1b.0: power state changed by ACPI to D3cold
      
      When it wakes up from D3cold, according to PCIe 4.0 section 5.8, the
      PCIe switch is put into reset and its power is re-applied. This means that
      we must follow the rules in PCIe 4.0 section 6.6.1.
      
      For the PCIe gen3 ports we are dealing with here, the following applies:
      
        With a Downstream Port that supports Link speeds greater than 5.0
        GT/s, software must wait a minimum of 100 ms after Link training
        completes before sending a Configuration Request to the device
        immediately below that Port. Software can determine when Link training
        completes by polling the Data Link Layer Link Active bit or by setting
        up an associated interrupt (see Section 6.7.3.3).
      
      Translating this into the above topology we would need to do this (DLLLA
      stands for Data Link Layer Link Active):
      
        pcieport 0000:00:1b.0: wait for 100ms after DLLLA is set before access to 0000:01:00.0
        pcieport 0000:02:00.0: wait for 100ms after DLLLA is set before access to 0000:03:00.0
        pcieport 0000:02:02.0: wait for 100ms after DLLLA is set before access to 0000:37:00.0
      
      I've instrumented the kernel with additional logging so we can see the
      actual delays the kernel performs:
      
        pcieport 0000:00:1b.0: power state changed by ACPI to D0
        pcieport 0000:00:1b.0: waiting for D3cold delay of 100 ms
        pcieport 0000:00:1b.0: waking up bus
        pcieport 0000:00:1b.0: waiting for D3hot delay of 10 ms
        pcieport 0000:00:1b.0: restoring config space at offset 0x2c (was 0x60, writing 0x60)
        ...
        pcieport 0000:00:1b.0: PME# disabled
        pcieport 0000:01:00.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:01:00.0: PME# disabled
        pcieport 0000:02:00.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:02:00.0: PME# disabled
        pcieport 0000:02:01.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:02:01.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100407)
        pcieport 0000:02:01.0: PME# disabled
        pcieport 0000:02:02.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:02:02.0: PME# disabled
        pcieport 0000:02:04.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:02:04.0: PME# disabled
        pcieport 0000:02:01.0: PME# enabled
        pcieport 0000:02:01.0: waiting for D3hot delay of 10 ms
        pcieport 0000:02:04.0: PME# enabled
        pcieport 0000:02:04.0: waiting for D3hot delay of 10 ms
        thunderbolt 0000:03:00.0: restoring config space at offset 0x14 (was 0x0, writing 0x8a040000)
        ...
        thunderbolt 0000:03:00.0: PME# disabled
        xhci_hcd 0000:37:00.0: restoring config space at offset 0x10 (was 0x0, writing 0x73f00000)
        ...
        xhci_hcd 0000:37:00.0: PME# disabled
      
      For the switch upstream port (01:00.0) we wait for 100 ms, but without
      taking the DLLLA requirement into account. We then wait 10 ms for the
      D3hot -> D0 transition of the root port and the two downstream hotplug
      ports. This means that we deviate from what the spec requires.
      
      Performing the same check for system sleep (s2idle) transitions, we can
      see the following when resuming from s2idle:
      
        pcieport 0000:00:1b.0: power state changed by ACPI to D0
        pcieport 0000:00:1b.0: restoring config space at offset 0x2c (was 0x60, writing 0x60)
        ...
        pcieport 0000:01:00.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        ...
        pcieport 0000:02:02.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        pcieport 0000:02:02.0: restoring config space at offset 0x2c (was 0x0, writing 0x0)
        pcieport 0000:02:01.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        pcieport 0000:02:04.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        pcieport 0000:02:02.0: restoring config space at offset 0x28 (was 0x0, writing 0x0)
        pcieport 0000:02:00.0: restoring config space at offset 0x3c (was 0x1ff, writing 0x201ff)
        pcieport 0000:02:02.0: restoring config space at offset 0x24 (was 0x10001, writing 0x1fff1)
        pcieport 0000:02:01.0: restoring config space at offset 0x2c (was 0x0, writing 0x60)
        pcieport 0000:02:02.0: restoring config space at offset 0x20 (was 0x0, writing 0x73f073f0)
        pcieport 0000:02:04.0: restoring config space at offset 0x2c (was 0x0, writing 0x60)
        pcieport 0000:02:01.0: restoring config space at offset 0x28 (was 0x0, writing 0x60)
        pcieport 0000:02:00.0: restoring config space at offset 0x2c (was 0x0, writing 0x0)
        pcieport 0000:02:02.0: restoring config space at offset 0x1c (was 0x101, writing 0x1f1)
        pcieport 0000:02:04.0: restoring config space at offset 0x28 (was 0x0, writing 0x60)
        pcieport 0000:02:01.0: restoring config space at offset 0x24 (was 0x10001, writing 0x1ff10001)
        pcieport 0000:02:00.0: restoring config space at offset 0x28 (was 0x0, writing 0x0)
        pcieport 0000:02:02.0: restoring config space at offset 0x18 (was 0x0, writing 0x373702)
        pcieport 0000:02:04.0: restoring config space at offset 0x24 (was 0x10001, writing 0x49f12001)
        pcieport 0000:02:01.0: restoring config space at offset 0x20 (was 0x0, writing 0x73e05c00)
        pcieport 0000:02:00.0: restoring config space at offset 0x24 (was 0x10001, writing 0x1fff1)
        pcieport 0000:02:04.0: restoring config space at offset 0x20 (was 0x0, writing 0x89f07400)
        pcieport 0000:02:01.0: restoring config space at offset 0x1c (was 0x101, writing 0x5151)
        pcieport 0000:02:00.0: restoring config space at offset 0x20 (was 0x0, writing 0x8a008a00)
        pcieport 0000:02:02.0: restoring config space at offset 0xc (was 0x10000, writing 0x10020)
        pcieport 0000:02:04.0: restoring config space at offset 0x1c (was 0x101, writing 0x6161)
        pcieport 0000:02:01.0: restoring config space at offset 0x18 (was 0x0, writing 0x360402)
        pcieport 0000:02:00.0: restoring config space at offset 0x1c (was 0x101, writing 0x1f1)
        pcieport 0000:02:04.0: restoring config space at offset 0x18 (was 0x0, writing 0x6b3802)
        pcieport 0000:02:02.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100407)
        pcieport 0000:02:00.0: restoring config space at offset 0x18 (was 0x0, writing 0x30302)
        pcieport 0000:02:01.0: restoring config space at offset 0xc (was 0x10000, writing 0x10020)
        pcieport 0000:02:04.0: restoring config space at offset 0xc (was 0x10000, writing 0x10020)
        pcieport 0000:02:00.0: restoring config space at offset 0xc (was 0x10000, writing 0x10020)
        pcieport 0000:02:01.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100407)
        pcieport 0000:02:04.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100407)
        pcieport 0000:02:00.0: restoring config space at offset 0x4 (was 0x100000, writing 0x100407)
        xhci_hcd 0000:37:00.0: restoring config space at offset 0x10 (was 0x0, writing 0x73f00000)
        ...
        thunderbolt 0000:03:00.0: restoring config space at offset 0x14 (was 0x0, writing 0x8a040000)
      
      This is even worse: none of the mandatory delays are performed. If this
      were S3 instead of s2idle, then according to PCI FW spec 3.2 section
      4.6.8 there is a specific _DSM that allows the OS to skip the delays,
      but this platform does not provide the _DSM and does not go to S3
      anyway, so no firmware is involved that could handle these delays.
      
      On this particular Intel Coffee Lake platform these delays are not
      actually needed, because there is an additional delay as part of the
      ACPI power resource that is used to turn on power to the hierarchy.
      However, since that additional delay is not required by any standard
      (PCIe, ACPI), it is not present on Intel Ice Lake, for example, where
      missing the mandatory delays causes pciehp to start tearing down the
      stack too early (the links are not yet trained).
      
      For this reason, change the PCIe portdrv PM resume hooks so that they
      perform the mandatory delays before the downstream component gets
      resumed. We perform the delays before port services are resumed,
      because otherwise pciehp might find that the link is not up (even if
      it is still training) and tear down the hierarchy.
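      
      Conceptually, the resume path now has to wait for the link to train and
      then honor the 100 ms rule before the downstream device is touched; a
      sketch for a port that supports speeds greater than 5 GT/s (the timeout
      value is an assumption):
      
        #include <linux/delay.h>
        #include <linux/pci.h>
        
        /* Sketch: wait for Data Link Layer Link Active on the downstream
         * port, then the 100 ms required by PCIe 4.0 sec 6.6.1 before any
         * Configuration Request goes to the device below. */
        static void wait_for_secondary_bus_sketch(struct pci_dev *bridge)
        {
                int timeout = 1000;     /* ms, assumed upper bound */
                u16 lnksta;
        
                for (;;) {
                        pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta);
                        if (lnksta & PCI_EXP_LNKSTA_DLLLA)
                                break;
                        if ((timeout -= 10) <= 0)
                                return; /* link never came up; let pciehp decide */
                        msleep(10);
                }
        
                msleep(100);    /* config access allowed 100 ms after training */
        }
      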
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  5. 17 June 2019, 2 commits
    • PCI: PM: Replace pci_dev_keep_suspended() with two functions · 0c7376ad
      Committed by Rafael J. Wysocki
      The code in pci_dev_keep_suspended() is relatively hard to follow due
      to the negative checks in it and in its callers, and the function has
      a possible side effect (disabling PME) which doesn't really match
      its role.
      
      For this reason, move the PME disabling from pci_dev_keep_suspended()
      to a separate function and change the semantics (and name) of the
      rest of it, so that 'true' is returned when the device needs to be
      resumed (and not the other way around).  Change the callers of
      pci_dev_keep_suspended() accordingly.
      
      While at it, make the code flow in pci_pm_poweroff() reflect the
      pci_pm_suspend() more closely to avoid arbitrary differences between
      them.
      
      This is a cosmetic change with no intention to alter behavior.
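      
      A sketch of the resulting split, with the new positive semantics (the
      names follow the commit summary; the bodies are simplified, and
      platform_pci_need_resume() is the internal platform-PM check):
      
        #include <linux/pci.h>
        #include <linux/pm_runtime.h>
        #include <linux/pm_wakeup.h>
        
        /* Sketch: 'true' now means "this device must be resumed". */
        static bool pci_dev_need_resume_sketch(struct pci_dev *dev)
        {
                /* the real check also compares wakeup settings and the
                 * target power state */
                return !pm_runtime_suspended(&dev->dev) ||
                       platform_pci_need_resume(dev);
        }
        
        /* The PME side effect lives in its own helper now. */
        static void pci_dev_adjust_pme_sketch(struct pci_dev *dev)
        {
                if (!device_may_wakeup(&dev->dev))
                        pci_pme_active(dev, false);
        }
      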
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
    • PCI: PM: Avoid resuming devices in D3hot during system suspend · 234f223d
      Committed by Rafael J. Wysocki
      The current code resumes devices in D3hot during system suspend if
      the target power state for them is D3cold, but that is not necessary
      in general.  It is only necessary to do that if the platform firmware
      requires the device to be resumed, but that should be covered by
      the platform_pci_need_resume() check anyway, so rework
      pci_dev_keep_suspended() to avoid returning 'false' for devices
      in D3hot which need not be resumed due to platform firmware
      requirements.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  6. 14 June 2019, 1 commit
  7. 09 May 2019, 2 commits
  8. 18 April 2019, 1 commit
  9. 14 April 2019, 1 commit
  10. 13 April 2019, 1 commit
  11. 06 March 2019, 1 commit
    • PCI: Fix "try" semantics of bus and slot reset · ddefc033
      Committed by Alex Williamson
      The commit referenced below introduced device locking around save and
      restore of state for each device during a PCI bus "try" reset, making it
      decidedly non-"try" and prone to deadlock in the event that a device is
      already locked.  Restore __pci_reset_bus() and __pci_reset_slot() to their
      advertised locking semantics by pushing the save and restore functions into
      the branch where the entire tree is already locked.  Extend the helper
      function names with "_locked" and update the comment to reflect this
      calling requirement.
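      
      The shape of the fix, sketched for the slot case (the lock/unlock and
      reset helpers are internal to drivers/pci/pci.c; the "_locked" names
      follow the log, bodies elided):
      
        #include <linux/pci.h>
        
        /* Sketch: save/disable and restore only run on the branch where
         * every device in the slot is already locked. */
        static int pci_try_reset_slot_sketch(struct pci_slot *slot)
        {
                int rc = -EAGAIN;
        
                if (!pci_slot_trylock(slot))    /* never block: "try" variant */
                        return rc;
        
                pci_slot_save_and_disable_locked(slot);
                rc = pci_reset_hotplug_slot(slot->hotplug, 0);
                pci_slot_restore_locked(slot);
        
                pci_slot_unlock(slot);
                return rc;
        }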
      
      Fixes: b014e96d ("PCI: Protect pci_error_handlers->reset_notify() usage with device_lock()")
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Sinan Kaya <okaya@kernel.org>
  12. 12 February 2019, 1 commit
    • PCI/ASPM: Save LTR Capability for suspend/resume · dbbfadf2
      Committed by Bjorn Helgaas
      Latency Tolerance Reporting (LTR) allows Endpoints and Switch Upstream
      Ports to report their latency requirements to upstream components.  If ASPM
      L1 PM substates are enabled, the LTR information helps determine when a
      Link enters L1.2 [1].
      
      Software must set the maximum latency values in the LTR Capability based on
      characteristics of the platform, then set LTR Mechanism Enable in the
      Device Control 2 register in the PCIe Capability.  The device can then use
      LTR to report its latency tolerance.
      
      If the device reports a maximum latency value of zero, that means the
      device requires the highest possible performance and the ASPM L1.2 substate
      is effectively disabled.
      
      We put devices in D3 for suspend, and we assume their internal state is
      lost.  On resume, previously we did not restore the LTR Capability, but we
      did restore the LTR Mechanism Enable bit, so devices would request the
      highest possible performance and ASPM L1.2 wouldn't be used.
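      
      The mechanism is roughly: find the LTR Extended Capability, save its two
      16-bit latency registers at suspend, and write them back on resume
      before LTR Mechanism Enable is restored. A sketch of the save side
      (register offsets from the PCIe spec; storing into the device's
      saved-capability buffer is elided):
      
        #include <linux/pci.h>
        
        /* Sketch: capture the LTR max snoop / no-snoop latency fields. */
        static void pci_save_ltr_state_sketch(struct pci_dev *dev)
        {
                int ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR);
                u16 max_snoop, max_nosnoop;
        
                if (!ltr)
                        return;
        
                pci_read_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, &max_snoop);
                pci_read_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, &max_nosnoop);
                /* ...stash both values so the restore path can write them
                 * back before re-enabling the LTR mechanism */
        }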
      
      [1] PCIe r4.0, sec 5.5.1
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=201469
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  13. 11 February 2019, 1 commit
    • PCI: Blacklist power management of Gigabyte X299 DESIGNARE EX PCIe ports · 85b0cae8
      Committed by Mika Westerberg
      Gigabyte X299 DESIGNARE EX motherboard has one PCIe root port that is
      connected to an Alpine Ridge Thunderbolt controller.  This port has the
      Slot Implemented bit set in its config space, but other than that it is
      not hotplug capable in the sense Linux expects (it has
      dev->is_hotplug_bridge set to 0):
      
        00:1c.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #5
          Bus: primary=00, secondary=05, subordinate=46, sec-latency=0
          Memory behind bridge: 78000000-8fffffff [size=384M]
          Prefetchable memory behind bridge: 00003800f8000000-00003800ffffffff [size=128M]
          ...
          Capabilities: [40] Express (v2) Root Port (Slot+), MSI 00
          ...
            SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
      	      Slot #8, PowerLimit 25.000W; Interlock- NoCompl+
            SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
      	      Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
            SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock-
      	      Changed: MRL- PresDet+ LinkState+
      
      This system uses ACPI-based hotplug to notify the OS that it needs to
      rescan the PCI bus.
      
      If there is nothing connected to any of the Thunderbolt ports, the root
      port will not have any runtime-PM-active children and is thus
      automatically runtime suspended pretty soon after boot by the PCI PM
      core.  Now, when a device is connected, the BIOS SMI handler
      responsible for enumerating newly added devices cannot find anything
      because the port is in D3.
      
      Prevent this from happening by blacklisting PCI power management of this
      particular Gigabyte system.
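      
      A blacklist like this is naturally a DMI match consulted before the
      bridge is allowed into D3; a sketch (the table name and match strings
      are assumptions based on the log):
      
        #include <linux/dmi.h>
        #include <linux/pci.h>
        
        /* Sketch: deny bridge runtime D3 on the affected board so the
         * ACPI/SMI hotplug handler can still enumerate new devices. */
        static const struct dmi_system_id bridge_d3_blacklist_sketch[] = {
                {
                        .ident = "Gigabyte X299 DESIGNARE EX",
                        .matches = {
                                DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
                                DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
                        },
                },
                { }     /* terminator */
        };
        
        static bool bridge_d3_allowed_sketch(void)
        {
                return !dmi_first_match(bridge_d3_blacklist_sketch);
        }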
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=202031
      Reported-by: Kedar A Dongre <kedar.a.dongre@intel.com>
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  14. 31 January 2019, 1 commit
  15. 22 January 2019, 1 commit
  16. 17 January 2019, 1 commit
    • PCI: Fix __initdata issue with "pci=disable_acs_redir" parameter · d2fd6e81
      Committed by Logan Gunthorpe
      The disable_acs_redir parameter stores a pointer to the string passed to
      pci_setup().  However, the string passed to pci_setup() is actually a
      temporary copy allocated in static __initdata memory.  After init, once
      that memory is freed, it is no longer valid to reference this pointer.
      
      This bug was noticed in v5.0-rc1 after a change in commit c5eb1190
      ("PCI / PM: Allow runtime PM without callback functions") caused
      pci_disable_acs_redir() to be called during shutdown, which manifested
      as an "unable to handle kernel paging request" at:
      
        RIP: 0010:pci_enable_acs+0x3f/0x1e0
        Call Trace:
           pci_restore_state.part.44+0x159/0x3c0
           pci_restore_standard_config+0x33/0x40
           pci_pm_runtime_resume+0x2b/0xd0
           ? pci_restore_standard_config+0x40/0x40
           __rpm_callback+0xbc/0x1b0
           rpm_callback+0x1f/0x70
           ? pci_restore_standard_config+0x40/0x40
            rpm_resume+0x4f9/0x710
           ? pci_conf1_read+0xb6/0xf0
           ? pci_conf1_write+0xb2/0xe0
           __pm_runtime_resume+0x47/0x70
           pci_device_shutdown+0x1e/0x60
           device_shutdown+0x14a/0x1f0
           kernel_restart+0xe/0x50
           __do_sys_reboot+0x1ee/0x210
           ? __fput+0x144/0x1d0
           do_writev+0x5e/0xf0
           ? do_writev+0x5e/0xf0
           do_syscall_64+0x48/0xf0
           entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      It was also likely possible to trigger this bug when hotplugging PCI
      devices.
      
      To fix this, instead of storing a pointer, we use kstrdup() to copy
      disable_acs_redir_param into its own buffer, which will never be freed.
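      
      A sketch of the fix: once the allocators are up, replace the pointer
      into soon-to-be-freed __initdata with a private copy (the initcall name
      is an assumption):
      
        #include <linux/init.h>
        #include <linux/slab.h>
        
        static const char *disable_acs_redir_param;     /* set by pci_setup() */
        
        /* Sketch: duplicate the early boot string into memory that outlives
         * init; kstrdup(NULL, ...) safely returns NULL if the option was
         * never given. */
        static int __init pci_realloc_setup_params_sketch(void)
        {
                disable_acs_redir_param =
                        kstrdup(disable_acs_redir_param, GFP_KERNEL);
                return 0;
        }
        pure_initcall(pci_realloc_setup_params_sketch);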
      
      Fixes: aaca43fd ("PCI: Add "pci=disable_acs_redir=" parameter for peer-to-peer support")
      Tested-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
  17. 15 January 2019, 1 commit
  18. 01 December 2018, 1 commit
  19. 03 October 2018, 3 commits
  20. 29 September 2018, 1 commit
    • PCI: Add support for Immediate Readiness · d6112f8d
      Committed by Felipe Balbi
      PCIe r4.0, sec 7.5.1.1.4 defines a new bit in the Status Register:
      
        Immediate Readiness – This optional bit, when Set, indicates the Function
        is guaranteed to be ready to successfully complete valid configuration
        accesses at any time following any reset that the host is capable of
        issuing Configuration Requests to this Function.
      
        When this bit is Set, for accesses to this Function, software is exempt
        from all requirements to delay configuration accesses following any type
        of reset, including but not limited to the timing requirements defined in
        Section 6.6.
      
      This means that all delays after a Conventional or Function Reset can be
      skipped.
      
      Read this bit and cache its value in a flag inside struct pci_dev, to be
      checked later to decide whether we must delay or can skip delays after a
      reset.  While at it, also move the explicit msleep(100) call from
      pcie_flr() and pci_af_flr() to pci_dev_wait().
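      
      In practice this is one new Status Register bit plus a per-device flag
      that is consulted before the usual post-reset wait; a sketch (the log
      notes the bit macro ended up named PCI_STATUS_IMM_READY):
      
        #include <linux/delay.h>
        #include <linux/pci.h>
        
        /* Sketch: cache the Immediate Readiness bit at enumeration time... */
        static void pci_cache_imm_ready_sketch(struct pci_dev *dev)
        {
                u16 status;
        
                pci_read_config_word(dev, PCI_STATUS, &status);
                dev->imm_ready = !!(status & 0x01); /* bit 0: Immediate Readiness */
        }
        
        /* ...and consult it wherever a post-reset delay would be taken. */
        static void pci_post_reset_delay_sketch(struct pci_dev *dev, int delay_ms)
        {
                if (dev->imm_ready)
                        return;         /* config access is valid immediately */
                msleep(delay_ms);
        }
      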
      Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
      [bhelgaas: rename PCI_STATUS_IMMEDIATE to PCI_STATUS_IMM_READY]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  21. 28 September 2018, 1 commit
    • PCI: Reprogram bridge prefetch registers on resume · 08387454
      Committed by Daniel Drake
      On 38+ Intel-based ASUS products, the NVIDIA GPU becomes unusable after S3
      suspend/resume.  The affected products include multiple generations of
      NVIDIA GPUs and Intel SoCs.  After resume, nouveau logs many errors such
      as:
      
        fifo: fault 00 [READ] at 0000005555555000 engine 00 [GR] client 04
              [HUB/FE] reason 4a [] on channel -1 [007fa91000 unknown]
        DRM: failed to idle channel 0 [DRM]
      
      Similarly, the NVIDIA proprietary driver also fails after resume (black
      screen, 100% CPU usage in Xorg process).  We shipped a sample to NVIDIA for
      diagnosis, and their response indicated that it's a problem with the parent
      PCI bridge (on the Intel SoC), not the GPU.
      
      Runtime suspend/resume works fine, only S3 suspend is affected.
      
      We found a workaround: on resume, rewrite the Intel PCI bridge
      'Prefetchable Base Upper 32 Bits' register (PCI_PREF_BASE_UPPER32).  In the
      cases that I checked, this register has value 0 and we just have to rewrite
      that value.
      
      Linux already saves and restores PCI config space during suspend/resume,
      but this register was being skipped because upon resume, it already has
      value 0 (the correct, pre-suspend value).
      
      Intel appear to have previously acknowledged this behaviour and the
      requirement to rewrite this register:
      https://bugzilla.kernel.org/show_bug.cgi?id=116851#c23
      
      Based on that, rewrite the prefetch register values even when that appears
      unnecessary.
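      
      The workaround means the restore path must be able to write a saved
      bridge register unconditionally, not only when the read-back value
      differs; a sketch of the per-dword helper this implies:
      
        #include <linux/pci.h>
        
        /* Sketch: restore one saved config dword, optionally forcing the
         * write even when the current value already matches (needed for the
         * bridge prefetch registers after S3 on the affected platforms). */
        static void pci_restore_config_dword_sketch(struct pci_dev *pdev, int offset,
                                                    u32 saved_val, bool force)
        {
                u32 val;
        
                pci_read_config_dword(pdev, offset, &val);
                if (!force && val == saved_val)
                        return;
        
                pci_write_config_dword(pdev, offset, saved_val);
                pci_dbg(pdev, "restoring config space at offset %#x (was %#x, writing %#x)\n",
                        offset, val, saved_val);
        }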
      
      We have confirmed this solution on all the affected models we have in
      hand (X542UQ, UX533FD, X530UN, V272UN).
      
      Additionally, this solves an issue where r8169 MSI-X interrupts were broken
      after S3 suspend/resume on ASUS X441UAR.  This issue was recently worked
      around in commit 7bb05b85 ("r8169: don't use MSI-X on RTL8106e").  It
      also fixes the same issue on RTL6186evl/8111evl on an Aimfor-tech laptop
      that we had not yet patched.  I suspect it will also fix the issue that was
      worked around in commit 7c53a722 ("r8169: don't use MSI-X on
      RTL8168g").
      
      Thomas Martitz reports that this change also solves an issue where the AMD
      Radeon Polaris 10 GPU on the HP Zbook 14u G5 is unresponsive after S3
      suspend/resume.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=201069
      Signed-off-by: Daniel Drake <drake@endlessm.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Peter Wu <peter@lekensteyn.nl>
      CC: stable@vger.kernel.org
  22. 22 September 2018, 1 commit
  23. 21 September 2018, 1 commit
  24. 19 September 2018, 1 commit
    • PCI: hotplug: Constify hotplug_slot_ops · 81c4b5bf
      Committed by Lukas Wunner
      Hotplug drivers cannot declare their hotplug_slot_ops const, making them
      attractive targets for attackers, because upon registration of a hotplug
      slot, __pci_hp_initialize() writes to the "owner" and "mod_name" members
      in that struct.
      
      Fix by moving these members to struct hotplug_slot and constify every
      driver's hotplug_slot_ops except for pciehp.
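      
      A sketch of the reshuffle: the writable bookkeeping moves into the
      per-slot structure, so a driver's ops table can live in rodata (the
      field set shown here is abridged):
      
        #include <linux/module.h>
        #include <linux/pci_hotplug.h>
        
        /* Sketch of the resulting layout (abridged). */
        struct hotplug_slot_sketch {
                const struct hotplug_slot_ops   *ops;       /* never written */
                struct module                   *owner;     /* moved out of the ops */
                const char                      *mod_name;  /* moved out of the ops */
                /* ... */
        };
        
        /* Drivers (other than pciehp) can now declare their ops read-only. */
        static int demo_enable_slot(struct hotplug_slot *slot) { return 0; }
        
        static const struct hotplug_slot_ops demo_hotplug_ops = {
                .enable_slot = demo_enable_slot,
        };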
      
      pciehp constructs its hotplug_slot_ops at runtime based on the PCIe
      port's capabilities, hence cannot declare them const.  It can be
      converted to __write_rarely once that's mainlined:
      http://www.openwall.com/lists/kernel-hardening/2016/11/16/3
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>  # drivers/pci/hotplug/rpa*
      Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com> # drivers/platform/x86
      Cc: Len Brown <lenb@kernel.org>
      Cc: Scott Murray <scott@spiteful.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Oliver OHalloran <oliveroh@au1.ibm.com>
      Cc: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Corentin Chary <corentin.chary@gmail.com>
      Cc: Darren Hart <dvhart@infradead.org>
  25. 18 September 2018, 2 commits
  26. 12 September 2018, 2 commits
  27. 11 August 2018, 1 commit
    • PCI: Check for PCIe Link downtraining · 2d1ce5ec
      Committed by Alexandru Gagniuc
      When both ends of a PCIe Link are capable of a higher bandwidth than is
      currently in use, the Link is said to be "downtrained".  A downtrained Link
      may indicate hardware or configuration problems in the system, but it's
      hard to identify such Links from userspace.
      
      Refactor pcie_print_link_status() so it continues to always print PCIe
      bandwidth information, as several NIC drivers desire.
      
      Add a new internal __pcie_print_link_status() to emit a message only when a
      device's bandwidth is constrained by the fabric and call it from the PCI
      core for all devices, which identifies all downtrained Links.  It also
      emits messages for a few cases that are technically not downtrained, such
      as a x4 device in an open-ended x1 slot.
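      
      A sketch of the split (function names from the log; pcie_bandwidth_available()
      is the exported helper, pcie_bandwidth_capable() is assumed to be its
      PCI-core-internal counterpart):
      
        #include <linux/pci.h>
        
        /* Sketch: warn only when the device's usable bandwidth is limited
         * somewhere in the fabric; print unconditionally if 'verbose'. */
        static void __pcie_print_link_status_sketch(struct pci_dev *dev, bool verbose)
        {
                enum pcie_link_width width;
                enum pci_bus_speed speed;
                struct pci_dev *limit = NULL;
                u32 avail, cap;
        
                avail = pcie_bandwidth_available(dev, &limit, &speed, &width);
                cap = pcie_bandwidth_capable(dev, &speed, &width);
        
                if (avail >= cap) {
                        if (verbose)
                                pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth\n",
                                         avail / 1000, avail % 1000);
                        return;
                }
        
                pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by link at %s (capable of %u.%03u Gb/s)\n",
                         avail / 1000, avail % 1000,
                         limit ? pci_name(limit) : "?", cap / 1000, cap % 1000);
        }
        
        /* Called by the PCI core for every newly added PCIe device. */
        static void pcie_report_downtraining_sketch(struct pci_dev *dev)
        {
                if (pci_is_pcie(dev))
                        __pcie_print_link_status_sketch(dev, false);
        }
      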
      Signed-off-by: Alexandru Gagniuc <mr.nuke.me@gmail.com>
      [bhelgaas: changelog, move __pcie_print_link_status() declaration to
      drivers/pci/, rename pcie_check_upstream_link() to
      pcie_report_downtraining()]
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  28. 10 August 2018, 5 commits