1. 07 Dec 2020 (1 commit)
    • PM: ACPI: PCI: Drop acpi_pm_set_bridge_wakeup() · 7482c5cb
      By Rafael J. Wysocki
      The idea behind acpi_pm_set_bridge_wakeup() was to allow bridges to
      be reference counted for wakeup enabling, because they may be enabled
      to signal wakeup on behalf of their subordinate devices and that
      may happen multiple times in a row, whereas for the other devices
      it only makes sense to enable wakeup signaling once.
      
      However, this becomes problematic if the bridge itself is suspended,
      because it is treated as a "regular" device in that case and the
      reference counting doesn't work.
      
      For instance, suppose that there are two devices below a bridge and
      they both can signal wakeup.  Every time one of them is suspended,
      wakeup signaling is enabled for the bridge, so when they both have
      been suspended, the bridge's wakeup reference counter value is 2.
      
      Say that the bridge is suspended subsequently and acpi_pci_wakeup()
      is called for it.  Because the bridge can signal wakeup, that
      function will invoke acpi_pm_set_device_wakeup() to configure it
      and __acpi_pm_set_device_wakeup() will be called with the last
      argument equal to 1.  This causes __acpi_device_wakeup_enable()
      invoked by it to omit the reference counting, because the reference
      counter of the target device (the bridge) is 2 at that time.
      
      Now say that the bridge resumes and one of the devices below it
      resumes too, so the bridge's reference counter becomes 0 and
      wakeup signaling is disabled for it, but there is still the other
      suspended device which may need the bridge to signal wakeup on its
      behalf and that is not going to work.
      
      To address this scenario, use wakeup enable reference counting for
      all devices, not just for bridges: drop the last argument from
      __acpi_device_wakeup_enable() and __acpi_pm_set_device_wakeup(),
      which causes acpi_pm_set_device_wakeup() and
      acpi_pm_set_bridge_wakeup() to become identical, then drop the
      latter and use the former everywhere instead.
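      
      As an illustration, the scenario above can be modeled with a
      minimal user-space C sketch; the names here are hypothetical
      stand-ins, not the kernel's actual code:
      
      #include <stdio.h>
      
      static int wakeup_refcount;     /* the bridge's wakeup enable count */
      
      static void wakeup_enable(void)  { wakeup_refcount++; }
      static void wakeup_disable(void) { if (wakeup_refcount) wakeup_refcount--; }
      
      int main(void)
      {
              wakeup_enable();        /* child device A suspends -> 1 */
              wakeup_enable();        /* child device B suspends -> 2 */
              wakeup_enable();        /* bridge suspends; with the fix this is
                                         counted like any other device -> 3 */
              wakeup_disable();       /* bridge resumes -> 2 */
              wakeup_disable();       /* device A resumes -> 1 */
              /* Device B is still suspended, so wakeup stays enabled. */
              printf("wakeup %s\n", wakeup_refcount ? "enabled" : "disabled");
              return 0;
      }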
      
      Fixes: 1ba51a7c ("ACPI / PCI / PM: Rework acpi_pci_propagate_wakeup()")
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: 4.14+ <stable@vger.kernel.org> # 4.14+
  2. 02 Dec 2020 (2 commits)
  3. 18 Nov 2020 (2 commits)
  4. 14 Oct 2020 (2 commits)
    • x86/numa: add 'nohmat' option · 3b0d3101
      By Dan Williams
      Disable parsing of the HMAT for debugging, to work around broken
      platform instances, or for cases where it is otherwise not wanted.
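      
      A hedged sketch of how such a token is typically matched in the
      numa= option parser; the flag and function names below are
      illustrative assumptions, not necessarily the merged code:
      
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>
      
      static bool hmat_disabled;      /* hypothetical flag name */
      
      static void parse_numa_opt(const char *opt)
      {
              if (!strcmp(opt, "nohmat"))     /* numa=nohmat */
                      hmat_disabled = true;
      }
      
      int main(void)
      {
              parse_numa_opt("nohmat");
              printf("HMAT parsing %s\n", hmat_disabled ? "off" : "on");
              return 0;
      }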
      
      [rdunlap@infradead.org: fix build when CONFIG_ACPI is not set]
        Link: https://lkml.kernel.org/r/70e5ee34-9809-a997-7b49-499e4be61307@infradead.org
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Brice Goglin <Brice.Goglin@inria.fr>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: Joao Martins <joao.m.martins@oracle.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Hulk Robot <hulkci@huawei.com>
      Cc: Jason Yan <yanaijie@huawei.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Link: https://lkml.kernel.org/r/159643095540.4062302.732962081968036212.stgit@dwillia2-desk3.amr.corp.intel.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86/numa: cleanup configuration dependent command-line options · 2dd57d34
      By Dan Williams
      Patch series "device-dax: Support sub-dividing soft-reserved ranges", v5.
      
      The device-dax facility allows an address range to be directly mapped
      through a chardev, or optionally hotplugged to the core kernel page
      allocator as System-RAM.  It is the mechanism for converting persistent
      memory (pmem) to be used as another volatile memory pool i.e.  the current
      Memory Tiering hot topic on linux-mm.
      
      In the case of pmem the nvdimm-namespace-label mechanism can sub-divide
      it, but that labeling mechanism is not available / applicable to
      soft-reserved ("EFI specific purpose") memory [3].  This series provides a
      sysfs-mechanism for the daxctl utility to enable provisioning of
      volatile-soft-reserved memory ranges.
      
      The motivations for this facility are:
      
      1/ Allow performance differentiated memory ranges to be split between
         kernel-managed and directly-accessed use cases.
      
      2/ Allow physical memory to be provisioned along performance relevant
         address boundaries. For example, divide a memory-side cache [4] along
         cache-color boundaries.
      
      3/ Parcel out soft-reserved memory to VMs using device-dax as a security
         / permissions boundary [5]. Specifically I have seen people (ab)using
         memmap=nn!ss (mark System-RAM as Persistent Memory) just to get the
         device-dax interface on custom address ranges. A follow-on for the VM
         use case is to teach device-dax to dynamically allocate 'struct page' at
         runtime to reduce the duplication of 'struct page' space in both the
         guest and the host kernel for the same physical pages.
      
      [2]: http://lore.kernel.org/r/20200713160837.13774-11-joao.m.martins@oracle.com
      [3]: http://lore.kernel.org/r/157309097008.1579826.12818463304589384434.stgit@dwillia2-desk3.amr.corp.intel.com
      [4]: http://lore.kernel.org/r/154899811738.3165233.12325692939590944259.stgit@dwillia2-desk3.amr.corp.intel.com
      [5]: http://lore.kernel.org/r/20200110190313.17144-1-joao.m.martins@oracle.com
      
      This patch (of 23):
      
      In preparation for adding a new numa= option, clean up the existing
      ones to avoid ifdefs in numa_setup(), and provide feedback when the
      numa=fake= option is invalid due to the kernel config.  The same
      does not need to be done for numa=noacpi, since that capability is
      already hard-disabled at compile time.
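      
      A minimal sketch of the pattern, with a macro standing in for the
      kernel's IS_ENABLED(CONFIG_NUMA_EMU) check; illustrative only:
      
      #include <stdio.h>
      #include <string.h>
      
      #define NUMA_EMU_ENABLED 0      /* stand-in for IS_ENABLED(CONFIG_NUMA_EMU) */
      
      static int numa_setup(const char *opt)
      {
              if (!strncmp(opt, "fake=", 5)) {
                      if (!NUMA_EMU_ENABLED) {
                              /* feedback instead of silently ignoring the option */
                              printf("numa=fake= requires NUMA emulation support\n");
                              return 0;
                      }
                      /* hand off to the emulation parser here */
              }
              return 0;
      }
      
      int main(void)
      {
              return numa_setup("fake=4");
      }
      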
      Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brice Goglin <Brice.Goglin@inria.fr>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: Joao Martins <joao.m.martins@oracle.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Hulk Robot <hulkci@huawei.com>
      Cc: Jason Yan <yanaijie@huawei.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Link: https://lkml.kernel.org/r/160106109960.30709.7379926726669669398.stgit@dwillia2-desk3.amr.corp.intel.com
      Link: https://lkml.kernel.org/r/159643094279.4062302.17779410714418721328.stgit@dwillia2-desk3.amr.corp.intel.com
      Link: https://lkml.kernel.org/r/159643094925.4062302.14979872973043772305.stgit@dwillia2-desk3.amr.corp.intel.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 09 Oct 2020 (6 commits)
  6. 30 Sep 2020 (1 commit)
    • ACPI / NUMA: Add stub function for pxm_to_node() · 4849bc77
      By Nathan Chancellor
      After commit 01feba59 ("ACPI: Do not create new NUMA domains from
      ACPI static tables that are not SRAT"):
      
      $ scripts/config --file arch/x86/configs/x86_64_defconfig -d NUMA -e ACPI_NFIT
      
      $ make -skj"$(nproc)" distclean defconfig drivers/acpi/nfit/
      drivers/acpi/nfit/core.c: In function ‘acpi_nfit_register_region’:
      drivers/acpi/nfit/core.c:3010:27: error: implicit declaration of
      function ‘pxm_to_node’; did you mean ‘xa_to_node’?
      [-Werror=implicit-function-declaration]
       3010 |   ndr_desc->target_node = pxm_to_node(spa->proximity_domain);
            |                           ^~~~~~~~~~~
            |                           xa_to_node
      cc1: some warnings being treated as errors
      ...
      
      Add a stub function like acpi_map_pxm_to_node() had so that the build
      continues to work.
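      
      The stub follows the usual conditional-inline pattern; a sketch,
      with the fallback value here being an assumption:
      
      #ifdef CONFIG_ACPI_NUMA
      int pxm_to_node(int pxm);
      #else
      static inline int pxm_to_node(int pxm)
      {
              return 0;       /* single-node fallback when NUMA is disabled */
      }
      #endif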
      
      Fixes: 01feba59 ("ACPI: Do not create new NUMA domains from ACPI static tables that are not SRAT")
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
      Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  7. 26 Sep 2020 (1 commit)
  8. 16 Sep 2020 (1 commit)
  9. 11 Sep 2020 (2 commits)
    • ACPI: OSL: Make ACPICA use logical addresses of GPE blocks · 85f94020
      By Rafael J. Wysocki
      Define ACPI_GPE_USE_LOGICAL_ADDRESSES in aclinux.h and modify
      acpi_os_initialize() to store the logical addresses of the FADT GPE
      blocks 0 and 1 in acpi_gbl_xgpe0_block_logical_address and
      acpi_gbl_xgpe1_block_logical_address, respectively, so as to allow
      ACPICA to use them for accessing GPE registers in system memory,
      instead of using their physical addresses and looking up the
      corresponding logical addresses on every access attempt, which is
      inefficient.
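      
      Roughly, the initialization described above amounts to the
      following sketch (based on the text of this entry; the casts and
      surrounding details are assumptions):
      
      #ifdef ACPI_GPE_USE_LOGICAL_ADDRESSES
              /* Store the mapped (logical) GPE block addresses once, so
                 ACPICA need not look them up on every register access. */
              acpi_gbl_xgpe0_block_logical_address = (unsigned long)
                      acpi_os_map_generic_address(&acpi_gbl_FADT.xgpe0_block);
              acpi_gbl_xgpe1_block_logical_address = (unsigned long)
                      acpi_os_map_generic_address(&acpi_gbl_FADT.xgpe1_block);
      #endif
      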
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • ACPI: OSL: Change the type of acpi_os_map_generic_address() return value · 6915564d
      By Rafael J. Wysocki
      Modify acpi_os_map_generic_address() to return the pointer returned
      by acpi_os_map_iomem() which represents the logical address
      corresponding to the struct acpi_generic_address argument passed to
      it or NULL if that address cannot be obtained (for example, the
      argument does not represent an address in system memory or it could
      not be mapped by the OS).
      
      Among other things, that will allow the ACPI OS layer to pass the
      logical addresses of the FADT GPE blocks 0 and 1 to ACPICA going
      forward.
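      
      A sketch of the function's new shape, assuming details not spelled
      out above:
      
      void __iomem *acpi_os_map_generic_address(struct acpi_generic_address *gas)
      {
              u64 addr;
      
              /* Only system-memory registers can be mapped. */
              if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
                      return NULL;
      
              addr = gas->address;
              if (!addr || !gas->bit_width)
                      return NULL;
      
              /* Logical address on success, NULL on mapping failure. */
              return acpi_os_map_iomem(addr, gas->bit_width / 8);
      }
      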
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  10. 28 Jul 2020 (1 commit)
    • ACPI/IORT: Add an input ID to acpi_dma_configure() · b8e069a2
      By Lorenzo Pieralisi
      Some HW devices are created as child devices of proprietary busses
      that have a bus-specific policy defining how the child device
      wires representing the device IDs are translated into IOMMU and
      IRQ controller device IDs.
      
      Current IORT code provides translations for:
      
      - PCI devices, where the device ID is well identified at bus level
        as the requester ID (RID)
      - Platform devices that are endpoint devices where the device ID is
        retrieved from the ACPI object IORT mappings (Named components single
        mappings). A platform device is represented in IORT as a named
        component node
      
      For devices that are child devices of proprietary busses, the IORT
      firmware represents the bus node as a named component node in
      IORT, and it is up to that named component node to define in/out
      bus-specific ID translations for the bus child devices, which are
      allocated and created in a bus-specific manner.
      
      In order to make IORT ID translations available for proprietary
      bus child devices, the current ACPI (and IORT) code must be
      augmented to provide an additional ID parameter to acpi_dma_configure()
      representing the child device's input ID. This ID is bus-specific
      and is retrieved in bus-specific code.
      
      By adding an ID parameter to acpi_dma_configure(), the IORT
      code can map the child device ID to an IOMMU stream ID through
      the IORT named component representing the bus in/out ID mappings.
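      
      The resulting interface plausibly looks like the sketch below,
      with the existing call becoming a wrapper that passes no input ID
      (the exact signature is an assumption):
      
      int acpi_dma_configure_id(struct device *dev, enum dev_dma_attr attr,
                                const u32 *input_id);
      
      static inline int acpi_dma_configure(struct device *dev,
                                           enum dev_dma_attr attr)
      {
              /* No bus-specific input ID: behave as before. */
              return acpi_dma_configure_id(dev, attr, NULL);
      }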
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Link: https://lore.kernel.org/r/20200619082013.13661-6-lorenzo.pieralisi@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 27 Jul 2020 (3 commits)
    • ACPICA: Update version to 20200717 · 2861ba7a
      By Bob Moore
      ACPICA commit c1adb9a2a775df7a85df0103342ebf090e1b2016
      
      Version 20200717.
      
      Link: https://github.com/acpica/acpica/commit/c1adb9a2
      Signed-off-by: Bob Moore <robert.moore@intel.com>
      Signed-off-by: Erik Kaneda <erik.kaneda@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • ACPICA: Replace one-element array with flexible-array · 10cfde5d
      By Gustavo A. R. Silva
      ACPICA commit 7ba2f3d91a32f104765961fda0ed78b884ae193d
      
      The current codebase makes use of one-element arrays in the following
      form:
      
      struct something {
          int length;
          u8 data[1];
      };
      
      struct something *instance;
      
      instance = kmalloc(sizeof(*instance) + size, GFP_KERNEL);
      instance->length = size;
      memcpy(instance->data, source, size);
      
      but the preferred mechanism to declare variable-length types such as
      these ones is a flexible array member[1][2], introduced in C99:
      
      struct foo {
              int stuff;
              struct boo array[];
      };
      
      By making use of the mechanism above, we will get a compiler warning
      in case the flexible array does not occur last in the structure,
      which will help us prevent some kind of undefined behavior bugs from
      being inadvertently introduced[3] to the linux codebase from now on.
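      
      For comparison, here is the first snippet's allocation rewritten
      with a flexible array member; struct_size() is the kernel helper
      for sizing such allocations:
      
      struct something {
          int length;
          u8 data[];
      };
      
      struct something *instance;
      
      instance = kmalloc(struct_size(instance, data, size), GFP_KERNEL);
      instance->length = size;
      memcpy(instance->data, source, size);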
      
      This issue was found with the help of Coccinelle and audited _manually_.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      
      Link: https://github.com/acpica/acpica/commit/7ba2f3d9
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Erik Kaneda <erik.kaneda@intel.com>
      Signed-off-by: Bob Moore <robert.moore@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • ACPICA: Preserve memory opregion mappings · b8fcd0e5
      By Rafael J. Wysocki
      The ACPICA's strategy with respect to the handling of memory mappings
      associated with memory operation regions is to avoid mapping the
      entire region at once which may be problematic at least in principle
      (for example, it may lead to conflicts with overlapping mappings
      having different attributes created by drivers).  It may also be
      wasteful, because memory opregions on some systems take up vast
      chunks of address space while the fields in those regions actually
      accessed by AML are sparsely distributed.
      
      For this reason, a one-page "window" is mapped for a given opregion
      on the first memory access through it and if that "window" does not
      cover an address range accessed through that opregion subsequently,
      it is unmapped and a new "window" is mapped to replace it.  Next,
      if the new "window" is not sufficient to access memory through the
      opregion in question in the future, it will be replaced with yet
      another "window" and so on.  That may lead to a suboptimal sequence
      of memory mapping and unmapping operations, for example if two fields
      in one opregion separated from each other by a sufficiently wide
      chunk of unused address space are accessed in an alternating pattern.
      
      The situation may still be suboptimal if the deferred unmapping
      introduced previously is supported by the OS layer.  For instance,
      the alternating memory access pattern mentioned above may produce
      a relatively long list of mappings to release with substantial
      duplication among the entries in it, which could be avoided if
      acpi_ex_system_memory_space_handler() did not release the mapping
      used by it previously as soon as the current access was not covered
      by it.
      
      In order to improve that, modify acpi_ex_system_memory_space_handler()
      to preserve all of the memory mappings created by it until the memory
      regions associated with them go away.
      
      Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
      memory associated with memory opregions that go away.
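      
      One way to picture the bookkeeping, as a hedged sketch (the
      structure and helper names are assumptions, not ACPICA's actual
      identifiers):
      
      struct opregion_window {
              struct opregion_window *next;
              u8 *logical;            /* mapped "window" */
              u64 physical;           /* physical start of the window */
              u32 length;
      };
      
      /* Reuse an existing window covering [addr, addr + len); on a NULL
         return the caller maps a new window and links it in, and the
         whole list is unmapped only when the region goes away. */
      static u8 *find_window(struct opregion_window *list, u64 addr, u32 len)
      {
              struct opregion_window *w;
      
              for (w = list; w; w = w->next)
                      if (addr >= w->physical &&
                          addr + len <= w->physical + w->length)
                              return w->logical + (addr - w->physical);
              return NULL;
      }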
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Tested-by: Xiang Li <xiang.z.li@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  12. 24 Jul 2020 (1 commit)
  13. 05 Jun 2020 (2 commits)
  14. 20 May 2020 (1 commit)
    • ACPI: APEI: Kick the memory_failure() queue for synchronous errors · 7f17b4a1
      By James Morse
      memory_failure() offlines or repairs pages of memory that have been
      discovered to be corrupt. These may be detected by an external
      component (e.g. the memory controller) and notified via an IRQ.
      In this case the work is queued, as not all of memory_failure()'s work
      can happen in IRQ context.
      
      If the error was detected as a result of user-space accessing a
      corrupt memory location, the CPU may take an abort instead. On arm64
      this is a 'synchronous external abort', and on a firmware first
      system it is replayed using NOTIFY_SEA.
      
      This notification has NMI-like properties (it can interrupt
      IRQ-masked code), so the memory_failure() work is queued. If we
      return to user-space before the queued memory_failure() work is
      processed, we will take the fault again. This loop may cause platform
      firmware to exceed some threshold and reboot when Linux could have
      recovered from this error.
      
      For NMI-like notifications, keep track of whether memory_failure() work
      was queued, and make task_work pending to flush out the queue.
      To save memory allocations, the task_work is allocated as part of
      the ghes_estatus_node, and free()ing it back to the pool is deferred.
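      
      In outline, the kick looks like this sketch; the field names and
      the memory_failure_queue_kick() helper are taken on trust from the
      description above:
      
      /* Runs as task_work, i.e. just before the faulting task returns
         to user space, so the queued memory_failure() work is flushed
         before the fault can be taken again. */
      static void ghes_kick_task_work(struct callback_head *head)
      {
              memory_failure_queue_kick(smp_processor_id());
      }
      
      /* In the NMI-like notification path, once work has been queued: */
      init_task_work(&estatus_node->task_work, ghes_kick_task_work);
      task_work_add(current, &estatus_node->task_work, true);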
      Signed-off-by: James Morse <james.morse@arm.com>
      Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  15. 09 May 2020 (2 commits)
  16. 04 Apr 2020 (1 commit)
    • x86: ACPI: fix CPU hotplug deadlock · 696ac2e3
      By Qian Cai
      Similar to commit 0266d81e ("acpi/processor: Prevent cpu hotplug
      deadlock") except this is for acpi_processor_ffh_cstate_probe():
      
      "The problem is that the work is scheduled on the current CPU from the
      hotplug thread associated with that CPU.
      
      It's not required to invoke these functions via the workqueue because
      the hotplug thread runs on the target CPU already.
      
      Check whether current is a per cpu thread pinned on the target CPU and
      invoke the function directly to avoid the workqueue."
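      
      The quoted check, as a sketch mirroring the helper from commit
      0266d81e (is_percpu_thread() is a real kernel helper; the wrapper
      name is illustrative):
      
      static long call_on_cpu(int cpu, long (*fn)(void *), void *arg)
      {
              /* The hotplug thread is a per-CPU kthread already pinned
                 to the target CPU, so call directly instead of bouncing
                 the work through the workqueue and deadlocking on the
                 flush. */
              if (is_percpu_thread() && cpu == smp_processor_id())
                      return fn(arg);
      
              return work_on_cpu(cpu, fn, arg);
      }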
      
       WARNING: possible circular locking dependency detected
       ------------------------------------------------------
       cpuhp/1/15 is trying to acquire lock:
       ffffc90003447a28 ((work_completion)(&wfc.work)){+.+.}-{0:0}, at: __flush_work+0x4c6/0x630
      
       but task is already holding lock:
       ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20
      
       which lock already depends on the new lock.
      
       the existing dependency chain (in reverse order) is:
      
       -> #1 (cpu_hotplug_lock){++++}-{0:0}:
       cpus_read_lock+0x3e/0xc0
       irq_calc_affinity_vectors+0x5f/0x91
       __pci_enable_msix_range+0x10f/0x9a0
       pci_alloc_irq_vectors_affinity+0x13e/0x1f0
       pci_alloc_irq_vectors_affinity at drivers/pci/msi.c:1208
       pqi_ctrl_init+0x72f/0x1618 [smartpqi]
       pqi_pci_probe.cold.63+0x882/0x892 [smartpqi]
       local_pci_probe+0x7a/0xc0
       work_for_cpu_fn+0x2e/0x50
       process_one_work+0x57e/0xb90
       worker_thread+0x363/0x5b0
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      
       -> #0 ((work_completion)(&wfc.work)){+.+.}-{0:0}:
       __lock_acquire+0x2244/0x32a0
       lock_acquire+0x1a2/0x680
       __flush_work+0x4e6/0x630
       work_on_cpu+0x114/0x160
       acpi_processor_ffh_cstate_probe+0x129/0x250
       acpi_processor_evaluate_cst+0x4c8/0x580
       acpi_processor_get_power_info+0x86/0x740
       acpi_processor_hotplug+0xc3/0x140
       acpi_soft_cpu_online+0x102/0x1d0
       cpuhp_invoke_callback+0x197/0x1120
       cpuhp_thread_fun+0x252/0x2f0
       smpboot_thread_fn+0x255/0x440
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      
       other info that might help us debug this:
      
       Chain exists of:
       (work_completion)(&wfc.work) --> cpuhp_state-up --> cpuidle_lock
      
       Possible unsafe locking scenario:
      
       CPU0                    CPU1
       ----                    ----
       lock(cpuidle_lock);
                               lock(cpuhp_state-up);
                               lock(cpuidle_lock);
       lock((work_completion)(&wfc.work));
      
       *** DEADLOCK ***
      
       3 locks held by cpuhp/1/15:
       #0: ffffffffaf51ab10 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
       #1: ffffffffaf51ad40 (cpuhp_state-up){+.+.}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
       #2: ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20
      
       Call Trace:
       dump_stack+0xa0/0xea
       print_circular_bug.cold.52+0x147/0x14c
       check_noncircular+0x295/0x2d0
       __lock_acquire+0x2244/0x32a0
       lock_acquire+0x1a2/0x680
       __flush_work+0x4e6/0x630
       work_on_cpu+0x114/0x160
       acpi_processor_ffh_cstate_probe+0x129/0x250
       acpi_processor_evaluate_cst+0x4c8/0x580
       acpi_processor_get_power_info+0x86/0x740
       acpi_processor_hotplug+0xc3/0x140
       acpi_soft_cpu_online+0x102/0x1d0
       cpuhp_invoke_callback+0x197/0x1120
       cpuhp_thread_fun+0x252/0x2f0
       smpboot_thread_fn+0x255/0x440
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      Signed-off-by: Qian Cai <cai@lca.pw>
      Tested-by: Borislav Petkov <bp@suse.de>
      [ rjw: Subject ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  17. 30 Mar 2020 (7 commits)
  18. 25 Mar 2020 (1 commit)
  19. 21 Mar 2020 (1 commit)
  20. 22 Feb 2020 (1 commit)
  21. 16 Feb 2020 (1 commit)