1. 04 Jun, 2020 (3 commits)
  2. 11 Apr, 2020 (11 commits)
  3. 09 Apr, 2020 (1 commit)
  4. 08 Apr, 2020 (19 commits)
  5. 07 Apr, 2020 (4 commits)
  6. 04 Apr, 2020 (1 commit)
    • x86: ACPI: fix CPU hotplug deadlock · 696ac2e3
      Authored by Qian Cai
      Similar to commit 0266d81e ("acpi/processor: Prevent cpu hotplug
      deadlock"), except this is for acpi_processor_ffh_cstate_probe():
      
      "The problem is that the work is scheduled on the current CPU from the
      hotplug thread associated with that CPU.
      
      It's not required to invoke these functions via the workqueue because
      the hotplug thread runs on the target CPU already.
      
      Check whether current is a per cpu thread pinned on the target CPU and
      invoke the function directly to avoid the workqueue."
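      Purely for illustration, here is a minimal C sketch of that
      check-then-call pattern; the wrapper name run_on_target_cpu() is
      hypothetical, while is_percpu_thread(), smp_processor_id() and
      work_on_cpu() are the existing kernel helpers the quoted text refers
      to:

        #include <linux/sched.h>      /* is_percpu_thread() */
        #include <linux/smp.h>        /* smp_processor_id() */
        #include <linux/workqueue.h>  /* work_on_cpu() */

        /*
         * Run @fn(@arg) on @cpu.  When the caller is already a per-CPU
         * kthread pinned on @cpu (e.g. the hotplug thread), call @fn
         * directly; queuing a work item and flushing it from that context
         * is what creates the lock dependency reported below.
         */
        static long run_on_target_cpu(int cpu, long (*fn)(void *), void *arg)
        {
                if (is_percpu_thread() && cpu == smp_processor_id())
                        return fn(arg);

                return work_on_cpu(cpu, fn, arg);
        }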
      
       WARNING: possible circular locking dependency detected
       ------------------------------------------------------
       cpuhp/1/15 is trying to acquire lock:
       ffffc90003447a28 ((work_completion)(&wfc.work)){+.+.}-{0:0}, at: __flush_work+0x4c6/0x630
      
       but task is already holding lock:
       ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20
      
       which lock already depends on the new lock.
      
       the existing dependency chain (in reverse order) is:
      
       -> #1 (cpu_hotplug_lock){++++}-{0:0}:
       cpus_read_lock+0x3e/0xc0
       irq_calc_affinity_vectors+0x5f/0x91
       __pci_enable_msix_range+0x10f/0x9a0
       pci_alloc_irq_vectors_affinity+0x13e/0x1f0
       pci_alloc_irq_vectors_affinity at drivers/pci/msi.c:1208
       pqi_ctrl_init+0x72f/0x1618 [smartpqi]
       pqi_pci_probe.cold.63+0x882/0x892 [smartpqi]
       local_pci_probe+0x7a/0xc0
       work_for_cpu_fn+0x2e/0x50
       process_one_work+0x57e/0xb90
       worker_thread+0x363/0x5b0
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      
       -> #0 ((work_completion)(&wfc.work)){+.+.}-{0:0}:
       __lock_acquire+0x2244/0x32a0
       lock_acquire+0x1a2/0x680
       __flush_work+0x4e6/0x630
       work_on_cpu+0x114/0x160
       acpi_processor_ffh_cstate_probe+0x129/0x250
       acpi_processor_evaluate_cst+0x4c8/0x580
       acpi_processor_get_power_info+0x86/0x740
       acpi_processor_hotplug+0xc3/0x140
       acpi_soft_cpu_online+0x102/0x1d0
       cpuhp_invoke_callback+0x197/0x1120
       cpuhp_thread_fun+0x252/0x2f0
       smpboot_thread_fn+0x255/0x440
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      
       other info that might help us debug this:
      
       Chain exists of:
       (work_completion)(&wfc.work) --> cpuhp_state-up --> cpuidle_lock
      
       Possible unsafe locking scenario:
      
       CPU0                    CPU1
       ----                    ----
       lock(cpuidle_lock);
                               lock(cpuhp_state-up);
                               lock(cpuidle_lock);
       lock((work_completion)(&wfc.work));
      
       *** DEADLOCK ***
      
       3 locks held by cpuhp/1/15:
       #0: ffffffffaf51ab10 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
       #1: ffffffffaf51ad40 (cpuhp_state-up){+.+.}-{0:0}, at: cpuhp_thread_fun+0x69/0x2f0
       #2: ffffffffafa1c0e8 (cpuidle_lock){+.+.}-{3:3}, at: cpuidle_pause_and_lock+0x17/0x20
      
       Call Trace:
       dump_stack+0xa0/0xea
       print_circular_bug.cold.52+0x147/0x14c
       check_noncircular+0x295/0x2d0
       __lock_acquire+0x2244/0x32a0
       lock_acquire+0x1a2/0x680
       __flush_work+0x4e6/0x630
       work_on_cpu+0x114/0x160
       acpi_processor_ffh_cstate_probe+0x129/0x250
       acpi_processor_evaluate_cst+0x4c8/0x580
       acpi_processor_get_power_info+0x86/0x740
       acpi_processor_hotplug+0xc3/0x140
       acpi_soft_cpu_online+0x102/0x1d0
       cpuhp_invoke_callback+0x197/0x1120
       cpuhp_thread_fun+0x252/0x2f0
       smpboot_thread_fn+0x255/0x440
       kthread+0x1f4/0x220
       ret_from_fork+0x27/0x50
      Signed-off-by: Qian Cai <cai@lca.pw>
      Tested-by: Borislav Petkov <bp@suse.de>
      [ rjw: Subject ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  7. 03 Apr, 2020 (1 commit)
    • KVM: SVM: Split svm_vcpu_run inline assembly to separate file · 199cd1d7
      Authored by Uros Bizjak
      The compiler (GCC) does not like a situation where an inline assembly
      block clobbers all available machine registers in the middle of a
      function. This situation can be found in the function svm_vcpu_run()
      in kvm/svm.c and results in many register spills and fills to/from
      the stack frame.
      
      This patch fixes the issue with the same approach that was used for
      VMX some time ago: the big inline assembly block is moved to a
      separate assembly .S file, taking all ABI requirements into account.
      
      There are two main benefits of the above approach:
      
      * elimination of several register spills and fills to/from the stack
      frame, and consequently a smaller function .text size. The binary size
      of svm_vcpu_run is lowered from 2019 to 1626 bytes.
      
      * more efficient access to the register save array. Currently, the
      register save array is accessed as:
      
          7b00:    48 8b 98 28 02 00 00     mov    0x228(%rax),%rbx
          7b07:    48 8b 88 18 02 00 00     mov    0x218(%rax),%rcx
          7b0e:    48 8b 90 20 02 00 00     mov    0x220(%rax),%rdx
      
      whereas when a pointer to the register array is passed as a function
      argument, one gets:
      
        12:    48 8b 48 08              mov    0x8(%rax),%rcx
        16:    48 8b 50 10              mov    0x10(%rax),%rdx
        1a:    48 8b 58 18              mov    0x18(%rax),%rbx
      
      As a result, and taking into account that the new assembly function is
      229 bytes, the total size is lowered by 164 bytes.
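
      As a rough illustration of why the displacements shrink, consider the
      following C sketch; the struct layout is made up (it is not the actual
      vcpu_svm layout) and the names are hypothetical. Reaching the register
      array through the outer structure forces a 32-bit displacement on every
      access, while passing a pointer to the array itself lets the compiler
      use short 8-bit (or zero) displacements:

        /* Hypothetical layout; offsets chosen only to mirror the listings above. */
        struct regs_file {
                unsigned long regs[16];
        };

        struct vcpu {
                char other_state[0x218];  /* register file sits deep inside the struct */
                struct regs_file rf;
        };

        /* Accesses compile to mov 0x218(%rdi),... style (32-bit displacement). */
        unsigned long sum_via_vcpu(struct vcpu *v)
        {
                return v->rf.regs[0] + v->rf.regs[1] + v->rf.regs[2];
        }

        /* Accesses compile to mov 0x8(%rdi),... style (8-bit or no displacement). */
        unsigned long sum_via_regs(struct regs_file *r)
        {
                return r->regs[0] + r->regs[1] + r->regs[2];
        }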
      Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>