1. 27 April 2018 (1 commit)
    • powerpc: Fix deadlock with multiple calls to smp_send_stop · 6029755e
      Committed by Nicholas Piggin
      smp_send_stop can lock up the IPI path for any subsequent calls,
      because the receiving CPUs spin in their handler function. This
      started becoming a problem with the addition of an smp_send_stop
      call in the reboot path, because panics can reboot after doing
      their own smp_send_stop.
      
      The NMI IPI variant was fixed with ac61c115 ("powerpc: Fix
      smp_send_stop NMI IPI handling"), which leaves the smp_call_function
      variant.
      
      This is fixed by having smp_send_stop only ever do the
      smp_call_function once. This is a bit less robust than the NMI IPI
      fix, because any other call to smp_call_function after smp_send_stop
      could deadlock, but that has always been the case, and it has not
      been a problem before.
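      
      A minimal sketch of the approach described above (illustrative; not the
      exact patch):
      
        static void stop_this_cpu(void *dummy); /* existing handler that never returns */
        
        void smp_send_stop(void)
        {
                static bool stopped = false;
        
                /*
                 * Only ever fire the stop IPI once; a second smp_call_function()
                 * would spin waiting on CPUs already parked in stop_this_cpu().
                 */
                if (stopped)
                        return;
        
                stopped = true;
        
                smp_call_function(stop_this_cpu, NULL, 0);
        }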
      
      Fixes: f2748bdf ("powerpc/powernv: Always stop secondaries before reboot/shutdown")
      Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 25 April 2018 (1 commit)
    • powerpc: Fix smp_send_stop NMI IPI handling · ac61c115
      Committed by Nicholas Piggin
      The NMI IPI handler for a receiving CPU increments nmi_ipi_busy_count
      over the handler function call, which causes later smp_send_nmi_ipi()
      callers to spin until the call is finished.
      
      The stop_this_cpu() function never returns, so the busy count is never
      decremented, which can cause the system to hang in some cases. For
      example panic() will call smp_send_stop() early on which calls
      stop_this_cpu() on other CPUs, then later in the reboot path,
      pnv_restart() will call smp_send_stop() again, which hangs.
      
      Fix this by adding a special case to the stop_this_cpu() handler to
      decrement the busy count, because it will never return.
      
      Now that the NMI/non-NMI versions of stop_this_cpu() are different,
      split them out into separate functions rather than doing #ifdef tricks
      to share the body between the two functions.
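      
      A sketch of the NMI-side variant (illustrative; the handler name and the
      spin loop below are indicative rather than copied from the patch):
      
        static void nmi_stop_this_cpu(struct pt_regs *regs)
        {
                /*
                 * This handler never returns, so the generic NMI IPI code would
                 * never see it complete; drop the busy count here so that later
                 * smp_send_nmi_ipi() callers do not spin forever.
                 */
                nmi_ipi_lock();
                nmi_ipi_busy_count--;
                nmi_ipi_unlock();
        
                spin_begin();
                while (1)
                        spin_cpu_relax();
        }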
      
      Fixes: 6bed3237 ("powerpc: use NMI IPI for smp_send_stop")
      Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Split out the functions, tweak change log a bit]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 03 April 2018 (2 commits)
  4. 30 March 2018 (1 commit)
  5. 16 January 2018 (1 commit)
    • powernv/kdump: Fix cases where the kdump kernel can get HMI's · 4145f358
      Committed by Balbir Singh
      Certain HMIs, such as malfunction errors, propagate through
      all threads/cores on the system. If a thread was offline
      prior to us crashing the system and jumping to the kdump
      kernel, bad things happen when it wakes up due to an HMI
      in the kdump kernel.
      
      There are several possible ways to solve this problem:
      
      1. Put the offline cores in a state such that they are
      not woken up for machine check and HMI errors. This
      does not work, since we might need to wake up offline
      threads to handle TB errors
      2. Ignore HMI errors: set up HMEER to mask HMI errors,
      but this still leaves the window open for any MCEs,
      and masking them for the duration of the dump might
      be a concern
      3. Wake up offline CPUs, as in send them to
      crash_ipi_callback (not wake them up as in mark them
      online as seen by the hotplug). kexec does a
      wake_online_cpus() call; this patch does something
      similar, but instead sends an IPI and forces them into
      crash_ipi_callback()
      
      This patch takes approach #3.
      
      Care is taken to enable this only for powernv platforms
      via crash_wake_offline (a global value set at setup
      time). The crash code sends out IPIs to all CPUs,
      which then move to crash_ipi_callback and kexec_smp_wait().
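      
      Illustrative sketch only; crash_smp_wake_offline() and
      crash_wake_cpu_handler() below are placeholder names, not the symbols
      added by the patch:
      
        static void crash_wake_cpu_handler(struct pt_regs *regs)
        {
                /* Lands in the normal crash path, then kexec_smp_wait() */
                crash_ipi_callback(regs);
        }
        
        static void crash_smp_wake_offline(void)
        {
                int cpu;
        
                if (!crash_wake_offline)
                        return;
        
                for_each_present_cpu(cpu) {
                        if (cpu_online(cpu))
                                continue;
                        /* Force the offline thread into the crash path */
                        smp_send_nmi_ipi(cpu, crash_wake_cpu_handler, 1000000);
                }
        }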
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  6. 31 August 2017 (4 commits)
    • powerpc/smp: Add Power9 scheduler topology · 96d91431
      Committed by Oliver O'Halloran
      In previous generations of Power processors each core had a private L2
      cache. The Power 9 processor has a slightly different design where the
      L2 cache is shared among pairs of cores rather than being completely
      private.
      
      Making the scheduler aware of this cache sharing allows the scheduler to
      make better migration decisions. For example, if two CPU-heavy tasks
      share a core then one task can be migrated to the paired core to improve
      throughput. Under the existing three-level topology the task could be
      migrated to any core on the same chip, while with the new topology it
      would be preferentially migrated to the paired core so it remains
      cache-hot.
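      
      In outline, the new topology adds a CACHE level between SMT and DIE
      (a sketch; shared_cache_mask() and powerpc_shared_cache_flags() are
      indicative names):
      
        static const struct cpumask *shared_cache_mask(int cpu)
        {
                return cpu_l2_cache_mask(cpu);
        }
        
        static struct sched_domain_topology_level power9_topology[] = {
        #ifdef CONFIG_SCHED_SMT
                { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
        #endif
                { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
                { cpu_cpu_mask, SD_INIT_NAME(DIE) },
                { NULL, },
        };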
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/smp: Add cpu_l2_cache_map · 2a636a56
      Committed by Oliver O'Halloran
      We want to add an extra level to the CPU scheduler topology to account
      for cores which share a cache. To do this we need to build a cpumask
      for each CPU that indicates which CPUs share this cache, to use as an
      input to the scheduler.
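      
      In outline this is a per-cpu cpumask plus an accessor, mirroring the
      existing sibling/core maps (a sketch of what that could look like):
      
        DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
        
        static inline struct cpumask *cpu_l2_cache_mask(int cpu)
        {
                return per_cpu(cpu_l2_cache_map, cpu);
        }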
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/smp: Rework CPU topology construction · df52f671
      Committed by Oliver O'Halloran
      The CPU scheduler topology is constructed from a number of per-cpu
      cpumasks which describe which sets of logical CPUs are related in some
      fashion. Current code that handles constructing these masks when CPUs
      are hot(un)plugged can be simplified a bit by exploiting the fact that
      the scheduler requires higher levels of the topology (e.g. package level
      groupings) to be supersets of the lower levels (e.g. threads in a core).
      This patch reworks the cpumask construction to be simpler and easier to
      extend with extra topology levels.
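      
      For example, the construction can be expressed with a small helper that
      links two CPUs at a given mask level (a sketch; the helper name
      set_cpus_related() is indicative):
      
        static void set_cpus_related(int i, int j,
                                     struct cpumask *(*get_cpumask)(int))
        {
                cpumask_set_cpu(i, get_cpumask(j));
                cpumask_set_cpu(j, get_cpumask(i));
        }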
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      [mpe: Fix CONFIG_HOTPLUG_CPU=n build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/smp: Use cpu_to_chip_id() to find core siblings · e3d8b67e
      Committed by Oliver O'Halloran
      When building the CPU scheduler topology the kernel uses the ibm,chipid
      property from the devicetree to group logical CPUs. Currently the DT
      search for this property is open-coded in smp.c and this functionality
      is a duplication of what's in cpu_to_chip_id() already. This patch
      removes the existing search in favor of that.
      
      It's worth mentioning that the semantics of the search are different
      in cpu_to_chip_id(). When there is no ibm,chipid in the CPU node it
      will also search /cpus and / for the property, but this should not
      affect the output topology.
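      
      The sibling search then reduces to a chip-id comparison, along these
      lines (sketch):
      
        int i, chipid = cpu_to_chip_id(cpu);
        
        /* cpu_to_chip_id() returns -1 if no ibm,chipid property is found */
        for_each_cpu(i, cpu_online_mask) {
                if (cpu_to_chip_id(i) == chipid)
                        cpumask_set_cpu(i, cpu_core_mask(cpu));
        }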
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  7. 09 August 2017 (1 commit)
  8. 28 July 2017 (1 commit)
    • powerpc/smp: Call smp_ops->setup_cpu() directly on the boot CPU · 7b7622bb
      Committed by Michael Ellerman
      In smp_cpus_done() we need to call smp_ops->setup_cpu() for the boot
      CPU, which means it has to run *on* the boot CPU.
      
      In the past we ensured it ran on the boot CPU by changing the CPU
      affinity mask of current directly. That was removed in commit
      6d11b87d ("powerpc/smp: Replace open coded task affinity logic"),
      and replaced with a work queue call.
      
      Unfortunately using a work queue leads to a lockdep warning, now that
      the CPU hotplug lock is a regular semaphore:
      
        ======================================================
        WARNING: possible circular locking dependency detected
        ...
        kworker/0:1/971 is trying to acquire lock:
         (cpu_hotplug_lock.rw_sem){++++++}, at: [<c000000000100974>] apply_workqueue_attrs+0x34/0xa0
      
        but task is already holding lock:
         ((&wfc.work)){+.+.+.}, at: [<c0000000000fdb2c>] process_one_work+0x25c/0x800
        ...
             CPU0                    CPU1
             ----                    ----
        lock((&wfc.work));
                                     lock(cpu_hotplug_lock.rw_sem);
                                     lock((&wfc.work));
        lock(cpu_hotplug_lock.rw_sem);
      
      Although the deadlock can't happen in practice, because
      smp_cpus_done() only runs in early boot before CPU hotplug is allowed,
      lockdep can't tell that.
      
      Luckily in commit 8fb12156 ("init: Pin init task to the boot CPU,
      initially") tglx changed the generic code to pin init to the boot CPU
      to begin with. The unpinning of init from the boot CPU happens in
      sched_init_smp(), which is called after smp_cpus_done().
      
      So smp_cpus_done() is always called on the boot CPU, which means we
      don't need the work queue call at all - and the lockdep warning goes
      away.
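      
      With init pinned to the boot CPU, the call can simply be made directly
      (a sketch; the rest of smp_cpus_done() is left out):
      
        void __init smp_cpus_done(unsigned int max_cpus)
        {
                /*
                 * We are guaranteed to run on the boot CPU here, because init
                 * is only unpinned in sched_init_smp(), which runs later.
                 */
                if (smp_ops && smp_ops->setup_cpu)
                        smp_ops->setup_cpu(boot_cpuid);
        
                /* ... remainder of smp_cpus_done() ... */
        }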
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
  9. 13 July 2017 (1 commit)
  10. 02 July 2017 (1 commit)
  11. 28 June 2017 (2 commits)
  12. 23 May 2017 (1 commit)
  13. 03 May 2017 (1 commit)
  14. 28 April 2017 (2 commits)
  15. 15 April 2017 (1 commit)
    • powerpc/smp: Replace open coded task affinity logic · 6d11b87d
      Committed by Thomas Gleixner
      The init task invokes smp_ops->setup_cpu() from smp_cpus_done(). The init
      task can run on any online CPU at this point, but the setup_cpu() callback
      must be invoked on the boot CPU. This is achieved by temporarily setting the
      affinity of the calling user space thread to the requested CPU and resetting
      it to the original affinity afterwards.
      
      That's racy vs. CPU hotplug and concurrent affinity settings for that
      thread resulting in code executing on the wrong CPU and overwriting the
      new affinity setting.
      
      That's actually not a problem in this context as neither CPU hotplug nor
      affinity settings can happen, but the access to task_struct::cpus_allowed
      is about to be restricted.
      
      Replace it with a call to work_on_cpu_safe() which achieves the same result.
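      
      Roughly (a sketch; the work-function name smp_setup_cpu_workfn is
      illustrative):
      
        static long smp_setup_cpu_workfn(void *data)
        {
                smp_ops->setup_cpu(boot_cpuid);
                return 0;
        }
        
        /* In smp_cpus_done(): run the callback on the boot CPU, safe vs. hotplug */
        work_on_cpu_safe(boot_cpuid, smp_setup_cpu_workfn, NULL);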
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/20170412201042.518053336@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  16. 13 April 2017 (2 commits)
  17. 07 April 2017 (1 commit)
    • powerpc/smp: Remove migrate_irq() custom implementation · a978e139
      Committed by Benjamin Herrenschmidt
      Some powerpc platforms use this to move IRQs away from a CPU being
      unplugged. This function has several bugs such as not taking the right
      locks or failing to NULL check pointers.
      
      There's a new generic function doing exactly the same thing without all
      the bugs, so let's use it instead.
      
      mpe: The obvious place for the select of GENERIC_IRQ_MIGRATION is on
      HOTPLUG_CPU, but that doesn't work. On some configs PM_SLEEP_SMP will
      select HOTPLUG_CPU even though its dependencies are not met, which means
      the select of GENERIC_IRQ_MIGRATION doesn't happen. That leads to the
      build breaking. Fix it by moving the select of GENERIC_IRQ_MIGRATION to
      SMP.
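      
      After the change, the CPU-offline path leans on the generic helper,
      roughly (a sketch; assuming the generic function referred to is
      irq_migrate_all_off_this_cpu(), and the exact call site may differ):
      
        int generic_cpu_disable(void)
        {
                unsigned int cpu = smp_processor_id();
        
                if (cpu == boot_cpuid)
                        return -EBUSY;
        
                set_cpu_online(cpu, false);
                /* Generic replacement for the old powerpc migrate_irqs() */
                irq_migrate_all_off_this_cpu();
        
                return 0;
        }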
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  18. 06 April 2017 (1 commit)
  19. 03 March 2017 (1 commit)
    • sched/headers: Move task->mm handling methods to <linux/sched/mm.h> · 68e21be2
      Committed by Ingo Molnar
      Move the following task->mm helper APIs into a new header file,
      <linux/sched/mm.h>, to further reduce the size and complexity
      of <linux/sched.h>.
      
      Here is how the APIs are used in various kernel files:
      
        # mm_alloc():
        arch/arm/mach-rpc/ecard.c
        fs/exec.c
        include/linux/sched/mm.h
        kernel/fork.c
      
        # __mmdrop():
        arch/arc/include/asm/mmu_context.h
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmdrop():
        arch/arm/mach-rpc/ecard.c
        arch/m68k/sun3/mmu_emu.c
        arch/x86/mm/tlb.c
        drivers/gpu/drm/amd/amdkfd/kfd_process.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/hw/hfi1/file_ops.c
        drivers/vfio/vfio_iommu_spapr_tce.c
        fs/exec.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/mmu_notifier.h
        include/linux/sched/mm.h
        kernel/fork.c
        kernel/futex.c
        kernel/sched/core.c
        mm/khugepaged.c
        mm/ksm.c
        mm/mmu_context.c
        mm/mmu_notifier.c
        mm/oom_kill.c
        virt/kvm/kvm_main.c
      
        # mmdrop_async_fn():
        include/linux/sched/mm.h
      
        # mmdrop_async():
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmget_not_zero():
        fs/userfaultfd.c
        include/linux/sched/mm.h
        mm/oom_kill.c
      
        # mmput():
        arch/arc/include/asm/mmu_context.h
        arch/arc/kernel/troubleshoot.c
        arch/frv/mm/mmu-context.c
        arch/powerpc/platforms/cell/spufs/context.c
        arch/sparc/include/asm/mmu_context_32.h
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/core/uverbs_main.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/exec.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/events/uprobes.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/oom_kill.c
        mm/process_vm_access.c
        mm/rmap.c
        mm/swapfile.c
        mm/util.c
        virt/kvm/async_pf.c
      
        # mmput_async():
        include/linux/sched/mm.h
        kernel/fork.c
        mm/oom_kill.c
      
        # get_task_mm():
        arch/arc/kernel/troubleshoot.c
        arch/powerpc/platforms/cell/spufs/context.c
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/util.c
      
        # mm_access():
        fs/proc/base.c
        include/linux/sched/mm.h
        kernel/fork.c
        mm/process_vm_access.c
      
        # mm_release():
        arch/arc/include/asm/mmu_context.h
        fs/exec.c
        include/linux/sched/mm.h
        include/uapi/linux/sched.h
        kernel/exit.c
        kernel/fork.c
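      
      For example, a caller of these helpers now pulls in the new header
      (illustrative function, not taken from the patch):
      
        #include <linux/sched/mm.h>
        
        static void inspect_task_mm(struct task_struct *task)
        {
                struct mm_struct *mm = get_task_mm(task);
        
                if (!mm)
                        return;
                /* ... use mm ... */
                mmput(mm);
        }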
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 02 March 2017 (2 commits)
  21. 28 February 2017 (1 commit)
  22. 30 November 2016 (1 commit)
  23. 22 August 2016 (1 commit)
    • powerpc, hotplug: Avoid to touch non-existent cpumasks. · 19ab58d1
      Committed by Boqun Feng
      We observed a kernel oops when running a PPC guest with config NR_CPUS=4
      and qemu option "-smp cores=1,threads=8":
      
      [   30.634781] Unable to handle kernel paging request for data at
      address 0xc00000014192eb17
      [   30.636173] Faulting instruction address: 0xc00000000003e5cc
      [   30.637069] Oops: Kernel access of bad area, sig: 11 [#1]
      [   30.637877] SMP NR_CPUS=4 NUMA pSeries
      [   30.638471] Modules linked in:
      [   30.638949] CPU: 3 PID: 27 Comm: migration/3 Not tainted
      4.7.0-07963-g9714b26 #1
      [   30.640059] task: c00000001e29c600 task.stack: c00000001e2a8000
      [   30.640956] NIP: c00000000003e5cc LR: c00000000003e550 CTR:
      0000000000000000
      [   30.642001] REGS: c00000001e2ab8e0 TRAP: 0300   Not tainted
      (4.7.0-07963-g9714b26)
      [   30.643139] MSR: 8000000102803033 <SF,VEC,VSX,FP,ME,IR,DR,RI,LE,TM[E]>  CR: 22004084  XER: 00000000
      [   30.644583] CFAR: c000000000009e98 DAR: c00000014192eb17 DSISR: 40000000 SOFTE: 0
      GPR00: c00000000140a6b8 c00000001e2abb60 c0000000016dd300 0000000000000003
      GPR04: 0000000000000000 0000000000000004 c0000000016e5920 0000000000000008
      GPR08: 0000000000000004 c00000014192eb17 0000000000000000 0000000000000020
      GPR12: c00000000140a6c0 c00000000ffffc00 c0000000000d3ea8 c00000001e005680
      GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      GPR20: 0000000000000000 c00000001e6b3a00 0000000000000000 0000000000000001
      GPR24: c00000001ff85138 c00000001ff85130 000000001eb6f000 0000000000000001
      GPR28: 0000000000000000 c0000000017014e0 0000000000000000 0000000000000018
      [   30.653882] NIP [c00000000003e5cc] __cpu_disable+0xcc/0x190
      [   30.654713] LR [c00000000003e550] __cpu_disable+0x50/0x190
      [   30.655528] Call Trace:
      [   30.655893] [c00000001e2abb60] [c00000000003e550] __cpu_disable+0x50/0x190 (unreliable)
      [   30.657280] [c00000001e2abbb0] [c0000000000aca0c] take_cpu_down+0x5c/0x100
      [   30.658365] [c00000001e2abc10] [c000000000163918] multi_cpu_stop+0x1a8/0x1e0
      [   30.659617] [c00000001e2abc60] [c000000000163cc0] cpu_stopper_thread+0xf0/0x1d0
      [   30.660737] [c00000001e2abd20] [c0000000000d8d70] smpboot_thread_fn+0x290/0x2a0
      [   30.661879] [c00000001e2abd80] [c0000000000d3fa8] kthread+0x108/0x130
      [   30.662876] [c00000001e2abe30] [c000000000009968] ret_from_kernel_thread+0x5c/0x74
      [   30.664017] Instruction dump:
      [   30.664477] 7bde1f24 38a00000 787f1f24 3b600001 39890008 7d204b78 7d05e214 7d0b07b4
      [   30.665642] 796b1f24 7d26582a 7d204a14 7d29f214 <7d4048a8> 7d4a3878 7d4049ad 40c2fff4
      [   30.666854] ---[ end trace 32643b7195717741 ]---
      
      The reason for this is that in __cpu_disable(), when we try to set the
      cpu_sibling_mask or cpu_core_mask of the sibling CPUs of the disabled
      one, we don't check whether the current configuration employs those
      sibling CPUs (hw threads). And if a CPU is not employed by a
      configuration, the percpu structures cpu_{sibling,core}_mask are not
      allocated; therefore accessing those cpumasks will result in problems as
      above.
      
      This patch fixes this problem by adding an additional check on whether the
      id is no less than nr_cpu_ids in the sibling CPU iteration code.
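      
      In effect the sibling loop gains a bounds check before touching the
      per-cpu masks, along these lines (sketch):
      
        int i, base = cpu_first_thread_sibling(cpu);
        
        for (i = 0; i < threads_per_core; i++) {
                if (base + i >= nr_cpu_ids)
                        break;  /* mask never allocated for this id */
                cpumask_clear_cpu(cpu, cpu_sibling_mask(base + i));
                cpumask_clear_cpu(base + i, cpu_sibling_mask(cpu));
        }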
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  24. 01 August 2016 (1 commit)
  25. 21 June 2016 (1 commit)
    • powerpc: export cpu_to_core_id() · f8ab4810
      Committed by Mauricio Faria de Oliveira
      Export cpu_to_core_id(). This will be used by the lpfc driver.
      
      This enables topology_core_id() from <linux/topology.h> (defined
      to cpu_to_core_id() in arch/powerpc/include/asm/topology.h) to be
      used by (non-builtin) modules.
      
      That is arch-neutral, already used by e.g. drivers/base/topology.c,
      but it is builtin (obj-y in Makefile) and thus didn't need the export.
      
      Since the module uses topology_core_id() and this is defined to
      cpu_to_core_id(), it needs the export, otherwise:
      
          ERROR: "cpu_to_core_id" [drivers/scsi/lpfc/lpfc.ko] undefined!
      
      Tested on next-20160601.
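      
      The change itself boils down to exporting the existing symbol (a sketch;
      assuming EXPORT_SYMBOL_GPL rather than EXPORT_SYMBOL):
      
        /* arch/powerpc/kernel/smp.c, next to the cpu_to_core_id() definition */
        EXPORT_SYMBOL_GPL(cpu_to_core_id);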
      Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  26. 16 June 2016 (2 commits)
  27. 06 May 2016 (1 commit)
  28. 05 March 2016 (1 commit)
  29. 02 March 2016 (1 commit)
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Committed by Thomas Gleixner
      Let the non-boot CPUs call into idle with the corresponding hotplug state, so
      the hotplug core can handle the further bringup. That's a first step to
      converting the boot side of the hotplugged CPUs to do all the synchronization
      with the other side through the state machine. For now it'll only start the
      hotplug thread and kick the full bringup of the CPU.
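      
      On powerpc this shows up as entering idle with the new hotplug state at
      the end of start_secondary() (a sketch):
      
        /* final step of start_secondary() */
        local_irq_enable();
        
        cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
        
        BUG();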
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  30. 29 February 2016 (2 commits)
    • KVM: PPC: Book3S HV: Send IPI to host core to wake VCPU · e17769eb
      Committed by Suresh E. Warrier
      This patch adds support to real-mode KVM to search for a core
      running in the host partition and send it an IPI message with
      the VCPU to be woken. This avoids having to switch to the host
      partition to complete an H_IPI hypercall when the VCPU which
      is the target of the H_IPI is not loaded (is not running
      in the guest).
      
      The patch also includes the support in the IPI handler running
      in the host to do the wakeup by calling kvmppc_xics_ipi_action
      for the PPC_MSG_RM_HOST_ACTION message.
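      
      In smp_ipi_demux() this appears as an extra message check, roughly
      (a sketch, assuming the usual IPI_MESSAGE() demux pattern and the
      relevant KVM config guards):
      
        #if defined(CONFIG_KVM_XICS) && defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE)
                if (all & IPI_MESSAGE(PPC_MSG_RM_HOST_ACTION))
                        kvmppc_xics_ipi_action();
        #endif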
      
      When a guest is being destroyed, we need to ensure that there
      are no pending IPIs waiting to wake up a VCPU before we free
      the VCPUs of the guest. This is accomplished by:
      - Forcing a PPC_MSG_CALL_FUNCTION IPI to be completed by all CPUs
        before freeing any VCPUs in kvm_arch_destroy_vm().
      - Ensuring that any PPC_MSG_RM_HOST_ACTION messages are executed
        before any other PPC_MSG_CALL_FUNCTION messages.
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc/smp: Add smp_muxed_ipi_set_message · 31639c77
      Committed by Suresh Warrier
      smp_muxed_ipi_message_pass() invokes smp_ops->cause_ipi, which
      uses an ioremapped address to access registers on the XICS
      interrupt controller to cause the IPI. Because of this, real-mode
      callers cannot call smp_muxed_ipi_message_pass() for IPI
      messaging.
      
      This patch creates a separate function smp_muxed_ipi_set_message
      just to set the IPI message without the cause_ipi routine.
      After calling this function to set the IPI message, real
      mode callers must cause the IPI by writing to the XICS registers
      directly.
      
      As part of this, we also change smp_muxed_ipi_message_pass
      to call smp_muxed_ipi_set_message to set the message instead
      of doing it directly inside the routine.
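      
      A sketch of the resulting split (details such as the barrier placement
      may differ from the final code):
      
        void smp_muxed_ipi_set_message(int cpu, int msg)
        {
                struct cpu_messages *info = &per_cpu(ipi_message, cpu);
                char *message = (char *)&info->messages;
        
                /* Order previous accesses before the message is set */
                smp_mb();
                message[msg] = 1;
        }
        
        void smp_muxed_ipi_message_pass(int cpu, int msg)
        {
                struct cpu_messages *info = &per_cpu(ipi_message, cpu);
        
                smp_muxed_ipi_set_message(cpu, msg);
                /* cause_ipi must include its own barrier before raising the IPI */
                smp_ops->cause_ipi(cpu, info->data);
        }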
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>