1. 25 July 2018 (1 commit)
    • stop_machine: Disable preemption after queueing stopper threads · 2610e889
      Isaac J. Manjarres committed
      This commit:
      
        9fb8d5dc ("stop_machine, Disable preemption when waking two stopper threads")
      
      does not fully address the race condition that can occur
      as follows:
      
      On one CPU, call it CPU 3, thread 1 invokes
      cpu_stop_queue_two_works(2, 3,...), and the execution is such
      that thread 1 queues the works for migration/2 and migration/3,
      and is preempted after releasing the locks for migration/2 and
      migration/3, but before waking the threads.
      
      Then, on CPU 2, a kworker, call it thread 2, is running,
      and it invokes cpu_stop_queue_two_works(1, 2,...), such that
      thread 2 queues the works for migration/1 and migration/2.
      Meanwhile, on CPU 3, thread 1 resumes execution, and wakes
      migration/2 and migration/3. This means that when CPU 2
      releases the locks for migration/1 and migration/2, but before
      it wakes those threads, it can be preempted by migration/2.
      
      If thread 2 is preempted by migration/2, then migration/2 will
      execute the first work item successfully, since migration/3
      was woken up by CPU 3. But when it goes to execute the second
      work item, it disables preemption and calls multi_cpu_stop(),
      so CPU 2 will wait forever for migration/1, which should
      have been woken up by thread 2. However, thread 2 cannot wake
      migration/1: thread 2 is a kworker affine to CPU 2, and CPU 2
      is running migration/2 with preemption disabled, so thread 2
      will never run.
      
      Disable preemption after queueing works for stopper threads
      to ensure that the operation of queueing the works and waking
      the stopper threads is atomic.
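
      A hedged sketch of the resulting shape of cpu_stop_queue_two_works()
      (simplified; names such as __cpu_stop_queue_work(), stop_cpus_in_progress
      and the -EDEADLK retry path follow kernel/stop_machine.c of that era, and
      the details may differ from the actual patch):

        static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
                                            int cpu2, struct cpu_stop_work *work2)
        {
                struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
                struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
                DEFINE_WAKE_Q(wakeq);
                int err;

        retry:
                /* Cover queueing *and* waking, so a stopper woken by someone
                 * else cannot preempt us in between the two steps. */
                preempt_disable();
                raw_spin_lock_irq(&stopper1->lock);
                raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);

                if (!stopper1->enabled || !stopper2->enabled) {
                        err = -ENOENT;
                        goto unlock;
                }
                /* Don't race with __stop_cpus(), which queues in the other order. */
                if (unlikely(stop_cpus_in_progress)) {
                        err = -EDEADLK;
                        goto unlock;
                }
                err = 0;
                __cpu_stop_queue_work(stopper1, work1, &wakeq);
                __cpu_stop_queue_work(stopper2, work2, &wakeq);
        unlock:
                raw_spin_unlock(&stopper2->lock);
                raw_spin_unlock_irq(&stopper1->lock);

                if (unlikely(err == -EDEADLK)) {
                        preempt_enable();
                        while (stop_cpus_in_progress)
                                cpu_relax();
                        goto retry;
                }

                wake_up_q(&wakeq);      /* wake both stoppers ...          */
                preempt_enable();       /* ... before we can be preempted  */
                return err;
        }

      With preemption held across the whole sequence, neither stopper can run on
      this CPU until both have been queued and woken, which closes the window
      described above.
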
      Co-Developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Co-Developed-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
      Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bigeasy@linutronix.de
      Cc: gregkh@linuxfoundation.org
      Cc: matt@codeblueprint.co.uk
      Fixes: 9fb8d5dc ("stop_machine, Disable preemption when waking two stopper threads")
      Link: http://lkml.kernel.org/r/1531856129-9871-1-git-send-email-isaacm@codeaurora.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 15 July 2018 (1 commit)
    • stop_machine: Disable preemption when waking two stopper threads · 9fb8d5dc
      Isaac J. Manjarres committed
      When cpu_stop_queue_two_works() begins to wake the stopper threads, it does
      so without preemption disabled, which leads to the following race
      condition:
      
      The source CPU calls cpu_stop_queue_two_works(), with cpu1 as the source
      CPU, and cpu2 as the destination CPU. When adding the stopper threads to
      the wake queue used in this function, the source CPU stopper thread is
      added first, and the destination CPU stopper thread is added last.
      
      When wake_up_q() is invoked to wake the stopper threads, the threads are
      woken up in the order that they are queued in, so the source CPU's stopper
      thread is woken up first, and it preempts the thread running on the source
      CPU.
      
      The stopper thread will then execute on the source CPU, disable preemption,
      and begin executing multi_cpu_stop(), and wait for an ack from the
      destination CPU's stopper thread, with preemption still disabled. Since the
      worker thread that woke up the stopper thread on the source CPU is affine
      to the source CPU, and preemption is disabled on the source CPU, that
      thread will never run to dequeue the destination CPU's stopper thread from
      the wake queue, and thus, the destination CPU's stopper thread will never
      run, causing the source CPU's stopper thread to wait forever, and stall.
      
      Disable preemption when waking the stopper threads in
      cpu_stop_queue_two_works().
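
      A minimal sketch of what this amounts to at the tail of
      cpu_stop_queue_two_works() (hedged; wakeq and err follow the names used in
      kernel/stop_machine.c, the exact diff may differ):

        /* Both works are queued and both stopper locks already dropped. */
        if (!err) {
                preempt_disable();      /* the first stopper woken below must not
                                         * preempt us before the second one has
                                         * also been woken */
                wake_up_q(&wakeq);
                preempt_enable();
        }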
      
      Fixes: 0b26351b ("stop_machine, sched: Fix migrate_swap() vs. active_balance() deadlock")
      Co-Developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
      Co-Developed-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: peterz@infradead.org
      Cc: matt@codeblueprint.co.uk
      Cc: bigeasy@linutronix.de
      Cc: gregkh@linuxfoundation.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1530655334-4601-1-git-send-email-isaacm@codeaurora.org
  3. 03 May 2018 (1 commit)
  4. 27 April 2018 (1 commit)
  5. 26 May 2017 (1 commit)
  6. 16 November 2016 (1 commit)
    • locking/core, stop_machine: Yield the CPU during stop machine() · bf0d31c0
      Christian Borntraeger committed
      Some time ago the following commit:
      
        57f2ffe1 ("s390: remove diag 44 calls from cpu_relax()")
      
      ... stopped cpu_relax() on s390 yielding to the hypervisor.
      
      As it turns out, this made stop_machine() run really slow on virtualized
      overcommitted systems. For example, the kprobes test during bootup took
      several seconds instead of just running unnoticed with large guests.
      
      Therefore, yielding was reintroduced with commit:
      
        4d92f502 ("s390: reintroduce diag 44 calls for cpu_relax()")
      
      ... but in fact the stop machine code seems to be the only place where
      this yielding was really necessary. This place is probably the most
      important one as it makes all but one guest CPUs wait for one guest CPU.
      
      As we now have cpu_relax_yield(), we can use this in multi_cpu_stop().
      For now let's only add it here. We can add it later in other places
      when necessary.
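
      A hedged sketch of the wait loop in multi_cpu_stop() with the substitution
      applied (state handling abbreviated; only the cpu_relax() to
      cpu_relax_yield() change is the point, and helper names such as ack_state()
      follow kernel/stop_machine.c):

        do {
                /* Like cpu_relax(), but on s390 this yields the virtual CPU
                 * to the hypervisor (diag 0x44) while we wait for the other
                 * CPUs to reach the next state. */
                cpu_relax_yield();
                if (msdata->state != curstate) {
                        curstate = msdata->state;
                        switch (curstate) {
                        case MULTI_STOP_DISABLE_IRQ:
                                local_irq_disable();
                                hard_irq_disable();
                                break;
                        case MULTI_STOP_RUN:
                                if (is_active)
                                        err = msdata->fn(msdata->data);
                                break;
                        default:
                                break;
                        }
                        ack_state(msdata);
                }
        } while (curstate != MULTI_STOP_EXIT);
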
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1477386195-32736-3-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 22 September 2016 (2 commits)
  8. 27 July 2016 (1 commit)
    • stop_machine: Touch_nmi_watchdog() after MULTI_STOP_PREPARE · ce4f06dc
      Oleg Nesterov committed
      Suppose that stop_machine(fn) hangs because fn() hangs. In this case NMI
      hard-lockup can be triggered on another CPU which does nothing wrong and
      the trace from nmi_panic() won't help to investigate the problem.
      
      And this change "fixes" the problem we (seem to) hit in practice.
      
       - stop_two_cpus(0, 1) races with show_state_filter() running on CPU_0.
      
       - CPU_1 already spins in MULTI_STOP_PREPARE state, it detects the soft
         lockup and tries to report the problem.
      
       - show_state_filter() enables preemption, CPU_0 calls multi_cpu_stop()
         which goes to MULTI_STOP_DISABLE_IRQ state and disables interrupts.
      
       - CPU_1 spends more than 10 seconds trying to flush the log buffer to
         the slow serial console.
      
       - NMI interrupt on CPU_0 (which now waits for CPU_1) calls nmi_panic().
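
      A hedged sketch of the added branch in the multi_cpu_stop() wait loop
      (simplified; the surrounding state machine is unchanged and elided):

        if (msdata->state != curstate) {
                /* state transition handling, unchanged */
        } else if (curstate > MULTI_STOP_PREPARE) {
                /*
                 * Past MULTI_STOP_PREPARE every CPU we depend on spins in
                 * this same loop with interrupts off; if one of them hangs
                 * in fn(), the hard lockup should be detected and reported
                 * there, so keep this CPU's NMI watchdog quiet while waiting.
                 */
                touch_nmi_watchdog();
        }
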
      Reported-by: Wang Shu <shuwang@redhat.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/r/20160726185736.GB4088@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 17 January 2016 (1 commit)
  10. 13 December 2015 (1 commit)
    • kernel: remove stop_machine() Kconfig dependency · 86fffe4a
      Chris Wilson committed
      Currently the full stop_machine() routine is only enabled on SMP if
      module unloading is enabled, or if the CPUs are hotpluggable.  This
      leads to configurations where stop_machine() is broken as it will then
      only run the callback on the local CPU with irqs disabled, and not stop
      the other CPUs or run the callback on them.
      
      For example, this breaks MTRR setup on x86 in certain configs since
      ea8596bb ("kprobes/x86: Remove unused text_poke_smp() and
      text_poke_smp_batch() functions") as the MTRR is only established on the
      boot CPU.
      
      This patch removes the Kconfig option for STOP_MACHINE and uses the SMP
      and HOTPLUG_CPU config options to compile the correct stop_machine() for
      the architecture, removing the false dependency on MODULE_UNLOAD in the
      process.
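
      For contrast, the degenerate fallback that non-STOP_MACHINE configurations
      used to get looks roughly like this (a sketch of the inline stub in
      include/linux/stop_machine.h; it runs the callback on the local CPU only,
      with interrupts disabled):

        static inline int stop_machine(cpu_stop_fn_t fn, void *data,
                                       const struct cpumask *cpus)
        {
                unsigned long flags;
                int ret;

                /* No cross-CPU rendezvous at all: fine on UP, silently wrong
                 * when other CPUs really must be stopped (e.g. MTRR setup). */
                local_irq_save(flags);
                ret = fn(data);
                local_irq_restore(flags);
                return ret;
        }
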
      
      Link: https://lkml.org/lkml/2014/10/8/124
      References: https://bugs.freedesktop.org/show_bug.cgi?id=84794
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Pranith Kumar <bobby.prani@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Iulia Manda <iulia.manda21@gmail.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Chuck Ebbert <cebbert.lkml@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 23 November 2015 (8 commits)
  12. 20 October 2015 (6 commits)
  13. 03 August 2015 (5 commits)
  14. 19 June 2015 (1 commit)
    • sched/stop_machine: Fix deadlock between multiple stop_two_cpus() · b17718d0
      Peter Zijlstra committed
      Jiri reported a machine stuck in multi_cpu_stop() with
      migrate_swap_stop() as function and with the following src,dst cpu
      pairs: {11,  4} {13, 11} { 4, 13}
      
                              4       11      13
      
      cpuM: queue(4 ,13)
                              *Ma
      cpuN: queue(13,11)
                                      *N      Na
                              *M              Mb
      cpuO: queue(11, 4)
                              *O      Oa
                                      *Nb
                              *Ob
      
      Where *X denotes the cpu running the queueing of cpu-X and X[ab] denotes
      the first/second queued work.
      
      You'll observe that the top of the work queue for each of CPUs 4, 11 and 13
      is work queued from CPUs M, O and N respectively. IOW, deadlock.
      
      Do away with the queueing trickery and introduce lg_double_lock() to
      lock both CPUs and fully serialize the stop_two_cpus() callers instead
      of the partial (and buggy) serialization we have now.
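
      A hedged sketch of the serialized queueing in stop_two_cpus() after this
      change (names such as stop_cpus_lock and cpu_stop_queue_work() follow
      kernel/stop_machine.c of that time; details may differ):

        /* Take the per-CPU legs of the stop_cpus_lock lglock for both CPUs
         * at once, so two concurrent stop_two_cpus() callers can no longer
         * interleave their two queue operations and form a cycle. */
        lg_double_lock(&stop_cpus_lock, cpu1, cpu2);
        cpu_stop_queue_work(cpu1, &work1);
        cpu_stop_queue_work(cpu2, &work2);
        lg_double_unlock(&stop_cpus_lock, cpu1, cpu2);

        wait_for_completion(&done.completion);
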
      Reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150605153023.GH19282@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  15. 05 June 2014 (1 commit)
  16. 11 March 2014 (1 commit)
  17. 11 November 2013 (1 commit)
  18. 16 October 2013 (1 commit)
    • sched: Remove get_online_cpus() usage · 6acce3ef
      Peter Zijlstra committed
      Remove get_online_cpus() usage from the scheduler; there are 4 sites that
      use it:
      
       - sched_init_smp(); where it's completely superfluous since we're in
         'early' boot and there simply cannot be any hotplugging.
      
       - sched_getaffinity(); we already take a raw spinlock to protect the
         task cpus_allowed mask, this disables preemption and therefore
         also stabilizes cpu_online_mask as that's modified using
         stop_machine. However switch to active mask for symmetry with
         sched_setaffinity()/set_cpus_allowed_ptr(). We guarantee active
         mask stability by inserting sync_rcu/sched() into _cpu_down.
      
       - sched_setaffinity(); we don't appear to need get_online_cpus()
         either, there are two sites where hotplug appears relevant:
          * cpuset_cpus_allowed(); for the !cpuset case we use possible_mask,
            for the cpuset case we hold task_lock, which is a spinlock and
            thus for mainline disables preemption (might cause pain on RT).
          * set_cpus_allowed_ptr(); Holds all scheduler locks and thus has
            preemption properly disabled; also it already deals with hotplug
            races explicitly where it releases them.
      
       - migrate_swap(); we can make stop_two_cpus() do the heavy lifting for
         us with a little trickery. By adding a sync_sched/rcu() after the
         CPU_DOWN_PREPARE notifier we can provide preempt/rcu guarantees for
         cpu_active_mask. Use these to validate that both our cpus are active
         when queueing the stop work before we queue the stop_machine works
         for take_cpu_down().
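
      A hedged sketch of the resulting lockless guard in migrate_swap() (field
      names per the migration_swap_arg structure used there; details may differ):

        /* Lockless check; the sync_sched/rcu() added after the
         * CPU_DOWN_PREPARE notifier provides the preempt/rcu guarantees
         * for cpu_active_mask that make this meaningful. Everything is
         * re-checked later under the proper locks. */
        if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu))
                goto out;

        ret = stop_two_cpus(arg.dst_cpu, arg.src_cpu, migrate_swap_stop, &arg);
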
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131011123820.GV3081@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 09 October 2013 (1 commit)
  20. 27 February 2013 (1 commit)
    • stop_machine: Mark per cpu stopper enabled early · 46c498c2
      Thomas Gleixner committed
      commit 14e568e7 (stop_machine: Use smpboot threads) introduced the
      following regression:
      
      Before this commit the stopper enabled bit was set in the online
      notifier.
      
      CPU0				CPU1
      cpu_up
      				cpu online
      hotplug_notifier(ONLINE)
        stopper(CPU1)->enabled = true;
      ...
      stop_machine()
      
      The conversion to smpboot threads moved the enablement to the wakeup
      path of the parked thread. The majority of users seem to have the
      following working order:
      
      CPU0				CPU1
      cpu_up
      				cpu online
      unpark_threads()
        wakeup(stopper[CPU1])
      ....
      				stopper thread runs
      				  stopper(CPU1)->enabled = true;
      stop_machine()
      
      But Konrad and Sander have observed:
      
      CPU0				CPU1
      cpu_up
      				cpu online
      unpark_threads()
        wakeup(stopper[CPU1])
      ....
      stop_machine()
      				stopper thread runs
      				  stopper(CPU1)->enabled = true;
      
      Now the stop machinery kicks CPU0 into the stop loop, where it gets
      stuck forever because the queue code saw stopper(CPU1)->enabled ==
      false, so CPU0 waits for CPU1 to enter stop_machine(), but the CPU1
      stopper work got discarded due to enabled == false.
      
      Add a pre_unpark function to the smpboot thread descriptor and call it
      before waking the thread.
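
      A hedged sketch of what the pre_unpark hook amounts to (structure and field
      names are assumed from kernel/smpboot.[ch] and kernel/stop_machine.c of that
      time, e.g. cpu_stopper_task and cpu_stop_unpark; details may differ):

        static void smpboot_unpark_thread(struct smp_hotplug_thread *ht,
                                          unsigned int cpu)
        {
                struct task_struct *tsk = *per_cpu_ptr(ht->store, cpu);

                if (ht->pre_unpark)
                        ht->pre_unpark(cpu);    /* e.g. cpu_stop_unpark() sets
                                                 * stopper(cpu)->enabled = true */
                kthread_unpark(tsk);
        }

        /* ... and the stopper registration wires it up: */
        static struct smp_hotplug_thread cpu_stop_threads = {
                .store          = &cpu_stopper_task,
                .thread_fn      = cpu_stopper_thread,
                .thread_comm    = "migration/%u",
                .pre_unpark     = cpu_stop_unpark,
                /* other callbacks elided */
        };

      Because the enabled bit is set on the unparking CPU before kthread_unpark()
      wakes the stopper, the queueing code can no longer observe enabled == false
      in the window described above.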
      
      This fixes the problem at hand, but the stop_machine code should be
      more robust. The stopper->enabled flag smells fishy at best.
      
      Thanks to Konrad for going through a loop of debug patches and
      providing the information to decode this issue.
      Reported-and-tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reported-and-tested-by: Sander Eikelenboom <linux@eikelenboom.it>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1302261843240.22263@ionos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  21. 14 February 2013 (2 commits)
  22. 01 November 2011 (1 commit)