1. 02 Mar 2016, 5 commits
  • cpu/hotplug: Convert the hotplugged cpu work to a state machine · 4baa0afc
    Committed by Thomas Gleixner
    Move the functions which need to run on the hotplugged processor into
    a state machine array and let the code iterate through these functions.

    In a later stage, this will grow synchronization points between the
    control processor and the hotplugged processor, so we can move the
    various architecture implementations of the synchronizations to the
    core.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
    Link: http://lkml.kernel.org/r/20160226182340.770651526@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4baa0afc
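    The conversion described above replaces open-coded bringup calls with a
    table of callbacks that the hotplugged processor walks in order. The
    following is a minimal userspace model of that shape, assuming nothing
    beyond the commit text; every name in it is hypothetical and it is not
    the kernel's actual cpuhp code:

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical model: the per-CPU bringup work lives in an ordered
         * array of callbacks instead of one hard-coded call sequence. */
        struct hp_step {
                const char *name;
                int (*startup)(unsigned int cpu); /* runs on the hotplugged CPU */
        };

        static int notify_starting(unsigned int cpu) { printf("cpu%u: starting\n", cpu); return 0; }
        static int enable_irqs(unsigned int cpu)     { printf("cpu%u: irqs on\n", cpu); return 0; }
        static int mark_online(unsigned int cpu)     { printf("cpu%u: online\n", cpu); return 0; }

        static const struct hp_step ap_states[] = {
                { "notify:starting", notify_starting },
                { "irq:enable",      enable_irqs },
                { "cpu:online",      mark_online },
        };

        /* The hotplugged CPU iterates the table; synchronization points with
         * the control processor can later be inserted between entries. */
        static int run_ap_states(unsigned int cpu)
        {
                for (size_t i = 0; i < sizeof(ap_states) / sizeof(ap_states[0]); i++) {
                        int ret = ap_states[i].startup(cpu);
                        if (ret)
                                return ret;
                }
                return 0;
        }

        int main(void) { return run_ap_states(1); }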
  • cpu/hotplug: Convert to a state machine for the control processor · cff7d378
    Committed by Thomas Gleixner
      Move the split out steps into a callback array and let the cpu_up/down
      code iterate through the array functions. For now most of the
      callbacks are asymmetric to resemble the current hotplug maze.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
    Link: http://lkml.kernel.org/r/20160226182340.671816690@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      cff7d378
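    For the control-processor side the same idea applies, except that each
    state carries a bringup and a possibly missing (hence asymmetric)
    teardown callback, and cpu_up()/cpu_down() walk the array in opposite
    directions. A sketch under those assumptions, all names hypothetical:

        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical sketch: a bringup callback plus an optional teardown
         * callback per state; not the kernel's actual code. */
        struct hp_state {
                const char *name;
                int  (*bringup)(unsigned int cpu);
                void (*teardown)(unsigned int cpu); /* NULL: no teardown step */
        };

        static int  create_threads(unsigned int cpu) { printf("cpu%u: threads\n", cpu); return 0; }
        static int  kick_ap(unsigned int cpu)        { printf("cpu%u: kick AP\n", cpu); return 0; }
        static void takedown_cpu(unsigned int cpu)   { printf("cpu%u: takedown\n", cpu); }

        static const struct hp_state bp_states[] = {
                { "threads:create", create_threads, NULL },         /* asymmetric */
                { "cpu:bringup",    kick_ap,        takedown_cpu },
        };

        static int cpu_up_model(unsigned int cpu)
        {
                for (size_t i = 0; i < sizeof(bp_states) / sizeof(bp_states[0]); i++)
                        if (bp_states[i].bringup(cpu))
                                return -1;
                return 0;
        }

        static void cpu_down_model(unsigned int cpu)
        {
                for (size_t i = sizeof(bp_states) / sizeof(bp_states[0]); i-- > 0; )
                        if (bp_states[i].teardown)
                                bp_states[i].teardown(cpu);
        }

        int main(void)
        {
                cpu_up_model(1);
                cpu_down_model(1);
                return 0;
        }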
  • cpu/hotplug: Split out cpu down functions · 98458172
    Committed by Thomas Gleixner
    Split cpu_down() into separate functions in preparation for the state
    machine conversion.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
    Link: http://lkml.kernel.org/r/20160226182340.511796562@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      98458172
  • cpu/hotplug: Restructure cpu_up code · ba997462
    Committed by Thomas Gleixner
    Split the cpu_up code out into separate functions so we can convert it
    to a state machine.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
    Link: http://lkml.kernel.org/r/20160226182340.429389195@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ba997462
  • cpu/hotplug: Restructure FROZEN state handling · 090e77c3
    Committed by Thomas Gleixner
    There are only a few callbacks which really care about FROZEN
    vs. !FROZEN. No need to have extra states for this.

    Publish the frozen state in an extra variable which is updated under
    the hotplug lock, and let the interested users deal with it without
    imposing the extra state checks on everyone.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
    Link: http://lkml.kernel.org/r/20160226182340.334912357@linutronix.de
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      090e77c3
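    The shape of this change: instead of doubling every notifier state
    with a FROZEN variant, one variable is written under the hotplug lock
    and the few callbacks that care read it. A minimal sketch, with
    hypothetical names throughout:

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical model: one flag, written only under the hotplug lock,
         * replaces the CPU_*_FROZEN variants of every notifier state. */
        static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
        static bool tasks_frozen; /* valid while hotplug_lock is held */

        static void hotplug_op(unsigned int cpu, bool frozen)
        {
                pthread_mutex_lock(&hotplug_lock);
                tasks_frozen = frozen;
                /* ... run the up/down callbacks; the few that care simply
                 * read tasks_frozen instead of matching extra FROZEN
                 * action codes ... */
                printf("cpu%u: hotplug op, frozen=%d\n", cpu, (int)tasks_frozen);
                pthread_mutex_unlock(&hotplug_lock);
        }

        int main(void)
        {
                hotplug_op(1, false); /* normal hotplug */
                hotplug_op(1, true);  /* suspend/resume path */
                return 0;
        }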
2. 21 Jan 2016, 4 commits
3. 20 Oct 2015, 3 commits
  • sched: Start stopper early · 07f06cb3
    Committed by Peter Zijlstra
    Ensure the stopper thread is active 'early', because the load balancer
    pretty much assumes that it's available. And when 'online && active',
    the load balancer is fully available.

    Not only does the NUMA-balancing stop_two_cpus() caller rely on it;
    the self-migration code does too, and at CPU_ONLINE time the cpu
    really is 'free' to run anything.
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: heiko.carstens@de.ibm.com
    Link: http://lkml.kernel.org/r/20151009160054.GA10176@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      07f06cb3
  • stop_machine: Kill smp_hotplug_thread->pre_unpark, introduce stop_machine_unpark() · c00166d8
    Committed by Oleg Nesterov
      1. Change smpboot_unpark_thread() to check ->selfparking, just
         like smpboot_park_thread() does.
      
      2. Introduce stop_machine_unpark() which sets ->enabled and calls
         kthread_unpark().
      
      3. Change smpboot_thread_call() and cpu_stop_init() to call
         stop_machine_unpark() by hand.
      
      This way:
      
          - IMO the ->selfparking logic becomes more consistent.
      
          - We can kill the smp_hotplug_thread->pre_unpark() method.
      
          - We can easily unpark the stopper thread earlier. Say, we
            can move stop_machine_unpark() from smpboot_thread_call()
            to sched_cpu_active() as Peter suggests.
    Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: heiko.carstens@de.ibm.com
    Link: http://lkml.kernel.org/r/20151009160049.GA10166@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c00166d8
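    A userspace model of the helper described in point 2, with kthread
    parking approximated by a condition variable: set ->enabled first,
    then wake the parked thread. All names are hypothetical:

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Userspace model of "set ->enabled, then unpark". */
        struct stopper {
                pthread_mutex_t lock;
                pthread_cond_t  cond;
                bool enabled; /* may the stopper accept work? */
                bool parked;  /* is the thread parked? */
        };

        static void *stopper_thread(void *arg)
        {
                struct stopper *st = arg;

                pthread_mutex_lock(&st->lock);
                while (st->parked) /* "parked" until unparked */
                        pthread_cond_wait(&st->cond, &st->lock);
                printf("stopper running, enabled=%d\n", (int)st->enabled);
                pthread_mutex_unlock(&st->lock);
                return NULL;
        }

        /* stop_machine_unpark() analogue: enable first, then wake. */
        static void stopper_unpark(struct stopper *st)
        {
                pthread_mutex_lock(&st->lock);
                st->enabled = true;
                st->parked = false;
                pthread_cond_signal(&st->cond);
                pthread_mutex_unlock(&st->lock);
        }

        int main(void)
        {
                struct stopper st = { PTHREAD_MUTEX_INITIALIZER,
                                      PTHREAD_COND_INITIALIZER, false, true };
                pthread_t t;

                pthread_create(&t, NULL, stopper_thread, &st);
                stopper_unpark(&st);
                pthread_join(t, NULL);
                return 0;
        }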
  • stop_machine: Ensure that a queued callback will be called before cpu_stop_park() · 233e7f26
    Committed by Oleg Nesterov
      cpu_stop_queue_work() checks stopper->enabled before it queues the
      work, but ->enabled == T can only guarantee cpu_stop_signal_done()
      if we race with cpu_down().
      
    This is not enough for stop_two_cpus() or stop_machine(); they will
    deadlock if multi_cpu_stop() is not called by one of the target
    CPUs. stop_machine()/stop_cpus() are fine, they rely on
    stop_cpus_mutex. But stop_two_cpus() has to check cpu_active() to
    avoid the same race with hotplug, and this check is very unobvious
    and probably not even correct if we race with cpu_up().

    Change the cpu_down() path to clear ->enabled before
    cpu_stopper_thread() flushes the pending ->works and returns with
    KTHREAD_SHOULD_PARK set.
      
      Note also that smpboot_thread_call() calls cpu_stop_unpark() which
      sets enabled == T at CPU_ONLINE stage, so this CPU can't go away until
      cpu_stopper_thread() is called at least once. This all means that if
      cpu_stop_queue_work() succeeds, we know that work->fn() will be called.
    Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: heiko.carstens@de.ibm.com
    Link: http://lkml.kernel.org/r/20151008145131.GA18139@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      233e7f26
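    The guarantee argued above boils down to checking ->enabled and
    queueing the work under the same lock that the cpu_down() path takes
    to clear ->enabled. A small single-threaded model of that contract
    (hypothetical names, one pending work slot for brevity):

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Model of the contract: a successfully queued work is guaranteed
         * to be flushed, because enable-check/queue and disable happen
         * under one lock. Hypothetical names. */
        struct cpu_stopper_model {
                pthread_mutex_t lock;
                bool enabled;          /* cleared by the cpu-down path */
                void (*pending)(void); /* at most one queued work */
        };

        static bool queue_stop_work(struct cpu_stopper_model *st, void (*fn)(void))
        {
                bool queued = false;

                pthread_mutex_lock(&st->lock);
                if (st->enabled) { /* can't race with disable: same lock */
                        st->pending = fn;
                        queued = true;
                }
                pthread_mutex_unlock(&st->lock);
                return queued; /* true => fn() will definitely be called */
        }

        static void disable_and_flush(struct cpu_stopper_model *st)
        {
                pthread_mutex_lock(&st->lock);
                st->enabled = false; /* new work is refused from here on */
                pthread_mutex_unlock(&st->lock);
                if (st->pending)     /* flush what was queued before */
                        st->pending();
        }

        static void work(void) { puts("stop work ran"); }

        int main(void)
        {
                struct cpu_stopper_model st = { PTHREAD_MUTEX_INITIALIZER, true, NULL };

                printf("queued=%d\n", (int)queue_stop_work(&st, work));
                disable_and_flush(&st);
                return 0;
        }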
4. 08 Oct 2015, 1 commit
5. 12 Sep 2015, 1 commit
6. 06 Aug 2015, 3 commits
7. 03 Aug 2015, 1 commit
8. 23 Jul 2015, 1 commit
9. 15 Jul 2015, 1 commit
  • genirq: Revert sparse irq locking around __cpu_up() and move it to x86 for now · ce0d3c0a
    Committed by Thomas Gleixner
    Boris reported that the sparse_irq protection around __cpu_up() in the
    generic code causes a regression on Xen. Xen allocates interrupts (and
    more) in the xen_cpu_up() function, so it deadlocks on the
    sparse_irq_lock.
      
      There is no simple fix for this and we really should have the
      protection for all architectures, but for now the only solution is to
      move it to x86 where actual wreckage due to the lack of protection has
      been observed.
    Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Fixes: a8994181 'hotplug: Prevent alloc/free of irq descriptors during cpu up/down'
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Cc: xen-devel <xen-devel@lists.xenproject.org>
      ce0d3c0a
  10. 08 7月, 2015 1 次提交
  • hotplug: Prevent alloc/free of irq descriptors during cpu up/down · a8994181
    Committed by Thomas Gleixner
      When a cpu goes up some architectures (e.g. x86) have to walk the irq
      space to set up the vector space for the cpu. While this needs extra
      protection at the architecture level we can avoid a few race
      conditions by preventing the concurrent allocation/free of irq
      descriptors and the associated data.
      
    When a cpu goes down, it moves the interrupts which are targeted to
    this cpu away by reassigning the affinities. While this happens,
    interrupts can be allocated and freed, which opens a can of race
    conditions in the code which reassigns the affinities, because
    interrupt descriptors might be freed underneath.
      
      Example:
      
      CPU1				CPU2
      cpu_up/down
       irq_desc = irq_to_desc(irq);
      				remove_from_radix_tree(desc);
       raw_spin_lock(&desc->lock);
      				free(desc);
      
    We could protect the irq descriptors with RCU, but that would require
    a full tree change of all accesses to interrupt descriptors. But
    fortunately this kind of race condition is rather limited to a few
    things like cpu hotplug. The normal setup/teardown is very well
    serialized. So the simpler and obvious solution is:

    Prevent allocation and freeing of interrupt descriptors across cpu
    hotplug.
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.063519515@linutronix.de
      a8994181
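    Conceptually, the fix is one lock that both the descriptor alloc/free
    paths and the hotplug path hold across their critical regions, so a
    looked-up descriptor cannot be freed underneath the affinity rewrite.
    A compact userspace model of that pattern (hypothetical names; not the
    kernel's sparse-irq code):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical model: one mutex serializes descriptor alloc/free
         * against the hotplug path, so looked-up entries stay valid. */
        #define NR_IRQS_MODEL 16

        static pthread_mutex_t sparse_lock = PTHREAD_MUTEX_INITIALIZER;
        static int *irq_desc_tbl[NR_IRQS_MODEL]; /* stand-in for descriptors */

        static void free_irq_desc(int irq)
        {
                pthread_mutex_lock(&sparse_lock);
                free(irq_desc_tbl[irq]);
                irq_desc_tbl[irq] = NULL;
                pthread_mutex_unlock(&sparse_lock);
        }

        static void cpu_down_model(void)
        {
                pthread_mutex_lock(&sparse_lock); /* block alloc/free */
                for (int irq = 0; irq < NR_IRQS_MODEL; irq++) {
                        int *desc = irq_desc_tbl[irq];
                        if (desc) /* safe: can't be freed underneath us */
                                printf("moving irq %d away\n", irq);
                }
                pthread_mutex_unlock(&sparse_lock);
        }

        int main(void)
        {
                irq_desc_tbl[3] = malloc(sizeof(int));
                cpu_down_model();
                free_irq_desc(3);
                return 0;
        }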
11. 17 Jun 2015, 1 commit
12. 28 May 2015, 2 commits
13. 13 Apr 2015, 1 commit
  • cpu: Defer smpboot kthread unparking until CPU known to scheduler · 00df35f9
    Committed by Paul E. McKenney
    Currently, smpboot_unpark_threads() is invoked before the incoming CPU
    has been added to the scheduler's runqueue structures.  This can cause
    the unparked kthread to run on the wrong CPU, since the correct CPU
    isn't fully set up yet.

    That causes a sporadic, hard-to-debug boot crash that triggers on some
    systems; it was reported by Borislav Petkov, and bisected down to:
      
        2a442c9c ("x86: Use common outgoing-CPU-notification code")
      
      This patch places smpboot_unpark_threads() in a CPU hotplug
      notifier with priority set so that these kthreads are unparked just after
      the CPU has been added to the runqueues.
    Reported-and-tested-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      00df35f9
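    Hotplug notifier chains run callbacks in descending priority order,
    which is what this fix leans on: give the unpark callback a priority
    below the scheduler's, so it runs after the CPU joins the runqueues.
    A runnable toy model (hypothetical names and priority values):

        #include <stddef.h>
        #include <stdio.h>

        /* Toy model of a priority-ordered notifier chain: higher priority
         * runs first. All names and values hypothetical. */
        struct notifier {
                const char *name;
                int priority; /* higher runs first */
                void (*call)(unsigned int cpu);
        };

        static void sched_online(unsigned int cpu)   { printf("cpu%u: added to runqueues\n", cpu); }
        static void unpark_threads(unsigned int cpu) { printf("cpu%u: smpboot threads unparked\n", cpu); }

        int main(void)
        {
                /* kept sorted by descending priority, as notifier chains are */
                struct notifier chain[] = {
                        { "sched",   20, sched_online },
                        { "smpboot", 10, unpark_threads }, /* deliberately after sched */
                };

                for (size_t i = 0; i < sizeof(chain) / sizeof(chain[0]); i++)
                        chain[i].call(1);
                return 0;
        }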
14. 03 Apr 2015, 2 commits
15. 02 Apr 2015, 1 commit
16. 13 Mar 2015, 1 commit
  • cpu: Make CPU-offline idle-loop transition point more precise · 528a25b0
    Committed by Paul E. McKenney
      This commit uses a per-CPU variable to make the CPU-offline code path
      through the idle loop more precise, so that the outgoing CPU is
      guaranteed to make it into the idle loop before it is powered off.
      This commit is in preparation for putting the RCU offline-handling
      code on this code path, which will eliminate the magic one-jiffy
      wait that RCU uses as the maximum time for an outgoing CPU to get
      all the way through the scheduler.
      
      The magic one-jiffy wait for incoming CPUs remains a separate issue.
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      528a25b0
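    The mechanism amounts to a per-CPU flag that the outgoing CPU sets
    once it has definitely entered the idle loop, and that the controlling
    side waits on before cutting power. A userspace model using an atomic
    flag (hypothetical names):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        /* Model: the "outgoing CPU" thread marks its arrival in the idle
         * loop; the controller waits for that mark before "powering off".
         * Hypothetical names. */
        static atomic_bool reached_idle;

        static void *outgoing_cpu(void *arg)
        {
                (void)arg;
                /* ... tear down, migrate work away ... */
                atomic_store(&reached_idle, true); /* precise transition point */
                /* ... would now sit in "idle" until powered off ... */
                return NULL;
        }

        int main(void)
        {
                pthread_t t;

                pthread_create(&t, NULL, outgoing_cpu, NULL);
                while (!atomic_load(&reached_idle))
                        ; /* controller: wait until the CPU is provably in idle */
                puts("outgoing CPU reached idle; safe to power off");
                pthread_join(t, NULL);
                return 0;
        }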
17. 07 Jan 2015, 1 commit
  • hotplugcpu: Avoid deadlocks by waking active_writer · 87af9e7f
    Committed by David Hildenbrand
      Commit b2c4623d ("rcu: More on deadlock between CPU hotplug and expedited
      grace periods") introduced another problem that can easily be reproduced by
      starting/stopping cpus in a loop.
      
      E.g.:
        for i in `seq 5000`; do
            echo 1 > /sys/devices/system/cpu/cpu1/online
            echo 0 > /sys/devices/system/cpu/cpu1/online
        done
      
      Will result in:
        INFO: task /cpu_start_stop:1 blocked for more than 120 seconds.
        Call Trace:
        ([<00000000006a028e>] __schedule+0x406/0x91c)
         [<0000000000130f60>] cpu_hotplug_begin+0xd0/0xd4
         [<0000000000130ff6>] _cpu_up+0x3e/0x1c4
         [<0000000000131232>] cpu_up+0xb6/0xd4
         [<00000000004a5720>] device_online+0x80/0xc0
         [<00000000004a57f0>] online_store+0x90/0xb0
        ...
      
      And a deadlock.
      
    The problem is that if the last reference in put_online_cpus() can't
    get the cpu_hotplug.lock, the puts_pending count is incremented, but a
    sleeping active_writer might never be woken up and therefore never
    exits the loop in cpu_hotplug_begin().
      
      This fix removes puts_pending and turns refcount into an atomic variable. We
      also introduce a wait queue for the active_writer, to avoid possible races and
      use-after-free. There is no need to take the lock in put_online_cpus() anymore.
      
    The issue can no longer be reproduced with this fix.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      87af9e7f
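    The structure of the fix (readers on an atomic count, with the last
    put waking a writer sleeping on a wait queue) can be modelled in
    userspace like this; names are hypothetical and the condvar stands in
    for the kernel wait queue:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_int refcount;
        static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  wq = PTHREAD_COND_INITIALIZER;

        static void get_online_cpus_model(void)
        {
                atomic_fetch_add(&refcount, 1); /* no lock on the get side */
        }

        static void put_online_cpus_model(void)
        {
                /* last put wakes the writer: the step the buggy version
                 * could miss */
                if (atomic_fetch_sub(&refcount, 1) == 1) {
                        pthread_mutex_lock(&wq_lock);
                        pthread_cond_broadcast(&wq);
                        pthread_mutex_unlock(&wq_lock);
                }
        }

        static void cpu_hotplug_begin_model(void)
        {
                pthread_mutex_lock(&wq_lock);
                while (atomic_load(&refcount) != 0) /* sleep until readers gone */
                        pthread_cond_wait(&wq, &wq_lock);
                pthread_mutex_unlock(&wq_lock);
        }

        int main(void)
        {
                get_online_cpus_model();
                put_online_cpus_model();
                cpu_hotplug_begin_model(); /* returns at once: count is zero */
                puts("writer proceeds");
                return 0;
        }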
18. 04 Nov 2014, 1 commit
  • cpu: Avoid puts_pending overflow · 62db99f4
    Committed by Paul E. McKenney
    A long string of get_online_cpus() calls, each followed by a
    put_online_cpus() that fails to acquire cpu_hotplug.lock, can result
    in overflow of the cpu_hotplug.puts_pending counter.  Although this is
    perhaps improbable, a system with absolutely no CPU-hotplug operations
    will have an arbitrarily long time in which this overflow could occur.
    This commit therefore adds overflow checks to get_online_cpus() and
    try_get_online_cpus().
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
      62db99f4
19. 23 Oct 2014, 1 commit
20. 19 Sep 2014, 1 commit
  • rcu: Eliminate deadlock between CPU hotplug and expedited grace periods · dd56af42
    Committed by Paul E. McKenney
      Currently, the expedited grace-period primitives do get_online_cpus().
      This greatly simplifies their implementation, but means that calls
      to them holding locks that are acquired by CPU-hotplug notifiers (to
      say nothing of calls to these primitives from CPU-hotplug notifiers)
      can deadlock.  But this is starting to become inconvenient, as can be
      seen here: https://lkml.org/lkml/2014/8/5/754.  The problem in this
      case is that some developers need to acquire a mutex from a CPU-hotplug
      notifier, but also need to hold it across a synchronize_rcu_expedited().
      As noted above, this currently results in deadlock.
      
      This commit avoids the deadlock and retains the simplicity by creating
      a try_get_online_cpus(), which returns false if the get_online_cpus()
      reference count could not immediately be incremented.  If a call to
      try_get_online_cpus() returns true, the expedited primitives operate as
      before.  If a call returns false, the expedited primitives fall back to
      normal grace-period operations.  This falling back of course results in
      increased grace-period latency, but only during times when CPU hotplug
      operations are actually in flight.  The effect should therefore be
      negligible during normal operation.
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
    Tested-by: Lan Tianyu <tianyu.lan@intel.com>
      dd56af42
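    The fallback pattern is: try to take the hotplug reference without
    blocking, and if that fails, use the slower but deadlock-free normal
    path. A userspace model with a trylock (hypothetical names; the real
    try_get_online_cpus() worked on the hotplug refcount rather than a
    bare mutex):

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Model of the expedited-grace-period fallback. Hypothetical names. */
        static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

        static bool try_get_online_cpus_model(void)
        {
                return pthread_mutex_trylock(&hotplug_lock) == 0;
        }

        static void put_online_cpus_model(void)
        {
                pthread_mutex_unlock(&hotplug_lock);
        }

        static void synchronize_expedited_model(void)
        {
                if (try_get_online_cpus_model()) {
                        puts("fast path: expedited grace period");
                        put_online_cpus_model();
                } else {
                        /* a hotplug operation (or a notifier holding our
                         * lock) is in flight: fall back, don't deadlock */
                        puts("fallback: normal grace period");
                }
        }

        int main(void)
        {
                synchronize_expedited_model();     /* fast path */
                pthread_mutex_lock(&hotplug_lock); /* hotplug "in flight" */
                synchronize_expedited_model();     /* falls back */
                pthread_mutex_unlock(&hotplug_lock);
                return 0;
        }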
21. 05 Jul 2014, 1 commit
  • sched: Rework check_for_tasks() · b728ca06
    Committed by Kirill Tkhai
    1) Iterate through all threads in the system.
       Check every thread, not only group leaders.

    2) Check p->on_rq instead of p->state and cputime.
       A preempted task in a !TASK_RUNNING state, or a freshly
       created task, may be queued; we want those reported too.

    3) Use read_lock() instead of write_lock().
       This function does not change any structures, and
       read_lock() is enough.
    Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
    Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ben Segall <bsegall@google.com>
      Cc: Fabian Frederick <fabf@skynet.be>
      Cc: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Cc: Konstantin Khorenko <khorenko@parallels.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael wang <wangyun@linux.vnet.ibm.com>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Todd E Brandt <todd.e.brandt@linux.intel.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/r/1403684395.3462.44.camel@tkhai
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b728ca06
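    Putting the three points together: walk every thread (not just group
    leaders) under a read lock and test an on-runqueue flag instead of
    state and cputime. A toy model of the reworked check, with all names
    hypothetical:

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Toy version: every thread, read lock, ->on_rq. */
        struct task_model {
                const char *comm;
                bool group_leader;
                bool on_rq; /* queued on a runqueue: covers preempted and
                               freshly created tasks, whatever their state */
        };

        static pthread_rwlock_t tasklist_lock = PTHREAD_RWLOCK_INITIALIZER;

        static void check_for_tasks_model(const struct task_model *tasks,
                                          int n, int dead_cpu)
        {
                pthread_rwlock_rdlock(&tasklist_lock); /* read lock suffices */
                for (int i = 0; i < n; i++) /* all threads, not only leaders */
                        if (tasks[i].on_rq)
                                printf("task %s still queued on dead cpu %d\n",
                                       tasks[i].comm, dead_cpu);
                pthread_rwlock_unlock(&tasklist_lock);
        }

        int main(void)
        {
                struct task_model tasks[] = {
                        { "leader", true,  false },
                        { "worker", false, true }, /* missed by the old check */
                };
                check_for_tasks_model(tasks, 2, 3);
                return 0;
        }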
22. 07 Jun 2014, 1 commit
  • PM / sleep: trace events for suspend/resume · bb3632c6
    Committed by Todd E Brandt
    Adds trace events that give finer resolution into suspend/resume. These
    events are graphed in the timelines generated by the analyze_suspend.py
    script. They represent large areas of time consumed that are typical of
    suspend and resume.

    The event is triggered by calling the function "trace_suspend_resume"
    with three arguments: a string (the name of the event to be displayed
    in the timeline), an integer (a case-specific number, such as the power
    state or cpu number), and a boolean (where true denotes the start of
    the timeline event, and false denotes the end).
      
      The suspend_resume trace event reproduces the data that the machine_suspend
      trace event did, so the latter has been removed.
    Signed-off-by: Todd Brandt <todd.e.brandt@intel.com>
    Acked-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      bb3632c6
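    Going by the argument list described above, a call site brackets a
    phase with a matching true/false pair. An illustrative fragment, not a
    verbatim kernel call site ('state' stands for whatever case-specific
    number the phase reports):

        /* illustrative only: name, case-specific number, begin/end flag */
        trace_suspend_resume("machine_suspend", state, true);  /* begin */
        /* ... platform suspend work for this phase ... */
        trace_suspend_resume("machine_suspend", state, false); /* end */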
23. 05 Jun 2014, 1 commit
24. 22 May 2014, 1 commit
  • sched: Fix hotplug vs. set_cpus_allowed_ptr() · 6acbfb96
    Committed by Lai Jiangshan
      Lai found that:
      
        WARNING: CPU: 1 PID: 13 at arch/x86/kernel/smp.c:124 native_smp_send_reschedule+0x2d/0x4b()
        ...
        migration_cpu_stop+0x1d/0x22
      
    was caused by set_cpus_allowed_ptr() assuming that cpu_active_mask is
    always a subset of cpu_online_mask.
      
      This isn't true since 5fbd036b ("sched: Cleanup cpu_active madness").
      
      So set active and online at the same time to avoid this particular
      problem.
      
      Fixes: 5fbd036b ("sched: Cleanup cpu_active madness")
    Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael wang <wangyun@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Toshi Kani <toshi.kani@hp.com>
    Link: http://lkml.kernel.org/r/53758B12.8060609@cn.fujitsu.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6acbfb96
25. 20 Mar 2014, 2 commits
  • CPU hotplug: Provide lockless versions of callback registration functions · 93ae4f97
    Committed by Srivatsa S. Bhat
      The following method of CPU hotplug callback registration is not safe
      due to the possibility of an ABBA deadlock involving the cpu_add_remove_lock
      and the cpu_hotplug.lock.
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      The deadlock is shown below:
      
                CPU 0                                         CPU 1
                -----                                         -----
      
         Acquire cpu_hotplug.lock
         [via get_online_cpus()]
      
                                                    CPU online/offline operation
                                                    takes cpu_add_remove_lock
                                                    [via cpu_maps_update_begin()]
      
         Try to acquire
         cpu_add_remove_lock
         [via register_cpu_notifier()]
      
                                                    CPU online/offline operation
                                                    tries to acquire cpu_hotplug.lock
                                                    [via cpu_hotplug_begin()]
      
                                  *** DEADLOCK! ***
      
      The problem here is that callback registration takes the locks in one order
      whereas the CPU hotplug operations take the same locks in the opposite order.
      To avoid this issue and to provide a race-free method to register CPU hotplug
      callbacks (along with initialization of already online CPUs), introduce new
      variants of the callback registration APIs that simply register the callbacks
      without holding the cpu_add_remove_lock during the registration. That way,
      we can avoid the ABBA scenario. However, we will need to hold the
      cpu_add_remove_lock throughout the entire critical section, to protect updates
      to the callback/notifier chain.
      
      This can be achieved by writing the callback registration code as follows:
      
      	cpu_maps_update_begin(); [ or cpu_notifier_register_begin(); see below ]
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* This doesn't take the cpu_add_remove_lock */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_maps_update_done();  [ or cpu_notifier_register_done(); see below ]
      
      Note that we can't use get_online_cpus() here instead of cpu_maps_update_begin()
      because the cpu_hotplug.lock is dropped during the invocation of CPU_POST_DEAD
      notifiers, and hence get_online_cpus() cannot provide the necessary
      synchronization to protect the callback/notifier chains against concurrent
      reads and writes. On the other hand, since the cpu_add_remove_lock protects
      the entire hotplug operation (including CPU_POST_DEAD), we can use
      cpu_maps_update_begin/done() to guarantee proper synchronization.
      
      Also, since cpu_maps_update_begin/done() is like a super-set of
      get/put_online_cpus(), the former naturally protects the critical sections
      from concurrent hotplug operations.
      
      Since the names cpu_maps_update_begin/done() don't make much sense in CPU
      hotplug callback registration scenarios, we'll introduce new APIs named
      cpu_notifier_register_begin/done() and map them to cpu_maps_update_begin/done().
      
      In summary, introduce the lockless variants of un/register_cpu_notifier() and
      also export the cpu_notifier_register_begin/done() APIs for use by modules.
      This way, we provide a race-free way to register hotplug callbacks as well as
      perform initialization for the CPUs that are already online.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
    Acked-by: Oleg Nesterov <oleg@redhat.com>
    Acked-by: Toshi Kani <toshi.kani@hp.com>
    Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
    Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      93ae4f97
  • CPU hotplug: Add lockdep annotations to get/put_online_cpus() · a19423b9
    Committed by Gautham R. Shenoy
      Add lockdep annotations for get/put_online_cpus() and
      cpu_hotplug_begin()/cpu_hotplug_end().
      
      Cc: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Oleg Nesterov <oleg@redhat.com>
    Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
    Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      a19423b9
26. 13 Nov 2013, 1 commit