1. 04 April 2017, 1 commit
    • rtmutex: Deboost before waking up the top waiter · 2a1c6029
      Xunlei Pang authored
      We should deboost before waking the high-priority task, such that we
      don't run two tasks with the same "state" (priority, deadline,
      sched_class, etc).
      
      In order to make sure the boosting task doesn't start running between
      unlock and deboost (due to a 'spurious' wakeup), we move the deboost
      under the wait_lock; that way it's serialized against the wait loop in
      __rt_mutex_slowlock().

      Doing the deboost early can, however, lead to priority inversion if
      current is preempted after the deboost but before waking our high-prio
      task; hence we disable preemption before doing the deboost and only
      re-enable it after the wakeup is done.

      This gets us the right semantic order, but most importantly this
      change ensures pointer stability for the next patch, where
      rt_mutex_setprio() caches a pointer to the top-most waiter task. If
      we, as before this change, did the wakeup first and then deboosted,
      this pointer might point into thin air.
      
      [peterz: Changelog + patch munging]
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Xunlei Pang <xlpang@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170323150216.110065320@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2a1c6029
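      A minimal sketch of the resulting unlock ordering, assuming simplified
      names (pick_top_waiter() and deboost_current() are hypothetical
      stand-ins, not the real helpers in kernel/locking/rtmutex.c):

      /* Illustrative only; the real slow unlock path differs in detail. */
      static void rt_mutex_slowunlock_sketch(struct rt_mutex *lock)
      {
              struct task_struct *top_waiter;
              unsigned long flags;

              raw_spin_lock_irqsave(&lock->wait_lock, flags);

              top_waiter = pick_top_waiter(lock);     /* hypothetical helper */

              preempt_disable();                      /* no preemption until the wakeup is done */
              deboost_current(top_waiter);            /* hypothetical: drop the inherited priority */

              raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

              wake_up_process(top_waiter);            /* waiter sees the deboosted owner */
              preempt_enable();                       /* only now may current be preempted */
      }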
  2. 24 March 2017, 6 commits
    • futex: Drop hb->lock before enqueueing on the rtmutex · 56222b21
      Peter Zijlstra authored
      When PREEMPT_RT_FULL does the spinlock -> rt_mutex substitution the PI
      chain code will (falsely) report a deadlock and BUG.
      
      The problem is that it holds hb->lock (now an rt_mutex) while doing
      task_blocks_on_rt_mutex() on the futex's pi_state::rtmutex. When
      interleaved just right with futex_unlock_pi(), this leads it to
      believe it is seeing an AB-BA deadlock.
      
        Task1 (holds rt_mutex,	Task2 (does FUTEX_LOCK_PI)
               does FUTEX_UNLOCK_PI)
      
      				lock hb->lock
      				lock rt_mutex (as per start_proxy)
        lock hb->lock
      
      Which is a trivial AB-BA.
      
      It is not an actual deadlock, because it won't be holding hb->lock by the
      time it actually blocks on the rt_mutex, but the chainwalk code doesn't
      know that and it would be a nightmare to handle this gracefully.
      
      To avoid this problem, do the same as in futex_unlock_pi() and drop
      hb->lock after acquiring wait_lock. This still fully serializes against
      futex_unlock_pi(), since adding to the wait_list does the very same lock
      dance, and removing it holds both locks.
      
      Aside from solving the RT problem, this makes the lock and unlock
      mechanisms symmetric and reduces the hb->lock hold time.
      Reported-and-tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104152.161341537@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      56222b21
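      A rough sketch of the resulting enqueue path, assuming
      __rt_mutex_start_proxy_lock() is the wait_lock-held enqueue helper this
      change introduces; error handling and naming are simplified relative to
      kernel/futex.c:

      static int futex_lock_pi_enqueue_sketch(struct futex_hash_bucket *hb,
                                              struct rt_mutex *pi_mutex,
                                              struct rt_mutex_waiter *waiter)
      {
              int ret;

              /* hb->lock is held on entry */
              raw_spin_lock_irq(&pi_mutex->wait_lock);

              /*
               * Drop hb->lock once wait_lock is held: from here on the
               * wait_list, not hb->lock, serializes us against
               * futex_unlock_pi(), and we never block on the rtmutex while
               * still holding hb->lock (so the PI chain walk stays happy).
               */
              spin_unlock(&hb->lock);

              /* enqueue ourselves on the rtmutex wait tree */
              ret = __rt_mutex_start_proxy_lock(pi_mutex, waiter, current);

              raw_spin_unlock_irq(&pi_mutex->wait_lock);
              return ret;
      }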
    • futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock() · cfafcd11
      Peter Zijlstra authored
      By changing futex_lock_pi() to use rt_mutex_*_proxy_lock() all wait_list
      modifications are done under both hb->lock and wait_lock.
      
      This closes the obvious interleave pattern between futex_lock_pi() and
      futex_unlock_pi(), but not entirely so. See below:
      
      Before:
      
      futex_lock_pi()			futex_unlock_pi()
        unlock hb->lock
      
      				  lock hb->lock
      				  unlock hb->lock
      
      				  lock rt_mutex->wait_lock
                                  unlock rt_mutex->wait_lock
      				    -EAGAIN
      
        lock rt_mutex->wait_lock
        list_add
        unlock rt_mutex->wait_lock
      
        schedule()
      
        lock rt_mutex->wait_lock
        list_del
        unlock rt_mutex->wait_lock
      
      				  <idem>
      				    -EAGAIN
      
        lock hb->lock
      
      
      After:
      
      futex_lock_pi()			futex_unlock_pi()
      
        lock hb->lock
        lock rt_mutex->wait_lock
        list_add
        unlock rt_mutex->wait_lock
        unlock hb->lock
      
        schedule()
      				  lock hb->lock
      				  unlock hb->lock
        lock hb->lock
        lock rt_mutex->wait_lock
        list_del
        unlock rt_mutex->wait_lock
      
      				  lock rt_mutex->wait_lock
                                  unlock rt_mutex->wait_lock
      				    -EAGAIN
      
        unlock hb->lock
      
      
      It does, however, solve the earlier starvation/live-lock scenario
      introduced with the -EAGAIN: unlike the before scenario, where the
      -EAGAIN happens while futex_unlock_pi() doesn't hold any locks, in the
      after scenario it happens while futex_unlock_pi() actually holds a
      lock, and it is then serialized on that lock.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104152.062785528@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      cfafcd11
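      A condensed sketch of the 'After' flow above using the proxy-lock API;
      the hb->lock handling and error paths are simplified compared to the
      real futex_lock_pi():

      static int futex_lock_pi_block_sketch(struct futex_hash_bucket *hb,
                                            struct rt_mutex *pi_mutex,
                                            struct rt_mutex_waiter *waiter,
                                            struct hrtimer_sleeper *timeout)
      {
              int ret;

              /* enqueue on the rtmutex while hb->lock is still held */
              ret = rt_mutex_start_proxy_lock(pi_mutex, waiter, current);
              spin_unlock(&hb->lock);
              if (ret) {
                      if (ret == 1)   /* lock was acquired for us right away */
                              ret = 0;
                      return ret;
              }

              /* block without hb->lock; wait_lock serializes against unlock */
              ret = rt_mutex_wait_proxy_lock(pi_mutex, timeout, waiter);

              /* re-take hb->lock before touching the waiter again */
              spin_lock(&hb->lock);
              if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, waiter))
                      ret = 0;        /* we acquired the lock after all */
              spin_unlock(&hb->lock);

              return ret;
      }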
    • futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock() · 38d589f2
      Peter Zijlstra authored
      With the ultimate goal of keeping rt_mutex wait_list and futex_q
      waiters consistent, it's necessary to split 'rt_mutex_futex_lock()'
      into finer parts, such that only the actual blocking can be done
      without hb->lock held.

      Split rt_mutex_finish_proxy_lock() into two parts, one that does the
      blocking and one that does remove_waiter() when the lock acquire
      failed.

      When the rtmutex was acquired successfully, the waiter can be removed
      safely in the acquisition path, since there is no concurrency on the
      lock owner.

      This means that, except for futex_lock_pi(), all wait_list
      modifications are done with both hb->lock and wait_lock held.
      
      [bigeasy@linutronix.de: fix for futex_requeue_pi_signal_restart]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104152.001659630@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      38d589f2
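      A sketch of the two halves, with simplified bodies (the real
      rt_mutex_wait_proxy_lock()/rt_mutex_cleanup_proxy_lock() also fix up
      the waiters bit and handle debug state):

      int rt_mutex_wait_proxy_lock_sketch(struct rt_mutex *lock,
                                          struct hrtimer_sleeper *to,
                                          struct rt_mutex_waiter *waiter)
      {
              int ret;

              raw_spin_lock_irq(&lock->wait_lock);
              /* only the blocking part: sleep until owner, signal or timeout */
              set_current_state(TASK_INTERRUPTIBLE);
              ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
              raw_spin_unlock_irq(&lock->wait_lock);

              return ret;
      }

      bool rt_mutex_cleanup_proxy_lock_sketch(struct rt_mutex *lock,
                                              struct rt_mutex_waiter *waiter)
      {
              bool cleanup = false;

              raw_spin_lock_irq(&lock->wait_lock);
              /* dequeue only if we did not become the owner in the meantime */
              if (rt_mutex_owner(lock) != current) {
                      remove_waiter(lock, waiter);
                      cleanup = true;
              }
              raw_spin_unlock_irq(&lock->wait_lock);

              return cleanup;
      }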
    • futex,rt_mutex: Introduce rt_mutex_init_waiter() · 50809358
      Peter Zijlstra authored
      Since there are already two copies of this code, introduce a helper
      now before adding a third one.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104151.950039479@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      50809358
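      For reference, the helper boils down to something like the sketch
      below (field names follow struct rt_mutex_waiter; treat this as
      illustrative rather than the exact kernel code):

      void rt_mutex_init_waiter_sketch(struct rt_mutex_waiter *waiter)
      {
              debug_rt_mutex_init_waiter(waiter);     /* no-op unless DEBUG_RT_MUTEXES */
              RB_CLEAR_NODE(&waiter->pi_tree_entry);  /* not queued on any pi waiter tree yet */
              RB_CLEAR_NODE(&waiter->tree_entry);     /* not queued on any lock waiter tree yet */
              waiter->task = NULL;                    /* no task associated yet */
      }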
    • futex,rt_mutex: Provide futex specific rt_mutex API · 5293c2ef
      Peter Zijlstra authored
      Part of what makes futex_unlock_pi() intricate is that
      rt_mutex_futex_unlock() -> rt_mutex_slowunlock() can drop
      rt_mutex::wait_lock.
      
      This means it cannot rely on the atomicity of wait_lock, which would
      be preferred in order to not rely on hb->lock so much.

      The reason rt_mutex_slowunlock() needs to drop wait_lock is that it
      can race with the rt_mutex fastpath; futexes, however, have their own
      fast path.

      Since futexes already have a bunch of separate rt_mutex accessors,
      complete that set and implement an rt_mutex variant without a
      fastpath for them.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104151.702962446@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5293c2ef
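      A sketch of what 'no fastpath' means for the futex variants;
      mark_wakeup_next_waiter() stands in for the existing slowpath helper,
      and the no-waiters case is omitted for brevity:

      int rt_mutex_futex_trylock_sketch(struct rt_mutex *lock)
      {
              /* no owner-cmpxchg fastpath: go straight to the slowpath */
              return rt_mutex_slowtrylock(lock);
      }

      void rt_mutex_futex_unlock_sketch(struct rt_mutex *lock)
      {
              DEFINE_WAKE_Q(wake_q);
              unsigned long flags;

              /* always take wait_lock; no fastpath race to drop it for */
              raw_spin_lock_irqsave(&lock->wait_lock, flags);
              mark_wakeup_next_waiter(&wake_q, lock); /* queue the top waiter */
              raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

              wake_up_q(&wake_q);
      }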
    • futex: Remove rt_mutex_deadlock_account_*() · fffa954f
      Peter Zijlstra authored
      These are unused and clutter up the code.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104151.652692478@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      fffa954f
  3. 02 March 2017, 3 commits
  4. 30 January 2017, 1 commit
  5. 02 December 2016, 2 commits
    • locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked() · 84d82ec5
      Thomas Gleixner authored
      While debugging the unlock vs. dequeue race which resulted in state
      corruption of futexes, the lockless nature of rt_mutex_proxy_unlock()
      caused some confusion.

      Add commentary to explain why it is safe to do this locklessly. Add
      matching comments to rt_mutex_init_proxy_locked() for completeness'
      sake.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20161130210030.591941927@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      84d82ec5
    • locking/rtmutex: Prevent dequeue vs. unlock race · dbb26055
      Thomas Gleixner authored
      David reported a futex/rtmutex state corruption. It's caused by the
      following problem:
      
      CPU0		CPU1		CPU2
      
      l->owner=T1
      		rt_mutex_lock(l)
      		lock(l->wait_lock)
      		l->owner = T1 | HAS_WAITERS;
      		enqueue(T2)
      		boost()
      		  unlock(l->wait_lock)
      		schedule()
      
      				rt_mutex_lock(l)
      				lock(l->wait_lock)
      				l->owner = T1 | HAS_WAITERS;
      				enqueue(T3)
      				boost()
      				  unlock(l->wait_lock)
      				schedule()
      		signal(->T2)	signal(->T3)
      		lock(l->wait_lock)
      		dequeue(T2)
      		deboost()
      		  unlock(l->wait_lock)
      				lock(l->wait_lock)
      				dequeue(T3)
      				  ===> wait list is now empty
      				deboost()
      				 unlock(l->wait_lock)
      		lock(l->wait_lock)
      		fixup_rt_mutex_waiters()
      		  if (wait_list_empty(l)) {
      		    owner = l->owner & ~HAS_WAITERS;
      		    l->owner = owner
      		     ==> l->owner = T1
      		  }
      
      				lock(l->wait_lock)
      rt_mutex_unlock(l)		fixup_rt_mutex_waiters()
      				  if (wait_list_empty(l)) {
      				    owner = l->owner & ~HAS_WAITERS;
      cmpxchg(l->owner, T1, NULL)
       ===> Success (l->owner = NULL)
      				    l->owner = owner
      				     ==> l->owner = T1
      				  }
      
      That means the problem is caused by fixup_rt_mutex_waiters(), which
      does the RMW to clear the waiters bit unconditionally when there are
      no waiters in the rtmutex's rbtree.

      This can be fatal: a concurrent unlock can release the rtmutex in the
      fastpath because the waiters bit is not set. If the cmpxchg() gets in
      the middle of the RMW operation, then the previous owner, which just
      unlocked the rtmutex, is set as the owner again when the write takes
      place after the successful cmpxchg().
      
      The solution is rather trivial: verify that the owner member of the rtmutex
      has the waiters bit set before clearing it. This does not require a
      cmpxchg() or other atomic operations because the waiters bit can only be
      set and cleared with the rtmutex wait_lock held. It's also safe against the
      fast path unlock attempt. The unlock attempt via cmpxchg() will either see
      the bit set and take the slowpath or see the bit cleared and release it
      atomically in the fastpath.
      
      It's remarkable that the test program provided by David triggers on
      ARM64 and MIPS64 really quickly, but it refuses to reproduce on
      x86-64, while the problem exists there as well. That refusal might
      explain why this was not discovered earlier, despite the bug existing
      from day one of the rtmutex implementation more than 10 years ago.

      Thanks to David for meticulously instrumenting the code and providing
      the information which allowed us to decode this subtle problem.
      Reported-by: David Daney <ddaney@caviumnetworks.com>
      Tested-by: David Daney <david.daney@cavium.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Fixes: 23f78d4a ("[PATCH] pi-futex: rt mutex core")
      Link: http://lkml.kernel.org/r/20161130210030.351136722@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dbb26055
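      The fix itself is small; a sketch of the resulting check, close to the
      real fixup_rt_mutex_waiters() but simplified:

      static void fixup_rt_mutex_waiters_sketch(struct rt_mutex *lock)
      {
              unsigned long owner, *p = (unsigned long *)&lock->owner;

              if (rt_mutex_has_waiters(lock))
                      return;         /* still waiters queued: nothing to do */

              owner = READ_ONCE(*p);
              if (owner & RT_MUTEX_HAS_WAITERS)       /* only clear the bit if it is set */
                      WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
      }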
  6. 21 November 2016, 1 commit
  7. 08 June 2016, 1 commit
  8. 26 January 2016, 1 commit
    • rtmutex: Make wait_lock irq safe · b4abf910
      Thomas Gleixner authored
      Sasha reported a lockdep splat about a potential deadlock between RCU boosting
      rtmutex and the posix timer it_lock.
      
      CPU0					CPU1
      
      rtmutex_lock(&rcu->rt_mutex)
        spin_lock(&rcu->rt_mutex.wait_lock)
      					local_irq_disable()
      					spin_lock(&timer->it_lock)
                                        spin_lock(&rcu->rt_mutex.wait_lock)
      --> Interrupt
          spin_lock(&timer->it_lock)
      
      This is caused by the following code sequence on CPU1
      
           rcu_read_lock()
           x = lookup();
           if (x)
           	spin_lock_irqsave(&x->it_lock);
           rcu_read_unlock();
           return x;
      
      We could fix that in the posix timer code by keeping rcu read-locked
      across the spinlocked and irq-disabled section, but the above sequence
      is common and there is no reason not to support it.

      Making rt_mutex.wait_lock irq safe prevents the deadlock.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      b4abf910
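      In practice the wait_lock sites switch from the plain raw_spin_lock()
      calls to the irq-saving variants, roughly as sketched below
      (illustrative only, not the complete slowpath):

      static void rt_mutex_slowlock_prologue_sketch(struct rt_mutex *lock)
      {
              unsigned long flags;

              /* was: raw_spin_lock(&lock->wait_lock); */
              raw_spin_lock_irqsave(&lock->wait_lock, flags);

              /* ... enqueue the waiter, walk the PI chain, etc. ... */

              raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
      }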
  9. 06 October 2015, 1 commit
  10. 23 September 2015, 1 commit
  11. 20 July 2015, 1 commit
  12. 20 June 2015, 2 commits
  13. 19 June 2015, 1 commit
    • locking/rtmutex: Implement lockless top-waiter wakeup · 45ab4eff
      Davidlohr Bueso authored
      Mark the task for later wakeup after the wait_lock has been released.
      This way, once the next task is awoken, it will have a better chance
      of finding the wait_lock free when continuing execution in
      __rt_mutex_slowlock() while trying to acquire the rtmutex, calling
      try_to_take_rt_mutex(). In contended scenarios, other tasks attempting
      to take the lock may acquire it first, right after the wait_lock is
      released, but (a) this can also occur with the current code, as it
      relies on the spinlock fairness, and (b) we are dealing with the
      top waiter anyway, so it will always take the lock next.
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1432056298-18738-2-git-send-email-dave@stgolabs.net
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      45ab4eff
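      A sketch of the deferred wakeup, assuming the wake_q API
      (wake_q_add()/wake_up_q()); the real slow unlock path folds this into
      its existing bookkeeping:

      static void rt_mutex_slowunlock_wakeup_sketch(struct rt_mutex *lock)
      {
              DEFINE_WAKE_Q(wake_q);
              struct rt_mutex_waiter *waiter;

              raw_spin_lock(&lock->wait_lock);

              waiter = rt_mutex_top_waiter(lock);     /* highest-priority waiter */
              wake_q_add(&wake_q, waiter->task);      /* mark for wakeup, don't wake yet */

              raw_spin_unlock(&lock->wait_lock);

              wake_up_q(&wake_q);                     /* actual wakeup, wait_lock already free */
      }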
  14. 14 May 2015, 1 commit
    • rtmutex: Warn if trylock is called from hard/softirq context · 6ce47fd9
      Thomas Gleixner authored
      rt_mutex_trylock() must be called from thread context. It can be
      called from atomic regions (preemption or interrupts disabled), but
      not from hard/softirq/nmi context. Add a warning to alert abusers.
      
      The reasons for this are:
      
          1) There is a potential deadlock in the slowpath
      
          2) Another cpu which blocks on the rtmutex will boost the task
             which allegedly locked the rtmutex, but that cannot work
             because the hard/softirq context borrows the task context.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      6ce47fd9
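      The check amounts to something like the sketch below (whether the real
      code uses WARN_ON() or WARN_ON_ONCE() is a detail;
      rt_mutex_fasttrylock()/rt_mutex_slowtrylock() are the existing
      internal helpers):

      int rt_mutex_trylock_sketch(struct rt_mutex *lock)
      {
              /* thread context only: PI boosting needs a real task to boost */
              if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
                      return 0;

              return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
      }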
  15. 13 May 2015, 1 commit
    • locking/rtmutex: Drop usage of __HAVE_ARCH_CMPXCHG · cede8841
      Sebastian Andrzej Siewior authored
      The rtmutex code is the only user of __HAVE_ARCH_CMPXCHG and we have a
      few other users of cmpxchg() which do not care about
      __HAVE_ARCH_CMPXCHG. This define was first introduced in 23f78d4a
      ("[PATCH] pi-futex: rt mutex core"), which is v2.6.18. The generic
      cmpxchg was introduced later in 068fbad2 ("Add cmpxchg_local to
      asm-generic for per cpu atomic operations"), which is v2.6.25.
      Back then something was required to get rtmutex working with the fast
      path on architectures without cmpxchg, and this seems to be the
      result.

      It popped up recently on rt-users because ARM (v6+) does not define
      __HAVE_ARCH_CMPXCHG (even though it implements it), which results in
      slower locking performance in the fast path.
      To put some numbers on it: preempt -RT, am335x, 10 loops of
      100000 invocations of rt_spin_lock() + rt_spin_unlock() (time "total" is
      the average of the 10 loops for the 100000 invocations, "loop" is
      "total / 100000 * 1000"):
      
           cmpxchg |    slowpath used  ||    cmpxchg used
                   |   total   | loop  ||   total    | loop
           --------|-----------|-------||------------|-------
           ARMv6   | 9129.4 us | 91 ns ||  3311.9 us |  33 ns
           generic | 9360.2 us | 94 ns || 10834.6 us | 108 ns
           ----------------------------||--------------------
      
      Forcing it to generic cmpxchg() made things worse for the slowpath and
      even worse in the cmpxchg() path. It boils down to 14ns more per
      lock+unlock in a cache-hot loop, so it might not be that much in the
      real world.
      The last test was a substitute for a pre-ARMv6 machine, but then I was
      able to perform the comparison on imx28, which is ARMv5 and therefore
      always uses the generic cmpxchg implementation. And the numbers:
      
                    |   total     | loop
           -------- |-----------  |--------
           slowpath | 263937.2 us | 2639 ns
           cmpxchg  |  16934.2 us |  169 ns
           --------------------------------
      
      The numbers are larger since the machine is slower in general. However,
      letting rtmutex use cmpxchg() instead of the slowpath seems to improve
      things.

      Since, from the ARM (tested on am335x + imx28) point of view, always
      using cmpxchg() in rt_mutex_lock() + rt_mutex_unlock() makes sense, I
      would drop the define.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: will.deacon@arm.com
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/20150225175613.GE6823@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      cede8841
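      For context, the fastpath in question is essentially a single
      cmpxchg() on the owner word, roughly as sketched below (simplified;
      the real macros also cope with the HAS_WAITERS bit):

      static bool rt_mutex_fast_trylock_sketch(struct rt_mutex *lock)
      {
              /* NULL owner means unlocked: try to install ourselves atomically */
              return cmpxchg(&lock->owner, NULL, current) == NULL;
      }

      static bool rt_mutex_fast_unlock_sketch(struct rt_mutex *lock)
      {
              /* only succeeds while we own it and no waiter bits are set */
              return cmpxchg(&lock->owner, current, NULL) == current;
      }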
  16. 08 May 2015, 1 commit
  17. 22 April 2015, 1 commit
  18. 25 March 2015, 1 commit
  19. 01 March 2015, 1 commit
  20. 18 February 2015, 1 commit
  21. 04 February 2015, 1 commit
  22. 13 August 2014, 1 commit
    • locking/Documentation: Move locking related docs into Documentation/locking/ · 214e0aed
      Davidlohr Bueso authored
      Specifically:
        Documentation/locking/lockdep-design.txt
        Documentation/locking/lockstat.txt
        Documentation/locking/mutex-design.txt
        Documentation/locking/rt-mutex-design.txt
        Documentation/locking/rt-mutex.txt
        Documentation/locking/spinlocks.txt
        Documentation/locking/ww-mutex-design.txt
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: jason.low2@hp.com
      Cc: aswin@hp.com
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lubomir Rintel <lkundrak@v3.sk>
      Cc: Masanari Iida <standby24x7@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: fengguang.wu@intel.com
      Link: http://lkml.kernel.org/r/1406752916-3341-6-git-send-email-davidlohr@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      214e0aed
  23. 22 June 2014, 9 commits