1. 09 Apr 2015, 1 commit
    • locking/mutex: Further simplify mutex_spin_on_owner() · 01ac33c1
      Jason Low authored
      Similar to what Linus suggested for rwsem_spin_on_owner(), in
      mutex_spin_on_owner() instead of having while (true) and
      breaking out of the spin loop on lock->owner != owner, we can
      have the loop directly check for while (lock->owner == owner) to
      improve the readability of the code.
      
      It also shrinks the code a bit:
      
         text    data     bss     dec     hex filename
         3721       0       0    3721     e89 mutex.o.before
         3705       0       0    3705     e79 mutex.o.after
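The two loop shapes can be sketched in userspace C (hypothetical, heavily simplified struct layouts; the real kernel code also checks need_resched(), uses READ_ONCE() and cpu_relax(), and runs under RCU):

```c
#include <stddef.h>

/* Hypothetical simplified stand-ins for the kernel types. */
struct task { int on_cpu; };
struct mutex { struct task *owner; };

/* Before: open-coded infinite loop with an explicit break. */
static int spin_on_owner_before(struct mutex *lock, struct task *owner)
{
	while (1) {
		if (lock->owner != owner)
			break;		/* owner changed or released the lock */
		if (!owner->on_cpu)
			return 0;	/* owner rescheduled: stop spinning */
	}
	return 1;
}

/* After: the exit condition becomes the loop condition itself. */
static int spin_on_owner_after(struct mutex *lock, struct task *owner)
{
	while (lock->owner == owner) {
		if (!owner->on_cpu)
			return 0;
	}
	return 1;
}
```

Both versions behave identically; the second merely states the spin condition where a reader looks for it.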
Signed-off-by: Jason Low <jason.low2@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: http://lkml.kernel.org/r/1428521960-5268-2-git-send-email-jason.low2@hp.com
      [ Added code generation info. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      01ac33c1
  2. 25 Mar 2015, 1 commit
  3. 07 Mar 2015, 1 commit
    • locking/rwsem: Fix lock optimistic spinning when owner is not running · 9198f6ed
      Jason Low authored
      Ming reported soft lockups occurring when running xfstest due to
      the following tip:locking/core commit:
      
        b3fd4f03 ("locking/rwsem: Avoid deceiving lock spinners")
      
      When doing optimistic spinning in rwsem, threads should stop
      spinning when the lock owner is not running. While a thread is
      spinning on owner, if the owner reschedules, owner->on_cpu
      returns false and we stop spinning.
      
However, that commit essentially caused the check to be ignored:
after breaking out of the spin loop due to !on_cpu, we would continue
spinning anyway as long as sem->owner != NULL.
      
      This patch fixes this by making sure we stop spinning if the
      owner is not running. Furthermore, just like with mutexes,
      refactor the code such that we don't have separate checks for
      owner_running(). This makes it more straightforward in terms of
      why we exit the spin on owner loop and we would also avoid
      needing to "guess" why we broke out of the loop to make this
      more readable.
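A hedged userspace sketch of the fixed loop shape (hypothetical simplified types; the real code also honors need_resched() and uses proper barriers): spinning stops outright the moment the owner is seen off-CPU, instead of falling through to a separate owner != NULL check:

```c
#include <stddef.h>

/* Hypothetical simplified stand-ins for the kernel types. */
struct task { int on_cpu; };
struct rw_semaphore { struct task *owner; };

/* Returns 1 to keep optimistically spinning on the lock, 0 to block. */
static int rwsem_spin_on_owner_sketch(struct rw_semaphore *sem,
				      struct task *owner)
{
	while (sem->owner == owner) {
		if (!owner->on_cpu)
			return 0;	/* owner rescheduled: go to sleep */
	}
	/*
	 * The owner changed; it is only worth continuing to spin if the
	 * sem now looks ownerless.
	 */
	return sem->owner == NULL;
}
```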
Reported-and-tested-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
Link: http://lkml.kernel.org/r/1425714331.2475.388.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9198f6ed
  4. 24 Feb 2015, 1 commit
  5. 18 Feb 2015, 7 commits
  6. 04 Feb 2015, 4 commits
  7. 29 Jan 2015, 1 commit
  8. 14 Jan 2015, 5 commits
  9. 09 Jan 2015, 1 commit
  10. 04 Jan 2015, 1 commit
  11. 28 Oct 2014, 1 commit
    • locking/mutex: Don't assume TASK_RUNNING · 6f942a1f
      Peter Zijlstra authored
      We're going to make might_sleep() test for TASK_RUNNING, because
      blocking without TASK_RUNNING will destroy the task state by setting
      it to TASK_RUNNING.
      
There are a few occasions where it's 'valid' to call blocking
primitives (and mutex_lock in particular) and not have TASK_RUNNING;
typically such cases are right before we set TASK_RUNNING anyhow.
      
      Robustify the code by not assuming this; this has the beneficial side
      effect of allowing optional code emission for fixing the above
      might_sleep() false positives.
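The hazard can be simulated in userspace (hypothetical names throughout; `current_state` merely stands in for current->state): a blocking primitive that sleeps and wakes leaves the task in TASK_RUNNING, silently destroying a sleep state the caller set beforehand:

```c
/* Hypothetical userspace simulation of current->state handling. */
enum { TASK_RUNNING = 0, TASK_UNINTERRUPTIBLE = 2 };

static int current_state = TASK_RUNNING;

/*
 * A blocking primitive such as mutex_lock() may sleep; after waking it
 * leaves the task in TASK_RUNNING, clobbering whatever was set before.
 */
static void blocking_primitive(void)
{
	current_state = TASK_RUNNING;
}

/* The pattern the commit defends against: */
static int demo(void)
{
	current_state = TASK_UNINTERRUPTIBLE;	/* prepare to sleep ... */
	blocking_primitive();			/* ... but block first */
	return current_state;			/* sleep state is gone */
}
```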
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: tglx@linutronix.de
      Cc: ilya.dryomov@inktank.com
      Cc: umgwanakikbuti@gmail.com
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140924082241.988560063@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6f942a1f
  12. 03 Oct 2014, 2 commits
    • locking/lockdep: Revert qrwlock recursive stuff · 8acd91e8
      Peter Zijlstra authored
      Commit f0bab73c ("locking/lockdep: Restrict the use of recursive
      read_lock() with qrwlock") changed lockdep to try and conform to the
      qrwlock semantics which differ from the traditional rwlock semantics.
      
      In particular qrwlock is fair outside of interrupt context, but in
      interrupt context readers will ignore all fairness.
      
The problem with modeling this is that the read and write sides have
different lock-state (interrupt) semantics, but we only have a single
representation of these. Therefore lockdep gets confused, thinking
the lock can cause interrupt lock inversions.
      
      So revert it for now; the old rwlock semantics were already imperfectly
      modeled and the qrwlock extra won't fit either.
      
If we want to properly fix this, I think we need to resurrect the
work Gautham did a few years ago that split the read and write state
of locks:
      
         http://lwn.net/Articles/332801/
      
      FWIW the locking selftest that would've failed (and was reported by
      Borislav earlier) is something like:
      
        RL(X1);	/* IRQ-ON */
        LOCK(A);
        UNLOCK(A);
        RU(X1);
      
        IRQ_ENTER();
        RL(X1);	/* IN-IRQ */
        RU(X1);
        IRQ_EXIT();
      
      At which point it would report that because A is an IRQ-unsafe lock we
      can suffer the following inversion:
      
      	CPU0		CPU1
      
      	lock(A)
      			lock(X1)
      			lock(A)
      	<IRQ>
      	 lock(X1)
      
And this is 'wrong' because X1 can recurse (assuming the above locks
are in fact read-locks) but lockdep doesn't know about this.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: ego@linux.vnet.ibm.com
      Cc: bp@alien8.de
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8acd91e8
    • locking/rwsem: Avoid double checking before try acquiring write lock · debfab74
      Jason Low authored
Commit 9b0fc9c0 ("rwsem: skip initial trylock in rwsem_down_write_failed")
checks whether there are known active lockers, in order to avoid a write
trylock using an expensive cmpxchg() when it likely wouldn't get the lock.
      
      However, a subsequent patch was added such that we directly
      check for sem->count == RWSEM_WAITING_BIAS right before trying
      that cmpxchg().
      
      Thus, commit 9b0fc9c0 now just adds overhead.
      
This patch modifies it so that we only check whether
count == RWSEM_WAITING_BIAS.
      
      Also, add a comment on why we do an "extra check" of count
      before the cmpxchg().
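The resulting shape can be sketched with C11 atomics (toy bias values; the kernel's constants are arch-specific and the real code lives inside rwsem_down_write_failed()): a cheap plain load filters out the contended case before the expensive cmpxchg is attempted:

```c
#include <stdatomic.h>

/* Toy stand-ins for the arch-specific kernel bias constants. */
#define WAITING_BIAS		(-1L)
#define ACTIVE_WRITE_BIAS	(1L)

struct rw_semaphore { _Atomic long count; };

/*
 * Only attempt the cmpxchg when the cheap read says there are waiters
 * but no active lockers (count == WAITING_BIAS); otherwise the CAS
 * would almost certainly fail and just bounce the cache line around.
 */
static int try_write_lock_sketch(struct rw_semaphore *sem)
{
	long old = atomic_load_explicit(&sem->count, memory_order_relaxed);

	if (old != WAITING_BIAS)
		return 0;	/* likely active lockers: skip the CAS */
	return atomic_compare_exchange_strong(&sem->count, &old,
					      WAITING_BIAS + ACTIVE_WRITE_BIAS);
}
```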
Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Chegu Vinod <chegu_vinod@hp.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1410913017.2447.22.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      debfab74
  13. 30 Sep 2014, 4 commits
  14. 17 Sep 2014, 8 commits
  15. 16 Sep 2014, 1 commit
  16. 04 Sep 2014, 1 commit