1. 28 May 2013, 1 commit
  2. 08 Feb 2013, 1 commit
  3. 12 Dec 2011, 1 commit
  4. 31 Oct 2011, 1 commit
  5. 29 Sep 2011, 1 commit
    • rcu: Permit rt_mutex_unlock() with irqs disabled · 5342e269
      Authored by Paul E. McKenney
      Create a separate lockdep class for the rt_mutex used for RCU priority
      boosting and enable use of rt_mutex_lock() with irqs disabled.  This
      prevents RCU priority boosting from falling prey to deadlocks when
      someone begins an RCU read-side critical section in preemptible state,
      but releases it with an irq-disabled lock held.
      
      Unfortunately, the scheduler's runqueue and priority-inheritance locks
      still must either completely enclose or be completely enclosed by any
      overlapping RCU read-side critical section.
      
      This version removes a redundant local_irq_restore() noted by
      Yong Zhang.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
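      A minimal sketch of the idea, assuming hypothetical names (the
      class key and the init helper below are illustrative, not the
      patch's actual code):

          #include <linux/rtmutex.h>
          #include <linux/lockdep.h>

          /* Hypothetical sketch: give the RCU-boost rt_mutex's wait_lock
           * its own lockdep class, so that taking it with irqs disabled
           * does not generate false positives against ordinary rt_mutexes. */
          static struct lock_class_key rcu_boost_class;

          static void rcu_init_boost_rt_mutex(struct rt_mutex *mtx)
          {
                  rt_mutex_init(mtx);
                  lockdep_set_class(&mtx->wait_lock, &rcu_boost_class);
          }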
  6. 08 Jul 2011, 1 commit
  7. 28 Jan 2011, 1 commit
    • rtmutex: Simplify PI algorithm and make highest prio task get lock · 8161239a
      Authored by Lai Jiangshan
      In the current rtmutex code, the pending owner may be boosted by
      tasks on the rtmutex's wait list whenever the pending owner is
      deboosted or a waiting task is boosted. This boosting is spurious,
      because the pending owner does not actually hold the rtmutex yet,
      so it is not reasonable.
      
      Example:

      time1:
      A (high prio) owns the rtmutex.
      B (mid prio) and C (low prio) are on the wait list.

      time2:
      A releases the lock and B becomes the pending owner.
      A (or another high-prio task) continues to run. B's prio is lower
      than A's, so B just sits in the runqueue.

      time3:
      A or the other high-prio task sleeps, but some time has passed, and
      B's and C's priorities were changed in the interval (time2 ~ time3)
      by boosting or deboosting. Now C has higher priority than B. Is it
      reasonable that C has to boost B and help B take the rtmutex?

      No! This boosting is pointless before B really owns the rtmutex.
      We should give C a chance to beat B and win the rtmutex.
      
      That is the motivation for this patch, which *ensures* that only
      the top waiter, or a task with higher priority than all waiters,
      can take the lock.
      
      How?
      1) The top waiter is not dequeued on unlock; if the top waiter
         changes, the old top waiter fails to take the lock and goes
         back to sleep.
      2) On acquisition, a task gets the lock when the lock is not taken
         and either there are no waiters, or the task has higher priority
         than all waiters, or the task is the top waiter (sketched in the
         code after this entry).
      3) Whenever the top waiter changes, the new top waiter is woken up.
      
      The algorithm is much simpler than before: there is no pending
      owner and no boosting of a pending owner.
      
      Other advantages of this patch:
      1) The number of rtmutex states is halved, making the code easier
         to read.
      2) The code becomes shorter.
      3) The top waiter is not dequeued until it really takes the lock,
         so waiters retain FIFO order when the lock is stolen.
      
      Neither an advantage nor a disadvantage:
      1) Even though we may wake up multiple waiters (whenever the top
         waiter changes), we hardly cause a "thundering herd": the number
         of woken tasks is usually one or very few.
      2) Two APIs change:
         rt_mutex_owner() no longer returns the pending owner; it returns
                          NULL while the top waiter is about to take the
                          lock.
         rt_mutex_next_owner() always returns the top waiter, and never
                          returns NULL while there are waiters, because
                          the top waiter is not dequeued.

         I have fixed the code that uses these APIs.
      
      Still to be updated after this patch is accepted:
      1) Documentation/*
      2) the test case scripts/rt-tester/t4-l2-pi-deboost.tst
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <4D3012D5.4060709@cn.fujitsu.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
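      A hedged sketch of acquisition rule 2) above; the helper
      can_take_rt_mutex() is a hypothetical stand-in, although the
      rt_mutex_owner()/rt_mutex_top_waiter() accessors are the kernel's
      own internal helpers:

          /* Illustrative only: may "task" take "lock" right now?
           * In the kernel a lower ->prio value means higher priority. */
          static bool can_take_rt_mutex(struct rt_mutex *lock,
                                        struct task_struct *task,
                                        struct rt_mutex_waiter *waiter)
          {
                  if (rt_mutex_owner(lock))
                          return false;   /* still owned, cannot take it */
                  if (!rt_mutex_has_waiters(lock))
                          return true;    /* no competition */
                  if (waiter && waiter == rt_mutex_top_waiter(lock))
                          return true;    /* we are the top waiter */
                  /* Only a strictly higher-priority task may steal the
                   * lock from under the top waiter. */
                  return task->prio < rt_mutex_top_waiter(lock)->task->prio;
          }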
  8. 15 Dec 2009, 2 commits
  9. 06 Aug 2009, 1 commit
    • rtmutex: Avoid deadlock in rt_mutex_start_proxy_lock() · 1bbf2083
      Authored by Darren Hart
      In the event of a lock steal or of the owner dying,
      rt_mutex_start_proxy_lock() gives the rt_mutex to the waiting
      task, but it fails to release the wait_lock. This leads to
      subsequent deadlocks when other tasks try to acquire the
      rt_mutex.
      
      I also removed a few extra blank lines that really spaced this
      routine out. I must have been high on the \n when I wrote this
      originally...
      Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Dinakar Guniguntala <dino@in.ibm.com>
      Cc: John Stultz <johnstul@linux.vnet.ibm.com>
      LKML-Reference: <4A79D7F1.4000405@us.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
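      A sketch of the fixed control flow, with simplified locking and a
      hypothetical try_to_take_rt_mutex_for() helper standing in for the
      kernel's internal acquisition logic:

          int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
                                        struct rt_mutex_waiter *waiter,
                                        struct task_struct *task,
                                        int detect_deadlock)
          {
                  int ret;

                  spin_lock(&lock->wait_lock);

                  /* Lock-steal / owner-died case: "task" gets the lock now.
                   * try_to_take_rt_mutex_for() is an illustrative stand-in. */
                  if (try_to_take_rt_mutex_for(lock, task)) {
                          spin_unlock(&lock->wait_lock);  /* the unlock the bug omitted */
                          return 1;
                  }

                  ret = task_blocks_on_rt_mutex(lock, waiter, task,
                                                detect_deadlock);

                  spin_unlock(&lock->wait_lock);  /* normal path was already correct */
                  return ret;
          }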
  10. 13 Jun 2009, 1 commit
  11. 30 Apr 2009, 1 commit
  12. 06 Apr 2009, 1 commit
  13. 06 Sep 2008, 1 commit
  14. 13 Feb 2008, 1 commit
  15. 20 Oct 2007, 1 commit
  16. 17 Jul 2007, 1 commit
  17. 19 Jun 2007, 1 commit
    • Revert "futex_requeue_pi optimization" · bd197234
      Authored by Thomas Gleixner
      This reverts commit d0aa7a70.
      
      Not only did it introduce user-space-visible changes to the futex
      syscall, it is also non-functional, and there is no way to fix it
      properly before the 2.6.22 release.
      
      The breakage report ( http://lkml.org/lkml/2007/5/12/17 ) went
      unanswered, and unfortunately it turned out that the concept is
      not feasible at all. It violates the rtmutex semantics badly by
      introducing a virtual owner, which hacks around the coupling of
      the user-space pi_futex and the kernel-internal rt_mutex
      representation.
      
      At the moment the only safe option is to remove it fully as it contains
      user-space visible changes to broken kernel code, which we do not want
      to expose in the 2.6.22 release.
      
      The patch reverts the original patch mostly 1:1, but contains a
      couple of trivial manual cleanups made necessary by later patches
      that touched the same area of code.
      
      Verified against the glibc tests and my own PI futex tests.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Ulrich Drepper <drepper@redhat.com>
      Cc: Pierre Peiffer <pierre.peiffer@bull.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 09 Jun 2007, 2 commits
  19. 10 May 2007, 1 commit
  20. 17 Feb 2007, 1 commit
  21. 30 Sep 2006, 1 commit
    • [PATCH] clean up and remove some extra spinlocks from rtmutex · db630637
      Authored by Steven Rostedt
      Oleg brought up some interesting points about grabbing the pi_lock
      for certain protections. During that discussion, I realized that
      there are places where the pi_lock is grabbed when it really isn't
      necessary. This patch also does a bit of cleanup.
      
      This patch does three things:

      1) Renames the "boost" variable to "chain_walk", since it is also
         used in the debugging case when nothing is going to be boosted;
         the new name better describes what happens when the test
         succeeds.

      2) Moves get_task_struct to just before the unlocking of the
         wait_lock (see the sketch after this entry). This removes
         duplicate code and makes it a little easier to read. The owner
         won't go away while either the pi_lock or the wait_lock is
         held.

      3) Removes the pi_lock taking and the owner-blocked check entirely
         from the debugging case, because grabbing the lock, doing the
         check, and then releasing the lock is riddled with races. It is
         just as good to call the pi chain-walk function directly: after
         releasing the lock the owner can block anyway, and we would
         have missed that. In the debug case we really do want the chain
         walk in order to test for deadlocks.
      
      [oleg@tv-sign.ru: more of the same]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Esben Nielsen <nielsen.esben@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
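      A sketch of the pattern from point 2 (the idea, not the literal
      diff): pin the owner with a reference while the wait_lock is still
      held, so it cannot go away during the chain walk:

          /* Safe while wait_lock is held: the owner cannot exit yet. */
          get_task_struct(owner);
          spin_unlock(&lock->wait_lock);

          /* Walk the PI chain; it drops the owner reference when done. */
          rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock,
                                     waiter, current);

          spin_lock(&lock->wait_lock);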
  22. 01 Aug 2006, 1 commit
  23. 04 Jul 2006, 2 commits
    • [PATCH] sched: cleanup, remove task_t, convert to struct task_struct · 36c8b586
      Authored by Ingo Molnar
      Cleanup: remove task_t and convert all its uses to struct
      task_struct. I introduced it for the scheduler years ago and it
      was a mistake.
      
      Conversion was mostly scripted, the result was reviewed and all
      secondary whitespace and style impact (if any) was fixed up by hand.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
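      The conversion itself was mechanical; a representative example
      (not necessarily a line from the patch):

          /* before */ extern task_t *find_task_by_pid(pid_t pid);
          /* after  */ extern struct task_struct *find_task_by_pid(pid_t pid);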
    • [PATCH] lockdep: better lock debugging · 9a11b49a
      Authored by Ingo Molnar
      Generic lock debugging:
      
       - generalized lock debugging framework. For example, a bug in one lock
         subsystem turns off debugging in all lock subsystems.
      
       - got rid of the caller address passing (__IP__/__IP_DECL__/etc.) from
         the mutex/rtmutex debugging code: it caused way too much prototype
         hackery, and lockdep will give the same information anyway.
      
       - ability to do silent tests
      
       - check lock freeing in vfree too.
      
       - more finegrained debugging options, to allow distributions to
         turn off more expensive debugging features.
      
      There's no separate 'held mutexes' list anymore; instead there's a
      'held locks' stack within lockdep, which unifies deadlock detection
      across all lock classes. (This is independent of the lockdep
      validation machinery; lockdep first checks whether we are already
      holding a lock.)
      
      Here are the current debugging options:
      
      CONFIG_DEBUG_MUTEXES=y
      CONFIG_DEBUG_LOCK_ALLOC=y
      
      which do:
      
       config DEBUG_MUTEXES
                bool "Mutex debugging, basic checks"
      
       config DEBUG_LOCK_ALLOC
               bool "Detect incorrect freeing of live mutexes"
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
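      A hedged illustration of what the "detect incorrect freeing of
      live mutexes" check catches; the struct and function here are made
      up for the example:

          #include <linux/mutex.h>
          #include <linux/slab.h>

          struct foo {
                  struct mutex lock;      /* lives inside the allocation */
                  int data;
          };

          static void broken_teardown(struct foo *f)
          {
                  mutex_lock(&f->lock);
                  /* With CONFIG_DEBUG_LOCK_ALLOC=y, freeing memory that
                   * still contains a held lock triggers a debugging splat. */
                  kfree(f);
          }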
  24. 28 Jun 2006, 4 commits