1. 07 March 2008, 3 commits
  2. 05 March 2008, 1 commit
    • sched: revert load_balance_monitor() changes · 62fb1851
      Committed by Peter Zijlstra
      The following commits cause a number of regressions:
      
        commit 58e2d4ca
        Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
        Date:   Fri Jan 25 21:08:00 2008 +0100
        sched: group scheduling, change how cpu load is calculated
      
        commit 6b2d7700
        Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
        Date:   Fri Jan 25 21:08:00 2008 +0100
        sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups
      
      Namely:
       - very frequent wakeups on SMP, reported by PowerTop users.
       - cacheline thrashing on (large) SMP
       - some latencies larger than 500ms
      
      While there is a mergeable patch to fix the latter, the former issues
      are not fixable in a manner suitable for 2.6.25 (we're at -rc3 now).
      
      Hence we revert them and try again in v2.6.26.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Tested-by: Alexey Zaytsev <alexey.zaytsev@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 25 February 2008, 2 commits
  4. 24 February 2008, 2 commits
    • Add memory barrier semantics to wake_up() & co · 04e2f174
      Committed by Linus Torvalds
      Oleg Nesterov and others have pointed out that on some architectures,
      the traditional sequence of
      
      	set_current_state(TASK_INTERRUPTIBLE);
      	if (CONDITION)
      		return;
      	schedule();
      
      is racy wrt another CPU doing
      
      	CONDITION = 1;
      	wake_up_process(p);
      
      because while set_current_state() has a memory barrier separating
      setting of the TASK_INTERRUPTIBLE state from reading of the CONDITION
      variable, there is no such memory barrier on the wakeup side.
      
      Now, wake_up_process() does actually take a spinlock before it reads and
      sets the task state on the waking side, and on x86 (and many other
      architectures) that spinlock is in fact equivalent to a memory barrier,
      but that is not generally guaranteed.  The write that sets CONDITION
      could move into the critical region protected by the runqueue spinlock.
      
      However, adding an smp_wmb() before the spinlock should now order the
      writing of CONDITION wrt the lock itself, which in turn is ordered wrt
      the accesses within the spinlock (which includes the reading of the old
      state).
      
      This should thus close the race (which probably has never been seen in
      practice, but since smp_wmb() is a no-op on x86, it's not like this will
      make anything worse either on the most common architecture where the
      spinlock already gave the required protection).
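      
      For illustration, a minimal sketch of the pattern under discussion; the names
      wait_for_event(), signal_event(), my_task and event_ready are hypothetical,
      and the smp_wmb() in question is issued inside try_to_wake_up() by this
      commit, so callers of wake_up_process() do not add it themselves:
      
      	#include <linux/sched.h>
      
      	static int event_ready;			/* the CONDITION from the text above */
      
      	/*
      	 * Waiter: set_current_state() implies a full memory barrier, so the
      	 * store of TASK_INTERRUPTIBLE cannot be reordered past the read of
      	 * event_ready.
      	 */
      	static void wait_for_event(void)
      	{
      		for (;;) {
      			set_current_state(TASK_INTERRUPTIBLE);
      			if (event_ready)
      				break;
      			schedule();
      		}
      		__set_current_state(TASK_RUNNING);
      	}
      
      	/*
      	 * Waker: publish the condition, then wake the task.  Before this
      	 * commit nothing ordered the store to event_ready against the
      	 * task-state accesses inside wake_up_process(); the smp_wmb() now
      	 * done on the wakeup path closes that window.
      	 */
      	static void signal_event(struct task_struct *my_task)
      	{
      		event_ready = 1;
      		wake_up_process(my_task);	/* barrier happens in here */
      	}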
      Acked-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kprobes: refuse kprobe insertion on add/sub_preempt_counter() · 43627582
      Committed by Srinivasa Ds
      Kprobes makes use of preempt_disable() and preempt_enable_noresched(), and
      these functions in turn call add/sub_preempt_count().  Allowing a probe in
      either of them would make kprobes re-enter itself, so probe insertion into
      these functions has to be refused.
      
      This patch disallows probing add/sub_preempt_count().
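      
      A minimal sketch of how such a refusal is commonly implemented, assuming the
      fix uses the kernel's __kprobes annotation (which places a function in the
      .kprobes.text section that register_kprobe() refuses to probe); the debug
      checks of the real helpers are elided here:
      
      	#include <linux/kprobes.h>	/* __kprobes => .kprobes.text section */
      	#include <linux/preempt.h>
      
      	/*
      	 * register_kprobe() rejects any address that falls inside
      	 * .kprobes.text, so tagging the preempt-count helpers __kprobes
      	 * keeps kprobes from recursing through the very code its own
      	 * handlers depend on.
      	 */
      	void __kprobes add_preempt_count(int val)
      	{
      		preempt_count() += val;		/* overflow checks elided */
      	}
      
      	void __kprobes sub_preempt_count(int val)
      	{
      		preempt_count() -= val;		/* underflow checks elided */
      	}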
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 13 February 2008, 7 commits
  6. 09 February 2008, 1 commit
  7. 01 February 2008, 1 commit
  8. 30 January 2008, 1 commit
    • spinlock: lockbreak cleanup · 95c354fe
      Committed by Nick Piggin
      The break_lock data structure and code for spinlocks is quite nasty.
      Not only does it double the size of a spinlock but it changes locking to
      a potentially less optimal trylock.
      
      Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
      __raw_spin_is_contended that uses the lock data itself to determine whether
      there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
      not set.
      
      Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
      decouple it from the spinlock implementation, and make it typesafe (rwlocks
      do not have any need_lockbreak sites -- why do they even get bloated up
      with that break_lock then?).
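      
      A sketch of how the renamed predicate is typically used by a long-running
      loop that holds a spinlock; my_lock, more_items() and handle_one_item() are
      illustrative names, not part of the patch:
      
      	#include <linux/sched.h>
      	#include <linux/spinlock.h>
      
      	static DEFINE_SPINLOCK(my_lock);
      
      	static bool more_items(void);		/* hypothetical work predicate */
      	static void handle_one_item(void);	/* hypothetical unit of work */
      
      	static void process_items(void)
      	{
      		spin_lock(&my_lock);
      		while (more_items()) {
      			handle_one_item();
      			/*
      			 * spin_needbreak() reports contention via
      			 * spin_is_contended(); with CONFIG_GENERIC_LOCKBREAK
      			 * unset it inspects the lock word itself instead of
      			 * the old break_lock field.
      			 */
      			if (need_resched() || spin_needbreak(&my_lock)) {
      				spin_unlock(&my_lock);
      				cond_resched();
      				spin_lock(&my_lock);
      			}
      		}
      		spin_unlock(&my_lock);
      	}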
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  9. 26 January 2008, 22 commits