1. 15 October 2009 (4 commits)
    • rcu: Add rnp->blocked_tasks to tracing · 3397e040
      Committed by Paul E. McKenney
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: npiggin@suse.de
      Cc: jens.axboe@oracle.com
      Cc: Josh Triplett <josh@joshtriplett.org>
      LKML-Reference: <20091014233638.GE6763@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
       kernel/rcutree_trace.c |    8 ++++++--
       1 file changed, 6 insertions(+), 2 deletions(-)
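      The diffstat above shows a small change to kernel/rcutree_trace.c.
      A minimal, hedged sketch of how the blocked-task lists might be
      folded into the trace output follows; apart from the
      rnp->blocked_tasks name taken from the commit title, the function,
      field layout, and format are assumptions:

          /*
           * Sketch only: emit one character per rcu_node indicating
           * whether any tasks are queued on its blocked_tasks lists
           * (assumed here to be a two-element array of list_heads).
           */
          static void print_blocked_tasks_flag(struct seq_file *m,
                                               struct rcu_node *rnp)
          {
                  seq_printf(m, "b=%c ",
                             list_empty(&rnp->blocked_tasks[0]) &&
                             list_empty(&rnp->blocked_tasks[1]) ?
                             '.' : 'T');
          }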
    • rcu: Stopgap fix for synchronize_rcu_expedited() for TREE_PREEMPT_RCU · 019129d5
      Committed by Paul E. McKenney
      For the short term, map synchronize_rcu_expedited() to
      synchronize_rcu() for TREE_PREEMPT_RCU and to
      synchronize_sched_expedited() for TREE_RCU (the mapping is
      sketched below).
      
      Longer term, there needs to be a real expedited grace period for
      TREE_PREEMPT_RCU, but candidate patches to date are considerably
      more complex and intrusive.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: npiggin@suse.de
      Cc: jens.axboe@oracle.com
      LKML-Reference: <12555405592331-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
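      A hedged sketch of the stopgap mapping described above; the
      Kconfig symbols and function names come from the commit text, but
      the exact header placement is an assumption:

          #ifdef CONFIG_TREE_PREEMPT_RCU
          /* Stopgap: fall back to a normal grace period. */
          static inline void synchronize_rcu_expedited(void)
          {
                  synchronize_rcu();
          }
          #else /* TREE_RCU: readers never block, so the sched variant suffices. */
          static inline void synchronize_rcu_expedited(void)
          {
                  synchronize_sched_expedited();
          }
          #endif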
    • rcu: Prevent RCU IPI storms in presence of high call_rcu() load · 37c72e56
      Committed by Paul E. McKenney
      As the number of callbacks on a given CPU rises, invoke
      force_quiescent_state() only once every blimit callbacks
      (default: 10,000), and even then only if no other CPU has
      invoked force_quiescent_state() in the meantime (sketched
      below).
      
      This should fix the performance regression reported by Nick.
      Reported-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: jens.axboe@oracle.com
      LKML-Reference: <12555405592133-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
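      A hedged sketch of the throttling described above, as it might
      look in the call_rcu() enqueue path; rdp/rsp are the usual
      per-CPU and global RCU state, and qlen, qlen_last_fqs_check,
      n_force_qs, and n_force_qs_snap are assumed field names modeled
      on the description:

          /*
           * Sketch only: consider forcing a quiescent state no more
           * than once per blimit newly queued callbacks, and even
           * then only if nobody else forced one since our snapshot.
           */
          if (unlikely(++rdp->qlen > rdp->qlen_last_fqs_check + blimit)) {
                  if (rsp->n_force_qs == rdp->n_force_qs_snap)
                          force_quiescent_state(rsp, 0);
                  rdp->n_force_qs_snap = rsp->n_force_qs;
                  rdp->qlen_last_fqs_check = rdp->qlen;
          }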
    • futex: Check for NULL keys in match_futex · 2bc87203
      Committed by Darren Hart
      If userspace tries to perform a requeue_pi on a non-requeue_pi
      waiter, it will find futex_q->requeue_pi_key to be NULL and oops.
      
      Check for NULL in match_futex() instead of adding explicit NULL
      pointer checks at all call sites (see the sketch below). While
      match_futex(NULL, NULL) returning false is a little odd, it is
      still correct, since valid key references are expected.
      Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Dinakar Guniguntala <dino@in.ibm.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: stable@kernel.org
      LKML-Reference: <4AD60687.10306@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
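      The fix amounts to guarding the key comparison with a NULL test;
      reconstructed from the description, the helper looks roughly like
      this:

          static inline int
          match_futex(union futex_key *key1, union futex_key *key2)
          {
                  /*
                   * A NULL key (e.g. the requeue_pi_key of a
                   * non-requeue_pi waiter) never matches anything,
                   * including another NULL key.
                   */
                  return (key1 && key2
                          && key1->both.word == key2->both.word
                          && key1->both.ptr == key2->both.ptr
                          && key1->both.offset == key2->both.offset);
          }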
  2. 14 October 2009 (1 commit)
    • futex: Handle spurious wake up · d58e6576
      Committed by Thomas Gleixner
      The futex code does not handle spurious wake ups in futex_wait()
      and futex_wait_requeue_pi().
      
      The code assumes that any wake up not caused by futex_wake /
      requeue or by a timeout was caused by a signal, and returns one
      of the syscall restart error codes.
      
      In the case of a spurious wake up, the signal delivery code which
      deals with the restart error codes is not invoked, and that error
      code is returned to user space. This causes applications which
      actually check the return codes to fail. Blaise reported that on
      preempt-rt a Python test program ran into an exception trap; -rt
      exposed this thanks to a built-in spurious wake up accelerator :)
      
      Solve this by checking signal_pending(current) in the wake up
      path and handling the spurious wake up case without returning to
      user space (sketched below).
      Reported-by: Blaise Gassend <blaise@willowgarage.com>
      Debugged-by: Darren Hart <dvhltc@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@kernel.org
      LKML-Reference: <new-submission>
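      A hedged sketch of the retry logic described above, as it might
      sit in futex_wait(); the labels and the timed_out flag are
      assumptions, while signal_pending(current) and the restart error
      code come from the commit text:

          retry:
                  /* ... queue the waiter and sleep ... */

                  if (timed_out) {                /* assumed flag */
                          ret = -ETIMEDOUT;
                          goto out;
                  }
                  /*
                   * Not woken by futex_wake()/requeue and not timed
                   * out: if no signal is pending either, the wake up
                   * was spurious, so wait again instead of leaking a
                   * restart error code to user space.
                   */
                  if (!signal_pending(current))
                          goto retry;
                  ret = -ERESTARTSYS;             /* genuine signal */
          out: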
  3. 13 October 2009 (1 commit)
  4. 10 October 2009 (2 commits)
    • oprofile: warn on freeing event buffer too early · c0868934
      Committed by Robert Richter
      A race shouldn't happen, since all workqueues or handlers are
      canceled or flushed before the event buffer is freed. A warning
      is now triggered if the buffer is freed too early (see the sketch
      below).
      
      Also, this patch adds some comments about event buffer
      protection, reworks some code, and adds code to clear buffer_pos
      during allocation and freeing of the event buffer.
      
      Cc: David Rientjes <rientjes@google.com>
      Cc: Stephane Eranian <eranian@google.com>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
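      A hedged sketch of the added guard; add_event_entry() and
      buffer_pos are named in these commit messages, but the exact
      placement and surrounding code are assumptions:

          void add_event_entry(unsigned long value)
          {
                  /*
                   * All workqueues and handlers should be canceled or
                   * flushed before the buffer is freed, so a NULL
                   * buffer here means it was freed too early.
                   */
                  if (WARN_ON_ONCE(!event_buffer))
                          return;
                  event_buffer[buffer_pos] = value;
                  buffer_pos++;
          }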
    • oprofile: fix race condition in event_buffer free · 066b3aa8
      Committed by David Rientjes
      Looking at the 2.6.31-rc9 code, it appears there is a race
      condition in the event_buffer cleanup (shutdown) code path, which
      could lead to a kernel panic as some CPUs may be operating on the
      event buffer AFTER it has been freed. The attached patch solves
      the problem and makes sure CPUs check whether the buffer is NULL
      before they access it, as some may have been spinning on the
      mutex while the buffer was being freed (see the sketch below).
      
      The race may happen if the buffer is freed during pending reads. But
      it is not clear why there are races in add_event_entry() since all
      workqueues or handlers are canceled or flushed before the event buffer
      is freed.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
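      A hedged sketch of the fix described above: free the buffer and
      clear the pointer under buffer_mutex, and have readers that were
      blocked on the mutex re-check the pointer afterwards. The -EINTR
      return and exact structure are assumptions:

          void free_event_buffer(void)
          {
                  mutex_lock(&buffer_mutex);
                  vfree(event_buffer);
                  event_buffer = NULL;    /* readers must re-check */
                  mutex_unlock(&buffer_mutex);
          }

          /* Reader side, after acquiring buffer_mutex: the buffer may
           * have been freed while we were waiting on the mutex. */
          if (!event_buffer) {
                  retval = -EINTR;
                  goto out;
          }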
  5. 09 October 2009 (15 commits)
  6. 08 October 2009 (17 commits)