1. 15 January, 2010 (1 commit)
  2. 07 January, 2010 (2 commits)
    • ring-buffer: Add rb_list_head() wrapper around new reader page next field · 0e1ff5d7
      Committed by Steven Rostedt
      In the very unlikely case where the writer moves the head by one between
      the point where the head page is read and the point where the new reader
      page is assigned, _and_ the writer then writes and wraps the entire ring
      buffer so that the head page is back to what was originally read as the
      head page, the page to be swapped will have a corrupted next pointer.
      
      The simple solution is to wrap the assignment of the next pointer with
      rb_list_head().
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
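      For context, a minimal sketch of such a masking wrapper (the helper
      lives in kernel/trace/ring_buffer.c; the ring buffer keeps state flags
      such as RB_PAGE_HEAD in the two low bits of the list.next pointer):

          #define RB_FLAG_MASK 3UL

          /* Strip the ring buffer's flag bits from a list pointer before
           * it is dereferenced or assigned onward. */
          static struct list_head *rb_list_head(struct list_head *list)
          {
                  unsigned long val = (unsigned long)list;

                  return (struct list_head *)(val & ~RB_FLAG_MASK);
          }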
    • ring-buffer: Wrap a list.next reference with rb_list_head() · 5ded3dc6
      Committed by David Sharp
      This reference at the end of rb_get_reader_page() was causing off-by-one
      writes to the prev pointer of the page after the reader page when that
      page is the head page, and therefore the reader page has the RB_PAGE_HEAD
      flag in its list.next pointer. This eventually results in a GPF in a
      subsequent call to rb_set_head_page() (usually from rb_get_reader_page())
      when that prev pointer is dereferenced. The dereferenced register would
      characteristically have an address that appears shifted left by one byte
      (e.g., ffxxxxxxxxxxxxyy instead of ffffxxxxxxxxxxxx) due to being written at
      an address one byte too high.
      Signed-off-by: David Sharp <dhsharp@google.com>
      LKML-Reference: <1262826727-9090-1-git-send-email-dhsharp@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
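      A tiny standalone illustration of the failure mode described above:
      using a pointer still tagged with RB_PAGE_HEAD (bit 0) shifts every
      access one byte high, producing exactly the characteristically skewed
      addresses seen in the GPF (names and values here are illustrative):

          #include <stdio.h>

          #define RB_PAGE_HEAD 1UL
          #define RB_FLAG_MASK 3UL

          int main(void)
          {
                  long page;
                  /* tagged pointer, as stored in list.next for the head page */
                  unsigned long tagged = (unsigned long)&page | RB_PAGE_HEAD;

                  printf("real    : %p\n", (void *)&page);
                  printf("unmasked: %p  <- one byte too high\n", (void *)tagged);
                  printf("masked  : %p\n", (void *)(tagged & ~RB_FLAG_MASK));
                  return 0;
          }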
  3. 06 January, 2010 (1 commit)
  4. 31 December, 2009 (1 commit)
  5. 30 December, 2009 (6 commits)
  6. 28 December, 2009 (2 commits)
  7. 24 December, 2009 (1 commit)
    • SYSCTL: Print binary sysctl warnings (nearly) only once · 4440095c
      Committed by Andi Kleen
      When warning about legacy sysctl use, print the message for each sysctl
      only once.  This guarantees that the syslog won't be flooded by any
      sane program.
      
      The original attempt at this made the tables non const and stored
      the flag inline.
      
      Linus suggested using a separate hash table for this; this is based on
      a code snippet from him.
      
      Hashing means the check is not exact: a new sysctl can occasionally go
      unreported because of a hash collision, but in practice this should not
      be a problem.
      
      I used a FNV32 hash over the binary string with a 32-byte bitmap. This
      gives relatively few collisions when all the predefined binary sysctls
      are hashed:
      
      table size: 256

      chain length   buckets
      0              25
      1              67
      2              88
      3              47
      4              22
      5              6
      6              1
      
      The worst case is a single bucket in which 6 hash values collide.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
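      A minimal sketch of the scheme described above, as standalone C (FNV-1
      32-bit hash over the binary sysctl path, deduplicated through a
      32-byte, i.e. 256-bit, bitmap; the function names are illustrative, not
      the kernel's):

          #include <stdint.h>
          #include <stddef.h>

          static uint8_t warned_bitmap[32];        /* 32 bytes = 256 bits */

          /* FNV-1, 32-bit: multiply by the FNV prime, then xor each byte. */
          static uint32_t fnv32_hash(const void *data, size_t len)
          {
                  const uint8_t *p = data;
                  uint32_t hash = 2166136261u;     /* FNV offset basis */

                  while (len--) {
                          hash *= 16777619u;       /* FNV prime */
                          hash ^= *p++;
                  }
                  return hash;
          }

          /* Returns 1 the first time a given binary sysctl path hashes to a
           * fresh bit; collisions can suppress a warning, never repeat one. */
          static int should_warn(const int *name, size_t nlen)
          {
                  uint32_t bit = fnv32_hash(name, nlen * sizeof(*name)) % 256;

                  if (warned_bitmap[bit / 8] & (1u << (bit % 8)))
                          return 0;
                  warned_bitmap[bit / 8] |= 1u << (bit % 8);
                  return 1;
          }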
  8. 23 December, 2009 (10 commits)
  9. 22 December, 2009 (2 commits)
  10. 21 December, 2009 (2 commits)
  11. 20 December, 2009 (2 commits)
    • fix more leaks in audit_tree.c tag_chunk() · b4c30aad
      Committed by Al Viro
      Several leaks in audit_tree didn't get caught by commit
      318b6d3d, including the leak on normal
      exit when multiple rules refer to the same chunk.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fix braindamage in audit_tree.c untag_chunk() · 6f5d5114
      Committed by Al Viro
      ... aka "Al had badly fscked up when writing that thing and nobody
      noticed until Eric had fixed leaks that used to mask the breakage".
      
      The function essentially creates a copy of the old array sans one
      element and replaces the references to elements of the original (they
      are on cyclic lists) with those to the corresponding elements of the
      new one.  After that, the old one is fair game for freeing.
      
      First of all, there's a dumb braino: when we get to list_replace_init
      we use indices for the wrong arrays - the position in the new array
      with the old array and vice versa.
      
      Another bug is more subtle - the termination condition is wrong if the
      element to be excluded happens to be the last one.  We shouldn't loop
      until we've filled the new array; we should loop until we've finished
      walking the old one.  Otherwise the element we are trying to kill will
      remain on the cyclic lists...
      
      That crap used to be masked by several leaks, so it was not quite
      trivial to hit.  Eric had fixed some of those leaks a while ago and the
      shit had hit the fan...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
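      A standalone reduction of the corrected loop shape (the int payload and
      names are illustrative; the real code in kernel/audit_tree.c also
      splices the cyclic list nodes via list_replace_init):

          #include <stdio.h>

          /* Copy old[] into new_arr[] while dropping the element at 'skip'.
           * The loop must terminate on the OLD array's length (i < n_old),
           * not when the new array is full, or a trailing 'skip' element
           * would survive. */
          static int copy_sans(const int *old, int n_old, int skip,
                               int *new_arr)
          {
                  int i, j;

                  for (i = j = 0; i < n_old; i++) {
                          if (i == skip)
                                  continue;
                          new_arr[j++] = old[i];  /* new index j, old index i */
                  }
                  return j;                       /* always n_old - 1 */
          }

          int main(void)
          {
                  int old[] = { 10, 20, 30, 40 }, out[3];
                  int n = copy_sans(old, 4, 3, out);  /* skip the LAST slot */

                  for (int i = 0; i < n; i++)
                          printf("%d ", out[i]);      /* prints: 10 20 30 */
                  printf("\n");
                  return 0;
          }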
  12. 18 December, 2009 (3 commits)
  13. 17 December, 2009 (7 commits)
    • sched: Fix broken assertion · 077614ee
      Committed by Peter Zijlstra
      There's a preemption race in the set_task_cpu() debug check: if we get
      preempted after setting task->state, we are still on the rq proper but
      fail the test.
      
      Check for preempted tasks, since those are always on the RQ.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20091217121830.137155561@chello.nl>
      Signed-off-by: NIngo Molnar <mingo@elte.hu>
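      In sketch form, the relaxed assertion looks something like this (a
      preempted task carries PREEMPT_ACTIVE in its preempt count, so it may
      legitimately sit on the rq in any state):

          /* Only warn when the task is neither running/waking nor merely
           * preempted; a preempted task is always on the rq. */
          WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
                       !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));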
    • perf events: Don't report side-band events on each cpu for per-task-per-cpu events · 5d27c23d
      Committed by Peter Zijlstra
      Acme noticed that his FORK/MMAP numbers were inflated by about
      the same factor as his cpu-count.
      
      This led to the discovery of a few more sites that need to
      respect the event->cpu filter.
      Reported-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091217121830.215333434@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
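      The filter each side-band output site needs to respect, sketched (the
      helper name is hypothetical; an event bound to a cpu should only see
      records generated on that cpu, otherwise FORK/MMAP counts inflate by
      the cpu count):

          /* Sketch: deliver a side-band record only if the event is unbound
           * (cpu == -1) or we are currently on the event's cpu. */
          static int event_filter_cpu_match(struct perf_event *event)
          {
                  return event->cpu == -1 || event->cpu == smp_processor_id();
          }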
    • perf events, x86/stacktrace: Make stack walking optional · 61c1917f
      Committed by Frederic Weisbecker
      The current print_context_stack helper that does the stack
      walking job is good for usual stacktraces, as it walks the whole
      stack and reports even addresses that look unreliable, which is
      nice when we don't have frame pointers, for example.
      
      But we have users like perf that require only reliable
      stacktraces, and those may want a better-adapted stack walker, so
      let's make this function a callback in stacktrace_ops that users
      can tune for their needs.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1261024834-5336-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
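      The shape of the change, sketched (the exact member layout is an
      assumption; the point is that the walker becomes a pluggable callback,
      with print_context_stack remaining the default):

          struct stacktrace_ops;

          typedef unsigned long (*walk_stack_t)(struct thread_info *tinfo,
                                                unsigned long *stack,
                                                unsigned long bp,
                                                const struct stacktrace_ops *ops,
                                                void *data, unsigned long *end,
                                                int *graph);

          struct stacktrace_ops {
                  /* ... existing reporting callbacks ... */
                  walk_stack_t walk_stack;  /* perf installs a walker that
                                             * reports only reliable frames */
          };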
    • sched: Teach might_sleep() about preemptible RCU · 234da7bc
      Committed by Frederic Weisbecker
      In practice, it is harmless to voluntarily sleep in an
      rcu_read_lock() section if we are running under preemptible RCU, but
      it is illegal if we build a kernel running non-preemptible RCU.
      
      Currently, might_sleep() doesn't notice sleepable operations
      under rcu_read_lock() sections if we are running under
      preemptible RCU, because preempt_count() is left untouched by
      rcu_read_lock() in this case. But we want developers who test
      their changes under such a config to notice the "sleeping while
      atomic" issues.
      
      So we add rcu_read_lock_nesting to preempt_count() in the
      might_sleep() checks.
      
      [ v2: Handle rcu-tiny ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <1260991265-8451-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
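      Sketched, assuming a helper rcu_preempt_depth() that returns
      current->rcu_read_lock_nesting under preemptible RCU and 0 otherwise
      (in_atomic_for_debug is a hypothetical wrapper illustrating where the
      sum is consulted):

          /* Fold the RCU read-side nesting depth into the atomicity check,
           * so sleeping inside rcu_read_lock() trips might_sleep() even
           * though preemptible RCU leaves preempt_count() untouched. */
          static int in_atomic_for_debug(void)
          {
                  return preempt_count() + rcu_preempt_depth() != 0;
          }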
    • kprobe-tracer: Check new event/group name · 6f3cf440
      Committed by Masami Hiramatsu
      Check that a new event/group name has the same syntax as a C symbol;
      in other words, the name is checked just like other tracepoint event
      names.
      
      This prevents users from creating an event with an unusable name (e.g.
      foo|bar, foo*bar).
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      LKML-Reference: <20091216222408.14459.68790.stgit@dhcp-100-2-132.bos.redhat.com>
      [ v2: minor cleanups ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
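      A minimal sketch of such a check in standalone C (the function name is
      illustrative): accept only names that parse as a C identifier, which
      rejects inputs like "foo|bar" and "foo*bar":

          #include <ctype.h>

          /* Valid: leading letter or '_', then letters, digits, or '_'. */
          static int check_event_name(const char *name)
          {
                  if (!isalpha((unsigned char)*name) && *name != '_')
                          return -1;
                  while (*++name != '\0') {
                          if (!isalnum((unsigned char)*name) && *name != '_')
                                  return -1;
                  }
                  return 0;
          }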
    • sched: Make warning less noisy · 416eb395
      Committed by Ingo Molnar
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091216170517.807938893@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • cpumask: avoid dereferencing struct cpumask · 62ac1279
      Committed by Rusty Russell
      struct cpumask will soon be undefined when CONFIG_CPUMASK_OFFSTACK=y,
      to keep cpumasks from being declared on the stack.
      
      cpumask_bits() does what we want here (of course, this code is crap).
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      To: Thomas Gleixner <tglx@linutronix.de>
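      The pattern, sketched (cpumask_bits() is the kernel's accessor for the
      underlying bitmap; the helper below is illustrative):

          #include <linux/cpumask.h>

          /* Read the low word of a cpumask via the accessor rather than
           * dereferencing the struct, which may become opaque under
           * CONFIG_CPUMASK_OFFSTACK=y. */
          static unsigned long first_word(const struct cpumask *mask)
          {
                  return cpumask_bits(mask)[0];   /* not mask->bits[0] */
          }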