1. 04 December 2009, 2 commits
  2. 04 November 2009, 1 commit
  3. 28 October 2009, 1 commit
  4. 20 October 2009, 1 commit
  5. 16 October 2009, 2 commits
  6. 15 October 2009, 2 commits
  7. 14 October 2009, 1 commit
    • x86, perf_event: Rename 'performance counter interrupt' · 89ccf465
      Committed by Li Hong
      In 'cdd6c482', we renamed
      Performance Counters -> Performance Events.
      
      The name shown in /proc/interrupts also needs a change. I use
      PMI (Performance Monitoring Interrupt) here, since it is the
      official name used in Intel's documentation. (A quick user-space
      check of the visible effect is sketched after this entry.)
      Signed-off-by: Li Hong <lihong.hi@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091014105039.GA22670@uhli>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
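      For illustration, a minimal user-space check of what the rename
      affects: the label that x86 prints into /proc/interrupts. The exact
      label text ("PMI" vs. the older "Performance ..." wording) depends
      on the running kernel, so the match below is deliberately loose;
      this sketch is not part of the patch itself.

      ```c
      /* Scan /proc/interrupts for the perf-event interrupt line(s). */
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          FILE *f = fopen("/proc/interrupts", "r");
          char line[4096];

          if (!f) {
              perror("fopen /proc/interrupts");
              return 1;
          }
          while (fgets(line, sizeof(line), f)) {
              /* match either the new "PMI" label or the older wording */
              if (strstr(line, "PMI") || strstr(line, "Performance"))
                  fputs(line, stdout);
          }
          fclose(f);
          return 0;
      }
      ```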
  8. 13 October 2009, 3 commits
  9. 12 October 2009, 3 commits
  10. 09 October 2009, 2 commits
  11. 08 October 2009, 1 commit
    • x86, timers: Check for pending timers after (device) interrupts · 9bcbdd9c
      Committed by Arjan van de Ven
      Now that range timers and deferred timers are common, I found a
      problem with them using the "perf timechart" tool. Frans Pop also
      reported high scheduler latencies via LatencyTop when using
      iwlagn.
      
      It turns out that on x86, these two 'opportunistic' timer types
      only get checked when another "real" timer interrupt happens.
      These opportunistic timers aim to save power by hitchhiking on
      other wakeups, so as to avoid causing CPU wakeups of their own as
      much as possible.
      
      The change in this patch runs this check not only at timer
      interrupts, but at all (device) interrupts (a conceptual sketch of
      the idea follows this entry). The effect is that:
      
       1) the deferred/range timers get delayed less, and
      
       2) the range timers cause fewer wakeups of their own, because
          the percentage of hitchhiking on existing wakeup events goes up.
      
      I've verified the patch using "perf timechart"; the originally
      exposed bug is gone with it. Frans also reported success - the
      latencies are now down in the expected ~10 msec range.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Tested-by: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
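      A conceptual, compilable sketch of the idea above, not the actual
      kernel diff: after a device interrupt, if the CPU had been idle and
      an opportunistic (deferred/range) timer is already past due, run it
      on this wakeup instead of waiting for the next hard timer tick.
      Every name below is a stand-in, not a real kernel API.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      static bool cpu_was_idle;      /* stand-in for the per-CPU idle flag   */
      static long next_soft_expiry;  /* earliest deferred/range timer expiry */

      static long now(void) { return 100; }    /* fake clock for the sketch */
      static void raise_timer_softirq(void)    /* stand-in for the real one */
      {
          puts("expired timers run on this wakeup");
      }

      /* called on the exit path of every device interrupt in this sketch */
      static void irq_exit_check_timers(void)
      {
          if (cpu_was_idle && next_soft_expiry <= now())
              raise_timer_softirq();   /* no extra wakeup needed */
      }

      int main(void)
      {
          cpu_was_idle = true;
          next_soft_expiry = 50;       /* already past due */
          irq_exit_check_timers();     /* timer work rides this interrupt */
          return 0;
      }
      ```

      The power win comes from the "no interrupt" case: when nothing else
      wakes the CPU, the opportunistic timer simply waits for its hard
      deadline rather than forcing a wakeup of its own.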
  12. 04 October 2009, 9 commits
  13. 03 October 2009, 1 commit
    • x86: Simplify bound checks in the MTRR code · 11879ba5
      Committed by Arjan van de Ven
      The current bound checks for copy_from_user() in the MTRR driver
      are not as obvious as they could be, and gcc agrees.
      
      This patch simplifies the boundary checks to the point that gcc
      can now prove to itself that the copy_from_user() never goes past
      its bounds. (An illustrative analogue of the pattern follows this
      entry.)
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20090926205150.30797709@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
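      An illustrative user-space analogue of the pattern (assumed, not
      the exact MTRR diff): clamping the length with an explicit minimum
      before the copy gives both the reader and gcc an upper bound that
      is provable at a glance. LINE_SIZE, min_size() and copy_line() are
      made up for this example.

      ```c
      #include <stdio.h>
      #include <string.h>

      #define LINE_SIZE 80

      static size_t min_size(size_t a, size_t b) { return a < b ? a : b; }

      /* copies at most LINE_SIZE - 1 bytes and always NUL-terminates dst */
      static void copy_line(char dst[LINE_SIZE], const char *src, size_t len)
      {
          size_t n = min_size(len, LINE_SIZE - 1);  /* obvious upper bound */

          memset(dst, 0, LINE_SIZE);
          memcpy(dst, src, n);
      }

      int main(void)
      {
          char line[LINE_SIZE];
          const char *cmd = "base=0xf8000000 size=0x400000 type=write-combining";

          copy_line(line, cmd, strlen(cmd));
          puts(line);
          return 0;
      }
      ```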
  14. 02 October 2009, 3 commits
  15. 01 October 2009, 7 commits
  16. 30 September 2009, 1 commit