1. 06 Nov, 2009: 1 commit
  2. 04 Nov, 2009: 1 commit
    • x86/hw-breakpoints: Actually flush thread breakpoints in flush_thread(). · 41a48d14
      Committed by Paul Mundt
      flush_thread() tries to do a TIF_DEBUG check before calling in to
      flush_thread_hw_breakpoint() (which subsequently clears the thread flag),
      but for some reason, the x86 code is manually clearing TIF_DEBUG
      immediately before the test, so this path will never be taken.
      
      This kills off the erroneous clear_tsk_thread_flag() and lets
      flush_thread_hw_breakpoint() actually get invoked.
      
      Presumably folks were getting lucky with testing and the
      free_thread_info() -> free_thread_xstate() path was taking care of the
      flush there.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: "K.Prasad" <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      LKML-Reference: <20091005102306.GA7889@linux-sh.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      41a48d14
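      A minimal sketch of the ordering problem and the fix, simplified from
      arch/x86/kernel/process.c of that era (kernel context, not a verbatim
      diff; the rest of flush_thread() is omitted):

          /* Simplified sketch of flush_thread(), not a standalone program. */
          void flush_thread(void)
          {
                  struct task_struct *tsk = current;

                  /*
                   * Before the fix, the flag was cleared right here:
                   *     clear_tsk_thread_flag(tsk, TIF_DEBUG);
                   * which guaranteed the test below could never succeed.
                   */

                  /* With the stray clear removed, the flush can actually run;
                   * flush_thread_hw_breakpoint() clears TIF_DEBUG itself. */
                  if (unlikely(test_tsk_thread_flag(tsk, TIF_DEBUG)))
                          flush_thread_hw_breakpoint(tsk);

                  /* ... rest of flush_thread() unchanged ... */
          }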
  3. 15 Oct, 2009: 1 commit
  4. 14 Oct, 2009: 2 commits
    • tracing: Move syscalls metadata handling from arch to core · c44fc770
      Committed by Frederic Weisbecker
      Most of the syscall metadata processing is done in arch code, but
      these operations are mostly generic across archs. Especially now
      that NR_syscalls gives us a common variable name for the number of
      syscalls an arch supports, the only remaining bit that needs to
      reside in arch code is the syscall-number-to-address translation.
      
      v2: Compare syscall symbols only after the "sys" prefix, so that we
          avoid spurious mismatches on archs that have syscall wrappers,
          where the syscall symbols have "SyS"-prefixed aliases.
          (Reported by: Heiko Carstens)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      c44fc770
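      A hedged sketch of the v2 matching rule: skip the three-character
      "sys"/"SyS" prefix when comparing a syscall's symbol name, so the
      wrapper aliases still match. The helper name below is illustrative,
      not necessarily what the tracing core uses:

          #include <string.h>

          /* Illustrative helper (hypothetical name): compare syscall symbol
           * names while ignoring the "sys" vs "SyS" prefix that arch
           * wrappers introduce. */
          static int syscall_symbols_match(const char *sym, const char *name)
          {
                  /* Both "sys_read" and "SyS_read" match "sys_read" here. */
                  return strcmp(sym + 3, name + 3) == 0;
          }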
    • function-graph/x86: Replace unbalanced ret with jmp · 194ec341
      Committed by Steven Rostedt
      The function graph tracer replaces the return address with a hook
      to trace the exit of the function call. This hook will finish by
      returning to the real location the function should return to.
      
      But the current implementation uses a ret to jump to the real
      return location. This causes an imbalance between calls and rets:
      the original function does a call, its ret lands in the handler,
      and then the handler does a ret without a matching call.

      The function graph tracer already disturbs the branch predictor by
      replacing the original ret; adding a second, unbalanced ret on top
      of that disturbs the predictor even more.

      This patch replaces the ret with a jmp to keep the calls and rets
      balanced. I tested this on one box and it showed a 1.7% increase in
      performance. Another box showed only a small 0.3% increase, but no
      box that I tested showed a decrease in performance from this
      change.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091013203425.042034383@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      194ec341
  5. 13 Oct, 2009: 4 commits
    • x86/paravirt: Use normal calling sequences for irq enable/disable · 71999d98
      Committed by Jeremy Fitzhardinge
      Bastian Blank reported a boot crash with stackprotector enabled,
      and debugged it back to edx register corruption.
      
      For historical reasons irq enable/disable/save/restore had special
      calling sequences to make them more efficient.  With the more
      recent introduction of higher-level and more general optimisations
      this is no longer necessary so we can just use the normal PVOP_
      macros.
      
      This fixes some residual bugs in the old implementations which left
      edx liable to inadvertent clobbering. Also, fix some bugs in
      __PVOP_VCALLEESAVE which were revealed by actual use.
      Reported-by: Bastian Blank <bastian@waldi.eu.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Stable Kernel <stable@kernel.org>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4AD3BC9B.7040501@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      71999d98
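      Roughly what the fixed accessors look like when expressed through the
      generic callee-save PVOP macros; a sketch from memory of the paravirt
      headers of that era (macro and field names such as PVOP_VCALLEE0 and
      pv_irq_ops.irq_disable are assumptions about that kernel version, not
      a verbatim copy):

          /* Kernel-context sketch, simplified from asm/paravirt.h. */
          static inline void raw_local_irq_disable(void)
          {
                  /* The normal callee-save call sequence tells the compiler
                   * exactly what is clobbered, so edx is no longer at risk. */
                  PVOP_VCALLEE0(pv_irq_ops.irq_disable);
          }

          static inline void raw_local_irq_enable(void)
          {
                  PVOP_VCALLEE0(pv_irq_ops.irq_enable);
          }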
    • perf_events, x86: Fix event constraints code · 7a693d3f
      Committed by Ingo Molnar
      There was namespace overlap due to a rename I did - this caused
      the following build warning, reported by Stephen Rothwell against
      linux-next x86_64 allmodconfig:
      
        arch/x86/kernel/cpu/perf_event.c: In function 'intel_get_event_idx':
        arch/x86/kernel/cpu/perf_event.c:1445: warning: 'event_constraint' is used uninitialized in this function
      
      This is a real bug not just a warning: fix it by renaming the
      global event-constraints table pointer to 'event_constraints'.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Stephane Eranian <eranian@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091013144223.369d616d.sfr@canb.auug.org.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7a693d3f
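      The class of bug involved, shown with an illustrative userspace
      example (every name below is hypothetical, this is not the
      perf_event.c code): a global table and a loop cursor share one name,
      so the cursor shadows the table and is read while still
      uninitialized. Renaming the global (as the patch does, to
      'event_constraints') removes the shadowing.

          struct constraint { int code; };

          /* Hypothetical global table; the fix renames it to a plural form. */
          static const struct constraint *event_constraint;

          #define for_each_constraint(c, table) \
                  for ((c) = (table); (c) && (c)->code; (c)++)

          static int find_constraint(int code)
          {
                  const struct constraint *event_constraint; /* shadows the global! */

                  /* Intended to walk the global table, but because of the
                   * shadowing the "table" argument is the uninitialized
                   * local itself, which is exactly what gcc warns about. */
                  for_each_constraint(event_constraint, event_constraint) {
                          if (event_constraint->code == code)
                                  return 0;
                  }
                  return -1;
          }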
    • x86: fix kernel panic on 32 bits when profiling · d1705c55
      Committed by H. Peter Anvin
      The latest kernel has a panic at boot on i386 machines when
      profile=2 is set on the command line.  It is due to 'sp' being
      incorrect in profile_pc().
      
      BUG: unable to handle kernel NULL pointer dereference at 00000246
      IP: [<c01288b6>] profile_pc+0x2a/0x48
      *pde = 00000000
      Oops: 0000 [#1] SMP
      
      This differs from the original version by Alex Shi in that we use the
      kernel_stack_pointer() inline already defined in <asm/ptrace.h> for
      this purpose, instead of #ifdef.
      Originally-by: Alex Shi <alex.shi@intel.com>
      Cc: "Chen, Tim C" <tim.c.chen@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d1705c55
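      A sketch of the fixed lookup, simplified from arch/x86/kernel/time.c
      of that era and written from memory (the exact user-mode check and
      stack offset differ slightly in the real code):

          /* Kernel-context sketch, not a verbatim diff. */
          unsigned long profile_pc(struct pt_regs *regs)
          {
                  unsigned long pc = instruction_pointer(regs);

                  if (!user_mode(regs) && in_lock_functions(pc)) {
                          /*
                           * kernel_stack_pointer() from <asm/ptrace.h> yields a
                           * valid kernel 'sp' on both 32-bit and 64-bit, so no
                           * #ifdef is needed and the i386 NULL dereference is gone.
                           */
                          unsigned long *sp =
                                  (unsigned long *)kernel_stack_pointer(regs);

                          return *sp; /* caller's address on the lock path */
                  }

                  return pc;
          }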
    • x86: Fix Suspend to RAM freeze on Acer Aspire 1511Lmi laptop · 7a4b7e5e
      Committed by Jan Beulich
      Move the trampoline and accessors back out of .cpuinit.* for the
      case of 64-bits+ACPI_SLEEP.
      
      This solves s2ram hangs reported in:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=14279
      Reported-and-bisected-by: Christian Casteyde <casteyde.christian@free.fr>
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: <bugzilla-daemon@bugzilla.kernel.org>
      Cc: "Andrew Morton" <akpm@linux-foundation.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7a4b7e5e
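      The shape of the fix, sketched from memory (treat the exact
      __trampinit/__trampinitdata names and TRAMPOLINE_SIZE as assumptions
      about that kernel tree): keep the trampoline in discardable .cpuinit
      sections normally, but not on 64-bit with ACPI_SLEEP, where it must
      survive for resume from S3.

          /* Kernel-context sketch; surrounding declarations omitted. */
          #if defined(CONFIG_X86_64) && defined(CONFIG_ACPI_SLEEP)
          /* Must stay resident: the trampoline is reused on S3 resume. */
          #define __trampinit
          #define __trampinitdata
          #else
          /* Safe to discard once the CPUs have been brought up. */
          #define __trampinit      __cpuinit
          #define __trampinitdata  __cpuinitdata
          #endif

          unsigned char *__trampinitdata trampoline_base;

          unsigned long __trampinit setup_trampoline(void)
          {
                  memcpy(trampoline_base, trampoline_data, TRAMPOLINE_SIZE);
                  return virt_to_phys(trampoline_base);
          }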
  6. 12 Oct, 2009: 5 commits
  7. 09 Oct, 2009: 5 commits
  8. 08 Oct, 2009: 1 commit
    • x86, timers: Check for pending timers after (device) interrupts · 9bcbdd9c
      Committed by Arjan van de Ven
      Now that range timers and deferred timers are common, I found a
      problem with these using the "perf timechart" tool. Frans Pop also
      reported high scheduler latencies via LatencyTop, when using
      iwlagn.
      
      It turns out that on x86, these two 'opportunistic' timer types only
      get checked when another "real" timer fires. The objective of these
      opportunistic timers is to save power by hitchhiking on other
      wakeups, so as to avoid causing CPU wakeups of their own as much as
      possible.
      
      The change in this patch runs this check not only at timer
      interrupts, but at all (device) interrupts. The effect is that:
      
       1) the deferred timers/range timers get delayed less
      
       2) the range timers cause less wakeups by themselves because
          the percentage of hitchhiking on existing wakeup events goes up.
      
      I've verified the patch using "perf timechart"; the originally
      exposed bug is gone with this patch. Frans also reported success -
      the latencies are now down in the expected ~10 msec range.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Tested-by: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9bcbdd9c
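      The idea, sketched in plain C. Every identifier below is hypothetical
      and this is not the code the patch adds; it only shows where the
      check moves, from "only when a timer interrupt fires" to "on the exit
      path of every device interrupt":

          /* Hypothetical illustration: all names here are made up. */
          extern void dispatch_device_handler(int irq); /* the IRQ's real work        */
          extern int  expired_timers_pending(void);     /* deferred/range timers due? */
          extern void raise_timer_softirq(void);        /* have them run soon         */

          void handle_device_interrupt(int irq)
          {
                  dispatch_device_handler(irq);

                  /*
                   * Piggy-back on this wakeup: if an opportunistic
                   * (deferred/range) timer has already expired, let it run now
                   * instead of waking the CPU again later just for it.
                   */
                  if (expired_timers_pending())
                          raise_timer_softirq();
          }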
  9. 04 Oct, 2009: 9 commits
  10. 03 Oct, 2009: 1 commit
    • x86: Simplify bound checks in the MTRR code · 11879ba5
      Committed by Arjan van de Ven
      The current bound checks for copy_from_user in the MTRR driver are
      not as obvious as they could be, and gcc agrees with that.
      
      This patch simplifies the boundary checks to the point that gcc can
      now prove to itself that the copy_from_user() is never going past
      its bounds.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20090926205150.30797709@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      11879ba5
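      An illustrative sketch of the simpler pattern (kernel context; the
      function name is made up and the body is a simplification from
      memory, not the exact mtrr/if.c diff): clamp the user-supplied length
      to the destination buffer before copy_from_user(), so the bound is
      obvious to both readers and gcc.

          /* Sketch only; parsing of 'line' is omitted. */
          static ssize_t mtrr_write_sketch(const char __user *buf, size_t len)
          {
                  char line[LINE_SIZE];

                  memset(line, 0, LINE_SIZE);

                  /* After this clamp, copy_from_user() provably stays within
                   * 'line' and the terminating NUL is preserved. */
                  len = min_t(size_t, len, LINE_SIZE - 1);
                  if (copy_from_user(line, buf, len))
                          return -EFAULT;

                  /* ... parse 'line' ... */
                  return len;
          }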
  11. 02 Oct, 2009: 3 commits
  12. 01 Oct, 2009: 7 commits