1. 10 Mar 2010, 6 commits
  2. 08 Mar 2010, 2 commits
  3. 07 Mar 2010, 1 commit
  4. 06 Mar 2010, 1 commit
  5. 04 Mar 2010, 3 commits
  6. 03 Mar 2010, 1 commit
    • x86/stacktrace: Don't dereference bad frame pointers · 29044ad1
      Frederic Weisbecker authored
      Callers of a stacktrace might pass bad frame pointers. Those
      are usually checked for safety in the stack walking helpers before
      any dereferencing, but this is not the case when we need to go
      through one more frame pointer that backlinks the irq stack to
      the previous one, as we don't have any reliable address boundaries
      to compare this frame pointer against.

      This raises crashes when we record callchains for ftrace events
      with perf, because we don't use the right helpers to capture
      registers there. We get wrong frame pointers as we call
      task_pt_regs() even on kernel threads, which is wrong because it
      gives us the initial state of freshly created kernel threads. It
      is not even what we want for user tasks. What we want is a hot
      snapshot of the registers when the ftrace event triggers, not the
      state before the task entered the kernel.

      Doing that correctly requires more thought, though. So first put
      a guard in place to ensure the given frame pointer can be
      dereferenced, to avoid crashes. We'll think about how to fix
      the callers in a subsequent patch.
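
      A minimal sketch of such a guard, assuming a probe_kernel_address()
      based check (the helper name and placement here are illustrative,
      not necessarily the actual patch):

       #include <linux/uaccess.h>   /* probe_kernel_address() */

       /* Only follow a backlink frame pointer if reading it cannot fault. */
       static inline unsigned long *deref_frame_pointer(unsigned long *bp)
       {
               unsigned long next;

               if (probe_kernel_address(bp, next))
                       return NULL;    /* bad frame pointer: stop walking */

               return (unsigned long *)next;   /* previous frame */
       }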
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: 2.6.33.x <stable@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      29044ad1
  7. 02 Mar 2010, 2 commits
  8. 01 Mar 2010, 3 commits
    • x86: Raise vsyscall priority on hotplug notifier chain · be43f83d
      Sheng Yang authored
      KVM needs vsyscall_init() to initialize MSR_TSC_AUX before it reads
      the value. Per Avi's suggestion, this patch raises the vsyscall
      priority on the hotplug notifier chain to 30.
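
      A sketch of what the raised priority looks like, assuming the usual
      CPU hotplug notifier registration (the callback body is illustrative):

       #include <linux/cpu.h>
       #include <linux/notifier.h>

       static int cpu_vsyscall_notifier(struct notifier_block *nb,
                                        unsigned long action, void *hcpu)
       {
               /* (re)initialize per-cpu vsyscall state, e.g. MSR_TSC_AUX */
               return NOTIFY_OK;
       }

       static struct notifier_block cpu_vsyscall_nb = {
               .notifier_call = cpu_vsyscall_notifier,
               .priority      = 30,    /* run before lower-priority users such as KVM */
       };

       static int __init vsyscall_notifier_init(void)
       {
               register_cpu_notifier(&cpu_vsyscall_nb);
               return 0;
       }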
      
      CC: Ingo Molnar <mingo@elte.hu>
      CC: linux-kernel@vger.kernel.org
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      be43f83d
    • perf, x86: rename macro in ARCH_PERFMON_EVENTSEL_ENABLE · bb1165d6
      Robert Richter authored
      For consistency reasons this patch renames
      ARCH_PERFMON_EVENTSEL0_ENABLE to ARCH_PERFMON_EVENTSEL_ENABLE.
      
      The following is performed:
      
       $ sed -i -e s/ARCH_PERFMON_EVENTSEL0_ENABLE/ARCH_PERFMON_EVENTSEL_ENABLE/g \
         arch/x86/include/asm/perf_event.h arch/x86/kernel/cpu/perf_event.c \
         arch/x86/kernel/cpu/perf_event_p6.c \
         arch/x86/kernel/cpu/perfctr-watchdog.c \
         arch/x86/oprofile/op_model_amd.c arch/x86/oprofile/op_model_ppro.c
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      bb1165d6
    • hw-breakpoints: Remove stub unthrottle callback · 1e259e0a
      Frederic Weisbecker authored
      We support event unthrottling in breakpoint events. It means
      that if we get more than sysctl_perf_event_sample_rate/HZ events
      in a tick, perf will throttle, ignoring subsequent events until
      the next tick.

      So if ptrace exceeds this max rate, events will be omitted, which
      breaks the ptrace determinism that is supposed to report every
      triggered breakpoint. This is likely to happen if we set
      sysctl_perf_event_sample_rate to 1.

      This patch removes the unthrottling support from breakpoint
      events so that throttling no longer applies to them, restoring
      ptrace determinism.
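
      To make the rate concrete, the per-tick budget implied by the sysctl
      above works out roughly as follows (variable and function names here
      are illustrative):

       #include <linux/kernel.h>

       /* events allowed per timer tick before perf starts throttling */
       static unsigned int max_samples_per_tick(unsigned int sample_rate,
                                                unsigned int hz)
       {
               return DIV_ROUND_UP(sample_rate, hz);
       }

       /*
        * With sysctl_perf_event_sample_rate = 1 and HZ = 1000 the budget is
        * roughly a single event per tick, so a second breakpoint hit in the
        * same tick would be dropped -- exactly what breaks ptrace determinism.
        */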
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: 2.6.33.x <stable@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      1e259e0a
  9. 28 Feb 2010, 4 commits
    • x86: Fix out of order of gsi · fad53995
      Eric W. Biederman authored
      Iranna D Ankad reported that IBM x3950 systems have boot
      problems after this commit:
      
       |
       | commit b9c61b70
       |
       |    x86/pci: update pirq_enable_irq() to setup io apic routing
       |
      
      The problem is that with the patch, the machine freezes when
      the console=ttyS0,... kernel serial parameter is passed.

      It seems to freeze at DVD initialization, and the whole problem
      seems to be DVD/pata related, but somehow exposed through the
      serial parameter.
      
      Such apic problems can expose really weird behavior:
      
        ACPI: IOAPIC (id[0x10] address[0xfecff000] gsi_base[0])
        IOAPIC[0]: apic_id 16, version 0, address 0xfecff000, GSI 0-2
        ACPI: IOAPIC (id[0x0f] address[0xfec00000] gsi_base[3])
        IOAPIC[1]: apic_id 15, version 0, address 0xfec00000, GSI 3-38
        ACPI: IOAPIC (id[0x0e] address[0xfec01000] gsi_base[39])
        IOAPIC[2]: apic_id 14, version 0, address 0xfec01000, GSI 39-74
        ACPI: INT_SRC_OVR (bus 0 bus_irq 1 global_irq 4 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 5 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 3 global_irq 6 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 4 global_irq 7 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 6 global_irq 9 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 7 global_irq 10 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 8 global_irq 11 low edge)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 12 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 12 global_irq 15 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 13 global_irq 16 dfl dfl)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 14 global_irq 17 low edge)
        ACPI: INT_SRC_OVR (bus 0 bus_irq 15 global_irq 18 dfl dfl)
      
      It turns out that the system has three io apic controllers, but
      boot ioapic routing is in the second one, and that gsi_base is
      not 0 - it is using a bunch of INT_SRC_OVR...
      
      So these recent changes:

       1. only set up routing for the first io apic controller
       2. assume irq = gsi

      ... will break that system.

      So try to remap those gsis; we need to separate boot_ioapic_idx
      detection out of enable_IO_APIC() and call it early.
      
      So introduce boot_ioapic_idx, and remap_ioapic_gsi()...
      
       -v2: shift gsi with delta instead of gsi_base of boot_ioapic_idx
      
       -v3: double check with find_isa_irq_apic(0, mp_INT) to get right
            boot_ioapic_idx
      
       -v4: nr_legacy_irqs
      
       -v5: add a printout for boot_ioapic_idx, and also make it
            applicable to both the current and the previous kernel
      
       -v6: add bus_irq in acpi_sci_ioapic_setup, so we can get the
            right override mapping for the sci...
      
       -v7: looks like pnpacpi gets an irq instead of a gsi, so we need
            to revert them back...
      
       -v8: split into two patches
      
       -v9: according to Eric, use fixed 16 for shifting instead of remap
      
       -v10: still need to touch rsparser.c
      
       -v11: just revert back to the way Eric suggested...
            anyway the ioapic in the first ioapic is blocked by the second...
      
       -v12: two patches; this one adds an extra loop but checks apic_id and irq > 16
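
      A rough sketch of the shifting idea only (the helper and variable
      names are made up here; this is not the actual patch): when the boot
      I/O APIC's gsi_base is not 0, the legacy ISA IRQs 0-15 keep their
      numbers and everything else is moved up by a fixed delta, instead of
      assuming irq == gsi everywhere.

       #define NR_LEGACY_IRQS  16

       static unsigned int gsi_delta;  /* chosen once the boot ioapic is known */

       static unsigned int gsi_to_irq_sketch(unsigned int gsi)
       {
               if (gsi < NR_LEGACY_IRQS)
                       return gsi;             /* ISA IRQs stay where they are */

               return gsi + gsi_delta;         /* everything else is shifted up */
       }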
      Reported-by: Iranna D Ankad <iranna.ankad@in.ibm.com>
      Bisected-by: Iranna D Ankad <iranna.ankad@in.ibm.com>
      Tested-by: Gary Hade <garyhade@us.ibm.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Thomas Renninger <trenn@suse.de>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: len.brown@intel.com
      LKML-Reference: <4B8A321A.1000008@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fad53995
    • x86, paravirt: Remove kmap_atomic_pte paravirt op. · dad52fc0
      Ian Campbell authored
      Now that both Xen and VMI disable allocations of PTE pages from high
      memory, this paravirt op serves no further purpose.
      
      This effectively reverts ce6234b5 "add kmap_atomic_pte for mapping
      highpte pages".
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      LKML-Reference: <1267204562-11844-3-git-send-email-ian.campbell@citrix.com>
      Acked-by: Alok Kataria <akataria@vmware.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      dad52fc0
    • x86, vmi: Disable highmem PTE allocation even when CONFIG_HIGHPTE=y · 3249b7e1
      Ian Campbell authored
      Preventing HIGHPTE allocations under VMI will allow us to remove the
      kmap_atomic_pte paravirt op.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      LKML-Reference: <1267204562-11844-2-git-send-email-ian.campbell@citrix.com>
      Acked-by: Alok Kataria <akataria@vmware.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      3249b7e1
    • x86/hw-breakpoints: Remove the name field · 3d083407
      Frederic Weisbecker authored
      Remove the name field from arch_hw_breakpoint. We never deal
      with target symbols at the arch level, nor do we ever need to
      store them. It's a legacy of the previous version of the x86
      breakpoint backend.
      
      Let's remove it.
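
      For reference, the struct in question looks roughly like this, with
      the dropped field marked (a sketch; see the actual header for the
      exact layout):

       /* arch/x86/include/asm/hw_breakpoint.h (sketch) */
       struct arch_hw_breakpoint {
               char            *name;          /* target symbol name -- the field removed here */
               unsigned long   address;
               u8              len;
               u8              type;
       };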
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      3d083407
  10. 27 Feb 2010, 3 commits
    • x86: apic: Fix mismerge, add arch_probe_nr_irqs() again · 21c2fd99
      Ingo Molnar authored
      Merge commit aef55d49 mis-merged io_apic.c, so we lost the
      arch_probe_nr_irqs() method.
      
      This caused subtle boot breakages (udev confusion likely
      due to missing drivers) with certain configs.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20100207210250.GB8256@jenkins.home.ifup.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      21c2fd99
    • x86: Enable NMI on all cpus on UV · 78c06176
      Russ Anderson authored
      Enable NMI on all cpus in UV system and add an NMI handler
      to dump_stack on each cpu.
      
      By default on x86 all the cpus except the boot cpu have NMI
      masked off.  This patch enables NMI on all cpus in UV system
      and adds an NMI handler to dump_stack on each cpu.  This
      way if a system hangs we can NMI the machine and get a
      backtrace from all the cpus.
      
      Version 2: Use x86_platform driver mechanism for nmi init, per
                 Ingo's suggestion.
      
      Version 3: Clean up Ingo's nits.
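
      A sketch of the shape of such a handler (names are illustrative;
      hooking the die notifier chain for DIE_NMI is one way NMIs were
      intercepted in this era):

       #include <linux/kernel.h>
       #include <linux/kdebug.h>
       #include <linux/notifier.h>

       static int uv_nmi_dump_stack(struct notifier_block *self,
                                    unsigned long reason, void *data)
       {
               if (reason != DIE_NMI)
                       return NOTIFY_OK;

               dump_stack();           /* backtrace for this cpu */
               return NOTIFY_STOP;
       }

       static struct notifier_block uv_dump_stack_nmi_nb = {
               .notifier_call = uv_nmi_dump_stack,
       };

       static void __init uv_nmi_setup(void)
       {
               register_die_notifier(&uv_dump_stack_nmi_nb);
       }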
      Signed-off-by: Russ Anderson <rja@sgi.com>
      LKML-Reference: <20100226164912.GA24439@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78c06176
    • perf_event, amd: Fix spinlock initialization · 1dd2980d
      Peter Zijlstra authored
      Prevent kernels from exploding on AMD machines when any lock
      debugging bits are enabled.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1dd2980d
  11. 26 Feb 2010, 12 commits
    • perf_events, x86: Split PMU definitions into separate files · f22f54f4
      Peter Zijlstra authored
      Split amd, p6 and intel into separate files so that we can easily deal
      with the CONFIG_CPU_SUP_* things, needed to make things build now that
      perf_event.c relies on symbols from amd.c.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f22f54f4
    • perf_events, x86: Remove superflous MSR writes · 6667661d
      Peter Zijlstra authored
      We re-program the event control register every time we reset the count;
      this appears to be superfluous, hence remove it.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6667661d
    • perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in() · 6e37738a
      Peter Zijlstra authored
      Since the cpu argument to hw_perf_group_sched_in() is always
      smp_processor_id(), simplify the code a little by removing this argument
      and using the current cpu where needed.
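
      The change in shape, roughly (a sketch of the signature change
      described above, not the full patch):

       /* before: callers passed the cpu explicitly */
       int hw_perf_group_sched_in(struct perf_event *group_leader,
                                  struct perf_cpu_context *cpuctx,
                                  struct perf_event_context *ctx, int cpu);

       /* after: the argument is gone; implementations that still need the
        * cpu use smp_processor_id(), since it was always the current cpu */
       int hw_perf_group_sched_in(struct perf_event *group_leader,
                                  struct perf_cpu_context *cpuctx,
                                  struct perf_event_context *ctx);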
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1265890918.5396.3.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6e37738a
    • perf_events, x86: AMD event scheduling · 38331f62
      Stephane Eranian authored
      This patch adds correct AMD NorthBridge event scheduling.
      
      NB events are events measuring L3 cache and HyperTransport traffic.
      They are identified by an event code >= 0xe0. They measure events on
      the northbridge, which is shared by all cores on a package. NB events
      are counted on a shared set of counters. When an NB event is
      programmed in a counter, the data actually comes from a shared
      counter. Thus, access to those counters needs to be synchronized.
      
      We implement the synchronization such that no two cores can be measuring
      NB events using the same counters. Thus, we maintain a per-NB allocation
      table. The available slot is propagated using the event_constraint
      structure.
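
      A simplified sketch of the per-NB allocation idea (structure and
      function names here are illustrative, not the exact ones in the
      patch): each northbridge has an owners table, and a core claims a
      shared counter with an atomic compare-and-swap so only one winner can
      program an NB event there.

       #include <linux/perf_event.h>

       #define MAX_NB_COUNTERS 4

       /* one of these per northbridge, shared by all cores on the package */
       struct nb_alloc {
               struct perf_event *owners[MAX_NB_COUNTERS];
       };

       static bool nb_claim_counter(struct nb_alloc *nb, int i,
                                    struct perf_event *event)
       {
               return cmpxchg(&nb->owners[i], NULL, event) == NULL;
       }

       static void nb_release_counter(struct nb_alloc *nb, int i,
                                      struct perf_event *event)
       {
               cmpxchg(&nb->owners[i], event, NULL);
       }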
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38331f62
    • perf_events: Add new start/stop PMU callbacks · d76a0812
      Stephane Eranian authored
      In certain situations, the kernel may need to stop and start the same
      event rapidly. The current PMU callbacks do not distinguish between stop
      and release (i.e., stop + free the resource). Thus, a counter may be
      released and then immediately re-acquired. Event scheduling will
      again take place with no guarantee of being assigned the same counter.
      On some processors, this may even lead to a failure to assign the
      event back, due to competition between cores.

      This patch adds a new pair of callbacks to stop and restart a counter
      without actually releasing the underlying counter resource. On stop,
      the counter is stopped and its value saved, and that's it. On start,
      the value is reloaded and the counter is restarted (on x86, the actual
      restart is delayed until perf_enable()).
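
      A sketch of the new callback pair (a simplified rendering of the
      description above, not the exact struct pmu layout):

       struct pmu_sketch {
               /* stop counting and save the current count; keep the counter */
               void (*stop)(struct perf_event *event);
               /* reload the saved count and restart counting; on x86 the real
                * restart is deferred until perf_enable() */
               void (*start)(struct perf_event *event);
               /* ->enable()/->disable() remain for full acquire/release */
       };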
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ added fallback to ->enable/->disable for all other PMUs
        fixed x86_pmu_start() to call x86_pmu.enable()
        merged __x86_pmu_disable into x86_pmu_stop() ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d76a0812
    • early_res: Add free_early_partial() · fb90ef93
      Yinghai Lu authored
      To free partial areas in pcpu_setup...
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      LKML-Reference: <4B85E245.5030001@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fb90ef93
    • x86, olpc: Use pci subarch init for OLPC · d5d0e88c
      Thomas Gleixner authored
      Replace the #ifdef'ed OLPC-specific init functions with a conditional
      x86_init function. If the function returns 0, we leave pci_arch_init;
      otherwise we continue.
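
      The conditional looks roughly like this (a sketch of the x86_init
      hook usage described above):

       /* arch/x86/pci/init.c (sketch) */
       static __init int pci_arch_init(void)
       {
               /* subarch hook (e.g. OLPC): a zero return means "fully handled" */
               if (x86_init.pci.arch_init && !x86_init.pci.arch_init())
                       return 0;

               /* ...otherwise fall through to the generic PCI access probing... */
               return 0;
       }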
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Andres Salomon <dilinger@collabora.co.uk>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A318CE89@orsmsx508.amr.corp.intel.com>
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d5d0e88c
    • kprobes/x86: Support kprobes jump optimization on x86 · c0f7ac3a
      Masami Hiramatsu authored
      Introduce x86 arch-specific optimization code, which supports
      both x86-32 and x86-64.

      This code also supports safety checking, which decodes the whole of
      the function in which the probe is inserted, and checks the following
      conditions before optimization:
       - The optimized instructions which will be replaced by a jump instruction
         don't straddle the function boundary.
       - There is no indirect jump instruction, because it may jump into
         the address range which is replaced by the jump operand.
       - There is no jump/loop instruction which jumps into the address range
         which is replaced by the jump operand.
       - Don't optimize a kprobe if it is in a function into which fixup code
         will jump.

      This uses text_poke_multibyte(), which doesn't support modifying
      code in NMI/MCE handlers. However, since kprobes itself doesn't
      support probing NMI/MCE code, this is not a problem.
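
      A heavily simplified sketch of the boundary check (the decoder helper
      below is hypothetical; the real code walks the function with the x86
      instruction decoder):

       #include <linux/types.h>

       #define RELATIVEJUMP_SIZE 5     /* jmp rel32: 1-byte opcode + 4-byte operand */

       /* hypothetical helper: length of the instruction at addr, <= 0 on error */
       extern int decode_insn_length(unsigned long addr);

       /* The bytes [probe, probe + RELATIVEJUMP_SIZE) may only be replaced by
        * a jump if no instruction straddles the end of that range. */
       static bool insn_boundaries_ok(unsigned long func_start,
                                      unsigned long func_end,
                                      unsigned long probe)
       {
               unsigned long addr = func_start;
               unsigned long region_end = probe + RELATIVEJUMP_SIZE;

               while (addr < func_end) {
                       int len = decode_insn_length(addr);

                       if (len <= 0)
                               return false;   /* undecodable: give up */
                       if (addr < region_end && addr + len > region_end)
                               return false;   /* instruction straddles the region */
                       addr += len;
               }
               return true;
       }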
      
      Changes in v9:
       - Use *_text_reserved() for checking the probe can be optimized.
       - Verify jump address range is in 2G range when preparing slot.
       - Backup original code when switching optimized buffer, instead of
         preparing buffer, because there can be int3 of other probes in
         preparing phase.
       - Check kprobe is disabled in arch_check_optimized_kprobe().
       - Strictly check indirect jump opcodes (ff /4, ff /5).
      
      Changes in v6:
       - Split stop_machine-based jump patching code.
       - Update comments and coding style.
      
      Changes in v5:
       - Introduce stop_machine-based jump replacing.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      LKML-Reference: <20100225133446.6725.78994.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c0f7ac3a
    • x86: Add text_poke_smp for SMP cross modifying code · 3d55cc8a
      Masami Hiramatsu authored
      Add a generic text_poke_smp() for SMP, which uses stop_machine()
      to synchronize code modification.
      This stop_machine() method is officially described in section 7.1.3,
      "Handling Self- and Cross-Modifying Code", of Intel's Software
      Developer's Manual, Volume 3A.

      Since stop_machine() can't protect code against NMI/MCE, this
      function cannot modify those handlers. Also, this function is
      basically for modifying a single multibyte instruction. For
      modifying multiple multibyte instructions, we need another special
      trap & detour code.

      This code originally comes from the stop_machine() version of
      immediate values. Thanks Jason and Mathieu!
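
      A sketch of the stop_machine()-based approach (simplified; text_poke()
      and stop_machine() are existing kernel APIs, the wrapper below is
      illustrative):

       #include <linux/stop_machine.h>
       #include <asm/alternative.h>    /* text_poke() */

       struct text_poke_param {
               void *addr;
               const void *opcode;
               size_t len;
       };

       /* Runs while every other cpu spins inside stop_machine(), so nobody
        * can execute the bytes while we patch them. */
       static int do_text_poke(void *data)
       {
               struct text_poke_param *p = data;

               text_poke(p->addr, p->opcode, p->len);
               return 0;
       }

       void *text_poke_smp_sketch(void *addr, const void *opcode, size_t len)
       {
               struct text_poke_param p = { .addr = addr, .opcode = opcode, .len = len };

               stop_machine(do_text_poke, &p, NULL);
               return addr;
       }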
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      LKML-Reference: <20100225133438.6725.80273.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3d55cc8a
    • kprobes/x86: Cleanup save/restore registers · f007ea26
      Masami Hiramatsu authored
      Introduce SAVE/RESTORE_REGS_STRING to clean up the
      kretprobe-trampoline asm code. These macros will be used for
      emulating an interrupt.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      LKML-Reference: <20100225133430.6725.83342.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f007ea26
    • kprobes/x86: Boost probes when reentering · 0f94eb63
      Masami Hiramatsu authored
      Integrate prepare_singlestep() into setup_singlestep() to boost
      reentered probes where possible.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      LKML-Reference: <20100225133423.6725.12071.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0f94eb63
    • kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE · d498f763
      Masami Hiramatsu authored
      Change RELATIVEJUMP_INSTRUCTION macro to RELATIVEJUMP_OPCODE
      since it represents just the opcode byte.
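
      For reference, the macro names just the single opcode byte of a near
      relative jmp (sketch):

       /* 0xe9 is the opcode byte of "jmp rel32"; a 4-byte displacement follows */
       #define RELATIVEJUMP_OPCODE     0xe9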
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      LKML-Reference: <20100225133349.6725.99302.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d498f763
  12. 25 Feb 2010, 2 commits
    • ftrace: Remove memory barriers from NMI code when not needed · 0c54dd34
      Steven Rostedt authored
      The code in stop_machine that modifies the kernel text has a bit
      of logic to handle the case of NMIs. stop_machine does not prevent
      NMIs from executing, and if an NMI were to trigger on another CPU
      as the modifying CPU is changing the NMI text, a GPF could result.
      
      To prevent the GPF, the NMI calls ftrace_nmi_enter() which may
      modify the code first, then any other NMIs will just change the
      text to the same content which will do no harm. The code that
      stop_machine called must wait for NMIs to finish while it changes
      each location in the kernel. That code may also change the text
      to what the NMI changed it to. The key is that the text will never
      change content while another CPU is executing it.
      
      To make the above work, the call to ftrace_nmi_enter() must also
      do a smp_mb() as well as atomic_inc().  But for applications like
      perf that require a high number of NMIs for profiling, this can have
      a dramatic effect on the system. Not only is it doing a full memory
      barrier on both nmi_enter() as well as nmi_exit() it is also
      modifying a global variable with an atomic operation. This kills
      performance on large SMP machines.
      
      Since the memory barriers are only needed when ftrace is in the
      process of modifying the text (which is seldom), this patch
      adds a "modifying_code" variable that gets set before stop machine
      is executed and cleared afterwards.
      
      The NMIs will check this variable and store it in a per CPU
      "save_modifying_code" variable that it will use to check if it
      needs to do the memory barriers and atomic dec on NMI exit.
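
      A simplified sketch of the resulting fast path (variable names follow
      the description above; details differ from the real code):

       #include <linux/percpu.h>
       #include <asm/atomic.h>

       static int modifying_code;                      /* set around stop_machine() */
       static DEFINE_PER_CPU(int, save_modifying_code);
       static atomic_t nmi_running;

       void ftrace_nmi_enter_sketch(void)
       {
               __get_cpu_var(save_modifying_code) = modifying_code;
               if (!__get_cpu_var(save_modifying_code))
                       return;         /* common case: no barrier, no atomics */

               atomic_inc(&nmi_running);
               smp_mb();               /* only paid while text is being modified */
       }

       void ftrace_nmi_exit_sketch(void)
       {
               if (!__get_cpu_var(save_modifying_code))
                       return;

               smp_mb();
               atomic_dec(&nmi_running);
       }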
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0c54dd34
    • x86: Do not reserve brk for DMI if it's not going to be used · e808bae2
      Thadeu Lima de Souza Cascardo authored
      This will save 64K of memory when booting Linux if DMI is
      disabled, which is good for embedded systems.
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
      LKML-Reference: <1265758732-19320-1-git-send-email-cascardo@holoscopio.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e808bae2