1. 15 March 2011: 2 commits
  2. 12 March 2011: 9 commits
  3. 11 March 2011: 2 commits
  4. 10 March 2011: 2 commits
    • ftrace/graph: Trace function entry before updating index · 722b3c74
      Committed by Steven Rostedt
      Currently the index into the ret_stack is updated and the real return
      address is saved in the ret_stack; then we call the trace function.
      The trace function may decide that it does not want to trace this
      function (e.g. set_graph_function does not match), in which case it
      returns 0, meaning this call should not be traced.
      
      The normal function graph tracer has this code:
      
      	if (!(trace->depth || ftrace_graph_addr(trace->func)) ||
      	      ftrace_graph_ignore_irqs())
      		return 0;
      
      What this states is: if the trace depth (which is curr_ret_stack) is
      zero (i.e. we are at the top of the nested functions), test whether we
      want to trace this function. If this function is not to be traced,
      return 0 and the rest of the function graph tracer logic will not
      trace it.
      
      The problem arises when an interrupt comes in after we have updated
      curr_ret_stack. The next function that gets called will have a
      trace->depth of 1, which fools this check into thinking that we are in
      a nested function and should trace. This causes interrupts to be
      traced when they should not be.
      
      The solution is to trace the function first and then update the ret_stack.
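      A hedged sketch of the reordering, modeled on the x86
      prepare_ftrace_return(); the variable and helper names approximate the
      real code and are not quoted verbatim from this patch:
      
      	/* Ask the tracer first, before curr_ret_stack is touched ... */
      	trace.func = self_addr;
      	trace.depth = current->curr_ret_stack + 1;
      
      	if (!ftrace_graph_entry(&trace)) {
      		*parent = old;	/* restore the real return address */
      		return;
      	}
      
      	/* ... and only then reserve the ret_stack slot. */
      	if (ftrace_push_return_stack(old, self_addr, &trace.depth,
      				     frame_pointer) == -EBUSY) {
      		*parent = old;
      		return;
      	}
      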
      Reported-by: zhiping zhong <xzhong86@163.com>
      Reported-by: wu zhangjin <wuzhangjin@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • [CPUFREQ] pcc-cpufreq: don't load driver if get_freq fails during init. · 1f858ef2
      Committed by Naga Chumbalkar
      Return 0 on failure. This will cause the initialization of the driver
      to fail and prevent the driver from loading if the BIOS cannot handle
      the PCC interface command to "get frequency". Otherwise, the driver
      will load and display a very high value like "4294967274" (which is
      actually -EINVAL) for frequency:
      
      # cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
      4294967274
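      
      As an aside (not from the patch itself), a minimal userspace
      illustration of why a leaked -EINVAL shows up as 4294967274: EINVAL is
      22 on Linux, and cpufreq reports frequencies as unsigned values, so
      the negative errno wraps around.
      
      	#include <errno.h>
      	#include <stdio.h>
      	
      	int main(void)
      	{
      		int ret = -EINVAL;       /* what a failing "get frequency" returns */
      		unsigned int freq = ret; /* frequency is treated as unsigned */
      	
      		printf("%u\n", freq);    /* prints 4294967274 */
      		return 0;
      	}
      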
      Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
      CC: stable@kernel.org
      Signed-off-by: Dave Jones <davej@redhat.com>
  5. 09 March 2011: 4 commits
    • x86: Don't check for BIOS corruption in first 64K when there's no need to · a7bd1daf
      Committed by Naga Chumbalkar
      Due to commit 781c5a67 it is likely that the number of areas to scan
      for BIOS corruption is 0 -- especially when the first 64K is already
      reserved (X86_RESERVE_LOW is 64K by default).
      
      If that's the case then don't set up the scan.
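      
      A hedged sketch of the idea (this is not the literal patch; the
      variable names approximate arch/x86/kernel/check.c): bail out of the
      corruption-check setup when no scan areas were reserved.
      
      	/* in the corruption-check setup path (sketch): */
      	if (!num_scan_areas || !memory_corruption_check)
      		return 0;	/* nothing reserved -- don't arm the periodic scan */
      
      	schedule_delayed_work(&bios_check_work,
      			      corruption_check_period * HZ);
      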
      Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <20110225202838.2229.71011.sendpatchset@nchumbalkar.americas.hpqcorp.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Fix binutils-2.21 symbol related build failures · 2ae9d293
      Committed by Sedat Dilek
      New binutils version 2.21.0.20110302-1 started checking that the symbol
      parameter to the .size directive matches the entry name's
      symbol parameter, unearthing two mismatches:
      
        AS      arch/x86/kernel/acpi/wakeup_rm.o
        arch/x86/kernel/acpi/wakeup_rm.S: Assembler messages:
        arch/x86/kernel/acpi/wakeup_rm.S:12: Error: .size expression with symbol `wakeup_code_start' does not evaluate to a constant
      
        arch/x86/kernel/entry_32.S: Assembler messages:
        arch/x86/kernel/entry_32.S:1421: Error: .size expression with symbol `apf_page_fault' does not evaluate to a constant
      
      The problem was discovered while using Debian's binutils
      (2.21.0.20110302-1) and experimenting with binutils from
      upstream.
      
      Thanks to Alexander and H.J. for the vital help.
      Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Cc: H.J. Lu <hjl.tools@gmail.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      LKML-Reference: <1299620364-21644-1-git-send-email-sedat.dilek@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • kprobes: Disabling optimized kprobes for entry text section · 2a8247a2
      Committed by Jiri Olsa
      You can crash the kernel (with root/admin privileges) using the kprobe
      tracer by running:
      
       echo "p system_call_after_swapgs" > ./kprobe_events
       echo 1 > ./events/kprobes/enable
      
      The reason is that at the system_call_after_swapgs label, the
      kernel stack is not set up. If optimized kprobes are enabled,
      the user space stack is being used in this case (see optimized
      kprobe template) and this might result in a crash.
      
      There are several places like this in the entry code (entry_$BIT). As
      there seems to be no reasonable/maintainable way to disable
      optimization only at those places where the stack is not ready, I
      switched off kprobe optimization for the whole entry code.
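      
      A hedged sketch of the resulting check in can_optimize()
      (arch/x86/kernel/kprobes.c, simplified); it relies on the .entry.text
      boundary symbols introduced by the "Separate out entry text section"
      commit below:
      
      	/*
      	 * Do not optimize probes in the entry code: the kernel stack
      	 * is not reliable there, as described above.
      	 */
      	if (paddr >= (unsigned long)__entry_text_start &&
      	    paddr <  (unsigned long)__entry_text_end)
      		return 0;
      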
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: acme@redhat.com
      Cc: fweisbec@gmail.com
      Cc: ananth@in.ibm.com
      Cc: davem@davemloft.net
      Cc: a.p.zijlstra@chello.nl
      Cc: eric.dumazet@gmail.com
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <1298298313-5980-3-git-send-email-jolsa@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Separate out entry text section · ea714547
      Committed by Jiri Olsa
      Put x86 entry code into a separate link section: .entry.text.
      
      Separating the entry text section seems to have performance
      benefits - caused by more efficient instruction cache usage.
      
      Running hackbench with perf stat --repeat showed that the change
      compresses the icache footprint. The icache load miss rate went
      down by about 15%:
      
       before patch:
               19417627  L1-icache-load-misses      ( +-   0.147% )
      
       after patch:
               16490788  L1-icache-load-misses      ( +-   0.180% )
      
      The motivation for the patch was to fix a particular kprobes bug
      related to the entry text section; the performance advantage was
      discovered accidentally.
      
      Whole perf output follows:
      
       - results for current tip tree:
      
        Performance counter stats for './hackbench/hackbench 10' (500 runs):
      
               19417627  L1-icache-load-misses      ( +-   0.147% )
             2676914223  instructions             #      0.497 IPC     ( +- 0.079% )
             5389516026  cycles                     ( +-   0.144% )
      
            0.206267711  seconds time elapsed   ( +-   0.138% )
      
       - results for current tip tree with the patch applied:
      
        Performance counter stats for './hackbench/hackbench 10' (500 runs):
      
               16490788  L1-icache-load-misses      ( +-   0.180% )
             2717734941  instructions             #      0.502 IPC     ( +- 0.079% )
             5414756975  cycles                     ( +-   0.148% )
      
            0.206747566  seconds time elapsed   ( +-   0.137% )
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: masami.hiramatsu.pt@hitachi.com
      Cc: ananth@in.ibm.com
      Cc: davem@davemloft.net
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20110307181039.GB15197@jolsa.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 05 March 2011: 2 commits
  7. 04 March 2011: 4 commits
    • perf: Fix LLC-* events on Intel Nehalem/Westmere · e994d7d2
      Committed by Andi Kleen
      On Intel Nehalem and Westmere CPUs the generic perf LLC-* events count the
      L2 caches, not the real L3 LLC - this was inconsistent with behavior on
      other CPUs.
      
      Fixing this requires the use of the special OFFCORE_RESPONSE
      events which need a separate mask register.
      
      This has been implemented by the previous patch; now use this
      infrastructure to set the correct events for LLC-* on Nehalem and
      Westmere.
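      
      A minimal userspace sketch (not part of the patch) of counting the
      generic LLC read-miss event that this fix remaps to the real L3 on
      Nehalem/Westmere; error handling is kept to a minimum:
      
      	#include <linux/perf_event.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      	
      	int main(void)
      	{
      		struct perf_event_attr attr;
      		long long count;
      		int fd;
      	
      		memset(&attr, 0, sizeof(attr));
      		attr.size = sizeof(attr);
      		attr.type = PERF_TYPE_HW_CACHE;
      		attr.config = PERF_COUNT_HW_CACHE_LL |
      			      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
      			      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
      	
      		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
      		if (fd < 0) {
      			perror("perf_event_open");
      			return 1;
      		}
      	
      		/* ... run the workload to be measured here ... */
      	
      		read(fd, &count, sizeof(count));
      		printf("LLC-load-misses: %lld\n", count);
      		close(fd);
      		return 0;
      	}
      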
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-3-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Add support for supplementary event registers · a7e3ed1e
      Committed by Andi Kleen
      Change logs against Andi's original version:
      
      - Extends perf_event_attr:config to config{,1,2} (Peter Zijlstra)
      - Fixed a major event scheduling issue. There cannot be a ref++ on an
        event that has already done ref++ once without calling
        put_constraint() in between. (Stephane Eranian)
      - Use thread_cpumask for percore allocation. (Lin Ming)
      - Use MSR names in the extra reg lists. (Lin Ming)
      - Remove redundant "c = NULL" in intel_percore_constraints
      - Fix comment of perf_event_attr::config1
      
      Intel Nehalem/Westmere have a special OFFCORE_RESPONSE event
      that can be used to monitor any offcore accesses from a core.
      This is a very useful event for various tunings, and it's
      also needed to implement the generic LLC-* events correctly.
      
      Unfortunately this event requires programming a mask in a separate
      register. And worse, this separate register is per core, not per
      CPU thread.
      
      This patch:
      
      - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
        The extra parameters are passed by user space in the
        perf_event_attr::config1 field.
      
      - Adds support to the Intel perf_event core to schedule per
        core resources. This adds fairly generic infrastructure that
        can also be used for other per core resources.
        The basic code is patterned after the similar AMD northbridge
        constraints code.
      
      Thanks to Stephane Eranian who pointed out some problems
      in the original version and suggested improvements.
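      
      A minimal userspace sketch of the new interface (not from the patch
      itself, and it assumes a kernel with this patch applied): the
      OFFCORE_RESPONSE event is programmed as a raw event and its extra mask
      register is passed through the new perf_event_attr::config1 field. The
      0x01b7 raw code is OFFCORE_RESPONSE_0 on Nehalem/Westmere; the mask
      value below is only a placeholder -- the real bit encoding comes from
      the Intel SDM.
      
      	#include <linux/perf_event.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      	
      	int main(void)
      	{
      		struct perf_event_attr attr;
      		long long count;
      		int fd;
      	
      		memset(&attr, 0, sizeof(attr));
      		attr.size = sizeof(attr);
      		attr.type = PERF_TYPE_RAW;
      		attr.config = 0x01b7;   /* OFFCORE_RESPONSE_0: event 0xB7, umask 0x01 */
      		attr.config1 = 0xffff;  /* placeholder offcore response mask */
      	
      		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
      		if (fd < 0) {
      			perror("perf_event_open");
      			return 1;
      		}
      	
      		/* ... workload under measurement ... */
      	
      		read(fd, &count, sizeof(count));
      		printf("offcore responses: %lld\n", count);
      		close(fd);
      		return 0;
      	}
      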
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-2-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Update PEBS event constraints · 17e31629
      Committed by Stephane Eranian
      This patch updates PEBS event constraints for Intel Atom, Nehalem, Westmere.
      
      This patch also reorganizes the PEBS format/constraint detection code.
      It is now based on processor model rather than PEBS format: two
      processors may use the same PEBS format without having the same list
      of PEBS events.
      
      In this second version, we simplified the initialization of the PEBS
      constraints by leveraging the existing switch() statement in perf_event_intel.c.
      We also renamed the constraint tables to be more consistent with regular
      constraints.
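      
      A hedged sketch of that switch()-based initialization (names follow
      the style of arch/x86/kernel/cpu/perf_event_intel.c; model numbers
      abridged, so treat this as illustrative only):
      
      	switch (boot_cpu_data.x86_model) {
      	case 28: /* Atom */
      		x86_pmu.pebs_constraints = intel_atom_pebs_event_constraints;
      		break;
      	case 26: /* Nehalem */
      		x86_pmu.pebs_constraints = intel_nehalem_pebs_event_constraints;
      		break;
      	case 37: /* Westmere */
      		x86_pmu.pebs_constraints = intel_westmere_pebs_event_constraints;
      		break;
      	}
      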
      
      In this 3rd version, we drop BR_INST_RETIRED.MISPRED from Intel Atom
      as it does not seem to work. Use MISPREDICTED_BRANCH_RETIRED instead.
      Also add FP_ASSIST.* to both Intel Nehalem and Westmere; I missed
      those in the earlier patches. Events were tested using libpfm4
      perf_examples.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4d6e6b02.815bdf0a.637b.07a7@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86-64, NUMA: Revert NUMA affine page table allocation · f8911250
      Committed by Tejun Heo
      This patch reverts NUMA affine page table allocation added by commit
      1411e0ec (x86-64, numa: Put pgtable to local node memory).
      
      The commit made an undocumented change whereby the kernel linear
      mapping strictly follows the intersection of the e820 memory map and
      the NUMA configuration.  If the physical memory configuration has
      holes or the NUMA nodes are not properly aligned, this leads to using
      an unnecessarily small mapping size, which in turn leads to increased
      TLB pressure.  For details, see:
      
        http://thread.gmane.org/gmane.linux.kernel/1104672
      
      Patches to fix the problem have been proposed, but the underlying code
      needs more cleanup and the approach itself seems a bit heavy-handed,
      so it has been decided to revert the feature for now and come back to
      it in the next development cycle.
      
        http://thread.gmane.org/gmane.linux.kernel/1105959
      
      As init_memory_mapping_high() callsites have been consolidated since
      the commit, reverting is done manually.  Also, the RED-PEN comment in
      arch/x86/mm/init.c is not restored as the problem no longer exists
      with memblock based top-down early memory allocation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
  8. 02 March 2011: 4 commits
  9. 01 March 2011: 1 commit
  10. 26 February 2011: 1 commit
  11. 25 February 2011: 2 commits
  12. 24 February 2011: 7 commits