1. 23 Sep 2009, 1 commit
    • perf_event, x86: Fix 'perf sched record' crashing the machine · 7d428966
      Committed by Peter Zijlstra
      Chris Malley reported that 'perf sched record' sometimes
      crashes his box with:
      
      [  389.272175] BUG: unable to handle kernel paging request at ffffb300
      [  389.272294] IP: [<c011b0bd>] default_send_IPI_self+0x1d/0x50
      [  389.272366] *pde = 0073f067 *pte = 00000000
      [  389.274708] Call Trace:
      [  389.274752]  [<c010e3b4>] ?  set_perf_event_pending+0x14/0x20
      [  389.274801]  [<c01b9751>] ?  perf_output_unlock+0x121/0x1a0
      [  389.274848]  [<c01b981a>] ? perf_output_end+0x4a/0x70
      [  389.274893]  [<c01ba690>] ?  __perf_event_overflow+0x240/0x2f0
      [  389.274942]  [<c030963e>] ? atomic64_cmpxchg+0x1e/0x30
      [  389.274988]  [<c01ba8f4>] ?  perf_swevent_ctx_event+0x1b4/0x1c0
      [  389.275035]  [<c01ba773>] ?  perf_swevent_ctx_event+0x33/0x1c0
      [  389.275081]  [<c01ba9a7>] ? do_perf_sw_event+0xa7/0x160
      [  389.275127]  [<c01baae2>] ? perf_tp_event+0x82/0xa0
      [  389.275174]  [<c012e9c6>] ?  ftrace_profile_sched_stat_runtime+0xe6/0x120
      [  389.275224]  [<c012e8e0>] ?  ftrace_profile_sched_stat_runtime+0x0/0x120
      [  389.275273]  [<c013c85a>] ? update_curr+0x18a/0x230
      [  389.275318]  [<c013cdc5>] ?  put_prev_task_fair+0x155/0x160
      [  389.275366]  [<c01618b5>] ? sched_clock_cpu+0xd5/0x110
      [  389.275413]  [<c04e7525>] ? _spin_lock_irq+0x45/0x50
      [  389.275458]  [<c04e424e>] ? schedule+0x20e/0xb10
      
      The problem is that the box has no lapic enabled:
      
        [    0.042445] Local APIC not detected. Using dummy APIC emulation.
      
      The below seems like the best fix. We disabled all lapic bits, except
      the self-IPI-resend logic.
      Reported-by: Chris Malley <mail@chrismalley.co.uk>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <7863dc4c0909221409v7893bfd3o4b590d5951a233ba@mail.gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7d428966
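      Editor's note: one way to avoid this class of crash, sketched here purely for
      illustration (the helpers are hypothetical stand-ins, and this is not the actual
      kernel patch), is to gate the pending-work self-IPI on local-APIC availability:

        #include <stdbool.h>

        /* Illustrative stand-ins for the real kernel facilities. */
        extern bool local_apic_available(void);          /* hypothetical probe        */
        extern void apic_send_self_ipi(int vector);      /* hypothetical IPI sender   */
        extern void raise_softirq_fallback(void);        /* hypothetical non-APIC path */

        #define PERF_PENDING_VECTOR 0xf0   /* placeholder vector number */

        /*
         * Queue pending perf work: only take the APIC self-IPI path when a
         * local APIC is actually usable, so that boxes running with "dummy
         * APIC emulation" do not fault in the IPI code.
         */
        void set_perf_pending_sketch(void)
        {
            if (local_apic_available())
                apic_send_self_ipi(PERF_PENDING_VECTOR);
            else
                raise_softirq_fallback();
        }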
  2. 21 Sep 2009, 4 commits
    • perf: Tidy up after the big rename · 57c0c15b
      Committed by Ingo Molnar
       - provide compatibility Kconfig entry for existing PERF_COUNTERS .config's
      
       - provide courtesy copy of old perf_counter.h, for user-space projects
      
       - small indentation fixups
      
       - fix up MAINTAINERS
      
       - fix small x86 printout fallout
      
       - fix up small PowerPC comment fallout (use 'counter' as in register)
      Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      57c0c15b
    • perf: Do the big rename: Performance Counters -> Performance Events · cdd6c482
      Committed by Ingo Molnar
      Bye-bye Performance Counters, welcome Performance Events!
      
      In the past few months the perfcounters subsystem has grown out of its
      initial role of counting hardware events, and has become (and is
      becoming) a much broader generic event enumeration, reporting, logging,
      monitoring and analysis facility.
      
      Naming its core object 'perf_counter' and naming the subsystem
      'perfcounters' has become more and more of a misnomer. With pending
      code like hw-breakpoints support the 'counter' name is less and
      less appropriate.
      
      All in one, we've decided to rename the subsystem to 'performance
      events' and to propagate this rename through all fields, variables
      and API names. (in an ABI compatible fashion)
      
      The word 'event' is also a bit shorter than 'counter' - which makes
      it slightly more convenient to write/handle as well.
      
      Thanks goes to Stephane Eranian who first observed this misnomer and
      suggested a rename.
      
      User-space tooling and ABI compatibility is not affected - this patch
      should be function-invariant. (Also, defconfigs were not touched to
      keep the size down.)
      
      This patch has been generated via the following script:
      
        FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
      
        sed -i \
          -e 's/PERF_EVENT_/PERF_RECORD_/g' \
          -e 's/PERF_COUNTER/PERF_EVENT/g' \
          -e 's/perf_counter/perf_event/g' \
          -e 's/nb_counters/nb_events/g' \
          -e 's/swcounter/swevent/g' \
          -e 's/tpcounter_event/tp_event/g' \
          $FILES
      
        for N in $(find . -name perf_counter.[ch]); do
          M=$(echo $N | sed 's/perf_counter/perf_event/g')
          mv $N $M
        done
      
        FILES=$(find . -name perf_event.*)
      
        sed -i \
          -e 's/COUNTER_MASK/REG_MASK/g' \
          -e 's/COUNTER/EVENT/g' \
          -e 's/\<event\>/event_id/g' \
          -e 's/counter/event/g' \
          -e 's/Counter/Event/g' \
          $FILES
      
      ... to keep it as correct as possible. This script can also be
      used by anyone who has pending perfcounters patches - it converts
      a Linux kernel tree over to the new naming. We tried to time this
      change to the point in time where the amount of pending patches
      is the smallest: the end of the merge window.
      
      Namespace clashes were fixed up in a preparatory patch - and some
      stylistic fallout will be fixed up in a subsequent patch.
      
      ( NOTE: 'counters' are still the proper terminology when we deal
        with hardware registers - and these sed scripts are a bit
        over-eager in renaming them. I've undone some of that, but
        in case there's something left where 'counter' would be
        better than 'event' we can undo that on an individual basis
        instead of touching an otherwise nicely automated patch. )
      Suggested-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: <linux-arch@vger.kernel.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cdd6c482
    • perf_counter: Rename 'event' to event_id/hw_event · dfc65094
      Committed by Ingo Molnar
      In preparation for the renames, to avoid a namespace clash.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dfc65094
    • perf_counter: x86: Fix PMU resource leak · a1792cda
      Committed by Peter Zijlstra
      Dave noticed that we leak the PMU resource reservations when we
      fail the hardware counter init.
      Reported-by: David Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: David Miller <davem@davemloft.net>
      LKML-Reference: <1252483487.7746.164.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a1792cda
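      Editor's note: a hedged sketch of the leak pattern and its fix follows. If the
      hardware-counter init fails after the PMC hardware was reserved, the reservation
      must be released on the error path. The function names are illustrative stand-ins,
      not the literal kernel symbols.

        #include <errno.h>

        extern int  reserve_pmc_hardware_sketch(void);   /* hypothetical: non-zero on success */
        extern void release_pmc_hardware_sketch(void);   /* hypothetical: drop the reservation */
        extern int  hw_counter_setup_sketch(void);       /* hypothetical: 0 on success         */

        int hw_perf_counter_init_sketch(void)
        {
            int err;

            if (!reserve_pmc_hardware_sketch())
                return -EBUSY;

            err = hw_counter_setup_sketch();
            if (err) {
                /* The leak being fixed: without this, the PMU stays reserved forever. */
                release_pmc_hardware_sketch();
                return err;
            }
            return 0;
        }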
  3. 19 Sep 2009, 1 commit
  4. 04 Sep 2009, 3 commits
  5. 12 Aug 2009, 1 commit
    • perf_counter, x86: Fix/improve apic fallback · 04da8a43
      Committed by Ingo Molnar
      Johannes Stezenbach reported that his Pentium-M based
      laptop does not have the local APIC enabled by default,
      and hence perfcounters do not get initialized.
      
      Add a fallback for this case: allow non-sampled counters
      and return with an error on sampled counters. This allows
      'perf stat' to work out of the box - and allows 'perf top'
      and 'perf record' to fall back on an hrtimer-based sampling
      method.
      
      ( Passing 'lapic' on the boot line will allow hardware
        sampling to occur - but if the APIC is disabled
        permanently by the hardware then this fallback still
        allows more systems to use perfcounters. )
      
      Also decouple perfcounter support from X86_LOCAL_APIC.
      
      -v2: fix typo breaking counters on all other systems ...
      Reported-by: Johannes Stezenbach <js@sig21.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      04da8a43
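      Editor's note: the fallback policy above reduces to one check at counter-creation
      time: counted-only events are fine without an APIC, interrupt-driven sampling is
      not. A minimal sketch, assuming illustrative field and function names (not the
      exact kernel ones):

        #include <errno.h>
        #include <stdbool.h>

        struct hw_counter_sketch {
            unsigned long long sample_period;   /* non-zero => sampled counter */
        };

        static bool lapic_usable;               /* set during PMU init (illustrative) */

        /* Allow counted-only events without an APIC; refuse hardware sampling. */
        int check_apic_fallback_sketch(const struct hw_counter_sketch *hwc)
        {
            if (!lapic_usable && hwc->sample_period)
                return -EOPNOTSUPP;   /* caller falls back to hrtimer-based sampling */
            return 0;
        }

      With such a check in place, 'perf stat' works unchanged, while 'perf top' and
      'perf record' take the hrtimer-based sampling path the commit message describes.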
  6. 11 Aug 2009, 2 commits
    • perf_counter, x86: Fix generic cache events on P6-mobile CPUs · f64ccccb
      Committed by Ingo Molnar
      Johannes Stezenbach reported that 'perf stat' does not count
      cache-miss and cache-references events on his Pentium-M based
      laptop.
      
      This is because we left them blank in p6_perfmon_event_map[];
      fill them in.
      Reported-by: Johannes Stezenbach <js@sig21.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f64ccccb
    • perf_counter, x86: Fix lapic printk message · 3c581a7f
      Committed by Ingo Molnar
      Instead of this garbled bootup on UP Pentium-M systems:
      
      [    0.015048] Performance Counters:
      [    0.016004] no Local APIC, try rebooting with lapicno PMU driver, software counters only.
      
      Print:
      
      [    0.015050] Performance Counters:
      [    0.016004] no APIC, boot with the "lapic" boot parameter to force-enable it.
      [    0.017003] no PMU driver, software counters only.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c581a7f
  7. 09 Aug 2009, 1 commit
  8. 23 Jul 2009, 1 commit
  9. 13 Jul 2009, 1 commit
    • perf_counter, x86: Extend perf_counter Pentium M support · f1c6a581
      Committed by Daniel Qarras
      I've attached a patch to remove the Pentium M special casing of
      EMON, and as noticed, at least with my Pentium M the hardware PMU
      now works:
      
       Performance counter stats for '/bin/ls /var/tmp':
      
             1.809988  task-clock-msecs         #      0.125 CPUs
                    1  context-switches         #      0.001 M/sec
                    0  CPU-migrations           #	 0.000 M/sec
                  224  page-faults              #	 0.124 M/sec
              1425648  cycles                   #    787.656 M/sec
               912755  instructions             #	 0.640 IPC
      
      Vince suggested that this code was trying to address erratum
      Y17 in Pentium-M's:
      
        http://download.intel.com/support/processors/mobile/pm/sb/25266532.pdf
      
      But that erratum (related to IA32_MISC_ENABLES.7) does not
      affect perfcounters, as we don't use this toggle to disable RDPMC
      and WRMSR/RDMSR access to performance counters. We keep cr4's
      bit 8 (X86_CR4_PCE) clear so unprivileged RDPMC access is not
      allowed anyway.
      
      Cc: Vince Weaver <vince@deater.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@googlemail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f1c6a581
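      Editor's note on the CR4 remark: bit 8 of CR4 (X86_CR4_PCE) gates user-mode
      RDPMC, so with it clear an unprivileged RDPMC faults regardless of the erratum.
      A small sketch of that test; the CR4 accessor is a hypothetical stand-in:

        #include <stdbool.h>

        #define X86_CR4_PCE_SKETCH (1UL << 8)   /* CR4.PCE: Performance-monitoring Counter Enable */

        extern unsigned long read_cr4_sketch(void);   /* hypothetical CR4 accessor */

        /* With PCE clear, RDPMC from user mode faults, so the erratum path is moot. */
        bool user_rdpmc_allowed_sketch(void)
        {
            return (read_cr4_sketch() & X86_CR4_PCE_SKETCH) != 0;
        }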
  10. 10 Jul 2009, 3 commits
    • perf_counter: Clean up global vs counter enable · 984b838c
      Committed by Peter Zijlstra
      Ingo noticed that both AMD and P6 call
      x86_pmu_disable_counter() on *_pmu_enable_counter(). This is
      because we rely on the side effect of that call to program
      the event config but not touch the EN bit.
      
      We change that for AMD by having enable_all() simply write
      the full config in, and for P6 by explicitly coding it.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      984b838c
    • perf_counter: Fix up P6 PMU details · 9c74fb50
      Committed by Peter Zijlstra
      The P6 doesn't seem to support cache ref/hit/miss counts, so
      we extend the generic hardware event codes to have 0 and -1
      mean the same thing as for the generic cache events.
      
      Furthermore, it turns out the 0 event does not count
      (that is, it's reported that on PPro it actually does count
      something); therefore use an event configuration that's
      specified not to count to disable the counters.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9c74fb50
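      Editor's note: the "0 and -1 mean not supported" convention can be sketched as a
      single check when a generic hardware event is translated into a raw config value
      (a sketch of the convention only; the names are illustrative):

        #include <errno.h>
        #include <stdint.h>

        /* Per the commit: 0 and (u64)-1 entries in the event map mean "not supported". */
        int map_generic_event_sketch(const uint64_t *event_map, int idx, uint64_t *config)
        {
            uint64_t val = event_map[idx];

            if (val == 0 || val == (uint64_t)-1)
                return -ENOENT;        /* event unsupported on this PMU */

            *config = val;
            return 0;
        }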
    • perf_counter: Add P6 PMU support · 11d1578f
      Committed by Vince Weaver
      Add basic P6 PMU support. The P6 uses the EVNTSEL0 EN bit to
      enable/disable both its counters. We use this for the
      global enable/disable, and clear all config bits (except EN)
      to disable individual counters.
      
      Actual ia32 hardware doesn't support lfence, so use a locked
      op without side-effect to implement a full barrier.
      
      perf stat and perf record seem to function correctly.
      
      [a.p.zijlstra@chello.nl: cleanups and complete the enable/disable code]
      Signed-off-by: Vince Weaver <vince@deater.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <Pine.LNX.4.64.0907081718450.2715@pianoman.cluster.toy>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      11d1578f
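      Editor's note: a hedged sketch of the enable scheme described above. The EN bit of
      EVNTSEL0 acts as the global switch for both counters, and an individual counter is
      disabled by clearing every config bit except EN. The MSR address, the bit position
      and the MSR helpers are assumptions for illustration:

        #include <stdint.h>

        #define P6_EVNTSEL0_MSR_SKETCH  0x186u          /* EVNTSEL0 (assumed address)   */
        #define P6_EVNTSEL_EN_SKETCH    (1ULL << 22)    /* EN bit; gates both counters  */

        extern void     wrmsr_sketch(uint32_t msr, uint64_t val);   /* hypothetical MSR write */
        extern uint64_t rdmsr_sketch(uint32_t msr);                 /* hypothetical MSR read  */

        /* Global enable/disable: flip only the EN bit of EVNTSEL0. */
        void p6_pmu_global_enable_sketch(int enable)
        {
            uint64_t val = rdmsr_sketch(P6_EVNTSEL0_MSR_SKETCH);

            if (enable)
                val |= P6_EVNTSEL_EN_SKETCH;
            else
                val &= ~P6_EVNTSEL_EN_SKETCH;

            wrmsr_sketch(P6_EVNTSEL0_MSR_SKETCH, val);
        }

        /* Per-counter disable: clear every config bit except EN. */
        void p6_counter_disable_sketch(uint32_t evntsel_msr)
        {
            wrmsr_sketch(evntsel_msr, rdmsr_sketch(evntsel_msr) & P6_EVNTSEL_EN_SKETCH);
        }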
  11. 02 Jul 2009, 1 commit
    • perf_counter: Ignore the nmi call frames in the x86-64 backtraces · 0406ca6d
      Committed by Frederic Weisbecker
      Just about every callchain recorded with perf record is filled up
      with the internal perfcounter NMI frames:
      
       perf_callchain
       perf_counter_overflow
       intel_pmu_handle_irq
       perf_counter_nmi_handler
       notifier_call_chain
       atomic_notifier_call_chain
       notify_die
       do_nmi
       nmi
      
      We want to ignore these frames as they're not interesting for
      instrumentation. To solve this, we simply ignore every frame
      from NMI context.
      
      New example of "perf report -s sym -c" after this patch:
      
      9.59%  [k] search_by_key
                   4.88%
                      search_by_key
                      reiserfs_read_locked_inode
                      reiserfs_iget
                      reiserfs_lookup
                      do_lookup
                      __link_path_walk
                      path_walk
                      do_path_lookup
                      user_path_at
                      vfs_fstatat
                      vfs_lstat
                      sys_newlstat
                      system_call_fastpath
                      __lxstat
                      0x406fb1
      
                   3.19%
                      search_by_key
                      search_by_entry_key
                      reiserfs_find_entry
                      reiserfs_lookup
                      do_lookup
                      __link_path_walk
                      path_walk
                      do_path_lookup
                      user_path_at
                      vfs_fstatat
                      vfs_lstat
                      sys_newlstat
                      system_call_fastpath
                      __lxstat
                      0x406fb1
      [...]
      
      For now this patch only solves the problem in x86-64.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <1246474930-6088-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0406ca6d
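      Editor's note: a self-contained sketch of the filtering idea, in which frames
      belonging to the NMI entry path are dropped instead of stored. The context test
      and storage structure are illustrative stand-ins, not the kernel's backtrace
      callbacks:

        #include <stdbool.h>
        #include <stddef.h>

        struct callchain_sketch {
            unsigned long ips[128];
            size_t        nr;
        };

        /* Illustrative stand-in for "is this frame part of the NMI entry path?". */
        extern bool addr_in_nmi_context_sketch(unsigned long addr);

        /* Store a frame only if it does not belong to the perfcounter NMI plumbing. */
        void callchain_store_sketch(struct callchain_sketch *chain, unsigned long addr)
        {
            if (addr_in_nmi_context_sketch(addr))
                return;                       /* not interesting for instrumentation */
            if (chain->nr < sizeof(chain->ips) / sizeof(chain->ips[0]))
                chain->ips[chain->nr++] = addr;
        }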
  12. 29 Jun 2009, 1 commit
  13. 26 Jun 2009, 1 commit
  14. 24 Jun 2009, 3 commits
    • perf_counter, x86: Set global control MSR correctly · c14dab5c
      Committed by Yong Wang
      Previous code assumed that the power-on value of the global
      control MSR has all fixed and general-purpose counters enabled properly.
      
      However, this is not the case for certain Intel processors, such as
      Atom - and it might also be firmware dependent.
      
      Each enable bit in IA32_PERF_GLOBAL_CTRL is AND'ed with the
      enable bits for all privilege levels in the respective IA32_PERFEVTSELx
      or IA32_PERF_FIXED_CTR_CTRL MSRs to start/stop the counting of
      respective counters. Counting is enabled if the AND'ed results is true;
      counting is disabled when the result is false.
      
      The end result is that all fixed counters are always disabled on Atom
      processors because the assumption is just invalid.
      
      Fix this by not initializing the ctrl-mask out of the global MSR,
      but setting it to perf_counter_mask.
      Reported-by: Stephane Eranian <eranian@googlemail.com>
      Signed-off-by: Yong Wang <yong.y.wang@intel.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20090624021324.GA2788@ywang-moblin2.bj.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c14dab5c
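      Editor's note: a hedged sketch of the resulting behaviour: compute the enable mask
      from the number of general-purpose and fixed counters and write it into
      IA32_PERF_GLOBAL_CTRL explicitly, instead of trusting the power-on/firmware value.
      The MSR constant and write helper are assumptions for illustration:

        #include <stdint.h>

        #define MSR_PERF_GLOBAL_CTRL_SKETCH 0x38fu   /* IA32_PERF_GLOBAL_CTRL (assumed) */

        extern void wrmsr_sketch(uint32_t msr, uint64_t val);   /* hypothetical MSR write */

        /*
         * Enable bits 0..N-1 for the general-purpose counters and bits 32..32+M-1
         * for the fixed counters, rather than relying on whatever firmware left there.
         */
        void set_global_ctrl_sketch(unsigned num_generic, unsigned num_fixed)
        {
            uint64_t mask = ((1ULL << num_generic) - 1) |
                            (((1ULL << num_fixed) - 1) << 32);

            wrmsr_sketch(MSR_PERF_GLOBAL_CTRL_SKETCH, mask);
        }

      With the mask derived from the actual counter counts, the fixed counters end up
      enabled on Atom regardless of what firmware left in the MSR.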
    • percpu: clean up percpu variable definitions · 245b2e70
      Committed by Tejun Heo
      Percpu variable definition is about to be updated such that all percpu
      symbols including the static ones must be unique.  Update percpu
      variable definitions accordingly.
      
      * as,cfq: rename ioc_count uniquely
      
      * cpufreq: rename cpu_dbs_info uniquely
      
      * xen: move nesting_count out of xen_evtchn_do_upcall() and rename it
      
      * mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
        rename it
      
      * ipv4,6: rename cookie_scratch uniquely
      
      * x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to
        pmc_irq_entry and nmi_entry to pmc_nmi_entry
      
      * perf_counter: rename disable_count to perf_disable_count
      
      * ftrace: rename test_event_disable to ftrace_test_event_disable
      
      * kmemleak: rename test_pointer to kmemleak_test_pointer
      
      * mce: rename next_interval to mce_next_interval
      
      [ Impact: percpu usage cleanups, no duplicate static percpu var names ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      245b2e70
    • percpu: cleanup percpu array definitions · 204fba4a
      Committed by Tejun Heo
      Currently, the following three different ways to define percpu arrays
      are in use.
      
      1. DEFINE_PER_CPU(elem_type[array_len], array_name);
      2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
      3. DEFINE_PER_CPU(elem_type, array_name)[array_len];
      
      Unify to #1, which correctly separates the roles of the two parameters
      and thus allows more flexibility in the way percpu variables are
      defined.
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm@kvack.org
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: David S. Miller <davem@davemloft.net>
      204fba4a
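      Editor's note: concretely, form #1 keeps the element type and the array length
      together in DEFINE_PER_CPU()'s first argument. A minimal kernel-style sketch (the
      variable name is made up for illustration):

        #include <linux/percpu.h>

        /* Preferred form (#1): element type and array length stay together. */
        static DEFINE_PER_CPU(int[4], sketch_counts);

        /* Older spellings (#2 and #3) that this cleanup converts away from:
         *   static DEFINE_PER_CPU(int, sketch_counts[4]);
         *   static DEFINE_PER_CPU(int, sketch_counts)[4];
         */

        static void bump_sketch_count(int slot)
        {
            /* get_cpu_var()/put_cpu_var() pin us to a CPU while touching its copy. */
            get_cpu_var(sketch_counts)[slot]++;
            put_cpu_var(sketch_counts);
        }

      The older spellings work too, but #1 keeps the "what type" and "what name" roles
      of the two parameters cleanly separated, which is the flexibility argument the
      commit message makes.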
  15. 21 Jun 2009, 1 commit
    • perf_counter, x86: Fix L1-data-Cache-Store-Referencees for AMD · d9f2a5ec
      Committed by Jaswinder Singh Rajput
      Fix AMD's Data Cache Refills from System event.
      
      After this patch :
      
       ./tools/perf/perf stat -e l1d -e l1d-misses -e l1d-write -e l1d-prefetch -e l1d-prefetch-miss -e l1i -e l1i-misses -e l1i-prefetch -e l2 -e l2-misses -e l2-write -e dtlb -e dtlb-misses -e itlb -e itlb-misses -e bpu -e bpu-misses ls /dev/ > /dev/null
      
       Performance counter stats for 'ls /dev/':
      
              2499484  L1-data-Cache-Load-Referencees             (scaled from 3.97%)
                70347  L1-data-Cache-Load-Misses                  (scaled from 7.30%)
                 9360  L1-data-Cache-Store-Referencees            (scaled from 8.64%)
                32804  L1-data-Cache-Prefetch-Referencees         (scaled from 17.72%)
                 7693  L1-data-Cache-Prefetch-Misses              (scaled from 22.97%)
              2180945  L1-instruction-Cache-Load-Referencees      (scaled from 28.48%)
                14518  L1-instruction-Cache-Load-Misses           (scaled from 35.00%)
                 2405  L1-instruction-Cache-Prefetch-Referencees  (scaled from 34.89%)
                71387  L2-Cache-Load-Referencees                  (scaled from 34.94%)
                18732  L2-Cache-Load-Misses                       (scaled from 34.92%)
                79918  L2-Cache-Store-Referencees                 (scaled from 36.02%)
              1295294  Data-TLB-Cache-Load-Referencees            (scaled from 35.99%)
                30896  Data-TLB-Cache-Load-Misses                 (scaled from 33.36%)
              1222030  Instruction-TLB-Cache-Load-Referencees     (scaled from 29.46%)
                  357  Instruction-TLB-Cache-Load-Misses          (scaled from 20.46%)
               530888  Branch-Cache-Load-Referencees              (scaled from 11.48%)
                 8638  Branch-Cache-Load-Misses                   (scaled from 5.09%)
      
          0.011295149  seconds time elapsed.
      
      Earlier it always showed the value 0.
      Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
      LKML-Reference: <1245484165.3102.6.camel@localhost.localdomain>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d9f2a5ec
  16. 19 Jun 2009, 1 commit
    • perf_counter: Make callchain samples extensible · f9188e02
      Committed by Peter Zijlstra
      Before exposing upstream tools to a callchain-samples ABI, tidy it
      up to make it more extensible in the future:
      
      Use markers in the IP chain to denote context, and use the (u64)-1..-4095
      range for these context markers: we use that range for ERR_PTR(), so these
      addresses are unlikely to be mapped.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f9188e02
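      Editor's note: a sketch of how a consumer of the callchain-sample ABI might split
      the IP chain on such markers. The marker values below mirror what the perf ABI
      defines for hypervisor/kernel/user context, but treat the exact numbers as an
      assumption; the point is that anything in the top (u64)-1..-4095 range is a
      marker, not an instruction pointer:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Context markers placed inline in the IP chain (values as assumed here). */
        #define CTX_HV_SKETCH      ((uint64_t)-32)
        #define CTX_KERNEL_SKETCH  ((uint64_t)-128)
        #define CTX_USER_SKETCH    ((uint64_t)-512)
        #define CTX_MAX_SKETCH     ((uint64_t)-4095)   /* anything >= this is a marker */

        /* Print a recorded chain, switching labels whenever a marker is seen. */
        void print_chain_sketch(const uint64_t *ips, size_t nr)
        {
            const char *ctx = "unknown";

            for (size_t i = 0; i < nr; i++) {
                if (ips[i] >= CTX_MAX_SKETCH) {        /* a context marker, not an IP */
                    ctx = (ips[i] == CTX_KERNEL_SKETCH) ? "kernel" :
                          (ips[i] == CTX_USER_SKETCH)   ? "user"   : "other";
                    continue;
                }
                printf("[%s] %#llx\n", ctx, (unsigned long long)ips[i]);
            }
        }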
  17. 18 Jun 2009, 1 commit
  18. 15 Jun 2009, 3 commits
    • perf_counter: x86: Fix call-chain support to use NMI-safe methods · 74193ef0
      Committed by Peter Zijlstra
      __copy_from_user_inatomic() isn't NMI-safe in that it can trigger
      the page fault handler, which is another trap, and its return path
      invokes IRET, which will also close the NMI context.
      
      Therefore use a GUP based approach to copy the stack frames over.
      
      We tried an alternative solution as well: we used a forward ported
      version of Mathieu Desnoyers's "NMI safe INT3 and Page Fault" patch
      that modifies the exception return path to use an open-coded IRET with
      explicit stack unrolling and TF checking.
      
      This didn't work as it interacted with faulting user-space instructions,
      causing them not to restart properly, which corrupts user-space
      registers.
      
      Solving that would probably involve disassembling those instructions
      and backtracing the RIP. But even without that, the change was deemed
      too complex an addition to the already non-trivial x86 entry assembly
      code, so instead we went for this GUP-based method that does a
      software walk of the page tables.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      74193ef0
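      Editor's note: a hedged sketch of the GUP-based approach: pin the user page via a
      software page-table walk (no faulting is allowed in NMI context), map it, and
      memcpy the stack frames out of the mapping. The helpers are illustrative stand-ins
      rather than the exact kernel calls:

        #include <stddef.h>
        #include <string.h>

        struct page_sketch;                                           /* opaque page handle */
        extern struct page_sketch *gup_fast_sketch(unsigned long uaddr); /* hypothetical pin */
        extern void  put_page_sketch(struct page_sketch *pg);
        extern void *kmap_atomic_sketch(struct page_sketch *pg);      /* hypothetical map   */
        extern void  kunmap_atomic_sketch(void *vaddr);

        #define PAGE_SIZE_SKETCH 4096UL

        /* Copy up to one page's worth of user stack without ever taking a fault. */
        long copy_from_user_nmi_sketch(void *dst, unsigned long uaddr, size_t len)
        {
            struct page_sketch *pg = gup_fast_sketch(uaddr);   /* software page-table walk */
            unsigned long offset = uaddr & (PAGE_SIZE_SKETCH - 1);
            void *vaddr;

            if (!pg)
                return 0;                                      /* page not present: give up */
            if (len > PAGE_SIZE_SKETCH - offset)
                len = PAGE_SIZE_SKETCH - offset;               /* stay within the pinned page */

            vaddr = kmap_atomic_sketch(pg);
            memcpy(dst, (char *)vaddr + offset, len);
            kunmap_atomic_sketch(vaddr);
            put_page_sketch(pg);
            return (long)len;
        }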
    • perf_counter, x86: Fix kernel-space call-chains · 038e836e
      Committed by Ingo Molnar
      Kernel-space call-chains were trimmed at the first entry because
      we never processed anything beyond the first stack context.
      
      Allow the backtrace to jump from NMI to IRQ stack then to task stack
      and finally user-space stack.
      
      Also calculate the stack and bp variables correctly so that the
      stack walker does not exit early.
      
      We can get deep traces as a result, visible in perf report -D output:
      
      0x32af0 [0xe0]: PERF_EVENT (IP, 5): 15134: 0xffffffff815225fd period: 1
      ... chain: u:2, k:22, nr:24
      .....  0: 0xffffffff815225fd
      .....  1: 0xffffffff810ac51c
      .....  2: 0xffffffff81018e29
      .....  3: 0xffffffff81523939
      .....  4: 0xffffffff81524b8f
      .....  5: 0xffffffff81524bd9
      .....  6: 0xffffffff8105e498
      .....  7: 0xffffffff8152315a
      .....  8: 0xffffffff81522c3a
      .....  9: 0xffffffff810d9b74
      ..... 10: 0xffffffff810dbeec
      ..... 11: 0xffffffff810dc3fb
      
      This is a 22-entry kernel-space chain.
      
      (We still only record reliable stack entries.)
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      038e836e
    • perf_counter, x86: Fix call-chain walking · 5a6cec3a
      Committed by Ingo Molnar
      Fix the ptregs variant when we hit user-mode tasks.
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5a6cec3a
  19. 13 Jun 2009, 2 commits
  20. 12 Jun 2009, 1 commit
  21. 11 Jun 2009, 4 commits
  22. 10 Jun 2009, 2 commits
  23. 09 Jun 2009, 1 commit