1. 19 Aug 2010, 2 commits
    • perf: Generalize callchain_store() · 70791ce9
      Committed by Frederic Weisbecker
      callchain_store() is the same on every arch; inline it in
      perf_event.h and rename it to perf_callchain_store() to avoid
      any collision.
      
      This removes repetitive code (a simplified sketch of the helper
      follows this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
      70791ce9
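      For context, here is a minimal, self-contained sketch of what the
      generalized helper looks like. The struct layout and the
      PERF_MAX_STACK_DEPTH value are simplified stand-ins for the kernel
      definitions, not the exact code from the commit.

          #include <stdint.h>
          #include <stdio.h>

          /* Simplified stand-ins for the kernel definitions (illustrative only) */
          #define PERF_MAX_STACK_DEPTH 127

          struct perf_callchain_entry {
                  uint64_t nr;
                  uint64_t ip[PERF_MAX_STACK_DEPTH];
          };

          /*
           * The shared helper: append one instruction pointer to the
           * callchain, bounds-checked against the maximum stack depth.
           * Inlining one copy of this in perf_event.h replaces the
           * per-arch callchain_store() clones.
           */
          static inline void perf_callchain_store(struct perf_callchain_entry *entry,
                                                  uint64_t ip)
          {
                  if (entry->nr < PERF_MAX_STACK_DEPTH)
                          entry->ip[entry->nr++] = ip;
          }

          int main(void)
          {
                  struct perf_callchain_entry entry = { .nr = 0 };

                  perf_callchain_store(&entry, 0xffffffff81000000ULL);
                  perf_callchain_store(&entry, 0xffffffff81000010ULL);
                  printf("stored %llu frames\n", (unsigned long long)entry.nr);
                  return 0;
          }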
    • perf: Drop unappropriate tests on arch callchains · c1a65932
      Committed by Frederic Weisbecker
      Drop the TASK_RUNNING test on user tasks for callchains, as this
      check doesn't seem to make any sense.
      
      Also remove the test for !current, which is not supposed to
      happen, and the test on current->pid, as these should be handled
      at the generic level via the exclude_idle attribute (see the
      sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
      c1a65932
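      A hypothetical sketch of the division of labour this commit argues
      for: rather than each arch's callchain code testing current and
      current->pid itself, the generic sampling path filters the idle
      task once, based on the event's exclude_idle attribute. The names
      and struct layouts below are illustrative, not the kernel's actual
      definitions.

          #include <stdbool.h>
          #include <stdio.h>

          /* Illustrative stand-ins; not the kernel's actual definitions */
          struct perf_event_attr {
                  unsigned int exclude_idle : 1;
          };

          struct task {
                  int pid;        /* pid 0 is the idle task */
          };

          /*
           * Generic-level filtering (hypothetical helper): decide once,
           * here, whether a sample from the current task should be
           * dropped, instead of sprinkling !current and current->pid
           * checks across arch callchain code.
           */
          static bool sample_excluded(const struct perf_event_attr *attr,
                                      const struct task *curr)
          {
                  return attr->exclude_idle && curr->pid == 0;
          }

          int main(void)
          {
                  struct perf_event_attr attr = { .exclude_idle = 1 };
                  struct task idle = { .pid = 0 }, init = { .pid = 1 };

                  printf("idle excluded: %d\n", sample_excluded(&attr, &idle));
                  printf("init excluded: %d\n", sample_excluded(&attr, &init));
                  return 0;
          }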
  2. 28 Jan 2010, 1 commit
    • perf: Fix inconsistency between IP and callchain sampling · 339ce1a4
      Committed by Anton Blanchard
      When running perf across all CPUs with backtracing (-a -g), we
      sometimes get samples without associated backtraces:
      
          23.44%         init  [kernel]                     [k] restore
          11.46%         init                       eeba0c  [k] 0x00000000eeba0c
           6.77%      swapper  [kernel]                     [k] .perf_ctx_adjust_freq
           5.73%         init  [kernel]                     [k] .__trace_hcall_entry
           4.69%         perf  libc-2.9.so                  [.] 0x0000000006bb8c
                             |
                             |--11.11%-- 0xfffa941bbbc
      
      It turns out the backtrace code has a check for the idle task
      while the IP sampling code does not. This creates problems when
      profiling an interrupt-heavy workload (in my case 10Gbit
      ethernet), since we get no backtraces for interrupts received
      while idle (i.e. most of the workload).
      
      Right now x86 and sh also check that current is not NULL, which
      should never happen, so remove that check too.
      
      Exclusion of the idle task must be performed in the core code,
      based on the perf_event_attr::exclude_idle attribute (see the
      sketch after this entry).
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      LKML-Reference: <20100118054707.GT12666@kryten>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      339ce1a4
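      To make the inconsistency concrete, a hypothetical before/after
      sketch: the callchain path bailed out for the idle task while the
      IP path sampled unconditionally, so idle-time interrupts produced
      IPs with no backtraces. The fix drops the arch-level checks and
      leaves idle filtering to the core's exclude_idle handling. All
      names below are illustrative, not the kernel's actual code.

          #include <stdbool.h>
          #include <stddef.h>
          #include <stdio.h>

          struct task {
                  int pid;        /* pid 0 is the idle task */
          };

          static bool is_idle_task(const struct task *t)
          {
                  return t->pid == 0;
          }

          /* Before: IP sampling recorded unconditionally... */
          static void sample_ip(const struct task *curr, unsigned long ip)
          {
                  (void)curr;
                  printf("ip sample: %#lx\n", ip);
          }

          /* ...but the callchain path skipped the idle task (and NULL
           * current), so idle-time interrupt samples ended up with an
           * IP and no backtrace. */
          static void sample_callchain_before(const struct task *curr)
          {
                  if (curr == NULL || is_idle_task(curr))
                          return;         /* inconsistent with sample_ip() */
                  printf("callchain recorded\n");
          }

          /* After: no arch-level idle/current checks; both paths behave
           * the same, and exclude_idle is honoured once in core code. */
          static void sample_callchain_after(const struct task *curr)
          {
                  (void)curr;
                  printf("callchain recorded\n");
          }

          int main(void)
          {
                  struct task idle = { .pid = 0 };

                  sample_ip(&idle, 0xc000000000004c80UL);
                  sample_callchain_before(&idle); /* prints nothing: mismatch */
                  sample_callchain_after(&idle);  /* now consistent */
                  return 0;
          }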
  3. 05 Nov 2009, 1 commit