1. 29 January 2010: 23 commits
  2. 28 January 2010: 1 commit
    • perf: Fix inconsistency between IP and callchain sampling · 339ce1a4
      Committed by Anton Blanchard
      When running perf across all cpus with backtracing (-a -g), sometimes we
      get samples without associated backtraces:
      
          23.44%         init  [kernel]                     [k] restore
          11.46%         init                       eeba0c  [k] 0x00000000eeba0c
           6.77%      swapper  [kernel]                     [k] .perf_ctx_adjust_freq
           5.73%         init  [kernel]                     [k] .__trace_hcall_entry
           4.69%         perf  libc-2.9.so                  [.] 0x0000000006bb8c
                             |
                             |--11.11%-- 0xfffa941bbbc
      
      It turns out the backtrace code has a check for the idle task but the IP
      sampling does not. This creates problems when profiling an interrupt-heavy
      workload (in my case 10Gbit ethernet), since we get no backtraces for
      interrupts received while idle (i.e. most of the workload).
      
      Right now x86 and sh check that current is not NULL, which should never
      happen, so remove that check too.
      
      Exclusion of the idle task must instead be performed by the core code,
      based on perf_event_attr::exclude_idle (see the sketch after this entry).
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      LKML-Reference: <20100118054707.GT12666@kryten>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      339ce1a4
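      The perf_event_attr::exclude_idle bit mentioned above is part of the user
      ABI; this patch only changes where the idle filtering is enforced, not how
      it is requested. As a minimal illustrative sketch (user-space code, not
      part of this patch; event type, frequency and CPU choice are arbitrary), a
      profiler asking for IP and callchain samples on one CPU with idle samples
      filtered out would set the bit like this:

          /* Minimal sketch: request IP + callchain samples on CPU 0 with
           * exclude_idle set; the core code is responsible for honouring it. */
          #include <linux/perf_event.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          int main(void)
          {
              struct perf_event_attr attr;

              memset(&attr, 0, sizeof(attr));
              attr.size         = sizeof(attr);
              attr.type         = PERF_TYPE_HARDWARE;
              attr.config       = PERF_COUNT_HW_CPU_CYCLES;
              attr.sample_type  = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;
              attr.freq         = 1;
              attr.sample_freq  = 1000;
              attr.exclude_idle = 1;   /* drop samples that hit the idle task */

              /* pid = -1, cpu = 0: sample all tasks, but only on CPU 0 */
              int fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
              if (fd < 0) {
                  perror("perf_event_open");
                  return 1;
              }
              printf("sampling CPU 0 with exclude_idle=1 (fd %d)\n", fd);
              close(fd);
              return 0;
          }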
  3. 27 January 2010: 6 commits
  4. 21 January 2010: 1 commit
    • perf buildid-cache: Add new command to manage build-id cache · ef12a141
      Committed by Arnaldo Carvalho de Melo
      For now it just has operations to examine a given file, find its
      build-id, and add it to or remove it from the cache.
      
      This is useful, for instance, when binaries are sent together with a
      perf.data file: we can add them to the cache and have the tools find
      them when resolving symbols (a sketch of the build-id lookup itself
      follows this entry).
      
      It'll also manage the size of the cache like 'ccache' does.
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1264008525-29025-1-git-send-email-acme@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ef12a141
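      The build-id the new command looks up is the NT_GNU_BUILD_ID note that
      the toolchain embeds in the binary. As a rough, self-contained sketch of
      that lookup (not the tool's own code; it assumes a native 64-bit ELF and
      does no error hardening), walking the SHT_NOTE sections is enough:

          #include <elf.h>
          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <sys/stat.h>
          #include <unistd.h>

          int main(int argc, char **argv)
          {
              if (argc != 2) {
                  fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
                  return 1;
              }

              int fd = open(argv[1], O_RDONLY);
              struct stat st;
              if (fd < 0 || fstat(fd, &st) < 0) {
                  perror(argv[1]);
                  return 1;
              }

              unsigned char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
              if (map == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }

              Elf64_Ehdr *ehdr = (Elf64_Ehdr *)map;
              Elf64_Shdr *shdr = (Elf64_Shdr *)(map + ehdr->e_shoff);

              for (int i = 0; i < ehdr->e_shnum; i++) {
                  if (shdr[i].sh_type != SHT_NOTE)
                      continue;

                  unsigned char *p   = map + shdr[i].sh_offset;
                  unsigned char *end = p + shdr[i].sh_size;

                  while (p + sizeof(Elf64_Nhdr) <= end) {
                      Elf64_Nhdr *n = (Elf64_Nhdr *)p;
                      unsigned char *name = p + sizeof(*n);
                      unsigned char *desc = name + ((n->n_namesz + 3) & ~3u);

                      /* The GNU build-id note: owner "GNU", type NT_GNU_BUILD_ID */
                      if (n->n_type == NT_GNU_BUILD_ID &&
                          n->n_namesz == 4 && memcmp(name, "GNU", 4) == 0) {
                          for (unsigned int j = 0; j < n->n_descsz; j++)
                              printf("%02x", desc[j]);
                          printf("\n");
                          return 0;
                      }
                      p = desc + ((n->n_descsz + 3) & ~3u);
                  }
              }

              fprintf(stderr, "%s: no build-id note found\n", argv[1]);
              return 1;
          }

      The resulting hex string is what the cached copy is filed under (the
      cache lives under ~/.debug by default in this era), which is why symbols
      can later be resolved even when the original binary is not installed.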
  5. 20 January 2010: 6 commits
  6. 18 January 2010: 1 commit
  7. 17 January 2010: 2 commits
    • perf: Better order flexible and pinned scheduling · 329c0e01
      Committed by Frederic Weisbecker
      When a task gets scheduled in, we don't touch the cpu-bound events,
      so the priority order becomes:
      
      	cpu pinned, cpu flexible, task pinned, task flexible.
      
      So schedule out the cpu flexible groups when a new task context comes
      in, and schedule the groups back in in the correct order:
      
      	task pinned, cpu flexible, task flexible.
      
      Cpu pinned groups don't need to be touched at this time (see the
      ordering sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      329c0e01
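      A compressed sketch of the resulting ordering; sched_out_groups() and
      sched_in_groups() are made-up stand-ins for the kernel's context
      scheduling helpers, used only to show which groups are touched, and in
      which order, when a task context comes in:

          #include <stdio.h>

          /* Hypothetical helpers: report which group class of which context moves. */
          static void sched_out_groups(const char *ctx, const char *kind)
          {
              printf("sched out: %s %s groups\n", ctx, kind);
          }

          static void sched_in_groups(const char *ctx, const char *kind)
          {
              printf("sched in:  %s %s groups\n", ctx, kind);
          }

          /* A task with a perf context is scheduled onto a CPU. */
          static void task_context_sched_in(void)
          {
              /* cpu pinned groups keep running untouched: highest priority */
              sched_out_groups("cpu", "flexible");

              /* bring the rest back in below the cpu pinned groups, in order */
              sched_in_groups("task", "pinned");
              sched_in_groups("cpu", "flexible");
              sched_in_groups("task", "flexible");
          }

          int main(void)
          {
              task_context_sched_in();
              return 0;
          }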
    • perf: Don't schedule out/in pinned events on task tick · 7defb0f8
      Committed by Frederic Weisbecker
      We don't need to schedule pinned events out and back in on the task
      tick, now that pinned and flexible groups can be scheduled separately
      (see the tick sketch after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      7defb0f8
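      Along the same lines, a rough sketch of what the per-task tick is left
      doing once pinned groups are excluded: only the flexible groups are
      cycled so they can be multiplexed, while pinned groups stay on the PMU
      the whole time (helper names are again hypothetical):

          #include <stdio.h>

          /* Hypothetical helpers, as in the previous sketch. */
          static void sched_out_groups(const char *ctx, const char *kind)
          {
              printf("tick: sched out %s %s groups\n", ctx, kind);
          }

          static void sched_in_groups(const char *ctx, const char *kind)
          {
              printf("tick: sched in  %s %s groups\n", ctx, kind);
          }

          /* Per-task timer tick: rotate only the flexible groups. */
          static void task_tick_rotate(void)
          {
              sched_out_groups("task", "flexible");
              /* ...rotate the flexible list here so another group gets a turn... */
              sched_in_groups("task", "flexible");
          }

          int main(void)
          {
              task_tick_rotate();
              return 0;
          }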