1. 11 Aug 2011, 1 commit
    • perf report: Use properly build_id kernel binaries · f57b05ed
      By Jiri Olsa
      If we bring the recorded perf data together with kernel binary from another
      machine using:
      
      	on server A:
      	perf archive
      
      	on server B:
      	tar xjvf perf.data.tar.bz2 -C ~/.debug
      
      the build_id kernel dso is not properly recognized during the "perf report"
      command on server B.
      
      The reason is that build_id dsos are added during session initialization,
      while the kernel maps are created during sample event processing.
      
      The machine__create_kernel_maps function ends up creating a new dso object
      for the kernel, but it does not check whether we already have one added by
      build_id processing.
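      A minimal sketch of the missing lookup-before-create step, as a
      self-contained toy model rather than perf's exact internals (the list
      handling and struct layout are simplified assumptions):
      
      	#include <stdlib.h>
      	#include <string.h>
      
      	/* Toy model: look the dso up before allocating, so a kernel dso
      	 * added earlier by build_id processing is reused, not duplicated. */
      	struct dso {
      		struct dso *next;
      		char name[256];
      	};
      
      	static struct dso *dsos__findnew(struct dso **head, const char *name)
      	{
      		struct dso *d;
      
      		for (d = *head; d; d = d->next)
      			if (strcmp(d->name, name) == 0)
      				return d;	/* reuse the existing dso */
      
      		d = calloc(1, sizeof(*d));
      		if (!d)
      			return NULL;
      		strncpy(d->name, name, sizeof(d->name) - 1);
      		d->next = *head;
      		*head = d;
      		return d;
      	}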
      
      Also the build_id reading ABI quirk added in commit:
      
       - commit b2511481
         perf build-id: Add quirk to deal with perf.data file format breakage
      
      populates the "struct build_id_event::pid" with 0, which
      is later interpreted as DEFAULT_GUEST_KERNEL_ID.
      
      This is not always correct, so it's better to guess the pid
      value based on the "struct build_id_event::header::misc" value.
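      A sketch of such a guess, using the cpumode bits from the perf ABI; the
      PERF_RECORD_MISC_* constants are real ABI names, while the function itself
      and the two ID values are a simplified illustration of the text above:
      
      	#include <linux/perf_event.h>	/* PERF_RECORD_MISC_* */
      
      	#define HOST_KERNEL_ID          (-1)
      	#define DEFAULT_GUEST_KERNEL_ID (0)
      
      	/* Illustrative: choose a pid for a build_id event whose pid field
      	 * the quirk left as 0, based on header.misc. */
      	static int guess_build_id_pid(unsigned short misc, int pid)
      	{
      		switch (misc & PERF_RECORD_MISC_CPUMODE_MASK) {
      		case PERF_RECORD_MISC_KERNEL:
      			return HOST_KERNEL_ID;
      		case PERF_RECORD_MISC_GUEST_KERNEL:
      			return DEFAULT_GUEST_KERNEL_ID;
      		default:
      			return pid;	/* user-space dso: keep the pid */
      		}
      	}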
      
      - Tested with data generated on x86 kernel version v2.6.34
        and reported back on x86_64 current kernel.
      - Not tested for guest kernel case.
      
      Note that the problem remains for PERF_RECORD_MMAP events recorded by a
      perf that does not use the proper pid (HOST_KERNEL_ID/DEFAULT_GUEST_KERNEL_ID).
      They are misinterpreted within the current perf code. There is probably not
      much we can do about that.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20110601194346.GB1934@jolsa.brq.redhat.com
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      f57b05ed
  2. 10 Aug 2011, 4 commits
  3. 09 Aug 2011, 1 commit
  4. 08 Aug 2011, 5 commits
  5. 03 Aug 2011, 1 commit
  6. 26 Jul 2011, 1 commit
  7. 25 Jul 2011, 1 commit
  8. 22 Jul 2011, 5 commits
  9. 21 Jul 2011, 8 commits
  10. 16 Jul 2011, 11 commits
  11. 15 Jul 2011, 2 commits
    • ftrace: Fix regression where ftrace breaks when modules are loaded · f7bc8b61
      By Steven Rostedt
      Enabling the function tracer to trace all functions, then loading a module,
      and then disabling function tracing will cause ftrace to fail.
      
      This can also happen by enabling function tracing on the command line:
      
        ftrace=function
      
      and, during boot, modules are loaded; if you then disable function tracing
      with 'echo nop > current_tracer', you will trigger a bug in ftrace that
      causes it to shut itself down.
      
      The reason is that the new ftrace code keeps ref counts of all ftrace_ops
      that are registered for tracing. When one or more ftrace_ops are registered,
      all the records that represent the functions the ftrace_ops will trace have
      a ref count incremented. When the code modification runs, a function whose
      ref count is non-zero is enabled for tracing, and a function whose ref count
      is zero is disabled.
      
      To make sure the accounting was working, FTRACE_WARN_ON()s were added to
      the ref count updates.
      
      If the ref count hits its max (more than 2^30 ftrace_ops added) or goes
      below zero, a FTRACE_WARN_ON() is triggered, which disables all code
      modification.
      
      Since it is common for an ftrace_ops to trace all functions in the kernel,
      instead of creating more than 20,000 hash items for such an ftrace_ops, the
      hash count is simply set to zero, which means the ftrace_ops traces all
      functions. This is where the issues arise.
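      The trace-all convention can be modeled in a couple of lines (a toy model,
      not the kernel's hash implementation):
      
      	#include <stdbool.h>
      
      	/* Toy model: an empty filter hash stands for "trace every
      	 * function", avoiding >20,000 individual hash items. */
      	struct filter_hash {
      		unsigned int count;	/* explicitly listed functions */
      	};
      
      	static bool traces_all_functions(const struct filter_hash *hash)
      	{
      		return hash->count == 0;
      	}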
      
      If you enable function tracing to trace all functions and then add a module,
      the module's function records do not get their ref counts updated. When the
      function tracer is disabled, the ref count of every function record is
      decremented. Since the module's records never had their ref counts
      incremented, they go below zero and the FTRACE_WARN_ON() is triggered.
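      The failure mode is easy to model in plain C (a toy model continuing the
      sketch above, not the kernel's code): a module record whose count was never
      incremented gets decremented when tracing is disabled.
      
      	#include <stdio.h>
      
      	#define REF_MAX (1UL << 30)	/* matches the "> 2^30" limit above */
      
      	struct func_record {
      		unsigned long refs;	/* ftrace_ops tracing this function */
      	};
      
      	/* Toy model of the ref count update; returns -1 where the kernel
      	 * would fire FTRACE_WARN_ON() and shut code modification down. */
      	static int record_ref_update(struct func_record *rec, int inc)
      	{
      		if (inc) {
      			if (rec->refs >= REF_MAX)
      				return -1;	/* too many ftrace_ops */
      			rec->refs++;
      			return 0;
      		}
      		if (rec->refs == 0)
      			return -1;		/* underflow: the module bug */
      		rec->refs--;
      		return 0;
      	}
      
      	int main(void)
      	{
      		/* A module function loaded while a trace-all ftrace_ops was
      		 * active: its count was never incremented... */
      		struct func_record mod_func = { .refs = 0 };
      
      		/* ...so disabling the tracer decrements it below zero. */
      		if (record_ref_update(&mod_func, 0) < 0)
      			printf("FTRACE_WARN_ON: ref count went below zero\n");
      		return 0;
      	}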
      
      The solution to this is rather simple. When modules are loaded and their
      functions are added to the ftrace pool, check whether any registered
      ftrace_ops trace all functions, and for those, update the ref counts of the
      module's function records.
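      The shape of that fix, continuing the toy model from the sketches above
      (illustrative names, not the kernel's exact code):
      
      	/* On module load: give the new records the increments they missed
      	 * from ftrace_ops already registered to trace everything. */
      	static void module_records_account(struct func_record *recs, int nr_recs,
      					   const struct filter_hash *hashes,
      					   int nr_ops)
      	{
      		int i, r;
      
      		for (i = 0; i < nr_ops; i++) {
      			if (!traces_all_functions(&hashes[i]))
      				continue;	/* filtered ops handled elsewhere */
      			for (r = 0; r < nr_recs; r++)
      				record_ref_update(&recs[r], 1);
      		}
      	}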
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f7bc8b61
    • tracing/kprobes: Rename probe_* to trace_probe_* · 7143f168
      By Masami Hiramatsu
      Rename probe_* to trace_probe_* to avoid namespace conflicts. This also
      fixes the improper names find_probe_event() and cleanup_all_probes(),
      renaming them to find_trace_probe() and release_all_trace_probes(),
      respectively.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/20110627072636.6528.60374.stgit@fedora15
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7143f168