1. 11 May 2010, 2 commits
    • perf hist: Calculate max_sym name len and nr_entries · fefb0b94
      Arnaldo Carvalho de Melo authored
      Better done when we are adding entries, be it initially or when we're
      re-sorting the histograms.
      
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      fefb0b94
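      A minimal sketch of the bookkeeping described above, with hypothetical
      names (not the actual perf code): the running maximum symbol-name
      length and the entry count are updated while entries are added or
      re-sorted, so no extra traversal is needed later just to find them.

        #include <string.h>

        struct hist_entry {
            const char *sym_name;
            /* rb_node, period, etc. omitted */
        };

        struct hists {
            unsigned int nr_entries;
            unsigned int max_sym_namelen;
            /* rb tree of entries omitted */
        };

        /* Called from the insertion/re-sort path for each entry. */
        static void hists__inc_stats(struct hists *hists, struct hist_entry *he)
        {
            size_t len = strlen(he->sym_name);

            if (len > hists->max_sym_namelen)
                hists->max_sym_namelen = len;
            hists->nr_entries++;
        }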
    • perf hist: Introduce hists class and move lots of methods to it · 1c02c4d2
      Arnaldo Carvalho de Melo authored
      In cbbc79a5 we added support for multiple events by introducing a new
      "event_stat_id" struct and then made several perf_session methods
      receive a pointer to it instead of a pointer to perf_session, and kept
      the event_stats and hists rb_tree in perf_session.
      
      While working on the new newt based browser, I realised that it would be
      better to introduce a new class, "hists" (short for "histograms"),
      renaming the "event_stat_id" struct and the perf_session methods that
      were really "hists" methods, as they manipulate only struct hists
      members, not touching anything in the other perf_session members.
      
      Other optimizations, such as calculating the maximum length of a
      symbol name present in a hists instance, will become possible as we
      add them, avoiding a re-traversal just to find that information.
      
      The rationale for the name "hists" replacing "event_stat_id" is that
      we may have multiple sets of hists for the same event_stat_id, as, for
      instance, the 'perf diff' tool has, so the event stat id is not what
      characterizes this struct and the functions that manipulate it.
      
      Cc: Eric B Munson <ebmunson@us.ibm.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      1c02c4d2
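      A hedged sketch of the refactoring pattern described in the commit
      above, with hypothetical signatures: methods that only touch histogram
      state take a struct hists pointer instead of the whole perf_session,
      so the same code can serve several hists instances (as 'perf diff'
      needs).

        typedef unsigned long long u64;

        struct perf_session;      /* opaque here */
        struct hists;             /* histogram container: entries + stats */
        struct addr_location;     /* resolved sample location */

        /* Before: tied to the session even though only hist state is used. */
        int perf_session__add_hist_entry(struct perf_session *session,
                                         struct addr_location *al, u64 period);

        /* After: expressed as a "hists" method; callers keep one struct
         * hists per event (or per data file being diffed) and reuse it. */
        int hists__add_entry(struct hists *hists, struct addr_location *al,
                             u64 period);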
  2. 10 May 2010, 9 commits
  3. 09 May 2010, 3 commits
    • perf/live-mode: Handle payload-less events · 794e43b5
      Tom Zanussi authored
      Some events, such as the PERF_RECORD_FINISHED_ROUND event, consist of
      only an event header and no data.  In this case, a 0-length payload
      will be read, and the 0 return value will be wrongly interpreted as an
      'unexpected end of event stream'.
      
      This patch allows for proper handling of data-less events by skipping
      0-length reads.
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      LKML-Reference: <1273038527.6383.51.camel@tropicana>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      794e43b5
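      A sketch of the handling described above, assuming a simplified reader
      (helper name and buffer handling are illustrative, not the actual perf
      live-mode code):

        #include <stddef.h>
        #include <unistd.h>

        /* Same shape as the kernel's perf event header. */
        struct perf_event_header {
            unsigned int   type;
            unsigned short misc;
            unsigned short size;    /* total size, header included */
        };

        /* Read one event: header first, then the payload. Header-only
         * events have no payload, so a 0-byte payload must not be taken
         * as "unexpected end of event stream". */
        static ssize_t read_one_event(int fd, void *buf, size_t bufsz)
        {
            struct perf_event_header *hdr = buf;
            size_t payload;

            if (read(fd, hdr, sizeof(*hdr)) != (ssize_t)sizeof(*hdr))
                return -1;                  /* real EOF or error */

            payload = hdr->size - sizeof(*hdr);
            if (payload == 0)
                return sizeof(*hdr);        /* e.g. FINISHED_ROUND */

            if (payload > bufsz - sizeof(*hdr))
                return -1;
            if (read(fd, hdr + 1, payload) != (ssize_t)payload)
                return -1;                  /* truncated payload */

            return hdr->size;
        }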
    • perf: Provide a new deterministic events reordering algorithm · d6b17beb
      Frederic Weisbecker authored
      The current events reordering algorithm is based on a heuristic that
      breaks down once we deal with a very fast flow of events.
      
      Indeed, the time-period-based flushing is no longer suitable in the
      following case, assuming we have a flush period of two seconds.
      
          CPU 0           |        CPU 1
                          |
        cnt1 timestamps   |      cnt1 timestamps
                          |
          0               |         0
          1               |         1
          2               |         2
          3               |         3
          [...]           |        [...]
          4 seconds later
      
      If we spend too much time reading the buffers (when there are a lot of
      events to record in each buffer, or a lot of CPU buffers to read), in
      the next pass the CPU 0 buffer could contain a slice of several
      seconds of events. We'll read them all and notice we've reached the
      period to flush. In the above example we flush the first half of the
      CPU 0 buffer, then we read the CPU 1 buffer, where we have events that
      fall inside the flushed slice, and the reordering fails.
      
      It's simple to reproduce with:
      
      	perf lock record perf bench sched messaging
      
      To solve this, we use a new solution that doesn't rely on a heuristic
      time slice period anymore but on a deterministic basis derived from
      how perf record does its job.
      
      perf record saves the buffers through passes. A pass is a tour over
      every buffer from every CPU. This is done in order: for each CPU we
      read the buffers of every counter. So the more buffers we visit, the
      later the timestamps of their events will be.
      
      When perf record finishes a pass it records a
      PERF_RECORD_FINISHED_ROUND pseudo event.
      We record the max timestamp t found in pass n. Assuming these
      timestamps are monotonic across cpus, we know that if a buffer
      still has events with timestamps below t, they will all be available
      and read in pass n + 1.
      Hence when we start to read pass n + 2, we can safely flush every
      event with a timestamp below t.
      
            ============ PASS n =================
               CPU 0         |   CPU 1
                             |
            cnt1 timestamps  |   cnt2 timestamps
                  1          |         2
                  2          |         3
                  -          |         4  <--- max recorded
      
            ============ PASS n + 1 ==============
               CPU 0         |   CPU 1
                             |
            cnt1 timestamps  |   cnt2 timestamps
                  3          |         5
                  4          |         6
                  5          |         7 <---- max recorded
      
              Flush every event below timestamp 4
      
            ============ PASS n + 2 ==============
               CPU 0         |   CPU 1
                             |
            cnt1 timestamps  |   cnt2 timestamps
                  6          |         8
                  7          |         9
                  -          |         10
      
              Flush every event below timestamp 7
              etc...
      
      It also works on perf.data versions that don't have
      PERF_RECORD_FINISHED_ROUND pseudo events. The difference is that
      the events will only be flushed at the end of the perf.data
      processing. It will then consume more memory and scale less well with
      large perf.data files.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      d6b17beb
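      A hedged sketch of the round-based flushing described above (the data
      structures and helper names are hypothetical, not the actual perf
      session code): queued events older than the maximum timestamp of the
      previous finished round are safe to deliver in order.

        #include <stdint.h>
        #include <stdlib.h>

        struct queued_event {
            uint64_t timestamp;
            struct queued_event *next;   /* kept sorted, oldest first */
            /* payload omitted */
        };

        struct ordered_events {
            struct queued_event *head;   /* timestamp-ordered queue */
            uint64_t max_timestamp;      /* max seen in the current round */
            uint64_t next_flush;         /* max of the previous round */
        };

        /* Deliver (here: just free) everything strictly older than limit;
         * newer events may still be racing with unread per-CPU buffers. */
        static void flush_until(struct ordered_events *oe, uint64_t limit)
        {
            while (oe->head && oe->head->timestamp < limit) {
                struct queued_event *ev = oe->head;
                oe->head = ev->next;
                /* deliver_event(ev); */
                free(ev);
            }
        }

        /* Called on each PERF_RECORD_FINISHED_ROUND pseudo event: events
         * below the previous round's max timestamp are complete by now. */
        static void finished_round(struct ordered_events *oe)
        {
            flush_until(oe, oe->next_flush);
            oe->next_flush = oe->max_timestamp;
        }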
    • perf: Introduce a new "round of buffers read" pseudo event · 98402807
      Frederic Weisbecker authored
      In order to provide a more robust and deterministic reordering
      algorithm, we need to know when we reach a point where we just
      did a pass over every counter buffer, reading everything
      they had.
      
      This patch introduces a new PERF_RECORD_FINISHED_ROUND pseudo event
      that consists only of an event header and doesn't need to contain
      anything else.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      98402807
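      A small sketch of emitting such a header-only pseudo event after each
      full pass over the buffers; the numeric event type is an assumption
      (the real value lives in the perf headers), and the writer is
      simplified:

        #include <unistd.h>

        struct perf_event_header {
            unsigned int   type;
            unsigned short misc;
            unsigned short size;
        };

        #define PERF_RECORD_FINISHED_ROUND 68   /* assumed value */

        /* Written once per round of buffer reads; no payload follows. */
        static int write_finished_round(int output_fd)
        {
            struct perf_event_header hdr = {
                .type = PERF_RECORD_FINISHED_ROUND,
                .misc = 0,
                .size = sizeof(hdr),
            };

            return write(output_fd, &hdr, sizeof(hdr)) == sizeof(hdr) ? 0 : -1;
        }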
  4. 08 May 2010, 1 commit
    • perf list: Improve the raw hw event descriptor documentation · 1cf4a063
      Arnaldo Carvalho de Melo authored
      It was x86 specific and incomplete at that; improve the situation by
      making it clear where the provided example applies and by adding the
      URLs for the Intel and AMD manuals where this is discussed in depth.
      Acked-by: Robert Richter <robert.richter@amd.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Reported-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      1cf4a063
  5. 07 May 2010, 1 commit
    • perf, x86: Improve the PEBS ABI · ab608344
      Peter Zijlstra authored
      Rename perf_event_attr::precise to perf_event_attr::precise_ip and
      widen it to 2 bits. This new field describes the required precision of
      the PERF_SAMPLE_IP field:
      
        0 - SAMPLE_IP can have arbitrary skid
        1 - SAMPLE_IP must have constant skid
        2 - SAMPLE_IP requested to have 0 skid
        3 - SAMPLE_IP must have 0 skid
      
      And modify the Intel PEBS code accordingly. The PEBS implementation
      now supports up to precise_ip == 2, where we perform the IP fixup.
      
      Also s/PERF_RECORD_MISC_EXACT/&_IP/ to clarify its meaning, this bit
      should be set for each PERF_SAMPLE_IP field known to match the actual
      instruction triggering the event.
      
      This new scheme allows for a PEBS mode that uses the buffer for more
      than a single event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ab608344
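      A usage sketch of the widened field from user space, assuming a kernel
      with this ABI and leaving error handling out:

        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <linux/perf_event.h>

        /* Open a cycles counter on the current task, asking for the most
         * precise IP the PMU can deliver (PEBS fixup on Intel):
         *   0 - arbitrary skid, 1 - constant skid,
         *   2 - requested 0 skid, 3 - must have 0 skid. */
        static int open_precise_cycles(void)
        {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;
            attr.sample_period = 100000;
            attr.sample_type = PERF_SAMPLE_IP;
            attr.precise_ip = 2;     /* like "cycles:pp" in the perf tools */

            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        }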
  6. 05 May 2010, 3 commits
  7. 04 May 2010, 1 commit
    • perf: Fix performance issue with perf report · 02bf60aa
      Anton Blanchard authored
      On a large machine we spend a lot of time in perf_header__find_attr when
      running perf report.
      
      If we are parsing a file without PERF_SAMPLE_ID then for each sample we call
      perf_header__find_attr and loop through all counter IDs, never finding a match.
      As the machine gets larger there are more per cpu counters and we spend an
      awful lot of time in there.
      
      The patch below initialises each sample id to -1ULL and checks for this in
      perf_header__find_attr. We may need to do something more intelligent eventually
      (e.g. a hash lookup from counter id to attr) but this at least fixes the most
      common usage of perf report.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Eric B Munson <ebmunson@us.ibm.com>
      Acked-by: Eric B Munson <ebmunson@us.ibm.com>
      LKML-Reference: <20100504111915.GB14636@kryten>
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      02bf60aa
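      A sketch of the idea with hypothetical types (the real code keys this
      off struct perf_header and its attribute table): samples recorded
      without PERF_SAMPLE_ID get an impossible id, and the lookup bails out
      early instead of scanning every per-CPU counter id per sample.

        #include <stdint.h>
        #include <stddef.h>

        struct sample     { uint64_t id; /* ... */ };
        struct attr_entry { uint64_t ids[64]; size_t nr_ids; /* ... */ };

        static void sample__init(struct sample *s)
        {
            s->id = -1ULL;               /* "no id recorded" marker */
        }

        static struct attr_entry *find_attr_by_id(struct attr_entry *attrs,
                                                  size_t nr_attrs, uint64_t id)
        {
            if (id == -1ULL)
                return NULL;             /* nothing to search for */

            for (size_t i = 0; i < nr_attrs; i++)
                for (size_t j = 0; j < attrs[i].nr_ids; j++)
                    if (attrs[i].ids[j] == id)
                        return &attrs[i];
            return NULL;
        }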
  8. 03 May 2010, 2 commits
    • perf: record TRACE_INFO only if using tracepoints and SAMPLE_RAW · 63e0c771
      Tom Zanussi authored
      The current perf code implicitly assumes SAMPLE_RAW means tracepoints
      are being used, but doesn't check for that.  It happily records the
      TRACE_INFO even if SAMPLE_RAW is used without tracepoints, but when the
      perf data is read it won't go any further when it finds TRACE_INFO but
      no tracepoints, and displays misleading errors.
      
      This adds a check for both in perf-record, and won't record TRACE_INFO
      unless both are true.  This at least allows perf report -D to dump raw
      events, and avoids triggering a misleading error condition in perf
      trace.  It doesn't actually enable the non-tracepoint raw events to be
      displayed in perf trace, since perf trace currently only deals with
      tracepoint events.
      
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1272865861.7932.16.camel@tropicana>
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      63e0c771
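      A minimal sketch of the guard described above, with schematic types
      and constants spelled out inline (values as in the perf ABI, structure
      hypothetical):

        #include <stdbool.h>
        #include <stdint.h>

        #define PERF_TYPE_TRACEPOINT 2
        #define PERF_SAMPLE_RAW      (1ULL << 10)

        struct counter {
            uint32_t type;          /* e.g. PERF_TYPE_TRACEPOINT */
            uint64_t sample_type;   /* e.g. PERF_SAMPLE_RAW bit */
        };

        /* Record TRACE_INFO only when tracepoints are in use AND raw
         * samples were requested; otherwise readers find TRACE_INFO with
         * no tracepoints and print misleading errors. */
        static bool should_record_trace_info(const struct counter *c, int nr)
        {
            bool have_tracepoints = false, have_raw = false;

            for (int i = 0; i < nr; i++) {
                if (c[i].type == PERF_TYPE_TRACEPOINT)
                    have_tracepoints = true;
                if (c[i].sample_type & PERF_SAMPLE_RAW)
                    have_raw = true;
            }
            return have_tracepoints && have_raw;
        }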
    • perf: add perf-inject builtin · 454c407e
      Tom Zanussi authored
      Currently, perf 'live mode' writes build-ids at the end of the
      session, which isn't actually useful for processing live mode events.
      
      What would be better would be to have the build-ids sent before any of
      the samples that reference them, which can be done by processing the
      event stream and retrieving the build-ids on the first hit.  Doing
      that in perf-record itself, however, is off-limits.
      
      This patch introduces perf-inject, which does the same job while
      leaving perf-record untouched.  Normal mode perf still records the
      build-ids at the end of the session as it should, but for live mode,
      perf-inject can be injected in between the record and report steps
      e.g.:
      
      perf record -o - ./hackbench 10 | perf inject -v -b | perf report -v -i -
      
      perf-inject reads a perf-record event stream and repipes it to stdout.
      At any point the processing code can inject other events into the
      event stream - in this case build-ids (-b option) are read and
      injected as needed into the event stream.
      
      Build-ids are just the first user of perf-inject - potentially
      anything that needs userspace processing to augment the trace stream
      with additional information could make use of this facility.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1272696080-16435-3-git-send-email-tzanussi@gmail.com>
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      454c407e
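      A simplified sketch of the repipe idea (it ignores the perf.data file
      header and uses hypothetical hooks, so it is an illustration of the
      flow rather than the actual builtin): every event read from stdin is
      forwarded to stdout, and extra events may be injected in front of it.

        #include <stdio.h>

        struct perf_event_header {
            unsigned int   type;
            unsigned short misc;
            unsigned short size;
        };

        /* e.g. on the first sample hitting a DSO, synthesize and write a
         * build-id event to stdout before the sample itself. */
        static void maybe_inject_before(const struct perf_event_header *ev)
        {
            (void)ev;
        }

        int main(void)
        {
            unsigned char buf[65536];
            struct perf_event_header *hdr = (void *)buf;

            while (fread(hdr, sizeof(*hdr), 1, stdin) == 1) {
                size_t payload = hdr->size - sizeof(*hdr);

                if (payload && fread(hdr + 1, payload, 1, stdin) != 1)
                    break;                       /* truncated stream */
                maybe_inject_before(hdr);
                fwrite(hdr, hdr->size, 1, stdout);   /* repipe */
            }
            return 0;
        }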
  9. 02 May 2010, 2 commits
  10. 01 May 2010, 1 commit
    • perf: Fix warning while reading ring buffer headers · d00a47cc
      Frederic Weisbecker authored
      commit e9e94e3b
      "perf trace: Ignore "overwrite" field if present in
      /events/header_page" makes perf trace emit spurious warnings
      about unexpected tokens read:
      
      	Warning: Error: expected type 6 but read 4
      
      That change tries to handle the overcommit field in the header_page
      file whether this field is present or not.
      
      The problem is that if this field is not present, we try to find it
      and give up in the middle of the line when we realize we are actually
      dealing with another field, which is the "data" one. And this failure
      abandons the file pointer in the middle of the "data" description
      line:
      
      	field: u64 timestamp;	offset:0;	size:8;	signed:0;
      	field: local_t commit;	offset:8;	size:8;	signed:1;
      	field: char data;	offset:16;	size:4080;	signed:1;
                            ^^^
                            Here
      
      What happens next is that we want to read this line to parse the data
      field, but we fail because the pointer is not at the beginning of the
      line.
      
      We could probably fix that by rewinding the pointer. But in fact we
      don't care much about these headers, which only concern the ftrace
      ring buffer. We don't use them from perf.
      
      Just skip this part of perf.data, but don't remove it from recording,
      to stay compatible with older perf.data files.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      d00a47cc
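      A sketch of "skip instead of parse" for a section the reader does not
      use; the layout assumed here (a size followed by that many bytes) is
      an illustration, not the exact perf.data trace-info format:

        #include <stdint.h>
        #include <unistd.h>

        /* Seek past a recorded section without interpreting it, so the
         * file pointer ends up cleanly at the start of the next section. */
        static int skip_section(int fd)
        {
            uint64_t size;

            if (read(fd, &size, sizeof(size)) != (ssize_t)sizeof(size))
                return -1;
            if (lseek(fd, (off_t)size, SEEK_CUR) == (off_t)-1)
                return -1;
            return 0;
        }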
  11. 30 Apr 2010, 1 commit
  12. 28 Apr 2010, 4 commits
    • perf machines: Make the machines class adopt the dsos__fprintf methods · cbf69680
      Arnaldo Carvalho de Melo authored
      Now those methods don't operate on a global list of dsos, but on lists
      of machines, so make this clear by renaming the functions.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      cbf69680
    • perf machine: Adopt some map_groups functions · d28c6223
      Arnaldo Carvalho de Melo authored
      Those functions operated on members now grouped in 'struct machine', so
      move those methods to this new class.
      
      The changes made to 'perf probe' show that, using this abstraction,
      inserting probes on guests almost got supported for free.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d28c6223
    • perf machine: Pass buffer size to machine__mmap_name · 48ea8f54
      Arnaldo Carvalho de Melo authored
      Don't blindly assume that the size of the buffer is enough; use
      snprintf.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      48ea8f54
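      A sketch of the safer signature: the caller passes the destination
      buffer and its size and the function formats with snprintf, so an
      undersized buffer is truncated rather than overrun. The struct layout
      and the exact name strings are assumptions for illustration.

        #include <stdio.h>

        struct machine { int pid; };   /* hypothetical, minimal */

        static char *machine__mmap_name(struct machine *m, char *bf, size_t size)
        {
            if (m->pid == 0)           /* host kernel */
                snprintf(bf, size, "[kernel.kallsyms]");
            else                       /* guest kernel, keyed by pid */
                snprintf(bf, size, "[guest.kernel.kallsyms.%d]", m->pid);
            return bf;
        }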
    • perf tools: Rename "kernel_info" to "machine" · 23346f21
      Arnaldo Carvalho de Melo authored
      struct kernel_info and kerninfo__ are too vague; what they really
      describe are machines, virtual ones or hosts.
      
      There are more changes to introduce helpers to shorten function calls
      and to make clearer what is really being done, but I left that for
      subsequent patches.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      23346f21
  13. 27 Apr 2010, 4 commits
  14. 24 Apr 2010, 2 commits
    • perf: Generalize perf lock's sample event reordering to the session layer · c61e52ee
      Frederic Weisbecker authored
      The sample events recorded by perf record are not time-ordered
      because we have one buffer per CPU for each event (even demultiplexed
      per task/per CPU for task-bound events). But when we read trace events
      we want them to be ordered by time because many state machines are
      involved.
      
      There are currently two ways perf tools deal with that:
      
      - use -M to multiplex all buffers (perf sched, perf kmem)
        But this creates a lot of contention on SMP machines at
        record time.
      
      - use post-processing time reordering (perf timechart, perf lock)
        The reordering used by timechart is simple but doesn't scale well
        with a huge flow of events, in terms of performance and memory use
        (unusable with perf lock for example).
        Perf lock has its own sample reordering that flushes its memory
        use on a regular basis and that uses sorting based on the
        previous event queued (a new event to be queued is close to the
        previous one most of the time).
      
      This patch proposes to export perf lock's sample reordering facility
      to the session layer that reads the events. So if a tool wants to
      get ordered sample events, it needs to set its
      struct perf_event_ops::ordered_samples to true and that's it.
      
      This prepares tracing-based perf tools to get rid of the need to
      use buffer multiplexing (-M) or to implement their own
      reordering.
      
      Also lower the flush period to 2 as it's sufficient already.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      c61e52ee
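      A hedged sketch of the opt-in, with a hypothetical layout for the ops
      table: a tool asks the session layer for time-ordered delivery by
      setting one flag, and the session queues and flushes samples instead
      of handing them over as they are read.

        #include <stdbool.h>

        struct sample_event;
        struct perf_session;

        struct perf_event_ops {
            int  (*sample)(struct sample_event *ev, struct perf_session *s);
            /* other handlers omitted */
            bool ordered_samples;   /* true => session reorders by time */
        };

        static int process_sample(struct sample_event *ev, struct perf_session *s)
        {
            /* tool-specific handling; samples arrive in timestamp order */
            (void)ev; (void)s;
            return 0;
        }

        /* Handed to the session when processing a perf.data file. */
        static struct perf_event_ops my_tool_ops = {
            .sample          = process_sample,
            .ordered_samples = true,
        };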
    • perf: Fix initialization bug in parse_single_tracepoint_event() · 5710fcad
      Stephane Eranian authored
      parse_single_tracepoint_event() was setting some attributes
      before it validated that the event was indeed a tracepoint event. This
      caused problems with other initialization routines, like the one in
      builtin-top.c, which do not set sample_period if it is not 0.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <4bcf232b.698fd80a.6fbe.ffffb737@mx.google.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      5710fcad
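      A schematic sketch of the ordering fix (minimal attr and a toy parser,
      not the actual util/parse-events.c code): the caller's attr is only
      touched once the string is known to name a tracepoint, so later code
      that checks for a still-zero sample_period sees untouched attributes
      for non-tracepoint events.

        #include <stdint.h>
        #include <string.h>

        #define TYPE_TRACEPOINT 2

        struct event_attr {
            uint32_t type;
            uint64_t config;
            uint64_t sample_period;
        };

        static int parse_single_tracepoint(const char *str, struct event_attr *attr)
        {
            const char *colon = strchr(str, ':');

            if (!colon || colon == str || colon[1] == '\0')
                return -1;     /* not "subsys:event": leave attr alone */

            /* validated: now it is safe to fill in tracepoint attributes */
            attr->type = TYPE_TRACEPOINT;
            attr->config = 0;  /* id looked up from debugfs in real code */
            return 0;
        }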
  15. 22 Apr 2010, 1 commit
  16. 21 Apr 2010, 1 commit
    • perf: Fix perf probe build error · 6eca8cc3
      Frederic Weisbecker authored
      When we run in dry run mode, we want write_kprobe_trace_event to
      succeed in writing the event. Let's initialize ret to 0.
      
      Fixes the following build error:
      	util/probe-event.c:1266: warning: 'ret' may be used uninitialized in this function
      	util/probe-event.c:1266: note: 'ret' was declared here
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <1271808065-25290-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6eca8cc3
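      A toy illustration of why the initialization matters (schematic, not
      the actual util/probe-event.c code): with dry_run set the write is
      skipped, so ret must start at 0 or the function returns an
      uninitialized value and gcc's warning breaks -Werror builds.

        #include <stdbool.h>
        #include <unistd.h>

        static bool dry_run;

        static int write_trace_event(int fd, const char *buf, size_t len)
        {
            int ret = 0;    /* the fix: dry runs report success */

            if (!dry_run)
                ret = write(fd, buf, len);

            return ret < 0 ? -1 : 0;
        }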
  17. 19 Apr 2010, 1 commit
  18. 15 Apr 2010, 1 commit
    • perf: Make the trace events sample period default to 1 · f9212819
      Frederic Weisbecker authored
      Trace events are mostly used for tracing and thus should not be
      lost when possible. As opposed to hardware events, which really
      need to trigger after a given sample period, trace events mostly
      need to trigger every time.
      
      It is a frustrating experience to trace with perf and realize we
      lost a lot of events because we forgot the "-c 1" option.
      
      So default sample_period to 1 for trace events, but let the user
      override it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      f9212819