1. 24 May 2012, 1 commit
  2. 23 May 2012, 4 commits
    • Revert "sched, perf: Use a single callback into the scheduler" · ab0cce56
      Jiri Olsa committed
      This reverts commit cb04ff9a ("sched, perf: Use a single
      callback into the scheduler").
      
      Before this change was introduced, the process switch worked
      like this (with respect to perf event scheduling):
      
           schedule (prev, next)
             - schedule out all perf events for prev
             - switch to next
             - schedule in all perf events for current (next)
      
      After the commit, the process switch looks like:
      
           schedule (prev, next)
             - schedule out all perf events for prev
             - schedule in all perf events for (next)
             - switch to next
      
      The problem is that after we schedule perf events in, the PMU
      is enabled and we can receive events even before we make the
      switch to next, so "current" is still the prev process (event
      SAMPLE data is filled in based on the value of the "current"
      process).
      
      That's exactly what we see in the test__PERF_RECORD test. We receive
      SAMPLEs with the PID of the process that our tracee is scheduled
      from.
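      For reference, the two-hook ordering that the revert restores can be
      sketched as below (a simplified illustration using the
      perf_event_task_sched_out/in hooks of that kernel; the surrounding
      context-switch code is not the real scheduler source):

           #include <linux/sched.h>
           #include <linux/perf_event.h>

           /* sketch of the restored ordering around the task switch */
           static void sched_perf_hooks_sketch(struct task_struct *prev,
                                               struct task_struct *next)
           {
                   /* schedule out all perf events for prev; the PMU goes quiet */
                   perf_event_task_sched_out(prev, next);

                   /* switch_to(prev, next, prev); after this, "current" == next */

                   /* only now schedule in all perf events for the new current */
                   perf_event_task_sched_in(prev, next);
           }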
      
      Discussed with Peter Zijlstra:
      
       > Bah!, yeah I guess reverting is the right thing for now. Sad
       > though.
       >
       > So by having the two hooks we have a black-spot between them
       > where we receive no events at all, this black-spot covers the
       > hand-over of current and we thus don't receive the 'wrong'
       > events.
       >
       > I rather liked we could do away with both that black-spot and
       > clean up the code a little, but apparently people rely on it.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: acme@redhat.com
      Cc: paulus@samba.org
      Cc: cjashfor@linux.vnet.ibm.com
      Cc: fweisbec@gmail.com
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/20120523111302.GC1638@m.brq.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ab0cce56
    • perf evlist: Show event attribute details · 26252ea6
      Arnaldo Carvalho de Melo committed
      There was no easy way to see the sampling frequency used, and with the
      change of the default, we had better provide one.
      
      [root@sandy linux]# perf evlist -F
      cycles: sample_freq=4000
      [root@sandy linux]# perf evlist -v
      cycles: sample_freq=4000, size: 80, sample_type: 391, read_format: 7, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, sample_id_all: 1, exclude_guest: 1
      [root@sandy linux]#
      
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-e1p9poez3nwrgycbmwqmhlsu@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      26252ea6
    • perf tools: Bump default sample freq to 4 kHz · 447a6013
      Arnaldo Carvalho de Melo committed
      Quoting Ingo:
      
      "While at it I'd also suggest increasing the default sampling frequency,
      from 1000 Hz per CPU to at least 4Khz auto-freq or so - this should work
      well all across the board I think. CPUs are getting faster and command/app
      run times are getting shorter, 1Khz is a bit low IMO."
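      As a usage note (the workload name below is just a placeholder), the
      frequency can still be set explicitly per run if the new default doesn't
      fit a workload:

           # record at the new default of ~4000 samples/sec per CPU
           $ perf record ./myworkload
           # force the old 1000 Hz behaviour for a specific run
           $ perf record -F 1000 ./myworkload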
      Requested-by: Ingo Molnar <mingo@kernel.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/n/tip-2jafa6mkrufyekny9ei59lpu@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      447a6013
    • perf buildid-list: Work better with pipe mode · 299c3452
      Stephane Eranian committed
      In order for perf buildid-list to work with pipe-mode files, it needs to
      process buildids and event attr structs.
      
      $ perf record -o - noploop 2 | ./perf inject -b | perf buildid-list -i - -H
      noploop for 2 seconds
      [ perf record: Woken up 1 times to write data ]
      [ perf record: Captured and wrote 0.084 MB - (~3678 samples) ]
      0000000000000000000000000000000000000000 [kernel.kallsyms]
      3a0d0629efe74a8da3eeba372cdbd74ad9b8f5d5 /usr/local/bin/noploop
      
      The reason [kernel.kallsyms] shows a 0 build-id comes from the
      way buildids are injected in the stream.
      
      The buildid for the kernel is provided by a BUILD_ID record. The
      [kernel.kallsyms] is provided by a MMAP record. There is no clean and
      obvious way to link the two, unfortunately.
      
      In regular mode, the kernel buildid is generated from reading the ELF
      image or kallsyms and perf knows to associate [kernel.kallsyms] to it.
      Later on, when perf processes the [kernel.kallsyms] MMAP record, it will
      already have a dso for it.
      
      So for now, make sure perf buildid-list shows the buildids for
      everything but the kernel image.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1337081295-10303-6-git-send-email-eranian@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      299c3452
  3. 22 May 2012, 16 commits
  4. 21 May 2012, 3 commits
  5. 19 May 2012, 5 commits
  6. 18 May 2012, 2 commits
    • perf tools: Split term type into value type and term type · 16fa7e82
      Jiri Olsa committed
      Introduce type_val and type_term for a term instead of a single type
      value. Currently the term type also marks out the value type.
      
      With this change, future string term values can be specified by the
      user and translated into a proper number during processing.
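      The shape of the change can be sketched roughly as follows (illustrative
      only; the enum and field names here are assumptions based on this
      description, not copied from the perf sources):

           /* before: one value had to encode both which term it is and
            * how its value is typed */
           struct parse_term_old {
                   int type;                       /* term kind and value kind mixed */
                   unsigned long long num;
           };

           /* after: value type and term type are tracked separately */
           enum term_val_type  { TERM_VAL_NUM, TERM_VAL_STR };
           enum term_term_type { TERM_CONFIG, TERM_CONFIG1, TERM_CONFIG2, TERM_PERIOD };

           struct parse_term_new {
                   enum term_val_type  type_val;   /* is the value a number or a string? */
                   enum term_term_type type_term;  /* which term is being set? */
                   union {
                           unsigned long long num;
                           char              *str; /* future string values go here */
                   } val;
           };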
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1335371102-11358-2-git-send-email-jolsa@redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      16fa7e82
    • perf hists: Fix callchain ip printf format · a0187060
      Jiri Olsa committed
      The callchain address is stored as a u64. The current code uses the
      following format string to display the callchain address:
      
        "%p\n", (void *)(long)chain->ip
      
      This way we lose the upper 32 bits if we report 64-bit addresses in a
      32-bit environment. Fix this to always display the whole 64 bits.
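      A minimal, standalone illustration of the truncation and of the kind of
      fix (using the inttypes.h format macros; this is not the exact perf code):

           #include <stdio.h>
           #include <inttypes.h>

           int main(void)
           {
                   uint64_t ip = 0xffffffff81000000ULL;    /* a 64-bit kernel address */

                   /* old style: on a 32-bit build the cast drops the upper 32 bits */
                   printf("%p\n", (void *)(long)ip);

                   /* fixed style: always print the whole 64-bit value */
                   printf("%16" PRIx64 "\n", ip);
                   return 0;
           }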
      
      Note: run the following to test perf endianness handling:
      test 1)
        - origin system:
          # perf record -a -- sleep 10 (any perf record will do)
          # perf report > report.origin
          # perf archive perf.data
      
        - copy the perf.data, report.origin and perf.data.tar.bz2
          to a target system and run:
          # tar xjvf perf.data.tar.bz2 -C ~/.debug
          # perf report > report.target
          # diff -u report.origin report.target
      
        - the diff should produce no output
          (besides some white space stuff and possibly different
           date/TZ output)
      
      test 2)
        - origin system:
          # perf record -ag -fo /tmp/perf.data -- sleep 1
        - mount origin system root to the target system on /mnt/origin
        - target system:
          # perf script --symfs /mnt/origin -I -i /mnt/origin/tmp/perf.data \
           --kallsyms /mnt/origin/proc/kallsyms
        - complete perf.data header is displayed
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1337151548-2396-8-git-send-email-jolsa@redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      a0187060
  7. 17 May 2012, 9 commits
    • perf target: Add uses_mmap field · d1cb9fce
      Namhyung Kim committed
      If perf doesn't mmap the event (as perf stat doesn't), it should not
      create per-task-per-cpu events. So just use a dummy cpu map to create a
      per-task event in this case.
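      A minimal sketch of the idea, assuming helper names from the perf tools of
      that era (perf_target__has_task(), cpu_map__dummy_new(), cpu_map__new());
      the exact condition in the tree may differ:

           /* when building maps for an evlist (simplified, illustrative) */
           if (perf_target__has_task(target) && !target->uses_mmap) {
                   /*
                    * No mmap (e.g. perf stat): don't create per-task-per-cpu
                    * events, just use a dummy cpu map for a per-task event.
                    */
                   evlist->cpus = cpu_map__dummy_new();
           } else {
                   evlist->cpus = cpu_map__new(target->cpu_list);
           }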
      Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1337161549-9870-3-git-send-email-namhyung.kim@lge.com
      [ committer note: renamed .need_mmap to .uses_mmap ]
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      d1cb9fce
    • ftrace: Remove selecting FRAME_POINTER with FUNCTION_TRACER · b732d439
      Steven Rostedt committed
      The function tracer will enable the -pg option with gcc, which requires
      frame pointers. When FRAME_POINTER is defined in the kernel config, it
      adds the gcc option -fno-omit-frame-pointer, which causes problems
      on some architectures. For those architectures, the FRAME_POINTER select
      was not set.
      
      When FUNCTION_TRACER is selected on these architectures that cannot have
      -fno-omit-frame-pointer, the -pg option is still set. But when
      FRAME_POINTER is not selected, the kernel config would add the gcc option
      -fomit-frame-pointer. Adding this option is incompatible with -pg,
      even on archs that do not need frame pointers with -pg.
      
      The answer to this was to just not add either -fno-omit-frame-pointer
      or -fomit-frame-pointer on these archs that want function tracing
      but do not set FRAME_POINTER.
      
      As it turns out, for archs that require frame pointers for function
      tracing, the same approach can be used. If gcc requires frame pointers
      with -pg, it will simply add them. The best thing to do is not to select
      FRAME_POINTER when function tracing is selected, and let gcc add the
      option if needed.
      
      Only add -fno-omit-frame-pointer when something else selects
      FRAME_POINTER, but do not add -fomit-frame-pointer if function tracing
      is selected.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b732d439
    • ftrace/x86: Have x86 ftrace use the ftrace_modify_all_code() · e4f5d544
      Steven Rostedt committed
      To remove duplicate code, have the x86 arch_ftrace_update_code()
      use the generic ftrace_modify_all_code(). This requires that the
      default ftrace_replace_code() become a weak function so that an
      arch may override it.
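      Roughly, the result looks like the sketch below (simplified; the command
      argument is the ftrace command bitmask, and the x86 side is condensed to
      its essence):

           /* generic code: the default implementation is now weak, so an
            * architecture may override it */
           void __weak ftrace_replace_code(int enable)
           {
                   /* generic, stop_machine()-based patching path */
           }

           /* arch/x86: no duplicated driver logic, reuse the generic one */
           void arch_ftrace_update_code(int command)
           {
                   ftrace_modify_all_code(command);
           }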
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e4f5d544
    • ftrace: Make ftrace_modify_all_code() global for archs to use · 8ed3e2cf
      Steven Rostedt committed
      Rename __ftrace_modify_code() to ftrace_modify_all_code() and make
      it global for all archs to use. This will remove the duplication
      of code, as archs that can modify code without stop_machine()
      can use it directly outside of the stop_machine() call.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8ed3e2cf
    • ftrace: Return record ip addr for ftrace_location() · f0cf973a
      Steven Rostedt committed
      ftrace_location() is passed an addr, and returns 1 if the addr is
      on an ftrace nop (or a call to ftrace_caller), and 0 otherwise.
      
      To let kprobes know whether it should move a breakpoint or not, it
      must return the actual addr that is the start of the ftrace nop.
      This way a kprobe placed on the location of an ftrace nop can
      instead be placed on the instruction after the nop. Even if the
      probe addr is on the second or later byte of the nop, it can
      simply be moved forward.
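      For example, a kprobes-style caller can now do something like the
      following (an illustrative sketch, not the actual kprobes code;
      MCOUNT_INSN_SIZE is the arch-defined size of the ftrace nop):

           /* move a probe that landed inside an ftrace nop past that nop */
           static unsigned long adjust_probe_addr(unsigned long addr)
           {
                   unsigned long ftrace_addr = ftrace_location(addr);

                   if (ftrace_addr)
                           /* addr hit an ftrace nop; place the probe after it */
                           return ftrace_addr + MCOUNT_INSN_SIZE;

                   return addr;
           }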
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f0cf973a
    • ftrace: Consolidate ftrace_location() and ftrace_text_reserved() · a650e02a
      Steven Rostedt committed
      Both ftrace_location() and ftrace_text_reserved() do basically the same
      thing: they search to see if an address is in the ftrace table (the table
      of locations that may change from a nop to a call to ftrace_caller). The
      difference is that ftrace_location() searches a single address, while
      ftrace_text_reserved() searches a range.
      
      This also makes ftrace_text_reserved() faster, as it now uses a bsearch()
      instead of linearly searching all the addresses within a page.
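      Conceptually, both now reduce to a single range lookup over the sorted
      ftrace records, along these lines (a sketch assuming the shared helper
      takes a [start, end] range; details differ from the actual code):

           /* shared range lookup over the sorted ftrace records (assumed helper) */
           unsigned long ftrace_lookup_range_sketch(unsigned long start,
                                                    unsigned long end);

           /* single-address query: a degenerate one-address range */
           unsigned long ftrace_location(unsigned long ip)
           {
                   return ftrace_lookup_range_sketch(ip, ip);
           }

           /* range query: non-zero if any record falls within [start, end] */
           int ftrace_text_reserved(void *start, void *end)
           {
                   return ftrace_lookup_range_sketch((unsigned long)start,
                                                     (unsigned long)end) != 0;
           }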
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a650e02a
    • ftrace: Speed up search by skipping pages by address · 9644302e
      Steven Rostedt committed
      As all records in a page of the ftrace table are sorted, we can
      speed up the search algorithm by checking whether the address to look for
      falls between the first and last record ip on the page.
      
      This speeds up both the ftrace_location() and ftrace_text_reserved()
      algorithms, as it can skip full pages when the search address is
      not in them.
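      The page-skip check itself is small; a sketch assuming the struct
      ftrace_page layout of that era (records[] sorted by ip, index = number of
      used entries, ftrace_cmp_recs as the bsearch comparator):

           /* simplified lookup: skip whole pages whose range cannot contain ip */
           static unsigned long lookup_rec_sketch(unsigned long ip)
           {
                   struct ftrace_page *pg;
                   struct dyn_ftrace *rec, key = { .ip = ip };

                   for (pg = ftrace_pages_start; pg; pg = pg->next) {
                           if (ip < pg->records[0].ip ||
                               ip > pg->records[pg->index - 1].ip)
                                   continue;       /* ip cannot be in this page */

                           rec = bsearch(&key, pg->records, pg->index,
                                         sizeof(struct dyn_ftrace), ftrace_cmp_recs);
                           if (rec)
                                   return rec->ip;
                   }
                   return 0;
           }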
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9644302e
    • ftrace: Remove extra helper functions · 706c81f8
      Steven Rostedt committed
      The ftrace_record_ip() and ftrace_alloc_dyn_node() functions date from
      the time of the ftrace daemon. Although they are still used, they
      make things a bit more complex than necessary.
      
      Move the code into the one function that uses it, and remove the
      helper functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      706c81f8
    • ftrace: Sort all function addresses, not just per page · 9fd49328
      Steven Rostedt committed
      Instead of just sorting the ips of the functions per ftrace page,
      sort the entire list before adding them to the ftrace pages.
      
      This will allow the bsearch algorithm to be sped up, as it can
      also search by pages, not just by records within a page.
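      With the whole mcount ip table sorted up front, the setup boils down to
      one sort() call before the records are split into pages, roughly as below
      (the comparator/swap helper names are assumptions for this sketch):

           #include <linux/sort.h>

           /* sort the raw mcount ip table once, before building ftrace pages */
           static void sort_mcount_ips(unsigned long *start, unsigned long count)
           {
                   /* cmp/swap helpers compare and exchange two ip entries */
                   sort(start, count, sizeof(*start), ftrace_cmp_ips, ftrace_swap_ips);
           }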
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9fd49328