1. 03 March 2016 (11 commits)
    • tools build: Use .s extension for preprocessed assembler code · 67678793
      Masahiro Yamada committed
      The "man gcc" says .i extension represents the file is C source code
      that should not be preprocessed.  Here, .s should be used.
      
      For clarification,
        .c  ---(preprocess)--->  .i
        .S  ---(preprocess)--->  .s
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Aaro Koskinen <aaro.koskinen@nokia.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Lukas Wunner <lukas@wunner.de>
      Link: http://lkml.kernel.org/r/1454263140-19670-1-git-send-email-yamada.masahiro@socionext.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Support metrics in --per-core/socket mode · 44d49a60
      Andi Kleen committed
      Enable metric printing in --per-core / --per-socket mode. The shadow
      metrics need to be saved in a unique place, so always use the first CPU
      in the aggregation, and later use that same CPU to retrieve the shadow
      value.
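
      A minimal sketch of that scheme (hypothetical names, not the actual
      perf code): resolve the aggregation id back to its first CPU and use
      that CPU as the save/lookup slot for shadow values.

        /* Return the first CPU whose core/socket aggregation id matches
         * 'id'; shadow metrics are stored and fetched at that index. */
        static int first_shadow_cpu(int nr_cpus,
                                    int (*aggr_get_id)(int cpu), int id)
        {
                int cpu;

                for (cpu = 0; cpu < nr_cpus; cpu++)
                        if (aggr_get_id(cpu) == id)
                                return cpu;
                return -1;
        }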
      
      Example output:
      
        % perf stat --per-core -a ./BC1s
      
         Performance counter stats for 'system wide':
      
        S0-C0 2   2966.020381 task-clock (msec) #   2.004 CPUs utilized  (100.00%)
        S0-C0 2            49 context-switches  #   0.017 K/sec          (100.00%)
        S0-C0 2             4 cpu-migrations    #   0.001 K/sec          (100.00%)
        S0-C0 2           467 page-faults       #   0.157 K/sec
        S0-C0 2 4,599,061,773 cycles            #   1.551 GHz            (100.00%)
        S0-C0 2 9,755,886,883 instructions      #   2.12  insn per cycle (100.00%)
        S0-C0 2 1,906,272,125 branches          # 642.704 M/sec          (100.00%)
        S0-C0 2    81,180,867 branch-misses     #   4.26% of all branches
        S0-C1 2   2965.995373 task-clock (msec) #   2.003 CPUs utilized  (100.00%)
        S0-C1 2            62 context-switches  #   0.021 K/sec          (100.00%)
        S0-C1 2             8 cpu-migrations    #   0.003 K/sec          (100.00%)
        S0-C1 2           281 page-faults       #   0.095 K/sec
        S0-C1 2     6,347,290 cycles            #   0.002 GHz            (100.00%)
        S0-C1 2     4,654,156 instructions      #   0.73  insn per cycle (100.00%)
        S0-C1 2       947,121 branches          #   0.319 M/sec          (100.00%)
        S0-C1 2        37,322 branch-misses     #   3.94% of all branches
      
               1.480409747 seconds time elapsed
      
      v2: Rebase to older patches
      v3: Document shadow cpus. Fix aggr_get_id argument. Fix -A shadows (Jiri)
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Link: http://lkml.kernel.org/r/1456785386-19481-4-git-send-email-andi@firstfloor.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Implement CSV metrics output · 92a61f64
      Andi Kleen committed
      Now support CSV output for metrics. With the new output callbacks this
      is relatively straightforward: just create new callbacks.

      This makes it easy to plot metrics from CSV files.

      The new-line callback needs to know the number of fields so it can skip
      them correctly.
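
      A rough sketch of the idea behind the two callbacks (illustrative
      names only, not perf's actual output API):

        #include <stdio.h>

        struct csv_ctx {
                FILE *out;
                int nfields;    /* CSV fields preceding the metric columns */
                char sep;       /* field separator, e.g. ',' */
        };

        /* append the two new fields: metric value and metric name/unit */
        static void csv_print_metric(void *ctx, const char *unit, double val)
        {
                struct csv_ctx *c = ctx;

                fprintf(c->out, "%c%.2f%c%s", c->sep, val, c->sep,
                        unit ? unit : "");
        }

        /* start a continuation line: emit empty fields so the metric
         * columns stay aligned with the lines above */
        static void csv_new_line(void *ctx)
        {
                struct csv_ctx *c = ctx;
                int i;

                fputc('\n', c->out);
                for (i = 0; i < c->nfields; i++)
                        fputc(c->sep, c->out);
        }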
      
      Example output before:
      
        % perf stat -x, true
        0.200687,,task-clock,200687,100.00
        0,,context-switches,200687,100.00
        0,,cpu-migrations,200687,100.00
        40,,page-faults,200687,100.00
        730871,,cycles,203601,100.00
        551056,,stalled-cycles-frontend,203601,100.00
        <not supported>,,stalled-cycles-backend,0,100.00
        385523,,instructions,203601,100.00
        78028,,branches,203601,100.00
        3946,,branch-misses,203601,100.00
      
      After:
      
        % perf stat -x, true
        .502457,,task-clock,502457,100.00,0.485,CPUs utilized
        0,,context-switches,502457,100.00,0.000,K/sec
        0,,cpu-migrations,502457,100.00,0.000,K/sec
        45,,page-faults,502457,100.00,0.090,M/sec
        644692,,cycles,509102,100.00,1.283,GHz
        423470,,stalled-cycles-frontend,509102,100.00,65.69,frontend cycles idle
        <not supported>,,stalled-cycles-backend,0,100.00,,,,
        492701,,instructions,509102,100.00,0.76,insn per cycle
        ,,,,,0.86,stalled cycles per insn
        97767,,branches,509102,100.00,194.578,M/sec
        4788,,branch-misses,509102,100.00,4.90,of all branches
      
      or, in a more readable form:
      
        $ perf stat  -x, -o x.csv true
        $ column -s, -t x.csv
        0.490635        task-clock              490635 100.00 0.489   CPUs utilized
        0               context-switches        490635 100.00 0.000   K/sec
        0               cpu-migrations          490635 100.00 0.000   K/sec
        45              page-faults             490635 100.00 0.092   M/sec
        629080          cycles                  497698 100.00 1.282   GHz
        409498          stalled-cycles-frontend 497698 100.00 65.09   frontend cycles idle
        <not supported> stalled-cycles-backend  0      100.00
        491424          instructions            497698 100.00 0.78    insn per cycle
                                                              0.83    stalled cycles per insn
        97278           branches                497698 100.00 198.270 M/sec
        4569            branch-misses           497698 100.00 4.70    of all branches
      
      Two new fields are added: metric value and metric name.
      
      v2: Split out function argument changes
      v3: Reenable metrics for real.
      v4: Fix wrong hunk from refactoring.
      v5: Remove extra "noise" printing (Jiri), but add it to the not counted case.
      Print empty metrics for not counted.
      v6: Avoid outputting metric on empty format.
      v7: Print metric at the end
      v8: Remove extra run, ena fields
      v9: Avoid extra new line for unsupported counters
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Link: http://lkml.kernel.org/r/1456785386-19481-3-git-send-email-andi@firstfloor.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf record: Ensure return non-zero rc when mmap fail · 95c36561
      Wang Nan committed
      perf_evlist__mmap_ex() can fail without setting errno (for example,
      when it fails in a condition check; in that case every syscall it made
      succeeded).

      If this happens, record__open() incorrectly returns 0. Forcing rc to a
      non-zero value is a quick way to avoid this problem; the alternative
      would be to follow every possible code path in perf_evlist__mmap_ex()
      to make sure there is at least one failing system call before it
      returns an error.
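
      A generic sketch of the pattern (hypothetical helper, not the exact
      record__open() hunk):

        #include <errno.h>

        /* stand-in for perf_evlist__mmap_ex(): may fail (return -1)
         * without touching errno */
        int do_mmap(void);

        static int open_and_mmap(void)
        {
                int rc = 0;

                if (do_mmap() < 0)
                        /* errno can still be 0 if the failure was a plain
                         * condition check with no failing syscall behind
                         * it, so force a non-zero return code. */
                        rc = errno ? -errno : -EINVAL;

                return rc;
        }
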
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-30-git-send-email-wangnan0@huawei.com
      Signed-off-by: He Kuang <hekuang@huawei.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf record: Introduce record__finish_output() to finish a perf.data · e1ab48ba
      Wang Nan committed
      Move the code that finalizes 'perf.data' into record__finish_output().
      It will be used by following commits to split the output into multiple
      files.
      Signed-off-by: He Kuang <hekuang@huawei.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-23-git-send-email-wangnan0@huawei.com
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf record: Extract synthesize code to record__synthesize() · c45c86eb
      Wang Nan committed
      Create record__synthesize(). It can be used to create tracking events
      for each perf.data file once perf supports splitting the output into
      multiple files.
      Signed-off-by: He Kuang <hekuang@huawei.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-20-git-send-email-wangnan0@huawei.com
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf record: Use WARN_ONCE to replace 'if' condition · d8871ea7
      Wang Nan committed
      Commits in a BPF patchkit will extract the kernel and module
      synthesizing code into a separate function and call it multiple times.
      This patch replaces the 'if (err < 0)' check with WARN_ONCE, making
      sure the error message is shown only once.
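
      A self-contained sketch of what WARN_ONCE buys here (the macro below
      is a minimal stand-in for the one perf uses from
      tools/include/asm/bug.h, and the message text is illustrative):

        #include <stdio.h>

        /* evaluate the condition every time, but print only on the
         * first time it is true */
        #define WARN_ONCE(cond, fmt...)                         \
                ({                                              \
                        static int __warned;                    \
                        int __c = !!(cond);                     \
                        if (__c && !__warned) {                 \
                                fprintf(stderr, fmt);           \
                                __warned = 1;                   \
                        }                                       \
                        __c;                                    \
                })

        static int synthesize(int err)
        {
                /* replaces: if (err < 0) pr_err("..."); */
                WARN_ONCE(err < 0,
                          "Couldn't synthesize kernel mmap events.\n");
                return err;
        }
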
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-19-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf data: Explicitly set byte order for integer types · f8dd2d5f
      Wang Nan committed
      After babeltrace commit 5cec03e402aa ("ir: copy variants and sequences
      when setting a field path"), 'perf data convert' produces incorrect
      results if there is bpf output data. For example:
      
       # perf data convert --to-ctf ./out.ctf
       # babeltrace ./out.ctf
       [10:44:31.186045346] (+?.?????????) evt: { cpu_id = 0 }, { perf_ip = 0xFFFFFFFF810E7DD1, perf_tid = 23819, perf_pid = 23819, perf_id = 518, raw_len = 3, raw_data = [ [0] = 0xC028E32F, [1] = 0x815D0100, [2] = 0x1000000 ] }
       [10:44:31.286101003] (+0.100055657) evt: { cpu_id = 0 }, { perf_ip = 0xFFFFFFFF8105B609, perf_tid = 23819, perf_pid = 23819, perf_id = 518, raw_len = 3, raw_data = [ [0] = 0x35D9F1EB, [1] = 0x15D81, [2] = 0x2 ] }
      
      The expected result of the first sample should be:
      
       raw_data = [ [0] = 0x2FE328C0, [1] = 0x15D81, [2] = 0x1 ] }
      
      However, 'perf data convert' writes big-endian values to the resulting
      CTF file.

      The reason is an internal change (or a bug?) in babeltrace.
      
      Before this patch, at the first add_bpf_output_values() call, the byte
      order of all integer types is uncertain (it is 0, neither 1234 (le) nor
      4321 (be)). It would normally be fixed up by:
      
      perf_evlist__deliver_sample
       -> process_sample_event
         -> ctf_stream
            ...
            ->bt_ctf_trace_add_stream_class
              ->bt_ctf_field_type_structure_set_byte_order
                ->bt_ctf_field_type_integer_set_byte_order
      
      during creating the stream.
      
      However, the babeltrace commit mentioned above duplicates types in the
      sequence to prevent potential conflicts in the following call stack and
      links the newly allocated type into the 'raw_data' sequence:
      
      perf_evlist__deliver_sample
       -> process_sample_event
         -> ctf_stream
            ...
            -> bt_ctf_trace_add_stream_class
              -> bt_ctf_stream_class_resolve_types
                 ...
                 -> bt_ctf_field_type_sequence_copy
                   ->bt_ctf_field_type_integer_copy
      
      This happens before the byte order is set, so only the newly allocated
      type gets initialized; the byte order of the original type that perf
      chose to create the first raw_data with is still uncertain.
      
      The byte order in the CTF output is not related to the byte order in
      perf.data. Setting it to anything other than BT_CTF_BYTE_ORDER_NATIVE
      solves this problem (only BT_CTF_BYTE_ORDER_NATIVE needs to be fixed
      up). To minimize the behavior change, set the byte order according to
      the compile-time options.
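
      A minimal sketch of that approach, assuming the babeltrace 1.x CTF
      writer interface (header path and bt_ctf_field_type_set_byte_order()
      as exposed by libbabeltrace):

        #include <endian.h>
        #include <babeltrace/ctf-writer/event-types.h>

        /* Pick an explicit byte order matching the build instead of
         * BT_CTF_BYTE_ORDER_NATIVE, so types copied inside babeltrace can
         * never be left with an unset (0) byte order. */
        #if __BYTE_ORDER == __BIG_ENDIAN
        #define HOST_BYTE_ORDER BT_CTF_BYTE_ORDER_BIG_ENDIAN
        #else
        #define HOST_BYTE_ORDER BT_CTF_BYTE_ORDER_LITTLE_ENDIAN
        #endif

        static int set_explicit_byte_order(struct bt_ctf_field_type *type)
        {
                return bt_ctf_field_type_set_byte_order(type, HOST_BYTE_ORDER);
        }
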
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Cc: Jeremie Galarneau <jeremie.galarneau@efficios.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Jérémie Galarneau <jeremie.galarneau@efficios.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-10-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf data: Support converting data from bpf_perf_event_output() · 6122d57e
      Wang Nan committed
      bpf_perf_event_output() outputs data through sample->raw_data. This
      patch adds support for converting that data into CTF. A Python script
      can then be used to process the output data from BPF programs.
      
      Test result:
      
        # cat ./test_bpf_output_2.c
        /************************ BEGIN **************************/
        #include <uapi/linux/bpf.h>
        struct bpf_map_def {
       	unsigned int type;
       	unsigned int key_size;
       	unsigned int value_size;
       	unsigned int max_entries;
        };
        #define SEC(NAME) __attribute__((section(NAME), used))
        static u64 (*ktime_get_ns)(void) =
       	(void *)BPF_FUNC_ktime_get_ns;
        static int (*trace_printk)(const char *fmt, int fmt_size, ...) =
       	(void *)BPF_FUNC_trace_printk;
        static int (*get_smp_processor_id)(void) =
       	(void *)BPF_FUNC_get_smp_processor_id;
        static int (*perf_event_output)(void *, struct bpf_map_def *, int, void *, unsigned long) =
       	(void *)BPF_FUNC_perf_event_output;
      
        struct bpf_map_def SEC("maps") channel = {
       	.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
       	.key_size = sizeof(int),
       	.value_size = sizeof(u32),
       	.max_entries = __NR_CPUS__,
        };
      
        static inline int __attribute__((always_inline))
        func(void *ctx, int type)
        {
       	struct {
       		u64 ktime;
       		int type;
       	} __attribute__((packed)) output_data;
       	char error_data[] = "Error: failed to output\n";
       	int err;
      
       	output_data.type = type;
       	output_data.ktime = ktime_get_ns();
       	err = perf_event_output(ctx, &channel, get_smp_processor_id(),
       				&output_data, sizeof(output_data));
       	if (err)
       		trace_printk(error_data, sizeof(error_data));
       	return 0;
        }
        SEC("func_begin=sys_nanosleep")
        int func_begin(void *ctx) {return func(ctx, 1);}
        SEC("func_end=sys_nanosleep%return")
        int func_end(void *ctx) { return func(ctx, 2);}
        char _license[] SEC("license") = "GPL";
        int _version SEC("version") = LINUX_VERSION_CODE;
        /************************* END ***************************/
      
        # ./perf record -e bpf-output/no-inherit,name=evt/ \
                       -e ./test_bpf_output_2.c/map:channel.event=evt/ \
                       usleep 100000
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.012 MB perf.data (2 samples) ]
      
        # ./perf script
                usleep 14942 92503.198504: evt:  ffffffff810e0ba1 sys_nanosleep (/lib/modules/4.3.0....
                usleep 14942 92503.298562: evt:  ffffffff810585e9 kretprobe_trampoline_holder (/lib....
      
        # ./perf data convert --to-ctf ./out.ctf
        [ perf data convert: Converted 'perf.data' into CTF data './out.ctf' ]
        [ perf data convert: Converted and wrote 0.000 MB (2 samples) ]
      
        # babeltrace ./out.ctf
        [01:41:43.198504134] (+?.?????????) evt: { cpu_id = 0 }, { perf_ip = 0xFFFFFFFF810E0BA1, perf_tid = 14942, perf_pid = 14942, perf_id = 1044, raw_len = 3, raw_data = [ [0] = 0x32C0C07B, [1] = 0x5421, [2] = 0x1 ] }
        [01:41:43.298562257] (+0.100058123) evt: { cpu_id = 0 }, { perf_ip = 0xFFFFFFFF810585E9, perf_tid = 14942, perf_pid = 14942, perf_id = 1044, raw_len = 3, raw_data = [ [0] = 0x38B77FAA, [1] = 0x5421, [2] = 0x2 ] }
      
        # cat ./test_bpf_output_2.py
        from babeltrace import TraceCollection
        tc = TraceCollection()
        tc.add_trace('./out.ctf', 'ctf')
        d = {1:[], 2:[]}
        for event in tc.events:
           if not event.name.startswith('evt'):
               continue
           raw_data = event['raw_data']
           (time, type) = ((raw_data[0] + (raw_data[1] << 32)), raw_data[2])
           d[type].append(time)
        print(list(map(lambda i: d[2][i] - d[1][i], range(len(d[1])))));
      
        # python3 ./test_bpf_output_2.py
        [100056879]
      
      Committer note:
      
      Make sure you have python3-devel installed, not python-devel, which may
      be for python2 and will lead to "PyInstance_Type" errors. Also make
      sure that you use the right libbabeltrace: it is shipped in Fedora, for
      instance, but as an older version.
      
      To build libbabeltrace's python binding one also needs to use:
      
       ./configure --enable-python-bindings
      
      And then set PYTHONPATH=/usr/local/lib64/python3.4/site-packages/.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Zefan Li <lizefan@huawei.com>
      Cc: pi3orama@163.com
      Link: http://lkml.kernel.org/r/1456479154-136027-9-git-send-email-wangnan0@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf stat: Check existence of frontend/backed stalled cycles · 9dec4473
      Andi Kleen committed
      Only put the frontend/backend stalled cycles into the default perf stat
      events when the CPU actually supports them.
      
      This avoids empty columns with --metric-only on newer Intel CPUs.
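
      A sketch of the idea (pmu_have_event() is the perf helper mentioned in
      the locale fix below; the surrounding code is hypothetical):

        #include <stdbool.h>

        /* perf helper from util/pmu.h, prototype repeated for illustration */
        bool pmu_have_event(const char *pname, const char *name);
        /* hypothetical helper that appends an event to the default list */
        void add_default_attr(const char *event);

        static void add_stalled_cycles_if_supported(void)
        {
                /* Only add the events when the "cpu" PMU actually exposes
                 * them, avoiding <not supported> rows and empty
                 * --metric-only columns. */
                if (pmu_have_event("cpu", "stalled-cycles-frontend"))
                        add_default_attr("stalled-cycles-frontend");
                if (pmu_have_event("cpu", "stalled-cycles-backend"))
                        add_default_attr("stalled-cycles-backend");
        }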
      
      Committer note:
      
      Before:
      
        $ perf stat ls
      
          Performance counter stats for 'ls':
      
                1.080893     task-clock (msec)      #    0.619 CPUs utilized
                       0     context-switches       #    0.000 K/sec
                       0     cpu-migrations         #    0.000 K/sec
                      97     page-faults            #    0.090 M/sec
               3,327,741     cycles                 #    3.079 GHz
         <not supported>     stalled-cycles-frontend
         <not supported>     stalled-cycles-backend
               1,609,544     instructions           #    0.48  insn per cycle
                 319,117     branches               #  295.235 M/sec
                  12,246     branch-misses          #    3.84% of all branches
      
             0.001746508 seconds time elapsed
        $
      
      After:
      
        $ perf stat ls
      
          Performance counter stats for 'ls':
      
                0.693948     task-clock (msec)      #    0.662 CPUs utilized
                       0     context-switches       #    0.000 K/sec
                       0     cpu-migrations         #    0.000 K/sec
                      95     page-faults            #    0.137 M/sec
               1,792,509     cycles                 #    2.583 GHz
               1,599,047     instructions           #    0.89  insn per cycle
                 316,328     branches               #  455.838 M/sec
                  12,453     branch-misses          #    3.94% of all branches
      
             0.001048987 seconds time elapsed
        $
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lkml.kernel.org/r/1456532881-26621-2-git-send-email-andi@firstfloor.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf tools: Fix locale handling in pmu parsing · f9a5978a
      Jiri Olsa committed
      Ingo reported a regression in the display format of big numbers, which
      are missing their separators (in the default perf stat output).
      
       triton:~/tip> perf stat -a sleep 1
               ...
               127008602      cycles                    #    0.011 GHz
               279538533      stalled-cycles-frontend   #  220.09% frontend cycles idle
               119213269      instructions              #    0.94  insn per cycle
      
      This is caused by a recent change:
      
        perf stat: Check existence of frontend/backed stalled cycles
      
      which added a call to pmu_have_event(), which subsequently calls
      perf_pmu__parse_scale(), which has a bug in locale handling.
      
      The lc string returned from setlocale(), which we use to store the old
      locale value, may be allocated in static storage. Get a dynamic copy so
      that it survives another setlocale() call.
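
      A minimal sketch of the fix's idea (not the exact
      perf_pmu__parse_scale() hunk):

        #include <locale.h>
        #include <stdlib.h>
        #include <string.h>

        static double parse_scale(const char *buf)
        {
                char *lc;
                double scale;

                /* setlocale() may hand back static storage that the next
                 * setlocale() call overwrites, so duplicate the old locale
                 * string before switching. */
                lc = strdup(setlocale(LC_NUMERIC, NULL));

                setlocale(LC_NUMERIC, "C");   /* '.' as decimal point */
                scale = strtod(buf, NULL);

                setlocale(LC_NUMERIC, lc);    /* restore caller's locale */
                free(lc);
                return scale;
        }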
      
        $ perf stat ls
               ...
               2,360,602      cycles                    #    3.080 GHz
               2,703,090      instructions              #    1.15  insn per cycle
                 546,031      branches                  #  712.511 M/sec
      
      Committer note:
      
      Since the patch that introduced the regression hasn't made it to
      perf/core yet, move this fix to just before the point where the
      regression was introduced, so that bisection of this feature isn't
      broken.
      Reported-by: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20160303095348.GA24511@krava.redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  2. 29 February 2016 (29 commits)