Commit e9c84892 authored by Ingo Molnar


Merge tag 'perf-c2c-for-mingo-20161021' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull new 'perf c2c' tool from Arnaldo Carvalho de Melo:

- The 'perf c2c' tool provides means for Shared Data C2C/HITM analysis.

  It allows you to track down cacheline contention. The tool is based
  on x86's load latency and precise store facility events provided by
  Intel CPUs.

  It was tested by Joe Mario and has proven to be useful, uncovering
  real cases of cacheline contention. Joe also wrote a blog post about
  the c2c tool, with examples:

    https://joemario.github.io/blog/2016/09/01/c2c-blog/

  Excerpt of the content on this site:

  ---
    At a high level, “perf c2c” will show you:

    * The cachelines where false sharing was detected.
    * The readers and writers to those cachelines, and the offsets where those accesses occurred.
    * The pid, tid, instruction addr, function name, binary object name for those readers and writers.
    * The source file and line number for each reader and writer.
    * The average load latency for the loads to those cachelines.
    * Which NUMA nodes the samples in a cacheline came from and which CPUs were involved.

    Using perf c2c is similar to using the Linux perf tool today.
    First collect data with “perf c2c record”, then generate a report
    output with “perf c2c report”.
  ---

  There one finds extensive details on using the tool, with tips on
  reducing the volume of samples while still capturing enough to do
  its job. (Dick Fowles, Joe Mario, Don Zickus, Jiri Olsa)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
@@ -21,6 +21,7 @@ perf-y += builtin-inject.o
perf-y += builtin-mem.o
perf-y += builtin-data.o
perf-y += builtin-version.o
perf-y += builtin-c2c.o
perf-$(CONFIG_AUDIT) += builtin-trace.o
perf-$(CONFIG_LIBELF) += builtin-probe.o
......
perf-c2c(1)
===========
NAME
----
perf-c2c - Shared Data C2C/HITM Analyzer.
SYNOPSIS
--------
[verse]
'perf c2c record' [<options>] <command>
'perf c2c record' [<options>] -- [<record command options>] <command>
'perf c2c report' [<options>]
DESCRIPTION
-----------
C2C stands for Cache To Cache.
The perf c2c tool provides a means for Shared Data C2C/HITM analysis. It allows
you to track down cacheline contention.
The tool is based on x86's load latency and precise store facility events
provided by Intel CPUs. These events provide:
- memory address of the access
- type of the access (load and store details)
- latency (in cycles) of the load access
The c2c tool provides a means to record this data and report back access
details for the cachelines with the highest contention - the highest number
of HITM accesses.
The basic workflow with this tool follows the standard record/report cycle:
use the record command to record event data and the report command to
display it.
RECORD OPTIONS
--------------
-e::
--event=::
Select the PMU event. Use 'perf mem record -e list'
to list available events.
-v::
--verbose::
Be more verbose (show counter open errors, etc).
-l::
--ldlat::
Configure mem-loads latency.
-k::
--all-kernel::
Configure all used events to run in kernel space.
-u::
--all-user::
Configure all used events to run in user space.
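As an illustration, here is a minimal sketch of a record invocation that
raises the mem-loads latency threshold and restricts sampling to user space
(the workload binary './my_workload' is hypothetical):
  $ perf c2c record -l 50 -u ./my_workload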
REPORT OPTIONS
--------------
-k::
--vmlinux=<file>::
vmlinux pathname
-v::
--verbose::
Be more verbose (show counter open errors, etc).
-i::
--input::
Specify the input file to process.
-N::
--node-info::
Show extra node info in report (see NODE INFO section)
-c::
--coalesce::
Specify sorting fields for single cacheline display.
Following fields are available: tid,pid,iaddr,dso
(see COALESCE)
-g::
--call-graph::
Setup callchains parameters.
Please refer to perf-report man page for details.
--stdio::
Force the stdio output (see STDIO OUTPUT)
--stats::
Display only statistic tables and force stdio mode.
--full-symbols::
Display full length of symbols.
--no-source::
Do not display Source:Line column.
--show-all::
Show all captured HITM lines, with no regard to the HITM % 0.0005 limit.
C2C RECORD
----------
The perf c2c record command sets up options related to HITM cacheline analysis
and calls the standard perf record command.
The following perf record options are configured by default:
(check perf record man page for details)
-W,-d,--sample-cpu
Unless specified otherwise with the '-e' option, the following events are
monitored by default:
cpu/mem-loads,ldlat=30/P
cpu/mem-stores/P
The user can pass any 'perf record' option after the '--' mark, e.g. to enable
callchains and system-wide monitoring:
$ perf c2c record -- -g -a
Please check the RECORD OPTIONS section for specific c2c record options.
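Putting the two phases together, a sketch of a complete session might look
like this ('sleep 10' merely bounds the system-wide collection window and
stands in for a real workload):
  $ perf c2c record -- -g -a sleep 10
  $ perf c2c report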
C2C REPORT
----------
The perf c2c report command displays shared data analysis. It comes in two
display modes: stdio and tui (default).
The report command workflow is as follows:
- sort all the data based on the cacheline address
- store access details for each cacheline
- sort all cachelines based on user settings
- display data
In general, the report output consists of 2 basic views:
1) most expensive cachelines list
2) offsets details for each cacheline
For each cacheline in list 1) we display the following data
(both stdio and TUI modes display the same fields):
Index
- zero based index to identify the cacheline
Cacheline
- cacheline address (hex number)
Total records
- sum of all accesses to the cacheline
Rmt/Lcl Hitm
- cacheline percentage of all Remote/Local HITM accesses
LLC Load Hitm - Total, Lcl, Rmt
- count of Total/Local/Remote load HITMs
Store Reference - Total, L1Hit, L1Miss
Total - all store accesses
L1Hit - store accesses that hit L1
L1Miss - store accesses that missed L1
Load Dram
- count of local and remote DRAM accesses
LLC Ld Miss
- count of all accesses that missed LLC
Total Loads
- sum of all load accesses
Core Load Hit - FB, L1, L2
- count of load hits in FB (Fill Buffer), L1 and L2 cache
LLC Load Hit - Llc, Rmt
- count of LLC and Remote load hits
For each offset in list 2) we display the following data:
HITM - Rmt, Lcl
- % of Remote/Local HITM accesses for given offset within cacheline
Store Refs - L1 Hit, L1 Miss
- % of store accesses that hit/missed L1 for given offset within cacheline
Data address - Offset
- offset address
Pid
- pid of the process responsible for the accesses
Tid
- tid of the thread responsible for the accesses
Code address
- code address responsible for the accesses
cycles - rmt hitm, lcl hitm, load
- sum of cycles for given accesses - Remote/Local HITM and generic load
cpu cnt
- number of CPUs that participated in the access
Symbol
- code symbol related to the 'Code address' value
Shared Object
- shared object name related to the 'Code address' value
Source:Line
- source information related to the 'Code address' value
Node
- nodes participating in the access (see NODE INFO section)
NODE INFO
---------
The 'Node' field displays the nodes that access the given cacheline
offset. Its output comes in 3 flavors:
- node IDs separated by ','
- node IDs with stats for each ID, in the following format:
  Node{cpus %hitms %stores}
- node IDs with the list of affected CPUs in the following format:
  Node{cpu list}
The user can switch between the above flavors with the -N option, or
use the 'n' key to switch interactively in TUI mode.
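As a quick sketch, the extra node info can be enabled together with the
stdio output:
  $ perf c2c report -N --stdio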
COALESCE
--------
The user can specify how to sort the offsets for a cacheline. The following
fields are available and govern the final set of output fields for the
cacheline offsets output:
tid - coalesced by process TIDs
pid - coalesced by process PIDs
iaddr - coalesced by code address, following fields are displayed:
Code address, Code symbol, Shared Object, Source line
dso - coalesced by shared object
By default, coalescing is set up with 'pid,tid,iaddr'.
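For instance, an illustrative invocation that coalesces the offset entries
by process and code address only:
  $ perf c2c report -c pid,iaddr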
STDIO OUTPUT
------------
The stdio output displays data on standard output.
The following tables are displayed:
Trace Event Information
- overall statistics of memory accesses
Global Shared Cache Line Event Information
- overall statistics on shared cachelines
Shared Data Cache Line Table
- list of most expensive cachelines
Shared Cache Line Distribution Pareto
- list of all accessed offsets for each cacheline
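For example, to print only the statistics tables (--stats implies stdio
mode, as noted in REPORT OPTIONS):
  $ perf c2c report --stats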
TUI OUTPUT
----------
The TUI output provides an interactive interface for navigating the
cacheline list and displaying offset details.
For details please refer to the help window by pressing the '?' key.
CREDITS
-------
Although Don Zickus, Dick Fowles and Joe Mario worked together
to get this implemented, we got lots of early help from Arnaldo
Carvalho de Melo, Stephane Eranian, Jiri Olsa and Andi Kleen.
C2C BLOG
--------
Check Joe's blog on the c2c tool for a detailed use case explanation:
https://joemario.github.io/blog/2016/09/01/c2c-blog/
SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-mem[1]
(This diff has been collapsed.)
@@ -18,6 +18,7 @@ int cmd_bench(int argc, const char **argv, const char *prefix);
int cmd_buildid_cache(int argc, const char **argv, const char *prefix);
int cmd_buildid_list(int argc, const char **argv, const char *prefix);
int cmd_config(int argc, const char **argv, const char *prefix);
int cmd_c2c(int argc, const char **argv, const char *prefix);
int cmd_diff(int argc, const char **argv, const char *prefix);
int cmd_evlist(int argc, const char **argv, const char *prefix);
int cmd_help(int argc, const char **argv, const char *prefix);
......
@@ -43,6 +43,7 @@ static struct cmd_struct commands[] = {
{ "buildid-cache", cmd_buildid_cache, 0 },
{ "buildid-list", cmd_buildid_list, 0 },
{ "config", cmd_config, 0 },
{ "c2c", cmd_c2c, 0 },
{ "diff", cmd_diff, 0 },
{ "evlist", cmd_evlist, 0 },
{ "help", cmd_help, 0 },
......
@@ -30,7 +30,7 @@ static struct rb_node *hists__filter_entries(struct rb_node *nd,
static bool hist_browser__has_filter(struct hist_browser *hb)
{
-	return hists__has_filter(hb->hists) || hb->min_pcnt || symbol_conf.has_filter;
+	return hists__has_filter(hb->hists) || hb->min_pcnt || symbol_conf.has_filter || hb->c2c_filter;
}
static int hist_browser__get_folding(struct hist_browser *browser)
......
@@ -18,6 +18,7 @@ struct hist_browser {
	u64 nr_non_filtered_entries;
	u64 nr_hierarchy_entries;
	u64 nr_callchain_rows;
	bool c2c_filter;
	/* Get title string. */
	int (*title)(struct hist_browser *browser,
......
@@ -1195,6 +1195,7 @@ static void hist_entry__check_and_remove_filter(struct hist_entry *he,
	case HIST_FILTER__GUEST:
	case HIST_FILTER__HOST:
	case HIST_FILTER__SOCKET:
	case HIST_FILTER__C2C:
	default:
		return;
	}
......
@@ -22,6 +22,7 @@ enum hist_filter {
	HIST_FILTER__GUEST,
	HIST_FILTER__HOST,
	HIST_FILTER__SOCKET,
	HIST_FILTER__C2C,
};
enum hist_column {
......
@@ -9,6 +9,7 @@
#include "mem-events.h"
#include "debug.h"
#include "symbol.h"
#include "sort.h"
unsigned int perf_mem_events__loads_ldlat = 30;
@@ -268,3 +269,130 @@ int perf_script__meminfo_scnprintf(char *out, size_t sz, struct mem_info *mem_in
	return i;
}
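/*
 * Decode one sample's perf_mem_data_src bits into c2c_stats counters.
 * Returns -1 for samples that cannot be fully credited (missing data
 * address, unresolved maps, or an unparsable data source), 0 otherwise.
 */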
int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi)
{
	union perf_mem_data_src *data_src = &mi->data_src;
	u64 daddr = mi->daddr.addr;
	u64 op = data_src->mem_op;
	u64 lvl = data_src->mem_lvl;
	u64 snoop = data_src->mem_snoop;
	u64 lock = data_src->mem_lock;
	int err = 0;

#define P(a, b) PERF_MEM_##a##_##b

	stats->nr_entries++;

	if (lock & P(LOCK, LOCKED)) stats->locks++;

	if (op & P(OP, LOAD)) {
		/* load */
		stats->load++;

		if (!daddr) {
			stats->ld_noadrs++;
			return -1;
		}

		if (lvl & P(LVL, HIT)) {
			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
			if (lvl & P(LVL, IO)) stats->ld_io++;
			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
			if (lvl & P(LVL, L3 )) {
				if (snoop & P(SNOOP, HITM))
					stats->lcl_hitm++;
				else
					stats->ld_llchit++;
			}

			if (lvl & P(LVL, LOC_RAM)) {
				stats->lcl_dram++;
				if (snoop & P(SNOOP, HIT))
					stats->ld_shared++;
				else
					stats->ld_excl++;
			}

			if ((lvl & P(LVL, REM_RAM1)) ||
			    (lvl & P(LVL, REM_RAM2))) {
				stats->rmt_dram++;
				if (snoop & P(SNOOP, HIT))
					stats->ld_shared++;
				else
					stats->ld_excl++;
			}
		}

		if ((lvl & P(LVL, REM_CCE1)) ||
		    (lvl & P(LVL, REM_CCE2))) {
			if (snoop & P(SNOOP, HIT))
				stats->rmt_hit++;
			else if (snoop & P(SNOOP, HITM))
				stats->rmt_hitm++;
		}

		if ((lvl & P(LVL, MISS)))
			stats->ld_miss++;

	} else if (op & P(OP, STORE)) {
		/* store */
		stats->store++;

		if (!daddr) {
			stats->st_noadrs++;
			return -1;
		}

		if (lvl & P(LVL, HIT)) {
			if (lvl & P(LVL, UNC)) stats->st_uncache++;
			if (lvl & P(LVL, L1 )) stats->st_l1hit++;
		}
		if (lvl & P(LVL, MISS))
			if (lvl & P(LVL, L1)) stats->st_l1miss++;
	} else {
		/* unparsable data_src? */
		stats->noparse++;
		return -1;
	}

	if (!mi->daddr.map || !mi->iaddr.map) {
		stats->nomap++;
		return -1;
	}

#undef P
	return err;
}
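/* Accumulate the counters from 'add' into 'stats'. */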
void c2c_add_stats(struct c2c_stats *stats, struct c2c_stats *add)
{
	stats->nr_entries += add->nr_entries;
	stats->locks += add->locks;
	stats->store += add->store;
	stats->st_uncache += add->st_uncache;
	stats->st_noadrs += add->st_noadrs;
	stats->st_l1hit += add->st_l1hit;
	stats->st_l1miss += add->st_l1miss;
	stats->load += add->load;
	stats->ld_excl += add->ld_excl;
	stats->ld_shared += add->ld_shared;
	stats->ld_uncache += add->ld_uncache;
	stats->ld_io += add->ld_io;
	stats->ld_miss += add->ld_miss;
	stats->ld_noadrs += add->ld_noadrs;
	stats->ld_fbhit += add->ld_fbhit;
	stats->ld_l1hit += add->ld_l1hit;
	stats->ld_l2hit += add->ld_l2hit;
	stats->ld_llchit += add->ld_llchit;
	stats->lcl_hitm += add->lcl_hitm;
	stats->rmt_hitm += add->rmt_hitm;
	stats->rmt_hit += add->rmt_hit;
	stats->lcl_dram += add->lcl_dram;
	stats->rmt_dram += add->rmt_dram;
	stats->nomap += add->nomap;
	stats->noparse += add->noparse;
}
@@ -2,6 +2,10 @@
#define __PERF_MEM_EVENTS_H
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <linux/types.h>
#include "stat.h"
struct perf_mem_event {
	bool record;
@@ -33,4 +37,37 @@ int perf_mem__lck_scnprintf(char *out, size_t sz, struct mem_info *mem_info);
int perf_script__meminfo_scnprintf(char *bf, size_t size, struct mem_info *mem_info);
struct c2c_stats {
	u32 nr_entries;

	u32 locks;      /* count of 'lock' transactions */
	u32 store;      /* count of all stores in trace */
	u32 st_uncache; /* stores to uncacheable address */
	u32 st_noadrs;  /* cacheable store with no address */
	u32 st_l1hit;   /* count of stores that hit L1D */
	u32 st_l1miss;  /* count of stores that miss L1D */
	u32 load;       /* count of all loads in trace */
	u32 ld_excl;    /* exclusive loads, rmt/lcl DRAM - snp none/miss */
	u32 ld_shared;  /* shared loads, rmt/lcl DRAM - snp hit */
	u32 ld_uncache; /* loads to uncacheable address */
	u32 ld_io;      /* loads to io address */
	u32 ld_miss;    /* loads miss */
	u32 ld_noadrs;  /* cacheable load with no address */
	u32 ld_fbhit;   /* count of loads hitting Fill Buffer */
	u32 ld_l1hit;   /* count of loads that hit L1D */
	u32 ld_l2hit;   /* count of loads that hit L2D */
	u32 ld_llchit;  /* count of loads that hit LLC */
	u32 lcl_hitm;   /* count of loads with local HITM */
	u32 rmt_hitm;   /* count of loads with remote HITM */
	u32 rmt_hit;    /* count of loads with remote hit clean */
	u32 lcl_dram;   /* count of loads miss to local DRAM */
	u32 rmt_dram;   /* count of loads miss to remote DRAM */
	u32 nomap;      /* count of load/stores with no phys adrs */
	u32 noparse;    /* count of unparsable data sources */
};
struct hist_entry;
int c2c_decode_stats(struct c2c_stats *stats, struct mem_info *mi);
void c2c_add_stats(struct c2c_stats *stats, struct c2c_stats *add);
#endif /* __PERF_MEM_EVENTS_H */
@@ -315,7 +315,7 @@ struct sort_entry sort_sym = {
/* --sort srcline */
-static char *hist_entry__get_srcline(struct hist_entry *he)
+char *hist_entry__get_srcline(struct hist_entry *he)
{
	struct map *map = he->ms.map;
......
@@ -280,4 +280,5 @@ int64_t
sort__daddr_cmp(struct hist_entry *left, struct hist_entry *right);
int64_t
sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right);
char *hist_entry__get_srcline(struct hist_entry *he);
#endif /* __PERF_SORT_H */