Commit 4a00c16e authored by Sukadev Bhattiprolu, committed by Ingo Molnar

perf/core: Define PERF_PMU_TXN_READ interface

Define a new PERF_PMU_TXN_READ interface to read a group of counters
at once.

        pmu->start_txn()                // Initialize before first event

        for each event in group
                pmu->read(event);       // Queue each event to be read

        rc = pmu->commit_txn()          // Read/update all queued counters

Note that we use this interface with all PMUs.  PMUs that implement this
interface use the ->read() operation to _queue_ the counters to be read
and use ->commit_txn() to actually read all the queued counters at once.
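
For a PMU that does implement the transaction, ->read() only needs to remember the
event while a PERF_PMU_TXN_READ transaction is open, and ->commit_txn() can then
fetch the whole group with a single hardware (or hypervisor) request.  The sketch
below is illustrative only and is not code from this patch: the per-CPU queue, the
MY_MAX_GROUP limit and the my_hw_*() helpers are hypothetical stand-ins for a
driver's real batching mechanism.

        #include <linux/percpu.h>
        #include <linux/perf_event.h>

        #define MY_MAX_GROUP    16              /* hypothetical group-size limit */

        /* hypothetical hardware accessors, assumed to be provided by the driver */
        void my_hw_read_one(struct perf_event *event);
        int  my_hw_read_batch(struct perf_event **events, int n);
        void my_hw_update_count(struct perf_event *event);

        struct my_read_txn {
                unsigned int            txn_flags;      /* flags from ->start_txn() */
                int                     nr_queued;
                struct perf_event       *queued[MY_MAX_GROUP];
        };
        static DEFINE_PER_CPU(struct my_read_txn, my_read_txn);

        static void my_pmu_start_txn(struct pmu *pmu, unsigned int flags)
        {
                struct my_read_txn *txn = this_cpu_ptr(&my_read_txn);

                txn->txn_flags = flags;
                if (flags & PERF_PMU_TXN_READ)
                        txn->nr_queued = 0;     /* open an empty read batch */
                /* PERF_PMU_TXN_ADD setup (perf_pmu_disable() etc.) would go here */
        }

        static void my_pmu_read(struct perf_event *event)
        {
                struct my_read_txn *txn = this_cpu_ptr(&my_read_txn);

                if ((txn->txn_flags & PERF_PMU_TXN_READ) &&
                    txn->nr_queued < MY_MAX_GROUP) {
                        /* inside a read transaction: queue, don't read yet */
                        txn->queued[txn->nr_queued++] = event;
                        return;
                }
                my_hw_read_one(event);          /* no transaction: read immediately */
        }

        static int my_pmu_commit_txn(struct pmu *pmu)
        {
                struct my_read_txn *txn = this_cpu_ptr(&my_read_txn);
                int i, ret = 0;

                if (txn->txn_flags & PERF_PMU_TXN_READ) {
                        /* one request for the whole queued group */
                        ret = my_hw_read_batch(txn->queued, txn->nr_queued);
                        /*
                         * A real driver would accumulate a delta into each
                         * event's count here (e.g. via local64_add()).
                         */
                        for (i = 0; !ret && i < txn->nr_queued; i++)
                                my_hw_update_count(txn->queued[i]);
                }
                txn->txn_flags = 0;
                return ret;
        }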

PMUs that don't implement PERF_PMU_TXN_READ ignore ->start_txn() and
->commit_txn() and continue to read counters one at a time.
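
For a PMU that does not implement PERF_PMU_TXN_READ, the only change needed (made
for the existing PMUs earlier in this series) is to record the transaction type and
skip the TXN_ADD bookkeeping when the flags indicate anything else.  A rough sketch
of that flag check, with illustrative names:

        static DEFINE_PER_CPU(unsigned int, simple_txn_flags);  /* illustrative */

        static void simple_pmu_start_txn(struct pmu *pmu, unsigned int flags)
        {
                __this_cpu_write(simple_txn_flags, flags);

                if (flags & ~PERF_PMU_TXN_ADD)
                        return;                 /* e.g. TXN_READ: nothing to set up */

                perf_pmu_disable(pmu);          /* normal TXN_ADD path */
        }

        static int simple_pmu_commit_txn(struct pmu *pmu)
        {
                unsigned int flags = __this_cpu_read(simple_txn_flags);

                __this_cpu_write(simple_txn_flags, 0);

                if (flags & ~PERF_PMU_TXN_ADD)
                        return 0;       /* counters were already read one at a time */

                /* ... validate/commit the scheduled group ... */
                perf_pmu_enable(pmu);
                return 0;
        }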

Thanks to input from Peter Zijlstra.
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/1441336073-22750-9-git-send-email-sukadev@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 7d88962e
include/linux/perf_event.h
@@ -202,6 +202,7 @@ struct perf_event;
 #define PERF_EVENT_TXN 0x1
 #define PERF_PMU_TXN_ADD	0x1		/* txn to add/schedule event on PMU */
+#define PERF_PMU_TXN_READ	0x2		/* txn to read event group from PMU */
 
 /**
  * pmu::capabilities flags
...

kernel/events/core.c
@@ -3199,6 +3199,7 @@ static void __perf_event_read(void *info)
 	struct perf_event *sub, *event = data->event;
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
+	struct pmu *pmu = event->pmu;
 
 	/*
 	 * If this is a task context, we need to check whether it is
@@ -3217,18 +3218,31 @@ static void __perf_event_read(void *info)
 	}
 
 	update_event_times(event);
-	if (event->state == PERF_EVENT_STATE_ACTIVE)
-		event->pmu->read(event);
+	if (event->state != PERF_EVENT_STATE_ACTIVE)
+		goto unlock;
 
-	if (!data->group)
+	if (!data->group) {
+		pmu->read(event);
+		data->ret = 0;
 		goto unlock;
+	}
+
+	pmu->start_txn(pmu, PERF_PMU_TXN_READ);
+
+	pmu->read(event);
 
 	list_for_each_entry(sub, &event->sibling_list, group_entry) {
 		update_event_times(sub);
-		if (sub->state == PERF_EVENT_STATE_ACTIVE)
+		if (sub->state == PERF_EVENT_STATE_ACTIVE) {
+			/*
+			 * Use sibling's PMU rather than @event's since
+			 * sibling could be on different (eg: software) PMU.
+			 */
 			sub->pmu->read(sub);
+		}
 	}
 
-	data->ret = 0;
+	data->ret = pmu->commit_txn(pmu);
 
 unlock:
 	raw_spin_unlock(&ctx->lock);
...
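
For context (not part of this diff), the value the transaction stores in data->ret
is what perf_event_read(), added earlier in this series, hands back to its callers
after sending the IPI.  The following is a paraphrased sketch of that caller-side
flow, not verbatim kernel code, with the inactive/current-context cases omitted:

        static int perf_event_read(struct perf_event *event, bool group)
        {
                int ret = 0;

                if (event->state == PERF_EVENT_STATE_ACTIVE) {
                        struct perf_read_data data = {
                                .event = event,
                                .group = group,
                                .ret = 0,
                        };
                        /* run __perf_event_read() on the CPU where the event is active */
                        smp_call_function_single(event->oncpu,
                                                 __perf_event_read, &data, 1);
                        ret = data.ret;  /* for groups: result of pmu->commit_txn() */
                }

                return ret;
        }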