Commit bd903afe authored by Song Liu, committed by Ingo Molnar

perf/core: Fix ctx_event_type in ctx_resched()

In ctx_resched(), EVENT_FLEXIBLE should be sched_out when EVENT_PINNED is
added. However, ctx_resched() calculates ctx_event_type before checking
this condition. As a result, pinned events will NOT get higher priority
than flexible events.
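
To make the ordering issue concrete, the following is a minimal, standalone C sketch (not the kernel function itself; the flag values mirror the kernel's internal event_type_t bits, everything else is assumed for illustration). Taking the ctx_event_type snapshot before the EVENT_PINNED check drops EVENT_FLEXIBLE from the snapshot, so flexible events are never scheduled out in favor of the pinned event:

/*
 * Standalone illustration only: flag values mirror the kernel's
 * internal event_type_t bits, the rest is a simplified sketch of the
 * ordering described above, not kernel code.
 */
#include <stdio.h>

enum event_type_t {
	EVENT_FLEXIBLE	= 0x1,
	EVENT_PINNED	= 0x2,
	EVENT_CPU	= 0x8,
	EVENT_ALL	= EVENT_FLEXIBLE | EVENT_PINNED,
};

int main(void)
{
	/* A pinned CPU event is being added. */
	int event_type = EVENT_PINNED | EVENT_CPU;

	/* Buggy order: ctx_event_type snapshot taken too early. */
	int early = event_type & EVENT_ALL;

	/* Adding a pinned event must also reschedule flexible events. */
	if (event_type & EVENT_PINNED)
		event_type |= EVENT_FLEXIBLE;

	/* Fixed order: snapshot taken after re-evaluating event_type. */
	int late = event_type & EVENT_ALL;

	printf("before fix: ctx_event_type = 0x%x (EVENT_FLEXIBLE dropped)\n", early);
	printf("after  fix: ctx_event_type = 0x%x (EVENT_FLEXIBLE included)\n", late);
	return 0;
}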

The following shows this issue on an Intel CPU (where ref-cycles can
only use one hardware counter).

  1. First start:
       perf stat -C 0 -e ref-cycles  -I 1000
  2. Then, in the second console, run:
       perf stat -C 0 -e ref-cycles:D -I 1000

The second perf uses pinned events, which are expected to have higher
priority. However, because scheduling fails in ctx_resched(), it is never
run.

This patch fixes this by calculating ctx_event_type after re-evaluating
event_type.
Reported-by: Ephraim Park <ephiepark@fb.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <jolsa@redhat.com>
Cc: <kernel-team@fb.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 487f05e1 ("perf/core: Optimize event rescheduling on active contexts")
Link: http://lkml.kernel.org/r/20180306055504.3283731-1-songliubraving@fb.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 629ae2ee
@@ -2246,7 +2246,7 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 			struct perf_event_context *task_ctx,
 			enum event_type_t event_type)
 {
-	enum event_type_t ctx_event_type = event_type & EVENT_ALL;
+	enum event_type_t ctx_event_type;
 	bool cpu_event = !!(event_type & EVENT_CPU);
 
 	/*
@@ -2256,6 +2256,8 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 	if (event_type & EVENT_PINNED)
 		event_type |= EVENT_FLEXIBLE;
 
+	ctx_event_type = event_type & EVENT_ALL;
+
 	perf_pmu_disable(cpuctx->ctx.pmu);
 	if (task_ctx)
 		task_ctx_sched_out(cpuctx, task_ctx, event_type);
...