Commit da7c3b46 authored by Riccardo Mancini, committed by Arnaldo Carvalho de Melo

perf evsel: Move ignore_missing_thread() to fallback code

This patch moves the ignore_missing_thread() call out of the perf_event_open() loop and into the fallback code.

Doing so requires moving the retry_open label a few lines higher, with minimal impact. Furthermore, thread no longer needs to be decremented, since it is not incremented by the for loop when we jump back inside it; we only need to check that decreasing nthreads did not leave thread out of range.

The goal is to have all fallbacks handled in one place, since in the upcoming parallel code they will be handled separately.
Signed-off-by: Riccardo Mancini <rickyman7@gmail.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/4eca51443c786baaf6811b7cd8e73aafd97f7606.1629490974.git.rickyman7@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Parent 71efc48a
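Before the diff, here is a stripped-down, self-contained sketch of the restructured control flow (not the perf sources): fake_open(), fake_ignore_missing_thread() and the pids[] array are hypothetical stand-ins for perf_event_open(), evsel__ignore_missing_thread() and the evsel thread map, used only to illustrate why 'thread' is not decremented and why the bounds check at retry_open is needed.

/*
 * Simplified sketch only -- not the perf code.  Demonstrates the
 * retry_open/try_fallback flow introduced by this commit.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_THREADS 3

/* Hypothetical thread map; pid -1 marks a thread that has already exited. */
static int pids[NR_THREADS] = { 1001, -1, 1003 };

/* Stand-in for perf_event_open(): fails with -ESRCH for a vanished thread. */
static int fake_open(int pid)
{
    return pid < 0 ? -ESRCH : pid + 100;    /* fake fd */
}

/* Stand-in for evsel__ignore_missing_thread(): drop the vanished thread. */
static bool fake_ignore_missing_thread(int *nthreads, int thread, int err)
{
    if (err != -ESRCH)
        return false;
    /* Remove the entry; later threads shift into its slot. */
    for (int i = thread; i < *nthreads - 1; i++)
        pids[i] = pids[i + 1];
    (*nthreads)--;
    return true;
}

int main(void)
{
    int nthreads = NR_THREADS;
    int thread, fd, err = 0;

    for (thread = 0; thread < nthreads; thread++) {
retry_open:
        /* nthreads may have been lowered by the fallback code below. */
        if (thread >= nthreads)
            break;

        fd = fake_open(pids[thread]);
        if (fd < 0) {
            err = fd;
            goto try_fallback;
        }
        printf("thread %d (pid %d): fd %d\n", thread, pids[thread], fd);
    }
    return 0;

try_fallback:
    if (fake_ignore_missing_thread(&nthreads, thread, err)) {
        /*
         * Do not decrement 'thread': goto retry_open re-enters the loop
         * body without running the for-loop increment, and the bounds
         * check at retry_open handles the case where the lowered
         * nthreads left 'thread' out of range.
         */
        err = 0;
        goto retry_open;
    }
    fprintf(stderr, "open failed: %d\n", err);
    return 1;
}

When the missing thread is dropped, the entries after it shift down, so retrying the same index picks up the next thread; if the dropped entry was the last one, the thread >= nthreads check ends the loop.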
tools/perf/util/evsel.c
@@ -1656,7 +1656,7 @@ static int update_fds(struct evsel *evsel,
 	return 0;
 }
 
-static bool ignore_missing_thread(struct evsel *evsel,
+bool evsel__ignore_missing_thread(struct evsel *evsel,
          int nr_cpus, int cpu,
          struct perf_thread_map *threads,
          int thread, int err)
@@ -1993,12 +1993,15 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
 
     for (thread = 0; thread < nthreads; thread++) {
         int fd, group_fd;
+retry_open:
+        if (thread >= nthreads)
+            break;
 
         if (!evsel->cgrp && !evsel->core.system_wide)
             pid = perf_thread_map__pid(threads, thread);
 
         group_fd = get_group_fd(evsel, cpu, thread);
-retry_open:
+
         test_attr__ready();
 
         fd = perf_event_open(evsel, pid, cpus->map[cpu],
@@ -2016,20 +2019,6 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
         if (fd < 0) {
             err = -errno;
 
-            if (ignore_missing_thread(evsel, cpus->nr, cpu, threads, thread, err)) {
-                /*
-                 * We just removed 1 thread, so take a step
-                 * back on thread index and lower the upper
-                 * nthreads limit.
-                 */
-                nthreads--;
-                thread--;
-
-                /* ... and pretend like nothing have happened. */
-                err = 0;
-                continue;
-            }
-
             pr_debug2_peo("\nsys_perf_event_open failed, error %d\n",
                   err);
 
             goto try_fallback;
@@ -2069,6 +2058,14 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
 	return 0;
 
 try_fallback:
+	if (evsel__ignore_missing_thread(evsel, cpus->nr, cpu, threads, thread, err)) {
+		/* We just removed 1 thread, so lower the upper nthreads limit. */
+		nthreads--;
+
+		/* ... and pretend like nothing have happened. */
+		err = 0;
+		goto retry_open;
+	}
 	/*
 	 * perf stat needs between 5 and 22 fds per CPU. When we run out
 	 * of them try to increase the limits.
tools/perf/util/evsel.h
@@ -294,6 +294,11 @@ bool evsel__detect_missing_features(struct evsel *evsel);
 enum rlimit_action { NO_CHANGE, SET_TO_MAX, INCREASED_MAX };
 bool evsel__increase_rlimit(enum rlimit_action *set_rlimit);
 
+bool evsel__ignore_missing_thread(struct evsel *evsel,
+          int nr_cpus, int cpu,
+          struct perf_thread_map *threads,
+          int thread, int err);
+
 struct perf_sample;
 void *evsel__rawptr(struct evsel *evsel, struct perf_sample *sample, const char *name);