perf mmap: Be consistent when checking for an unmapped ring buffer

The previous patch is insufficient to cure the reported 'perf trace'
segfault: it only covers the perf_mmap__read_done() case, which merely
moves the segfault to perf_mmap__read_init(). Fix it by doing the same
refcount check there.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 8872481b ("perf mmap: Introduce perf_mmap__read_init()")
Link: https://lkml.kernel.org/r/20180326144127.GF18897@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
@@ -234,7 +234,7 @@ static int overwrite_rb_find_range(void *buf, int mask, u64 *start, u64 *end)
 /*
  * Report the start and end of the available data in ringbuffer
  */
-int perf_mmap__read_init(struct perf_mmap *md)
+static int __perf_mmap__read_init(struct perf_mmap *md)
 {
 	u64 head = perf_mmap__read_head(md);
 	u64 old = md->prev;
@@ -268,6 +268,17 @@ int perf_mmap__read_init(struct perf_mmap *md)
 	return 0;
 }
 
+int perf_mmap__read_init(struct perf_mmap *map)
+{
+	/*
+	 * Check if event was unmapped due to a POLLHUP/POLLERR.
+	 */
+	if (!refcount_read(&map->refcnt))
+		return -ENOENT;
+
+	return __perf_mmap__read_init(map);
+}
+
 int perf_mmap__push(struct perf_mmap *md, void *to,
 		    int push(void *to, void *buf, size_t size))
 {
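
For illustration only, a minimal sketch of how a per-mmap consumer loop could react to the new -ENOENT return. drain_one_mmap() is a hypothetical helper, and the included headers plus the perf_mmap__read_event()/perf_mmap__consume() argument lists are simplified assumptions, not the exact tools/perf API of this tree:

/* Assumed tools/perf headers; names are illustrative. */
#include "util/mmap.h"
#include "util/event.h"

static void drain_one_mmap(struct perf_mmap *map)
{
	union perf_event *event;

	/*
	 * With this patch, a ring buffer that was unmapped after a
	 * POLLHUP/POLLERR makes perf_mmap__read_init() return -ENOENT
	 * instead of dereferencing a stale mapping, so the caller can
	 * simply skip it.
	 */
	if (perf_mmap__read_init(map) < 0)
		return;

	while ((event = perf_mmap__read_event(map)) != NULL) {
		/* ... deliver the event to the tool ... */
		perf_mmap__consume(map);
	}

	perf_mmap__read_done(map);
}

Splitting the function keeps perf_mmap__read_init() as the guarded entry point while __perf_mmap__read_init() does the actual range computation, mirroring the refcount check the previous patch added for perf_mmap__read_done().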