Commit ae646f0b, authored by Jeffrey Hugo, committed by Linus Torvalds

init: fix false positives in W+X checking

load_module() creates W+X mappings via __vmalloc_node_range() (from
layout_and_allocate()->move_module()->module_alloc()) by using
PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
"call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().

This is a problem because call_rcu_sched() queues work, which can be run
after debug_checkwx() is run, resulting in a race condition.  If hit,
the race results in a nasty splat about insecure W+X mappings, which
results in a poor user experience as these are not the mappings that
debug_checkwx() is intended to catch.
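
Schematically, the losing ordering is (a sketch, not a trace from the
original reports):

	init thread                             RCU callback
	-----------                             ------------
	load_module()
	  module_alloc()        <-- W+X mapping created
	do_init_module()
	  call_rcu_sched(&freeinit->rcu, do_free_init)
	                        <-- callback only queued, not yet run
	mark_readonly()
	  mark_rodata_ro()
	    debug_checkwx()     <-- W+X mapping still live: splat
	                                        do_free_init()
	                                        <-- cleanup lands too late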

This issue is observed on multiple arm64 platforms, and has been
artificially triggered on an x86 platform.

Address the race by flushing the queued work before running the
arch-defined mark_rodata_ro() which then calls debug_checkwx().
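
As a sketch of why rcu_barrier_sched() is the right primitive here
(pre-4.20 RCU-sched API names):

	/*
	 * Queues do_free_init() to run after a grace period and returns
	 * immediately, so the W+X init mapping can outlive this call.
	 */
	call_rcu_sched(&freeinit->rcu, do_free_init);

	/*
	 * Blocks until every callback queued by earlier call_rcu_sched()
	 * calls -- including any pending do_free_init() -- has finished
	 * executing, so no stale init mapping remains to be seen by
	 * debug_checkwx().
	 */
	rcu_barrier_sched();

Note that a plain synchronize_sched() would not suffice: it waits only
for a grace period to elapse, not for already-queued callbacks to
finish running.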

Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@codeaurora.org
Fixes: e1a58320 ("x86/mm: Warn on W^X mappings")
Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
Reported-by: Timur Tabi <timur@codeaurora.org>
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent 4ba281d5
--- a/init/main.c
+++ b/init/main.c
@@ -1034,6 +1034,13 @@ __setup("rodata=", set_debug_rodata);
 static void mark_readonly(void)
 {
 	if (rodata_enabled) {
+		/*
+		 * load_module() results in W+X mappings, which are cleaned up
+		 * with call_rcu_sched(). Let's make sure that queued work is
+		 * flushed so that we don't hit false positives looking for
+		 * insecure pages which are W+X.
+		 */
+		rcu_barrier_sched();
 		mark_rodata_ro();
 		rodata_test();
 	} else
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
 	 * walking this with preempt disabled. In all the failure paths, we
 	 * call synchronize_sched(), but we don't want to slow down the success
 	 * path, so use actual RCU here.
+	 * Note that module_alloc() on most architectures creates W+X page
+	 * mappings which won't be cleaned up until do_free_init() runs. Any
+	 * code such as mark_rodata_ro() which depends on those mappings to
+	 * be cleaned up needs to sync with the queued work - ie
+	 * rcu_barrier_sched()
 	 */
 	call_rcu_sched(&freeinit->rcu, do_free_init);
 	mutex_unlock(&module_mutex);