Commit 15a601eb authored by Mathieu Desnoyers, committed by Ingo Molnar

x86: fix text_poke for vmalloced pages

* Ingo Molnar (mingo@elte.hu) wrote:
>
> * Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> wrote:
>
> > The shadow vmap for DEBUG_RODATA kernel text modification uses
> > virt_to_page to get the pages from the pointer address.
> >
> > However, I think vmalloc_to_page would be required in case the page is
> > used for modules.
> >
> > Since only the core kernel text is marked read-only, use
> > kernel_text_address() to make sure we only shadow map the core kernel
> > text, not modules.
>
> actually, i think we should mark module text readonly too.
>

Yes, but in the meantime, the x86 tree would need this patch to make
kprobes work correctly on modules.

I suspect that without this fix, together with the enhanced hotplug and
kprobes patch, kprobes will use text_poke to insert breakpoints in modules
(which live in vmalloc'ed pages); text_poke will then map the wrong pages
and corrupt random kernel locations instead of updating the correct page.

Work to write-protect the module pages should clearly be done, but it can
come at a later time. For example, we have to make sure it interacts
correctly with page allocation debugging.

Here is the patch against x86.git 2.6.25-rc5:

The shadow vmap for DEBUG_RODATA kernel text modification uses virt_to_page to
get the pages from the pointer address.

However, I think vmalloc_to_page would be required in case the page is used for
modules.

Since only the core kernel text is marked read-only, use kernel_text_address()
to make sure we only shadow map the core kernel text, not modules.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Parent e5699a82
@@ -515,7 +515,7 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
 	BUG_ON(len > sizeof(long));
 	BUG_ON((((long)addr + len - 1) & ~(sizeof(long) - 1))
 		- ((long)addr & ~(sizeof(long) - 1)));
-	{
+	if (kernel_text_address((unsigned long)addr)) {
 		struct page *pages[2] = { virt_to_page(addr),
 			virt_to_page(addr + PAGE_SIZE) };
 		if (!pages[1])
@@ -526,6 +526,13 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
 		memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
 		local_irq_restore(flags);
 		vunmap(vaddr);
+	} else {
+		/*
+		 * modules are in vmalloc'ed memory, always writable.
+		 */
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
 	}
 	sync_core();
 	/* Could also do a CLFLUSH here to speed up CPU recovery; but
...
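
For reference, the control flow of text_poke() after this patch looks
roughly as follows. This is a condensed, assumed sketch rather than the
verbatim kernel code: the BUG_ON sanity checks and the single-page case
(when pages[1] is NULL) are omitted for brevity.

	/*
	 * Condensed sketch of text_poke() after the patch -- not verbatim;
	 * sanity checks and the nr_pages == 1 case are left out.
	 */
	void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
	{
		unsigned long flags;
		char *vaddr;

		if (kernel_text_address((unsigned long)addr)) {
			/* Core kernel text may be read-only under DEBUG_RODATA:
			 * build a temporary writable shadow mapping with vmap(). */
			struct page *pages[2] = { virt_to_page(addr),
						  virt_to_page(addr + PAGE_SIZE) };
			vaddr = vmap(pages, 2, VM_MAP, PAGE_KERNEL);
			local_irq_save(flags);
			memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
			local_irq_restore(flags);
			vunmap(vaddr);
		} else {
			/* Module text is vmalloc'ed and left writable. */
			local_irq_save(flags);
			memcpy(addr, opcode, len);
			local_irq_restore(flags);
		}
		sync_core();
		return addr;
	}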