Commit 448694a1 authored by Jan Glauber, committed by Rusty Russell

module: undo module RONX protection correctly.

While debugging I stumbled over two problems in the code that protects module
pages.

The first issue is that disabling the protection before freeing the init section or unloading a module is not symmetric with enabling it. For instance, when pages are set to RO, the range from module_core to module_core + core_ro_size is protected; but when the module is unloaded, the range from module_core to module_core + core_size is set back to RW. So pages that were never set to RO are also changed to RW. This is not critical, but IMHO it should be symmetric.
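
To make the asymmetry concrete, here is a minimal sketch of the two ranges involved; the enable-side call is inferred from the description above and from the helpers visible in the diff below, not quoted verbatim:

    /* Enable path: only the RO portion of the core region is protected. */
    set_page_attributes(mod->module_core,
                        mod->module_core + mod->core_ro_size,
                        set_memory_ro);

    /* Old undo path: the whole core region, protected or not, went back to RW. */
    total_pages = MOD_NUMBER_OF_PAGES(mod->module_core, mod->core_size);
    set_memory_rw((unsigned long)mod->module_core, total_pages);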

The second issue is that while set_memory_rw and set_memory_ro are used for the RO/RW changes, only set_memory_nx is involved on the NX/X side. One would expect the inverse function to be called when the NX protection is to be removed, which is not the case here, unless I'm missing something.
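
In other words, the old undo path re-applied set_memory_nx on release instead of calling an inverse. The sketch below contrasts it with the symmetric undo introduced by this patch (taken from the diff below, clearing NX via the new set_memory_x):

    /* Old undo path: marks the pages NX again rather than reversing NX. */
    set_memory_nx((unsigned long)mod->module_core, total_pages);
    set_memory_rw((unsigned long)mod->module_core, total_pages);

    /* New undo path: clear NX only on the range that was actually marked NX. */
    set_page_attributes(mod->module_core + mod->core_text_size,
                        mod->module_core + mod->core_size,
                        set_memory_x);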
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Parent 4d10380e
@@ -11,5 +11,6 @@ void kernel_map_pages(struct page *page, int numpages, int enable);
 int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
 
 #endif /* _S390_CACHEFLUSH_H */
@@ -54,3 +54,8 @@ int set_memory_nx(unsigned long addr, int numpages)
         return 0;
 }
 EXPORT_SYMBOL_GPL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+        return 0;
+}
@@ -1607,22 +1607,23 @@ static void set_section_ro_nx(void *base,
         }
 }
 
-/* Setting memory back to RW+NX before releasing it */
+/* Setting memory back to W+X before releasing it */
 void unset_section_ro_nx(struct module *mod, void *module_region)
 {
-        unsigned long total_pages;
-
         if (mod->module_core == module_region) {
-                /* Set core as NX+RW */
-                total_pages = MOD_NUMBER_OF_PAGES(mod->module_core, mod->core_size);
-                set_memory_nx((unsigned long)mod->module_core, total_pages);
-                set_memory_rw((unsigned long)mod->module_core, total_pages);
+                set_page_attributes(mod->module_core + mod->core_text_size,
+                        mod->module_core + mod->core_size,
+                        set_memory_x);
+                set_page_attributes(mod->module_core,
+                        mod->module_core + mod->core_ro_size,
+                        set_memory_rw);
         } else if (mod->module_init == module_region) {
-                /* Set init as NX+RW */
-                total_pages = MOD_NUMBER_OF_PAGES(mod->module_init, mod->init_size);
-                set_memory_nx((unsigned long)mod->module_init, total_pages);
-                set_memory_rw((unsigned long)mod->module_init, total_pages);
+                set_page_attributes(mod->module_init + mod->init_text_size,
+                        mod->module_init + mod->init_size,
+                        set_memory_x);
+                set_page_attributes(mod->module_init,
+                        mod->module_init + mod->init_ro_size,
+                        set_memory_rw);
         }
 }
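
The new unset_section_ro_nx() relies on the existing set_page_attributes() helper in kernel/module.c instead of computing page counts by hand. A sketch of what that helper looks like in module.c of this era (reconstructed from memory, so treat the exact body as an assumption):

    static void set_page_attributes(void *start, void *end,
                                    int (*set)(unsigned long start, int num_pages))
    {
            unsigned long begin_pfn = PFN_DOWN((unsigned long)start);
            unsigned long end_pfn = PFN_DOWN((unsigned long)end);

            /* Apply 'set' from the page containing start up to, but not
             * including, the page containing end. */
            if (end_pfn > begin_pfn)
                    set(begin_pfn << PAGE_SHIFT, end_pfn - begin_pfn);
    }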