Commit 71394fe5 authored by Andrey Ryabinin, committed by Linus Torvalds

mm: vmalloc: add flag preventing guard hole allocation

For instrumenting global variables, KASan will shadow the memory backing
modules.  So on module load we will need to allocate memory for the shadow
and map it at the shadow address that corresponds to the address returned
by module_alloc().
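
(For context: KASan maps every 8 bytes of kernel memory to one shadow byte.
A sketch of that translation, modelled on the kasan_mem_to_shadow() helper
from the KASAN patches of the same series; KASAN_SHADOW_SCALE_SHIFT and
KASAN_SHADOW_OFFSET are constants defined by those patches:)

static inline void *kasan_mem_to_shadow(const void *addr)
{
	/* one shadow byte covers 2^KASAN_SHADOW_SCALE_SHIFT (= 8) bytes */
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}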

__vmalloc_node_range() could be used for this purpose, except that it puts
a guard hole after the allocated area.  A guard hole in shadow memory would
be a problem, because at some future point we might need shadow memory at
the address occupied by that guard hole; we would then fail to allocate the
shadow for module_alloc().

Add a new vm_struct flag, 'VM_NO_GUARD', indicating that the vm area does
not have a guard hole. A sketch of the intended consumer follows.
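
(For a sense of how the flag is meant to be used, here is a sketch modelled
on the kasan_module_alloc() helper introduced later in the same series.  It
assumes the follow-up patch that teaches __vmalloc_node_range() to accept
vm_flags, and reuses the kasan_mem_to_shadow() mapping sketched above:)

int kasan_module_alloc(void *addr, size_t size)
{
	void *ret;
	size_t shadow_size;
	unsigned long shadow_start;

	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, PAGE_SIZE);

	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
		return -EINVAL;

	/* VM_NO_GUARD keeps the shadow contiguous: no guard hole is
	 * reserved after this area, so the shadow for the next module
	 * can be mapped directly behind it. */
	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
				   shadow_start + shadow_size,
				   GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
				   PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
				   __builtin_return_address(0));
	return ret ? 0 : -ENOMEM;
}
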
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent c420f167
include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;	/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_NO_GUARD		0x00000040	/* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */

 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);

 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }

 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);

mm/vmalloc.c
@@ -1324,10 +1324,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;

-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;

 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
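
(To make the behavioural difference concrete, a hypothetical contrast of
the two flag settings, not part of the patch; get_vm_area() and
get_vm_area_size() are the existing vmalloc APIs touched above:)

/* Hypothetical illustration, not from the patch. */
struct vm_struct *guarded = get_vm_area(PAGE_SIZE, 0);
struct vm_struct *packed  = get_vm_area(PAGE_SIZE, VM_NO_GUARD);

/* guarded->size == 2 * PAGE_SIZE: a guard page is reserved after it,
 * and get_vm_area_size(guarded) == PAGE_SIZE (guard excluded).
 * packed->size == PAGE_SIZE: no guard page, so the next vmap area may
 * start immediately behind it; get_vm_area_size(packed) is also
 * PAGE_SIZE, now because area->size already excludes any guard. */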