Commit f2b9ba87 authored by Ard Biesheuvel, committed by Will Deacon

arm64/kernel: kaslr: reduce module randomization range to 4 GB

We currently have to rely on the GCC large code model for KASLR for
two distinct but related reasons:
- if we enable full randomization, modules will be loaded very far away
  from the core kernel, where they are out of range for ADRP instructions,
- even without full randomization, the 128 MB module region is no longer
  fully reserved for kernel modules, which means there is a small but
  non-zero chance that the normal bottom-up allocation of other vmalloc
  regions collides with it and uses up the range for other things.

Large model code is suboptimal, given that each symbol reference involves
a literal load that goes through the D-cache, reducing cache utilization.
But more importantly, literals are not instructions but part of .text
nonetheless, and hence mapped with executable permissions.
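
For reference, ADRP forms a PC-relative address at 4 KB page granularity
with a reach of roughly +/- 4 GB, which is where the 4 GB figure below comes
from. A minimal standalone sketch of that reachability rule follows;
adrp_can_reach and the addresses are made up for illustration and are not
kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_4G 0x100000000ULL

/* hypothetical helper: can an ADRP at 'pc' address the page containing 'target'? */
static bool adrp_can_reach(uint64_t pc, uint64_t target)
{
    /* ADRP encodes a signed 21-bit page offset, i.e. roughly +/- 4 GB */
    int64_t delta = (int64_t)((target & ~0xfffULL) - (pc & ~0xfffULL));

    return delta >= -(int64_t)SZ_4G && delta < (int64_t)SZ_4G;
}

int main(void)
{
    uint64_t kernel_text = 0xffff000010080000ULL;      /* made-up address */
    uint64_t module_near = kernel_text + (3ULL << 30); /* 3 GB away: in range */
    uint64_t module_far  = kernel_text + (8ULL << 30); /* 8 GB away: needs PLTs/large model */

    printf("near reachable: %d, far reachable: %d\n",
           adrp_can_reach(module_near, kernel_text),
           adrp_can_reach(module_far, kernel_text));
    return 0;
}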

So let's get rid of our dependency on the large model for KASLR, by:
- reducing the full randomization range to 4 GB, thereby ensuring that
  ADRP references between modules and the kernel are always in range,
- reducing the spillover range to 4 GB as well, so that we fall back to a
  region that is still guaranteed to be in range,
- moving the randomization window of the core kernel to the middle of the
  VMALLOC space.

Note that KASAN keeps the module region at its default location outside of
the vmalloc space, so in that case the kernel is instead kept well within
4 GB of the module region.
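
To make the window arithmetic concrete, here is a standalone sketch of the
computation described above. The values of VA_BITS, MODULES_VADDR, the seed
and the kernel image size are made-up stand-ins rather than the real arm64
constants, and kimage_base is a hypothetical placement of the kernel image.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define VA_BITS        48
#define SZ_2M          0x00200000ULL
#define SZ_4G          0x100000000ULL
#define MODULES_VADDR  0xffff000008000000ULL   /* illustrative module region base */
#define KERNEL_SIZE    (64ULL << 20)           /* assume a 64 MB kernel image */

int main(void)
{
    uint64_t seed = 0x0123456789abcdefULL;     /* stand-in for the KASLR seed */

    /* 2 MB aligned random offset spanning a (VA_BITS - 2)-bit range ... */
    uint64_t mask   = ((1ULL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
    /* ... biased up by 1 << (VA_BITS - 3), i.e. into the middle half of vmalloc */
    uint64_t offset = (1ULL << (VA_BITS - 3)) + (seed & mask);

    uint64_t kimage_base = MODULES_VADDR + offset;   /* hypothetical kernel placement */
    uint64_t kernel_end  = kimage_base + KERNEL_SIZE;

    /* 4 GB module window whose top never drops below the end of the kernel image */
    uint64_t module_range      = SZ_4G - KERNEL_SIZE;
    uint64_t module_alloc_base = kernel_end - SZ_4G;
    if (module_alloc_base < MODULES_VADDR)
        module_alloc_base = MODULES_VADDR;

    /* both ends of the module window stay within ADRP reach of the kernel */
    assert(kernel_end - module_alloc_base <= SZ_4G);
    assert(module_alloc_base + module_range <= kimage_base + SZ_4G);

    printf("offset=%#llx module_alloc_base=%#llx module_range=%#llx\n",
           (unsigned long long)offset,
           (unsigned long long)module_alloc_base,
           (unsigned long long)module_range);
    return 0;
}

Since the window starts no lower than 4 GB below the end of the kernel image
and spans at most 4 GB minus the kernel size, every module allocation stays
within ADRP reach of every kernel symbol, which is what allows the
large-code-model dependency to be dropped.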
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Parent 5e8307b9
arch/arm64/Kconfig:

@@ -1110,7 +1110,6 @@ config ARM64_MODULE_CMODEL_LARGE

 config ARM64_MODULE_PLTS
        bool
-       select ARM64_MODULE_CMODEL_LARGE
        select HAVE_MOD_ARCH_SPECIFIC

 config RELOCATABLE
@@ -1144,12 +1143,12 @@ config RANDOMIZE_BASE
          If unsure, say N.

 config RANDOMIZE_MODULE_REGION_FULL
-       bool "Randomize the module region independently from the core kernel"
+       bool "Randomize the module region over a 4 GB range"
        depends on RANDOMIZE_BASE
        default y
        help
-         Randomizes the location of the module region without considering the
-         location of the core kernel. This way, it is impossible for modules
+         Randomizes the location of the module region inside a 4 GB window
+         covering the core kernel. This way, it is less likely for modules
          to leak information about the location of core kernel data structures
          but it does imply that function calls between modules and the core
          kernel will need to be resolved via veneers in the module PLT.
arch/arm64/kernel/kaslr.c:

@@ -117,13 +117,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
        /*
         * OK, so we are proceeding with KASLR enabled. Calculate a suitable
         * kernel image offset from the seed. Let's place the kernel in the
-        * lower half of the VMALLOC area (VA_BITS - 2).
+        * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+        * the lower and upper quarters to avoid colliding with other
+        * allocations.
         * Even if we could randomize at page granularity for 16k and 64k pages,
         * let's always round to 2 MB so we don't interfere with the ability to
         * map using contiguous PTEs
         */
        mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-       offset = seed & mask;
+       offset = BIT(VA_BITS - 3) + (seed & mask);

        /* use the top 16 bits to randomize the linear region */
        memstart_offset_seed = seed >> 48;
@@ -134,21 +136,23 @@ u64 __init kaslr_early_init(u64 dt_phys)
                 * vmalloc region, since shadow memory is allocated for each
                 * module at load time, whereas the vmalloc region is shadowed
                 * by KASAN zero pages. So keep modules out of the vmalloc
-                * region if KASAN is enabled.
+                * region if KASAN is enabled, and put the kernel well within
+                * 4 GB of the module region.
                 */
-               return offset;
+               return offset % SZ_2G;

        if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
                /*
-                * Randomize the module region independently from the core
-                * kernel. This prevents modules from leaking any information
+                * Randomize the module region over a 4 GB window covering the
+                * kernel. This reduces the risk of modules leaking information
                 * about the address of the kernel itself, but results in
                 * branches between modules and the core kernel that are
                 * resolved via PLTs. (Branches between modules will be
                 * resolved normally.)
                 */
-               module_range = VMALLOC_END - VMALLOC_START - MODULES_VSIZE;
-               module_alloc_base = VMALLOC_START;
+               module_range = SZ_4G - (u64)(_end - _stext);
+               module_alloc_base = max((u64)_end + offset - SZ_4G,
+                                       (u64)MODULES_VADDR);
        } else {
                /*
                 * Randomize the module region by setting module_alloc_base to
arch/arm64/kernel/module.c:

@@ -55,9 +55,10 @@ void *module_alloc(unsigned long size)
         * less likely that the module region gets exhausted, so we
         * can simply omit this fallback in that case.
         */
-       p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
-                               VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
-                               NUMA_NO_NODE, __builtin_return_address(0));
+       p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+                               module_alloc_base + SZ_4G, GFP_KERNEL,
+                               PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+                               __builtin_return_address(0));

        if (p && (kasan_module_alloc(p, size) < 0)) {
                vfree(p);
include/linux/sizes.h:

@@ -8,6 +8,8 @@
 #ifndef __LINUX_SIZES_H__
 #define __LINUX_SIZES_H__

+#include <linux/const.h>
+
 #define SZ_1            0x00000001
 #define SZ_2            0x00000002
 #define SZ_4            0x00000004
@@ -44,4 +46,6 @@
 #define SZ_1G           0x40000000
 #define SZ_2G           0x80000000

+#define SZ_4G           _AC(0x100000000, ULL)
+
 #endif /* __LINUX_SIZES_H__ */
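
SZ_4G does not fit in 32 bits, which is why it is defined via _AC() and why
the <linux/const.h> include is added above: the ULL suffix is pasted on only
when the header is compiled as C, so the constant stays usable from assembly
as well. A simplified rendering of that helper, for illustration only (the
real definition lives in the kernel's const.h headers):

/* simplified _AC() helper: the suffix only exists for the C compiler */
#ifdef __ASSEMBLY__
#define _AC(X, Y)   X           /* assembler: bare constant, no C suffixes */
#else
#define __AC(X, Y)  (X##Y)      /* C: paste the type suffix onto the literal */
#define _AC(X, Y)   __AC(X, Y)
#endif

#define SZ_4G       _AC(0x100000000, ULL)   /* 0x100000000ULL in C, 0x100000000 in asm */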