1. 12 Apr 2023 (1 commit)
  2. 06 Apr 2023 (9 commits)
  3. 30 Mar 2023 (5 commits)
  4. 23 Mar 2023 (10 commits)
  5. 15 Mar 2023 (2 commits)
  6. 08 Mar 2023 (3 commits)
  7. 28 Feb 2023 (2 commits)
    • arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN · 010338d7
      Authored by Ard Biesheuvel
      Our virtual KASLR displacement is a randomly chosen multiple of
      2 MiB plus an offset that is equal to the physical placement modulo 2
      MiB. This arrangement ensures that we can always use 2 MiB block
      mappings (or contiguous PTE mappings for 16k or 64k pages) to map the
      kernel.
      
      This means that a KASLR offset of less than 2 MiB is simply the product
      of this physical displacement, and no randomization has actually taken
      place. Currently, we use 'kaslr_offset() > 0' to decide whether or not
      randomization has occurred, and so we misidentify this case.
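
      For illustration, a minimal stand-alone sketch of that arithmetic
      (hypothetical addresses and stand-in names, not kernel code): with no
      random seed, a load address 64 KiB past a 2 MiB boundary still yields a
      non-zero offset.

          #include <stdio.h>

          #define MIN_KIMG_ALIGN (2UL * 1024 * 1024)  /* 2 MiB */

          int main(void)
          {
                  /* Hypothetical: firmware placed the image 64 KiB past a 2 MiB boundary. */
                  unsigned long phys_load   = 0x40010000UL;
                  unsigned long seed_blocks = 0;  /* no entropy: no random 2 MiB multiple */

                  /* offset = random multiple of 2 MiB + physical placement modulo 2 MiB */
                  unsigned long offset = seed_blocks * MIN_KIMG_ALIGN +
                                         (phys_load % MIN_KIMG_ALIGN);

                  printf("kaslr offset = 0x%lx\n", offset); /* 0x10000: non-zero, yet not randomized */
                  return 0;
          }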
      
      If the kernel image placement is not randomized, modules are allocated
      from a dedicated region below the kernel mapping, which is only used for
      modules and not for other vmalloc() or vmap() calls.
      
      When randomization is enabled, the kernel image is vmap()'ed randomly
      inside the vmalloc region, and modules are allocated in the vicinity of
      this mapping to ensure that relative references are always in range.
      However, unlike the dedicated module region below the vmalloc region,
      this region is not reserved exclusively for modules, and so ordinary
      vmalloc() calls may end up overlapping with it. This should rarely
      happen, given that vmalloc allocates bottom up, although it cannot be
      ruled out entirely.
      
      The misidentified case results in a placement of the kernel image within
      2 MiB of its default address. However, the logic that randomizes the
      module region is still invoked, and this could result in the module
      region overlapping with the start of the vmalloc region, instead of
      using the dedicated region below it. If this happens, a single large
      vmalloc() or vmap() call will use up the entire region, and leave no
      space for loading modules after that.
      
      Since commit 82046702 ("efi/libstub/arm64: Replace 'preferred'
      offset with alignment check"), this is much more likely to occur on
      systems that boot via EFI but lack an implementation of the EFI RNG
      protocol, as in that case, the EFI stub will decide to leave the image
      where it found it, and the EFI firmware uses 64k alignment only.
      
      Fix this, by correctly identifying the case where the virtual
      displacement is a result of the physical displacement only.
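
      A minimal sketch of the corrected test described above (simplified;
      kaslr_offset() and MIN_KIMG_ALIGN stand in for the kernel's existing
      helpers, and this is not necessarily the literal patch):

          #include <stdbool.h>

          #define MIN_KIMG_ALIGN (2UL * 1024 * 1024)

          extern unsigned long kaslr_offset(void);  /* virtual displacement of the kernel image */

          /*
           * Displacements below 2 MiB come from the physical placement alone,
           * so only offsets of at least MIN_KIMG_ALIGN count as randomized.
           */
          static inline bool kaslr_enabled(void)
          {
                  return kaslr_offset() >= MIN_KIMG_ALIGN;
          }
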
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20230223204101.1500373-1-ardb@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ftrace: forbid CALL_OPS with CC_OPTIMIZE_FOR_SIZE · b3f11af9
      Authored by Mark Rutland
      Florian reports that when building with CONFIG_CC_OPTIMIZE_FOR_SIZE=y,
      he sees "Misaligned patch-site" warnings at boot, e.g.
      
      | Misaligned patch-site bcm2836_arm_irqchip_handle_irq+0x0/0x88
      | WARNING: CPU: 0 PID: 0 at arch/arm64/kernel/ftrace.c:120 ftrace_call_adjust+0x4c/0x70
      
      This is because GCC will silently ignore `-falign-functions=N` when
      passed `-Os`, resulting in functions not being aligned as we expect.
      This is a known issue, and to account for this we modified the kernel to
      avoid `-Os` generally. Unfortunately we forgot to account for
      CONFIG_CC_OPTIMIZE_FOR_SIZE.
      
      Forbid the use of CALL_OPS with CONFIG_CC_OPTIMIZE_FOR_SIZE=y to prevent
      this issue. All existing ftrace features will work as before, though
      without the performance benefit of CALL_OPS.
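
      For background, CALL_OPS stores an ftrace_ops pointer in a literal placed
      immediately before each function's patch-site, which is why the site must
      be 8-byte aligned; the warning quoted above comes from a check of roughly
      this shape (a simplified, hypothetical sketch, not the kernel's exact code):

          #include <stdbool.h>
          #include <stdio.h>

          /* The patch-site must be 8-byte aligned so the preceding literal is naturally aligned. */
          static bool patch_site_aligned(unsigned long addr)
          {
                  if (addr & 7) {
                          fprintf(stderr, "Misaligned patch-site at 0x%lx\n", addr);
                          return false;
                  }
                  return true;
          }

          int main(void)
          {
                  patch_site_aligned(0xffff800008010000UL); /* aligned: fine */
                  patch_site_aligned(0xffff800008010004UL); /* what -Os can produce without -falign-functions */
                  return 0;
          }
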
      Reported-by: Florian Fainelli <f.fainelli@gmail.com>
      Link: http://lore.kernel.org/linux-arm-kernel/2d9284c3-3805-402b-5423-520ced56d047@gmail.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Stefan Wahren <stefan.wahren@i2se.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: Florian Fainelli <f.fainelli@gmail.com>
      Link: https://lore.kernel.org/r/20230227115819.365630-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 27 Feb 2023 (2 commits)
  9. 24 Feb 2023 (2 commits)
  10. 23 Feb 2023 (1 commit)
  11. 22 Feb 2023 (2 commits)
  12. 21 Feb 2023 (1 commit)
    • arm64: fix .idmap.text assertion for large kernels · d5417081
      Authored by Mark Rutland
      When building a kernel with many debug options enabled (which happens in
      test configurations used by myself and syzbot), the kernel can become
      large enough that portions of .text can be more than 128M away from
      .idmap.text (which is placed inside the .rodata section). Where idmap
      code branches into .text, the linker will place veneers in the
      .idmap.text section to make those branches possible.
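
      To make the range problem concrete, here is a small stand-alone sketch
      (hypothetical addresses, not kernel or linker code) of the check the
      linker is effectively performing: BL encodes a signed 26-bit word offset,
      giving it a reach of +/-128 MiB, and anything beyond that needs a veneer.

          #include <stdbool.h>
          #include <stdio.h>

          #define BL_RANGE (128L * 1024 * 1024)  /* BL: signed 26-bit word offset => +/-128 MiB */

          static bool needs_veneer(unsigned long caller, unsigned long target)
          {
                  long delta = (long)(target - caller);

                  return delta < -BL_RANGE || delta >= BL_RANGE;
          }

          int main(void)
          {
                  unsigned long idmap_site = 0xffffffc01e48e5c0UL; /* .idmap.text, as in the map below */
                  unsigned long text_sym   = 0xffffffc008010000UL; /* a distant .text symbol */

                  printf("veneer needed: %s\n", needs_veneer(idmap_site, text_sym) ? "yes" : "no");
                  return 0;
          }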
      
      Unfortunately, as Ard reports, GNU LD has been observed to add 4K of
      padding when adding such veneers, e.g.
      
      | .idmap.text    0xffffffc01e48e5c0      0x32c arch/arm64/mm/proc.o
      |                0xffffffc01e48e5c0                idmap_cpu_replace_ttbr1
      |                0xffffffc01e48e600                idmap_kpti_install_ng_mappings
      |                0xffffffc01e48e800                __cpu_setup
      | *fill*         0xffffffc01e48e8ec        0x4
      | .idmap.text.stub
      |                0xffffffc01e48e8f0       0x18 linker stubs
      |                0xffffffc01e48f8f0                __idmap_text_end = .
      |                0xffffffc01e48f000                . = ALIGN (0x1000)
      | *fill*         0xffffffc01e48f8f0      0x710
      |                0xffffffc01e490000                idmap_pg_dir = .
      
      This makes the __idmap_text_start .. __idmap_text_end region bigger than
      the 4K we require it to fit within, and triggers an assertion in arm64's
      vmlinux.lds.S, which breaks the build:
      
      | LD      .tmp_vmlinux.kallsyms1
      | aarch64-linux-gnu-ld: ID map text too big or misaligned
      | make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
      | make: *** [Makefile:1264: vmlinux] Error 2
      
      Avoid this by using an `ADRP+ADD+BLR` sequence for branches out of
      .idmap.text, which avoids the need for veneers. These branches are only
      executed once per boot, and only when the MMU is on, so there should be
      no noticeable performance penalty in replacing `BL` with `ADRP+ADD+BLR`.
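
      As a sketch of that transformation (arm64 assembly; dcache_clean_poc
      stands in for any far-away callee, and the scratch register here is
      chosen arbitrarily rather than taken from the actual patch):

          // Direct call: BL only reaches +/-128 MiB, so the linker may have to
          // emit a veneer in .idmap.text to reach a distant .text symbol.
          bl      dcache_clean_poc

          // Veneer-free form: ADRP+ADD materialise the symbol's address
          // (reachable within +/-4 GiB of the caller) and BLR branches
          // through the register, so no veneer is ever needed.
          adrp    x9, dcache_clean_poc
          add     x9, x9, :lo12:dcache_clean_poc
          blr     x9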
      
      At the same time, remove the "x" and "w" attributes when placing code in
      .idmap.text, as these are not necessary, and this will prevent the
      linker from assuming that it is safe to place PLTs into .idmap.text,
      causing it to warn if and when there are out-of-range branches within
      .idmap.text, e.g.
      
      |   LD      .tmp_vmlinux.kallsyms1
      | arch/arm64/kernel/head.o: in function `primary_entry':
      | (.idmap.text+0x1c): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | arch/arm64/kernel/head.o: in function `init_el2':
      | (.idmap.text+0x88): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | make[1]: *** [scripts/Makefile.vmlinux:34: vmlinux] Error 1
      | make: *** [Makefile:1252: vmlinux] Error 2
      
      Thus, if future changes add out-of-range branches in .idmap.text, it
      should be easy enough to identify those from the resulting linker
      errors.
      
      Reported-by: syzbot+f8ac312e31226e23302b@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-arm-kernel/00000000000028ea4105f4e2ef54@google.com/
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20230220162317.1581208-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>