1. 03 Jun 2016: 1 commit
    • arm64: fix alignment when RANDOMIZE_TEXT_OFFSET is enabled · aed7eb83
      Committed by Mark Rutland
      With ARM64_64K_PAGES and RANDOMIZE_TEXT_OFFSET enabled, we hit the
      following issue during boot:
      
      kernel BUG at arch/arm64/mm/mmu.c:480!
      Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 0 Comm: swapper Not tainted 4.6.0 #310
      Hardware name: ARM Juno development board (r2) (DT)
      task: ffff000008d58a80 ti: ffff000008d30000 task.ti: ffff000008d30000
      PC is at map_kernel_segment+0x44/0xb0
      LR is at paging_init+0x84/0x5b0
      pc : [<ffff000008c450b4>] lr : [<ffff000008c451a4>] pstate: 600002c5
      
      Call trace:
      [<ffff000008c450b4>] map_kernel_segment+0x44/0xb0
      [<ffff000008c451a4>] paging_init+0x84/0x5b0
      [<ffff000008c42728>] setup_arch+0x198/0x534
      [<ffff000008c40848>] start_kernel+0x70/0x388
      [<ffff000008c401bc>] __primary_switched+0x30/0x74
      
      Commit 7eb90f2f ("arm64: cover the .head.text section in the .text
      segment mapping") removed the alignment between the .head.text and .text
      sections, and used the _text rather than the _stext interval for mapping
      the .text segment.
      
      Prior to this commit _stext was always section aligned and didn't cause
      any issue even when RANDOMIZE_TEXT_OFFSET was enabled. Since that
      alignment has been removed and _text is used to map the .text segment,
      we need to ensure _text is always page aligned when RANDOMIZE_TEXT_OFFSET
      is enabled.
      
      This patch adds logic to TEXT_OFFSET fuzzing to ensure that the offset
      is always aligned to the kernel page size. To ensure this, we rely on
      the PAGE_SHIFT being available via Kconfig.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: 7eb90f2f ("arm64: cover the .head.text section in the .text segment mapping")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
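
      The fix itself lives in the arm64 Makefile's TEXT_OFFSET generation, which is
      not reproduced here. As a rough, stand-alone C sketch of just the alignment
      arithmetic (the 64K page_shift, the use of rand(), and all values are
      assumptions for illustration), rounding a random offset in [0..2MB) down to a
      multiple of the page size looks like this:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
                const unsigned int page_shift = 16;          /* assumption: 64K pages */
                const uint64_t page_size = 1ULL << page_shift;
                const uint64_t range = 2 * 1024 * 1024;      /* offsets live in [0..2MB) */
                uint64_t offset, aligned;

                srand((unsigned int)time(NULL));
                offset = (uint64_t)rand() % range;

                /* Clearing the low bits keeps the offset a multiple of the page size. */
                aligned = offset & ~(page_size - 1);

                printf("raw 0x%06llx -> page-aligned 0x%06llx\n",
                       (unsigned long long)offset, (unsigned long long)aligned);
                return 0;
        }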
  2. 24 Feb 2016: 2 commits
    • arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
      Committed by Ard Biesheuvel
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
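
      For background on what the early boot code then has to do, here is a
      simplified user-space model of applying the dynamic relocations of a PIE
      image. The struct mirrors Elf64_Rela, but relocate_image() is an invented
      helper and handling only R_AARCH64_RELATIVE is a simplification, not the
      kernel's actual code:

        #include <stddef.h>
        #include <stdint.h>

        #define R_AARCH64_RELATIVE 1027   /* "patch slot with base + addend" */

        struct rela {                     /* mirrors Elf64_Rela */
                uint64_t r_offset;        /* image-relative location to patch */
                uint64_t r_info;          /* relocation type in the low 32 bits */
                int64_t  r_addend;        /* link-time value relative to the image base */
        };

        /* Re-base the image: patch every RELATIVE slot against the runtime base. */
        void relocate_image(uint8_t *image, uint64_t runtime_base,
                            const struct rela *rela, size_t count)
        {
                for (size_t i = 0; i < count; i++) {
                        if ((uint32_t)rela[i].r_info != R_AARCH64_RELATIVE)
                                continue;
                        *(uint64_t *)(image + rela[i].r_offset) =
                                runtime_base + (uint64_t)rela[i].r_addend;
                }
        }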
    • arm64: add support for module PLTs · fd045f6c
      Committed by Ard Biesheuvel
      This adds support for emitting PLTs at module load time for relative
      branches that are out of range. This is a prerequisite for KASLR, which
      may place the kernel and the modules anywhere in the vmalloc area,
      making it more likely that branch target offsets exceed the maximum
      range of +/- 128 MB.
      
      In this version, I removed the distinction between relocations against
      .init executable sections and ordinary executable sections. The reason
      is that it is hardly worth the trouble, given that .init.text usually
      does not contain that many far branches, and this version now only
      reserves PLT entry space for jump and call relocations against undefined
      symbols (since symbols defined in the same module can be assumed to be
      within +/- 128 MB).
      
      For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
      built with -mcmodel=large gives the following relocation counts:
      
                          relocs    branches   unique     !local
        .text              3925       3347       518        219
        .init.text           11          8         7          1
        .exit.text            4          4         4          1
        .text.unlikely       81         67        36         17
      
      ('unique' means branches to unique type/symbol/addend combos, of which
      !local is the subset referring to undefined symbols)
      
      IOW, we are only emitting a single PLT entry for the .init sections, and
      we are better off just adding it to the core PLT section instead.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
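
      As a hedged sketch of the core test only (not the kernel's module loader,
      which additionally deduplicates branch targets as described above;
      needs_plt() is an invented helper name), a jump or call needs a PLT entry
      when its displacement falls outside the direct-branch range:

        #include <stdbool.h>
        #include <stdint.h>

        /* AArch64 B/BL immediates reach roughly +/- 128 MB from the branch. */
        #define BRANCH_RANGE (128ULL * 1024 * 1024)

        /* Does a call from 'place' to 'target' need to go through a PLT entry? */
        bool needs_plt(uint64_t place, uint64_t target)
        {
                int64_t disp = (int64_t)(target - place);

                return disp < -(int64_t)BRANCH_RANGE || disp >= (int64_t)BRANCH_RANGE;
        }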
  3. 19 Feb 2016: 1 commit
  4. 26 Jan 2016: 2 commits
  5. 13 Oct 2015: 2 commits
    • arm64: add KASAN support · 39d114dd
      Committed by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There was
      no hole big enough for this, so the virtual addresses for the shadow were
      stolen from the vmalloc area.
      
      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan currently doesn't track
      (vmalloc).
      After the physical memory has been mapped, pages for the shadow memory
      are allocated and mapped.
      
      Functions like memset/memmove/memcpy do a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is important
      to catch it. Compiler instrumentation cannot do this since these
      functions are written in assembly.
      KASan replaces the memory functions with manually instrumented variants.
      The original functions are declared as weak symbols so that the strong
      definitions in mm/kasan/kasan.c can replace them. The original functions
      also have aliases with a '__' prefix, so the non-instrumented variants
      can be called if needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c).
      For such files the original mem* functions are replaced (via #define)
      with the prefixed variants to disable memory access checks.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
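
      The "1/8 of the kernel address space" figure follows from the generic KASAN
      shadow mapping, in which one shadow byte describes eight bytes of kernel
      address space. Below is a small user-space sketch of that translation; the
      KASAN_SHADOW_OFFSET value and the example address are placeholders, not the
      values the arm64 port actually uses:

        #include <stdint.h>
        #include <stdio.h>

        #define KASAN_SHADOW_SCALE_SHIFT 3                  /* 1 shadow byte per 8 bytes */
        #define KASAN_SHADOW_OFFSET 0xdfff200000000000ULL   /* placeholder value */

        uint64_t kasan_mem_to_shadow(uint64_t addr)
        {
                return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
        }

        int main(void)
        {
                uint64_t addr = 0xffff000008d30000ULL;      /* arbitrary kernel address */

                printf("shadow(%#llx) = %#llx\n",
                       (unsigned long long)addr,
                       (unsigned long long)kasan_mem_to_shadow(addr));
                return 0;
        }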
    • arm64: errata: use KBUILD_CFLAGS_MODULE for erratum #843419 · b6dd8e07
      Committed by Will Deacon
      Commit df057cc7 ("arm64: errata: add module build workaround for
      erratum #843419") sets CFLAGS_MODULE to ensure that the large memory
      model is used by the compiler when building kernel modules.
      
      However, CFLAGS_MODULE is an environment variable and is intended to be
      overridden on the command line, which appears to be the case with the
      Ubuntu kernel packaging system, so use KBUILD_CFLAGS_MODULE instead.
      
      Cc: <stable@vger.kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Fixes: df057cc7 ("arm64: errata: add module build workaround for erratum #843419")
      Reported-by: Dann Frazier <dann.frazier@canonical.com>
      Tested-by: Dann Frazier <dann.frazier@canonical.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 17 Sep 2015: 1 commit
    • arm64: errata: add module build workaround for erratum #843419 · df057cc7
      Committed by Will Deacon
      Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can
      lead to a memory access using an incorrect address in certain sequences
      headed by an ADRP instruction.
      
      There is a linker fix to generate veneers for ADRP instructions, but
      this doesn't work for kernel modules which are built as unlinked ELF
      objects.
      
      This patch adds a new config option for the erratum which, when enabled,
      builds kernel modules with the -mcmodel=large flag. This uses absolute
      addressing for all kernel symbols, thereby removing the use of ADRP as
      a PC-relative form of addressing. The ADRP relocs are removed from the
      module loader so that we fail to load any potentially affected modules.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
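
      A toy stand-in for the module loader change might look like the sketch below.
      It is not the kernel's apply_relocate_add(), and it refuses ADRP relocations
      unconditionally rather than under the new config option; the relocation codes
      are taken from the "ELF for the Arm 64-bit Architecture" specification:

        #include <stdio.h>

        #define R_AARCH64_ADR_PREL_PG_HI21    275
        #define R_AARCH64_ADR_PREL_PG_HI21_NC 276

        /* Toy relocation handler: refuse any module that relies on ADRP. */
        int apply_reloc(unsigned int type)
        {
                switch (type) {
                case R_AARCH64_ADR_PREL_PG_HI21:
                case R_AARCH64_ADR_PREL_PG_HI21_NC:
                        fprintf(stderr, "module uses ADRP relocations, refusing to load\n");
                        return -1;
                default:
                        return 0;       /* other relocation types handled as usual */
                }
        }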
  7. 27 Jul 2015: 2 commits
  8. 18 Mar 2015: 1 commit
    • arm64: Adjust EFI libstub object include logic · ad08fd49
      Committed by Steve Capper
      Commit f4f75ad5 ("efi: efistub: Convert into static library")
      introduced a static library for EFI stub, libstub.
      
      The EFI libstub directory is referenced by the kernel build system via
      an obj subdirectory rule in:
      drivers/firmware/efi/Makefile
      
      Unfortunately, arm64 also references the EFI libstub via:
      libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/
      
      If we're unlucky, the kernel build system can enter libstub via two
      simultaneous threads, resulting in build failures such as:
      
      fixdep: error opening depfile: drivers/firmware/efi/libstub/.efi-stub-helper.o.d: No such file or directory
      scripts/Makefile.build:257: recipe for target 'drivers/firmware/efi/libstub/efi-stub-helper.o' failed
      make[1]: *** [drivers/firmware/efi/libstub/efi-stub-helper.o] Error 2
      Makefile:939: recipe for target 'drivers/firmware/efi/libstub' failed
      make: *** [drivers/firmware/efi/libstub] Error 2
      make: *** Waiting for unfinished jobs....
      
      This patch adjusts the arm64 Makefile to reference the compiled library
      explicitly (as is currently done in x86), rather than the directory.
      
      Fixes: f4f75ad5 ("efi: efistub: Convert into static library")
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  9. 20 Jan 2015: 1 commit
  10. 15 Jan 2015: 1 commit
    • arm64: kill off the libgcc dependency · d67703a8
      Committed by Kevin Hao
      The arm64 kernel builds fine without libgcc. Actually, it should not be
      used in the kernel at all. The following are the reasons given by
      Russell King:
      
        Although libgcc is part of the compiler, libgcc is built with the
        expectation that it will be running in userland - it expects to link
        to a libc.  That's why you can't build libgcc without having the glibc
        headers around.
      
        [...]
      
        Meanwhile, having the kernel build the compiler support functions that
        it needs ensures that (a) we know what compiler support functions are
        being used, (b) we know the implementation of those support functions
        are sane for use in the kernel, (c) we can build them with appropriate
        compiler flags for best performance, and (d) we remove an unnecessary
        dependency on the build toolchain.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 22 Oct 2014: 3 commits
  12. 02 Oct 2014: 1 commit
  13. 08 Sep 2014: 1 commit
  14. 20 Aug 2014: 1 commit
    • arm64: align randomized TEXT_OFFSET on 4 kB boundary · 4190312b
      Committed by Ard Biesheuvel
      When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
      the embedded EFI stub is executed in place. The EFI stub relocates the
      Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
      the kernel proper.
      
      In AArch64, PC relative symbol references are emitted using adrp/add or
      adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
      separate :lo12: relocation. This implicitly assumes that the code will
      always be executed at the same relative offset with respect to a 4 kB
      boundary, or the references will point to the wrong address.
      
      This means we should link the kernel at a 4 kB aligned base address in
      order to remain compatible with the base address the UEFI loader uses
      when doing the initial load of Image. So update the code that generates
      TEXT_OFFSET to choose a multiple of 4 kB.
      
      At the same time, update the code so it chooses from the interval [0..2MB)
      as the author originally intended.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
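
      To make the alignment requirement concrete, here is a small, hypothetical C
      model of the adrp/add arithmetic the commit message describes. The addresses
      are invented and this models only the resulting address calculation, not the
      toolchain or the relocation machinery:

        #include <stdint.h>
        #include <stdio.h>

        /* Toy model: only the PC's page is recomputed at run time; the page
         * delta and the :lo12: bits are frozen at link time. */
        uint64_t adrp_add(uint64_t link_pc, uint64_t link_sym, uint64_t slide)
        {
                int64_t  page_delta = (int64_t)((link_sym & ~0xfffULL) - (link_pc & ~0xfffULL));
                uint64_t lo12       = link_sym & 0xfffULL;
                uint64_t run_pc     = link_pc + slide;

                return (run_pc & ~0xfffULL) + (uint64_t)page_delta + lo12;
        }

        int main(void)
        {
                uint64_t link_pc  = 0xffff000008080000ULL;  /* invented link addresses */
                uint64_t link_sym = 0xffff000008100234ULL;

                /* A 4 kB-multiple slide still lands on the slid symbol... */
                printf("slide 0x1000 -> %#llx (want %#llx)\n",
                       (unsigned long long)adrp_add(link_pc, link_sym, 0x1000),
                       (unsigned long long)(link_sym + 0x1000));

                /* ...while a sub-page slide misses it, because the page/:lo12:
                 * split was computed for the old alignment. */
                printf("slide 0x0800 -> %#llx (want %#llx)\n",
                       (unsigned long long)adrp_add(link_pc, link_sym, 0x800),
                       (unsigned long long)(link_sym + 0x800));
                return 0;
        }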
  15. 19 Jul 2014: 1 commit
    • efi: efistub: Convert into static library · f4f75ad5
      Committed by Ard Biesheuvel
      This patch changes both x86 and arm64 efistub implementations
      from #including shared .c files under drivers/firmware/efi to
      building shared code as a static library.
      
      The x86 code uses a stub built into the boot executable which
      uncompresses the kernel at boot time. In this case, the library is
      linked into the decompressor.
      
      In the arm64 case, the stub is part of the kernel proper so the library
      is linked into the kernel proper as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  16. 10 Jul 2014: 1 commit
    • arm64: Enable TEXT_OFFSET fuzzing · da57a369
      Committed by Mark Rutland
      The arm64 Image header contains a text_offset field which bootloaders
      are supposed to read to determine the offset (from a 2MB aligned "start
      of memory" per booting.txt) at which to load the kernel. The offset is
      not well respected by bootloaders at present, and due to the lack of
      variation there is little incentive to support it. This is unfortunate
      for the sake of future kernels where we may wish to vary the text offset
      (even zeroing it).
      
      This patch adds options to arm64 to enable fuzz-testing of text_offset.
      CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
      16-byte aligned value in the range [0..2MB) upon a build of the
      kernel. It is recommended that distribution kernels enable randomization
      to test bootloaders such that any compliance issues can be fixed early.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 15 May 2014: 1 commit
  18. 25 Oct 2013: 1 commit
  19. 20 Jun 2013: 1 commit
  20. 12 Jun 2013: 1 commit
  21. 07 Jun 2013: 1 commit
  22. 11 Dec 2012: 1 commit
  23. 04 Dec 2012: 2 commits
  24. 17 Sep 2012: 1 commit