1. 22 Aug 2016, 1 commit
  2. 29 Jul 2016, 1 commit
    • arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux · 08cc55b2
      Authored by Ard Biesheuvel
      The linker routines that we rely on to produce a relocatable PIE binary
      treat it as a shared ELF object in some ways, i.e., it emits symbol based
      R_AARCH64_ABS64 relocations into the final binary since doing so would be
      appropriate when linking a shared library that is subject to symbol
      preemption. (This means that an executable can override certain symbols
      that are exported by a shared library it is linked with, and that the
      shared library *must* update all its internal references as well, and point
      them to the version provided by the executable.)
      
      Symbol preemption does not occur for OS hosted PIE executables, let alone
      for vmlinux, and so we would prefer to get rid of these symbol based
      relocations. This would allow us to simplify the relocation routines, and
      to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
      that these are tiny, and are placed in the .init segment, but they clutter
      up the vmlinux binary.)
      
      Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
references to symbols defined in the linker script; all other relocatable
      quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
      simply list the offsets to all 64-bit values in the binary that need to be
      fixed up based on the offset between the link time and run time addresses.
      
      Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared
      libraries to allow them to ignore symbol preemption, and unconditionally
      bind all internal symbol references to its own definitions. So set it for
our PIE binary as well, and get rid of the associated sections and the
      relocation code that processes them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: fixed conflict with __dynsym_offset linker script entry]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      08cc55b2
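      The relocation handling this leaves behind is simple: with -Bsymbolic in
      effect, only R_AARCH64_RELATIVE entries remain, so no .dynsym lookup is
      needed. The kernel performs this fixup in head.S assembly; the C sketch
      below (function name illustrative) is only meant to show what that loop
      amounts to.

          #include <elf.h>       /* Elf64_Rela, ELF64_R_TYPE, R_AARCH64_RELATIVE */
          #include <stdint.h>

          /* Apply RELA entries to an image loaded 'offset' bytes away from its
           * link-time address. Symbol-based relocations are gone, so each entry
           * just names a 64-bit slot and its link-time value (the addend). */
          static void apply_relative_relocs(Elf64_Rela *rela, Elf64_Rela *end,
                                            uint64_t offset)
          {
              for (; rela < end; rela++) {
                  if (ELF64_R_TYPE(rela->r_info) != R_AARCH64_RELATIVE)
                      continue;
                  *(uint64_t *)(rela->r_offset + offset) = rela->r_addend + offset;
              }
          }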
  3. 28 Apr 2016, 2 commits
  4. 26 Apr 2016, 6 commits
  5. 22 Apr 2016, 1 commit
  6. 18 Apr 2016, 1 commit
  7. 15 Apr 2016, 1 commit
    • arm64: move early boot code to the .init segment · 546c8c44
      Authored by Ard Biesheuvel
      Apart from the arm64/linux and EFI header data structures, there is nothing
      in the .head.text section that must reside at the beginning of the Image.
      So let's move it to the .init section where it belongs.
      
      Note that this involves some minor tweaking of the EFI header, primarily
      because the address of 'stext' no longer coincides with the start of the
      .text section. It also requires a couple of relocated symbol references
      to be slightly rewritten or their definition moved to the linker script.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      546c8c44
  8. 25 Mar 2016, 1 commit
  9. 21 Mar 2016, 1 commit
    • arm64: fix KASLR boot-time I-cache maintenance · b90b4a60
      Authored by Mark Rutland
      Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a
      DSB necessary to complete I-cache maintenance in the primary boot path,
      and hence stale instructions may still be present in the I-cache and may
      be executed until the I-cache maintenance naturally completes.
      
      Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is
      fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU being enabled, arbitrary lines may
      have been fetched from the PoC into I-caches. We never patch text
      expected to be executed with the MMU off. Thus, it is unnecessary to
      perform broadcast I-cache maintenance in the primary boot path.
      
      This patch reduces the scope of the I-cache maintenance to the local
      CPU, and adds the missing DSB with similar scope, matching prior
      maintenance in the primary boot path.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b90b4a60
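      The fix itself is a few instructions in head.S; as a rough C rendering of
      the local-scope sequence described above (illustrative, not the kernel
      code), the maintenance plus its completing barrier look like this:

          /* Invalidate the local I-cache and complete the maintenance with a
           * barrier of matching (non-shareable) scope; broadcast maintenance
           * (ic ialluis / dsb ish) is not needed on the primary boot path. */
          static inline void local_icache_inval_all(void)
          {
              asm volatile("ic  iallu\n\t"   /* invalidate I-cache, local CPU   */
                           "dsb nsh\n\t"     /* complete the maintenance (DSB)  */
                           "isb"             /* resynchronise instruction fetch */
                           ::: "memory");
          }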
  10. 01 Mar 2016, 1 commit
  11. 25 Feb 2016, 1 commit
    • arm64: Handle early CPU boot failures · bb905274
      Authored by Suzuki K Poulose
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel. For example,
      a CPU with no support for the selected kernel PAGE_SIZE loops in the
      kernel with the MMU turned off, or a hotplugged CPU which doesn't have
      one of the advertised system capabilities will die during activation.
      
      There is no way to synchronise the status of the failing CPU
      back to the master. This patch solves the issue by adding a
      field to the secondary_data which can be updated by the failing
      CPU. If the secondary CPU fails even before turning the MMU on,
      it updates the status in a special variable reserved in the .head.text
      section, to make sure that the update can be cache-invalidated safely
      without possibly sharing a cache writeback granule.
      
      Here are the possible states:
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU, this value
      indicates that the CPU could not turn the MMU on, hence the status
      could not be reliably updated in the secondary_data. Instead, the
      CPU has updated the status @ __early_cpu_boot_status.
      
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
      
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, signalling the
      master CPU to synchronise by issuing a cpu_ops->cpu_kill.
      
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die() and is instead
      looping in the kernel. This information could be used by, say,
      kexec to check if it is really safe to do a kexec reboot.
      
       3. CPU_PANIC_KERNEL - CPU detected some serious issues which
      require the kernel to crash immediately. The secondary CPU cannot
      call panic() until it has initialised the GIC. This flag can
      be used to instruct the master to do so.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bb905274
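      For reference, the states listed above map onto a small set of integer
      values that the failing CPU stores and the master polls; a sketch (the
      kernel defines these as macros shared with assembly):

          /* Early CPU boot status values, as described in the commit message. */
          #define CPU_MMU_OFF          (-1L)  /* written via __early_cpu_boot_status     */
          #define CPU_BOOT_SUCCESS      (0L)  /* CPU came up normally                    */
          #define CPU_KILL_ME           (1L)  /* master should call cpu_ops->cpu_kill    */
          #define CPU_STUCK_IN_KERNEL   (2L)  /* CPU loops in the kernel (kexec beware)  */
          #define CPU_PANIC_KERNEL      (3L)  /* master should panic() on the CPU's behalf */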
  12. 24 Feb 2016, 4 commits
    • arm64: add support for kernel ASLR · f80fb3a3
      Authored by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80fb3a3
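      As a purely illustrative sketch of the displacement selection (names and
      masks here are not the kernel's kaslr_early_init(), which additionally
      rejects or rounds displacements that would straddle the block-size
      boundaries mentioned above):

          #include <stdint.h>

          #define SZ_2M  0x200000UL

          /* Reduce a 64-bit seed (e.g. from /chosen/kaslr-seed) to a 2 MB aligned
           * displacement within a randomization window of 'range' bytes. */
          static uint64_t seed_to_offset(uint64_t seed, uint64_t range)
          {
              return (seed % range) & ~(SZ_2M - 1);
          }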
    • arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
      Authored by Ard Biesheuvel
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1e48ef7f
    • arm64: avoid dynamic relocations in early boot code · 2bf31a4a
      Authored by Ard Biesheuvel
      Before implementing KASLR for arm64 by building a self-relocating PIE
      executable, we have to ensure that values we use before the relocation
      routine is executed are not subject to dynamic relocation themselves.
      This applies not only to virtual addresses, but also to values that are
      supplied by the linker at build time and relocated using R_AARCH64_ABS64
      relocations.
      
      So instead, use assemble-time constants, or force the use of static
      relocations by folding the constants into the instructions.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2bf31a4a
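      The difference can be seen at the instruction level: a literal-pool load
      (ldr x0, =sym) leaves an R_AARCH64_ABS64 slot that a PIE only fills at
      runtime, while a PC-relative adrp/add pair is resolved statically at link
      time. A hedged C/inline-asm sketch of the latter (the symbol name is
      illustrative):

          extern char some_symbol[];

          /* Take the address of a symbol via place-relative relocations only,
           * so the value is correct before any runtime relocation has run. */
          static inline void *addr_of_some_symbol(void)
          {
              void *p;
              asm("adrp %0, some_symbol\n\t"
                  "add  %0, %0, :lo12:some_symbol"
                  : "=r" (p));
              return p;
          }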
    • arm64: avoid R_AARCH64_ABS64 relocations for Image header fields · 6ad1fe5d
      Authored by Ard Biesheuvel
      Unfortunately, the current way of using the linker to emit build time
      constants into the Image header will no longer work once we switch to
      the use of PIE executables. The reason is that such constants are emitted
      into the binary using R_AARCH64_ABS64 relocations, which are resolved at
      runtime, not at build time, and the places targeted by those relocations
      will contain zeroes before that.
      
      So refactor the endian swapping linker script constant generation code so
      that it emits the upper and lower 32-bit words separately.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6ad1fe5d
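      A minimal sketch of the idea (the kernel does this with assembler and
      linker-script macros rather than C): a 64-bit little-endian header field
      is emitted as two independent 32-bit words, so no single 64-bit
      relocation is needed to fill it in.

          #include <stdint.h>

          /* Split a build-time 64-bit value into the two little-endian 32-bit
           * words that end up in the Image header. */
          static void emit_le64_as_le32_pair(uint64_t val, uint32_t out[2])
          {
              out[0] = (uint32_t)(val & 0xffffffff);  /* low word first (LE) */
              out[1] = (uint32_t)(val >> 32);         /* then the high word  */
          }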
  13. 19 Feb 2016, 2 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Authored by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was passed
      as a mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a7f8de16
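      A hedged sketch of the mem= handling described above (not the literal
      arm64 code, but built from memblock calls that do exist; kernel context,
      so __pa() and the section symbols come from the usual asm headers):

          #include <linux/memblock.h>

          extern char _text[], _end[];

          /* After the generic top-down clip, add back the range covering the
           * running kernel image so it can never be clipped away. */
          static void apply_mem_limit(phys_addr_t limit)
          {
              memblock_enforce_memory_limit(limit);
              memblock_add(__pa(_text), (u64)(_end - _text));
          }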
    • arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region · ab893fb9
      Authored by Ard Biesheuvel
      This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
      the symbolic virtual base of the kernel region, i.e., the kernel's virtual
      offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
      equal to PAGE_OFFSET, but in the future, it will be moved below it once
      we move the kernel virtual mapping out of the linear mapping.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ab893fb9
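      In code terms this amounts to a one-line definition for now (a sketch of
      what the commit describes; the surrounding memory.h context is omitted):

          /* Symbolic virtual base of the kernel region; the kernel image is
           * mapped at KIMAGE_VADDR + TEXT_OFFSET. Currently an alias for
           * PAGE_OFFSET, to be moved below it later. */
          #define KIMAGE_VADDR   (PAGE_OFFSET)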
  14. 16 Feb 2016, 1 commit
    • arm64: mm: place empty_zero_page in bss · 5227cfa7
      Authored by Mark Rutland
      Currently the zero page is set up in paging_init, and thus we cannot use
      the zero page earlier. We use the zero page as a reserved TTBR value
      from which no TLB entries may be allocated (e.g. when uninstalling the
      idmap). To enable such usage earlier (as may be required for invasive
      changes to the kernel page tables), and to minimise the time that the
      idmap is active, we need to be able to use the zero page before
      paging_init.
      
      This patch follows the example set by x86, by allocating the zero page
      at compile time, in .bss. This means that the zero page itself is
      available immediately upon entry to start_kernel (as we zero .bss before
      this), and also means that the zero page takes up no space in the raw
      Image binary. The associated struct page is allocated in bootmem_init,
      and remains unavailable until this time.
      
      Outside of arch code, the only users of empty_zero_page assume that the
      empty_zero_page symbol refers to the zeroed memory itself, and that
      ZERO_PAGE(x) must be used to acquire the associated struct page,
      following the example of x86. This patch also brings arm64 in line with
      these assumptions.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5227cfa7
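      The compile-time allocation pattern borrowed from x86 looks roughly like
      this (a sketch; the exact arm64 declaration and ZERO_PAGE() definition
      may differ in detail, and PAGE_SIZE/__page_aligned_bss/virt_to_page come
      from kernel headers):

          /* Zero page lives in .bss, so it is usable as soon as .bss has been
           * cleared; the associated struct page only exists after bootmem_init. */
          unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
                  __page_aligned_bss;

          #define ZERO_PAGE(vaddr)  virt_to_page(empty_zero_page)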
  15. 25 Jan 2016, 1 commit
  16. 07 Jan 2016, 1 commit
    • arm64: head.S: use memset to clear BSS · 2a803c4d
      Authored by Mark Rutland
      Currently we use an open-coded memzero to clear the BSS. As it is a
      trivial implementation, it is sub-optimal.
      
      Our optimised memset doesn't use the stack, is position-independent, and
      for the memzero case can make use of DC ZVA to clear large blocks
      efficiently. In __mmap_switched the MMU is on and there are no live
      caller-saved registers, so we can safely call an uninstrumented memset.
      
      This patch changes __mmap_switched to use memset when clearing the BSS.
      We use the __pi_memset alias so as to avoid any instrumentation in all
      kernel configurations.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2a803c4d
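      The head.S change boils down to a single call; expressed in C (an
      illustrative rendering of the assembly, not the assembly itself):

          #include <stddef.h>

          extern char __bss_start[], __bss_stop[];
          void *__pi_memset(void *s, int c, size_t n);  /* uninstrumented alias */

          /* What __mmap_switched now does before branching to start_kernel(). */
          static void clear_bss(void)
          {
              __pi_memset(__bss_start, 0, __bss_stop - __bss_start);
          }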
  17. 08 Dec 2015, 1 commit
  18. 20 Oct 2015, 3 commits
  19. 13 Oct 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Authored by Andrey Ryabinin
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.
      
      At the early boot stage, the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is reused
      as a read-only zero shadow for memory that KASan currently doesn't
      track (vmalloc). After the physical memory has been mapped, pages for
      the shadow memory are allocated and mapped.
      
      Functions like memset/memmove/memcpy do a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is important
      to catch this, but the compiler's instrumentation cannot do so since
      these functions are written in assembly.
      KASan therefore replaces the memory functions with manually
      instrumented variants. The original functions are declared as weak
      symbols so that the strong definitions in mm/kasan/kasan.c can replace
      them. The original functions also have aliases with a '__' prefix in
      the name, so the non-instrumented variants can be called if needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c);
      there, the original mem* functions are replaced (via #define) with the
      prefixed variants to disable memory access checks for those files.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
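      The replacement pattern described above, reduced to a sketch (simplified;
      the instrumented definitions live in mm/kasan/, the weak memcpy/__memcpy
      pair is provided by the arm64 assembly routines, and the check function
      shown here is illustrative):

          #include <stdbool.h>
          #include <stddef.h>

          void *__memcpy(void *dst, const void *src, size_t n);
          void check_memory_region(unsigned long addr, size_t size, bool write);

          /* Strong, manually instrumented definition that overrides the weak
           * assembly 'memcpy'; uninstrumented code can still call __memcpy(). */
          void *memcpy(void *dst, const void *src, size_t n)
          {
              check_memory_region((unsigned long)src, n, false);  /* read  */
              check_memory_region((unsigned long)dst, n, true);   /* write */
              return __memcpy(dst, src, n);
          }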
  20. 12 Oct 2015, 1 commit
    • arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Authored by Ard Biesheuvel
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far, this has been working fine, but actually,
      since the stub is in fact a PE/COFF relocatable binary that is executed
      at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
      should not be seamlessly sharing code with the kernel proper, which is a
      position-dependent executable linked at a high virtual offset.
      
      So instead, separate the contents of libstub and its dependencies, by
      putting them into their own namespace by prefixing all of their symbols
      with __efistub. This way, we have tight control over what parts of the
      kernel proper are referenced by the stub.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e8f3010f
  21. 10 Oct 2015, 1 commit
  22. 15 Sep 2015, 1 commit
  23. 05 Aug 2015, 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Authored by Will Deacon
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8ec41987
  24. 27 Jul 2015, 1 commit
  25. 03 Jun 2015, 1 commit
    • arm64: reduce ID map to a single page · 5dfe9d7d
      Authored by Ard Biesheuvel
      Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel
      image") changed the early page table code so that the entire kernel
      Image is covered by the identity map. This allows functions that
      need to enable or disable the MMU to reside anywhere in the kernel
      Image.
      
      However, this change has the unfortunate side effect that the Image
      cannot cross a physical 512 MB alignment boundary anymore, since the
      early page table code cannot deal with the Image crossing a /virtual/
      512 MB alignment boundary.
      
      So instead, reduce the ID map to a single page, that is populated by
      the contents of the .idmap.text section. Only three functions reside
      there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
      If new code is introduced that needs to manipulate the MMU state, it
      should be added to this section as well.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5dfe9d7d
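      New C code that needs to run while the MMU is being turned on or off
      could be collected into the same section with a section attribute; a
      sketch (the function name is illustrative, and the current occupants
      are assembly that uses .pushsection ".idmap.text" instead):

          /* Placed in .idmap.text so it is covered by the single-page ID map. */
          static void __attribute__((section(".idmap.text"), noinline, used))
          mmu_transition_helper(void)
          {
              /* code that runs around the MMU enable/disable goes here */
          }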
  26. 02 Jun 2015, 1 commit
  27. 24 Mar 2015, 2 commits
    • arm64: head.S: ensure idmap_t0sz is visible · 0c20856c
      Authored by Mark Rutland
      We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
      guarantee that the kernel Image is clean, not that it is invalid in the
      caches, and therefore we might read a stale value once the MMU is enabled.
      
      This patch ensures we invalidate the corresponding cacheline after the
      write, as we do for all other data written before we set SCTLR_EL1.{C,M},
      guaranteeing that the value will be visible later. We rely on the DSBs
      in __create_page_tables to complete the maintenance.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0c20856c
    • arm64: head.S: ensure visibility of page tables · 91d57155
      Authored by Mark Rutland
      After writing the page tables, we use __inval_cache_range to invalidate
      any stale cache entries. Strongly Ordered memory accesses are not
      ordered w.r.t. cache maintenance instructions, and hence explicit memory
      barriers are required to provide this ordering. However,
      __inval_cache_range was written to be used on Normal Cacheable memory
      once the MMU and caches are on, and does not have any barriers prior to
      the DC instructions.
      
      This patch adds a DMB between the page tables being written and the
      corresponding cachelines being invalidated, ensuring that the
      invalidation makes the new data visible to subsequent cacheable
      accesses. A barrier is not required before the prior invalidate as we do
      not access the page table memory area prior to this, and earlier
      barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
      ordering w.r.t. any stores performed prior to entering Linux.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      91d57155