1. 19 February 2016, 2 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Authored by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken when dealing with memory limits passed
      via mem=, since the generic implementation clips memory from the top
      down, and may therefore clip the kernel Image itself if it is loaded
      high up in memory. To deal with this case, we simply add back the
      memory covering the kernel Image, which may result in more memory being
      retained than was requested via the mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
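      The mem= add-back described above can be illustrated with a minimal,
      self-contained sketch. This is not the kernel's memblock code; the
      addresses, sizes and variable names are hypothetical, and only the
      arithmetic of "clip from the top, then re-add the range covering the
      kernel Image" is shown.

          #include <stdio.h>

          /* Hypothetical example values, not taken from real hardware. */
          #define RAM_BASE    0x40000000UL   /* start of DRAM               */
          #define KIMG_START  0xa0000000UL   /* kernel Image loaded high up */
          #define KIMG_END    0xa2000000UL   /* end of the kernel Image     */

          int main(void)
          {
              unsigned long limit = 0x20000000UL;      /* mem=512M          */
              unsigned long top = RAM_BASE + limit;    /* clip point        */
              unsigned long retained = limit;          /* generic behaviour */

              /* arm64 twist: if clipping from the top would remove (part of)
               * the running kernel Image, add that range back, so more memory
               * than the requested limit may end up being retained. */
              if (KIMG_END > top)
                  retained += KIMG_END - (KIMG_START > top ? KIMG_START : top);

              printf("mem= limit: %lu MB, retained: %lu MB\n",
                     limit >> 20, retained >> 20);
              return 0;
          }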
    • arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region · ab893fb9
      Authored by Ard Biesheuvel
      This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
      the symbolic virtual base of the kernel region, i.e., the kernel's virtual
      offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
      equal to PAGE_OFFSET, but in the future, it will be moved below it once
      we move the kernel virtual mapping out of the linear mapping.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
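      The relationship described above can be written down as a small header
      sketch. The constants below are simplified stand-ins (the VA_BITS and
      TEXT_OFFSET values are illustrative defaults), not the authoritative
      arm64 definitions; the point is only that, for now, KIMAGE_VADDR
      coincides with PAGE_OFFSET and the kernel runs at KIMAGE_VADDR +
      TEXT_OFFSET.

          /* Sketch only: simplified stand-ins for the arm64 definitions. */
          #define VA_BITS        48
          #define PAGE_OFFSET    (0xffffffffffffffffULL << (VA_BITS - 1))

          /* For now, the kernel region starts at the base of the linear map... */
          #define KIMAGE_VADDR   PAGE_OFFSET

          /* ...and the kernel itself is mapped TEXT_OFFSET bytes above it. */
          #define TEXT_OFFSET    0x80000ULL
          #define KERNEL_START   (KIMAGE_VADDR + TEXT_OFFSET)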
  2. 16 February 2016, 1 commit
    • arm64: mm: place empty_zero_page in bss · 5227cfa7
      Authored by Mark Rutland
      Currently the zero page is set up in paging_init, and thus we cannot use
      the zero page earlier. We use the zero page as a reserved TTBR value
      from which no TLB entries may be allocated (e.g. when uninstalling the
      idmap). To enable such usage earlier (as may be required for invasive
      changes to the kernel page tables), and to minimise the time that the
      idmap is active, we need to be able to use the zero page before
      paging_init.
      
      This patch follows the example set by x86, by allocating the zero page
      at compile time, in .bss. This means that the zero page itself is
      available immediately upon entry to start_kernel (as we zero .bss before
      this), and also means that the zero page takes up no space in the raw
      Image binary. The associated struct page is allocated in bootmem_init,
      and remains unavailable until this time.
      
      Outside of arch code, the only users of empty_zero_page assume that the
      empty_zero_page symbol refers to the zeroed memory itself, and that
      ZERO_PAGE(x) must be used to acquire the associated struct page,
      following the example of x86. This patch also brings arm64 into line
      with these assumptions.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
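      The allocation strategy described above can be sketched in a few lines
      of GCC-style C. This is a hedged illustration, not the exact arm64
      definitions: the attribute spelling and the ZERO_PAGE() wrapper shown
      in the comment are simplified.

          #define PAGE_SIZE 4096

          /* Zero-initialised and page-aligned, so the compiler places it in
           * .bss: usable as soon as .bss has been zeroed on entry, and it
           * adds no bytes to the raw Image binary. */
          unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
                  __attribute__((aligned(PAGE_SIZE)));

          /* empty_zero_page names the zeroed memory itself; callers that need
           * the associated struct page go through something like:
           *     #define ZERO_PAGE(vaddr)  virt_to_page(empty_zero_page)
           * which only works once the struct page exists (bootmem_init). */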
  3. 25 January 2016, 1 commit
  4. 07 January 2016, 1 commit
    • arm64: head.S: use memset to clear BSS · 2a803c4d
      Authored by Mark Rutland
      Currently we use an open-coded memzero to clear the BSS. As it is a
      trivial implementation, it is sub-optimal.
      
      Our optimised memset doesn't use the stack, is position-independent, and
      for the memzero case can make use of DC ZVA to clear large blocks
      efficiently. In __mmap_switched the MMU is on and there are no live
      caller-saved registers, so we can safely call an uninstrumented memset.
      
      This patch changes __mmap_switched to use memset when clearing the BSS.
      We use the __pi_memset alias so as to avoid any instrumentation in all
      kernel configurations.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
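      Expressed in C, the change amounts to replacing an open-coded zeroing
      loop with a single call to the optimised memset over the BSS bounds
      provided by the linker script. The sketch below is illustrative: the
      real code issues the call from assembly, via the __pi_memset alias.

          #include <string.h>

          /* Linker-provided bounds of .bss (declarations only). */
          extern char __bss_start[], __bss_stop[];

          static void clear_bss(void)
          {
              /* One memset call replaces the old open-coded store loop;
               * the optimised implementation can use DC ZVA internally. */
              memset(__bss_start, 0, __bss_stop - __bss_start);
          }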
  5. 08 December 2015, 1 commit
  6. 20 October 2015, 3 commits
  7. 13 October 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Authored by Andrey Ryabinin
      This patch adds arch-specific code for the kernel address sanitizer
      (see Documentation/kasan.txt).

      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.

      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan does not currently track
      (vmalloc).
      After the physical memory has been mapped, pages for the shadow memory
      are allocated and mapped.

      Functions like memset/memmove/memcpy perform a lot of memory accesses;
      if a bad pointer is passed to one of these functions, it is important
      to catch it. The compiler's instrumentation cannot do this, since these
      functions are written in assembly, so KASan replaces them with manually
      instrumented variants. The original functions are declared as weak
      symbols so that the strong definitions in mm/kasan/kasan.c can replace
      them. The original functions also have aliases with a '__' prefix in
      their names, so the non-instrumented variants can still be called when
      needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c);
      for such files the original mem* functions are replaced (via #define)
      with the prefixed variants, to disable memory access checks there.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
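      The heart of the shadow scheme described above is the fixed 8:1 mapping
      from kernel addresses to shadow bytes. The helper below is a hedged
      sketch of the usual translation; the shadow offset value is
      illustrative rather than arm64's actual constant.

          #include <stdint.h>

          #define KASAN_SHADOW_SCALE_SHIFT  3   /* one shadow byte per 8 bytes */
          #define KASAN_SHADOW_OFFSET       0xdfff200000000000ULL  /* illustrative */

          /* Each 8-byte granule of kernel memory is described by one shadow
           * byte; zero means fully addressable, other values encode partially
           * addressable or poisoned state. */
          static inline uint8_t *kasan_mem_to_shadow(const void *addr)
          {
              return (uint8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE_SHIFT)
                                 + KASAN_SHADOW_OFFSET);
          }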
  8. 12 October 2015, 1 commit
    • arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Authored by Ard Biesheuvel
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far, this has been working fine, but actually,
      since the stub is in fact a PE/COFF relocatable binary that is executed
      at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
      should not be seamlessly sharing code with the kernel proper, which is a
      position-dependent executable linked at a high virtual offset.
      
      So instead, separate the contents of libstub and its dependencies by
      putting them into their own namespace, prefixing all of their symbols
      with __efistub. This way, we have tight control over which parts of the
      kernel proper are referenced by the stub.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
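      The idea of giving the stub its own symbol namespace can be illustrated
      with a standalone alias example. This is a sketch of the concept only:
      the function names are made up, and the kernel applies the __efistub
      prefix at build time rather than spelling out aliases in C source.

          #include <stddef.h>

          /* A routine the kernel proper provides. */
          void *my_memcpy(void *dst, const void *src, size_t n)
          {
              char *d = dst;
              const char *s = src;

              while (n--)
                  *d++ = *s++;
              return dst;
          }

          /* Stub code may only reference the namespaced alias, so the set of
           * kernel symbols visible to the stub stays explicit and small. */
          void *__efistub_my_memcpy(void *dst, const void *src, size_t n)
                  __attribute__((alias("my_memcpy")));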
  9. 10 October 2015, 1 commit
  10. 15 September 2015, 1 commit
  11. 05 August 2015, 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Authored by Will Deacon
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
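      The maintenance sequence described above can be sketched as AArch64
      inline assembly. This is an illustration of the instructions involved,
      with a hypothetical helper name; the patch itself performs the sequence
      in head.S before leaving the identity mapping.

          /* Invalidate the local I-cache so instructions are refetched from
           * the PoU, discarding anything speculatively fetched while the MMU
           * was off. */
          static inline void local_icache_inval_all(void)
          {
              asm volatile("ic  iallu\n\t"   /* invalidate all I-cache to PoU */
                           "dsb nsh\n\t"     /* complete the invalidation     */
                           "isb"             /* resynchronise the pipeline    */
                           : : : "memory");
          }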
  12. 27 July 2015, 1 commit
  13. 03 June 2015, 1 commit
    • arm64: reduce ID map to a single page · 5dfe9d7d
      Authored by Ard Biesheuvel
      Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel
      image") changed the early page table code so that the entire kernel
      Image is covered by the identity map. This allows functions that
      need to enable or disable the MMU to reside anywhere in the kernel
      Image.
      
      However, this change has the unfortunate side effect that the Image
      cannot cross a physical 512 MB alignment boundary anymore, since the
      early page table code cannot deal with the Image crossing a /virtual/
      512 MB alignment boundary.
      
      So instead, reduce the ID map to a single page, populated by the
      contents of the .idmap.text section. Only three functions reside
      there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
      If new code is introduced that needs to manipulate the MMU state, it
      should be added to this section as well.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
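      For C code, collecting MMU-twiddling functions into the single
      ID-mapped page boils down to a section attribute. The sketch below is
      hedged: the macro and function names are hypothetical, and the three
      functions named in the commit above actually live in assembly and use
      section directives instead.

          /* Anything that runs while the MMU is being turned on or off must
           * sit in .idmap.text, so it is covered by the one-page identity map. */
          #define __idmap  __attribute__((section(".idmap.text"), noinline))

          void __idmap hypothetical_mmu_helper(void)
          {
              /* ...code that manipulates SCTLR_EL1.M would go here... */
          }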
  14. 02 June 2015, 1 commit
  15. 24 March 2015, 2 commits
    • arm64: head.S: ensure idmap_t0sz is visible · 0c20856c
      Authored by Mark Rutland
      We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
      guarantee that the kernel Image is clean, not invalid, in the caches,
      and therefore we might read a stale value once the MMU is enabled.
      
      This patch ensures we invalidate the corresponding cacheline after the
      write, as we do for all other data written before we set SCTLR_EL1.{C,M},
      guaranteeing that the value will be visible later. We rely on the DSBs
      in __create_page_tables to complete the maintenance.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
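      The pattern can be restated as a hedged C sketch with inline assembly.
      The helper name is hypothetical; head.S does the equivalent directly in
      assembly and relies on later DSBs to complete the maintenance.

          /* With SCTLR_EL1.{C,M} clear the store bypasses the caches, but a
           * stale clean line for the same address may still be present.
           * Invalidate it so the freshly written value is what gets read once
           * the MMU and caches are enabled. */
          static inline void write_early_boot_value(unsigned long *p,
                                                    unsigned long val)
          {
              *p = val;
              asm volatile("dmb sy\n\t"        /* order the store first     */
                           "dc  ivac, %0\n\t"  /* invalidate the stale line */
                           "dsb sy"            /* complete the maintenance  */
                           : : "r" (p) : "memory");
          }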
    • arm64: head.S: ensure visibility of page tables · 91d57155
      Authored by Mark Rutland
      After writing the page tables, we use __inval_cache_range to invalidate
      any stale cache entries. Strongly Ordered memory accesses are not
      ordered w.r.t. cache maintenance instructions, and hence explicit memory
      barriers are required to provide this ordering. However,
      __inval_cache_range was written to be used on Normal Cacheable memory
      once the MMU and caches are on, and does not have any barriers prior to
      the DC instructions.
      
      This patch adds a DMB between the page tables being written and the
      corresponding cachelines being invalidated, ensuring that the
      invalidation makes the new data visible to subsequent cacheable
      accesses. A barrier is not required before the prior invalidate, as we
      do not access the page table memory area prior to this, and the earlier
      barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
      ordering w.r.t. any stores performed prior to entering Linux.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
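      Complementing the previous sketch, the ordering requirement here can be
      shown as a barrier placed between the page-table stores and the D-cache
      invalidation of the same lines. The helper and the cache-line size
      below are illustrative; the fix itself is a single DMB added in head.S
      ahead of the existing invalidation call.

          #define CACHE_LINE 64UL   /* illustrative line size */

          /* The DMB orders the (Strongly Ordered) table writes before the DC
           * instructions; without it, the invalidation could be observed
           * ahead of the stores it is meant to publish. */
          static inline void publish_early_pgtables(char *start, char *end)
          {
              char *p;

              asm volatile("dmb sy" : : : "memory");
              for (p = start; p < end; p += CACHE_LINE)
                  asm volatile("dc ivac, %0" : : "r" (p) : "memory");
              asm volatile("dsb sy" : : : "memory");
          }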
  16. 23 March 2015, 1 commit
  17. 20 March 2015, 7 commits
  18. 18 March 2015, 1 commit
    • arm64: fix hyp mode mismatch detection · 424a3838
      Authored by Mark Rutland
      Commit 828e9834 ("arm64: head: create a new function for setting
      the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
      replacing uses of zero. However it failed to update __boot_cpu_mode
      appropriately.
      
      A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1], and
      a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0].
      Later, is_hyp_mode_mismatched() determines there to be a mismatch if
      __boot_cpu_mode[0] != __boot_cpu_mode[1].
      
      If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
      BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
      of zero, and is_hyp_mode_mismatched will erroneously determine that the
      boot modes are mismatched. This hasn't been a problem so far, but later
      patches which will make use of is_hyp_mode_mismatched() expect it to
      work correctly.
      
      This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
      the erroneous mismatch detection when all CPUs are booted at EL1.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
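      The record layout and the check described above fit in a short sketch.
      The mode values are arm64's usual constants and the initialisers follow
      the fix described above; this is a simplified standalone rendering, not
      the actual head.S/virt.h definitions.

          #include <stdbool.h>
          #include <stdint.h>

          #define BOOT_CPU_MODE_EL1  0xe11   /* nonzero, replacing the old 0 */
          #define BOOT_CPU_MODE_EL2  0xe12

          /* [0] is written by CPUs that entered at EL1, [1] by CPUs that
           * entered at EL2. Initialising each slot to the value its writers
           * would store means a system where every CPU boots at the same
           * exception level reports no mismatch. */
          uint32_t __boot_cpu_mode[2] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };

          static inline bool is_hyp_mode_mismatched(void)
          {
              return __boot_cpu_mode[0] != __boot_cpu_mode[1];
          }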
  19. 14 March 2015, 1 commit
  20. 27 November 2014, 1 commit
  21. 25 November 2014, 1 commit
  22. 05 November 2014, 3 commits
  23. 08 September 2014, 1 commit
  24. 27 August 2014, 1 commit
  25. 20 August 2014, 1 commit
    • arm64: align randomized TEXT_OFFSET on 4 kB boundary · 4190312b
      Authored by Ard Biesheuvel
      When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
      the embedded EFI stub is executed in place. The EFI stub relocates the
      Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
      the kernel proper.
      
      In AArch64, PC-relative symbol references are emitted using adrp/add or
      adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
      separate :lo12: relocation. This implicitly assumes that the code will
      always be executed at the same relative offset with respect to a 4 kB
      boundary, or the references will point to the wrong address.
      
      This means we should link the kernel at a 4 kB aligned base address in
      order to remain compatible with the base address the UEFI loader uses
      when doing the initial load of Image. So update the code that generates
      TEXT_OFFSET to choose a multiple of 4 kB.
      
      At the same time, update the code so it chooses from the interval [0..2MB)
      as the author originally intended.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
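      The constraint spelled out above is easy to restate in code: the
      randomized TEXT_OFFSET must be a multiple of 4 kB drawn from the
      interval [0, 2 MB). The kernel derives the value in its arm64 Makefile
      at build time; the sketch below merely repeats the arithmetic in C for
      illustration.

          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              unsigned long granule = 0x1000;              /* 4 kB            */
              unsigned long slots   = 0x200000 / granule;  /* 512 candidates  */

              /* Pick a 4 kB multiple in [0, 2 MB), so adrp/:lo12: pairs still
               * resolve correctly when UEFI loads the Image 4 kB-aligned. */
              unsigned long text_offset = ((unsigned long)rand() % slots) * granule;

              printf("TEXT_OFFSET := %#lx\n", text_offset);
              return 0;
          }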
  26. 25 July 2014, 1 commit
  27. 23 July 2014, 2 commits