1. 13 Oct 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Andrey Ryabinin committed
      This patch adds the arch-specific code for the kernel address
      sanitizer (see Documentation/kasan.txt).

      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.

      At the early boot stage, the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is reused
      as a read-only zero shadow for memory that KASan does not currently
      track (vmalloc). After the physical memory has been mapped, pages
      for the shadow memory are allocated and mapped.

      Functions like memset/memmove/memcpy perform a lot of memory
      accesses. If a bad pointer is passed to one of these functions, it
      is important to catch it. The compiler's instrumentation cannot do
      this, since these functions are written in assembly. KASan therefore
      replaces the memory functions with manually instrumented variants.
      The original functions are declared as weak symbols so that the
      strong definitions in mm/kasan/kasan.c can replace them. The
      original functions also have aliases with a '__' prefix, so the
      non-instrumented variants can still be called when needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c);
      for those files the original mem* functions are replaced (via
      #define) with the prefixed variants, disabling the memory access
      checks. (See the shadow-mapping sketch after this entry.)
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
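
      A minimal C sketch of the address-to-shadow mapping behind the 1/8
      reservation described above. The scale shift matches KASan's
      8-bytes-per-shadow-byte scheme; the offset constant and the helper
      names are illustrative, not the arm64 kernel's actual values.

      #include <stdint.h>

      #define KASAN_SHADOW_SCALE_SHIFT 3               /* 8 bytes of memory per shadow byte */
      #define KASAN_SHADOW_OFFSET 0xdfff200000000000UL /* illustrative placeholder value */

      /* Every 8-byte granule of the address space has one shadow byte,
       * which is why the shadow region needs 1/8 of the address range. */
      static inline uint8_t *kasan_mem_to_shadow(const void *addr)
      {
              return (uint8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE_SHIFT)
                                 + KASAN_SHADOW_OFFSET);
      }

      /* A shadow byte of 0 means the whole granule is addressable;
       * a negative value means the granule is poisoned. */
      static inline int kasan_granule_poisoned(const void *addr)
      {
              return (int8_t)*kasan_mem_to_shadow(addr) < 0;
      }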
  2. 12 Oct 2015, 1 commit
    • arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Ard Biesheuvel committed
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far this has been working fine, but since
      the stub is in fact a PE/COFF relocatable binary that is executed at
      an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
      should not be seamlessly sharing code with the kernel proper, which is
      a position-dependent executable linked at a high virtual offset.

      So instead, separate the contents of libstub and its dependencies by
      moving them into their own namespace, prefixing all of their symbols
      with __efistub. This way, we have tight control over which parts of
      the kernel proper are referenced by the stub. (See the symbol-alias
      sketch after this entry.)
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e8f3010f
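
      A toy, self-contained C sketch of the namespacing idea: helpers the
      stub still needs from the kernel proper are exposed to it only under
      a __efistub_-prefixed alias, so an unprefixed reference from stub
      code fails to link instead of silently pulling in position-dependent
      kernel code. The helper below is a stand-in; in the kernel tree the
      prefixing of the stub's own symbol references happens at build time.

      #include <stddef.h>

      /* Stand-in for a routine built into the kernel proper. */
      void *kernel_memcpy(void *dest, const void *src, size_t n)
      {
              char *d = dest;
              const char *s = src;

              while (n--)
                      *d++ = *s++;
              return dest;
      }

      /* The only name the EFI stub objects are allowed to resolve once
       * their symbol references live in the __efistub_ namespace. */
      void *__efistub_memcpy(void *dest, const void *src, size_t n)
              __attribute__((alias("kernel_memcpy")));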
  3. 15 Sep 2015, 1 commit
  4. 05 Aug 2015, 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Will Deacon committed
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean. (See the
      invalidation sketch after this entry.)
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8ec41987
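
      The maintenance described above, rendered as a hedged GNU C helper
      with inline assembly; the kernel performs this directly in head.S,
      and the helper name here is illustrative.

      static inline void local_icache_inval_to_pou(void)
      {
              asm volatile("ic  iallu\n\t"  /* invalidate the local I-cache to the PoU */
                           "dsb nsh\n\t"    /* complete the invalidation on this CPU   */
                           "isb"            /* resynchronise the instruction stream    */
                           ::: "memory");
      }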
  5. 27 Jul 2015, 1 commit
  6. 03 Jun 2015, 1 commit
    • arm64: reduce ID map to a single page · 5dfe9d7d
      Ard Biesheuvel committed
      Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel
      image") changed the early page table code so that the entire kernel
      Image is covered by the identity map. This allows functions that
      need to enable or disable the MMU to reside anywhere in the kernel
      Image.
      
      However, this change has the unfortunate side effect that the Image
      cannot cross a physical 512 MB alignment boundary anymore, since the
      early page table code cannot deal with the Image crossing a /virtual/
      512 MB alignment boundary.
      
      So instead, reduce the ID map to a single page that is populated with
      the contents of the .idmap.text section. Only three functions reside
      there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
      If new code is introduced that needs to manipulate the MMU state, it
      should be added to this section as well (see the sketch after this
      entry).
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5dfe9d7d
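
      A hedged C sketch of how new MMU-manipulating code would be placed
      in the section mentioned above; the section attribute is standard
      GNU C, while the helper itself is illustrative.

      #define __idmap __attribute__((__section__(".idmap.text")))

      /* Anything that toggles the MMU must live in .idmap.text so that
       * it is covered by the single-page identity map. */
      void __idmap example_mmu_helper(void)
      {
              /* ... code that runs around the SCTLR_EL1.M transition ... */
      }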
  7. 02 Jun 2015, 1 commit
  8. 24 Mar 2015, 2 commits
    • arm64: head.S: ensure idmap_t0sz is visible · 0c20856c
      Mark Rutland committed
      We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
      guarantee that the kernel Image is clean, not invalidated, in the
      caches, and therefore we might read a stale value once the MMU is
      enabled.

      This patch ensures we invalidate the corresponding cacheline after the
      write, as we do for all other data written before we set
      SCTLR_EL1.{C,M}, guaranteeing that the value will be visible later. We
      rely on the DSBs in __create_page_tables to complete the maintenance.
      (See the sketch after this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0c20856c
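
      A hedged C rendering of the publish pattern described above: store,
      then DMB, then invalidate the line by VA so that a later cacheable
      read cannot hit a stale clean line. The helper is illustrative; in
      head.S the trailing DSB is provided by __create_page_tables.

      static inline void publish_uncached_write(unsigned long *var, unsigned long val)
      {
              *var = val;                            /* write via the uncached early mapping   */
              asm volatile("dmb sy" ::: "memory");   /* order the store before the CMO         */
              asm volatile("dc ivac, %0"             /* invalidate the potentially stale line  */
                           : : "r" (var) : "memory");
      }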
    • arm64: head.S: ensure visibility of page tables · 91d57155
      Mark Rutland committed
      After writing the page tables, we use __inval_cache_range to invalidate
      any stale cache entries. Strongly Ordered memory accesses are not
      ordered w.r.t. cache maintenance instructions, and hence explicit memory
      barriers are required to provide this ordering. However,
      __inval_cache_range was written to be used on Normal Cacheable memory
      once the MMU and caches are on, and does not have any barriers prior to
      the DC instructions.
      
      This patch adds a DMB between the page tables being written and the
      corresponding cachelines being invalidated, ensuring that the
      invalidation makes the new data visible to subsequent cacheable
      accesses. A barrier is not required before the prior invalidate, as we
      do not access the page table memory area prior to this, and the
      earlier barriers in preserve_boot_args and set_cpu_boot_mode_flag
      ensure ordering w.r.t. any stores performed before entering Linux.
      (See the range-invalidation sketch after this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      91d57155
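
      A hedged C sketch of the same ordering rule applied to a whole
      range, in the spirit of __inval_cache_range; the cacheline size and
      helper name are assumptions, not the kernel's code.

      #include <stdint.h>

      #define CACHE_LINE_SIZE 64UL   /* assumed; the kernel derives this from CTR_EL0 */

      static void inval_dcache_range(uintptr_t start, uintptr_t end)
      {
              uintptr_t addr = start & ~(CACHE_LINE_SIZE - 1);

              asm volatile("dmb sy" ::: "memory");    /* order prior stores against the DC ops */
              for (; addr < end; addr += CACHE_LINE_SIZE)
                      asm volatile("dc ivac, %0" : : "r" (addr) : "memory");
              asm volatile("dsb sy" ::: "memory");    /* complete the maintenance */
      }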
  9. 23 Mar 2015, 1 commit
  10. 20 Mar 2015, 7 commits
  11. 18 Mar 2015, 1 commit
    • arm64: fix hyp mode mismatch detection · 424a3838
      Mark Rutland committed
      Commit 828e9834 ("arm64: head: create a new function for setting
      the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
      replacing uses of zero. However it failed to update __boot_cpu_mode
      appropriately.
      
      A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1],
      and a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0].
      Later, is_hyp_mode_mismatched() determines there to be a mismatch if
      __boot_cpu_mode[0] != __boot_cpu_mode[1].
      
      If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
      BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
      of zero, and is_hyp_mode_mismatched will erroneously determine that the
      boot modes are mismatched. This hasn't been a problem so far, but later
      patches which will make use of is_hyp_mode_mismatched() expect it to
      work correctly.
      
      This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
      the erroneous mismatch detection when all CPUs are booted at EL1. (See
      the sketch after this entry.)
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      424a3838
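
      A C sketch of the bookkeeping described above; the array layout and
      the check follow the commit message, while the numeric constants are
      illustrative.

      #define BOOT_CPU_MODE_EL1 0xe11   /* illustrative */
      #define BOOT_CPU_MODE_EL2 0xe12   /* illustrative */

      /* Slot [0] is written by CPUs entering at EL1, slot [1] by CPUs
       * entering at EL2. Slot [0] starts as EL2 so an all-EL2 boot
       * compares equal; after this fix slot [1] starts as EL1 so an
       * all-EL1 boot does too. */
      static unsigned int __boot_cpu_mode[2] = {
              BOOT_CPU_MODE_EL2,
              BOOT_CPU_MODE_EL1,
      };

      static inline int is_hyp_mode_mismatched(void)
      {
              return __boot_cpu_mode[0] != __boot_cpu_mode[1];
      }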
  12. 14 Mar 2015, 1 commit
  13. 27 Nov 2014, 1 commit
  14. 25 Nov 2014, 1 commit
  15. 05 Nov 2014, 3 commits
  16. 08 Sep 2014, 1 commit
  17. 27 Aug 2014, 1 commit
  18. 20 Aug 2014, 1 commit
    • arm64: align randomized TEXT_OFFSET on 4 kB boundary · 4190312b
      Ard Biesheuvel committed
      When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
      the embedded EFI stub is executed in place. The EFI stub relocates the
      Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
      the kernel proper.
      
      In AArch64, PC relative symbol references are emitted using adrp/add or
      adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
      separate :lo12: relocation. This implicitly assumes that the code will
      always be executed at the same relative offset with respect to a 4 kB
      boundary, or the references will point to the wrong address.
      
      This means we should link the kernel at a 4 kB aligned base address in
      order to remain compatible with the base address the UEFI loader uses
      when doing the initial load of Image. So update the code that generates
      TEXT_OFFSET to choose a multiple of 4 kB.
      
      At the same time, update the code so that it chooses from the interval
      [0..2MB) as the author originally intended. (See the sketch after this
      entry.)
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4190312b
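
      A small C stand-in for the constraint above: a randomized
      TEXT_OFFSET must now be a 4 kB multiple inside [0..2MB) so that
      adrp/:lo12: pairs still resolve when UEFI loads the Image at a 4 kB
      boundary. The kernel generates the value at build time; this program
      only illustrates the arithmetic.

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int main(void)
      {
              srand((unsigned int)time(NULL));

              /* 512 candidate slots: 4 kB granules inside a 2 MB window. */
              unsigned long text_offset = (unsigned long)(rand() % 512) * 0x1000;

              printf("0x%06lx\n", text_offset);
              return 0;
      }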
  19. 25 Jul 2014, 1 commit
  20. 23 Jul 2014, 5 commits
  21. 10 Jul 2014, 4 commits
    • arm64: Enable TEXT_OFFSET fuzzing · da57a369
      Mark Rutland committed
      The arm64 Image header contains a text_offset field which bootloaders
      are supposed to read to determine the offset (from a 2MB aligned "start
      of memory" per booting.txt) at which to load the kernel. The offset is
      not well respected by bootloaders at present, and due to the lack of
      variation there is little incentive to support it. This is unfortunate
      for the sake of future kernels where we may wish to vary the text offset
      (even zeroing it).
      
      This patch adds options to arm64 to enable fuzz-testing of text_offset.
      CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
      16-byte aligned value in the range [0..2MB) on each build of the
      kernel. It is recommended that distribution kernels enable
      randomization to test bootloaders, so that any compliance issues can
      be fixed early.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      da57a369
    • arm64: Update the Image header · a2c1d73b
      Mark Rutland committed
      Currently the kernel Image is stripped of everything past the initial
      stack, and at runtime the memory is initialised and used by the kernel.
      This makes the effective minimum memory footprint of the kernel larger
      than the size of the loaded binary, though bootloaders have no mechanism
      to identify how large this minimum memory footprint is. This makes it
      difficult to choose safe locations to place both the kernel and other
      binaries required at boot (DTB, initrd, etc), such that the kernel won't
      clobber said binaries or other reserved memory during initialisation.
      
      Additionally when big endian support was added the image load offset was
      overlooked, and is currently of an arbitrary endianness, which makes it
      difficult for bootloaders to make use of it. It seems that bootloaders
      aren't respecting the image load offset at present anyway, and are
      assuming that offset 0x80000 will always be correct.
      
      This patch adds an effective image size to the kernel header which
      describes the amount of memory from the start of the kernel Image binary
      which the kernel expects to use before detecting memory and handling any
      memory reservations. This can be used by bootloaders to choose suitable
      locations to load the kernel and/or other binaries such that the kernel
      will not clobber any memory unexpectedly. As before, memory reservations
      are required to prevent the kernel from clobbering these locations
      later.
      
      Both the image load offset and the effective image size are forced to be
      little-endian regardless of the native endianness of the kernel to
      enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
      which wish to make use of the load offset can inspect the effective
      image size field for a non-zero value to determine if the offset is of a
      known endianness. To enable software to determine the endianness of the
      kernel as may be required for certain use-cases, a new flags field (also
      little-endian) is added to the kernel header to export this information.
      
      The documentation is updated to clarify these details. To discourage
      future assumptions regarding the value of text_offset, the value at this
      point in time is removed from the main flow of the documentation (though
      kept as a compatibility note). Some minor formatting issues in the
      documentation are also corrected. (See the header-layout sketch after
      this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Kevin Hilman <kevin.hilman@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a2c1d73b
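
      A C sketch of the resulting header layout; field names follow the
      arm64 booting document, and the reserved-field usage shown here is
      illustrative rather than authoritative.

      #include <stdint.h>

      struct arm64_image_header {
              uint32_t code0;        /* executable code                        */
              uint32_t code1;        /* executable code                        */
              uint64_t text_offset;  /* image load offset, little-endian       */
              uint64_t image_size;   /* effective Image size, little-endian    */
              uint64_t flags;        /* kernel flags, little-endian;
                                        bit 0: 0 = LE kernel, 1 = BE kernel    */
              uint64_t res2;         /* reserved                               */
              uint64_t res3;         /* reserved                               */
              uint64_t res4;         /* reserved                               */
              uint32_t magic;        /* 0x644d5241, i.e. "ARM\x64"             */
              uint32_t res5;         /* reserved                               */
      };

      /* A bootloader can trust text_offset only when image_size is
       * non-zero; otherwise it should fall back to the legacy 0x80000. */
      static inline int text_offset_is_trustworthy(const struct arm64_image_header *h)
      {
              return h->image_size != 0;
      }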
    • arm64: place initial page tables above the kernel · bd00cd5f
      Mark Rutland committed
      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbound requirement for memory at the end of the
      kernel image for .bss, we can place the page tables here.
      
      This patch moves the initial page tables to the end of the kernel
      image, after the BSS. As they do not consist of any initialised data,
      they will be stripped from the kernel Image along with the BSS. The
      BSS clearing routine is updated to stop at __bss_stop rather than _end
      so as to not clobber the page tables, and memory reservations made
      redundant by the new organisation are removed. (See the sketch after
      this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bd00cd5f
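
      A hedged C sketch of the updated BSS-clearing boundary: clearing
      stops at __bss_stop so the page tables placed after the BSS are left
      intact. The loop is a stand-in for the head.S code; the symbols are
      provided by the linker script.

      extern char __bss_start[], __bss_stop[];

      static void clear_bss(void)
      {
              char *p;

              /* Stop at __bss_stop, not _end: the initial page tables now
               * sit between the two and must not be zeroed here. */
              for (p = __bss_start; p < __bss_stop; p++)
                      *p = 0;
      }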
    • arm64: head.S: remove unnecessary function alignment · 909a4069
      Mark Rutland committed
      Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
      span any page boundary, which simplifies the idmap and spares us
      requiring an additional page table to map half of the function. In
      keeping with other important requirements in architecture code, this
      fact is undocumented.
      
      Additionally, as the function consists of three instructions totalling
      12 bytes with no literal pool data, a smaller alignment of 16 bytes
      would be sufficient.
      
      This patch reduces the alignment to 16 bytes and documents the
      underlying reason for the alignment. This reduces the required
      alignment of the entire .head.text section from 64 bytes to 16 bytes,
      though it may still be aligned to a larger value depending on
      TEXT_OFFSET. (See the toy check after this entry.)
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      909a4069
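
      A toy C check of the arithmetic above: a 12-byte routine aligned to
      16 bytes can never straddle a 4 kB page boundary, since 16 divides
      4096 and the routine fits within one 16-byte slot.

      #include <assert.h>

      int main(void)
      {
              const unsigned int func_size = 12, align = 16, page = 4096;

              /* Every legal start offset keeps the last byte on the same page. */
              for (unsigned int start = 0; start < page; start += align)
                      assert(start / page == (start + func_size - 1) / page);
              return 0;
      }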
  22. 09 Jul 2014, 1 commit
  23. 04 Jul 2014, 1 commit
  24. 10 May 2014, 1 commit
    • arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag · d0488597
      Will Deacon committed
      set_cpu_boot_mode_flag is used to identify which exception levels are
      encountered across the system by CPUs trying to enter the kernel. The
      basic algorithm is: if a CPU is booting at EL2, it will set a flag at
      an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
      Otherwise, a flag is set at an offset of zero into the same cacheline.
      This enables us to check that all CPUs booted at the same exception
      level.
      
      This cacheline is written with the stage-1 MMU off (that is, via a
      strongly-ordered mapping) and will bypass any clean lines in the cache,
      leading to potential coherence problems when the variable is later
      checked via the normal, cacheable mapping of the kernel image.
      
      This patch reworks the broken flushing code so that we:
      
        (1) Use a DMB to order the strongly-ordered write of the cacheline
            against the subsequent cache-maintenance operation (by-VA
            operations only hazard against normal, cacheable accesses).
      
        (2) Use a single dc ivac instruction to invalidate any clean lines
            containing a stale copy of the line after it has been updated.

      (See the sketch after this entry.)
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d0488597
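
      A hedged C rendering of the reworked sequence; the kernel does this
      in head.S assembly, and the signature below is illustrative.

      #define BOOT_CPU_MODE_EL2 0xe12   /* illustrative value */

      static unsigned int __boot_cpu_mode[2] __attribute__((aligned(64)));

      static void set_cpu_boot_mode_flag(unsigned int mode)
      {
              /* EL2 boots record themselves at offset #4, EL1 boots at
               * offset 0, within the same cacheline-aligned variable. */
              unsigned int *flag = &__boot_cpu_mode[mode == BOOT_CPU_MODE_EL2];

              *flag = mode;                           /* strongly-ordered write, MMU off      */
              asm volatile("dmb sy" ::: "memory");    /* (1) order the write before the CMO   */
              asm volatile("dc ivac, %0"              /* (2) invalidate any stale clean line  */
                           : : "r" (flag) : "memory");
      }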