1. 16 Feb 2016: 1 commit
  2. 21 Oct 2015: 5 commits
  3. 13 Oct 2015: 1 commit
    • arm64: add KASAN support · 39d114dd
      Authored by Andrey Ryabinin
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.

      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan does not currently
      track (vmalloc).
      After the physical memory has been mapped, pages for the shadow memory
      are allocated and mapped.
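
      To make the 1/8 ratio above concrete, here is a small user-space
      sketch of a KASan-style address-to-shadow calculation; the scale
      shift and the offset value below are illustrative placeholders,
      not the actual arm64 constants:

          #include <stdint.h>
          #include <stdio.h>

          #define SHADOW_SCALE_SHIFT 3                       /* one shadow byte covers 8 bytes */
          #define SHADOW_OFFSET      0xdfff200000000000ULL   /* hypothetical offset, not arm64's */

          /* map a (kernel-style) address to its shadow byte: addr/8 + offset */
          static uint64_t mem_to_shadow(uint64_t addr)
          {
                  return (addr >> SHADOW_SCALE_SHIFT) + SHADOW_OFFSET;
          }

          int main(void)
          {
                  uint64_t addr = 0xffff000012345678ULL;     /* arbitrary example address */

                  printf("addr   = 0x%016llx\n", (unsigned long long)addr);
                  printf("shadow = 0x%016llx\n", (unsigned long long)mem_to_shadow(addr));
                  return 0;
          }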
      
      Functions like memset/memmove/memcpy do a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is important
      to catch it. The compiler's instrumentation cannot do this, since
      these functions are written in assembly.
      KASan therefore replaces the memory functions with manually
      instrumented variants. The original functions are declared as weak
      symbols so that the strong definitions in mm/kasan/kasan.c can replace
      them. The original functions also have aliases with a '__' prefix in
      the name, so the non-instrumented variant can be called if needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c).
      For such files the original mem* functions are replaced (via #define)
      with the prefixed variants, disabling the memory access checks there.
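
      A minimal, self-contained sketch of the weak-symbol/'__'-alias pattern
      described above; the names my_memcpy/__my_memcpy are made up for
      illustration (in the kernel the strong, instrumented definitions live
      in mm/kasan/kasan.c and the weak originals are in assembly):

          #include <stddef.h>
          #include <stdio.h>

          /* uninstrumented implementation, always reachable via the '__' name */
          void *__my_memcpy(void *dst, const void *src, size_t n)
          {
                  char *d = dst;
                  const char *s = src;

                  while (n--)
                          *d++ = *s++;
                  return dst;
          }

          /* weak alias: callers use my_memcpy(); a strong, instrumented
           * definition in another object file would replace it at link time */
          void *my_memcpy(void *dst, const void *src, size_t n)
                  __attribute__((weak, alias("__my_memcpy")));

          int main(void)
          {
                  char src[6] = "kasan";
                  char dst[6];

                  my_memcpy(dst, src, sizeof(src));    /* the (possibly instrumented) name */
                  __my_memcpy(dst, src, sizeof(src));  /* always the raw, unchecked variant */
                  printf("%s\n", dst);
                  return 0;
          }

      A file built without instrumentation can then do roughly
      '#define memcpy(d, s, n) __memcpy(d, s, n)' so that every call in that
      file resolves to the unchecked alias, as the message describes.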
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
  4. 07 Oct 2015: 1 commit
    • arm64: Don't relocate non-existent initrd · 4ca3bc86
      Authored by Mark Rutland
      When booting a kernel without an initrd, the kernel reports that it
      moves -1 bytes worth, having gone through the motions with initrd_start
      equal to initrd_end:
      
          Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]
      
      Prevent this by bailing out early when the initrd size is zero (i.e. we
      have no initrd), avoiding the confusing message and other associated
      work.
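
      A stand-alone sketch of the guard described above (the function and
      variable names here are simplified stand-ins, not the actual arm64
      code):

          #include <stdint.h>
          #include <stdio.h>

          /* stand-ins for the kernel's initrd_start/initrd_end globals */
          static uint64_t initrd_start;
          static uint64_t initrd_end;

          static void relocate_initrd_sketch(void)
          {
                  uint64_t size = initrd_end - initrd_start;

                  if (size == 0)
                          return;   /* no initrd: skip the move and the log message */

                  printf("Moving initrd from [%#llx-%#llx]\n",
                         (unsigned long long)initrd_start,
                         (unsigned long long)(initrd_end - 1));
                  /* ... the actual copy would happen here ... */
          }

          int main(void)
          {
                  relocate_initrd_sketch();   /* both ends are zero, so nothing is printed */
                  return 0;
          }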
      
      Fixes: 1570f0d7 ("arm64: support initrd outside kernel linear map")
      Cc: Mark Salter <msalter@redhat.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4ca3bc86
  5. 09 Sep 2015: 1 commit
  6. 03 Aug 2015: 1 commit
  7. 30 Jul 2015: 1 commit
  8. 27 Jul 2015: 8 commits
  9. 21 Jul 2015: 1 commit
  10. 02 Jun 2015: 1 commit
  11. 28 May 2015: 1 commit
  12. 19 May 2015: 1 commit
  13. 26 Mar 2015: 2 commits
  14. 25 Mar 2015: 4 commits
  15. 20 Mar 2015: 3 commits
  16. 18 Mar 2015: 2 commits
  17. 24 Jan 2015: 1 commit
  18. 22 Jan 2015: 1 commit
  19. 17 Jan 2015: 1 commit
    • arm64: respect mem= for EFI · 6083fe74
      Authored by Mark Rutland
      When booting with EFI, we acquire the EFI memory map after parsing the
      early params. This unfortunately renders the option useless as we call
      memblock_enforce_memory_limit (which uses memblock_remove_range behind
      the scenes) before we've added any memblocks. We end up removing
      nothing, then adding all of memory later when efi_init calls
      reserve_regions.
      
      Instead, we can log the limit and apply this later when we do the rest
      of the memblock work in memblock_init, which should work regardless of
      the presence of EFI. At the same time we may as well move the early
      parameter into arm64's mm/init.c, close to arm64_memblock_init.
      
      Any memory which must be mapped (e.g. for use by EFI runtime services)
      must be mapped explicitly rather than relying on the linear mapping,
      which may be truncated as a result of a mem= option passed on the kernel
      command line.
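
      A toy, user-space sketch of the "record the limit early, enforce it
      once the regions exist" ordering described above (the region list and
      function names are simplified stand-ins, not the real memblock API):

          #include <stdint.h>
          #include <stdio.h>

          struct region { uint64_t base, size; };

          static struct region regions[4];
          static int nr_regions;
          static uint64_t memory_limit = UINT64_MAX;   /* "no mem= given" */

          /* early_param("mem", ...) analogue: only record the requested limit */
          static void early_mem(uint64_t limit) { memory_limit = limit; }

          /* EFI/DT discovery analogue: regions only become known later */
          static void add_region(uint64_t base, uint64_t size)
          {
                  regions[nr_regions++] = (struct region){ base, size };
          }

          /* applied once all regions exist, i.e. from the memblock init path */
          static void enforce_memory_limit(void)
          {
                  uint64_t total = 0;

                  for (int i = 0; i < nr_regions; i++) {
                          if (total >= memory_limit)
                                  regions[i].size = 0;                    /* entirely above the limit */
                          else if (total + regions[i].size > memory_limit)
                                  regions[i].size = memory_limit - total; /* partially truncate */
                          total += regions[i].size;
                  }
          }

          int main(void)
          {
                  early_mem(1ULL << 30);                  /* mem=1G on the command line */
                  add_region(0x80000000ULL, 2ULL << 30);  /* 2 GiB discovered afterwards */
                  enforce_memory_limit();                 /* the limit now has an effect */

                  printf("usable: %llu MiB\n",
                         (unsigned long long)(regions[0].size >> 20));
                  return 0;
          }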
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6083fe74
  20. 13 Jan 2015: 2 commits
  21. 08 Jan 2015: 1 commit
    • arm64/efi: add missing call to early_ioremap_reset() · 0e63ea48
      Authored by Ard Biesheuvel
      The early ioremap support introduced by patch bf4b558e
      ("arm64: add early_ioremap support") failed to add a call to
      early_ioremap_reset() at an appropriate time. Without this call,
      invocations of early_ioremap etc. that are done too late will go
      unnoticed and may cause corruption.
      
      This is exactly what happened when the first user of this feature
      was added in patch f84d0275 ("arm64: add EFI runtime services").
      The early mapping of the EFI memory map is unmapped during an early
      initcall, at which time the early ioremap support is long gone.
      
      Fix this by adding the missing call to early_ioremap_reset() to
      setup_arch(), and by moving the offending early_memunmap() to right
      after the point where the early mapping of the EFI memory map is last
      used.
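
      A toy illustration of the lifetime rule enforced here (the *_toy
      helpers below only model "all early mappings must be torn down before
      the reset"; they are not the kernel API):

          #include <assert.h>
          #include <stdbool.h>
          #include <stdio.h>

          static bool early_ioremap_alive = true;   /* early fixmap machinery available */
          static int  nr_early_mappings;

          static void early_memremap_toy(void)  { assert(early_ioremap_alive); nr_early_mappings++; }
          static void early_memunmap_toy(void)  { assert(early_ioremap_alive); nr_early_mappings--; }

          static void early_ioremap_reset_toy(void)
          {
                  /* with the reset in place, a leftover or late mapping is caught
                   * here instead of causing silent corruption much later */
                  assert(nr_early_mappings == 0);
                  early_ioremap_alive = false;
          }

          int main(void)
          {
                  early_memremap_toy();        /* map the EFI memory map early */
                  /* ... last use of that mapping ... */
                  early_memunmap_toy();        /* unmap before the reset, as the fix does */
                  early_ioremap_reset_toy();   /* a late early_memremap would now trip */
                  printf("ordering ok\n");
                  return 0;
          }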
      
      Fixes: f84d0275 ("arm64: add EFI runtime services")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0e63ea48