1. 16 Feb, 2016 (2 commits)
    • arm64: mm: create new fine-grained mappings at boot · 068a17a5
      Committed by Mark Rutland
      At boot we may change the granularity of the tables mapping the kernel
      (by splitting or making sections). This may happen when we create the
      linear mapping (in __map_memblock), or at any point we try to apply
      fine-grained permissions to the kernel (e.g. fixup_executable,
      mark_rodata_ro, fixup_init).
      
      Changing the active page tables in this manner may result in multiple
      entries for the same address being allocated into TLBs, risking problems
      such as TLB conflict aborts or issues derived from the amalgamation of
      TLB entries. Generally, a break-before-make (BBM) approach is necessary
      to avoid conflicts, but we cannot do this for the kernel tables as it
      risks unmapping text or data being used to do so.
      
      Instead, we can create a new set of tables from scratch in the safety of
      the existing mappings, and subsequently migrate over to these using the
      new cpu_replace_ttbr1 helper, which avoids the two sets of tables being
      active simultaneously.
      
      To avoid issues when we later modify permissions of the page tables
      (e.g. in fixup_init), we must create the page tables at a granularity
      such that later modification does not result in splitting of tables.
      
      This patch applies this strategy, creating a new set of fine-grained
      page tables from scratch, and safely migrating to them. The existing
      fixmap and kasan shadow page tables are reused in the new fine-grained
      tables.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      068a17a5
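      The build-inactive-then-switch sequence described above can be sketched with a userspace analogy. Toy arrays stand in for page tables and a plain pointer for TTBR1; none of these names are kernel symbols, and the real cpu_replace_ttbr1 additionally trampolines through an idmap and performs TLB invalidation so old and new entries never coexist:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Toy model: a "page table" is an array of entries; "ttbr1" points at
       * the active table. All names here are illustrative only. */
      #define ENTRIES 4

      static int table_a[ENTRIES] = {1, 1, 1, 1};   /* coarse: section mappings */
      static int table_b[ENTRIES];                  /* fine-grained replacement */
      static int *ttbr1 = table_a;                  /* active translation base */

      int main(void)
      {
          /* 1. Build the replacement tables completely while they are
           *    inactive; no lookup ever sees a half-built table. */
          for (int i = 0; i < ENTRIES; i++)
              table_b[i] = 2;                       /* fine-grained entries */

          /* 2. Switch the base in one step (the analogue of
           *    cpu_replace_ttbr1), so the two tables are never active
           *    simultaneously. */
          ttbr1 = table_b;

          /* After the switch, every lookup uses only new entries. */
          for (int i = 0; i < ENTRIES; i++)
              assert(ttbr1[i] == 2);
          printf("switched without mixed state\n");
          return 0;
      }
      ```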
    • arm64: kasan: avoid TLB conflicts · c1a88e91
      Committed by Mark Rutland
      The page table modification performed during the KASAN init risks the
      allocation of conflicting TLB entries, as it swaps a set of valid global
      entries for another without suitable TLB maintenance.
      
      The presence of conflicting TLB entries can result in the delivery of
      synchronous TLB conflict aborts, or may result in the use of erroneous
      data being returned in response to a TLB lookup. This can affect
      explicit data accesses from software as well as translations performed
      asynchronously (e.g. as part of page table walks or speculative I-cache
      fetches), and can therefore result in a wide variety of problems.
      
      To avoid this, use cpu_replace_ttbr1 to swap the page tables. This
      ensures that when the new tables are installed there are no stale
      entries from the old tables which may conflict. As all updates are made
      to the tables while they are not active, the updates themselves are
      safe.
      
      At the same time, add the missing barrier to ensure that the tmp_pg_dir
      entries updated via memcpy are visible to the page table walkers at the
      point the tmp_pg_dir is installed. All other page table updates made as
      part of KASAN initialisation have the requisite barriers due to the use
      of the standard page table accessors.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c1a88e91
  2. 25 Jan, 2016 (1 commit)
  3. 13 Oct, 2015 (2 commits)
    • arm64: kasan: fix issues reported by sparse · 83040123
      Committed by Will Deacon
      Sparse reports some new issues introduced by the kasan patches:
      
        arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for
        'kasan_early_init' [-Wmissing-prototypes]
        void __init kasan_early_init(void)
                    ^
        arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init'
        was not declared. Should it be static? [sparse]
      
      This patch resolves the problem by adding a prototype for
      kasan_early_init and marking the function as asmlinkage, since it's only
      called from head.S.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      83040123
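      The fix pattern (a prototype for a function whose only callers are in assembly) can be illustrated in plain C. The body below is a stand-in for illustration; the kernel version additionally carries __init and asmlinkage, and the prototype lives in a header:

      ```c
      #include <stdio.h>

      /* Declaring this prototype (normally in a header) silences both gcc's
       * -Wmissing-prototypes and sparse's "should it be static?" warning
       * for a function that is only called from assembly (head.S). */
      void kasan_early_init(void);

      void kasan_early_init(void)
      {
          /* stand-in body for illustration */
          puts("kasan_early_init called");
      }

      int main(void)
      {
          kasan_early_init();   /* in the kernel the caller is head.S, not C */
          return 0;
      }
      ```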
    • arm64: add KASAN support · 39d114dd
      Committed by Andrey Ryabinin
      This patch adds arch-specific code for the kernel address sanitizer
      (see Documentation/kasan.txt).
      
      One eighth of the kernel address space is reserved for shadow memory.
      There was no hole large enough for this, so the virtual addresses for
      the shadow were taken from the vmalloc area.
      
      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan does not currently track
      (vmalloc). After the physical memory is mapped, pages for the shadow
      memory are allocated and mapped.
      
      Functions like memset/memmove/memcpy perform many memory accesses.
      If a bad pointer is passed to one of these functions, it is important
      to catch it. The compiler's instrumentation cannot do this, since
      these functions are written in assembly. KASan therefore replaces the
      memory functions with manually instrumented variants. The original
      functions are declared as weak symbols so that the strong definitions
      in mm/kasan/kasan.c can replace them. The original functions also have
      aliases with a '__' prefix, so the non-instrumented variant can be
      called when needed. Some files are built without kasan instrumentation
      (e.g. mm/slub.c); for such files the original mem* functions are
      replaced (via #define) with the prefixed variants to disable memory
      access checks.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
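      The 1/8 reservation mentioned above follows from KASan's shadow encoding: one shadow byte describes eight bytes of memory. A minimal sketch of the address arithmetic, using a hypothetical shadow offset (the real arm64 value depends on VA_BITS and the kernel configuration):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative constants; the scale shift matches KASan's 8-bytes-
       * per-shadow-byte encoding, the offset is hypothetical. */
      #define SHADOW_SCALE_SHIFT 3                        /* 1 byte covers 8 */
      #define SHADOW_OFFSET      0xdfff000000000000ULL    /* hypothetical */

      static uint64_t mem_to_shadow(uint64_t addr)
      {
          return (addr >> SHADOW_SCALE_SHIFT) + SHADOW_OFFSET;
      }

      int main(void)
      {
          uint64_t a = 0xffff000000000000ULL;

          /* Addresses 8 bytes apart map to adjacent shadow bytes. */
          assert(mem_to_shadow(a + 8) == mem_to_shadow(a) + 1);

          /* A 64-byte object needs 64/8 = 8 shadow bytes, hence the
           * overall 1/8 address-space reservation. */
          assert(mem_to_shadow(a + 64) - mem_to_shadow(a) == 8);

          printf("shadow scaling ok\n");
          return 0;
      }
      ```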