1. 24 May 2018 (1 commit)
  2. 23 May 2018 (1 commit)
    • arm64: fault: Don't leak data in ESR context for user fault on kernel VA · cc198460
      Committed by Peter Maydell
      If userspace faults on a kernel address, handing them the raw ESR
      value on the sigframe as part of the delivered signal can leak data
      useful to attackers who are using information about the underlying hardware
      fault type (e.g. translation vs permission) as a mechanism to defeat KASLR.
      
      However there are also legitimate uses for the information provided
      in the ESR -- notably the GCC and LLVM sanitizers use this to report
      whether wild pointer accesses by the application are reads or writes
      (since a wild write is a more serious bug than a wild read), so we
      don't want to drop the ESR information entirely.
      
      For faulting addresses in the kernel, sanitize the ESR. We choose
      to present userspace with the illusion that there is nothing mapped
      in the kernel's part of the address space at all, by reporting all
      faults as level 0 translation faults taken to EL1.
      
      These fields are safe to pass through to userspace as they depend
      only on the instruction that userspace used to provoke the fault:
       EC IL (always)
       ISV CM WNR (for all data aborts)
      All the other fields in ESR except DFSC are architecturally RES0
      for an L0 translation fault taken to EL1, so can be zeroed out
      without confusing userspace.
      
      The illusion is not entirely perfect, as there is a tiny wrinkle
      where we will report an alignment fault that was not due to the memory
      type (for instance an LDREX to an unaligned address) as a translation
      fault, whereas if you do this on real unmapped memory the alignment
      fault takes precedence. This is not likely to trip anybody up in
      practice, as the only users we know of for the ESR information who
      care about the behaviour for kernel addresses only really want to
      know about the WnR bit.
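      
      As a rough illustration of the sanitisation described above (a
      sketch, not the verbatim kernel patch; the macro names and the
      data-abort flag are assumptions), the ESR handed to userspace can
      be reduced to the instruction-dependent bits plus a level 0
      translation fault code:
      
        /* ARMv8 ESR_ELx layout: EC[31:26], IL[25], ISV[24], CM[8], WnR[6],
         * DFSC[5:0]; DFSC == 0b000100 encodes a level 0 translation fault. */
        #include <stdint.h>
        
        #define ESR_EC_MASK     (0x3FU << 26)
        #define ESR_IL          (1U << 25)
        #define ESR_ISV         (1U << 24)
        #define ESR_CM          (1U << 8)
        #define ESR_WNR         (1U << 6)
        #define ESR_DFSC_L0_TF  0x04U
        
        /* Keep only the fields that depend on the faulting instruction and
         * report everything else as an L0 translation fault (RES0 cleared). */
        static uint32_t sanitize_esr_for_user(uint32_t esr, int is_data_abort)
        {
                uint32_t out = esr & (ESR_EC_MASK | ESR_IL);
        
                if (is_data_abort)
                        out |= esr & (ESR_ISV | ESR_CM | ESR_WNR);
        
                return out | ESR_DFSC_L0_TF;
        }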
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 01 May 2018 (1 commit)
  4. 24 Apr 2018 (1 commit)
  5. 17 Apr 2018 (1 commit)
    • arm64: kasan: avoid pfn_to_nid() before page array is initialized · 800cb2e5
      Committed by Mark Rutland
      In arm64's kasan_init(), we use pfn_to_nid() to find the NUMA node a
      span of memory is in, hoping to allocate shadow from the same NUMA node.
      However, at this point, the page array has not been initialized, and
      thus this is bogus.
      
      Since commit:
      
        f165b378 ("mm: uninitialized struct page poisoning sanity")
      
      ... accessing fields of the page array results in a boot time Oops(),
      highlighting this problem:
      
      [    0.000000] Unable to handle kernel paging request at virtual address dfff200000000000
      [    0.000000] Mem abort info:
      [    0.000000]   ESR = 0x96000004
      [    0.000000]   Exception class = DABT (current EL), IL = 32 bits
      [    0.000000]   SET = 0, FnV = 0
      [    0.000000]   EA = 0, S1PTW = 0
      [    0.000000] Data abort info:
      [    0.000000]   ISV = 0, ISS = 0x00000004
      [    0.000000]   CM = 0, WnR = 0
      [    0.000000] [dfff200000000000] address between user and kernel address ranges
      [    0.000000] Internal error: Oops: 96000004 [#1] PREEMPT SMP
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.16.0-07317-gf165b378 #42
      [    0.000000] Hardware name: ARM Juno development board (r1) (DT)
      [    0.000000] pstate: 80000085 (Nzcv daIf -PAN -UAO)
      [    0.000000] pc : __asan_load8+0x8c/0xa8
      [    0.000000] lr : __dump_page+0x3c/0x3b8
      [    0.000000] sp : ffff2000099b7ca0
      [    0.000000] x29: ffff2000099b7ca0 x28: ffff20000a1762c0
      [    0.000000] x27: ffff7e0000000000 x26: ffff2000099dd000
      [    0.000000] x25: ffff200009a3f960 x24: ffff200008f9c38c
      [    0.000000] x23: ffff20000a9d3000 x22: ffff200009735430
      [    0.000000] x21: fffffffffffffffe x20: ffff7e0001e50420
      [    0.000000] x19: ffff7e0001e50400 x18: 0000000000001840
      [    0.000000] x17: ffffffffffff8270 x16: 0000000000001840
      [    0.000000] x15: 0000000000001920 x14: 0000000000000004
      [    0.000000] x13: 0000000000000000 x12: 0000000000000800
      [    0.000000] x11: 1ffff0012d0f89ff x10: ffff10012d0f89ff
      [    0.000000] x9 : 0000000000000000 x8 : ffff8009687c5000
      [    0.000000] x7 : 0000000000000000 x6 : ffff10000f282000
      [    0.000000] x5 : 0000000000000040 x4 : fffffffffffffffe
      [    0.000000] x3 : 0000000000000000 x2 : dfff200000000000
      [    0.000000] x1 : 0000000000000005 x0 : 0000000000000000
      [    0.000000] Process swapper (pid: 0, stack limit = 0x        (ptrval))
      [    0.000000] Call trace:
      [    0.000000]  __asan_load8+0x8c/0xa8
      [    0.000000]  __dump_page+0x3c/0x3b8
      [    0.000000]  dump_page+0xc/0x18
      [    0.000000]  kasan_init+0x2e8/0x5a8
      [    0.000000]  setup_arch+0x294/0x71c
      [    0.000000]  start_kernel+0xdc/0x500
      [    0.000000] Code: aa0403e0 9400063c 17ffffee d343fc00 (38e26800)
      [    0.000000] ---[ end trace 67064f0e9c0cc338 ]---
      [    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
      [    0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---
      
      Let's fix this by using early_pfn_to_nid(), as other architectures do in
      their kasan init code. Note that early_pfn_to_nid acquires the nid from
      the memblock array, which we iterate over in kasan_init(), so this
      should be fine.
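      
      As a minimal sketch of that change (the helper below is
      hypothetical, not the exact hunk), the node id is resolved from
      memblock data rather than from struct page:
      
        /* early_pfn_to_nid() consults memblock, so it is safe to call
         * before the struct page array has been initialised. */
        static int __init kasan_shadow_nid(phys_addr_t pa)
        {
                /* was: pfn_to_nid(PHYS_PFN(pa)), which dereferences
                 * uninitialised struct pages and oopses under poisoning */
                return early_pfn_to_nid(PHYS_PFN(pa));
        }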
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 39d114dd ("arm64: add KASAN support")
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 12 Apr 2018 (1 commit)
    • exec: pass stack rlimit into mm layout functions · 8f2af155
      Committed by Kees Cook
      Patch series "exec: Pin stack limit during exec".
      
      Attempts to solve problems with the stack limit changing during exec
      continue to be frustrated[1][2].  In addition to the specific issues
      around the Stack Clash family of flaws, Andy Lutomirski pointed out[3]
      other places during exec where the stack limit is used and is assumed to
      be unchanging.  Given the many places it gets used and the fact that it
      can be manipulated/raced via setrlimit() and prlimit(), I think the only
      way to handle this is to move away from the "current" view of the stack
      limit and instead attach it to the bprm, and plumb this down into the
      functions that need to know the stack limits.  This series implements
      the approach.
      
      [1] 04e35f44 ("exec: avoid RLIMIT_STACK races with prlimit()")
      [2] 779f4e1c ("Revert "exec: avoid RLIMIT_STACK races with prlimit()"")
      [3] to security@kernel.org, "Subject: existing rlimit races?"
      
      This patch (of 3):
      
      Since it is possible that the stack rlimit can change externally during
      exec (either via another thread calling setrlimit() or another process
      calling prlimit()), provide a way to pass the rlimit down into the
      per-architecture mm layout functions so that the rlimit can stay in the
      bprm structure instead of sitting in the signal structure until exec is
      finalized.
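      
      A hedged sketch of the resulting interface (simplified from the
      description above; the helpers shown are assumptions, not the
      exact diff): the layout code receives the exec-time rlimit
      snapshot instead of reading it from current's signal struct:
      
        /* Before: void arch_pick_mmap_layout(struct mm_struct *mm);
         * After:  exec passes the stack rlimit it pinned in bprm, so a
         * concurrent setrlimit()/prlimit() cannot change it mid-exec. */
        void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
        {
                unsigned long random_factor = 0UL;
        
                if (current->flags & PF_RANDOMIZE)
                        random_factor = arch_mmap_rnd();
        
                if (mmap_is_legacy(rlim_stack)) {
                        mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
                        mm->get_unmapped_area = arch_get_unmapped_area;
                } else {
                        mm->mmap_base = mmap_base(random_factor, rlim_stack);
                        mm->get_unmapped_area = arch_get_unmapped_area_topdown;
                }
        }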
      
      Link: http://lkml.kernel.org/r/1518638796-20819-2-git-send-email-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Cc: Brad Spengler <spender@grsecurity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 27 Mar 2018 (3 commits)
    • Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)" · 3f251cf0
      Committed by Will Deacon
      This reverts commit 1f85b42a.
      
      The internal dma-direct.h API has changed in -next, which collides with
      us trying to use it to manage non-coherent DMA devices on systems with
      unreasonably large cache writeback granules.
      
      This isn't at all trivial to resolve, so revert our changes for now and
      we can revisit this after the merge window. Effectively, this just
      restores our behaviour back to that of 4.16.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Delay enabling hardware DBM feature · 05abb595
      Committed by Suzuki K Poulose
      We currently enable the hardware DBM bit on capable CPUs very early
      in boot, via __cpu_setup. This gives us no flexibility to optionally
      disable the feature, since clearing the bit later is costly because
      the TLB can cache the settings. Instead, delay enabling the feature
      until the CPU is brought up into the kernel, using the feature
      capability mechanism to handle it.
      
      Hardware DBM is a non-conflicting feature, i.e. the kernel can
      safely run with a mix of CPUs where some use the feature and others
      do not. So it is safe for a late CPU to have this capability and
      enable it, even if the active CPUs do not.
      
      To get this handled properly by the infrastructure, we
      unconditionally set the capability and only enable it on CPUs which
      really have the feature. We also print the feature detection from
      the "matches" callback, to make sure we don't mislead the user when
      none of the CPUs could use the feature.
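      
      A hedged sketch of that shape (the helper below is an assumption,
      not the exact patch): the capability is always registered, and the
      per-CPU enable hook only sets TCR_EL1.HD on CPUs that actually
      implement DBM:
      
        /* Enable hardware DBM from the capability's per-CPU hook rather
         * than unconditionally in __cpu_setup. */
        static void cpu_enable_hw_dbm(const struct arm64_cpu_capabilities *cap)
        {
                /* this_cpu_has_dbm() is a hypothetical per-CPU check */
                if (this_cpu_has_dbm(cap))
                        write_sysreg(read_sysreg(tcr_el1) | TCR_HD, tcr_el1);
        }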
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: capabilities: Update prototype for enable call back · c0cda3b8
      Committed by Dave Martin
      We issue the enable() callback for all CPU hwcaps capabilities
      available on the system, on all the CPUs. So far we have ignored
      the argument passed to the callback, which had a prototype that
      accepts a "void *" for use with on_each_cpu() and later with
      stop_machine(). However, with commit 0a0d111d
      ("arm64: cpufeature: Pass capability structure to ->enable callback"),
      there are users of the argument who want the matching capability
      struct pointer where there are multiple matching criteria for a
      single capability. Clean up the declaration of the callback to make
      this clear:
      
       1) Rename it to cpu_enable(), to imply taking the necessary actions
          on the called CPU for the entry.
       2) Pass a const pointer to the capability, to allow the callback to
          check the entry (e.g. to check if any action is needed on the CPU).
       3) We don't care about the result of the callback, so turn the
          return type into void.
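      
      In other words, the callback ends up with roughly the following
      shape (illustrative example only; the body is made up):
      
        /* Runs on each CPU that needs the action, receives the matching
         * capability entry, and returns nothing. */
        static void cpu_enable_example(const struct arm64_cpu_capabilities *cap)
        {
                /* cap identifies which of several matching entries fired */
                pr_info("enabling %s on CPU %d\n", cap->desc, smp_processor_id());
        }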
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Dave Martin <dave.martin@arm.com>
      [suzuki: convert more users, rename call back and drop results]
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 23 Mar 2018 (1 commit)
    • mm/vmalloc: add interfaces to free unmapped page table · b6bdb751
      Committed by Toshi Kani
      On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap() may
      create pud/pmd mappings.  A kernel panic was observed on arm64 systems
      with Cortex-A75 in the following steps as described by Hanjun Guo.
      
       1. ioremap a 4K region; a valid page table is built,
       2. iounmap it; pte0 is set to 0;
       3. ioremap the same address with a 2M size; the pgd/pmd is
          unchanged, then a new value is set for the pmd;
       4. pte0 is leaked;
       5. the CPU may take an exception because the old pmd is still in
          the TLB, which leads to a kernel panic.
      
      This panic is not reproducible on x86.  INVLPG, called from iounmap,
      purges all levels of entries associated with the purged address on
      x86.  x86 still has the memory leak, however.
      
      The patch changes the ioremap path to free unmapped page table(s) since
      doing so in the unmap path has the following issues:
      
       - The iounmap() path is shared with vunmap(). Since vmap() only
         supports pte mappings, making vunmap() free a pte page is
         overhead for regular vmap users, as they do not need a pte page
         freed up.
      
       - Checking if all entries in a pte page are cleared in the unmap path
         is racy, and serializing this check is expensive.
      
       - The unmap path calls free_vmap_area_noflush() to do lazy TLB purges.
         Clearing a pud/pmd entry before the lazy TLB purges would need an
         extra TLB purge.
      
      Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(), which
      clear a given pud/pmd entry and free up a page for the lower level
      entries.
      
      This patch implements their stub functions on x86 and arm64, which
      work as a workaround.
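      
      The arm64 stubs are roughly of this shape (a sketch based on the
      description above): they simply report whether the entry is already
      clear, so a populated entry makes ioremap() fall back to smaller
      mappings rather than silently overwriting it:
      
        /* Return non-zero only when there is nothing to tear down. */
        int pud_free_pmd_page(pud_t *pud)
        {
                return pud_none(*pud);
        }
        
        int pmd_free_pte_page(pmd_t *pmd)
        {
                return pmd_none(*pmd);
        }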
      
      [akpm@linux-foundation.org: fix typo in pmd_free_pte_page() stub]
      Link: http://lkml.kernel.org/r/20180314180155.19492-2-toshi.kani@hpe.com
      Fixes: e61ce6ad ("mm: change ioremap to set up huge I/O mappings")
      Reported-by: Lei Li <lious.lilei@hisilicon.com>
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Wang Xuefeng <wxf.wang@hisilicon.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Chintan Pandya <cpandya@codeaurora.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 09 Mar 2018 (2 commits)
    • arm64: signal: Ensure si_code is valid for all fault signals · af40ff68
      Committed by Dave Martin
      Currently, as reported by Eric, an invalid si_code value 0 is
      passed in many signals delivered to userspace in response to faults
      and other kernel errors.  Typically 0 is passed when the fault is
      insufficiently diagnosable or when there does not appear to be any
      sensible alternative value to choose.
      
      This appears to violate POSIX, and is intuitively wrong for at
      least two reasons arising from the fact that 0 == SI_USER:
      
       1) si_code is a union selector, and SI_USER (and si_code <= 0 in
          general) implies the existence of a different set of fields
          (siginfo._kill) from that which exists for a fault signal
          (siginfo._sigfault).  However, the code raising the signal
          typically writes only the _sigfault fields, and the _kill
          fields make no sense in this case.
      
          Thus when userspace sees si_code == 0 (SI_USER) it may
          legitimately inspect fields in the inactive union member _kill
          and obtain garbage as a result.
      
          There appears to be software in the wild relying on this,
          albeit generally only for printing diagnostic messages.
      
       2) Software that wants to be robust against spurious signals may
          discard signals where si_code == SI_USER (or <= 0), or may
          filter such signals based on the si_uid and si_pid fields of
          siginfo._kill.  In the case of fault signals, this means
          that important (and usually fatal) error conditions may be
          silently ignored.
      
      In practice, many of the faults for which arm64 passes si_code == 0
      are undiagnosable conditions such as exceptions with syndrome
      values in ESR_ELx to which the architecture does not yet assign any
      meaning, or conditions indicative of a bug or error in the kernel
      or system and thus that are unrecoverable and should never occur in
      normal operation.
      
      The approach taken in this patch is to translate all such
      undiagnosable or "impossible" synchronous fault conditions to
      SIGKILL, since these are at least probably localisable to a single
      process.  Some of these conditions should really result in a kernel
      panic, but due to the lack of diagnostic information it is
      difficult to be certain: this patch does not add any calls to
      panic(), but this could change later if justified.
      
      Although si_code will not reach userspace in the case of SIGKILL,
      it is still desirable to pass a nonzero value so that the common
      siginfo handling code can detect incorrect use of si_code == 0
      without false positives.  In this case the si_code dependent
      siginfo fields will not be correctly initialised, but since they
      are not passed to userspace I deem this not to matter.
      
      A few faults can reasonably occur in realistic userspace scenarios,
      and _should_ raise a regular, handleable (but perhaps not
      ignorable/blockable) signal: for these, this patch attempts to
      choose a suitable standard si_code value for the raised signal in
      each case instead of 0.
      
      arm64 was the only arch to define a BUS_FIXME code, so after this
      patch nobody defines it.  This patch therefore also removes the
      relevant code from siginfo_layout().
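      
      The effect on the arm64 fault table is roughly the following
      (illustrative excerpt, not the full diff):
      
        /* Undiagnosable syndromes now deliver SIGKILL with a valid
         * si_code (SI_KERNEL) instead of si_code 0, which aliases SI_USER. */
        static const struct fault_info fault_info[] = {
                /* before: { do_bad, SIGBUS, BUS_FIXME, "unknown 1" }, */
                { do_bad,             SIGKILL, SI_KERNEL,  "unknown 1"       },
                /* realistic, user-triggerable faults keep a standard code: */
                { do_alignment_fault, SIGBUS,  BUS_ADRALN, "alignment fault" },
        };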
      
      Cc: James Morse <james.morse@arm.com>
      Reported-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC · 6ae4b6e0
      Committed by Shanker Donthineni
      The DCache clean and ICache invalidation requirements for
      instruction to data coherence are discoverable through new fields
      in CTR_EL0. The two control bits DIC and IDC were defined for this
      purpose. No point-of-unification cache maintenance operations need
      to be performed from software on systems where the CPU caches are
      transparent.
      
      This patch optimizes the three functions __flush_cache_user_range(),
      clean_dcache_area_pou() and invalidate_icache_range() if the hardware
      reports CTR_EL0.IDC and/or CTR_EL0.DIC. Basically it skips the two
      instructions 'DC CVAU' and 'IC IVAU', and the associated loop logic,
      in order to avoid the unnecessary overhead.
      
      CTR_EL0.DIC: Instruction cache invalidation requirements for
       instruction to data coherence. The meaning of this bit [29]:
        0: Instruction cache invalidation to the point of unification
           is required for instruction to data coherence.
        1: Instruction cache invalidation to the point of unification
           is not required for instruction to data coherence.
      
      CTR_EL0.IDC: Data cache clean requirements for instruction to data
       coherence. The meaning of this bit [28]:
        0: Data cache clean to the point of unification is required for
           instruction to data coherence, unless CLIDR_EL1.LoC == 0b000
           or (CLIDR_EL1.LoUIS == 0b000 && CLIDR_EL1.LoUU == 0b000).
        1: Data cache clean to the point of unification is not required
           for instruction to data coherence.
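      
      A sketch of the optimisation (simplified; the real change is applied
      via alternatives in the assembly routines, and the helpers named
      below are assumptions):
      
        /* Skip PoU maintenance that CTR_EL0 says is unnecessary.
         * IDC is CTR_EL0[28], DIC is CTR_EL0[29]. */
        static void sync_caches_for_exec(unsigned long start, unsigned long end)
        {
                unsigned long ctr = read_sysreg(ctr_el0);
        
                if (!(ctr & BIT(28)))   /* IDC == 0: D-cache clean still needed */
                        dcache_clean_pou_range(start, end);   /* DC CVAU loop */
        
                if (!(ctr & BIT(29)))   /* DIC == 0: I-cache inval still needed */
                        icache_inval_pou_range(start, end);   /* IC IVAU loop */
        
                dsb(ish);
                isb();
        }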
      Co-authored-by: Philip Elcan <pelcan@codeaurora.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 07 Mar 2018 (4 commits)
  11. 26 Feb 2018 (1 commit)
  12. 22 Feb 2018 (1 commit)
  13. 17 Feb 2018 (1 commit)
    • arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing page tables · 20a004e7
      Committed by Will Deacon
      In many cases, page tables can be accessed concurrently by either another
      CPU (due to things like fast gup) or by the hardware page table walker
      itself, which may set access/dirty bits. In such cases, it is important
      to use READ_ONCE/WRITE_ONCE when accessing page table entries so that
      entries cannot be torn, merged or subject to apparent loss of coherence
      due to compiler transformations.
      
      Whilst there are some scenarios where this cannot happen (e.g. pinned
      kernel mappings for the linear region), the overhead of using READ_ONCE
      /WRITE_ONCE everywhere is minimal and makes the code an awful lot easier
      to reason about. This patch consistently uses these macros in the arch
      code, as well as explicitly namespacing pointers to page table entries
      from the entries themselves by adopting a 'p' suffix for the former
      (as is sometimes used elsewhere in the kernel source).
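      
      A small example of the resulting idiom (illustrative; the 'p' suffix
      marks a pointer to an entry, the bare name a value):
      
        /* Load the entry exactly once so the compiler cannot tear or
         * re-read it, then operate on the snapshot. */
        static int example_pte_young(pte_t *ptep)
        {
                pte_t pte = READ_ONCE(*ptep);
        
                return pte_young(pte);
        }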
      Tested-by: Yury Norov <ynorov@caviumnetworks.com>
      Tested-by: Richard Ruigrok <rruigrok@codeaurora.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 15 Feb 2018 (1 commit)
  15. 07 Feb 2018 (8 commits)
  16. 27 Jan 2018 (1 commit)
  17. 23 Jan 2018 (1 commit)
  18. 19 Jan 2018 (1 commit)
  19. 17 Jan 2018 (2 commits)
  20. 16 Jan 2018 (3 commits)
  21. 15 Jan 2018 (4 commits)