1. 14 Nov 2017, 1 commit
    • arm64: Make ARMV8_DEPRECATED depend on SYSCTL · 6cfa7cc4
      Authored by Dave Martin
      If CONFIG_SYSCTL=n and CONFIG_ARMV8_DEPRECATED=y, the deprecated
      instruction emulation code currently leaks some memory at boot
      time, and won't have any runtime control interface.  This does
      not feel like useful or intended behaviour...
      
      This patch adds a dependency on CONFIG_SYSCTL, so that such a
      kernel can't be built in the first place.
      
      It's probably not worth adding the error-handling / cleanup code
      that would be needed to deal with this otherwise: people who
      desperately need the emulation can still enable SYSCTL.
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6cfa7cc4
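
      A minimal C sketch of the failure mode being closed off here, assuming
      the emulation registers its controls via register_sysctl_table() (a
      static-inline stub returning NULL when CONFIG_SYSCTL=n); the table and
      init-function names are hypothetical:

        #include <linux/sysctl.h>

        static struct ctl_table emu_ctl_table[] = {
                /* per-instruction emulation knobs would live here */
                { }
        };

        static struct ctl_table_header *emu_sysctl_hdr;

        static int __init deprecated_insn_emu_init(void)
        {
                /* With CONFIG_SYSCTL=n this call is a stub returning NULL:
                 * the emulation comes up with no runtime control interface,
                 * and anything allocated for the table is never freed. */
                emu_sysctl_hdr = register_sysctl_table(emu_ctl_table);
                if (!emu_sysctl_hdr)
                        return -ENOMEM; /* the cleanup path this patch avoids writing */
                return 0;
        }
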
  2. 03 Nov 2017, 2 commits
  3. 04 Oct 2017, 1 commit
  4. 02 Oct 2017, 1 commit
  5. 16 Aug 2017, 1 commit
  6. 09 Aug 2017, 2 commits
  7. 13 Jul 2017, 1 commit
    • include/linux/string.h: add the option of fortified string.h functions · 6974f0c4
      Authored by Daniel Micay
      This adds support for compiling with a rough equivalent to the glibc
      _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
      overflow checks for string.h functions when the compiler determines the
      size of the source or destination buffer at compile-time.  Unlike glibc,
      it covers buffer reads in addition to writes.
      
      GNU C __builtin_*_chk intrinsics are avoided because they would force a
      much more complex implementation.  They aren't designed to detect read
      overflows and offer no real benefit when using an implementation based
      on inline checks.  Inline checks don't add up to much code size and
      allow full use of the regular string intrinsics while avoiding the need
      for a bunch of _chk functions and per-arch assembly to avoid wrapper
      overhead.
      
      This detects various overflows at compile-time in various drivers and
      some non-x86 core kernel code.  There will likely be issues caught in
      regular use at runtime too.
      
      Future improvements left out of initial implementation for simplicity,
      as it's all quite optional and can be done incrementally:
      
      * Some of the fortified string functions (strncpy, strcat) don't yet
        place a limit on reads from the source based on __builtin_object_size of
        the source buffer.
      
      * Extending coverage to more string functions like strlcat.
      
      * It should be possible to optionally use __builtin_object_size(x, 1) for
        some functions (C strings) to detect intra-object overflows (like
        glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
        approach to avoid likely compatibility issues.
      
      * The compile-time checks should be made available via a separate config
        option which can be enabled by default (or always enabled) once enough
        time has passed to get the issues it catches fixed.
      
      Kees said:
       "This is great to have. While it was out-of-tree code, it would have
        blocked at least CVE-2016-3858 from being exploitable (improper size
        argument to strlcpy()). I've sent a number of fixes for
        out-of-bounds-reads that this detected upstream already"
      
      [arnd@arndb.de: x86: fix fortified memcpy]
        Link: http://lkml.kernel.org/r/20170627150047.660360-1-arnd@arndb.de
      [keescook@chromium.org: avoid panic() in favor of BUG()]
        Link: http://lkml.kernel.org/r/20170626235122.GA25261@beast
      [keescook@chromium.org: move from -mm, add ARCH_HAS_FORTIFY_SOURCE, tweak Kconfig help]
      Link: http://lkml.kernel.org/r/20170526095404.20439-1-danielmicay@gmail.com
      Link: http://lkml.kernel.org/r/1497903987-21002-8-git-send-email-keescook@chromium.org
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      6974f0c4
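
      A rough sketch of the inline-check idea described above, using only the
      documented GCC builtins __builtin_object_size() and __builtin_trap();
      the kernel's actual string.h implementation differs in detail:

        #include <stddef.h>
        #include <string.h>

        /* Size of the enclosing object if the compiler can prove it,
         * otherwise (size_t)-1, in which case the checks compile away. */
        #define OBJ_SIZE(p) __builtin_object_size(p, 0)

        static inline void *checked_memcpy(void *dst, const void *src, size_t n)
        {
                if (OBJ_SIZE(dst) != (size_t)-1 && n > OBJ_SIZE(dst))
                        __builtin_trap(); /* write overflow; stands in for fortify_panic() */
                if (OBJ_SIZE(src) != (size_t)-1 && n > OBJ_SIZE(src))
                        __builtin_trap(); /* read overflow, which glibc's builtins miss */
                return memcpy(dst, src, n);
        }
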
  8. 07 Jul 2017, 1 commit
  9. 23 Jun 2017, 1 commit
    • acpi: apei: handle SEA notification type for ARMv8 · 7edda088
      Authored by Tyler Baicar
      The ARM APEI extension proposal added the SEA (Synchronous External
      Abort) notification type for ARMv8.
      Add a new GHES error source handling function for SEA. If an error
      source's notification type is SEA, then this function can be registered
      into the SEA exception handler. That way GHES will parse and report
      SEA exceptions when they occur.
      An SEA can interrupt code that had interrupts masked, and is treated as
      an NMI. To support this, the page of address space used for mapping APEI
      buffers while in_nmi() is always reserved, and ghes_ioremap_pfn_nmi() is
      changed to use the helper methods to find the pgprot_t to map with, in
      the same way as ghes_ioremap_pfn_irq().
      Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
      Cc: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7edda088
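
      A hedged sketch of the ghes_ioremap_pfn_nmi() change as described:
      resolve the pgprot_t through the arch helper rather than assuming one,
      mirroring ghes_ioremap_pfn_irq(). The reserved-slot variable is
      illustrative, not the exact kernel code:

        static unsigned long ghes_nmi_vaddr; /* one reserved page, set up at init */

        static void __iomem *ghes_ioremap_pfn_nmi(u64 pfn)
        {
                phys_addr_t paddr = (phys_addr_t)pfn << PAGE_SHIFT;
                /* look up the firmware-reported memory attributes instead
                 * of hard-coding them, as the IRQ path already does */
                pgprot_t prot = arch_apei_get_mem_attribute(paddr);

                if (ioremap_page_range(ghes_nmi_vaddr,
                                       ghes_nmi_vaddr + PAGE_SIZE, paddr, prot))
                        return NULL;
                return (void __iomem *)ghes_nmi_vaddr;
        }
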
  10. 20 Jun 2017, 1 commit
  11. 15 Jun 2017, 2 commits
  12. 13 Jun 2017, 1 commit
  13. 12 Jun 2017, 1 commit
  14. 09 Jun 2017, 1 commit
  15. 07 Jun 2017, 1 commit
    • arm64: ftrace: add support for far branches to dynamic ftrace · e71a4e1b
      Authored by Ard Biesheuvel
      Currently, dynamic ftrace support in the arm64 kernel assumes that all
      core kernel code is within range of ordinary branch instructions that
      occur in module code, which is usually the case, but is no longer
      guaranteed now that we have support for module PLTs and address space
      randomization.
      
      Since on arm64, all patching of branch instructions involves function
      calls to the same entry point [ftrace_caller()], we can emit each module
      with a trampoline that has unlimited range, and patch both the trampoline
      itself and the branch instruction to redirect the call via the trampoline.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: minor clarification to smp_wmb() comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e71a4e1b
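
      The range test this implies is straightforward; a sketch under the
      assumption that the reach of an AArch64 bl is ±128 MiB (the helper
      name is made up):

        #include <linux/sizes.h>

        /* Can a plain bl at 'pc' reach 'addr', or must the call be
         * redirected through the module's ftrace trampoline? */
        static bool bl_in_range(unsigned long pc, unsigned long addr)
        {
                long offset = (long)addr - (long)pc;

                return offset >= -(long)SZ_128M && offset < (long)SZ_128M;
        }
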
  16. 03 Jun 2017, 1 commit
  17. 27 Apr 2017, 2 commits
  18. 23 Apr 2017, 1 commit
    • Revert "x86/mm/gup: Switch GUP to the generic get_user_page_fast() implementation" · 6dd29b3d
      Authored by Ingo Molnar
      This reverts commit 2947ba05.
      
      Dan Williams reported dax-pmem kernel warnings with the following signature:
      
         WARNING: CPU: 8 PID: 245 at lib/percpu-refcount.c:155 percpu_ref_switch_to_atomic_rcu+0x1f5/0x200
         percpu ref (dax_pmem_percpu_release [dax_pmem]) <= 0 (0) after switching to atomic
      
      ... and bisected it to this commit, which suggests possible memory corruption
      caused by the x86 fast-GUP conversion.
      
      He also pointed out:
      
       "
        This is similar to the backtrace when we were not properly handling
        pud faults and was fixed with this commit: 220ced16 "mm: fix
        get_user_pages() vs device-dax pud mappings"
      
        I've found some missing _devmap checks in the generic
        get_user_pages_fast() path, but this does not fix the regression
        [...]
       "
      
      So given that there are known bugs, and a pretty robust-looking bisection
      points to this commit, suggesting that there are unknown bugs in the
      conversion as well, revert it for the time being - we'll re-try in v4.13.
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dann.frazier@canonical.com
      Cc: dave.hansen@intel.com
      Cc: steve.capper@linaro.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6dd29b3d
  19. 19 Apr 2017, 1 commit
  20. 06 Apr 2017, 1 commit
    • arm64: kdump: provide /proc/vmcore file · e62aaeac
      Authored by AKASHI Takahiro
      Arch-specific functions are added to allow for implementing a crash dump
      file interface, /proc/vmcore, which can be viewed as an ELF file.
      
      A user space tool, like kexec-tools, is responsible for allocating
      a separate region for the core's ELF header within the crash kdump
      kernel memory and filling it in when executing kexec_load().
      
      Its location is then advertised to the crash dump kernel via a new
      device-tree property, "linux,elfcorehdr", and the crash dump kernel
      preserves the region for later use with reserve_elfcorehdr() at boot
      time.
      
      In the crash dump kernel, /proc/vmcore accesses the primary kernel's
      memory with copy_oldmem_page(), which feeds the data page-by-page by
      ioremap'ing it, since it does not reside in the linear mapping of the
      crash dump kernel.
      
      Meanwhile, elfcorehdr_read() is simple as the region is always mapped.
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e62aaeac
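
      A sketch of the page-by-page access pattern described for
      copy_oldmem_page(), modelled loosely on arch implementations rather
      than quoted from this patch:

        ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                                 unsigned long offset, int userbuf)
        {
                void *vaddr;

                if (!csize)
                        return 0;

                /* the old kernel's pages are outside our linear map */
                vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
                if (!vaddr)
                        return -ENOMEM;

                if (userbuf) {
                        if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
                                memunmap(vaddr);
                                return -EFAULT;
                        }
                } else {
                        memcpy(buf, vaddr + offset, csize);
                }

                memunmap(vaddr);
                return csize;
        }
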
  21. 29 Mar 2017, 1 commit
  22. 18 Mar 2017, 1 commit
  23. 11 Mar 2017, 1 commit
  24. 07 Mar 2017, 1 commit
  25. 22 Feb 2017, 1 commit
    • arch: add ARCH_HAS_SET_MEMORY config · d2852a22
      Authored by Daniel Borkmann
      Currently, there's no good way to test for the presence of
      set_memory_ro/rw/x/nx() helpers implemented by archs such as
      x86, arm, arm64 and s390.
      
      There are DEBUG_SET_MODULE_RONX and DEBUG_RODATA, but neither
      really reflects that: the set_memory_*() functions are available
      even when DEBUG_SET_MODULE_RONX is turned off, and parisc sets
      DEBUG_RODATA but doesn't implement the above functions. Thus,
      add ARCH_HAS_SET_MEMORY, selected by the archs mentioned above,
      so that generic code can test against it.
      
      This also allows later on to move DEBUG_SET_MODULE_RONX out of
      the arch specific Kconfig to define it only once depending on
      ARCH_HAS_SET_MEMORY.
      Suggested-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d2852a22
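
      What "generic code can test against it" looks like in practice; a
      hedged example, not taken from any specific caller:

        static void protect_region(unsigned long addr, int npages)
        {
        #ifdef CONFIG_ARCH_HAS_SET_MEMORY
                /* only built where the arch actually provides the helpers,
                 * instead of inferring their presence from DEBUG_SET_MODULE_RONX */
                set_memory_ro(addr, npages);
        #endif
        }
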
  26. 10 Feb 2017, 1 commit
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Authored by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      38fd94b0
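
      The safe sequence above, written as a hedged C helper using arm64's
      write_sysreg()/isb() primitives; the reserved-ASID value and the ASID
      field position (TTBR bits [63:48]) are assumptions for illustration:

        #define RESERVED_ASID 0ULL /* an ASID the allocator never hands out */

        static inline void falkor_write_ttbr0(u64 old_baddr, u64 new_baddr, u64 asid)
        {
                /* 1. park on the reserved ASID, old BADDR still in place */
                write_sysreg(old_baddr | (RESERVED_ASID << 48), ttbr0_el1);
                isb();
                /* 2. install the new BADDR while still on the reserved ASID */
                write_sysreg(new_baddr | (RESERVED_ASID << 48), ttbr0_el1);
                isb();
                /* 3. switch to the new ASID; any entries mis-tagged with the
                 *    reserved ASID are harmless because it is never used */
                write_sysreg(new_baddr | (asid << 48), ttbr0_el1);
                isb();
        }
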
  27. 08 Feb 2017, 1 commit
  28. 07 Feb 2017, 1 commit
  29. 06 Feb 2017, 1 commit
  30. 01 Feb 2017, 1 commit
    • arm64: Work around Falkor erratum 1009 · d9ff80f8
      Authored by Christopher Covington
      During a TLB invalidate sequence targeting the inner shareable domain,
      Falkor may prematurely complete the DSB before all loads and stores using
      the old translation are observed. Instruction fetches are not subject to
      the conditions of this erratum. If the original code sequence includes
      multiple TLB invalidate instructions followed by a single DSB, only one of
      the TLB instructions needs to be repeated to work around this erratum.
      While the erratum only applies to cases in which the TLBI specifies the
      inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or
      stronger (OSH, SYS), this change applies the workaround more broadly,
      to local TLBI and DSB NSH sequences as well, for simplicity.
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d9ff80f8
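
      The shape of the workaround as a hedged inline-asm sketch (the actual
      kernel version patches the extra invalidate in via the alternatives
      framework):

        static inline void flush_tlb_all_falkor_e1009(void)
        {
                asm volatile(
                "       tlbi    vmalle1is\n"
                "       tlbi    vmalle1is\n" /* repeat one TLBI for erratum 1009 */
                "       dsb     ish\n"
                "       isb\n"
                ::: "memory");
        }
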
  31. 26 Jan 2017, 1 commit
  32. 12 Jan 2017, 1 commit
  33. 02 Dec 2016, 1 commit
  34. 22 Nov 2016, 1 commit
    • arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN · ba42822a
      Authored by Catalin Marinas
      This patch adds the Kconfig option to enable support for TTBR0 PAN
      emulation. The option is off by default because of a slight performance hit
      when enabled, caused by the additional TTBR0_EL1 switching during user
      access operations or exception entry/exit code.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ba42822a
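
      For context, a hedged sketch of what the emulated switching amounts to
      on kernel entry; empty_zero_page is a real symbol, but the helper name
      and exact mechanics here are illustrative:

        static inline void uaccess_ttbr0_disable(void)
        {
                /* point TTBR0_EL1 at the zero page, a table with no valid
                 * entries, so stray user accesses fault until a uaccess
                 * routine or exception return restores the real table */
                write_sysreg(__pa_symbol(empty_zero_page), ttbr0_el1);
                isb();
        }
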
  35. 12 Nov 2016, 1 commit
    • arm64: split thread_info from task stack · c02433dd
      Authored by Mark Rutland
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c02433dd
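
      A sketch of the resulting current/current_thread_info() scheme; this
      closely mirrors arm64's asm/current.h, but treat it as illustrative:

        struct task_struct;

        static __always_inline struct task_struct *get_current(void)
        {
                unsigned long sp_el0;

                /* userspace clobbers sp_el0, so the entry code reloads it
                 * from a per-cpu variable before we can ever get here */
                asm ("mrs %0, sp_el0" : "=r" (sp_el0));
                return (struct task_struct *)sp_el0;
        }

        #define current get_current()
        /* valid because offsetof(struct task_struct, thread_info) == 0 */
        #define current_thread_info() ((struct thread_info *)current)
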
  36. 08 Nov 2016, 1 commit
    • arm64: Add uprobe support · 9842ceae
      Authored by Pratyush Anand
      This patch adds support for uprobe on ARM64 architecture.
      
      Unit tests for the following have been run so far and found to be
      working:
          1. Step-able instructions, like sub, ldr, add, etc.
          2. Simulation-able instructions, like ret, cbnz, cbz, etc.
          3. uretprobe
          4. Reject-able instructions, like sev, wfe, etc.
          5. Trapped and aborted XOL path
          6. Probe at an unaligned user address
          7. Longjump test cases
      
      Currently it does not support aarch32 instruction probing.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9842ceae