1. 04 Aug 2017, 1 commit
  2. 26 Jul 2017, 1 commit
  3. 20 Jul 2017, 3 commits
  4. 13 Jul 2017, 2 commits
    • arm64: ascii armor the arm64 boot init stack canary · d21f5498
      Committed by Rik van Riel
      Use the ascii-armor canary to prevent unterminated C string overflows
      from being able to successfully overwrite the canary, even if they
      somehow obtain the canary value.
      
      Inspired by execshield ascii-armor and Daniel Micay's linux-hardened
      tree.
      
      Link: http://lkml.kernel.org/r/20170524155751.424-5-riel@redhat.com
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d21f5498
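
      A minimal user-space sketch of the ascii-armor idea (the helper name and mask
      value are illustrative, not taken from the kernel tree): forcing one byte of
      the canary to NUL means an overflow driven by C string functions can never
      reproduce the canary, because the attacker's string cannot contain a NUL byte
      mid-copy, even if the canary value itself has leaked.

      #include <stdint.h>
      #include <stdio.h>

      /*
       * Sketch only: zero the low byte of an otherwise random canary.
       * A string-based overflow cannot place a NUL byte in the middle of
       * the copied data, so it can never recreate this value.
       */
      static uint64_t ascii_armor_canary(uint64_t random_bits)
      {
              return random_bits & 0xffffffffffffff00ULL;
      }

      int main(void)
      {
              /* stand-in for a boot-time random value */
              uint64_t canary = ascii_armor_canary(0x1122334455667788ULL);

              printf("armored canary: 0x%016llx (low byte always 00)\n",
                     (unsigned long long)canary);
              return 0;
      }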
    • include/linux/string.h: add the option of fortified string.h functions · 6974f0c4
      Committed by Daniel Micay
      This adds support for compiling with a rough equivalent to the glibc
      _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
      overflow checks for string.h functions when the compiler determines the
      size of the source or destination buffer at compile-time.  Unlike glibc,
      it covers buffer reads in addition to writes.
      
      GNU C __builtin_*_chk intrinsics are avoided because they would force a
      much more complex implementation.  They aren't designed to detect read
      overflows and offer no real benefit when using an implementation based
      on inline checks.  Inline checks don't add up to much code size and
      allow full use of the regular string intrinsics while avoiding the need
      for a bunch of _chk functions and per-arch assembly to avoid wrapper
      overhead.
      
      This detects various overflows at compile-time in various drivers and
      some non-x86 core kernel code.  There will likely be issues caught in
      regular use at runtime too.
      
      Future improvements left out of initial implementation for simplicity,
      as it's all quite optional and can be done incrementally:
      
      * Some of the fortified string functions (strncpy, strcat) don't yet
        place a limit on reads from the source based on __builtin_object_size of
        the source buffer.
      
      * Extending coverage to more string functions like strlcat.
      
      * It should be possible to optionally use __builtin_object_size(x, 1) for
        some functions (C strings) to detect intra-object overflows (like
        glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
        approach to avoid likely compatibility issues.
      
      * The compile-time checks should be made available via a separate config
        option which can be enabled by default (or always enabled) once enough
        time has passed to get the issues it catches fixed.
      
      Kees said:
       "This is great to have. While it was out-of-tree code, it would have
        blocked at least CVE-2016-3858 from being exploitable (improper size
        argument to strlcpy()). I've sent a number of fixes for
        out-of-bounds-reads that this detected upstream already"
      
      [arnd@arndb.de: x86: fix fortified memcpy]
        Link: http://lkml.kernel.org/r/20170627150047.660360-1-arnd@arndb.de
      [keescook@chromium.org: avoid panic() in favor of BUG()]
        Link: http://lkml.kernel.org/r/20170626235122.GA25261@beast
      [keescook@chromium.org: move from -mm, add ARCH_HAS_FORTIFY_SOURCE, tweak Kconfig help]
      Link: http://lkml.kernel.org/r/20170526095404.20439-1-danielmicay@gmail.com
      Link: http://lkml.kernel.org/r/1497903987-21002-8-git-send-email-keescook@chromium.org
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6974f0c4
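
      A rough user-space rendition of the inline-check approach (the helper names
      are made up; the kernel's fortified string.h is considerably more elaborate):
      __builtin_object_size() supplies the buffer size the compiler can see, and a
      copy length that exceeds either the destination or the source size is caught
      at runtime in this sketch, where the kernel would BUG(). Sizes the compiler
      cannot determine come back as (size_t)-1 and the check degrades gracefully.

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Runtime part of the check; unknown sizes arrive as (size_t)-1. */
      static inline void *fortify_memcpy(void *dst, const void *src, size_t len,
                                         size_t dst_size, size_t src_size)
      {
              if (len > dst_size || len > src_size) {
                      fprintf(stderr, "fortify: %zu-byte memcpy overflows a buffer\n", len);
                      abort();        /* the kernel would BUG() here */
              }
              return memcpy(dst, src, len);
      }

      /* Wrapper that lets the compiler report the object sizes it knows. */
      #define safe_memcpy(dst, src, len)                              \
              fortify_memcpy((dst), (src), (len),                     \
                             __builtin_object_size(dst, 0),           \
                             __builtin_object_size(src, 0))

      int main(void)
      {
              char small[8];
              const char big[16] = "0123456789abcde";

              safe_memcpy(small, big, sizeof(small)); /* fits: copies 8 bytes   */
              safe_memcpy(small, big, sizeof(big));   /* overflow: aborts here  */
              return 0;
      }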
  5. 11 Jul 2017, 1 commit
  6. 10 Jul 2017, 1 commit
  7. 07 Jul 2017, 1 commit
  8. 04 Jul 2017, 1 commit
  9. 29 Jun 2017, 1 commit
  10. 23 Jun 2017, 3 commits
  11. 22 Jun 2017, 1 commit
    • arm64: ptrace: Flush user-RW TLS reg to thread_struct before reading · 936eb65c
      Committed by Dave Martin
      When reading current's user-writable TLS register (which occurs
      when dumping core for native tasks), it is possible that userspace
      has modified it since the time the task was last scheduled out.
      The new TLS register value is not guaranteed to have been written
      immediately back to thread_struct in this case.
      
      As a result, a coredump can capture stale data for this register.
      Reading the register for a stopped task via ptrace is unaffected.
      
      For native tasks, this patch explicitly flushes the TPIDR_EL0
      register back to thread_struct before dumping when operating on
      current, thus ensuring that coredump contents are up to date.  For
      compat tasks, the TLS register is not user-writable and so cannot
      be out of sync, so no flush is required in compat_tls_get().
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      936eb65c
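
      The register in question is writable from EL0, which a small arm64 user-space
      demo can illustrate (assumption: an arm64 Linux machine; this is a sketch of
      why the flush is needed, not the kernel-side change itself): userspace can
      update TPIDR_EL0 at any time, so a value cached in thread_struct at the last
      context switch may be stale until it is re-read from the hardware.

      #include <stdint.h>
      #include <stdio.h>

      static inline uint64_t read_tpidr_el0(void)
      {
              uint64_t v;

              asm volatile("mrs %0, tpidr_el0" : "=r" (v));
              return v;
      }

      static inline void write_tpidr_el0(uint64_t v)
      {
              asm volatile("msr tpidr_el0, %0" :: "r" (v));
      }

      int main(void)
      {
              uint64_t old = read_tpidr_el0();
              uint64_t probe;

              write_tpidr_el0(old ^ 0x1);     /* userspace changes its TLS register... */
              probe = read_tpidr_el0();
              write_tpidr_el0(old);           /* ...restore before libc touches TLS again */

              printf("TPIDR_EL0 changed from %#llx to %#llx behind the kernel's back\n",
                     (unsigned long long)old, (unsigned long long)probe);
              return 0;
      }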
  12. 20 Jun 2017, 1 commit
  13. 15 Jun 2017, 13 commits
  14. 12 Jun 2017, 2 commits
  15. 07 Jun 2017, 2 commits
    • arm64: ftrace: add support for far branches to dynamic ftrace · e71a4e1b
      Committed by Ard Biesheuvel
      Currently, dynamic ftrace support in the arm64 kernel assumes that all
      core kernel code is within range of ordinary branch instructions that
      occur in module code, which is usually the case, but is no longer
      guaranteed now that we have support for module PLTs and address space
      randomization.
      
      Since on arm64, all patching of branch instructions involves function
      calls to the same entry point [ftrace_caller()], we can emit the modules
      with a trampoline that has unlimited range, and patch both the trampoline
      itself and the branch instruction to redirect the call via the trampoline.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: minor clarification to smp_wmb() comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e71a4e1b
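
      An illustrative sketch of the range decision (names are hypothetical, not the
      kernel's): an arm64 BL instruction encodes a signed byte offset of roughly
      +/-128MB, so if ftrace_caller() is out of range from the patched call site in
      a module, the BL is pointed at a module-local trampoline that can reach it.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define SZ_128M (128ULL * 1024 * 1024)

      /* Can a BL at 'pc' reach 'target' directly? */
      static bool branch_in_range(uint64_t pc, uint64_t target)
      {
              int64_t offset = (int64_t)(target - pc);

              return offset >= -(int64_t)SZ_128M && offset < (int64_t)SZ_128M;
      }

      /* Pick the direct target or fall back to the per-module trampoline. */
      static uint64_t pick_branch_target(uint64_t pc, uint64_t ftrace_entry,
                                         uint64_t module_trampoline)
      {
              return branch_in_range(pc, ftrace_entry) ? ftrace_entry
                                                       : module_trampoline;
      }

      int main(void)
      {
              uint64_t pc    = 0xffff000008081000ULL;   /* hypothetical call site        */
              uint64_t near  = pc + 0x10000;            /* reachable with a plain BL     */
              uint64_t far   = pc + (1ULL << 32);       /* needs the trampoline          */
              uint64_t tramp = pc + 0x2000;             /* module-local trampoline       */

              printf("near target -> %#llx\n",
                     (unsigned long long)pick_branch_target(pc, near, tramp));
              printf("far  target -> %#llx\n",
                     (unsigned long long)pick_branch_target(pc, far, tramp));
              return 0;
      }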
    • arm64: KVM: Preserve RES1 bits in SCTLR_EL2 · d68c1f7f
      Committed by Marc Zyngier
      __do_hyp_init has the rather bad habit of ignoring RES1 bits and
      writing them back as zero. On a v8.0-8.2 CPU, this doesn't do anything
      bad, but may end up being pretty nasty on future revisions of the
      architecture.
      
      Let's preserve those bits so that we don't have to fix this later on.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@linaro.org>
      d68c1f7f
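
      A sketch of the principle (the mask below is hypothetical, not copied from the
      architecture manual or the patch): bits that are RES1 must be written back as
      1, so the init value should OR them in rather than silently zeroing them.

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical RES1 mask, for illustration only. */
      #define SCTLR_EL2_RES1_SKETCH   ((1u << 4) | (1u << 5) | (1u << 28) | (1u << 29))

      /* Preserve RES1 bits so future architecture revisions keep working. */
      static uint32_t sctlr_el2_init_value(uint32_t wanted_bits)
      {
              return wanted_bits | SCTLR_EL2_RES1_SKETCH;
      }

      int main(void)
      {
              uint32_t val = sctlr_el2_init_value(0);   /* e.g. MMU and caches off */

              printf("SCTLR_EL2 init value: %#x\n", val);
              return 0;
      }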
  16. 04 Jun 2017, 3 commits
  17. 02 Jun 2017, 1 commit
    • ARM64/ACPI: Fix BAD_MADT_GICC_ENTRY() macro implementation · cb7cf772
      Committed by Lorenzo Pieralisi
      The BAD_MADT_GICC_ENTRY() macro checks whether a GICC MADT entry passes
      muster from an ACPI specification standpoint. The current macro derives
      the expected MADT GICC entry length from the ACPI firmware version (it
      changed from 76 to 80 bytes in the transition from ACPI 5.1 to ACPI 6.0),
      but it erroneously always uses the length of the latest ACPICA struct
      (ie struct acpi_madt_generic_interrupt, which is 80 bytes long) to check
      whether the current GICC entry record exceeds the MADT table end in
      memory as defined by the MADT table header itself. This may result in
      false positives depending on the ACPI firmware version and how the MADT
      entries are laid out in memory: on ACPI 5.1 firmware, MADT GICC entries
      are 76 bytes long, so adding 80 to a GICC entry start address may yield
      an address past the actual MADT end, wrongly flagging a valid entry as
      bad.
      
      Fix the BAD_MADT_GICC_ENTRY() macro by reshuffling the condition checks
      and update them to always use the firmware version specific MADT GICC
      entry length in order to carry out boundary checks.
      
      Fixes: b6cfb277 ("ACPI / ARM64: add BAD_MADT_GICC_ENTRY() macro")
      Reported-by: Julien Grall <julien.grall@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Julien Grall <julien.grall@arm.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Al Stone <ahs3@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cb7cf772
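
      A simplified sketch of the fixed check (struct and constant names are not the
      ACPICA ones): validate the entry against the length its ACPI revision
      mandates, and use that same length for the table-end boundary check instead
      of sizeof() of the newest ACPICA structure.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define ACPI51_GICC_LEN 76U     /* ACPI 5.1 */
      #define ACPI60_GICC_LEN 80U     /* ACPI 6.0+ */

      struct madt_entry_hdr {
              uint8_t type;
              uint8_t length;
      };

      static bool bad_madt_gicc_entry(const struct madt_entry_hdr *gicc,
                                      uintptr_t table_end,
                                      unsigned int fadt_major, unsigned int fadt_minor)
      {
              unsigned int expected = (fadt_major == 5 && fadt_minor == 1)
                                              ? ACPI51_GICC_LEN : ACPI60_GICC_LEN;

              if (!gicc)
                      return true;
              if (gicc->length != expected)
                      return true;                            /* malformed entry    */
              return (uintptr_t)gicc + expected > table_end;  /* spills past the MADT */
      }

      int main(void)
      {
              /* Fake a 76-byte ACPI 5.1 entry that ends exactly at the table end;
               * a sizeof()-based check (80 bytes) would have wrongly flagged it. */
              static uint8_t buf[76] = { 0x0b, 76 };
              const struct madt_entry_hdr *gicc = (const void *)buf;

              printf("entry bad? %s\n",
                     bad_madt_gicc_entry(gicc, (uintptr_t)buf + sizeof(buf), 5, 1)
                             ? "yes" : "no");
              return 0;
      }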
  18. 30 May 2017, 2 commits
    • arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage · 5f16a046
      Committed by Will Deacon
      FUTEX_OP_OPARG_SHIFT instructs the futex code to treat the 12-bit oparg
      field as a shift value, potentially leading to a left shift value that
      is negative or with an absolute value that is significantly larger than
      the size of the type. UBSAN chokes with:
      
      ================================================================================
      UBSAN: Undefined behaviour in ./arch/arm64/include/asm/futex.h:60:13
      shift exponent -1 is negative
      CPU: 1 PID: 1449 Comm: syz-executor0 Not tainted 4.11.0-rc4-00005-g977eb52-dirty #11
      Hardware name: linux,dummy-virt (DT)
      Call trace:
      [<ffff200008094778>] dump_backtrace+0x0/0x538 arch/arm64/kernel/traps.c:73
      [<ffff200008094cd0>] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
      [<ffff200008c194a8>] __dump_stack lib/dump_stack.c:16 [inline]
      [<ffff200008c194a8>] dump_stack+0x120/0x188 lib/dump_stack.c:52
      [<ffff200008cc24b8>] ubsan_epilogue+0x18/0x98 lib/ubsan.c:164
      [<ffff200008cc3098>] __ubsan_handle_shift_out_of_bounds+0x250/0x294 lib/ubsan.c:421
      [<ffff20000832002c>] futex_atomic_op_inuser arch/arm64/include/asm/futex.h:60 [inline]
      [<ffff20000832002c>] futex_wake_op kernel/futex.c:1489 [inline]
      [<ffff20000832002c>] do_futex+0x137c/0x1740 kernel/futex.c:3231
      [<ffff200008320504>] SYSC_futex kernel/futex.c:3281 [inline]
      [<ffff200008320504>] SyS_futex+0x114/0x268 kernel/futex.c:3249
      [<ffff200008084770>] el0_svc_naked+0x24/0x28
      ================================================================================
      syz-executor1 uses obsolete (PF_INET,SOCK_PACKET)
      sock: process `syz-executor0' is using obsolete setsockopt SO_BSDCOMPAT
      
      This patch attempts to fix some of this by:
      
        * Making encoded_op an unsigned type, so we can shift it left even if
          the top bit is set.
      
        * Casting to signed prior to shifting right when extracting oparg
          and cmparg.

        * Considering only the bottom 5 bits of oparg when using it as a
          left-shift value.
      
      Whilst I think this catches all of the issues, I'd much prefer to remove
      this stuff, as I think it's unused and the bugs are copy-pasted between
      a bunch of architectures.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5f16a046
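
      A user-space rendition of the decoding after the fix (a sketch, not the kernel
      function itself): encoded_op stays unsigned so a left shift with the top bit
      set is defined, the sign-extending right shifts for oparg/cmparg go through
      explicit signed casts, and the shift count is clamped to its bottom 5 bits.

      #include <stdint.h>
      #include <stdio.h>

      #define FUTEX_OP_OPARG_SHIFT_FLAG 8u    /* uapi FUTEX_OP_OPARG_SHIFT value */

      static void decode_futex_op(uint32_t encoded_op)
      {
              int op     = (encoded_op & 0x70000000) >> 28;
              int cmp    = (encoded_op & 0x0f000000) >> 24;
              int oparg  = (int32_t)(encoded_op << 8) >> 20;   /* sign-extend */
              int cmparg = (int32_t)(encoded_op << 20) >> 20;  /* sign-extend */

              if (encoded_op & (FUTEX_OP_OPARG_SHIFT_FLAG << 28))
                      oparg = 1U << (oparg & 0x1f);   /* only the low 5 bits shift */

              printf("op=%d cmp=%d oparg=%d cmparg=%d\n", op, cmp, oparg, cmparg);
      }

      int main(void)
      {
              /* The kind of input UBSAN flagged: oparg sign-extends to -1, and the
               * mask turns that into a well-defined shift by 31 instead of UB. */
              decode_futex_op(0x8fffffffu);
              return 0;
      }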
    • arm64: Add dump_backtrace() in show_regs · 1149aad1
      Committed by Kefeng Wang
      Generic code expects show_regs() to dump the stack, but arm64's
      show_regs() does not. This makes it hard to debug softlockups and
      other issues that result in show_regs() being called.
      
      This patch updates arm64's show_regs() to dump the stack, as common
      code expects.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      [will: folded in bug_handler fix from mrutland]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1149aad1
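
      A structural sketch with user-space stand-ins for the kernel helpers (names
      are illustrative): after the change, show_regs() prints the register state
      and then walks the stack, so callers such as the softlockup detector get a
      backtrace without extra work.

      #include <stdio.h>

      static void show_regs_content(void)      /* stands in for __show_regs()     */
      {
              printf("pc/lr/sp/x0..x29: ...\n");
      }

      static void dump_backtrace_sketch(void)  /* stands in for dump_backtrace()  */
      {
              printf("Call trace:\n  func_c+0x10\n  func_b+0x24\n  func_a+0x1c\n");
      }

      static void show_regs_sketch(void)
      {
              show_regs_content();
              dump_backtrace_sketch();         /* the new part: dump the stack too */
      }

      int main(void)
      {
              show_regs_sketch();
              return 0;
      }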