1. 04 Aug 2022, 8 commits
  2. 07 Oct 2020, 2 commits
  3. 09 Sep 2020, 1 commit
  4. 12 Jun 2020, 1 commit
  5. 27 Mar 2020, 1 commit
  6. 25 Nov 2019, 1 commit
  7. 21 May 2019, 1 commit
  8. 09 Apr 2019, 1 commit
    • treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively · d75f773c
      Authored by Sakari Ailus
      %pF and %pf are functionally equivalent to %pS and %ps conversion
      specifiers. The former are deprecated, therefore switch the current users
      to use the preferred variant.
      
      The changes have been produced by the following command:
      
      	git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
      	while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done
      
      And verifying the result.
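      
      For illustration (this example is mine, not part of the patch): the
      surviving specifiers differ only in whether the offset and size are
      printed alongside the symbol name.
      
      	/* sketch: %ps prints the bare symbol, %pS adds offset/size;
      	 * exact offset/size values depend on the build */
      	pr_info("%ps\n", (void *)schedule);	/* "schedule" */
      	pr_info("%pS\n", (void *)schedule);	/* "schedule+0x0/0x90" */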
      
      Link: http://lkml.kernel.org/r/20190325193229.23390-1-sakari.ailus@linux.intel.com
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: sparclinux@vger.kernel.org
      Cc: linux-um@lists.infradead.org
      Cc: xen-devel@lists.xenproject.org
      Cc: linux-acpi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: drbd-dev@lists.linbit.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-mmc@vger.kernel.org
      Cc: linux-nvdimm@lists.01.org
      Cc: linux-pci@vger.kernel.org
      Cc: linux-scsi@vger.kernel.org
      Cc: linux-btrfs@vger.kernel.org
      Cc: linux-f2fs-devel@lists.sourceforge.net
      Cc: linux-mm@kvack.org
      Cc: ceph-devel@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
      Acked-by: David Sterba <dsterba@suse.com> (for btrfs)
      Acked-by: Mike Rapoport <rppt@linux.ibm.com> (for mm/memblock.c)
      Acked-by: Bjorn Helgaas <bhelgaas@google.com> (for drivers/pci)
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
  9. 05 Mar 2019, 1 commit
    • x86-64: add warning for non-canonical user access address dereferences · 00c42373
      Authored by Linus Torvalds
      This adds a warning (once) for any kernel dereference that has a user
      exception handler, but accesses a non-canonical address.  It basically
      is a simpler - and more limited - version of commit 9da3f2b7
      ("x86/fault: BUG() when uaccess helpers fault on kernel addresses") that
      got reverted.
      
      Note that unlike that original commit, this only causes a warning,
      because there are real situations where we currently can do this
      (notably speculative argument fetching for uprobes etc).  Also, unlike
      that original commit, this _only_ triggers for #GP accesses, so the
      cases of valid kernel pointers that cross into a non-mapped page aren't
      affected.
      
      The intent of this is two-fold:
      
       - the uprobe/tracing accesses really do need to be more careful. In
         particular, from a portability standpoint it's just wrong to think
         that "a pointer is a pointer", and use the same logic for any random
         pointer value you find on the stack. It may _work_ on x86-64, but it
         doesn't necessarily work on other architectures (where the same
         pointer value can be either a kernel pointer _or_ a user pointer, and
         you really need to be much more careful in how you try to access it)
      
         The warning can hopefully end up being a reminder that just any
         random pointer access won't do.
      
       - Kees in particular wanted a way to actually report invalid uses of
         wild pointers to user space accessors, instead of just silently
         failing them. Automated fuzzers want a way to get reports if the
         kernel ever uses invalid values that the fuzzer fed it.
      
         The non-canonical address range is a fair chunk of the address space,
         and with this you can teach syzkaller to feed in invalid pointer
         values and find cases where we do not properly validate user
         addresses (possibly due to bad uses of "set_fs()").
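      
      For reference (an illustrative sketch, not code from this commit):
      on x86-64 with 48-bit virtual addresses, a pointer is canonical when
      bits 63:47 are all copies of bit 47, which a sign-extension
      round-trip checks cheaply:
      
      	/* sketch: canonical iff sign-extending from bit 47 is a no-op */
      	static inline bool addr_is_canonical(u64 addr)
      	{
      		return ((s64)addr << 16 >> 16) == (s64)addr;
      	}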
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Jann Horn <jannh@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 26 Feb 2019, 1 commit
    • Revert "x86/fault: BUG() when uaccess helpers fault on kernel addresses" · 53a41cb7
      Authored by Linus Torvalds
      This reverts commit 9da3f2b7.
      
      It was well-intentioned, but wrong.  Overriding the exception tables for
      instructions for random reasons is just wrong, and that is what the new
      code did.
      
      It caused problems for tracing, and it caused problems for strncpy_from_user(),
      because the new checks made perfectly valid use cases break, rather than
      catch things that did bad things.
      
      Unchecked user space accesses are a problem, but that's not a reason to
      add invalid checks that then people have to work around with silly flags
      (in this case, that 'kernel_uaccess_faults_ok' flag, which is just an
      odd way to say "this commit was wrong" and was sprinkled into random
      places to hide the wrongness).
      
      The real fix to unchecked user space accesses is to get rid of the
      special "let's not check __get_user() and __put_user() at all" logic.
      Make __{get|put}_user() be just aliases to the regular {get|put}_user()
      functions, and make it impossible to access user space without having
      the proper checks in places.
      
      The raison d'être of the special double-underscore versions used to be
      that the range check was expensive, and if you did multiple user
      accesses, you'd do the range check up front (like the signal frame
      handling code, for example).  But SMAP (on x86) and PAN (on ARM) have
      made that optimization pointless, because the _real_ expense is the "set
      CPU flag to allow user space access".
      
      Do let's not break the valid cases to catch invalid cases that shouldn't
      even exist.
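      
      A sketch of the pattern the double-underscore helpers were invented
      for, and why it no longer pays (uses the modern two-argument
      access_ok(); 'uaddr', 'a', 'b', and 'err' are illustrative):
      
      	if (!access_ok(uaddr, 2 * sizeof(*uaddr)))
      		return -EFAULT;
      	err  = __get_user(a, uaddr);		/* range check skipped */
      	err |= __get_user(b, uaddr + 1);	/* ditto */
      	/* With SMAP/PAN, each access still pays the real cost (the
      	 * STAC/CLAC or PAN flag toggle), so skipping access_ok()
      	 * buys nothing, and the unchecked variants are pure risk. */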
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Tobin C. Harding <tobin@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Jann Horn <jannh@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 03 Oct 2018, 1 commit
  12. 03 Sep 2018, 3 commits
    • x86/fault: BUG() when uaccess helpers fault on kernel addresses · 9da3f2b7
      Authored by Jann Horn
      There have been multiple kernel vulnerabilities that permitted userspace to
      pass completely unchecked pointers through to userspace accessors:
      
       - the waitid() bug - commit 96ca579a ("waitid(): Add missing
         access_ok() checks")
       - the sg/bsg read/write APIs
       - the infiniband read/write APIs
      
      These don't happen all that often, but when they do happen, it is hard to
      test for them properly; and it is probably also hard to discover them with
      fuzzing. Even when an unmapped kernel address is supplied to such buggy
      code, it just returns -EFAULT instead of doing a proper BUG() or at least
      WARN().
      
      Try to make such misbehaving code a bit more visible by refusing to do a
      fixup in the pagefault handler code when a userspace accessor causes a #PF
      on a kernel address and the current context isn't whitelisted.
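      
      The shape of the added check, roughly (a sketch of the
      bogus_uaccess() logic; names approximate the actual patch):
      
      	if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
      		return false;	/* ordinary user fault: do the fixup */
      	if (current->kernel_uaccess_faults_ok)
      		return false;	/* whitelisted context, fix up quietly */
      	return true;		/* kernel address via a user accessor: BUG() */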
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: kernel-hardening@lists.openwall.com
      Cc: dvyukov@google.com
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/20180828201421.157735-7-jannh@google.com
    • x86/fault: Plumb error code and fault address through to fault handlers · 81fd9c18
      Authored by Jann Horn
      This is preparation for looking at trap number and fault address in the
      handlers for uaccess errors. No functional change.
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-kernel@vger.kernel.org
      Cc: dvyukov@google.com
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/20180828201421.157735-6-jannh@google.com
    • x86/extable: Introduce _ASM_EXTABLE_UA for uaccess fixups · 75045f77
      Authored by Jann Horn
      Currently, most fixups for attempting to access userspace memory are
      handled using _ASM_EXTABLE, which is also used for various other types of
      fixups (e.g. safe MSR access, IRET failures, and a bunch of other things).
      In order to make it possible to add special safety checks to uaccess fixups
      (in particular, checking whether the fault address is actually in
      userspace), introduce a new exception table handler ex_handler_uaccess()
      and wire it up to all the user access fixups (excluding ones that
      already use _ASM_EXTABLE_EX).
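      
      A condensed sketch of how a uaccess fixup is wired up (modeled on
      the __get_user_asm() pattern; simplified, not the literal macro;
      'uaddr' is an assumed unsigned int __user pointer):
      
      	int err = 0;
      	unsigned int val;
      
      	asm volatile("1:	movl %2, %1\n"
      		     "2:\n"
      		     ".section .fixup,\"ax\"\n"
      		     "3:	movl $-14, %0\n"	/* -EFAULT */
      		     "	jmp 2b\n"
      		     ".previous\n"
      		     /* a fault at 1 goes through ex_handler_uaccess() to 3 */
      		     _ASM_EXTABLE_UA(1b, 3b)
      		     : "+r" (err), "=r" (val)
      		     : "m" (*uaddr));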
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: kernel-hardening@lists.openwall.com
      Cc: dvyukov@google.com
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/20180828201421.157735-5-jannh@google.com
  13. 15 Jan 2018, 1 commit
  14. 06 Dec 2017, 1 commit
  15. 28 Nov 2017, 1 commit
  16. 28 Sep 2017, 1 commit
    • locking/refcounts, x86/asm: Use unique .text section for refcount exceptions · 564c9cc8
      Authored by Kees Cook
      Using .text.unlikely for refcount exceptions isn't safe because gcc may
      move entire functions into .text.unlikely (e.g. in6_dev_dev()), which
      would cause any uses of a protected refcount_t function to stay inline
      with the function, triggering the protection unconditionally:
      
              .section        .text.unlikely,"ax",@progbits
              .type   in6_dev_get, @function
      in6_dev_getx:
      .LFB4673:
              .loc 2 4128 0
              .cfi_startproc
      ...
              lock; incl 480(%rbx)
              js 111f
              .pushsection .text.unlikely
      111:    lea 480(%rbx), %rcx
      112:    .byte 0x0f, 0xff
      .popsection
      113:
      
      This creates a unique .text..refcount section and adds an additional
      test to the exception handler to WARN in the case of having none of OF,
      SF, nor ZF set so we can see things like this more easily in the future.
      
      The double dot for the section name keeps it out of the TEXT_MAIN macro
      namespace, to avoid collisions and so it can be put at the end with
      text.unlikely to keep the cold code together.
      
      See commit:
      
        cb87481e ("kbuild: linker script do not match C names unless LD_DEAD_CODE_DATA_ELIMINATION is configured")
      
      ... which matches C names: [a-zA-Z0-9_] but not ".".
      Reported-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Elena <elena.reshetova@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch <linux-arch@vger.kernel.org>
      Fixes: 7a46ec0e ("locking/refcounts, x86/asm: Implement fast refcount overflow protection")
      Link: http://lkml.kernel.org/r/1504382986-49301-2-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 25 Sep 2017, 1 commit
    • x86/fpu: Reinitialize FPU registers if restoring FPU state fails · d5c8028b
      Authored by Eric Biggers
      Userspace can change the FPU state of a task using the ptrace() or
      rt_sigreturn() system calls.  Because reserved bits in the FPU state can
      cause the XRSTOR instruction to fail, the kernel has to carefully
      validate that no reserved bits or other invalid values are being set.
      
      Unfortunately, there have been bugs in this validation code.  For
      example, we were not checking that the 'xcomp_bv' field in the
      xstate_header was 0.  As-is, such bugs are exploitable to read the FPU
      registers of other processes on the system.  To do so, an attacker can
      create a task, assign to it an invalid FPU state, then spin in a loop
      and monitor the values of the FPU registers.  Because the task's FPU
      registers are not being restored, sometimes the FPU registers will have
      the values from another process.
      
      This is likely to continue to be a problem in the future because the
      validation done by the CPU instructions like XRSTOR is not immediately
      visible to kernel developers.  Nor will invalid FPU states ever be
      encountered during ordinary use --- they will only be seen during
      fuzzing or exploits.  There can even be reserved bits outside the
      xstate_header which are easy to forget about.  For example, the MXCSR
      register contains reserved bits, which were not validated by the
      KVM_SET_XSAVE ioctl until commit a575813b ("KVM: x86: Fix load
      damaged SSEx MXCSR register").
      
      Therefore, mitigate this class of vulnerability by restoring the FPU
      registers from init_fpstate if restoring from the task's state fails.
      
      We actually used to do this, but it was (perhaps unwisely) removed by
      commit 9ccc27a5 ("x86/fpu: Remove error return values from
      copy_kernel_to_*regs() functions").  This new patch is also a bit
      different.  First, it only clears the registers, not also the bad
      in-memory state; this is simpler and makes it easier to make the
      mitigation cover all callers of __copy_kernel_to_fpregs().  Second, it
      does the register clearing in an exception handler so that no extra
      instructions are added to context switches.  In fact, we *remove*
      instructions, since previously we were always zeroing the register
      containing 'err' even if CONFIG_X86_DEBUG_FPU was disabled.
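      
      Mechanically, the idea looks roughly like this (a sketch of the
      ex_handler_fprestore() exception handler; details follow the actual
      patch, not this paraphrase):
      
      	/* XRSTOR and friends land here via the exception table:
      	 * warn once, then load the known-good init state instead of
      	 * leaving another task's values in the FPU registers. */
      	regs->ip = ex_fixup_addr(fixup);
      	WARN_ONCE(1, "Bad FPU state detected, reinitializing FPU registers.");
      	copy_kernel_to_fpregs(&init_fpstate);
      	return true;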
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Eric Biggers <ebiggers3@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Kevin Hao <haokexin@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Halcrow <mhalcrow@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
      Cc: kernel-hardening@lists.openwall.com
      Link: http://lkml.kernel.org/r/20170922174156.16780-4-ebiggers3@gmail.com
      Link: http://lkml.kernel.org/r/20170923130016.21448-27-mingo@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  18. 17 Aug 2017, 1 commit
    • locking/refcounts, x86/asm: Implement fast refcount overflow protection · 7a46ec0e
      Authored by Kees Cook
      This implements refcount_t overflow protection on x86 without a noticeable
      performance impact, though without the fuller checking of REFCOUNT_FULL.
      
      This is done by duplicating the existing atomic_t refcount implementation
      but with normally a single instruction added to detect if the refcount
      has gone negative (e.g. wrapped past INT_MAX or below zero). When detected,
      the handler saturates the refcount_t to INT_MIN / 2. With this overflow
      protection, the erroneous reference release that would follow a wrap back
      to zero is blocked from happening, avoiding the class of refcount-overflow
      use-after-free vulnerabilities entirely.
      
      Only the overflow case of refcounting can be perfectly protected, since
      it can be detected and stopped before the reference is freed and left to
      be abused by an attacker. There isn't a way to block early decrements,
      and while REFCOUNT_FULL stops increment-from-zero cases (which would
      be the state _after_ an early decrement and stops potential double-free
      conditions), this fast implementation does not, since it would require
      the more expensive cmpxchg loops. Since the overflow case is much more
      common (e.g. missing a "put" during an error path), this protection
      provides real-world protection. For example, the two public refcount
      overflow use-after-free exploits published in 2016 would have been
      rendered unexploitable:
      
        http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/
      
        http://cyseclabs.com/page?n=02012016
      
      This implementation does, however, notice an unchecked decrement to zero
      (i.e. caller used refcount_dec() instead of refcount_dec_and_test() and it
      resulted in a zero). Decrements under zero are noticed (since they will
      have resulted in a negative value), though this only indicates that a
      use-after-free may have already happened. Such notifications are likely
      avoidable by an attacker that has already exploited a use-after-free
      vulnerability, but it's better to have them reported than allow such
      conditions to remain universally silent.
      
      On first overflow detection, the refcount value is reset to INT_MIN / 2
      (which serves as a saturation value) and a report and stack trace are
      produced. When operations detect only negative value results (such as
      changing an already saturated value), saturation still happens but no
      notification is performed (since the value was already saturated).
      
      On the matter of races, since the entire range beyond INT_MAX but before
      0 is negative, every operation at INT_MIN / 2 will trap, leaving no
      overflow-only race condition.
      
      As for performance, this implementation adds a single "js" instruction
      to the regular execution flow of a copy of the standard atomic_t refcount
      operations. (The non-"and_test" refcount_dec() function, which is uncommon
      in regular refcount design patterns, has an additional "jz" instruction
      to detect reaching exactly zero.) Since this is a forward jump, it is by
      default the non-predicted path, which will be reinforced by dynamic branch
      prediction. The result is this protection having virtually no measurable
      change in performance over standard atomic_t operations. The error path,
      located in .text.unlikely, saves the refcount location and then uses UD0
      to fire a refcount exception handler, which resets the refcount, handles
      reporting, and returns to regular execution. This keeps the changes to
      .text size minimal, avoiding return jumps and open-coded calls to the
      error reporting routine.
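      
      In other words (my sketch, not the patch; the assembly example below
      shows the 'lea' that stashes the refcount's address in %rcx):
      
      	/* the UD0 exception handler, roughly: recover the refcount's
      	 * address from %rcx, saturate it, report, and resume */
      	*(int *)regs->cx = INT_MIN / 2;		/* saturate */
      	regs->ip = ex_fixup_addr(fixup);	/* skip past the UD0 */
      	refcount_error_report(regs);		/* name approximate */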
      
      Example assembly comparison:
      
      refcount_inc() before:
      
        .text:
        ffffffff81546149:       f0 ff 45 f4             lock incl -0xc(%rbp)
      
      refcount_inc() after:
      
        .text:
        ffffffff81546149:       f0 ff 45 f4             lock incl -0xc(%rbp)
        ffffffff8154614d:       0f 88 80 d5 17 00       js     ffffffff816c36d3
        ...
        .text.unlikely:
        ffffffff816c36d3:       48 8d 4d f4             lea    -0xc(%rbp),%rcx
        ffffffff816c36d7:       0f ff                   (bad)
      
      These are the cycle counts comparing a loop of refcount_inc() from 1
      to INT_MAX and back down to 0 (via refcount_dec_and_test()), between
      unprotected refcount_t (atomic_t), fully protected REFCOUNT_FULL
      (refcount_t-full), and this overflow-protected refcount (refcount_t-fast):
      
        2147483646 refcount_inc()s and 2147483647 refcount_dec_and_test()s:
      		    cycles		protections
        atomic_t           82249267387	none
        refcount_t-fast    82211446892	overflow, untested dec-to-zero
        refcount_t-full   144814735193	overflow, untested dec-to-zero, inc-from-zero
      
      This code is a modified version of the x86 PAX_REFCOUNT atomic_t
      overflow defense from the last public patch of PaX/grsecurity, based
      on my understanding of the code. Changes or omissions from the original
      code are mine and don't reflect the original grsecurity/PaX code. Thanks
      to PaX Team for various suggestions for improvement for repurposing this
      code to be a refcount-only protection.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Eric Biggers <ebiggers3@gmail.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Hans Liljestrand <ishkamiel@gmail.com>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Serge E. Hallyn <serge@hallyn.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arozansk@redhat.com
      Cc: axboe@kernel.dk
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-arch <linux-arch@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20170815161924.GA133115@beast
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 30 Jul 2017, 1 commit
    • x86/asm/32: Remove a bunch of '& 0xffff' from pt_regs segment reads · 99504819
      Authored by Andy Lutomirski
      Now that pt_regs properly defines segment fields as 16-bit on 32-bit
      CPUs, there's no need to mask off the high word.
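      
      Illustrative before/after (not the literal diff):
      
      	/* before: the field was wide, so readers masked by hand */
      	if ((regs->cs & 0xffff) == __KERNEL_CS)
      
      	/* after: pt_regs declares the segment as a 16-bit field, so
      	 * the truncation is implicit and the mask goes away */
      	if (regs->cs == __KERNEL_CS)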
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 13 Jun 2017, 1 commit
  21. 02 Mar 2017, 1 commit
  22. 25 Dec 2016, 1 commit
  23. 21 Nov 2016, 1 commit
    • x86/traps: Ignore high word of regs->cs in early_fixup_exception() · fc0e81b2
      Authored by Andy Lutomirski
      On the 80486 DX, it seems that some exceptions may leave garbage in
      the high bits of CS.  This causes sporadic failures in which
      early_fixup_exception() refuses to fix up an exception.
      
      As far as I can tell, this has been buggy for a long time, but the
      problem seems to have been exacerbated by commits:
      
        1e02ce4c ("x86: Store a per-cpu shadow copy of CR4")
        e1bfc11c ("x86/init: Fix cr4_init_shadow() on CR4-less machines")
      
      This appears to have broken for as long as we've had early
      exception handling.
      
      [ Note to stable maintainers: This patch is needed all the way back to 3.4,
        but it will only apply to 4.6 and up, as it depends on commit:
      
          0e861fbb ("x86/head: Move early exception panic code into early_fixup_exception()")
      
        If you want to backport to kernels before 4.6, please don't backport the
        prerequisites (there was a big chain of them that rewrote a lot of the
        early exception machinery); instead, ask me and I can send you a one-liner
        that will apply. ]
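      
      The substance of the fix, as a sketch (the patch compares only the
      low 16 bits of the saved CS):
      
      	/* a 486DX can leave garbage in the high word of the pushed
      	 * CS, so truncate before comparing */
      	if ((u16)regs->cs != __KERNEL_CS)
      		goto fail;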
      Reported-by: Matthew Whitehead <tedheadster@gmail.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Fixes: 4c5023a3 ("x86-32: Handle exception table entries during early boot")
      Link: http://lkml.kernel.org/r/cb32c69920e58a1a58e7b5cad975038a69c0ce7d.1479609510.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  24. 20 Sep 2016, 1 commit
  25. 15 Jul 2016, 1 commit
  26. 08 Jul 2016, 1 commit
    • x86/dumpstack: Add show_stack_regs() and use it · 81c2949f
      Authored by Borislav Petkov
      Add a helper to dump supplied pt_regs and use it in the MSR exception
      handling code to have precise stack traces pointing to the actual
      function causing the MSR access exception and not the stack frame of the
      exception handler itself.
      
      The new output looks like this:
      
       unchecked MSR access error: RDMSR from 0xdeadbeef at rIP: 0xffffffff8102ddb6 (early_init_intel+0x16/0x3a0)
        00000000756e6547 ffffffff81c03f68 ffffffff81dd0940 ffffffff81c03f10
        ffffffff81d42e65 0000000001000000 ffffffff81c03f58 ffffffff81d3e5a3
        0000800000000000 ffffffff81800080 ffffffffffffffff 0000000000000000
       Call Trace:
        [<ffffffff81d42e65>] early_cpu_init+0xe7/0x136
        [<ffffffff81d3e5a3>] setup_arch+0xa5/0x9df
        [<ffffffff81d38bb9>] start_kernel+0x9f/0x43a
        [<ffffffff81d38294>] x86_64_start_reservations+0x2f/0x31
        [<ffffffff81d383fe>] x86_64_start_kernel+0x168/0x176
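      
      Usage, roughly (a sketch based on the commit description; the real
      call site is the unchecked-MSR fixup handler):
      
      	pr_warn("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pF)\n",
      		(unsigned int)regs->cx, regs->ip, (void *)regs->ip);
      	show_stack_regs(regs);	/* dump the faulting context, not the
      				 * exception handler's own frame */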
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1467671487-10344-4-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  27. 29 Apr 2016, 1 commit
  28. 13 Apr 2016, 3 commits