1. 17 December 2017 (25 commits)
    • x86/entry/64: Create a per-CPU SYSCALL entry trampoline · 3386bc8a
      Committed by Andy Lutomirski
      Handling SYSCALL is tricky: the SYSCALL handler is entered with every
      single register (except FLAGS), including RSP, live.  It somehow needs
      to set RSP to point to a valid stack, which means it needs to save the
      user RSP somewhere and find its own stack pointer.  The canonical way
      to do this is with SWAPGS, which lets us access percpu data using the
      %gs prefix.
      
      With PAGE_TABLE_ISOLATION-like pagetable switching, this is
      problematic.  Without a scratch register, switching CR3 is impossible, so
      %gs-based percpu memory would need to be mapped in the user pagetables.
      Doing that without information leaks is difficult or impossible.
      
      Instead, use a different sneaky trick.  Map a copy of the first part
      of the SYSCALL asm at a different address for each CPU.  Now RIP
      varies depending on the CPU, so we can use RIP-relative memory access
      to access percpu memory.  By putting the relevant information (one
      scratch slot and the stack address) at a constant offset relative to
      RIP, we can make SYSCALL work without relying on %gs.
      
      A nice thing about this approach is that we can easily switch it on
      and off if we want pagetable switching to be configurable.
      
      The compat variant of SYSCALL doesn't have this problem in the first
      place -- there are plenty of scratch registers, since we don't care
      about preserving r8-r15.  This patch therefore doesn't touch SYSCALL32
      at all.
      
      This patch actually seems to be a small speedup.  With this patch,
      SYSCALL touches an extra cache line and an extra virtual page, but
      the pipeline no longer stalls waiting for SWAPGS.  It seems that, at
      least in a tight loop, the latter outweighs the former.
      
      Thanks to David Laight for an optimization tip.
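      
      To illustrate the trick, a minimal sketch (not the actual patch; the
      index helper name is hypothetical): one physical page of trampoline
      text is mapped at a per-CPU virtual address, so RIP-relative
      addressing from inside it reaches per-CPU data without SWAPGS:
      
        /* Map the shared trampoline page into this CPU's entry area. */
        static void __init setup_syscall_trampoline(int cpu)
        {
                extern char _entry_trampoline[];  /* page-aligned stub */
                unsigned long pa = __pa_symbol(_entry_trampoline);
        
                /* get_cpu_trampoline_index() is a hypothetical helper. */
                __set_fixmap(get_cpu_trampoline_index(cpu), pa, PAGE_KERNEL_RX);
        }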
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bpetkov@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.403607157@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Return to userspace from the trampoline stack · 3e3b9293
      Committed by Andy Lutomirski
      By itself, this is useless.  It gives us the ability to run some final code
      before exit that cannot run on the kernel stack.  This could include a CR3
      switch a la PAGE_TABLE_ISOLATION or some kernel stack erasing, for
      example.  (Or even weird things like *changing* which kernel stack gets
      used as an ASLR-strengthening mechanism.)
      
      The SYSRET32 path is not covered yet.  It could be in the future or
      we could just ignore it and force the slow path if needed.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.306546484@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Use a per-CPU trampoline stack for IDT entries · 7f2590a1
      Committed by Andy Lutomirski
      Historically, IDT entries from usermode have always gone directly
      to the running task's kernel stack.  Rearrange it so that we enter on
      a per-CPU trampoline stack and then manually switch to the task's stack.
      This touches a couple of extra cachelines, but it gives us a chance
      to run some code before we touch the kernel stack.
      
      The asm isn't exactly beautiful, but I think that fully refactoring
      it can wait.
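      
      The C side of the hand-off looks roughly like the sketch below (a
      simplified rendering, not the verbatim kernel code): the asm enters
      on the trampoline stack, then calls a helper that copies the saved
      registers onto the task stack and returns the new pt_regs pointer
      for the asm to install in RSP:
      
        struct pt_regs *sync_regs(struct pt_regs *eregs)
        {
                struct pt_regs *regs = (struct pt_regs *)
                        this_cpu_read(cpu_current_top_of_stack) - 1;
        
                if (regs != eregs)
                        *regs = *eregs;
                return regs;
        }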
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.225330557@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/espfix/64: Stop assuming that pt_regs is on the entry stack · 6d9256f0
      Committed by Andy Lutomirski
      When we start using an entry trampoline, a #GP from userspace will
      be delivered on the entry stack, not on the task stack.  Fix the
      espfix64 #DF fixup to set up #GP according to TSS.SP0, rather than
      assuming that pt_regs + 1 == SP0.  This won't change anything
      without an entry stack, but it will make the code continue to work
      when an entry stack is added.
      
      While we're at it, improve the comments to explain what's actually
      going on.
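      
      In effect, the fixup now derives the #GP frame location from TSS.SP0
      instead of from the incoming registers; a minimal sketch of the idea:
      
        /* The frame the hardware would build sits just below SP0, not
         * necessarily just above the incoming pt_regs. */
        struct pt_regs *gpregs = (struct pt_regs *)
                this_cpu_read(cpu_tss.x86_tss.sp0) - 1;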
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.130778051@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Separate cpu_current_top_of_stack from TSS.sp0 · 9aaefe7b
      Committed by Andy Lutomirski
      On 64-bit kernels, we used to assume that TSS.sp0 was the current
      top of stack.  With the addition of an entry trampoline, this will
      no longer be the case.  Store the current top of stack in TSS.sp1,
      which is otherwise unused but shares the same cacheline.
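      
      On 64-bit the indirection then reduces to something like this sketch
      (modulo the exact percpu variable spelling):
      
        /* sp1 is unused by the hardware here and shares sp0's cacheline. */
        #define cpu_current_top_of_stack cpu_tss.x86_tss.sp1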
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.050864668@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry: Remap the TSS into the CPU entry area · 72f5e08d
      Committed by Andy Lutomirski
      This has a secondary purpose: it puts the entry stack into a region
      with a well-controlled layout.  A subsequent patch will take
      advantage of this to streamline the SYSCALL entry code to be able to
      find it more easily.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bpetkov@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.962042855@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct · 1a935bc3
      Committed by Andy Lutomirski
      SYSENTER_stack should have reliable overflow detection, which
      means that it needs to be at the bottom of a page, not the top.
      Move it to the beginning of struct tss_struct and page-align it.
      
      Also add an assertion to make sure that the fixed hardware TSS
      doesn't cross a page boundary.
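      
      The reordered structure looks roughly like this sketch (field types
      and sizes abbreviated, not the verbatim kernel definition; the
      assertion presumably lands in CPU setup code as a BUILD_BUG_ON()):
      
        struct tss_struct {
                /* First, so an overflow runs into the guard page below
                 * rather than silently corrupting neighbouring data: */
                unsigned long           SYSENTER_stack[64];
                struct x86_hw_tss       x86_tss;
                unsigned long           io_bitmap[IO_BITMAP_LONGS + 1];
        } __aligned(PAGE_SIZE);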
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.881827433@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/dumpstack: Handle stack overflow on all stacks · 6e60e583
      Committed by Andy Lutomirski
      We currently special-case stack overflow on the task stack.  We're
      going to start putting special stacks in the fixmap with a custom
      layout, so they'll have guard pages, too.  Teach the unwinder to be
      able to unwind an overflow of any of the stacks.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.802057305@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry: Fix assumptions that the HW TSS is at the beginning of cpu_tss · 7fb983b4
      Committed by Andy Lutomirski
      A future patch will move SYSENTER_stack to the beginning of cpu_tss
      to help detect overflow.  Before this can happen, fix several code
      paths that hardcode assumptions about the old layout.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.722425540@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/kasan/64: Teach KASAN about the cpu_entry_area · 21506525
      Committed by Andy Lutomirski
      The cpu_entry_area will contain stacks.  Make sure that KASAN has
      appropriate shadow mappings for them.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: kasan-dev@googlegroups.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.642806442@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm/fixmap: Generalize the GDT fixmap mechanism, introduce struct cpu_entry_area · ef8813ab
      Committed by Andy Lutomirski
      Currently, the GDT is an ad-hoc array of pages, one per CPU, in the
      fixmap.  Generalize it to be an array of a new 'struct cpu_entry_area'
      so that we can cleanly add new things to it.
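      
      Initially the new structure only wraps the GDT; later patches grow
      it. A sketch of the starting shape:
      
        /* Per-CPU, mapped via the fixmap: */
        struct cpu_entry_area {
                char gdt[PAGE_SIZE];
                /* Entry stack, TSS, trampoline text, etc. are added by
                 * subsequent patches. */
        };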
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.563271721@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/gdt: Put per-CPU GDT remaps in ascending order · aaeed3ae
      Committed by Andy Lutomirski
      We currently have CPU 0's GDT at the top of the GDT range and
      higher-numbered CPUs at lower addresses.  This happens because the
      fixmap is upside down (index 0 is the top of the fixmap).
      
      Flip it so that GDTs are in ascending order by virtual address.
      This will simplify a future patch that will generalize the GDT
      remap to contain multiple pages.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.471561421@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/dumpstack: Add get_stack_info() support for the SYSENTER stack · 33a2f1a6
      Committed by Andy Lutomirski
      get_stack_info() doesn't currently know about the SYSENTER stack, so
      unwinding will fail if we entered the kernel on the SYSENTER stack
      and haven't fully switched off.  Teach get_stack_info() about the
      SYSENTER stack.
      
      With future patches applied that run part of the entry code on the
      SYSENTER stack and introduce an intentional BUG(), I would get:
      
        PANIC: double fault, error_code: 0x0
        ...
        RIP: 0010:do_error_trap+0x33/0x1c0
        ...
        Call Trace:
        Code: ...
      
      With this patch, I get:
      
        PANIC: double fault, error_code: 0x0
        ...
        Call Trace:
         <SYSENTER>
         ? async_page_fault+0x36/0x60
         ? invalid_op+0x22/0x40
         ? async_page_fault+0x36/0x60
         ? sync_regs+0x3c/0x40
         ? sync_regs+0x2e/0x40
         ? error_entry+0x6c/0xd0
         ? async_page_fault+0x36/0x60
         </SYSENTER>
        Code: ...
      
      which is a lot more informative.
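      
      Teaching get_stack_info() about a new stack type boils down to a
      predicate like this sketch (names simplified; the enum value shown
      is illustrative):
      
        static bool in_sysenter_stack(unsigned long *stack,
                                      struct stack_info *info)
        {
                struct tss_struct *tss = this_cpu_ptr(&cpu_tss);
                void *begin = &tss->SYSENTER_stack;
                void *end = begin + sizeof(tss->SYSENTER_stack);
        
                if ((void *)stack < begin || (void *)stack >= end)
                        return false;
        
                info->type = STACK_TYPE_SYSENTER;  /* new stack type */
                info->begin = begin;
                info->end = end;
                info->next_sp = NULL;
                return true;
        }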
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.392711508@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Allocate and enable the SYSENTER stack · 1a79797b
      Committed by Andy Lutomirski
      This will simplify future changes that want scratch variables early in
      the SYSENTER handler -- they'll be able to spill registers to the
      stack.  It also lets us get rid of a SWAPGS_UNSAFE_STACK user.
      
      This does not depend on CONFIG_IA32_EMULATION=y because we'll want the
      stack space even without IA32 emulation.
      
      As far as I can tell, the reason that this wasn't done from day 1 is
      that we use IST for #DB and #BP, which is IMO rather nasty and causes
      a lot more problems than it solves.  But, since #DB uses IST, we don't
      actually need a real stack for SYSENTER (because SYSENTER with TF set
      will invoke #DB on the IST stack rather than the SYSENTER stack).
      
      I want to remove IST usage from these vectors some day, and this patch
      is a prerequisite for that as well.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.312726423@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/irq/64: Print the offending IP in the stack overflow warning · 4f3789e7
      Committed by Andy Lutomirski
      In case something goes wrong with the unwind (not unlikely in case of
      an overflow), print the offending IP where we detected the overflow.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.231677119@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/irq: Remove an old outdated comment about context tracking races · 6669a692
      Committed by Andy Lutomirski
      That race has been fixed and code cleaned up for a while now.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.150551639@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/unwinder: Handle stack overflows more gracefully · b02fcf9b
      Committed by Josh Poimboeuf
      There are at least two unwinder bugs hindering the debugging of
      stack-overflow crashes:
      
      - It doesn't deal gracefully with the case where the stack overflows and
        the stack pointer itself isn't on a valid stack but the
        to-be-dereferenced data *is*.
      
      - The ORC oops dump code doesn't know how to print partial pt_regs, for the
        case where we get an interrupt/exception in *early* entry code
        before the full pt_regs have been saved.
      
      Fix both issues.
      
      Link: http://lkml.kernel.org/r/20171126024031.uxi4numpbjm5rlbr@treble
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bpetkov@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.071425003@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/unwinder/orc: Dont bail on stack overflow · d3a09104
      Committed by Andy Lutomirski
      If the stack overflows into a guard page, the ORC unwinder should work
      well: by construction, there can't be any meaningful data in the guard page
      because no writes to the guard page will have succeeded.
      
      But there is a bug that prevents unwinding from working correctly: if the
      starting register state has RSP pointing into a stack guard page, the ORC
      unwinder bails out immediately.
      
      Instead of bailing out immediately, check whether the next page up is a
      valid stack page and, if so, analyze that.  As a result the ORC unwinder
      can start the unwind.
      
      Tested by intentionally overflowing the task stack.  The result is an
      accurate call trace instead of a trace consisting purely of '?' entries.
      
      There are a few other bugs that are triggered if the unwinder encounters a
      stack overflow after the first step, but they are outside the scope of this
      fix.
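      
      The shape of the fix is roughly the following sketch (state field
      names approximate):
      
        /* If RSP is in a guard page, probe the next page up before
         * giving up -- the stack grows down into the guard page. */
        if (get_stack_info(sp, state->task, &state->stack_info,
                           &state->stack_mask)) {
                unsigned long *next_page =
                        (unsigned long *)PAGE_ALIGN((unsigned long)sp);
        
                if (get_stack_info(next_page, state->task,
                                   &state->stack_info, &state->stack_mask))
                        goto err;
        }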
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150604.991389777@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64/paravirt: Use paravirt-safe macro to access eflags · e17f8234
      Committed by Boris Ostrovsky
      Commit 1d3e53e8 ("x86/entry/64: Refactor IRQ stacks and make them
      NMI-safe") added the DEBUG_ENTRY_ASSERT_IRQS_OFF macro, which accesses
      eflags using the 'pushfq' instruction when testing for the IF bit. On PV
      Xen guests, looking at the IF flag directly will always see it set,
      resulting in a 'ud2'.
      
      Introduce a SAVE_FLAGS() macro that uses the appropriate save_fl pv op
      when running paravirtualized.
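      
      Conceptually the macro has two flavors, as in this sketch (the
      paravirt thunk name here is hypothetical; the real one goes through
      pv_irq_ops.save_fl):
      
        #ifdef CONFIG_PARAVIRT
        #define SAVE_FLAGS(clobbers)    PV_SAVE_FL(clobbers)
        #else
        #define SAVE_FLAGS(x)           pushfq; popq %rax
        #endif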
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: xen-devel@lists.xenproject.org
      Link: https://lkml.kernel.org/r/20171204150604.899457242@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm/kasan: Don't use vmemmap_populate() to initialize shadow · 2aeb0736
      Committed by Andrey Ryabinin
      [ Note, this is a Git cherry-pick of the following commit:
      
          d17a1d97 ("x86/mm/kasan: don't use vmemmap_populate() to initialize shadow")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      The KASAN shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      KASAN, which requires zeroed shadow memory.
      
      Add kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  Besides, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save us
      some memory wasted on page tables and reduce TLB pressure.
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-2-pasha.tatashin@oracle.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/barriers: Convert users of lockless_dereference() to READ_ONCE() · 3382290e
      Committed by Will Deacon
      [ Note, this is a Git cherry-pick of the following commit:
      
          506458ef ("locking/barriers: Convert users of lockless_dereference() to READ_ONCE()")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      READ_ONCE() now has an implicit smp_read_barrier_depends() call, so it
      can be used instead of lockless_dereference() without any change in
      semantics.
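      
      The conversion itself is mechanical, e.g. (the struct name is
      illustrative):
      
        /* Before: */
        struct foo *p = lockless_dereference(gp);
        
        /* After -- READ_ONCE() now implies smp_read_barrier_depends(): */
        struct foo *p = READ_ONCE(gp);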
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1508840570-22169-4-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • bpf: fix build issues on um due to mising bpf_perf_event.h · ab95477e
      Committed by Daniel Borkmann
      [ Note, this is a Git cherry-pick of the following commit:
      
          a23f06f0 ("bpf: fix build issues on um due to mising bpf_perf_event.h")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      Since c895f6f7 ("bpf: correct broken uapi for
      BPF_PROG_TYPE_PERF_EVENT program type") um (uml) won't build
      on i386 or x86_64:
      
        [...]
          CC      init/main.o
        In file included from ../include/linux/perf_event.h:18:0,
                         from ../include/linux/trace_events.h:10,
                         from ../include/trace/syscall.h:7,
                         from ../include/linux/syscalls.h:82,
                         from ../init/main.c:20:
        ../include/uapi/linux/bpf_perf_event.h:11:32: fatal error:
        asm/bpf_perf_event.h: No such file or directory #include
        <asm/bpf_perf_event.h>
        [...]
      
      Let's add the missing bpf_perf_event.h to the um arch as well. It seems
      to be the only one still missing.
      
      Fixes: c895f6f7 ("bpf: correct broken uapi for BPF_PROG_TYPE_PERF_EVENT program type")
      Reported-by: Randy Dunlap <rdunlap@infradead.org>
      Suggested-by: Richard Weinberger <richard@sigma-star.at>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Richard Weinberger <richard@sigma-star.at>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86: Enable free running PEBS for REGS_USER/INTR · 2fe1bc1f
      Committed by Andi Kleen
      [ Note, this is a Git cherry-pick of the following commit:
      
          a47ba4d7 ("perf/x86: Enable free running PEBS for REGS_USER/INTR")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      Currently free running PEBS is disabled when user or interrupt
      registers are requested. Most of the registers are actually
      available in the PEBS record and can be supported.
      
      So we just need to check for the supported registers and then
      allow it: everything is available except for the segment registers.
      
      For user registers this only works when the counter is limited
      to ring 3 only, so this also needs to be checked.
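      
      Conceptually the check looks like this sketch (the mask name is
      illustrative, not the exact kernel code):
      
        static bool pebs_can_free_run(struct perf_event *event)
        {
                u64 supported = PEBS_RECORD_REGS;  /* no segment regs */
                struct perf_event_attr *a = &event->attr;
        
                if (a->sample_regs_intr & ~supported)
                        return false;
        
                /* User regs additionally require a ring-3-only counter. */
                if ((a->sample_type & PERF_SAMPLE_REGS_USER) &&
                    ((a->sample_regs_user & ~supported) || !a->exclude_kernel))
                        return false;
        
                return true;
        }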
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170831214630.21892-1-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: Make X86_BUG_FXSAVE_LEAK detectable in CPUID on AMD · f2dbad36
      Committed by Rudolf Marek
      [ Note, this is a Git cherry-pick of the following commit:
      
          2b67799bdf25 ("x86: Make X86_BUG_FXSAVE_LEAK detectable in CPUID on AMD")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      The latest AMD AMD64 Architecture Programmer's Manual
      adds a CPUID feature XSaveErPtr (CPUID_Fn80000008_EBX[2]).
      
      If this feature is set, the FXSAVE, XSAVE, FXSAVEOPT, XSAVEC, XSAVES
      / FXRSTOR, XRSTOR, XRSTORS always save/restore error pointers,
      thus making the X86_BUG_FXSAVE_LEAK workaround obsolete on such CPUs.
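      
      The workaround setup then reduces to something like this sketch:
      
        /* Only flag the FXSAVE leak erratum when XSaveErPtr is absent. */
        if (!cpu_has(c, X86_FEATURE_XSAVEERPTR))
                set_cpu_bug(c, X86_BUG_FXSAVE_LEAK);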
      Signed-off-by: Rudolf Marek <r.marek@assembler.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Tested-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Link: https://lkml.kernel.org/r/bdcebe90-62c5-1f05-083c-eba7f08b2540@assembler.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/cpufeature: Add User-Mode Instruction Prevention definitions · a8b4db56
      Committed by Ricardo Neri
      [ Note, this is a Git cherry-pick (limited to the cpufeatures.h file) of
        the following commit:
      
          3522c2a6 ("x86/cpufeature: Add User-Mode Instruction Prevention definitions")
      
        ... for easier x86 PTI code testing and back-porting. ]
      
      User-Mode Instruction Prevention is a security feature present in new
      Intel processors that, when set, prevents the execution of a subset of
      instructions if such instructions are executed in user mode (CPL > 0).
      Attempting to execute such instructions causes a general protection
      exception.
      
      The subset of instructions comprises:
      
       * SGDT - Store Global Descriptor Table
       * SIDT - Store Interrupt Descriptor Table
       * SLDT - Store Local Descriptor Table
       * SMSW - Store Machine Status Word
       * STR  - Store Task Register
      
      This feature is also added to the list of disabled-features to allow
      a cleaner handling of build-time configuration.
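      
      The cpufeatures.h side of this is a single bit definition; UMIP is
      CPUID.(EAX=7,ECX=0):ECX bit 2, i.e. something along the lines of:
      
        #define X86_FEATURE_UMIP  (16*32 + 2) /* User Mode Instruction Protection */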
      Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chen Yucong <slaoub@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang Rui <ray.huang@amd.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: ricardo.neri@intel.com
      Link: http://lkml.kernel.org/r/1509935277-22138-7-git-send-email-ricardo.neri-calderon@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 11 November 2017 (1 commit)
  3. 10 November 2017 (4 commits)
  4. 09 November 2017 (2 commits)
    • x86/mm: Unbreak modules that rely on external PAGE_KERNEL availability · 87df2617
      Committed by Jiri Kosina
      Commit 7744ccdb ("x86/mm: Add Secure Memory Encryption (SME)
      support") as a side-effect made PAGE_KERNEL all of a sudden unavailable
      to modules which can't make use of EXPORT_SYMBOL_GPL() symbols.
      
      This is because once SME is enabled, sme_me_mask (which is introduced as
      EXPORT_SYMBOL_GPL) makes its way to PAGE_KERNEL through _PAGE_ENC,
      causing imminent build failure for all the modules which make use of all
      the EXPORT-SYMBOL()-exported API (such as vmap(), __vmalloc(),
      remap_pfn_range(), ...).
      
      Exporting (as EXPORT_SYMBOL()) interfaces (and having done so for ages)
      that take a pgprot_t argument, while making it impossible to -- all of a
      sudden -- pass PAGE_KERNEL to them, feels rather inconsistent.
      
      Restore the original behavior and make it possible to pass PAGE_KERNEL
      to all its EXPORT_SYMBOL() consumers.
      
      [ This is all so not wonderful. We shouldn't need that "sme_me_mask"
        access at all in all those places that really don't care about that
        level of detail, and just want _PAGE_KERNEL or whatever.
      
        We have some similar issues with _PAGE_CACHE_WP and _PAGE_NOCACHE,
        both of which hide a "cachemode2protval()" call, and which also ends
        up using another EXPORT_SYMBOL(), but at least that only triggers for
        the much more rare cases.
      
        Maybe we could move these dynamic page table bits to be generated much
        deeper down in the VM layer, instead of hiding them in the macros that
        everybody uses.
      
        So this all would merit some cleanup. But not today.   - Linus ]
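      
      The restore itself is a one-liner; in effect (a sketch of the change,
      not a verbatim diff):
      
        /* sme_me_mask now reaches every module through PAGE_KERNEL via
         * _PAGE_ENC, so it must be a plain export again: */
        EXPORT_SYMBOL(sme_me_mask);     /* was EXPORT_SYMBOL_GPL() */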
      
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Despised-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86/idt: Remove X86_TRAP_BP initialization in idt_setup_traps() · d0cd64b0
      Committed by Yonghong Song
      Commit b70543a0 ("x86/idt: Move regular trap init to tables") moved the
      regular trap init for each trap vector into a table-based
      initialization. It introduced an initialization for vector X86_TRAP_BP
      which was not in the code it replaced. This breaks uprobe
      functionality on x86_32; the probed program segfaults instead of the
      probe being handled properly.
      
      The reason for this is that X86_TRAP_BP is set up as a system interrupt
      gate (DPL3) in the early IDT and then replaced by a regular interrupt
      gate (DPL0) in idt_setup_traps(). The DPL0 restriction causes the int3
      trap to fail with a #GP, resulting in a SIGSEGV of the probed program.
      
      On 64bit this does not cause a problem because the IDT entry is replaced
      with a system interrupt gate (DPL3) with interrupt stack afterwards.
      
      Remove X86_TRAP_BP from the def_idts table which is used in
      idt_setup_traps(). Remove a redundant entry for X86_TRAP_NMI in def_idts
      while at it. Tested on both x86_64 and x86_32.
      
      [ tglx: Amended changelog with a description of the root cause ]
      
      Fixes: b70543a0 ("x86/idt: Move regular trap init to tables")
      Reported-and-tested-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: a.p.zijlstra@chello.nl
      Cc: ast@fb.com
      Cc: oleg@redhat.com
      Cc: luto@kernel.org
      Cc: kernel-team@fb.com
      Link: https://lkml.kernel.org/r/20171108192845.552709-1-yhs@fb.com
  5. 08 November 2017 (6 commits)
    • MIPS: AR7: Ensure that serial ports are properly set up · b084116f
      Committed by Oswald Buddenhagen
      Without UPF_FIXED_TYPE, the data from the PORT_AR7 uart_config entry is
      never copied, resulting in a dead port.
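      
      The fix amounts to marking the port's type as fixed when setting it
      up, along these lines (a sketch; other fields elided):
      
        struct uart_port uart_port = {
                .type   = PORT_AR7,
                /* UPF_FIXED_TYPE makes the core copy the uart_config
                 * data for PORT_AR7 instead of probing. */
                .flags  = UPF_BOOT_AUTOCONF | UPF_FIXED_TYPE,
                /* .iobase, .irq, .uartclk, ... as before */
        };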
      
      Fixes: 154615d5 ("MIPS: AR7: Use correct UART port type")
      Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
      [jonas.gorski: add Fixes tag]
      Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Cc: Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
      Cc: linux-mips@linux-mips.org
      Cc: linux-serial@vger.kernel.org
      Cc: <stable@vger.kernel.org>
      Patchwork: https://patchwork.linux-mips.org/patch/17543/
      Signed-off-by: James Hogan <jhogan@kernel.org>
    • MIPS: AR7: Defer registration of GPIO · e6b03ab6
      Committed by Jonas Gorski
      When called from prom init code, ar7_gpio_init() will fail, as it calls
      gpiochip_add(), which relies on a working kmalloc() to allocate the
      gpio_desc array, and kmalloc() is not usable yet at prom init time.
      
      Move ar7_gpio_init() to ar7_register_devices() (a device_initcall)
      where kmalloc works.
      
      Fixes: 14e85c0e ("gpio: remove gpio_descs global array")
      Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Cc: linux-mips@linux-mips.org
      Cc: linux-serial@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.19+
      Patchwork: https://patchwork.linux-mips.org/patch/17542/
      Signed-off-by: James Hogan <jhogan@kernel.org>
    • x86/oprofile/ppro: Do not use __this_cpu*() in preemptible context · a743bbee
      Committed by Borislav Petkov
      The warning below says it all:
      
        BUG: using __this_cpu_read() in preemptible [00000000] code: swapper/0/1
        caller is __this_cpu_preempt_check
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc8 #4
        Call Trace:
         dump_stack
         check_preemption_disabled
         ? do_early_param
         __this_cpu_preempt_check
         arch_perfmon_init
         op_nmi_init
         ? alloc_pci_root_info
         oprofile_arch_init
         oprofile_init
         do_one_initcall
         ...
      
      These accessors should not have been used in the first place: it is PPro so
      no mixed silicon revisions and thus it can simply use boot_cpu_data.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Tested-by: Fengguang Wu <fengguang.wu@intel.com>
      Fix-creation-mandated-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Robert Richter <rric@kernel.org>
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org
    • x86/unwind: Disable KASAN checking in the ORC unwinder · 881125bf
      Committed by Josh Poimboeuf
      Fengguang reported a KASAN warning:
      
        Kprobe smoke test: started
        ==================================================================
        BUG: KASAN: stack-out-of-bounds in deref_stack_reg+0xb5/0x11a
        Read of size 8 at addr ffff8800001c7cd8 by task swapper/1
      
        CPU: 0 PID: 1 Comm: swapper Not tainted 4.14.0-rc8 #26
        Call Trace:
         <#DB>
         ...
         save_trace+0xd9/0x1d3
         mark_lock+0x5f7/0xdc3
         __lock_acquire+0x6b4/0x38ef
         lock_acquire+0x1a1/0x2aa
         _raw_spin_lock_irqsave+0x46/0x55
         kretprobe_table_lock+0x1a/0x42
         pre_handler_kretprobe+0x3f5/0x521
         kprobe_int3_handler+0x19c/0x25f
         do_int3+0x61/0x142
         int3+0x30/0x60
        [...]
      
      The ORC unwinder got confused by some kprobes changes, which isn't
      surprising since the runtime code no longer matches vmlinux and the
      stack was modified for kretprobes.
      
      Until we have a way for generated code to register changes with the
      unwinder, these types of warnings are inevitable.  So just disable KASAN
      checks for stack accesses in the ORC unwinder.
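      
      Concretely, the stack dereference helper switches to
      READ_ONCE_NOCHECK(), roughly as in this sketch:
      
        static bool deref_stack_reg(struct unwind_state *state,
                                    unsigned long addr, unsigned long *val)
        {
                if (!stack_access_ok(state, addr, sizeof(long)))
                        return false;
        
                /* _NOCHECK: don't let KASAN inspect this access. */
                *val = READ_ONCE_NOCHECK(*(unsigned long *)addr);
                return true;
        }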
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20171108021934.zbl6unh5hpugybc5@treble
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • KVM: PPC: Book3S HV: Fix exclusion between HPT resizing and other HPT updates · 38c53af8
      Committed by Paul Mackerras
      Commit 5e985969 ("KVM: PPC: Book3S HV: Outline of KVM-HV HPT resizing
      implementation", 2016-12-20) added code that tries to exclude any use
      or update of the hashed page table (HPT) while the HPT resizing code
      is iterating through all the entries in the HPT.  It does this by
      taking the kvm->lock mutex, clearing the kvm->arch.hpte_setup_done
      flag and then sending an IPI to all CPUs in the host.  The idea is
      that any VCPU task that tries to enter the guest will see that the
      hpte_setup_done flag is clear and therefore call kvmppc_hv_setup_htab_rma,
      which also takes the kvm->lock mutex and will therefore block until
      we release kvm->lock.
      
      However, any VCPU that is already in the guest, or is handling a
      hypervisor page fault or hypercall, can re-enter the guest without
      rechecking the hpte_setup_done flag.  The IPI will cause a guest exit
      of any VCPUs that are currently in the guest, but does not prevent
      those VCPU tasks from immediately re-entering the guest.
      
      The result is that after resize_hpt_rehash_hpte() has made a HPTE
      absent, a hypervisor page fault can occur and make that HPTE present
      again.  This includes updating the rmap array for the guest real page,
      meaning that we now have a pointer in the rmap array which connects
      with pointers in the old rev array but not the new rev array.  In
      fact, if the HPT is being reduced in size, the pointer in the rmap
      array could point outside the bounds of the new rev array.  If that
      happens, we can get a host crash later on such as this one:
      
      [91652.628516] Unable to handle kernel paging request for data at address 0xd0000000157fb10c
      [91652.628668] Faulting instruction address: 0xc0000000000e2640
      [91652.628736] Oops: Kernel access of bad area, sig: 11 [#1]
      [91652.628789] LE SMP NR_CPUS=1024 NUMA PowerNV
      [91652.628847] Modules linked in: binfmt_misc vhost_net vhost tap xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables ses enclosure scsi_transport_sas i2c_opal ipmi_powernv ipmi_devintf i2c_core ipmi_msghandler powernv_op_panel nfsd auth_rpcgss oid_registry nfs_acl lockd grace sunrpc kvm_hv kvm_pr kvm scsi_dh_alua dm_service_time dm_multipath tg3 ptp pps_core [last unloaded: stap_552b612747aec2da355051e464fa72a1_14259]
      [91652.629566] CPU: 136 PID: 41315 Comm: CPU 21/KVM Tainted: G           O    4.14.0-1.rc4.dev.gitb27fc5c.el7.centos.ppc64le #1
      [91652.629684] task: c0000007a419e400 task.stack: c0000000028d8000
      [91652.629750] NIP:  c0000000000e2640 LR: d00000000c36e498 CTR: c0000000000e25f0
      [91652.629829] REGS: c0000000028db5d0 TRAP: 0300   Tainted: G           O     (4.14.0-1.rc4.dev.gitb27fc5c.el7.centos.ppc64le)
      [91652.629932] MSR:  900000010280b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]>  CR: 44022422  XER: 00000000
      [91652.630034] CFAR: d00000000c373f84 DAR: d0000000157fb10c DSISR: 40000000 SOFTE: 1
      [91652.630034] GPR00: d00000000c36e498 c0000000028db850 c000000001403900 c0000007b7960000
      [91652.630034] GPR04: d0000000117fb100 d000000007ab00d8 000000000033bb10 0000000000000000
      [91652.630034] GPR08: fffffffffffffe7f 801001810073bb10 d00000000e440000 d00000000c373f70
      [91652.630034] GPR12: c0000000000e25f0 c00000000fdb9400 f000000003b24680 0000000000000000
      [91652.630034] GPR16: 00000000000004fb 00007ff7081a0000 00000000000ec91a 000000000033bb10
      [91652.630034] GPR20: 0000000000010000 00000000001b1190 0000000000000001 0000000000010000
      [91652.630034] GPR24: c0000007b7ab8038 d0000000117fb100 0000000ec91a1190 c000001e6a000000
      [91652.630034] GPR28: 00000000033bb100 000000000073bb10 c0000007b7960000 d0000000157fb100
      [91652.630735] NIP [c0000000000e2640] kvmppc_add_revmap_chain+0x50/0x120
      [91652.630806] LR [d00000000c36e498] kvmppc_book3s_hv_page_fault+0xbb8/0xc40 [kvm_hv]
      [91652.630884] Call Trace:
      [91652.630913] [c0000000028db850] [c0000000028db8b0] 0xc0000000028db8b0 (unreliable)
      [91652.630996] [c0000000028db8b0] [d00000000c36e498] kvmppc_book3s_hv_page_fault+0xbb8/0xc40 [kvm_hv]
      [91652.631091] [c0000000028db9e0] [d00000000c36a078] kvmppc_vcpu_run_hv+0xdf8/0x1300 [kvm_hv]
      [91652.631179] [c0000000028dbb30] [d00000000c2248c4] kvmppc_vcpu_run+0x34/0x50 [kvm]
      [91652.631266] [c0000000028dbb50] [d00000000c220d54] kvm_arch_vcpu_ioctl_run+0x114/0x2a0 [kvm]
      [91652.631351] [c0000000028dbbd0] [d00000000c2139d8] kvm_vcpu_ioctl+0x598/0x7a0 [kvm]
      [91652.631433] [c0000000028dbd40] [c0000000003832e0] do_vfs_ioctl+0xd0/0x8c0
      [91652.631501] [c0000000028dbde0] [c000000000383ba4] SyS_ioctl+0xd4/0x130
      [91652.631569] [c0000000028dbe30] [c00000000000b8e0] system_call+0x58/0x6c
      [91652.631635] Instruction dump:
      [91652.631676] fba1ffe8 fbc1fff0 fbe1fff8 f8010010 f821ffa1 2fa70000 793d0020 e9432110
      [91652.631814] 7bbf26e4 7c7e1b78 7feafa14 409e0094 <807f000c> 786326e4 7c6a1a14 93a40008
      [91652.631959] ---[ end trace ac85ba6db72e5b2e ]---
      
      To fix this, we tighten up the way that the hpte_setup_done flag is
      checked to ensure that it does provide the guarantee that the resizing
      code needs.  In kvmppc_run_core(), we check the hpte_setup_done flag
      after disabling interrupts and refuse to enter the guest if it is
      clear (for a HPT guest).  The code that checks hpte_setup_done and
      calls kvmppc_hv_setup_htab_rma() is moved from kvmppc_vcpu_run_hv()
      to a point inside the main loop in kvmppc_run_vcpu(), ensuring that
      we don't just spin endlessly calling kvmppc_run_core() while
      hpte_setup_done is clear, but instead have a chance to block on the
      kvm->lock mutex.
      
      Finally we also check hpte_setup_done inside the region in
      kvmppc_book3s_hv_page_fault() where the HPTE is locked and we are about
      to update the HPTE, and bail out if it is clear.  If another CPU is
      inside kvm_vm_ioctl_resize_hpt_commit() and has cleared hpte_setup_done,
      then we know that either we are looking at a HPTE
      that resize_hpt_rehash_hpte() has not yet processed, which is OK,
      or else we will see hpte_setup_done clear and refuse to update it,
      because of the full barrier formed by the unlock of the HPTE in
      resize_hpt_rehash_hpte() combined with the locking of the HPTE
      in kvmppc_book3s_hv_page_fault().
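      
      The crucial ordering can be pictured as in this sketch (names
      approximate, a fragment rather than the full patch):
      
        /* In kvmppc_run_core(), recheck with interrupts off so the
         * resizer's IPI cannot race with a guest re-entry that missed
         * the flag being cleared: */
        local_irq_disable();
        if (!kvm->arch.hpte_setup_done) {
                local_irq_enable();
                return;         /* back out; retake kvm->lock and retry */
        }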
      
      Fixes: 5e985969 ("KVM: PPC: Book3S HV: Outline of KVM-HV HPT resizing implementation")
      Cc: stable@vger.kernel.org # v4.10+
      Reported-by: Satheesh Rajendran <satheera@in.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • MIPS: BMIPS: Fix missing cbr address · ea4b3afe
      Committed by Jaedon Shin
      Fix NULL pointer access in BMIPS3300 RAC flush.
      
      Fixes: 738a3f79 ("MIPS: BMIPS: Add early CPU initialization code")
      Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Cc: Kevin Cernekee <cernekee@gmail.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # 4.7+
      Patchwork: https://patchwork.linux-mips.org/patch/16423/
      Signed-off-by: James Hogan <jhogan@kernel.org>
  6. 07 November 2017 (2 commits)