1. 06 April 2019 (1 commit)
  2. 17 October 2018 (1 commit)
  3. 14 October 2018 (1 commit)
  4. 24 July 2018 (1 commit)
    • x86/entry/64: Remove %ebx handling from error_entry/exit · b3681dd5
      Committed by Andy Lutomirski
      error_entry and error_exit communicate the user vs. kernel status of
      the frame using %ebx.  This is unnecessary -- the information is in
      regs->cs.  Just use regs->cs.
      
      This makes error_entry simpler and makes error_exit more robust.
      
      It also fixes a nasty bug.  Before all the Spectre nonsense, the
      xen_failsafe_callback entry point returned like this:
      
              ALLOC_PT_GPREGS_ON_STACK
              SAVE_C_REGS
              SAVE_EXTRA_REGS
              ENCODE_FRAME_POINTER
              jmp     error_exit
      
      And it did not go through error_entry.  This was bogus: RBX
      contained garbage, and error_exit expected a flag in RBX.
      
      Fortunately, it generally contained *nonzero* garbage, so the
      correct code path was used.  As part of the Spectre fixes, code was
      added to clear RBX to mitigate certain speculation attacks.  Now,
      depending on kernel configuration, RBX got zeroed and, when running
      some Wine workloads, the kernel crashed.  This was introduced by:
      
          commit 3ac6d8c7 ("x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface")
      
      With this patch applied, RBX is no longer needed as a flag, and the
      problem goes away.
      
      I suspect that malicious userspace could use this bug to crash the
      kernel even without the offending patch applied, though.
      
      [ Historical note: I wrote this patch as a cleanup before I was aware
        of the bug it fixed. ]
      
      [ Note to stable maintainers: this should probably get applied to all
        kernels.  If you're nervous about that, a more conservative fix to
        add xorl %ebx,%ebx; incl %ebx before the jump to error_exit should
        also fix the problem. ]
      Reported-and-tested-by: M. Vefa Bicakci <m.v.b@runbox.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Fixes: 3ac6d8c7 ("x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface")
      Link: http://lkml.kernel.org/r/b5010a090d3586b2d6e06c7ad3ec5542d1241c45.1532282627.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b3681dd5
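      The conservative fix suggested in the note to stable maintainers would
      simply give error_exit the non-zero flag it still expects in unpatched
      code. A minimal sketch of that variant, using the pre-patch macro names
      quoted above (the upstream fix instead drops the %ebx flag entirely):

              ALLOC_PT_GPREGS_ON_STACK
              SAVE_C_REGS
              SAVE_EXTRA_REGS
              ENCODE_FRAME_POINTER
              xorl    %ebx, %ebx      /* clear the flag register ... */
              incl    %ebx            /* ... then set the non-zero "came
                                         from kernel" value error_exit expects */
              jmp     error_exit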
  5. 03 July 2018 (1 commit)
    • x86/entry/64: Add two more instruction suffixes · 6709812f
      Committed by Jan Beulich
      Sadly, contrary to what was claimed in:
      
        a368d7fd ("x86/entry/64: Add instruction suffix")
      
      ... there are two more instances which want to be adjusted.
      
      As said there, omitting suffixes from instructions in AT&T mode is bad
      practice when operand size cannot be determined by the assembler from
      register operands, and is likely going to be warned about by upstream
      gas in the future (mine does already).
      
      Add the other missing suffixes here as well.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5B3A02DD02000078001CFB78@prv1-mh.provo.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6709812f
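      As a generic illustration of the problem (not one of the two instances
      this patch adjusts): with only an immediate and a memory operand, the
      assembler cannot deduce the operand size, so it has to be spelled out
      with a suffix:

              bt      $9, (%rsp)              /* ambiguous: gas has to guess the size */
              btl     $9, (%rsp)              /* explicit 32-bit operand size */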
  6. 21 June 2018 (1 commit)
  7. 14 June 2018 (1 commit)
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Committed by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      050e9baa
  8. 10 April 2018 (1 commit)
  9. 05 April 2018 (1 commit)
  10. 24 March 2018 (1 commit)
  11. 07 March 2018 (1 commit)
    • Drivers: hv: vmbus: Implement Direct Mode for stimer0 · 248e742a
      Committed by Michael Kelley
      The 2016 version of Hyper-V offers the option to operate the guest VM
      per-vcpu stimers in Direct Mode, which means the timer interrupts on its
      own vector rather than queueing a VMbus message. Direct Mode reduces
      timer processing overhead in both the hypervisor and the guest, and
      avoids having timer interrupts pollute the VMbus interrupt stream for
      the synthetic NIC and storage.  This patch enables Direct Mode by
      default on stimer0 when running on a version of Hyper-V that supports
      it.
      
      In preparation for upcoming Hyper-V support on ARM64, the arch-independent
      portion of the code contains calls to routines that will be populated
      on ARM64 but are not needed and do nothing on x86.
      Signed-off-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      248e742a
  12. 28 February 2018 (1 commit)
  13. 21 February 2018 (7 commits)
    • x86/entry/64: Open-code switch_to_thread_stack() · f3d415ea
      Committed by Dominik Brodowski
      Open-code the two instances which called switch_to_thread_stack(). This
      allows us to remove the wrapper around DO_SWITCH_TO_THREAD_STACK.
      
      While at it, update the UNWIND hint to reflect where the IRET frame is,
      and update the commentary to reflect what we are actually doing here.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-7-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f3d415ea
    • x86/entry/64: Move ASM_CLAC to interrupt_entry() · b2855d8d
      Committed by Dominik Brodowski
      Moving ASM_CLAC to interrupt_entry means two instructions (addq / pushq
      and call interrupt_entry) are not covered by it. However, it offers a
      noticeable size reduction (-0.2k):
      
         text	   data	    bss	    dec	    hex	filename
        16882	      0	      0	  16882	   41f2	entry_64.o-orig
        16623	      0	      0	  16623	   40ef	entry_64.o
      Suggested-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-6-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b2855d8d
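      Schematically, the trade-off looks like this (a sketch following the
      changelog, not the literal diff): the two instructions at each call
      site now run before ASM_CLAC, which executes once inside the helper
      instead of once per interrupt stub:

      common_interrupt:
              addq    $-0x80, (%rsp)          /* no longer covered by CLAC */
              call    interrupt_entry         /* not covered either */
              ...

      interrupt_entry:
              ASM_CLAC                        /* done once, here */
              ...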
    • x86/entry/64: Remove 'interrupt' macro · 3aa99fc3
      Committed by Dominik Brodowski
      It is now trivial to call interrupt_entry() and then the actual worker.
      Therefore, remove the interrupt macro and open code it all.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-5-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3aa99fc3
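      With the helper in place, the former macro body reduces to a plain call
      sequence. A sketch of the resulting shape (based on the description
      above, not a quote of the upstream file):

      common_interrupt:
              ASM_CLAC
              addq    $-0x80, (%rsp)          /* adjust vector to the [-256, -1] range */
              call    interrupt_entry         /* save regs, switch stacks */
              call    do_IRQ                  /* the actual worker */
              jmp     ret_from_intr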
    • x86/entry/64: Move the switch_to_thread_stack() call to interrupt_entry() · 90a6acc4
      Committed by Dominik Brodowski
      We can also move the CLD, SWAPGS, and the switch_to_thread_stack() call
      to the interrupt_entry() helper function. As we do not want call depths
      of two, convert switch_to_thread_stack() to a macro.
      
      However, switch_to_thread_stack() has another user in entry_64_compat.S,
      which currently expects it to be a function. To keep the code changes
      in this patch minimal, create a wrapper function.
      
      The switch to a macro means that there is some binary code duplication
      if CONFIG_IA32_EMULATION=y is enabled. Therefore, the size reduction
      differs whether CONFIG_IA32_EMULATION is enabled or not:
      
      CONFIG_IA32_EMULATION=y (-0.13k):
         text	   data	    bss	    dec	    hex	filename
        17158	      0	      0	  17158	   4306	entry_64.o-orig
        17028	      0	      0	  17028	   4284	entry_64.o
      
      CONFIG_IA32_EMULATION=n (-0.27k):
         text	   data	    bss	    dec	    hex	filename
        17158	      0	      0	  17158	   4306	entry_64.o-orig
        16882	      0	      0	  16882	   41f2	entry_64.o
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-4-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90a6acc4
    • x86/entry/64: Move ENTER_IRQ_STACK from interrupt macro to interrupt_entry · 2ba64741
      Committed by Dominik Brodowski
      Moving the switch to IRQ stack from the interrupt macro to the helper
      function requires some trickery: All ENTER_IRQ_STACK really cares about
      is where the "original" stack -- meaning the GP registers etc. -- is
      stored. Therefore, we need to offset the stored RSP value by 8 whenever
      ENTER_IRQ_STACK is called from within a function. In such cases, and
      after switching to the IRQ stack, we need to push the "original" return
      address (i.e. the return address from the call to the interrupt entry
      function) to the IRQ stack.
      
      This trickery allows us to carve another 0.85k from the text size (it
      would be more except for the additional unwind hints):
      
         text	   data	    bss	    dec	    hex	filename
        18006	      0	      0	  18006	   4656	entry_64.o-orig
        17158	      0	      0	  17158	   4306	entry_64.o
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-3-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2ba64741
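      The mechanism described above, sketched (the parameter names are an
      assumption for illustration; only ENTER_IRQ_STACK itself is named in
      the changelog):

              /*
               * Called from within interrupt_entry(), so the top of the stack
               * holds our return address and the "original" stack starts 8
               * bytes further up.
               */
              ENTER_IRQ_STACK old_rsp=%rdi save_ret=1
              /*
               * The macro offsets the stored RSP by 8 and pushes the return
               * address onto the IRQ stack, so a later 'ret' still works.
               */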
    • x86/entry/64: Move PUSH_AND_CLEAR_REGS from interrupt macro to helper function · 0e34d226
      Committed by Dominik Brodowski
      The PUSH_AND_CLEAR_REGS macro is able to insert the GP registers
      "above" the original return address. This allows us to move a sizeable
      part of the interrupt entry macro to an interrupt entry helper function:
      
         text	   data	    bss	    dec	    hex	filename
        21088	      0	      0	  21088	   5260	entry_64.o-orig
        18006	      0	      0	  18006	   4656	entry_64.o
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-2-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0e34d226
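      A sketch of how the helper can use this (the save_ret parameter name is
      an assumption for illustration; PUSH_AND_CLEAR_REGS itself is named in
      the changelog):

      ENTRY(interrupt_entry)
              UNWIND_HINT_FUNC
              /*
               * We were reached via 'call', so a return address already sits
               * on top of the stack; the macro builds the pt_regs save area
               * "above" it and leaves the return address on top, so the
               * final 'ret' below still works.
               */
              PUSH_AND_CLEAR_REGS save_ret=1
              ...
              ret
      END(interrupt_entry)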
    • x86/mm: Optimize boot-time paging mode switching cost · 39b95522
      Committed by Kirill A. Shutemov
      By this point we have functioning boot-time switching between 4- and
      5-level paging mode. But the naive approach comes with a cost.
      
      Numbers below are for kernel build, allmodconfig, 5 times.
      
      CONFIG_X86_5LEVEL=n:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17308719.892691      task-clock:u (msec)       #   26.772 CPUs utilized            ( +-  0.11% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,993,164      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,614,978,867,455      cycles:u                  #    2.520 GHz                      ( +-  0.01% )
      39,371,534,575,126      stalled-cycles-frontend:u #   90.27% frontend cycles idle     ( +-  0.09% )
      28,363,350,152,428      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,784,066,413      branches:u                #  364.948 M/sec                    ( +-  0.00% )
         250,808,144,781      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           646.531974142 seconds time elapsed                                          ( +-  1.15% )
      
      CONFIG_X86_5LEVEL=y:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17411536.780625      task-clock:u (msec)       #   26.426 CPUs utilized            ( +-  0.10% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,868,663      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,865,909,056,301      cycles:u                  #    2.519 GHz                      ( +-  0.01% )
      39,740,130,365,581      stalled-cycles-frontend:u #   90.59% frontend cycles idle     ( +-  0.05% )
      28,363,358,997,959      instructions:u            #    0.65  insn per cycle
                                                        #    1.40  stalled cycles per insn  ( +-  0.00% )
       6,316,784,937,460      branches:u                #  362.793 M/sec                    ( +-  0.00% )
         251,531,919,485      branch-misses:u           #    3.98% of all branches          ( +-  0.00% )
      
           658.886307752 seconds time elapsed                                          ( +-  0.92% )
      
      The patch tries to fix the performance regression by using
      cpu_feature_enabled(X86_FEATURE_LA57) instead of pgtable_l5_enabled in
      all hot code paths. These checks get statically patched into the target
      code at boot, for additional performance.
      
      CONFIG_X86_5LEVEL=y + the patch:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17381990.268506      task-clock:u (msec)       #   26.907 CPUs utilized            ( +-  0.19% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,862,625      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,697,726,320,051      cycles:u                  #    2.514 GHz                      ( +-  0.03% )
      39,480,408,690,401      stalled-cycles-frontend:u #   90.35% frontend cycles idle     ( +-  0.05% )
      28,363,394,221,388      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,794,985,573      branches:u                #  363.410 M/sec                    ( +-  0.00% )
         251,013,232,547      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           645.991174661 seconds time elapsed                                          ( +-  1.19% )
      
      Unfortunately, this approach doesn't help with text size:
      
        vmlinux.before .text size:	8190319
        vmlinux.after .text size:	8200623
      
      The .text section is increased by about 4k. Not sure if we can do anything
      about this.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20180216114948.68868-4-kirill.shutemov@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39b95522
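      The hot-path pattern being switched to, as a sketch (this shows the
      general cpu_feature_enabled() idiom in kernel code, not a quote of the
      patch itself):

      /*
       * Sketch: test the CPU feature bit instead of reading a global.
       * cpu_feature_enabled() compiles down to a static branch that is
       * patched once at boot via alternatives, so the disabled case costs
       * (nearly) nothing in the hot paths.
       */
      static inline bool pgtable_l5_enabled(void)
      {
              return cpu_feature_enabled(X86_FEATURE_LA57);
      }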
  14. 20 February 2018 (1 commit)
  15. 17 February 2018 (1 commit)
  16. 15 February 2018 (1 commit)
    • x86/entry/64: Fix CR3 restore in paranoid_exit() · e4865757
      Committed by Ingo Molnar
      Josh Poimboeuf noticed the following bug:
      
       "The paranoid exit code only restores the saved CR3 when it switches back
        to the user GS.  However, even in the kernel GS case, it's possible that
        it needs to restore a user CR3, if for example, the paranoid exception
        occurred in the syscall exit path between SWITCH_TO_USER_CR3_STACK and
        SWAPGS."
      
      Josh also confirmed via targeted testing that it's possible to hit this bug.
      
      Fix the bug by also restoring CR3 in the paranoid_exit_no_swapgs branch.
      
      The reason we haven't seen this bug reported by users yet is probably because
      "paranoid" entry points are limited to the following cases:
      
       idtentry double_fault       do_double_fault  has_error_code=1  paranoid=2
       idtentry debug              do_debug         has_error_code=0  paranoid=1 shift_ist=DEBUG_STACK
       idtentry int3               do_int3          has_error_code=0  paranoid=1 shift_ist=DEBUG_STACK
       idtentry machine_check      do_mce           has_error_code=0  paranoid=1
      
      Amongst those entry points only machine_check is one that will interrupt an
      IRQS-off critical section asynchronously - and machine check events are rare.
      
      The other main asynchronous entries are NMI entries, which can be very high-freq
      with perf profiling, but they are special: they don't use the 'idtentry' macro but
      are open coded and restore user CR3 unconditionally so don't have this bug.
      Reported-and-tested-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20180214073910.boevmg65upbk3vqb@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e4865757
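      The shape of the fix, sketched (the RESTORE_CR3 arguments follow its
      use elsewhere in that era's entry_64.S and are an assumption here, not
      quoted from the changelog):

      .Lparanoid_exit_no_swapgs:
              TRACE_IRQS_IRETQ_DEBUG
              /*
               * Restore CR3 on this branch too: even with kernel GS we may
               * have interrupted the syscall exit path after
               * SWITCH_TO_USER_CR3_STACK but before SWAPGS.
               */
              RESTORE_CR3     scratch_reg=%rbx save_reg=%r14
              jmp     .Lparanoid_exit_restore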
  17. 14 February 2018 (1 commit)
  18. 13 February 2018 (7 commits)
  19. 10 February 2018 (1 commit)
    • x86/mm/pti: Fix PTI comment in entry_SYSCALL_64() · 14b1fcc6
      Committed by Nadav Amit
      The comment is confusing since the path is taken when
      CONFIG_PAGE_TABLE_ISOLATION is disabled (while the comment says it is not
      taken).
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: nadav.amit@gmail.com
      Link: http://lkml.kernel.org/r/20180209170638.15161-1-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      14b1fcc6
  20. 06 February 2018 (3 commits)
    • x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface · 3ac6d8c7
      Committed by Dan Williams
      Clear the 'extra' registers on entering the 64-bit kernel for exceptions
      and interrupts. The common registers are not cleared since they are
      likely clobbered well before they can be exploited in a speculative
      execution attack.
      
      Originally-From: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/151787989146.7847.15749181712358213254.stgit@dwillia2-desk3.amr.corp.intel.com
      [ Made small improvements to the changelog and the code comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3ac6d8c7
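      The pattern, sketched with the register set that SAVE_EXTRA_REGS
      handles (a sketch of the idea, not the literal diff):

              SAVE_EXTRA_REGS
              /*
               * Zero the just-saved registers so stale, possibly
               * user-controlled values cannot serve as inputs to speculative
               * gadgets later in the kernel. 32-bit XOR zero-extends, so it
               * clears the full 64-bit register.
               */
              xorl    %ebx, %ebx
              xorl    %ebp, %ebp
              xorl    %r12d, %r12d
              xorl    %r13d, %r13d
              xorl    %r14d, %r14d
              xorl    %r15d, %r15d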
    • x86/entry/64: Clear extra registers beyond syscall arguments, to reduce speculation attack surface · 8e1eb3fa
      Committed by Dan Williams
      At entry userspace may have (maliciously) populated the extra registers
      outside the syscall calling convention with arbitrary values that could
      be useful in a speculative execution (Spectre style) attack.
      
      Clear these registers to minimize the kernel's attack surface.
      
      Note, this only clears the extra registers and not the unused
      registers for syscalls less than 6 arguments, since those registers are
      likely to be clobbered well before their values could be put to use
      under speculation.
      
      Note, Linus found that the XOR instructions can be executed with
      minimized cost if interleaved with the PUSH instructions, and Ingo's
      analysis found that R10 and R11 should be included in the register
      clearing beyond the typical 'extra' syscall calling convention
      registers.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reported-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/151787988577.7847.16733592218894189003.stgit@dwillia2-desk3.amr.corp.intel.com
      [ Made small improvements to the changelog and the code comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8e1eb3fa
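      The interleaving mentioned above, sketched for a few of the registers
      (illustrative, not the full list the patch touches):

              pushq   %r10                    /* pt_regs->r10 */
              xorl    %r10d, %r10d            /* clear it right after saving;
                                                 32-bit XOR zero-extends */
              pushq   %r11                    /* pt_regs->r11 */
              xorl    %r11d, %r11d
              pushq   %rbx                    /* pt_regs->rbx */
              xorl    %ebx, %ebx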
    • membarrier/x86: Provide core serializing command · 10bcc80e
      Committed by Mathieu Desnoyers
      There are two places where core serialization is needed by membarrier:
      
      1) When returning from the membarrier IPI,
      2) After scheduler updates curr to a thread with a different mm, before
         going back to user-space, since the curr->mm is used by membarrier to
         check whether it needs to send an IPI to that CPU.
      
      x86-32 uses IRET as return from interrupt, and both IRET and SYSEXIT to go
      back to user-space. The IRET instruction is core serializing, but not
      SYSEXIT.
      
      x86-64 uses IRET as return from interrupt, which takes care of the IPI.
      However, it can return to user-space through either SYSRETL (compat
      code), SYSRETQ, or IRET. Given that SYSRET{L,Q} is not core serializing,
      we rely instead on write_cr3() performed by switch_mm() to provide core
      serialization after changing the current mm, and deal with the special
      case of kthread -> uthread (temporarily keeping current mm into
      active_mm) by adding a sync_core() in that specific case.
      
      Use the new sync_core_before_usermode() to guarantee this.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrea Parri <parri.andrea@gmail.com>
      Cc: Andrew Hunter <ahh@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Avi Kivity <avi@scylladb.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Dave Watson <davejwatson@fb.com>
      Cc: David Sehr <sehr@google.com>
      Cc: Greg Hackmann <ghackmann@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Maged Michael <maged.michael@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-api@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Link: http://lkml.kernel.org/r/20180129202020.8515-10-mathieu.desnoyers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      10bcc80e
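      A sketch of the idea behind sync_core_before_usermode() on x86 (not the
      literal kernel implementation): a return through IRET is already core
      serializing, so only the SYSRET-style exit paths need an explicit
      serializing instruction.

      static inline void sync_core_before_usermode(void)
      {
              if (in_irq() || in_nmi())
                      return;         /* will exit via IRET, already serializing */
              sync_core();            /* e.g. CPUID, forces core serialization */
      }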
  21. 31 January 2018 (1 commit)
    • x86/hyperv: Reenlightenment notifications support · 93286261
      Committed by Vitaly Kuznetsov
      Hyper-V supports Live Migration notification. This is supposed to be used
      in conjunction with TSC emulation: when a VM is migrated to a host with
      different TSC frequency for some short period the host emulates the
      accesses to TSC and sends an interrupt to notify about the event. When the
      guest is done updating everything it can disable TSC emulation and
      everything will start working fast again.
      
      These notifications weren't required until now as Hyper-V guests are not
      supposed to use TSC as a clocksource: in Linux the TSC is even marked as
      unstable on boot. Guests normally use 'tsc page' clocksource and host
      updates its values on migrations automatically.
      
      Things change with nested virtualization: even when the PV
      clocksources (kvm-clock or tsc page) are passed through to the nested
      guests, the TSC frequency and frequency changes need to be known.
      
      Hyper-V Top Level Functional Specification (as of v5.0b) wrongly specifies
      EAX:BIT(12) of CPUID:0x40000009 as the feature identification bit. The
      right one to check is EAX:BIT(13) of CPUID:0x40000003. I was assured that
      the fix is on the way.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: devel@linuxdriverproject.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Cathy Avery <cavery@redhat.com>
      Cc: Mohammed Gamal <mmorsy@redhat.com>
      Link: https://lkml.kernel.org/r/20180124132337.30138-4-vkuznets@redhat.com
      93286261
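      The detection described in the last paragraph, sketched as a standalone
      check (the macro and function names are made up for this sketch; the
      leaf and bit numbers are the ones given above):

      #include <cpuid.h>
      #include <stdbool.h>

      #define HYPERV_CPUID_FEATURES       0x40000003u
      #define HV_ACCESS_REENLIGHTENMENT   (1u << 13)      /* EAX bit 13 */

      static bool hv_reenlightenment_available(void)
      {
              unsigned int eax, ebx, ecx, edx;

              /* EAX of leaf 0x40000003, not BIT(12) of leaf 0x40000009. */
              __cpuid(HYPERV_CPUID_FEATURES, eax, ebx, ecx, edx);
              return eax & HV_ACCESS_REENLIGHTENMENT;
      }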
  22. 30 January 2018 (2 commits)
  23. 28 January 2018 (1 commit)
  24. 19 January 2018 (1 commit)
  25. 15 January 2018 (1 commit)
    • x86/retpoline: Fill RSB on context switch for affected CPUs · c995efd5
      Committed by David Woodhouse
      On context switch from a shallow call stack to a deeper one, as the CPU
      does 'ret' up the deeper side it may encounter RSB entries (predictions for
      where the 'ret' goes to) which were populated in userspace.
      
      This is problematic if neither SMEP nor KPTI (the latter of which marks
      userspace pages as NX for the kernel) are active, as malicious code in
      userspace may then be executed speculatively.
      
      Overwrite the CPU's return prediction stack with calls which are predicted
      to return to an infinite loop, to "capture" speculation if this
      happens. This is required both for retpoline, and also in conjunction with
      IBRS for !SMEP && !KPTI.
      
      On Skylake+ the problem is slightly different, and an *underflow* of the
      RSB may cause errant branch predictions to occur. So there it's not so much
      overwrite, as *filling* the RSB to attempt to prevent it getting
      empty. This is only a partial solution for Skylake+ since there are many
      other conditions which may result in the RSB becoming empty. The full
      solution on Skylake+ is to use IBRS, which will prevent the problem even
      when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
      required on context switch.
      
      [ tglx: Added missing vendor check and slightly massaged comments and
        changelog ]
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515779365-9032-1-git-send-email-dwmw@amazon.co.uk
      c995efd5
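      The stuffing itself, sketched as a two-entry version of the loop (the
      real macro fills all 32 RSB entries; this only shows the shape):

              /*
               * Each 'call' pushes an RSB entry whose predicted return target
               * is the small capture loop that follows, overwriting whatever
               * userspace left behind.
               */
              call    1f
      11:     pause                           /* speculation trap: a mispredicted */
              lfence                          /* 'ret' landing here just spins    */
              jmp     11b
      1:      call    2f
      22:     pause
              lfence
              jmp     22b
      2:      addq    $(2*8), %rsp            /* drop the two return addresses */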