1. 14 June 2018, 1 commit
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Authored by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      050e9baa
  2. 10 April 2018, 1 commit
  3. 05 April 2018, 1 commit
  4. 24 March 2018, 1 commit
  5. 07 March 2018, 1 commit
    • Drivers: hv: vmbus: Implement Direct Mode for stimer0 · 248e742a
      Authored by Michael Kelley
      The 2016 version of Hyper-V offers the option to operate the guest VM's
      per-vCPU stimers in Direct Mode, which means the timer interrupts on its
      own vector rather than queueing a VMbus message. Direct Mode reduces
      timer processing overhead in both the hypervisor and the guest, and
      avoids having timer interrupts pollute the VMbus interrupt stream for
      the synthetic NIC and storage.  This patch enables Direct Mode by
      default on stimer0 when running on a version of Hyper-V that supports
      it.

      In preparation for upcoming support of Hyper-V on ARM64, the
      arch-independent portion of the code contains calls to routines that
      will be populated on ARM64 but are not needed and do nothing on x86.
      Signed-off-by: Michael Kelley <mikelley@microsoft.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      248e742a
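      For the stimer0 Direct Mode commit above (248e742a), here is a minimal C
      sketch of the idea. It is illustrative only: the function and variable
      names (hv_stimer0_direct_isr, hv_ce_device, hv_ack_stimer0_interrupt) are
      assumptions, not the driver's actual identifiers.

        #include <linux/percpu.h>
        #include <linux/clockchips.h>

        /* Per-CPU clockevent device backing the synthetic timer (illustrative). */
        static DEFINE_PER_CPU(struct clock_event_device, hv_ce_device);

        /* Hypothetical arch hook: a no-op on x86; other architectures would
         * ack/EOI the timer interrupt here. */
        static inline void hv_ack_stimer0_interrupt(void) { }

        /*
         * With Direct Mode, stimer0 expiration arrives on its own interrupt
         * vector instead of as a VMbus message, so the handler only has to
         * acknowledge the interrupt and run the same per-CPU clockevent
         * callback that the message-based path would have invoked.
         */
        static void hv_stimer0_direct_isr(void)
        {
                struct clock_event_device *ce = this_cpu_ptr(&hv_ce_device);

                hv_ack_stimer0_interrupt();
                if (ce->event_handler)
                        ce->event_handler(ce);
        }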
  6. 28 February 2018, 1 commit
  7. 21 February 2018, 7 commits
    • x86/entry/64: Open-code switch_to_thread_stack() · f3d415ea
      Authored by Dominik Brodowski
      Open-code the two instances which called switch_to_thread_stack(). This
      allows us to remove the wrapper around DO_SWITCH_TO_THREAD_STACK.
      
      While at it, update the UNWIND hint to reflect where the IRET frame is,
      and update the commentary to reflect what we are actually doing here.
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-7-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f3d415ea
    • x86/entry/64: Move ASM_CLAC to interrupt_entry() · b2855d8d
      Authored by Dominik Brodowski
      Moving ASM_CLAC to interrupt_entry means two instructions (addq / pushq
      and call interrupt_entry) are not covered by it. However, it offers a
      noticeable size reduction (-0.2k):
      
         text	   data	    bss	    dec	    hex	filename
        16882	      0	      0	  16882	   41f2	entry_64.o-orig
        16623	      0	      0	  16623	   40ef	entry_64.o
      Suggested-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-6-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b2855d8d
    • x86/entry/64: Remove 'interrupt' macro · 3aa99fc3
      Authored by Dominik Brodowski
      It is now trivial to call interrupt_entry() and then the actual worker.
      Therefore, remove the interrupt macro and open code it all.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-5-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3aa99fc3
    • x86/entry/64: Move the switch_to_thread_stack() call to interrupt_entry() · 90a6acc4
      Authored by Dominik Brodowski
      We can also move the CLD, SWAPGS, and the switch_to_thread_stack() call
      to the interrupt_entry() helper function. As we do not want call depths
      of two, convert switch_to_thread_stack() to a macro.
      
      However, switch_to_thread_stack() has another user in entry_64_compat.S,
      which currently expects it to be a function. To keep the code changes
      in this patch minimal, create a wrapper function.
      
      The switch to a macro means that there is some binary code duplication
      if CONFIG_IA32_EMULATION=y is enabled. Therefore, the size reduction
      differs depending on whether CONFIG_IA32_EMULATION is enabled or not:
      
      CONFIG_IA32_EMULATION=y (-0.13k):
         text	   data	    bss	    dec	    hex	filename
        17158	      0	      0	  17158	   4306	entry_64.o-orig
        17028	      0	      0	  17028	   4284	entry_64.o
      
      CONFIG_IA32_EMULATION=n (-0.27k):
         text	   data	    bss	    dec	    hex	filename
        17158	      0	      0	  17158	   4306	entry_64.o-orig
        16882	      0	      0	  16882	   41f2	entry_64.o
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-4-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90a6acc4
    • x86/entry/64: Move ENTER_IRQ_STACK from interrupt macro to interrupt_entry · 2ba64741
      Authored by Dominik Brodowski
      Moving the switch to IRQ stack from the interrupt macro to the helper
      function requires some trickery: All ENTER_IRQ_STACK really cares about
      is where the "original" stack -- meaning the GP registers etc. -- is
      stored. Therefore, we need to offset the stored RSP value by 8 whenever
      ENTER_IRQ_STACK is called from within a function. In such cases, and
      after switching to the IRQ stack, we need to push the "original" return
      address (i.e. the return address from the call to the interrupt entry
      function) to the IRQ stack.
      
      This trickery allows us to carve another 0.85k from the text size (it
      would be more except for the additional unwind hints):
      
         text	   data	    bss	    dec	    hex	filename
        18006	      0	      0	  18006	   4656	entry_64.o-orig
        17158	      0	      0	  17158	   4306	entry_64.o
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-3-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2ba64741
    • x86/entry/64: Move PUSH_AND_CLEAR_REGS from interrupt macro to helper function · 0e34d226
      Authored by Dominik Brodowski
      The PUSH_AND_CLEAR_REGS macro is able to insert the GP registers
      "above" the original return address. This allows us to move a sizeable
      part of the interrupt entry macro to an interrupt entry helper function:
      
         text	   data	    bss	    dec	    hex	filename
        21088	      0	      0	  21088	   5260	entry_64.o-orig
        18006	      0	      0	  18006	   4656	entry_64.o
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-2-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0e34d226
    • x86/mm: Optimize boot-time paging mode switching cost · 39b95522
      Authored by Kirill A. Shutemov
      By this point we have functioning boot-time switching between 4- and
      5-level paging modes, but the naive approach comes with a cost.

      The numbers below are for a kernel build, allmodconfig, averaged over 5 runs.
      
      CONFIG_X86_5LEVEL=n:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17308719.892691      task-clock:u (msec)       #   26.772 CPUs utilized            ( +-  0.11% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,993,164      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,614,978,867,455      cycles:u                  #    2.520 GHz                      ( +-  0.01% )
      39,371,534,575,126      stalled-cycles-frontend:u #   90.27% frontend cycles idle     ( +-  0.09% )
      28,363,350,152,428      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,784,066,413      branches:u                #  364.948 M/sec                    ( +-  0.00% )
         250,808,144,781      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           646.531974142 seconds time elapsed                                          ( +-  1.15% )
      
      CONFIG_X86_5LEVEL=y:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17411536.780625      task-clock:u (msec)       #   26.426 CPUs utilized            ( +-  0.10% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,868,663      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,865,909,056,301      cycles:u                  #    2.519 GHz                      ( +-  0.01% )
      39,740,130,365,581      stalled-cycles-frontend:u #   90.59% frontend cycles idle     ( +-  0.05% )
      28,363,358,997,959      instructions:u            #    0.65  insn per cycle
                                                        #    1.40  stalled cycles per insn  ( +-  0.00% )
       6,316,784,937,460      branches:u                #  362.793 M/sec                    ( +-  0.00% )
         251,531,919,485      branch-misses:u           #    3.98% of all branches          ( +-  0.00% )
      
           658.886307752 seconds time elapsed                                          ( +-  0.92% )
      
      The patch tries to fix the performance regression by using
      cpu_feature_enabled(X86_FEATURE_LA57) instead of pgtable_l5_enabled in
      all hot code paths. These checks get statically patched into the target
      code at boot for additional performance.
      
      CONFIG_X86_5LEVEL=y + the patch:
      
       Performance counter stats for 'sh -c make -j100 -B -k >/dev/null' (5 runs):
      
         17381990.268506      task-clock:u (msec)       #   26.907 CPUs utilized            ( +-  0.19% )
                       0      context-switches:u        #    0.000 K/sec
                       0      cpu-migrations:u          #    0.000 K/sec
             331,862,625      page-faults:u             #    0.019 M/sec                    ( +-  0.01% )
      43,697,726,320,051      cycles:u                  #    2.514 GHz                      ( +-  0.03% )
      39,480,408,690,401      stalled-cycles-frontend:u #   90.35% frontend cycles idle     ( +-  0.05% )
      28,363,394,221,388      instructions:u            #    0.65  insn per cycle
                                                        #    1.39  stalled cycles per insn  ( +-  0.00% )
       6,316,794,985,573      branches:u                #  363.410 M/sec                    ( +-  0.00% )
         251,013,232,547      branch-misses:u           #    3.97% of all branches          ( +-  0.01% )
      
           645.991174661 seconds time elapsed                                          ( +-  1.19% )
      
      Unfortunately, this approach doesn't help with text size:
      
        vmlinux.before .text size:	8190319
        vmlinux.after .text size:	8200623
      
      The .text section is increased by about 10k (8200623 - 8190319 = 10304
      bytes). Not sure if we can do anything about this.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20180216114948.68868-4-kirill.shutemov@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39b95522
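      For the paging-mode commit above (39b95522), a minimal sketch of the
      hot-path pattern it describes. The wrapper name is illustrative;
      cpu_feature_enabled() and X86_FEATURE_LA57 are the interfaces named in
      the message.

        #include <asm/cpufeature.h>

        /*
         * Hot-path check for 5-level paging: cpu_feature_enabled() compiles
         * down to an alternatives/static-branch construct that is patched
         * once at boot, so the common case costs no memory load, unlike
         * reading a pgtable_l5_enabled variable.
         */
        static __always_inline bool l5_paging_enabled(void)   /* illustrative name */
        {
                return cpu_feature_enabled(X86_FEATURE_LA57);
        }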
  8. 20 February 2018, 1 commit
  9. 17 February 2018, 1 commit
  10. 15 February 2018, 1 commit
    • x86/entry/64: Fix CR3 restore in paranoid_exit() · e4865757
      Authored by Ingo Molnar
      Josh Poimboeuf noticed the following bug:
      
       "The paranoid exit code only restores the saved CR3 when it switches back
        to the user GS.  However, even in the kernel GS case, it's possible that
        it needs to restore a user CR3, if for example, the paranoid exception
        occurred in the syscall exit path between SWITCH_TO_USER_CR3_STACK and
        SWAPGS."
      
      Josh also confirmed via targeted testing that it's possible to hit this bug.
      
      Fix the bug by also restoring CR3 in the paranoid_exit_no_swapgs branch.
      
      The reason we haven't seen this bug reported by users yet is probably because
      "paranoid" entry points are limited to the following cases:
      
       idtentry double_fault       do_double_fault  has_error_code=1  paranoid=2
       idtentry debug              do_debug         has_error_code=0  paranoid=1 shift_ist=DEBUG_STACK
       idtentry int3               do_int3          has_error_code=0  paranoid=1 shift_ist=DEBUG_STACK
       idtentry machine_check      do_mce           has_error_code=0  paranoid=1
      
      Amongst those entry points only machine_check is one that will interrupt an
      IRQS-off critical section asynchronously - and machine check events are rare.
      
      The other main asynchronous entries are NMI entries, which can be very
      high-frequency with perf profiling, but they are special: they don't use the
      'idtentry' macro, are open-coded, and restore user CR3 unconditionally, so
      they don't have this bug.
      Reported-and-tested-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20180214073910.boevmg65upbk3vqb@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e4865757
  11. 14 February 2018, 1 commit
  12. 13 February 2018, 7 commits
  13. 10 February 2018, 1 commit
    • x86/mm/pti: Fix PTI comment in entry_SYSCALL_64() · 14b1fcc6
      Authored by Nadav Amit
      The comment is confusing since the path is taken when
      CONFIG_PAGE_TABLE_ISOLATION is disabled (while the comment says it is not
      taken).
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: nadav.amit@gmail.com
      Link: http://lkml.kernel.org/r/20180209170638.15161-1-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      14b1fcc6
  14. 06 February 2018, 3 commits
    • x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface · 3ac6d8c7
      Authored by Dan Williams
      Clear the 'extra' registers on entering the 64-bit kernel for exceptions
      and interrupts. The common registers are not cleared since they are
      likely clobbered well before they can be exploited in a speculative
      execution attack.
      
      Originally-From: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/151787989146.7847.15749181712358213254.stgit@dwillia2-desk3.amr.corp.intel.com
      [ Made small improvements to the changelog and the code comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3ac6d8c7
    • x86/entry/64: Clear extra registers beyond syscall arguments, to reduce speculation attack surface · 8e1eb3fa
      Authored by Dan Williams
      At entry userspace may have (maliciously) populated the extra registers
      outside the syscall calling convention with arbitrary values that could
      be useful in a speculative execution (Spectre style) attack.
      
      Clear these registers to minimize the kernel's attack surface.
      
      Note, this only clears the extra registers and not the unused
      registers for syscalls less than 6 arguments, since those registers are
      likely to be clobbered well before their values could be put to use
      under speculation.
      
      Note, Linus found that the XOR instructions can be executed with
      minimized cost if interleaved with the PUSH instructions, and Ingo's
      analysis found that R10 and R11 should be included in the register
      clearing beyond the typical 'extra' syscall calling convention
      registers.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reported-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/151787988577.7847.16733592218894189003.stgit@dwillia2-desk3.amr.corp.intel.com
      [ Made small improvements to the changelog and the code comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8e1eb3fa
    • membarrier/x86: Provide core serializing command · 10bcc80e
      Authored by Mathieu Desnoyers
      There are two places where core serialization is needed by membarrier:
      
      1) When returning from the membarrier IPI,
      2) After scheduler updates curr to a thread with a different mm, before
         going back to user-space, since the curr->mm is used by membarrier to
         check whether it needs to send an IPI to that CPU.
      
      x86-32 uses IRET as return from interrupt, and both IRET and SYSEXIT to go
      back to user-space. The IRET instruction is core serializing, but not
      SYSEXIT.
      
      x86-64 uses IRET as return from interrupt, which takes care of the IPI.
      However, it can return to user-space through either SYSRETL (compat
      code), SYSRETQ, or IRET. Given that SYSRET{L,Q} is not core serializing,
      we rely instead on write_cr3() performed by switch_mm() to provide core
      serialization after changing the current mm, and deal with the special
      case of kthread -> uthread (temporarily keeping current mm into
      active_mm) by adding a sync_core() in that specific case.
      
      Use the new sync_core_before_usermode() to guarantee this.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrea Parri <parri.andrea@gmail.com>
      Cc: Andrew Hunter <ahh@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Avi Kivity <avi@scylladb.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Dave Watson <davejwatson@fb.com>
      Cc: David Sehr <sehr@google.com>
      Cc: Greg Hackmann <ghackmann@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Maged Michael <maged.michael@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-api@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Link: http://lkml.kernel.org/r/20180129202020.8515-10-mathieu.desnoyers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      10bcc80e
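      For the membarrier commit above (10bcc80e), a minimal sketch of the x86
      side, assuming the helper is named sync_core_before_usermode() as the
      message says; the body is illustrative, not the kernel's exact
      implementation.

        #include <asm/processor.h>      /* sync_core() */

        /*
         * IRET is core serializing, but the SYSRET{L,Q}/SYSEXIT return paths
         * are not. When membarrier requires core serialization before the
         * next user-space instruction (e.g. after curr switched to a
         * different mm), issue an explicit serializing instruction;
         * sync_core() provides that.
         */
        static inline void sync_core_before_usermode(void)
        {
                sync_core();
        }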
  15. 31 January 2018, 1 commit
    • x86/hyperv: Reenlightenment notifications support · 93286261
      Authored by Vitaly Kuznetsov
      Hyper-V supports Live Migration notification. This is supposed to be used
      in conjunction with TSC emulation: when a VM is migrated to a host with a
      different TSC frequency, the host emulates the guest's TSC accesses for
      some short period and sends an interrupt to notify it about the event.
      When the guest is done updating everything, it can disable TSC emulation
      and everything will start working fast again.
      
      These notifications weren't required until now as Hyper-V guests are not
      supposed to use TSC as a clocksource: in Linux the TSC is even marked as
      unstable on boot. Guests normally use 'tsc page' clocksource and host
      updates its values on migrations automatically.
      
      Things change with nested virtualization: even when the PV clocksources
      (kvm-clock or TSC page) are passed through to the nested guests, the TSC
      frequency and frequency changes need to be known.
      
      Hyper-V Top Level Functional Specification (as of v5.0b) wrongly specifies
      EAX:BIT(12) of CPUID:0x40000009 as the feature identification bit. The
      right one to check is EAX:BIT(13) of CPUID:0x40000003. I was assured that
      the fix is on the way.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: devel@linuxdriverproject.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Cathy Avery <cavery@redhat.com>
      Cc: Mohammed Gamal <mmorsy@redhat.com>
      Link: https://lkml.kernel.org/r/20180124132337.30138-4-vkuznets@redhat.com
      93286261
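      For the reenlightenment commit above (93286261), a small sketch of the
      feature check the message describes: EAX bit 13 of CPUID leaf 0x40000003.
      The macro names are illustrative, not the driver's actual defines.

        #include <linux/bits.h>
        #include <asm/processor.h>      /* cpuid_eax() */

        #define HV_CPUID_FEATURES               0x40000003
        #define HV_ACCESS_REENLIGHTENMENT       BIT(13)   /* per the commit text */

        static bool hv_reenlightenment_available(void)
        {
                return cpuid_eax(HV_CPUID_FEATURES) & HV_ACCESS_REENLIGHTENMENT;
        }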
  16. 30 January 2018, 2 commits
  17. 28 January 2018, 1 commit
  18. 19 January 2018, 1 commit
  19. 15 January 2018, 1 commit
    • x86/retpoline: Fill RSB on context switch for affected CPUs · c995efd5
      Authored by David Woodhouse
      On context switch from a shallow call stack to a deeper one, as the CPU
      does 'ret' up the deeper side it may encounter RSB entries (predictions for
      where the 'ret' goes to) which were populated in userspace.
      
      This is problematic if neither SMEP nor KPTI (the latter of which marks
      userspace pages as NX for the kernel) are active, as malicious code in
      userspace may then be executed speculatively.
      
      Overwrite the CPU's return prediction stack with calls which are predicted
      to return to an infinite loop, to "capture" speculation if this
      happens. This is required both for retpoline, and also in conjunction with
      IBRS for !SMEP && !KPTI.
      
      On Skylake+ the problem is slightly different, and an *underflow* of the
      RSB may cause errant branch predictions to occur. So there it's not so much
      overwrite, as *filling* the RSB to attempt to prevent it getting
      empty. This is only a partial solution for Skylake+ since there are many
      other conditions which may result in the RSB becoming empty. The full
      solution on Skylake+ is to use IBRS, which will prevent the problem even
      when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
      required on context switch.
      
      [ tglx: Added missing vendor check and slightly massaged comments and
        	changelog ]
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515779365-9032-1-git-send-email-dwmw@amazon.co.uk
      c995efd5
  20. 12 January 2018, 1 commit
    • x86/retpoline/entry: Convert entry assembler indirect jumps · 2641f08b
      Authored by David Woodhouse
      Convert indirect jumps in core 32/64bit entry assembler code to use
      non-speculative sequences when CONFIG_RETPOLINE is enabled.
      
      Don't use CALL_NOSPEC in entry_SYSCALL_64_fastpath because the return
      address after the 'call' instruction must be *precisely* at the
      .Lentry_SYSCALL_64_after_fastpath label for stub_ptregs_64 to work,
      and the use of alternatives will mess that up unless we play horrid
      games to prepend with NOPs and make the variants the same length. It's
      not worth it; in the case where we ALTERNATIVE out the retpoline, the
      first instruction at __x86.indirect_thunk.rax is going to be a bare
      jmp *%rax anyway.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515707194-20531-7-git-send-email-dwmw@amazon.co.uk
      2641f08b
  21. 24 December 2017, 3 commits
    • x86/mm: Optimize RESTORE_CR3 · 21e94459
      Authored by Peter Zijlstra
      Most NMI/paranoid exceptions will not in fact change pagetables and would
      thus not require TLB flushing; however, RESTORE_CR3 uses flushing CR3
      writes.
      
      Restores to kernel PCIDs can be NOFLUSH, because we explicitly flush the
      kernel mappings and now that we track which user PCIDs need flushing we can
      avoid those too when possible.
      
      This does mean RESTORE_CR3 needs an additional scratch_reg, luckily both
      sites have plenty available.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      21e94459
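      For the RESTORE_CR3 commit above (21e94459), a C rendering of the
      no-flush idea. The real code is an assembly macro; the helper name is
      illustrative, while bit 63 is the architectural "no flush" bit for CR3
      writes with PCID enabled.

        #include <linux/bits.h>

        #define X86_CR3_PCID_NOFLUSH    BIT_ULL(63)

        /*
         * When restoring a kernel CR3 whose PCID's TLB entries are known to
         * still be valid, set bit 63 so the CR3 write does not flush the TLB.
         * Leave the bit clear (a flushing write) when the saved PCID was
         * marked as needing a flush.
         */
        static inline unsigned long cr3_noflush(unsigned long cr3)
        {
                return cr3 | X86_CR3_PCID_NOFLUSH;
        }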
    • x86/mm: Use/Fix PCID to optimize user/kernel switches · 6fd166aa
      Authored by Peter Zijlstra
      We can use PCID to retain the TLBs across CR3 switches, including those now
      part of the user/kernel switch. This increases performance of kernel
      entry/exit at the cost of more expensive/complicated TLB flushing.
      
      Now that we have two address spaces, one for kernel and one for user space,
      we need two PCIDs per mm. We use the top PCID bit to indicate a user PCID
      (just like we use the PFN LSB for the PGD). Since we do TLB invalidation
      from kernel space, the existing code will only invalidate the kernel PCID,
      we augment that by marking the corresponding user PCID invalid, and upon
      switching back to userspace, use a flushing CR3 write for the switch.
      
      In order to access the user_pcid_flush_mask we use PER_CPU storage, which
      means the previously established SWAPGS vs CR3 ordering is now mandatory
      and required.
      
      Having to do this memory access does require additional registers: most
      sites have a functioning stack and we can spill one (RAX); sites without a
      functional stack need to otherwise provide the second scratch register.
      
      Note: PCID is generally available on Intel Sandybridge and later CPUs.
      Note: Up until this point TLB flushing was broken in this series.
      
      Based-on-code-from: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6fd166aa
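      For the PCID commit above (6fd166aa), a sketch of the ASID layout it
      describes, with the top PCID bit marking the user half of an mm. The
      constants mirror the idea in the message and are not necessarily the
      kernel's exact defines.

        /* The hardware PCID field is 12 bits wide; use its top bit for user ASIDs. */
        #define PTI_USER_PCID_BIT       11

        static inline unsigned short user_pcid(unsigned short kernel_asid)
        {
                /* The kernel and user halves of one mm differ only in this
                 * bit, just as their two PGDs differ only in the PFN LSB. */
                return kernel_asid | (1u << PTI_USER_PCID_BIT);
        }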
    • x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching · 8a09317b
      Authored by Dave Hansen
      PAGE_TABLE_ISOLATION needs to switch to a different CR3 value when it
      enters the kernel and switch back when it exits.  This essentially needs to
      be done before leaving assembly code.
      
      This is extra challenging because the switching context is tricky: the
      registers that can be clobbered can vary.  It is also hard to store things
      on the stack because there is an established ABI (ptregs) or the stack is
      entirely unsafe to use.
      
      Establish a set of macros that allow changing to the user and kernel CR3
      values.
      
      Interactions with SWAPGS:
      
        Previous versions of the PAGE_TABLE_ISOLATION code relied on having
        per-CPU scratch space to save/restore a register that can be used for the
        CR3 MOV.  The %GS register is used to index into our per-CPU space, so
        SWAPGS *had* to be done before the CR3 switch.  That scratch space is gone
        now, but the semantic that SWAPGS must be done before the CR3 MOV is
      retained.  This is good to keep because it is not that hard to do and it
      allows us to do things like add per-CPU debugging information.
      
      What this does in the NMI code is worth pointing out.  NMIs can interrupt
      *any* context and they can also be nested with NMIs interrupting other
      NMIs.  The comments below ".Lnmi_from_kernel" explain the format of the
      stack during this situation.  Changing the format of this stack is hard.
      Instead of storing the old CR3 value on the stack, this depends on the
      *regular* register save/restore mechanism and then uses %r14 to keep CR3
      during the NMI.  It is callee-saved and will not be clobbered by the C NMI
      handlers that get called.
      
      [ PeterZ: ESPFIX optimization ]
      
      Based-on-code-from: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: linux-mm@kvack.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8a09317b
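      For the CR3-switching commit above (8a09317b), a conceptual C rendering
      of what the assembly macros do in the simplest (non-PCID) case: the user
      PGD sits one page after the kernel PGD, so selecting it amounts to
      setting a single bit in the CR3 value. Names are illustrative.

        #include <asm/page_types.h>     /* PAGE_SHIFT */

        /* Bit 12 of CR3 selects the user half of the kernel/user PGD pair. */
        #define PTI_USER_PGTABLE_BIT    PAGE_SHIFT

        static inline unsigned long mk_user_cr3(unsigned long kernel_cr3)
        {
                return kernel_cr3 | (1UL << PTI_USER_PGTABLE_BIT);
        }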
  22. 23 December 2017, 1 commit
    • x86/entry: Rename SYSENTER_stack to CPU_ENTRY_AREA_entry_stack · 4fe2d8b1
      Authored by Dave Hansen
      If the kernel oopses while on the trampoline stack, it will print
      "<SYSENTER>" even if SYSENTER is not involved.  That is rather confusing.
      
      The "SYSENTER" stack is used for a lot more than SYSENTER now.  Give it a
      better string to display in stack dumps, and rename the kernel code to
      match.
      
      Also move the 32-bit code over to the new naming even though it still uses
      the entry stack only for SYSENTER.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4fe2d8b1
  23. 17 December 2017, 1 commit
    • x86/entry/64: Make cpu_entry_area.tss read-only · c482feef
      Authored by Andy Lutomirski
      The TSS is a fairly juicy target for exploits, and, now that the TSS
      is in the cpu_entry_area, it's no longer protected by kASLR.  Make it
      read-only on x86_64.
      
      On x86_32, it can't be RO because it's written by the CPU during task
      switches, and we use a task gate for double faults.  I'd also be
      nervous about errata if we tried to make it RO even on configurations
      without double fault handling.
      
      [ tglx: AMD confirmed that there is no problem on 64-bit with TSS RO.  So
        	it's probably safe to assume that it's a non issue, though Intel
        	might have been creative in that area. Still waiting for
        	confirmation. ]
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bpetkov@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.733700132@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c482feef