1. 03 Aug 2018 (1 commit)
    • x86/speculation: Support Enhanced IBRS on future CPUs · 706d5168
      Committed by Sai Praneeth
      Future Intel processors will support "Enhanced IBRS", which is an
      "always on" mode, i.e. the IBRS bit in the SPEC_CTRL MSR is set once
      and never disabled.
      
      From the specification [1]:
      
       "With enhanced IBRS, the predicted targets of indirect branches
        executed cannot be controlled by software that was executed in a less
        privileged predictor mode or on another logical processor. As a
        result, software operating on a processor with enhanced IBRS need not
        use WRMSR to set IA32_SPEC_CTRL.IBRS after every transition to a more
        privileged predictor mode. Software can isolate predictor modes
        effectively simply by setting the bit once. Software need not disable
        enhanced IBRS prior to entering a sleep state such as MWAIT or HLT."
      
      If Enhanced IBRS is supported by the processor then use it as the
      preferred spectre v2 mitigation mechanism instead of Retpoline. Intel's
      Retpoline white paper [2] states:
      
       "Retpoline is known to be an effective branch target injection (Spectre
        variant 2) mitigation on Intel processors belonging to family 6
        (enumerated by the CPUID instruction) that do not have support for
        enhanced IBRS. On processors that support enhanced IBRS, it should be
        used for mitigation instead of retpoline."
      
      The reason why Enhanced IBRS is the recommended mitigation on processors
      which support it is that these processors also support CET which
      provides a defense against ROP attacks. Retpoline is very similar to ROP
      techniques and might trigger false positives in the CET defense.
      
      If Enhanced IBRS is selected as the mitigation technique for spectre v2,
      the IBRS bit in SPEC_CTRL MSR is set once at boot time and never
      cleared. The kernel also has to make sure that the IBRS bit remains set after
      VMEXIT because the guest might have cleared the bit. This is already
      covered by the existing x86_spec_ctrl_set_guest() and
      x86_spec_ctrl_restore_host() speculation control functions.
      
      Enhanced IBRS still requires IBPB for full mitigation.
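      
      As a minimal sketch of the boot-time selection (not the literal kernel
      code; bit positions follow Intel's published MSR layout, and
      rdmsrl()/wrmsrl() are the kernel's MSR accessors):
      
        /* Hedged sketch: enable Enhanced IBRS once at boot if enumerated. */
        #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a
        #define ARCH_CAP_IBRS_ALL          (1 << 1) /* Enhanced IBRS */
        #define MSR_IA32_SPEC_CTRL         0x00000048
        #define SPEC_CTRL_IBRS             (1 << 0)
      
        static void __init maybe_select_enhanced_ibrs(void)
        {
                u64 caps;
      
                if (!boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                        return;
      
                rdmsrl(MSR_IA32_ARCH_CAPABILITIES, caps);
                if (!(caps & ARCH_CAP_IBRS_ALL))
                        return;
      
                /* Set IBRS once; with Enhanced IBRS it is never cleared. */
                wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
        }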
      
      [1] Speculative-Execution-Side-Channel-Mitigations.pdf
      [2] Retpoline-A-Branch-Target-Injection-Mitigation.pdf
      Both documents are available at:
      https://bugzilla.kernel.org/show_bug.cgi?id=199511
      Originally-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim C Chen <tim.c.chen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Link: https://lkml.kernel.org/r/1533148945-24095-1-git-send-email-sai.praneeth.prakhya@intel.com
      706d5168
  2. 20 Jul 2018 (1 commit)
    • x86/entry/32: Enter the kernel via trampoline stack · 45d7b255
      Committed by Joerg Roedel
      Use the entry-stack as a trampoline to enter the kernel. The entry-stack is
      already in the cpu_entry_area and will be mapped to userspace when PTI is
      enabled.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Pavel Machek <pavel@ucw.cz>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Waiman Long <llong@redhat.com>
      Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
      Cc: joro@8bytes.org
      Link: https://lkml.kernel.org/r/1531906876-13451-8-git-send-email-joro@8bytes.org
      45d7b255
  3. 23 Jun 2018 (1 commit)
  4. 14 Jun 2018 (1 commit)
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Committed by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This is done by simply
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
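      
      For context on what the STRONG variant buys (an illustration, not part
      of the commit): by default -fstack-protector only instruments functions
      with char arrays of 8 bytes or more, while -fstack-protector-strong
      also covers smaller arrays, address-taken locals and more. A userspace
      toy example:
      
        /* canary.c - compile with: gcc -S -fstack-protector-strong canary.c
         * The 4-byte array is below the regular protector's default 8-byte
         * threshold, but -strong still emits a stack canary here. */
        #include <string.h>
      
        void copy_id(const char *src)
        {
                char id[4];
      
                strncpy(id, src, sizeof(id) - 1);
                id[sizeof(id) - 1] = '\0';
        }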
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      050e9baa
  5. 06 Jun 2018 (2 commits)
  6. 23 May 2018 (1 commit)
  7. 19 May 2018 (1 commit)
  8. 18 May 2018 (1 commit)
  9. 17 May 2018 (4 commits)
  10. 13 May 2018 (2 commits)
  11. 10 May 2018 (1 commit)
    • x86/bugs: Rename _RDS to _SSBD · 9f65fb29
      Committed by Konrad Rzeszutek Wilk
      Intel collateral will reference the SSB mitigation bit in
      IA32_SPEC_CTRL[2] as SSBD (Speculative Store Bypass Disable).
      
      Hence changing it.
      
      It is not yet clear what the MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4)
      name is going to be. Following the rename it would be SSBD_NO, but that
      expands to Speculative Store Bypass Disable No.
      
      Also fixed the missing space in X86_FEATURE_AMD_SSBD.
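      
      In header terms the rename is a pure relabeling of the same control bit
      (a sketch; bit 2 of IA32_SPEC_CTRL per the later public documentation):
      
        /* Before: */
        #define SPEC_CTRL_RDS  (1 << 2) /* Reduced Data Speculation */
        /* After:  */
        #define SPEC_CTRL_SSBD (1 << 2) /* Speculative Store Bypass Disable */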
      
      [ tglx: Fixup x86_amd_rds_enable() and rds_tif_to_amd_ls_cfg() as well ]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      9f65fb29
  12. 06 May 2018 (1 commit)
  13. 03 May 2018 (4 commits)
  14. 02 May 2018 (1 commit)
    • x86/cpu: Restore CPUID_8000_0008_EBX reload · c65732e4
      Committed by Thomas Gleixner
      The recent commit which addresses the x86_phys_bits corruption with
      encrypted memory on CPUID reload after a microcode update lost the
      reload of CPUID_8000_0008_EBX as well.
      
      As a consequence, IBRS and IBRS_FW are no longer detected.
      
      Restore the behaviour by bringing the reload of CPUID_8000_0008_EBX
      back. This restore has a twist due to the convoluted way the cpuid analysis
      works:
      
      CPUID_8000_0008_EBX is used by AMD to enumerate IBPB, IBRS, STIBP. On Intel
      EBX is not used. But the speculation control code sets the AMD bits when
      running on Intel depending on the Intel specific speculation control
      bits. This was done to use the same bits for alternatives.
      
      The change which moved the 8000_0008 evaluation out of get_cpu_cap() broke
      this nasty scheme due to ordering. As a result, on Intel the store to
      CPUID_8000_0008_EBX clears the IBPB, IBRS, STIBP bits which had been set
      before by software.
      
      So the actual CPUID_8000_0008_EBX reload needs to go back to the place
      where it was, and the phys/virt address space calculation cannot touch it.
      
      In hindsight this should have used completely synthetic bits for IBPB,
      IBRS, STIBP instead of reusing the AMD bits, but that's for 4.18.
      
      /me needs to find time to cleanup that steaming pile of ...
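      
      For reference, the AMD enumeration in question can also be inspected
      from userspace (a standalone sketch; bit positions 12/14/15 per AMD's
      documentation):
      
        /* cpuid_ebx.c - print the AMD speculation-control bits. */
        #include <cpuid.h>
        #include <stdio.h>
      
        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;
      
                if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
                        return 1;
      
                printf("IBPB:  %u\n", (ebx >> 12) & 1);
                printf("IBRS:  %u\n", (ebx >> 14) & 1);
                printf("STIBP: %u\n", (ebx >> 15) & 1);
                return 0;
        }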
      
      Fixes: d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
      Reported-by: Jörg Otte <jrg.otte@gmail.com>
      Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Jörg Otte <jrg.otte@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: kirill.shutemov@linux.intel.com
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1805021043510.1668@nanos.tec.linutronix.de
      c65732e4
  15. 10 Apr 2018 (1 commit)
  16. 17 Mar 2018 (1 commit)
  17. 17 Feb 2018 (2 commits)
  18. 15 Feb 2018 (2 commits)
  19. 03 Feb 2018 (1 commit)
    • x86/pti: Mark constant arrays as __initconst · 4bf5d56d
      Committed by Arnd Bergmann
      I'm seeing build failures from the two newly introduced arrays that
      are marked 'const' and '__initdata', which are mutually exclusive:
      
      arch/x86/kernel/cpu/common.c:882:43: error: 'cpu_no_speculation' causes a section type conflict with 'e820_table_firmware_init'
      arch/x86/kernel/cpu/common.c:895:43: error: 'cpu_no_meltdown' causes a section type conflict with 'e820_table_firmware_init'
      
      The correct annotation is __initconst.
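      
      The pattern, schematically (a generic example rather than the arrays
      from the commit):
      
        #include <linux/init.h>
      
        /*
         * WRONG: 'const' demands a read-only section while __initdata
         * selects the writable .init.data section, so the build fails
         * with a section type conflict:
         *
         *     static const int quirk_table[] __initdata = { 1, 2, 3 };
         *
         * RIGHT: __initconst places the object in .init.rodata:
         */
        static const int quirk_table[] __initconst = { 1, 2, 3 };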
      
      Fixes: fec9434a ("x86/pti: Do not enable PTI on CPUs which are not vulnerable to Meltdown")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180202213959.611210-1-arnd@arndb.de
      4bf5d56d
  20. 31 Jan 2018 (1 commit)
    • x86/cpuid: Fix up "virtual" IBRS/IBPB/STIBP feature bits on Intel · 7fcae111
      Committed by David Woodhouse
      Despite the fact that all the other code there seems to be doing it, just
      using set_cpu_cap() in early_intel_init() doesn't actually work.
      
      For CPUs with PKU support, setup_pku() calls get_cpu_cap() after
      c->c_init() has set those feature bits. That resets those bits back to what
      was queried from the hardware.
      
      Turning the bits off for bad microcode is easy to fix. That can just use
      setup_clear_cpu_cap() to force them off for all CPUs.
      
      I was less keen on forcing the feature bits *on* that way, just in case
      of inconsistencies. I appreciate that the kernel is going to get this
      utterly wrong if CPU features are not consistent, because it has already
      applied alternatives by the time secondary CPUs are brought up.
      
      But at least if setup_force_cpu_cap() isn't being used, we might have a
      chance of *detecting* the lack of the corresponding bit and either
      panicking or refusing to bring the offending CPU online.
      
      So ensure that the appropriate feature bits are set within get_cpu_cap()
      regardless of how many extra times it's called.
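      
      Schematically, the difference between the cap helpers involved (real
      kernel APIs, but the feature bit and call sites here are illustrative):
      
        /* Per-CPU bit: lost when get_cpu_cap() later re-reads CPUID. */
        set_cpu_cap(c, X86_FEATURE_IBPB);
      
        /* Forced off for all CPUs: survives any get_cpu_cap() call,
         * which is what the bad-microcode case wants. */
        setup_clear_cpu_cap(X86_FEATURE_IBPB);
      
        /* Forced on for all CPUs: also survives, but would paper over
         * inconsistent CPUs - hence the reluctance described above. */
        setup_force_cpu_cap(X86_FEATURE_IBPB);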
      
      Fixes: 2961298e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags")
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: karahmed@amazon.de
      Cc: peterz@infradead.org
      Cc: bp@alien8.de
      Link: https://lkml.kernel.org/r/1517322623-15261-1-git-send-email-dwmw@amazon.co.uk
      7fcae111
  21. 26 Jan 2018 (2 commits)
  22. 12 Jan 2018 (2 commits)
    • x86/spectre: Add boot time option to select Spectre v2 mitigation · da285121
      Committed by David Woodhouse
      Add a spectre_v2= option to select the mitigation used for the indirect
      branch speculation vulnerability.
      
      Currently, the only option available is retpoline, in its various forms.
      This will be expanded to cover the new IBRS/IBPB microcode features.
      
      The RETPOLINE_AMD feature relies on a serializing LFENCE for speculation
      control. For AMD hardware, only set RETPOLINE_AMD if LFENCE is a
      serializing instruction, which is indicated by the LFENCE_RDTSC feature.
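      
      The plumbing for such an option follows the kernel's usual
      early_param() pattern; roughly (names illustrative, not the exact
      parser that landed):
      
        enum spectre_v2_mitigation_cmd {
                SPECTRE_V2_CMD_AUTO,
                SPECTRE_V2_CMD_NONE,
                SPECTRE_V2_CMD_RETPOLINE,
        };
      
        static enum spectre_v2_mitigation_cmd spectre_v2_cmd = SPECTRE_V2_CMD_AUTO;
      
        static int __init spectre_v2_parse_cmdline(char *str)
        {
                if (!str)
                        return -EINVAL;
      
                if (!strcmp(str, "off"))
                        spectre_v2_cmd = SPECTRE_V2_CMD_NONE;
                else if (!strcmp(str, "retpoline"))
                        spectre_v2_cmd = SPECTRE_V2_CMD_RETPOLINE;
      
                return 0;
        }
        early_param("spectre_v2", spectre_v2_parse_cmdline);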
      
      [ tglx: Folded back the LFENCE/AMD fixes and reworked it so IBRS
        	integration becomes simple ]
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515707194-20531-5-git-send-email-dwmw@amazon.co.uk
      da285121
    • x86/retpoline: Add initial retpoline support · 76b04384
      Committed by David Woodhouse
      Enable the use of -mindirect-branch=thunk-extern in newer GCC, and provide
      the corresponding thunks. Provide assembler macros for invoking the thunks
      in the same way that GCC does, from native and inline assembler.
      
      This adds X86_FEATURE_RETPOLINE and sets it by default on all CPUs. In
      some circumstances, IBRS microcode features may be used instead, and the
      retpoline can be disabled.
      
      On AMD CPUs if lfence is serialising, the retpoline can be dramatically
      simplified to a simple "lfence; jmp *\reg". A future patch, after it has
      been verified that lfence really is serialising in all circumstances, can
      enable this by setting the X86_FEATURE_RETPOLINE_AMD feature bit in addition
      to X86_FEATURE_RETPOLINE.
      
      Do not align the retpoline in the altinstr section, because there is no
      guarantee that it stays aligned when it's copied over the oldinstr during
      alternative patching.
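      
      The construct itself, as a sketch (top-level GNU C assembly for x86-64;
      the real thunks live in assembler files and are emitted for every
      register):
      
        /* Called with the branch target in %rax, e.g.:
         *     mov  $target, %rax
         *     call __x86_indirect_thunk_rax_demo
         * The CPU speculates into the pause/lfence capture loop instead
         * of the BTB-predicted target; architecturally, the pushed return
         * address is overwritten with the real target and 'ret' jumps
         * there. */
        asm(".text\n"
            ".globl __x86_indirect_thunk_rax_demo\n"
            "__x86_indirect_thunk_rax_demo:\n"
            "    call   1f\n"
            "2:  pause\n"
            "    lfence\n"
            "    jmp    2b\n"
            "1:  mov    %rax, (%rsp)\n"
            "    ret\n");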
      
      [ Andi Kleen: Rename the macros, add CONFIG_RETPOLINE option, export thunks]
      [ tglx: Put actual function CALL/JMP in front of the macros, convert to
        	symbolic labels ]
      [ dwmw2: Convert back to numeric labels, merge objtool fixes ]
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515707194-20531-4-git-send-email-dwmw@amazon.co.uk
      76b04384
  23. 07 Jan 2018 (1 commit)
  24. 05 Jan 2018 (1 commit)
  25. 03 Jan 2018 (1 commit)
  26. 24 Dec 2017 (2 commits)
    • x86/mm/pti: Force entry through trampoline when PTI active · 8d4b0678
      Committed by Thomas Gleixner
      Force the entry through the trampoline only when PTI is active. Otherwise
      go through the normal entry code.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8d4b0678
    • x86/cpufeatures: Add X86_BUG_CPU_INSECURE · a89f040f
      Committed by Thomas Gleixner
      Many x86 CPUs leak information to user space due to missing isolation of
      user space and kernel space page tables. There are many well documented
      ways to exploit that.
      
      The upcoming software mitigation of isolating the user and kernel space
      page tables needs a misfeature flag so code can be made runtime
      conditional.
      
      Add a BUG bit which indicates that the CPU is affected and add a feature
      bit which indicates that the software mitigation is enabled.
      
      Assume for now that _ALL_ x86 CPUs are affected by this. Exceptions can be
      made later.
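      
      The mechanics are the standard cpufeatures ones (a sketch of the two
      sides; the helper called under the bug check is hypothetical):
      
        /* Early CPU identification: assume everyone is affected for now. */
        setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
      
        /* Elsewhere, code made runtime-conditional on the misfeature: */
        if (boot_cpu_has_bug(X86_BUG_CPU_INSECURE))
                enable_page_table_isolation();  /* hypothetical call site */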
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a89f040f
  27. 23 Dec 2017 (1 commit)