1. 28 October 2019 (1 commit)
    • x86/speculation/taa: Add mitigation for TSX Async Abort · 1b42f017
      Author: Pawan Gupta
      TSX Async Abort (TAA) is a side channel vulnerability to the internal
      buffers in some Intel processors, similar to Microarchitectural Data
      Sampling (MDS). In this case, certain loads may speculatively pass
      invalid data to dependent operations when an asynchronous abort
      condition is pending in a TSX transaction.
      
      This includes loads with no fault or assist condition. Such loads may
      speculatively expose stale data from the uarch data structures as in
      MDS. Exposure is possible both same-thread and cross-thread. This
      issue affects all current processors that support TSX but do not have
      ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.
      
      On CPUs where the IA32_ARCH_CAPABILITIES MSR has MDS_NO=0 and
      CPUID.MD_CLEAR=1, the MDS mitigation, which clears the CPU buffers
      using VERW or L1D_FLUSH, already covers TAA, so no additional
      mitigation is needed. On affected CPUs with MDS_NO=1 this issue can be
      mitigated by disabling the Transactional Synchronization Extensions
      (TSX) feature.
      
      A new MSR, IA32_TSX_CTRL, available on future processors and on
      current processors after a microcode update, can be used to control
      the TSX feature. There are two bits in that MSR:
      
      * TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
      Transactional Memory (RTM).
      
      * TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
      TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
      disabled with updated microcode but still enumerated as present by
      CPUID(EAX=7).EBX{bit4}.
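
      A minimal sketch of how a kernel might program these bits (kernel-style
      C using the rdmsrl()/wrmsrl() helpers; the constants follow the MSR
      layout described above):

          u64 tsx;

          rdmsrl(MSR_IA32_TSX_CTRL, tsx);
          tsx |= TSX_CTRL_RTM_DISABLE;   /* turn RTM off                    */
          tsx |= TSX_CTRL_CPUID_CLEAR;   /* hide RTM from CPUID enumeration */
          wrmsrl(MSR_IA32_TSX_CTRL, tsx);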
      
      The second mitigation approach is similar to MDS: clear the affected
      CPU buffers on return to user space and when entering a guest. A
      relevant microcode update is required for the mitigation to work.
      More details on this approach can be found here:
      
        https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
      
      The TSX feature can be controlled by the "tsx" command line parameter.
      If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
      deployed. The effective mitigation state can be read from sysfs.
      
       [ bp:
         - massage + comments cleanup
         - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
         - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
         - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g
       ]
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
  2. 28 August 2019 (1 commit)
    • x86/vmware: Add a header file for hypercall definitions · b4dd4f6e
      Author: Thomas Hellstrom
      The new header is intended to be used by drivers using the backdoor.
      Following the KVM example, use alternatives self-patching to choose
      between the vmcall, vmmcall and I/O instructions.
      
      Also define two new CPU feature flags to indicate hypervisor support
      for the vmcall and vmmcall instructions. The new
      X86_FEATURE_VMW_VMMCALL flag is needed because using
      X86_FEATURE_VMMCALL might break QEMU/KVM setups using the vmmouse
      driver. They rely on X86_FEATURE_VMMCALL on AMD to get the
      kvm_hypercall() right. But they do not yet implement vmmcall for the
      VMware hypercall used by the vmmouse driver.
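
      A sketch of the alternatives-based self-patching (the exact macro
      shape is an assumption modeled on the description above; 0x5658 is
      the traditional VMware backdoor I/O port):

          #define VMWARE_HYPERCALL                                       \
                  ALTERNATIVE_2("movw $0x5658, %%dx; inl (%%dx)",        \
                                "vmcall",  X86_FEATURE_VMCALL,           \
                                "vmmcall", X86_FEATURE_VMW_VMMCALL)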
      
       [ bp: reflow hypercall %edx usage explanation comment. ]
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Doug Covelli <dcovelli@vmware.com>
      Cc: Aaron Lewis <aaronlewis@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-graphics-maintainer@vmware.com
      Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
      Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
      Cc: Robert Hoo <robert.hu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: <pv-drivers@vmware.com>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190828080353.12658-3-thomas_os@shipmail.org
  3. 29 July 2019 (1 commit)
  4. 22 July 2019 (2 commits)
    • x86: Remove X86_FEATURE_MFENCE_RDTSC · be261ffc
      Author: Josh Poimboeuf
      AMD and Intel both have serializing lfence (X86_FEATURE_LFENCE_RDTSC).
      They've both had it for a long time, and AMD has had it enabled in Linux
      since Spectre v1 was announced.
      
      Back then, there was a proposal to remove the serializing mfence feature
      bit (X86_FEATURE_MFENCE_RDTSC), since both AMD and Intel have
      serializing lfence.  At the time, it was (ahem) speculated that some
      hypervisors might not yet support its removal, so it remained for the
      time being.
      
      Now a year-and-a-half later, it should be safe to remove.
      
      I asked Andrew Cooper about whether it's still needed:
      
        So if you're virtualised, you've got no choice in the matter.  lfence
        is either dispatch-serialising or not on AMD, and you won't be able to
        change it.
      
        Furthermore, you can't accurately tell what state the bit is in, because
        the MSR might not be virtualised at all, or may not reflect the true
        state in hardware.  Worse still, attempting to set the bit may not be
        successful even if there isn't a fault for doing so.
      
        Xen sets the DE_CFG bit unconditionally, as does Linux by the looks of
        things (see MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT).  ISTR other hypervisor
        vendors saying the same, but I don't have any information to hand.
      
        If you are running under a hypervisor which has been updated, then
        lfence will almost certainly be dispatch-serialising in practice, and
        you'll almost certainly see the bit already set in DE_CFG.  If you're
        running under a hypervisor which hasn't been patched since Spectre,
        you've already lost in many more ways.
      
        I'd argue that X86_FEATURE_MFENCE_RDTSC is not worth keeping.
      
      So remove it.  This will reduce some code rot, and also make it easier
      to hook barrier_nospec() up to a cmdline disable for performance
      reasons, without needing an alternative_3() macro.
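
      With the mfence variant gone, a speculation barrier needs only one
      alternative; a sketch of the resulting definition (an lfence is
      emitted on CPUs with X86_FEATURE_LFENCE_RDTSC, nothing otherwise):

          #define barrier_nospec() \
                  alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)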
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/d990aa51e40063acb9888e8c1b688e41355a9588.1562255067.git.jpoimboe@redhat.com
    • x86/cpufeatures: Enable a new AVX512 CPU feature · 018ebca8
      Author: Gayatri Kammela
      Add a new AVX512 instruction group/feature for enumeration in
      /proc/cpuinfo: AVX512_VP2INTERSECT.
      
      CPUID.(EAX=7,ECX=0):EDX[bit 8]  AVX512_VP2INTERSECT
      
      Detailed information on the CPUID bits for this feature can be found
      in the Intel Architecture Instruction Set Extensions Programming
      Reference document (refer to Table 1-2). A copy of this document is
      available at https://bugzilla.kernel.org/show_bug.cgi?id=204215.
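
      A quick userspace check of the bit, as a sketch using GCC's <cpuid.h>
      helper:

          #include <cpuid.h>
          #include <stdio.h>

          int main(void)
          {
                  unsigned int eax, ebx, ecx, edx;

                  /* CPUID.(EAX=7,ECX=0):EDX[bit 8] == AVX512_VP2INTERSECT */
                  if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                          printf("AVX512_VP2INTERSECT: %s\n",
                                 (edx & (1u << 8)) ? "yes" : "no");
                  return 0;
          }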
      Signed-off-by: Gayatri Kammela <gayatri.kammela@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190717234632.32673-3-gayatri.kammela@intel.com
  5. 09 July 2019 (1 commit)
    • x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations · 18ec54fd
      Author: Josh Poimboeuf
      
      Spectre v1 isn't only about array bounds checks; it can affect any
      conditional check. The kernel entry code's interrupt, exception, and
      NMI handlers all have conditional swapgs checks. Those may be
      problematic in the context of Spectre v1, as kernel code can
      speculatively run with a user GS.
      
      For example:
      
      	if (coming from user space)
      		swapgs
      	mov %gs:<percpu_offset>, %reg
      	mov (%reg), %reg1
      
      When coming from user space, the CPU can speculatively skip the swapgs, and
      then do a speculative percpu load using the user GS value.  So the user can
      speculatively force a read of any kernel value.  If a gadget exists which
      uses the percpu value as an address in another load/store, then the
      contents of the kernel value may become visible via an L1 side channel
      attack.
      
      A similar attack exists when coming from kernel space.  The CPU can
      speculatively do the swapgs, causing the user GS to get used for the rest
      of the speculative window.
      
      The mitigation is similar to a traditional Spectre v1 mitigation, except:
      
        a) index masking isn't possible because the index (percpu offset)
           isn't user-controlled; and
      
        b) an lfence is needed in both the "from user" swapgs path and the
           "from kernel" non-swapgs path (because of the two attacks described
           above).
      
      The user entry swapgs paths already have SWITCH_TO_KERNEL_CR3, which has a
      CR3 write when PTI is enabled.  Since CR3 writes are serializing, the
      lfences can be skipped in those cases.
      
      On the other hand, the kernel entry swapgs paths don't depend on PTI.
      
      To avoid unnecessary lfences for the user entry case, create two separate
      features for alternative patching:
      
        X86_FEATURE_FENCE_SWAPGS_USER
        X86_FEATURE_FENCE_SWAPGS_KERNEL
      
      Use these features in entry code to patch in lfences where needed.
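
      A sketch of the resulting patch points in the entry assembly
      (ALTERNATIVE emits nothing by default and an lfence when the
      corresponding feature bit is set):

          .macro FENCE_SWAPGS_USER_ENTRY
                  ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_USER
          .endm

          .macro FENCE_SWAPGS_KERNEL_ENTRY
                  ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL
          .endm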
      
      The features aren't enabled yet, so there's no functional change.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
  6. 24 June 2019 (1 commit)
    • x86/cpufeatures: Enumerate user wait instructions · 6dbbf5ec
      Author: Fenghua Yu
      umonitor, umwait, and tpause are a set of user wait instructions.
      
      umonitor arms address monitoring hardware using an address. The
      address range is determined by using CPUID.0x5. A store to
      an address within the specified address range triggers the
      monitoring hardware to wake up the processor waiting in umwait.
      
      umwait instructs the processor to enter an implementation-dependent
      optimized state while monitoring a range of addresses. The optimized
      state may be either a light-weight power/performance optimized state
      (C0.1 state) or an improved power/performance optimized state
      (C0.2 state).
      
      tpause instructs the processor to enter an implementation-dependent
      optimized state (C0.1 or C0.2) and to wake up when the time-stamp
      counter reaches the specified timeout.
      
      The three instructions may be executed at any privilege level.
      
      The instructions provide a power-saving method for waiting in user
      space. Additionally, they can allow a sibling hyperthread to make
      faster progress while this thread is waiting. One example of an
      application usage of umwait is waiting for input data from another
      application, such as a user-level multi-threaded packet processing
      engine.
      
      Availability of the user wait instructions is indicated by the presence
      of the CPUID feature flag WAITPKG CPUID.0x07.0x0:ECX[5].
      
      Detailed information on the instructions and CPUID feature WAITPKG flag
      can be found in the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference and Intel 64 and IA-32
      Architectures Software Developer's Manual.
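
      A hedged userspace sketch of the umonitor/umwait pattern using the
      waitpkg intrinsics (compile with -mwaitpkg; wait_for_flag and
      deadline are illustrative names):

          #include <immintrin.h>

          /* Wait until *flag becomes non-zero or the TSC passes deadline. */
          static void wait_for_flag(volatile int *flag,
                                    unsigned long long deadline)
          {
                  while (!*flag) {
                          _umonitor((void *)flag);  /* arm the monitor   */
                          if (*flag)                /* re-check, then    */
                                  break;            /* avoid a lost wake */
                          _umwait(0, deadline);     /* 0 selects C0.2    */
                  }
          }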
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ashok Raj <ashok.raj@intel.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: "Borislav Petkov" <bp@alien8.de>
      Cc: "H Peter Anvin" <hpa@zytor.com>
      Cc: "Peter Zijlstra" <peterz@infradead.org>
      Cc: "Tony Luck" <tony.luck@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Link: https://lkml.kernel.org/r/1560994438-235698-2-git-send-email-fenghua.yu@intel.com
  7. 20 June 2019 (2 commits)
    • x86/cpufeatures: Enumerate the new AVX512 BFLOAT16 instructions · b302e4b1
      Author: Fenghua Yu
      AVX512 BFLOAT16 instructions support 16-bit BFLOAT16 floating-point
      format (BF16) for deep learning optimization.
      
      BF16 is a truncated form of the 32-bit single-precision floating-point
      format (FP32) and has several advantages over the 16-bit half-precision
      floating-point format (FP16). BF16 keeps FP32 accumulation after
      multiplication without loss of precision, offers more than enough
      range for deep learning training tasks, and doesn't need to handle
      hardware exceptions.
      
      AVX512 BFLOAT16 instructions are enumerated in CPUID.7.1:EAX[bit 5]
      AVX512_BF16.
      
      CPUID.7.1:EAX contains only feature bits. Reuse the currently empty
      word 12 as a pure features word to hold the feature bits including
      AVX512_BF16.
      
      Detailed information of the CPUID bit and AVX512 BFLOAT16 instructions
      can be found in the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference.
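
      For illustration only (this shows the format relationship, not how
      the AVX512_BF16 instructions are invoked): BF16 is the upper half of
      an FP32 value, so a truncating conversion is just a 16-bit shift.

          #include <stdint.h>
          #include <string.h>

          /* FP32 -> BF16 by truncation; hardware typically rounds to
           * nearest even instead. */
          static uint16_t fp32_to_bf16(float f)
          {
                  uint32_t bits;

                  memcpy(&bits, &f, sizeof(bits));
                  return (uint16_t)(bits >> 16);
          }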
      
       [ bp: Check CPUID(7) subleaf validity before accessing subleaf 1. ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: Robert Hoo <robert.hu@linux.intel.com>
      Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86 <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1560794416-217638-3-git-send-email-fenghua.yu@intel.com
    • x86/cpufeatures: Combine word 11 and 12 into a new scattered features word · acec0ce0
      Author: Fenghua Yu
      It's a waste for the four X86_FEATURE_CQM_* feature bits to occupy
      two whole feature words. To better utilize feature words, redefine
      word 11 to host scattered features and move the four
      X86_FEATURE_CQM_* features into the Linux-defined word 11. More
      scattered features can be added to word 11 in the future.
      
      Rename leaf 11 in cpuid_leafs to CPUID_LNX_4 to reflect that it's a
      Linux-defined leaf.
      
      Rename leaf 12 as CPUID_DUMMY; it will be replaced by a meaningful
      name in the next patch when CPUID.7.1:EAX occupies word 12.
      
      The maximum number of RMIDs and the cache occupancy scale are
      retrieved from CPUID.0xf.1 after the scattered CQM features are
      enumerated; carve that code out into a separate function.
      
      KVM doesn't support resctrl now, so it's safe to move the
      X86_FEATURE_CQM_* features to scattered features word 11 as far as
      KVM is concerned.
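
      For reference, a scattered feature is described by a table entry
      mapping a Linux feature bit to its CPUID location. A sketch in the
      style of the kernel's scattered-feature table (the two entries shown
      are assumptions based on the CQM enumeration in CPUID.0xf):

          struct cpuid_bit {
                  u16 feature;    /* X86_FEATURE_* number      */
                  u8  reg;        /* CPUID_EAX .. CPUID_EDX    */
                  u8  bit;        /* bit within that register  */
                  u32 level;      /* CPUID leaf                */
                  u32 sub_leaf;   /* CPUID subleaf             */
          };

          static const struct cpuid_bit cpuid_bits[] = {
                  { X86_FEATURE_CQM_LLC,       CPUID_EDX, 1, 0xf, 0 },
                  { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0xf, 1 },
          };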
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Aaron Lewis <aaronlewis@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Babu Moger <babu.moger@amd.com>
      Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
      Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: kvm ML <kvm@vger.kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Sherry Hurwitz <sherry.hurwitz@amd.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86 <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1560794416-217638-2-git-send-email-fenghua.yu@intel.com
  8. 14 June 2019 (1 commit)
    • x86/cpufeatures: Add FDP_EXCPTN_ONLY and ZERO_FCS_FDS · cbb99c0f
      Author: Aaron Lewis
      Add the CPUID enumeration for Intel's de-feature bits to accommodate
      passing these de-features through to KVM guests.
      
      These de-features are (from SDM vol 1, section 8.1.8):
       - X86_FEATURE_FDP_EXCPTN_ONLY: If CPUID.(EAX=07H,ECX=0H):EBX[bit 6] = 1, the
         data pointer (FDP) is updated only for the x87 non-control instructions that
         incur unmasked x87 exceptions.
       - X86_FEATURE_ZERO_FCS_FDS: If CPUID.(EAX=07H,ECX=0H):EBX[bit 13] = 1, the
         processor deprecates FCS and FDS; it saves each as 0000H.
      Signed-off-by: Aaron Lewis <aaronlewis@google.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: marcorr@google.com
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: pshier@google.com
      Cc: Robert Hoo <robert.hu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190605220252.103406-1-aaronlewis@google.com
  9. 07 March 2019 (2 commits)
    • x86/speculation/mds: Add BUG_MSBDS_ONLY · e261f209
      Author: Thomas Gleixner
      This bug bit is set on CPUs which are only affected by Microarchitectural
      Store Buffer Data Sampling (MSBDS) and not by any other MDS variant.
      
      This is important because the Store Buffers are partitioned between
      Hyper-Threads so cross thread forwarding is not possible. But if a thread
      enters or exits a sleep state the store buffer is repartitioned which can
      expose data from one thread to the other. This transition can be mitigated.
      
      That means that on CPUs which are only affected by MSBDS, SMT can stay
      enabled if the CPU is not affected by other SMT-sensitive
      vulnerabilities, e.g. L1TF. The XEON PHI variants fall into that
      category, as do the Silvermont/Airmont ATOMs; for the latter it's not
      really relevant as they do not support SMT, but mark them for
      completeness' sake.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
      Reviewed-by: Jon Masters <jcm@redhat.com>
      Tested-by: Jon Masters <jcm@redhat.com>
    • x86/speculation/mds: Add basic bug infrastructure for MDS · ed5194c2
      Author: Andi Kleen
      Microarchitectural Data Sampling (MDS) is a class of side channel
      attacks on internal buffers in Intel CPUs. The variants are:
      
       - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
       - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
       - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
      
      MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
      dependent load (store-to-load forwarding) as an optimization. The forward
      can also happen to a faulting or assisting load operation for a different
      memory address, which can be exploited under certain conditions. Store
      buffers are partitioned between Hyper-Threads so cross thread forwarding is
      not possible. But if a thread enters or exits a sleep state the store
      buffer is repartitioned which can expose data from one thread to the other.
      
      MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
      L1 miss situations and to hold data which is returned or sent in response
      to a memory or I/O operation. Fill buffers can forward data to a load
      operation and also write data to the cache. When the fill buffer is
      deallocated it can retain the stale data of the preceding operations which
      can then be forwarded to a faulting or assisting load operation, which can
      be exploited under certain conditions. Fill buffers are shared between
      Hyper-Threads so cross thread leakage is possible.
      
      MLPDS leaks Load Port Data. Load ports are used to perform load operations
      from memory or I/O. The received data is then forwarded to the register
      file or a subsequent operation. In some implementations the Load Port can
      contain stale data from a previous operation which can be forwarded to
      faulting or assisting loads under certain conditions, which again can be
      exploited eventually. Load ports are shared between Hyper-Threads so cross
      thread leakage is possible.
      
      All variants have the same mitigation for single CPU thread case (SMT off),
      so the kernel can treat them as one MDS issue.
      
      Add the basic infrastructure to detect if the current CPU is affected by
      MDS.
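
      The shape such detection takes, as a simplified sketch (the real code
      also consults a model whitelist; the names follow kernel conventions):

          u64 ia32_cap = 0;

          if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                  rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);

          /* Affected unless the hardware explicitly says otherwise. */
          if (!(ia32_cap & ARCH_CAP_MDS_NO))
                  setup_force_cpu_bug(X86_BUG_MDS);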
      
      [ tglx: Rewrote changelog ]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
      Reviewed-by: Jon Masters <jcm@redhat.com>
      Tested-by: Jon Masters <jcm@redhat.com>
  10. 06 March 2019 (1 commit)
  11. 21 December 2018 (1 commit)
  12. 18 December 2018 (1 commit)
    • x86/speculation: Add support for STIBP always-on preferred mode · 20c3a2c3
      Author: Thomas Lendacky
      Different AMD processors may have different implementations of STIBP.
      When STIBP is conditionally enabled, some implementations would benefit
      from having STIBP always on instead of toggling the STIBP bit through MSR
      writes. This preference is advertised through a CPUID feature bit.
      
      When conditional STIBP support is requested at boot and the CPU advertises
      STIBP always-on mode as preferred, switch to STIBP "on" support. To show
      that this transition has occurred, create a new spectre_v2_user_mitigation
      value and a new spectre_v2_user_strings message. The new mitigation value
      is used in spectre_v2_user_select_mitigation() to print the new mitigation
      message as well as to return a new string from stibp_state().
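
      A sketch of the mode switch (assuming the feature flag is named
      X86_FEATURE_AMD_STIBP_ALWAYS_ON, in line with the description above):

          /* Conditional STIBP was chosen, but the CPU prefers always-on:
           * upgrade to the strict-preferred mode. */
          if (mode != SPECTRE_V2_USER_STRICT &&
              boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
                  mode = SPECTRE_V2_USER_STRICT_PREFERRED;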
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20181213230352.6937.74943.stgit@tlendack-t1.amdoffice.net
  13. 08 November 2018 (1 commit)
  14. 25 October 2018 (2 commits)
    • x86/cpufeatures: Enumerate MOVDIR64B instruction · ace6485a
      Author: Fenghua Yu
      MOVDIR64B moves 64 bytes as a direct store with 64-byte write
      atomicity. Direct store is implemented by using write combining (WC)
      to write data directly into memory without caching it.
      
      In low latency offload (e.g. Non-Volatile Memory, etc), MOVDIR64B writes
      work descriptors (and data in some cases) to device-hosted work-queues
      atomically without cache pollution.
      
      Availability of the MOVDIR64B instruction is indicated by the
      presence of the CPUID feature flag MOVDIR64B (CPUID.0x07.0x0:ECX[bit 28]).
      
      Please check the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference for more details on the CPUID
      feature MOVDIR64B flag.
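
      A userspace sketch of descriptor submission using the MOVDIR64B
      intrinsic (compile with -mmovdir64b; "portal" is a hypothetical MMIO
      mapping of a device work-queue, and desc must point to 64 bytes):

          #include <immintrin.h>

          static void submit_descriptor(volatile void *portal,
                                        const void *desc)
          {
                  /* One atomic 64-byte direct store to the device. */
                  _movdir64b((void *)portal, desc);
          }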
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1540418237-125817-3-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/cpufeatures: Enumerate MOVDIRI instruction · 33823f4d
      Author: Fenghua Yu
      MOVDIRI moves a doubleword or quadword from a register to memory as a
      direct store, which is implemented by using write combining (WC) to
      write the data directly into memory without caching it.
      
      Programmable agents can handle streaming offload (e.g. high speed packet
      processing in network). Hardware implements a doorbell (tail pointer)
      register that is updated by software when adding new work-elements to
      the streaming offload work-queue.
      
      MOVDIRI can be used as the doorbell write, which is a 4-byte or
      8-byte uncachable write to MMIO. MOVDIRI has lower overhead than
      other ways to write the doorbell.
      
      Availability of the MOVDIRI instruction is indicated by the presence
      of the CPUID feature flag MOVDIRI (CPUID.0x07.0x0:ECX[bit 27]).
      
      Please check the latest Intel Architecture Instruction Set Extensions
      and Future Features Programming Reference for more details on the CPUID
      feature MOVDIRI flag.
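
      A sketch of such a doorbell write using the MOVDIRI intrinsic
      (compile with -mmovdiri; "doorbell" is a hypothetical MMIO-mapped
      tail-pointer register):

          #include <immintrin.h>
          #include <stdint.h>

          static void ring_doorbell(volatile uint64_t *doorbell,
                                    uint64_t tail)
          {
                  /* 8-byte direct store; bypasses the cache hierarchy. */
                  _directstoreu_u64((void *)doorbell, tail);
          }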
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1540418237-125817-2-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  15. 03 August 2018 (2 commits)
    • x86/speculation: Support Enhanced IBRS on future CPUs · 706d5168
      Author: Sai Praneeth
      Future Intel processors will support "Enhanced IBRS", which is an
      "always on" mode, i.e. the IBRS bit in the SPEC_CTRL MSR is set once
      and never cleared.
      
      From the specification [1]:
      
       "With enhanced IBRS, the predicted targets of indirect branches
        executed cannot be controlled by software that was executed in a less
        privileged predictor mode or on another logical processor. As a
        result, software operating on a processor with enhanced IBRS need not
        use WRMSR to set IA32_SPEC_CTRL.IBRS after every transition to a more
        privileged predictor mode. Software can isolate predictor modes
        effectively simply by setting the bit once. Software need not disable
        enhanced IBRS prior to entering a sleep state such as MWAIT or HLT."
      
      If Enhanced IBRS is supported by the processor then use it as the
      preferred spectre v2 mitigation mechanism instead of Retpoline. Intel's
      Retpoline white paper [2] states:
      
       "Retpoline is known to be an effective branch target injection (Spectre
        variant 2) mitigation on Intel processors belonging to family 6
        (enumerated by the CPUID instruction) that do not have support for
        enhanced IBRS. On processors that support enhanced IBRS, it should be
        used for mitigation instead of retpoline."
      
      The reason why Enhanced IBRS is the recommended mitigation on processors
      which support it is that these processors also support CET which
      provides a defense against ROP attacks. Retpoline is very similar to ROP
      techniques and might trigger false positives in the CET defense.
      
      If Enhanced IBRS is selected as the mitigation technique for Spectre
      v2, the IBRS bit in the SPEC_CTRL MSR is set once at boot time and
      never cleared. The kernel also has to make sure that the IBRS bit
      remains set after a VMEXIT because the guest might have cleared the
      bit. This is already covered by the existing x86_spec_ctrl_set_guest()
      and x86_spec_ctrl_restore_host() speculation control functions.
      
      Enhanced IBRS still requires IBPB for full mitigation.
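
      A sketch of the boot-time selection (names follow the kernel's
      speculation-control code; treat this as illustrative, not the exact
      patch):

          if (mode == SPECTRE_V2_IBRS_ENHANCED) {
                  /* Set IBRS once; keep it in the base value that is
                   * restored after VMEXIT. */
                  x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
                  wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
          }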
      
      [1] Speculative-Execution-Side-Channel-Mitigations.pdf
      [2] Retpoline-A-Branch-Target-Injection-Mitigation.pdf
      Both documents are available at:
      https://bugzilla.kernel.org/show_bug.cgi?id=199511
      Originally-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim C Chen <tim.c.chen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Link: https://lkml.kernel.org/r/1533148945-24095-1-git-send-email-sai.praneeth.prakhya@intel.com
    • x86/cpufeatures: Add EPT_AD feature bit · 301d328a
      Author: Peter Feiner
      Some Intel processors have an EPT feature whereby the accessed & dirty bits
      in EPT entries can be updated by HW. MSR IA32_VMX_EPT_VPID_CAP exposes the
      presence of this capability.
      
      There is no point in trying to use that new feature bit in the VMX code as
      VMX needs to read the MSR anyway to access other bits, but having the
      feature bit for EPT_AD in place helps virtualization management as it
      exposes "ept_ad" in /proc/cpuinfo/$proc/flags if the feature is present.
      
      [ tglx: Amended changelog ]
      Signed-off-by: Peter Feiner <pfeiner@google.com>
      Signed-off-by: Peter Shier <pshier@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180801180657.138051-1-pshier@google.com
  16. 21 June 2018 (2 commits)
  17. 06 June 2018 (2 commits)
  18. 17 May 2018 (5 commits)
  19. 10 May 2018 (1 commit)
    • x86/bugs: Rename _RDS to _SSBD · 9f65fb29
      Author: Konrad Rzeszutek Wilk
      Intel collateral will reference the SSB mitigation bit in IA32_SPEC_CTRL[2]
      as SSBD (Speculative Store Bypass Disable).
      
      Hence changing it.
      
      It is unclear yet what the MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4) name
      is going to be. Following the rename it would be SSBD_NO but that rolls out
      to Speculative Store Bypass Disable No.
      
      Also fixed the missing space in X86_FEATURE_AMD_SSBD.
      
      [ tglx: Fixup x86_amd_rds_enable() and rds_tif_to_amd_ls_cfg() as well ]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  20. 03 May 2018 (4 commits)
  21. 26 April 2018 (1 commit)
  22. 12 March 2018 (2 commits)
  23. 20 February 2018 (1 commit)
  24. 28 January 2018 (1 commit)
    • x86/cpufeatures: Clean up Spectre v2 related CPUID flags · 2961298e
      Author: David Woodhouse
      We want to expose the hardware features simply in /proc/cpuinfo as "ibrs",
      "ibpb" and "stibp". Since AMD has separate CPUID bits for those, use them
      as the user-visible bits.
      
      When the Intel SPEC_CTRL bit is set, which indicates both IBRS and
      IBPB capability, set those (AMD) bits accordingly. Likewise, if the
      Intel STIBP bit is set, set the AMD STIBP bit that's used for the
      generic hardware capability.
      
      Hide the rest from /proc/cpuinfo by putting "" in the comments. Including
      RETPOLINE and RETPOLINE_AMD which shouldn't be visible there. There are
      patches to make the sysfs vulnerabilities information non-readable by
      non-root, and the same should apply to all information about which
      mitigations are actually in use. Those *shouldn't* appear in /proc/cpuinfo.
      
      The feature bit for whether IBPB is actually used, which is needed for
      ALTERNATIVEs, is renamed to X86_FEATURE_USE_IBPB.
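
      A sketch of the resulting mapping (assuming it lives in a helper like
      init_speculation_control() during CPU feature setup):

          static void init_speculation_control(struct cpuinfo_x86 *c)
          {
                  /* Intel's combined SPEC_CTRL bit implies both of the
                   * user-visible (AMD-derived) IBRS and IBPB flags. */
                  if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
                          set_cpu_cap(c, X86_FEATURE_IBRS);
                          set_cpu_cap(c, X86_FEATURE_IBPB);
                  }
                  if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
                          set_cpu_cap(c, X86_FEATURE_STIBP);
          }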
      Originally-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: ak@linux.intel.com
      Cc: dave.hansen@intel.com
      Cc: karahmed@amazon.de
      Cc: arjan@linux.intel.com
      Cc: torvalds@linux-foundation.org
      Cc: peterz@infradead.org
      Cc: bp@alien8.de
      Cc: pbonzini@redhat.com
      Cc: tim.c.chen@linux.intel.com
      Cc: gregkh@linux-foundation.org
      Link: https://lkml.kernel.org/r/1517070274-12128-2-git-send-email-dwmw@amazon.co.uk
  25. 26 January 2018 (1 commit)