- 07 April 2022, 1 commit
-
-
In preparation for extending the cc_platform_has() API to support TDX guests, use the CPUID instruction to detect support for TDX guests in the early boot code (via tdx_early_init()). Since copy_bootdata() is the first user of the cc_platform_has() API, detect the TDX guest status before it.

Define a synthetic feature flag (X86_FEATURE_TDX_GUEST) and set this bit on a valid TDX guest platform.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20220405232939.73860-2-kirill.shutemov@linux.intel.com
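As an illustration (not the commit's code), a minimal userspace sketch of the detection: the TDX guest ABI reserves CPUID leaf 0x21 with the vendor signature "IntelTDX    " spread across EBX/EDX/ECX — both the leaf number and string are assumptions taken from the TDX spec, not from this patch.

```c
#include <cpuid.h>
#include <stdbool.h>
#include <string.h>

/* Sketch only: leaf 0x21 and the "IntelTDX    " identification string
 * come from the TDX guest ABI. Note the signature dword order is
 * EBX, EDX, ECX, hence the swapped pointer arguments below. */
static bool is_tdx_guest(void)
{
	unsigned int eax, sig[3];

	if (!__get_cpuid_count(0x21, 0, &eax, &sig[0], &sig[2], &sig[1]))
		return false;

	return memcmp("IntelTDX    ", sig, sizeof(sig)) == 0;
}
```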
-
- 15 March 2022, 1 commit
-
-
By Peter Zijlstra
Add the bits required to make the hardware go. Of note is that, provided the syscall entry points are covered with ENDBR, #CP doesn't need to be an IST because we'll never hit the syscall gap.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154318.582331711@infradead.org
-
- 21 February 2022, 1 commit
-
-
By Peter Zijlstra (Intel)
The RETPOLINE_AMD name is unfortunate since it isn't necessarily AMD-only; in fact, Hygon also uses it. Furthermore, it will likely be sufficient for some Intel processors. Therefore rename the thing to RETPOLINE_LFENCE to better describe what it is.

Add the spectre_v2=retpoline,lfence option as an alias to spectre_v2=retpoline,amd to preserve existing setups. However, the output of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed.

[ bp: Fix typos, massage. ]

Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
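For context, a rough sketch of the two thunk flavors (simplified from the well-known retpoline sequences, not copied from arch/x86/lib/retpoline.S): the generic retpoline captures the speculated indirect branch in a pause/lfence loop, while the LFENCE variant just serializes dispatch before the indirect jump on CPUs where that is architecturally sufficient.

```c
/* Conceptual sketch of the %rax thunks for the two mitigation modes. */
asm(
	"retpoline_thunk_rax:\n"	/* spectre_v2=retpoline */
	"	call	1f\n"
	"2:	pause\n"
	"	lfence\n"
	"	jmp	2b\n"		/* speculation trap */
	"1:	mov	%rax, (%rsp)\n"	/* rewrite the return target */
	"	ret\n"

	"lfence_thunk_rax:\n"		/* spectre_v2=retpoline,lfence */
	"	lfence\n"		/* serialize before dispatch */
	"	jmp	*%rax\n"
);
```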
-
- 08 February 2022, 1 commit
-
-
By Jim Mattson
These macros are for bits in CPUID.(EAX=7,ECX=0):EDX, not for bits in CPUID.(EAX=7,ECX=1):EAX. Put them with their brethren.

[ bp: Sort word 18 bits properly, as caught by Like Xu <like.xu.linux@gmail.com> ]

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20220203194308.2469117-1-jmattson@google.com
-
- 04 February 2022, 1 commit
-
-
By Ricardo Neri
Add the CPUID feature bit and the model-specific registers needed to identify and configure the Intel Hardware Feedback Interface.

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 January 2022, 1 commit
-
-
By Jing Liu
Extend CPUID emulation to support XFD, AMX_TILE, AMX_INT8 and AMX_BF16. Adding those bits into kvm_cpu_caps finally activates all the previous logic in this series.

Hide XFD on 32-bit host kernels. Otherwise it leads to a weird situation where KVM tells userspace to migrate MSR_IA32_XFD and then rejects attempts to read/write the MSR.

Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220105123532.12586-17-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 30 December 2021, 1 commit
-
-
By Huang Rui
Add the Collaborative Processor Performance Control (CPPC) feature flag for AMD processors. This feature flag will be used by the upcoming AMD P-State driver.

The AMD P-State driver has two approaches to implement the frequency control behavior, depending on the CPU hardware implementation: one is "Full MSR Support" and the other is "Shared Memory Support". The feature flag indicates that the current processor has "Full MSR Support".

Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 26 October 2021, 2 commits
-
-
By Chang S. Bae
The XSTATE initialization uses check_xstate_against_struct() to sanity-check the size of XSTATE-enabled features. AMX is an XSAVE-enabled feature, and its size is not hard-coded but discoverable at run-time via CPUID.

The AMX state is composed of state components 17 and 18, which are both user state components. The first component is the XTILECFG state of a 64-byte tile-related control register. State component 18, called XTILEDATA, contains the actual tile data, and its size varies across implementations. The architectural maximum, as defined in CPUID(0x1d, 1): EAX[15:0], is a byte less than 64KB. The first implementation supports 8KB.

Check the XTILEDATA state size dynamically. The feature introduces the new tile register, TMM. Define one register struct only and read the number of registers from CPUID. Cross-check the overall size with CPUID again.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20211021225527.10184-21-chang.seok.bae@intel.com
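For illustration, a userspace sketch of the dynamic sizing described above, using the palette-1 subleaf of CPUID leaf 0x1d; the field layout (total bytes, bytes per tile, number of tile registers) is an assumption taken from the SDM's tile-information leaf, not this patch:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID(0x1d, 1): tile palette 1 information. */
	if (!__get_cpuid_count(0x1d, 1, &eax, &ebx, &ecx, &edx))
		return 1;

	unsigned int total_tile_bytes = eax & 0xffff;	/* all tiles together */
	unsigned int bytes_per_tile   = eax >> 16;	/* one TMM register   */
	unsigned int max_names        = ebx >> 16;	/* number of TMM regs */

	/* Cross-check as the commit does: regs * per-reg size == total. */
	printf("XTILEDATA: %u tiles x %u bytes = %u bytes (%s)\n",
	       max_names, bytes_per_tile, total_tile_bytes,
	       max_names * bytes_per_tile == total_tile_bytes ? "ok" : "mismatch");
	return 0;
}
```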
-
By Chang S. Bae
Intel's eXtended Feature Disable (XFD) feature is an extension of the XSAVE architecture. XFD allows the kernel to enable a feature state in XCR0 and to receive a #NM trap when a task uses instructions accessing that state.

This is going to be used to postpone the allocation of a larger XSTATE buffer for a task to the point where it is actually using a related instruction, after the permission to use that facility has been granted.

XFD is not used by the kernel, but only applied to userspace. This is a matter of policy, as the kernel knows how an fpstate is reallocated and the XFD state.

The compacted XSAVE format is adjustable for dynamic features. Make XFD depend on XSAVES.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20211021225527.10184-13-chang.seok.bae@intel.com
-
- 15 June 2021, 1 commit
-
-
By Pawan Gupta
Intel client processors that support the IA32_TSX_FORCE_ABORT MSR related to perf counter interaction [1] received a microcode update that deprecates the Transactional Synchronization Extensions (TSX) feature. The bit FORCE_ABORT_RTM now defaults to 1; writes to this bit are ignored. A new bit TSX_CPUID_CLEAR clears the TSX-related CPUID bits.

The summary of changes to the IA32_TSX_FORCE_ABORT MSR is:

Bit 0: FORCE_ABORT_RTM (legacy bit, new default=1)
       Status bit that indicates if RTM transactions are always aborted.
       This bit is essentially !SDV_ENABLE_RTM (Bit 2). Writes to this
       bit are ignored.

Bit 1: TSX_CPUID_CLEAR (new bit, default=0)
       When set, CPUID.HLE = 0 and CPUID.RTM = 0.

Bit 2: SDV_ENABLE_RTM (new bit, default=0)
       When clear, XBEGIN will always abort with EAX code 0. When set,
       XBEGIN will not be forced to abort (but will always abort in SGX
       enclaves). This bit is intended to be used on developer systems.
       If this bit is set, transactional atomicity correctness is not
       certain.

SDV = Software Development Vehicle, i.e. developer systems.

Performance monitoring counter 3 is usable in all cases, regardless of the value of the above bits.

Add support for a new CPUID bit - CPUID.RTM_ALWAYS_ABORT (CPUID 7.EDX[11]) - to indicate the status of the always-abort behavior.

[1] [ bp: Look for document ID 604224, "Performance Monitoring Impact of Intel Transactional Synchronization Extension Memory". Since there's no way for us to have stable links to documents... ]

[ bp: Massage and extend commit message. ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Link: https://lkml.kernel.org/r/9add61915b4a4eedad74fbd869107863a28b428e.1623704845.git-series.pawan.kumar.gupta@linux.intel.com
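Expressed as kernel-style defines, the bit layout summarized above would look roughly like this (a sketch; the MSR index 0x10f matches the kernel's MSR_TSX_FORCE_ABORT define, the bit names follow the summary):

```c
#define MSR_TSX_FORCE_ABORT		0x0000010f

#define MSR_TFA_RTM_FORCE_ABORT		(1UL << 0)	/* status only; writes ignored */
#define MSR_TFA_TSX_CPUID_CLEAR		(1UL << 1)	/* clear CPUID.HLE and CPUID.RTM */
#define MSR_TFA_SDV_ENABLE_RTM		(1UL << 2)	/* developer systems only */
```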
-
- 02 June 2021, 1 commit
-
-
By Andrew Cooper
AMD and Hygon CPUs have a CPUID bit for RAPL. Drop the fam17h suffix as it is stale already.

Make use of this instead of a model check to work more nicely in virtual environments where RAPL typically isn't available.

[ bp: drop the ../cpu/powerflags.c hunk which is superfluous as the "rapl" bit name appears already in flags. ]

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210514135920.16093-1-andrew.cooper3@citrix.com
-
- 20 April 2021, 1 commit
-
-
By Ricardo Neri
Add feature enumeration to identify a processor with Intel Hybrid Technology: one in which CPUs of more than one type are in the same package. On a hybrid processor, all CPUs support the same homogeneous (i.e., symmetric) instruction set. All CPUs enumerate the same features in CPUID. Thus, software (user space and kernel) can run on and migrate to any CPU in the system as well as utilize any of the enumerated features without any change or special provisions. The main difference among CPUs in a hybrid processor is power and performance properties.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Len Brown <len.brown@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1618237865-33448-2-git-send-email-kan.liang@linux.intel.com
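As a side note, on hybrid parts the core type of the executing CPU can be read from CPUID leaf 0x1a; the following userspace sketch assumes the SDM's encoding (EAX[31:24], with 0x20 = Atom and 0x40 = Core), which is not part of this commit:

```c
#include <cpuid.h>

/* Returns the hybrid core type of the CPU this runs on, or 0 if the
 * leaf is not enumerated. Illustration only. */
static unsigned int hybrid_core_type(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x1a, 0, &eax, &ebx, &ecx, &edx))
		return 0;

	return eax >> 24;	/* 0x20: Atom, 0x40: Core (per SDM) */
}
```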
-
- 29 March 2021, 1 commit
-
-
By Fenghua Yu
A bus lock is acquired through either a split locked access to writeback (WB) memory or any locked access to non-WB memory. This is typically >1000 cycles slower than an atomic operation within a cache line. It also disrupts performance on other cores.

Some CPUs have the ability to notify the kernel with a #DB trap after a user instruction acquires a bus lock and is executed. This allows the kernel to enforce user application throttling or mitigation. Both a breakpoint and a bus lock can trigger the #DB trap on the same instruction, and the ordering of handling them is the kernel #DB handler's choice.

The CPU feature flag to be shown in /proc/cpuinfo will be "bus_lock_detect".

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lore.kernel.org/r/20210322135325.682257-2-fenghua.yu@intel.com
-
- 26 March 2021, 1 commit
-
-
By Sean Christopherson
Add SGX1 and SGX2 feature flags, via CPUID.0x12.0x0.EAX, as scattered features, since adding a new leaf for only two bits would be wasteful. As part of virtualizing SGX, KVM will expose the SGX CPUID leaves to its guest, and to do so correctly needs to query hardware and kernel support for SGX1 and SGX2.

Suppress both SGX1 and SGX2 from /proc/cpuinfo. SGX1 basically means SGX, and for SGX2 there is no concrete use case for showing it in /proc/cpuinfo.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lkml.kernel.org/r/d787827dbfca6b3210ac3e432e3ac1202727e786.1616136308.git.kai.huang@intel.com
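For illustration, a userspace sketch of the enumeration named above (bits 0 and 1 of CPUID.(EAX=0x12, ECX=0):EAX; meaningful only when the base SGX CPUID bit is set):

```c
#include <cpuid.h>
#include <stdbool.h>

/* Sketch only: reads the SGX sub-feature bits from leaf 0x12. */
static bool sgx_caps(bool *sgx1, bool *sgx2)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x12, 0, &eax, &ebx, &ecx, &edx))
		return false;

	*sgx1 = eax & (1u << 0);
	*sgx2 = eax & (1u << 1);
	return true;
}
```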
-
- 15 March 2021, 2 commits
-
-
By Peter Zijlstra
This ensures that a NOP is a NOP and not a random other instruction that is also a NOP. It allows simplification of dynamic code patching that wants to verify existing code before writing new instructions (ftrace, jump_label, static_call, etc.).

Differentiating on NOPs is not a feature.

This pessimises 32bit (DONTCARE) and 32bit on 64bit CPUs (CARELESS). 32bit is not a performance target.

Everything x86_64 since AMD K10 (2007) and Intel IvyBridge (2012) is fine with using NOPL (as opposed to prefix NOP). And per FEATURE_NOPL being required for x86_64, all x86_64 CPUs can use NOPL. So stop caring about NOPs, simplify things and get on with life.

[ The problem seems to be that some uarchs can only decode NOPL on a single front-end port while others have severe decode penalties for excessive prefixes. All modern uarchs can handle both, except Atom, which has prefix penalties. ]

[ Also, much doubt you can actually measure any of this on normal workloads. ]

After this, FEATURE_NOPL is unused except for required-features for x86_64. FEATURE_K8 is only used for PTI.

[ bp: Kernel build measurements showed ~0.3s slowdown on Sandybridge which is hardly a slowdown. Get rid of X86_FEATURE_K7, while at it. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> # bpf
Acked-by: Linus Torvalds <torvalds@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20210312115749.065275711@infradead.org
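For reference, the single NOPL-style encoding per length that x86_64 ends up standardizing on looks like the following (the SDM-recommended multi-byte NOPs, written here in the style of arch/x86/include/asm/nops.h from memory, so treat as a sketch):

```c
/* One canonical instruction per length, 1..8 bytes. */
#define BYTES_NOP1	0x90						/* nop */
#define BYTES_NOP2	0x66, BYTES_NOP1				/* osp nop */
#define BYTES_NOP3	0x0f, 0x1f, 0x00				/* nopl (%rax) */
#define BYTES_NOP4	0x0f, 0x1f, 0x40, 0x00				/* nopl 0(%rax) */
#define BYTES_NOP5	0x0f, 0x1f, 0x44, 0x00, 0x00			/* nopl 0(%rax,%rax,1) */
#define BYTES_NOP6	0x66, BYTES_NOP5				/* nopw 0(%rax,%rax,1) */
#define BYTES_NOP7	0x0f, 0x1f, 0x80, 0x00, 0x00, 0x00, 0x00	/* nopl 0L(%rax) */
#define BYTES_NOP8	0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00	/* nopl 0L(%rax,%rax,1) */
```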
-
By Babu Moger
Newer AMD processors have a feature to virtualize the use of the SPEC_CTRL MSR. Presence of this feature is indicated via CPUID function 0x8000000A_EDX[20]: GuestSpecCtrl. When present, the SPEC_CTRL MSR is automatically virtualized.

Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Borislav Petkov <bp@suse.de>
Message-Id: <161188100272.28787.4097272856384825024.stgit@bmoger-ubuntu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 12 March 2021, 1 commit
-
-
By Juergen Gross
To be able to switch paravirt patching from special-cased custom code sequences to ALTERNATIVE handling, some X86_FEATURE_* flags are needed as new features. This enables having the standard indirect pv call as the default code and patching that with the non-Xen custom code sequence via ALTERNATIVE patching later.

Make sure paravirt patching is performed before alternatives patching.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210311142319.4723-9-jgross@suse.com
-
- 04 February 2021, 2 commits
-
-
By Wei Huang
New AMD CPUs have a change that checks the #VMEXIT intercept on special SVM instructions before checking their EAX against the reserved memory region. This change is indicated by CPUID_0x8000000A_EDX[28]. If it is 1, #VMEXIT is triggered before #GP. KVM doesn't need to intercept and emulate #GP faults as #GP is supposed to be triggered.

Co-developed-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210126081831.570253-4-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Kyung Min Park
Add the AVX version of the Vector Neural Network Instructions (VNNI). A processor supports AVX VNNI instructions if CPUID.0x07.0x1:EAX[4] is present. The following instructions are available when this feature is present:

1. VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
2. VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
3. VPDPWSSD: Multiply and Add Signed Word Integers
4. VPDPWSSDS: Multiply and Add Signed Integers with Saturation

The only in-kernel usage of this is KVM passthrough. The CPU feature flag is shown as "avx_vnni" in /proc/cpuinfo.

This instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual (SDM) in the future.

Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Message-Id: <20210105004909.42000-2-yang.zhong@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 29 January 2021, 1 commit
-
-
By Sean Christopherson
Collect the scattered SME/SEV related feature flags into a dedicated word. There are now five recognized features in CPUID.0x8000001F.EAX, with at least one more on the horizon (SEV-SNP). Using a dedicated word allows KVM to use its automagic CPUID adjustment logic when reporting the set of supported features to userspace.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Link: https://lkml.kernel.org/r/20210122204047.2860075-2-seanjc@google.com
-
- 15 December 2020, 1 commit
-
-
By Tom Lendacky
On systems that do not have hardware-enforced cache coherency between encrypted and unencrypted mappings of the same physical page, the hypervisor can use the VM page flush MSR (0xc001011e) to flush the cache contents of an SEV guest page. When a small number of pages are being flushed, this can be used in place of issuing a WBINVD across all CPUs.

CPUID 0x8000001f_eax[2] is used to determine if the VM page flush MSR is available. Add a CPUID feature to indicate it is supported and define the MSR.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <f1966379e31f9b208db5257509c4a089a87d33d0.1607620209.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
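A kernel-style sketch of how a hypervisor might use the MSR described above; the field split (guest physical address in the upper bits, ASID in bits 11:0) is an assumption following the APM's description, not this patch:

```c
#define MSR_AMD64_VM_PAGE_FLUSH	0xc001011e

/* Flush one guest page's cache contents; sketch only, assumes a
 * kernel context where wrmsrl() and PAGE_MASK are available. */
static void sev_flush_guest_page(u64 gpa, u32 asid)
{
	wrmsrl(MSR_AMD64_VM_PAGE_FLUSH, (gpa & PAGE_MASK) | asid);
}
```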
-
- 12 December 2020, 1 commit
-
-
By Kyung Min Park
Enumerate the AVX512 Half-precision floating point (FP16) CPUID feature flag. Compared with using FP32, using FP16 cuts the number of bits required for storage in half, reducing the exponent from 8 bits to 5 and the mantissa from 23 bits to 10. Using FP16 also enables developers to train and run inference on deep learning models quickly when full precision or magnitude (FP32) is not needed.

A processor supports AVX512 FP16 if CPUID.(EAX=7,ECX=0):EDX[bit 23] is present. AVX512 FP16 requires the AVX512BW feature to be implemented since the instructions for manipulating 32-bit masks are associated with AVX512BW.

The only in-kernel usage of this is KVM passthrough. The CPU feature flag is shown as "avx512_fp16" in /proc/cpuinfo.

Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Message-Id: <20201208033441.28207-2-kyung.min.park@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 17 November 2020, 2 commits
-
-
By Sean Christopherson
The SGX Launch Control hardware helps restrict which enclaves the hardware will run. Launch control is intended to restrict what software can run with enclave protections, which helps protect the overall system from bad enclaves.

For the kernel's purposes, there are effectively two modes in which the launch control hardware can operate: rigid and flexible. In its rigid mode, an entity other than the kernel has ultimate authority over which enclaves can be run (firmware, Intel, etc...). In its flexible mode, the kernel has ultimate authority over which enclaves can run.

Enable X86_FEATURE_SGX_LC to enumerate when the CPU supports SGX Launch Control in general.

Add MSR_IA32_SGXLEPUBKEYHASH{0, 1, 2, 3}, which when combined contain a SHA256 hash of a 3072-bit RSA public key. The hardware allows SGX enclaves signed with this public key to initialize and run [*]. Enclaves not signed with this key can not initialize and run.

Add FEAT_CTL_SGX_LC_ENABLED, which informs whether the SGXLEPUBKEYHASH MSRs can be written by the kernel. If the MSRs do not exist or are read-only, the launch control hardware is operating in rigid mode. Linux does not and will not support creating enclaves when hardware is configured in rigid mode because it takes away the authority for launch decisions from the kernel. Note, this does not preclude KVM from virtualizing/exposing SGX to a KVM guest when launch control hardware is operating in rigid mode.

[*] Intel SDM: 38.1.4 Intel SGX Launch Control Configuration

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jethro Beekman <jethro@fortanix.com>
Link: https://lkml.kernel.org/r/20201112220135.165028-5-jarkko@kernel.org
-
By Sean Christopherson
Populate the X86_FEATURE_SGX feature bit from CPUID and tie it to the Kconfig option with disabled-features.h.

IA32_FEATURE_CONTROL.SGX_ENABLE must be examined in addition to the CPUID bits to enable full SGX support. The BIOS must both set this bit and lock IA32_FEATURE_CONTROL for SGX to be supported (Intel SDM section 36.7.1). The setting or clearing of this bit has no impact on the CPUID bits above, which is why it needs to be detected separately.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jethro Beekman <jethro@fortanix.com>
Link: https://lkml.kernel.org/r/20201112220135.165028-4-jarkko@kernel.org
-
- 18 September 2020, 2 commits
-
-
By Krish Sadhukhan
In some hardware implementations, coherency between the encrypted and unencrypted mappings of the same physical page is enforced. In such a system, it is not required for software to flush the page from all CPU caches in the system prior to changing the value of the C-bit for a page. This hardware-enforced cache coherency is indicated by EAX[10] in CPUID leaf 0x8000001f.

[ bp: Use one of the free slots in word 3. ]

Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200917212038.5090-2-krish.sadhukhan@oracle.com
-
By Fenghua Yu
The work submission instruction comes in two flavors. ENQCMD can be called both in ring 3 and ring 0 and always uses the contents of a PASID MSR when shipping the command to the device. ENQCMDS allows a kernel driver to submit commands on behalf of a user process. The driver supplies the PASID value in ENQCMDS.

There isn't any usage of ENQCMD in the kernel as of now.

The CPU feature flag is shown as "enqcmd" in /proc/cpuinfo.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/1600187413-163670-5-git-send-email-fenghua.yu@intel.com
-
- 08 September 2020, 1 commit
-
-
By Tom Lendacky
Add CPU feature detection for Secure Encrypted Virtualization with Encrypted State. This feature enhances SEV by also encrypting the guest register state, making it inaccessible to the hypervisor.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200907131613.12703-6-joro@8bytes.org
-
- 30 August 2020, 1 commit
-
-
By Kyung Min Park
Intel TSX suspend load tracking instructions aim to give a way to choose which memory accesses do not need to be tracked in the TSX read set. Add the TSX suspend load tracking CPUID feature flag TSXLDTRK for enumeration.

A processor supports Intel TSX suspend load address tracking if CPUID.0x07.0x0:EDX[16] is present. Two instructions, XSUSLDTRK and XRESLDTRK, are available when this feature is present.

The CPU feature flag is shown as "tsxldtrk" in /proc/cpuinfo.

Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/1598316478-23337-2-git-send-email-cathy.zhang@intel.com
-
- 26 August 2020, 1 commit
-
-
By Fenghua Yu
Some systems support per-thread Memory Bandwidth Allocation (MBA), which applies a throttling delay value to each hardware thread instead of to a core. Per-thread MBA is enumerated by CPUID.

No feature flag is shown in /proc/cpuinfo. User applications need to check a resctrl throttling mode info file to know if the feature is supported.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
Link: https://lkml.kernel.org/r/1598296281-127595-2-git-send-email-fenghua.yu@intel.com
-
- 27 July 2020, 1 commit
-
-
By Ricardo Neri
The Intel architecture defines a set of Serializing Instructions (a detailed definition can be found in Vol. 3, Section 8.3 of the Intel "main" manual, the SDM). However, these instructions do more than what is required, have side effects and/or may be rather invasive. Furthermore, some of these instructions are only available in kernel mode or may cause VMExits. Thus, software using these instructions only to serialize execution (as defined in the manual) must handle the undesired side effects.

As indicated in the name, SERIALIZE is a new Intel architecture Serializing Instruction. Crucially, it does not have any of the mentioned side effects. Also, it does not cause VMExit and can be used in user mode.

This new instruction is currently documented in the latest "extensions" manual (ISE). It will appear in the "main" manual in the future.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20200727043132.15082-2-ricardo.neri-calderon@linux.intel.com
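A minimal sketch of using the instruction: SERIALIZE takes no operands, and its opcode (0f 01 e8) can be emitted with a .byte sequence on assemblers that predate the mnemonic:

```c
/* Serialize instruction fetch and execution at this point. */
static inline void serialize(void)
{
	asm volatile(".byte 0x0f, 0x01, 0xe8" ::: "memory");
}
```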
-
- 08 July 2020, 1 commit
-
-
By Kan Liang
CPUID.(EAX=07H, ECX=0):EDX[19] indicates whether an Intel CPU supports Architectural LBRs.

The "X86_FEATURE_..., word 18" section is already mirrored from CPUID "0x00000007:0 (EDX)". Add X86_FEATURE_ARCH_LBR under the "word 18" section. The feature will appear as "arch_lbr" in /proc/cpuinfo.

The Architectural Last Branch Records (LBR) feature enables recording of software path history by logging taken branches and other control flows. The feature will be supported in the perf_events subsystem.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/1593780569-62993-2-git-send-email-kan.liang@linux.intel.com
-
- 16 June 2020, 1 commit
-
-
By Borislav Petkov
... so that they get reused when needed.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200604104150.2056-1-bp@alien8.de
-
- 20 April 2020, 1 commit
-
-
By Mark Gross
SRBDS is an MDS-like speculative side channel that can leak bits from the random number generator (RNG) across cores and threads. New microcode serializes the processor access during the execution of RDRAND and RDSEED. This ensures that the shared buffer is overwritten before it is released for reuse.

While it is present on all affected CPU models, the microcode mitigation is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the cases where TSX is not supported or has been disabled with TSX_CTRL.

The mitigation is activated by default on affected processors and it increases latency for RDRAND and RDSEED instructions. Among other effects this will reduce throughput from /dev/urandom.

* Enable administrator to configure the mitigation off when desired using either mitigations=off or srbds=off.
* Export vulnerability status via sysfs.
* Rename file-scoped macros to apply for non-whitelist table initializations.

[ bp: Massage,
  - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g,
  - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in,
  - flip check in cpu_set_bug_bits() to save an indentation level,
  - reflow comments.
  jpoimboe: s/Mitigated/Mitigation/ in user-visible strings
  tglx: Dropped the fused off magic for now ]

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
-
- 22 March 2020, 1 commit
-
-
By Wei Huang
Newer AMD CPUs support a feature called protected processor identification number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23].

However, CPUID alone is not enough to read the processor identification number - MSR_AMD_PPIN_CTL also needs to be configured properly. If, for any reason, MSR_AMD_PPIN_CTL[PPIN_EN] can not be turned on, such as when disabled in BIOS, the CPU capability bit X86_FEATURE_AMD_PPIN needs to be cleared.

When the X86_FEATURE_AMD_PPIN capability is available, the identification number is issued together with the MCE error info in order to keep track of the source of MCE errors.

[ bp: Massage. ]

Co-developed-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Smita Koralahalli Channabasappa <smita.koralahallichannabasappa@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200321193800.3666964-1-wei.huang2@amd.com
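A kernel-style sketch of the probe-then-read flow described above. The MSR indices (0xc00102f0 for the control register, 0xc00102f1 for the PPIN itself) and the enable-bit position are assumptions recalled from the kernel's MSR list, so verify against msr-index.h:

```c
#define MSR_AMD_PPIN_CTL	0xc00102f0
#define MSR_AMD_PPIN		0xc00102f1

/* Sketch only: returns true and fills *ppin if PPIN is enabled. */
static bool amd_ppin_read(u64 *ppin)
{
	u64 ctl;

	if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &ctl))
		return false;

	if (!(ctl & (1ULL << 1)))	/* PPIN_EN off, e.g. disabled in BIOS */
		return false;

	return !rdmsrl_safe(MSR_AMD_PPIN, ppin);
}
```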
-
- 12 March 2020, 1 commit
-
-
By Kim Phillips
Family 19h CPUs are Zen-based and still share most architectural features with Family 17h CPUs, and therefore still need to call init_amd_zn(), e.g., to set the RECLAIM_DISTANCE override.

init_amd_zn() also sets X86_FEATURE_ZEN, which today is only used in amd_set_core_ssb_state(), which isn't called on some late model Family 17h CPUs, nor on any Family 19h CPUs: X86_FEATURE_AMD_SSBD replaces X86_FEATURE_LS_CFG_SSBD on those later model CPUs, where the SSBD mitigation is done via the SPEC_CTRL MSR instead of the LS_CFG MSR.

Family 19h CPUs also don't have the erratum where the CPB feature bit isn't set, but that code can stay unchanged and run safely on Family 19h.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200311191451.13221-1-kim.phillips@amd.com
-
- 21 February 2020, 1 commit
-
-
By Peter Zijlstra (Intel)
A split-lock occurs when an atomic instruction operates on data that spans two cache lines. In order to maintain atomicity the core takes a global bus lock.

This is typically >1000 cycles slower than an atomic operation within a cache line. It also disrupts performance on other cores (which must wait for the bus lock to be released before their memory operations can complete). For real-time systems this may mean missing deadlines. For other systems it may just be very annoying.

Some CPUs have the capability to raise an #AC trap when a split lock is attempted.

Provide a command line option to give the user choices on how to handle this:

split_lock_detect=
	off	- not enabled (no traps for split locks)
	warn	- warn once when an application does a split lock, but allow it to continue running.
	fatal	- Send SIGBUS to applications that cause split lock

On systems that support split lock detection the default is "warn". Note that if the kernel hits a split lock in any mode other than "off" it will OOPs.

One implementation wrinkle is that the MSR to control the split lock detection is per-core, not per thread. This might result in some short lived races on HT systems in "warn" mode if Linux tries to enable on one thread while disabling on the other.

Race analysis by Sean Christopherson:

  - Toggling of split-lock is only done in "warn" mode. Worst case scenario of a race is that a misbehaving task will generate multiple #AC exceptions on the same instruction. And this race will only occur if both siblings are running tasks that generate split-lock #ACs, e.g. a race where sibling threads are writing different values will only occur if CPUx is disabling split-lock after an #AC and CPUy is re-enabling split-lock after *its* previous task generated an #AC.

  - Transitioning between off/warn/fatal modes at runtime isn't supported and disabling is tracked per task, so hardware will always reach a steady state that matches the configured mode. I.e. split-lock is guaranteed to be enabled in hardware once all _TIF_SLD threads have been scheduled out.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Co-developed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20200126200535.GB30377@agluck-desk2.amr.corp.intel.com
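The per-core control the race analysis refers to lives in a model-specific register; a kernel-style sketch of toggling it, assuming the MSR index (0x33, MSR_TEST_CTRL) and bit position (29) from the kernel's defines:

```c
#define MSR_TEST_CTRL				0x00000033
#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT		(1ULL << 29)

/* Sketch only: enable/disable the #AC trap on split locks for the
 * current core (shared between HT siblings, hence the races above). */
static void sld_update(bool on)
{
	u64 ctrl;

	rdmsrl(MSR_TEST_CTRL, ctrl);
	if (on)
		ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	wrmsrl(MSR_TEST_CTRL, ctrl);
}
```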
-
- 14 January 2020, 1 commit
-
-
By Sean Christopherson
Add a new feature flag, X86_FEATURE_MSR_IA32_FEAT_CTL, to track whether IA32_FEAT_CTL has been initialized. This will allow KVM, and any future subsystems that depend on IA32_FEAT_CTL, to rely purely on cpufeatures to query platform support, e.g. allows a future patch to remove KVM's manual IA32_FEAT_CTL MSR checks.

Various features (on platforms that support IA32_FEAT_CTL) are dependent on IA32_FEAT_CTL being configured and locked, e.g. VMX and LMCE. The MSR is always configured during boot, but only if the CPU vendor is recognized by the kernel. Because CPUID doesn't incorporate the current IA32_FEAT_CTL value in its reporting of relevant features, it's possible for a feature to be reported as supported in cpufeatures but not truly enabled, e.g. if the CPU supports VMX but the kernel doesn't recognize the CPU.

As a result, without the flag, KVM would see VMX as supported even if IA32_FEAT_CTL hasn't been initialized, and so would need to manually read the MSR and check the various enabling bits to avoid taking an unexpected #GP on VMXON.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20191221044513.21680-14-sean.j.christopherson@intel.com
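To make the dependency concrete, a kernel-style sketch of the check the flag replaces: VMX is only safely usable once IA32_FEAT_CTL (MSR 0x3a) is locked with the appropriate enable bit set. MSR index and bit positions follow the standard architectural definitions:

```c
#define MSR_IA32_FEAT_CTL			0x0000003a
#define FEAT_CTL_LOCKED				(1 << 0)
#define FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX	(1 << 2)

/* Sketch only: the manual probe VMXON callers would otherwise need. */
static bool vmx_usable(void)
{
	u64 msr;

	if (rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr))
		return false;

	return (msr & (FEAT_CTL_LOCKED | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX)) ==
	       (FEAT_CTL_LOCKED | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
}
```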
-
- 08 January 2020, 1 commit
-
-
By Tony Luck
From the Intel Optimization Reference Manual:

  3.7.6.1 Fast Short REP MOVSB
  Beginning with processors based on Ice Lake Client microarchitecture,
  REP MOVSB performance of short operations is enhanced. The enhancement
  applies to string lengths between 1 and 128 bytes long. Support for
  fast-short REP MOVSB is enumerated by the CPUID feature flag:
  CPUID (EAX=7H, ECX=0H).EDX.FAST_SHORT_REP_MOVSB[bit 4] = 1. There is
  no change in the REP STOS performance.

Add an X86_FEATURE_FSRM flag for this.

memmove() avoids REP MOVSB for short (< 32 byte) copies. Check FSRM and use REP MOVSB for short copies on systems that support it.

[ bp: Massage and add comment. ]

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20191216214254.26492-1-tony.luck@intel.com
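For illustration, the short-copy path this enables boils down to a single REP MOVSB (a simplified C sketch; the actual change is in the kernel's memmove assembly, not C):

```c
#include <stddef.h>

/* Forward copy via REP MOVSB; on FSRM hardware this is fast even for
 * lengths of 1..128 bytes. Non-overlapping/forward case only. */
static void *copy_fwd(void *dst, const void *src, size_t len)
{
	void *ret = dst;

	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (len)
		     : : "memory");
	return ret;
}
```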
-
- 04 November 2019, 1 commit
-
-
By Vineela Tummalapalli
Some processors may incur a machine check error possibly resulting in an unrecoverable CPU lockup when an instruction fetch encounters a TLB multi-hit in the instruction TLB. This can occur when the page size is changed along with either the physical address or cache type. The relevant erratum can be found here:

  https://bugzilla.kernel.org/show_bug.cgi?id=205195

There are other processors affected for which the erratum does not fully disclose the impact.

This issue affects both bare-metal x86 page tables and EPT. It can be mitigated by either eliminating the use of large pages or by using careful TLB invalidations when changing the page size in the page tables.

Just like Spectre, Meltdown, L1TF and MDS, a new bit has been allocated in MSR_IA32_ARCH_CAPABILITIES (PSCHANGE_MC_NO) and will be set on CPUs which are mitigated against this issue.

Signed-off-by: Vineela Tummalapalli <vineela.tummalapalli@intel.com>
Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 28 October 2019, 1 commit
-
-
By Pawan Gupta
TSX Async Abort (TAA) is a side channel vulnerability in the internal buffers of some Intel processors, similar to Microarchitectural Data Sampling (MDS). In this case, certain loads may speculatively pass invalid data to dependent operations when an asynchronous abort condition is pending in a TSX transaction. This includes loads with no fault or assist condition. Such loads may speculatively expose stale data from the uarch data structures as in MDS. Scope of exposure is within the same-thread and cross-thread.

This issue affects all current processors that support TSX, but do not have ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.

On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0, CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers using VERW or L1D_FLUSH, there is no additional mitigation needed for TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling the Transactional Synchronization Extensions (TSX) feature.

A new MSR IA32_TSX_CTRL in future and current processors after a microcode update can be used to control the TSX feature. There are two bits in that MSR:

* TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted Transactional Memory (RTM).
* TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID.

The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally disabled with updated microcode but still enumerated as present by CPUID(EAX=7).EBX{bit4}.

The second mitigation approach is similar to MDS, which is clearing the affected CPU buffers on return to user space and when entering a guest. Relevant microcode update is required for the mitigation to work. More details on this approach can be found here:

  https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html

The TSX feature can be controlled by the "tsx" command line parameter. If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is deployed. The effective mitigation state can be read from sysfs.

[ bp:
  - massage + comments cleanup
  - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
  - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
  - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
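Expressed as kernel-style defines, the two control bits described above would look like this (a sketch; the MSR index 0x122 matches the kernel's MSR_IA32_TSX_CTRL define, and the bit positions follow its msr-index.h layout):

```c
#define MSR_IA32_TSX_CTRL	0x00000122

#define TSX_CTRL_RTM_DISABLE	(1UL << 0)	/* force-abort RTM transactions */
#define TSX_CTRL_CPUID_CLEAR	(1UL << 1)	/* hide RTM enumeration in CPUID */
```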
-