- 13 Jul 2018, 5 commits
-
-
Committed by Thomas Gleixner
Avoid the conditional in the L1D flush control path. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20180713142322.790914912@linutronix.de
-
Committed by Thomas Gleixner
In preparation for allowing run-time control of L1D flushing, move the setup code to the module parameter handler. In the case of pre-module-init parsing, just store the value and let vmx_init() do the actual setup after running kvm_init(), so that enable_ept has the correct state. During run time, invoke it directly from the parameter setter to prepare for run-time control. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20180713142322.694063239@linutronix.de
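A minimal sketch of the shape this takes, assuming a kernel_param_ops-based parameter whose setter only records the value before module init and applies it directly afterwards. The helper and variable names here are illustrative, not the actual vmx.c symbols:

```c
/* Assumed helpers: vmentry_l1d_flush_parse() maps the string to a mode,
 * vmx_setup_l1d_flush() performs the real setup once enable_ept is known. */
static int vmentry_l1d_flush_parse(const char *s);
static int vmx_setup_l1d_flush(int mode);

static bool vmx_initialized;		/* set at the end of vmx_init() */
static int  vmentry_l1d_flush_pending;	/* value recorded before module init */

static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
{
	int mode = vmentry_l1d_flush_parse(s);

	if (mode < 0)
		return mode;

	if (!vmx_initialized) {
		/* Pre module init: just remember the requested mode. */
		vmentry_l1d_flush_pending = mode;
		return 0;
	}

	/* Run time: apply the change directly from the setter. */
	return vmx_setup_l1d_flush(mode);
}

static const struct kernel_param_ops vmentry_l1d_flush_ops = {
	.set = vmentry_l1d_flush_set,
	.get = param_get_int,
};
module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops,
		&vmentry_l1d_flush_pending, 0644);
```

vmx_init() would then pick up vmentry_l1d_flush_pending after kvm_init() has run and call the real setup once.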
-
Committed by Thomas Gleixner
If Extended Page Tables (EPT) are disabled or not supported, no L1D flushing is required. The setup function can simply avoid setting up the L1D flush for the EPT=n case. Invoke it after the hardware setup has been done and enable_ept has the correct state, and expose the EPT-disabled state in the mitigation status as well. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20180713142322.612160168@linutronix.de
-
Committed by Thomas Gleixner
The VMX module parameter to control the L1D flush should become writable. The MSR list is set up at VM init per guest VCPU, but the run-time switching is based on a static key which is global. Toggling the MSR list at run time might be feasible, but for now drop this optimization and use the regular MSR write to make run-time switching possible. The default mitigation is the conditional flush anyway, so for extra paranoid setups this will add some small overhead, but the extra code executed is in the noise compared to the flush itself. Aside from that, the EPT-disabled case is not handled correctly at the moment and the MSR list magic is in the way of fixing that as well. If it really provides a significant advantage, this needs to be revisited after the code is correct and the control is writable. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20180713142322.516940445@linutronix.de
-
Committed by Thomas Gleixner
Store the effective mitigation of VMX in a status variable and use it to report the VMX state in the l1tf sysfs file. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20180713142322.433098358@linutronix.de
-
- 05 Jul 2018, 10 commits
-
-
Committed by Konrad Rzeszutek Wilk
If the L1D flush module parameter is set to 'always' and the IA32_FLUSH_CMD MSR is available, optimize the VMENTER code with the MSR save list. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Konrad Rzeszutek Wilk
The IA32_FLUSH_CMD MSR only needs to be written on VMENTER. Extend add_atomic_switch_msr() with an entry_only parameter to allow storing the MSR only in the guest (ENTRY) MSR array. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Konrad Rzeszutek Wilk
This allows loading a different number of MSRs depending on the context: VMEXIT or VMENTER. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Konrad Rzeszutek Wilk
... to help find the MSR on either the guest or host MSR list. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
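A sketch of what such a lookup helper might look like, operating on the VMX MSR autoload arrays (struct vmx_msr_entry is the existing {index, reserved, value} layout used for MSR load/store lists; the function name mirrors the one described above but treat the details as illustrative):

```c
#include <asm/vmx.h>	/* struct vmx_msr_entry */

/* Return the slot of 'msr' in a guest or host autoload list, or -ENOENT. */
static int find_msr(const struct vmx_msr_entry *list, unsigned int nr, u32 msr)
{
	unsigned int i;

	for (i = 0; i < nr; ++i) {
		if (list[i].index == msr)
			return i;
	}
	return -ENOENT;
}
```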
-
Committed by Konrad Rzeszutek Wilk
There is no semantic change, but this change allows an unbalanced number of MSRs to be loaded on VMEXIT and VMENTER, i.e. the number of MSRs to save or restore on VMEXIT or VMENTER may be different. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
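In terms of data structures, this amounts to giving the guest (VMENTER) and host (VMEXIT) autoload lists their own element counts instead of one shared count. A sketch with illustrative field names:

```c
#define NR_AUTOLOAD_MSRS 8

struct vmx_msrs {
	unsigned int         nr;	/* entries currently in use */
	struct vmx_msr_entry val[NR_AUTOLOAD_MSRS];
};

struct msr_autoload {
	struct vmx_msrs guest;	/* loaded on VMENTER  */
	struct vmx_msrs host;	/* restored on VMEXIT */
};

/* The VMCS counts can then be programmed independently, e.g.:
 *   vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
 *   vmcs_write32(VM_EXIT_MSR_LOAD_COUNT,  m->host.nr);
 */
```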
-
Committed by Paolo Bonzini
Add the logic for flushing L1D on VMENTER. The flush depends on the static key being enabled and the new l1tf_flush_l1d flag being set. The flag is set: - Always, if the flush module parameter is 'always' - Conditionally at: - Entry to vcpu_run(), i.e. after executing user space - From the sched_in notifier, i.e. when switching to a vCPU thread - From vmexit handlers which are considered unsafe, i.e. where sensitive data can be brought into L1D: - The emulator, which could be a good target for other speculative execution-based threats - The MMU, which can bring host page tables into the L1 cache - External interrupts - Nested operations that require the MMU (see above), i.e. vmptrld, vmptrst, vmclear, vmwrite, vmread - When handling invept, invvpid [ tglx: Split out from combo patch and reduced to a single flag ] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
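A sketch of how the flag might be consumed just before VMENTER; the per-vCPU field and static key names match what this series introduces, but the control flow here is a simplification, not the exact patch:

```c
static void vmx_flush_l1d_if_needed(struct kvm_vcpu *vcpu)
{
	bool flush;

	/* Mitigation disabled ('never', or CPU not affected): nothing to do. */
	if (!static_branch_unlikely(&vmx_l1d_should_flush))
		return;

	/* 'always' keeps the flag set unconditionally; 'cond' relies on the
	 * unsafe paths listed above having set it since the last entry. */
	flush = vcpu->arch.l1tf_flush_l1d;
	vcpu->arch.l1tf_flush_l1d = false;

	if (flush)
		vmx_l1d_flush(vcpu);	/* MSR write or software loop */
}
```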
-
Committed by Paolo Bonzini
336996-Speculative-Execution-Side-Channel-Mitigations.pdf defines a new MSR (IA32_FLUSH_CMD, aka 0x10B) which has similar write-only semantics to other MSRs defined in the document. The semantics of this MSR are to allow "finer granularity invalidation of caching structures than existing mechanisms like WBINVD. It will writeback and invalidate the L1 data cache, including all cachelines brought in by preceding instructions, without invalidating all caches (e.g. L2 or LLC). Some processors may also invalidate the first level instruction cache on a L1D_FLUSH command. The L1 data and instruction caches may be shared across the logical processors of a core." Use it instead of the loop-based L1 flush algorithm. A copy of this document is available at https://bugzilla.kernel.org/show_bug.cgi?id=199511 [ tglx: Avoid allocating pages when the MSR is available ] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
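When the CPU enumerates the capability, the entire flush collapses to a single MSR write. A hedged sketch (the constants match the public definitions; l1d_flush_software() is an assumed name for the fallback loop described in the next entry):

```c
#define MSR_IA32_FLUSH_CMD	0x0000010b
#define L1D_FLUSH		BIT(0)	/* writeback + invalidate L1D */

static void l1d_flush(void *flush_buffer)
{
	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
		return;
	}

	/* No microcode support: fall back to the software displacement
	 * loop, which needs a pre-allocated buffer. */
	l1d_flush_software(flush_buffer);
}
```

This also explains the tglx note above: if the MSR is available, the fallback buffer never needs to be allocated.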
-
Committed by Paolo Bonzini
To mitigate the L1 Terminal Fault vulnerability it's required to flush L1D on VMENTER to prevent rogue guests from snooping host memory. CPUs will have a new control MSR via a microcode update to flush L1D with a single MSR write, but in the absence of microcode a fallback to a software-based flush algorithm is required. Add a software flush loop which is based on code from Intel. [ tglx: Split out from combo patch ] [ bpetkov: Polish the asm code ] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
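A rough C rendering of such a displacement flush; the real code is hand-written assembly, and the buffer size and stride here are assumptions, sized to comfortably exceed a typical 32 KiB L1D:

```c
#define L1D_CACHE_ORDER	4	/* 16 pages = 64 KiB, > typical 32 KiB L1D */

static void l1d_flush_software(void *flush_pages)
{
	int size = PAGE_SIZE << L1D_CACHE_ORDER;
	char *p = flush_pages;
	int i;

	/* First pass: touch each page so the whole buffer is resident. */
	for (i = 0; i < size; i += PAGE_SIZE)
		(void)READ_ONCE(p[i]);

	/* Second pass: write every cache line, displacing whatever the
	 * host had in L1D before this point. */
	for (i = 0; i < size; i += 64)
		WRITE_ONCE(p[i], 0);
}
```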
-
Committed by Konrad Rzeszutek Wilk
Add a mitigation mode parameter "vmentry_l1d_flush" for CVE-2018-3620, aka L1 Terminal Fault. The valid arguments are: - "always": L1D cache flush on every VMENTER. - "cond": Conditional L1D cache flush, explained below. - "never": Disable the L1D cache flush mitigation. "cond" tries to avoid L1D cache flushes on VMENTER if the code executed between VMEXIT and VMENTER is considered safe, i.e. does not bring any interesting information into L1D which might be exploited. [ tglx: Split out from a larger patch ] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
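The string-to-mode mapping can be pictured roughly as follows (the auto/EPT-disabled states added by later patches in this series are left out, and the names are illustrative):

```c
enum vmx_l1d_flush_state {
	VMENTER_L1D_FLUSH_NEVER,
	VMENTER_L1D_FLUSH_COND,
	VMENTER_L1D_FLUSH_ALWAYS,
};

static const struct {
	const char *option;
	enum vmx_l1d_flush_state cmd;
} vmentry_l1d_param[] = {
	{"never",	VMENTER_L1D_FLUSH_NEVER},
	{"cond",	VMENTER_L1D_FLUSH_COND},
	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
};

static int vmentry_l1d_flush_parse(const char *s)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
		if (sysfs_streq(s, vmentry_l1d_param[i].option))
			return vmentry_l1d_param[i].cmd;
	}
	return -EINVAL;
}
```

The parameter is set like any other kvm_intel module option, e.g. kvm_intel.vmentry_l1d_flush=cond on the kernel command line.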
-
Committed by Konrad Rzeszutek Wilk
If the L1TF CPU bug is present we allow the KVM module to be loaded, as the majority of users that use Linux and KVM have trusted guests and do not want a broken setup. Cloud vendors are the ones that are uncomfortable with CVE-2018-3620, and as such they are the ones that should set nosmt to one. Setting 'nosmt' means that the system administrator also needs to disable SMT (Hyper-Threading) in the BIOS, or via the 'nosmt' command line parameter, or via /sys/devices/system/cpu/smt/control. See commit 05736e4a ("cpu/hotplug: Provide knobs to control SMT"). Other mitigations are to use task affinity, cpusets, interrupt binding, etc. - anything to make sure that _only_ the same guest's vCPUs are running on sibling threads. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 Jun 2018, 1 commit
-
-
Committed by Arnd Bergmann
Arnd had sent this patch to the KVM mailing list, but it slipped through the cracks of maintainers' hand-off and therefore wasn't included in the pull request. The same issue had been fixed by Linus in commit dbee3d02 ("KVM: x86: VMX: fix build without hyper-v", 2018-06-12) as a self-described "quick-and-hacky build fix". However, checking the compile-time configuration symbol with IS_ENABLED is cleaner and is enough to avoid the link error, so switch to Arnd's solution. Signed-off-by: Arnd Bergmann <arnd@arndb.de> [Rewritten commit message. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 13 Jun 2018, 1 commit
-
-
Committed by Linus Torvalds
Commit ceef7d10 ("KVM: x86: VMX: hyper-v: Enlightened MSR-Bitmap support") broke the build with Hyper-V disabled, because it accesses ms_hyperv.nested_features without checking if that exists. This is the quick-and-hacky build fix. I suspect the proper fix is to replace the static_branch_unlikely(&enable_evmcs) tests with an inline helper function that also checks that CONFIG_HYPERV is enabled, since without that, enable_evmcs makes no sense. But I want a working build environment first and foremost, and I'm upset this slipped through in the first place. My primary build tests missed it because I tend to build with everything enabled, but it should have been caught in the kvm tree. Fixes: ceef7d10 ("KVM: x86: VMX: hyper-v: Enlightened MSR-Bitmap support") Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
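The helper suggested here would look something like the following; the function name is hypothetical, while enable_evmcs is the existing static key in vmx.c:

```c
static inline bool vmx_evmcs_active(void)
{
	return IS_ENABLED(CONFIG_HYPERV) &&
	       static_branch_unlikely(&enable_evmcs);
}
```

With CONFIG_HYPERV=n the compiler can then discard the ms_hyperv references behind this check entirely, which is what avoids the link error.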
-
- 12 Jun 2018, 2 commits
-
-
Committed by Paolo Bonzini
In the next patch the emulator's .read_std and .write_std callbacks will grow another argument, which is not needed in kvm_read_guest_virt and kvm_write_guest_virt_system's callers. Since we have to make separate functions, let's give the currently existing names a nicer interface, too. Fixes: 129a72a0 ("KVM: x86: Introduce segmented_write_std", 2017-01-12) Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Felix Wilhelm
VMX instructions executed inside an L1 VM will always trigger a VM exit, even when executed with CPL 3. This means we must perform the privilege check in software. Fixes: 70f3aac9 ("kvm: nVMX: Remove superfluous VMX instruction fault checks") Cc: stable@vger.kernel.org Signed-off-by: Felix Wilhelm <fwilhelm@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
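A sketch of the kind of software check this implies, using existing KVM helpers; treat it as an illustration of the idea rather than the exact patch:

```c
/* In the exit handlers for VMX instructions: since L1's VMX instructions
 * exit unconditionally, the hypervisor must validate CPL itself. */
if (vmx_get_cpl(vcpu) > 0) {
	kvm_inject_gp(vcpu, 0);
	return 1;	/* exit handled, #GP injected into the guest */
}
```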
-
- 04 Jun 2018, 3 commits
-
-
Committed by Jim Mattson
Add support for "VMWRITE to any supported field in the VMCS" and enable this feature by default in L1's IA32_VMX_MISC MSR. If userspace clears the VMX capability bit, the old behavior will be restored. Note that this feature is a prerequisite for kvm in L1 to use VMCS shadowing, once that feature is available. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Jim Mattson
Disallow changes to the VMX capability MSRs while the vCPU is in VMX operation. Although this does break the existing API, it helps to avoid some potentially tricky situations for which there is no architected behavior. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Wanpeng Li
Commit d0659d94 ("KVM: x86: add option to advance tscdeadline hrtimer expiration") advances the tscdeadline expiration (the timer is emulated by hrtimer) so that the latency incurred by the hypervisor (apic_timer_fn -> vmentry) can be avoided. This patch adds the same advance-expiration support to the case where the tscdeadline timer is emulated by the VMX preemption timer, to reduce the hypervisor latency (handle_preemption_timer -> vmentry). The guest can also set an expiration that is very small (for example, in Linux an hrtimer can feed an expiration in the past); in that case we set delta_tsc to 0, leading to an immediate vmexit when delta_tsc is not bigger than the advance in ns. This patch can reduce latency by ~63% (~4450 cycles to ~1660 cycles on a Haswell desktop) for kvm-unit-tests/tscdeadline_latency when testing busy waits. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
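The clamping described here amounts to something like the following sketch, assuming lapic_timer_advance_ns (the existing module parameter) converted to TSC cycles for this vCPU:

```c
/* Before programming the VMX preemption timer with delta_tsc: */
u64 advance_cycles = nsec_to_cycles(vcpu, lapic_timer_advance_ns);

if (delta_tsc > advance_cycles)
	delta_tsc -= advance_cycles;	/* fire early by the advance amount */
else
	delta_tsc = 0;			/* already "late": vmexit immediately */
```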
-
- 02 Jun 2018, 1 commit
-
-
Committed by Marc Orr
The kvm struct has been bloating. For example, it's tens of kilobytes for x86, which turns out to be a large amount of memory to allocate contiguously via kzalloc. Thus, this patch does the following: 1. Uses architecture-specific routines to allocate the kvm struct via vzalloc for x86. 2. Switches arm to __KVM_HAVE_ARCH_VM_ALLOC so that it can use vzalloc when has_vhe() is true. Other architectures continue to default to kzalloc, as they have a dependency on kzalloc or have a small enough struct kvm. Signed-off-by: Marc Orr <marcorr@google.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
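The core of the change is the allocation hook. A minimal sketch of the __KVM_HAVE_ARCH_VM_ALLOC override named in the message, shown for the plain struct kvm case (arch-specific wrappers may allocate a larger containing structure):

```c
#define __KVM_HAVE_ARCH_VM_ALLOC

static inline struct kvm *kvm_arch_alloc_vm(void)
{
	/* struct kvm is tens of KB on x86: vzalloc avoids demanding the
	 * physically contiguous pages that kzalloc would require. */
	return vzalloc(sizeof(struct kvm));
}

static inline void kvm_arch_free_vm(struct kvm *kvm)
{
	vfree(kvm);
}
```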
-
- 25 May 2018, 3 commits
-
-
Committed by Liran Alon
When vmcs12 uses VPID, all TLB entries populated by L2 are tagged with vmx->nested.vpid02. Currently, INVVPID executed by L1 is emulated by L0 by using an INVVPID single/global-context to flush all TLB entries tagged with vmx->nested.vpid02, regardless of the INVVPID type executed by L1. However, we can easily optimize the case of L1 INVVPID on an individual address: just INVVPID the given individual address, tagged with vmx->nested.vpid02. Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> [Squashed with a preparatory patch that added the !operand.vpid line.] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Committed by Liran Alon
Since commit 5c614b35 ("KVM: nVMX: nested VPID emulation"), vmcs01 and vmcs02 don't share the same VPID. vmcs01 uses vmx->vpid while vmcs02 uses vmx->nested.vpid02. This was done so that a TLB flush could be avoided when switching between L1 and L2. However, the above-mentioned commit only changed the L2 VMEntry logic to not flush the TLB when switching from L1 to L2. It forgot to also remove the TLB flush which is done when simulating a VMExit from L2 to L1. To fix this issue, on VMExit from L2 to L1 we flush the TLB only in case vmcs01 enables VPID and vmcs01->vpid == vmcs02->vpid. This happens when vmcs01 enables VPID and vmcs12 does not. Fixes: 5c614b35 ("KVM: nVMX: nested VPID emulation") Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Committed by Liran Alon
Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 23 May 2018, 3 commits
-
-
Committed by Jim Mattson
Enforce the invariant that existing VMCS12 field offsets must not change. Experience has shown that without strict enforcement, this invariant will not be maintained. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [Changed the code to use BUILD_BUG_ON_MSG instead of the better _Static_assert, which requires GCC 4.6. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
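The enforcement pattern looks roughly like this; only two fields are shown, the real table covers every vmcs12 field, and the exact field names and offsets here are illustrative:

```c
#define CHECK_OFFSET(field, loc)					\
	BUILD_BUG_ON_MSG(offsetof(struct vmcs12, field) != (loc),	\
			 "Offset of " #field " in struct vmcs12 has changed.")

static inline void vmx_check_vmcs12_offsets(void)
{
	CHECK_OFFSET(revision_id, 0);
	CHECK_OFFSET(abort, 4);
	/* ... one line per field ... */
}
```

Any reordering of struct vmcs12 then fails at compile time with a message naming the offending field.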
-
Committed by Jim Mattson
Changing the VMCS12 layout will break save/restore compatibility with older kvm releases once the KVM_{GET,SET}_NESTED_STATE ioctls are accepted upstream. Google has already been using these ioctls for some time, and we implore the community not to disturb the existing layout. Move the four most recently added fields to preserve the offsets of the previously defined fields and reserve locations for the vmread and vmwrite bitmaps, which will be used in the virtualization of VMCS shadowing (to improve the performance of double nesting). Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [Kept the SDM order in vmcs_field_to_offset_table. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Committed by Jim Mattson
When saving a vCPU's nested state, the vmcs02 is discarded. Only the shadow vmcs12 is saved. The shadow vmcs12 contains all of the information needed to reconstruct an equivalent vmcs02 on restore, but we have to be able to deal with two contexts: 1. The nested state was saved immediately after an emulated VM-entry, before the vmcs02 was ever launched. 2. The nested state was saved some time after the first successful launch of the vmcs02. Though it's an implementation detail rather than an architected bit, vmx->nested_run_pending serves to distinguish between these two cases. Hence, we save it as part of the vCPU's nested state. (Yes, this is ugly.) Even when restoring from a checkpoint, it may be necessary to build the vmcs02 as if prepare_vmcs02 was called from nested_vmx_run. So, the 'from_vmentry' argument should be dropped, and vmx->nested_run_pending should be consulted instead. The nested state restoration code then has to set vmx->nested_run_pending prior to calling prepare_vmcs02. It's important that the restoration code set vmx->nested_run_pending anyway, since the flag impacts things like interrupt delivery as well. Fixes: cf8b84f4 ("kvm: nVMX: Prepare for checkpointing L2 state") Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 17 May 2018, 3 commits
-
-
Committed by Tom Lendacky
Expose the new virtualized architectural mechanism, VIRT_SSBD, for using Speculative Store Bypass Disable (SSBD) under SVM. This will allow guests to use SSBD on hardware that uses non-architectural mechanisms for enabling SSBD. [ tglx: Folded the migration fixup from Paolo Bonzini ] Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner
AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store Bypass Disable via MSR_AMD64_LS_CFG, so that guests do not have to care about the bit position of the SSBD bit and thus facilitate migration. Also, the sibling coordination on Family 17H CPUs can only be done on the host. Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an extra argument for the VIRT_SPEC_CTRL MSR. Hand in 0 from VMX, and in SVM add a new virt_spec_ctrl member to the CPU data structure which is going to be used in later patches for the actual implementation. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
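The resulting interface, as described in the message, carries a second argument for the proposed VIRT_SPEC_CTRL value; VMX has no use for it and passes 0. A sketch of the prototypes and call sites:

```c
/* Prototypes after this change; the second argument is the value
 * destined for the proposed VIRT_SPEC_CTRL MSR. */
void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl);
void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl);

/* VMX call sites, sketched: */
x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
/* ... guest runs ... */
x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);

/* SVM keeps the per-vCPU value it will use in later patches: */
x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
```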
-
Committed by Borislav Petkov
Intel and AMD have different CPUID bits, hence for those use synthetic bits which get set for the respective vendor in init_speculation_control(), so that debacles like the one the commit message of c65732e4 ("x86/cpu: Restore CPUID_8000_0008_EBX reload") talks about don't happen anymore. Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-by: Jörg Otte <jrg.otte@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Link: https://lkml.kernel.org/r/20180504161815.GG9257@pd.tnic
-
- 15 May 2018, 3 commits
-
-
Committed by Jim Mattson
It is only possible to share the APIC access page between L1 and L2 if they also share the virtual-APIC page. If L2 has its own virtual-APIC page, then MMIO accesses to L1's TPR from L2 will access L2's TPR instead. Moreover, L1's local APIC has to be in xAPIC mode, which is another condition that hasn't been checked. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Jim Mattson
Previously, we toggled between SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE and SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES, depending on whether or not the EXTD bit was set in MSR_IA32_APICBASE. However, if the local APIC is disabled, we should not set either of these APIC virtualization control bits. Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Vitaly Kuznetsov
Enlightened MSR-Bitmap is a natural extension of Enlightened VMCS; the Hyper-V Top Level Functional Specification states: "The L1 hypervisor may collaborate with the L0 hypervisor to make MSR accesses more efficient. It can enable enlightened MSR bitmaps by setting the corresponding field in the enlightened VMCS to 1. When enabled, the L0 hypervisor does not monitor the MSR bitmaps for changes. Instead, the L1 hypervisor must invalidate the corresponding clean field after making changes to one of the MSR bitmaps." I reached out to the Hyper-V team for additional details and got the following information: "Current Hyper-V implementation works as follows: If the enlightened MSR bitmap is not enabled: - All MSR accesses of L2 guests cause physical VM-Exits. If the enlightened MSR bitmap is enabled: - Physical VM-Exits for L2 accesses to certain MSRs (currently FS_BASE, GS_BASE and KERNEL_GS_BASE) are avoided, thus making these MSR accesses faster." I tested my series with a tight rdmsrl loop in L2; for KERNEL_GS_BASE the results are: Without Enlightened MSR-Bitmap: 1300 cycles/read. With Enlightened MSR-Bitmap: 120 cycles/read. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Tested-by: Lan Tianyu <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 11 May 2018, 1 commit
-
-
Committed by Sean Christopherson
Update SECONDARY_EXEC_DESC for UMIP emulation if and only if UMIP is actually being emulated. Skipping the VMCS update eliminates unnecessary VMREAD/VMWRITE when UMIP is supported in hardware, and on platforms that don't have SECONDARY_VM_EXEC_CONTROL. The latter case resolves a bug where KVM would fill the kernel log with warnings due to failed VMWRITEs on older platforms. Fixes: 0367f205 ("KVM: vmx: add support for emulating UMIP") Cc: stable@vger.kernel.org #4.16 Reported-by: Paolo Zeppegno <pzeppegno@gmail.com> Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Suggested-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
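The fix amounts to guarding both the set and the clear of the control on whether UMIP emulation is actually in use, roughly like this sketch in the CR4 update path:

```c
if (cr4 & X86_CR4_UMIP) {
	if (vmx_umip_emulated()) {
		vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL, SECONDARY_EXEC_DESC);
		hw_cr4 &= ~X86_CR4_UMIP;
	}
} else if (vmx_umip_emulated()) {
	/* Only clear the bit if we could have set it; this avoids the
	 * failing VMWRITEs on CPUs without SECONDARY_VM_EXEC_CONTROL. */
	vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL, SECONDARY_EXEC_DESC);
}
```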
-
- 10 May 2018, 1 commit
-
-
Committed by Konrad Rzeszutek Wilk
Intel collateral will reference the SSB mitigation bit in IA32_SPEC_CTL[2] as SSBD (Speculative Store Bypass Disable). Hence changing it. It is unclear yet what the MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4) name is going to be. Following the rename it would be SSBD_NO, but that rolls out to Speculative Store Bypass Disable No. Also fixed the missing space in X86_FEATURE_AMD_SSBD. [ tglx: Fixup x86_amd_rds_enable() and rds_tif_to_amd_ls_cfg() as well ] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 03 May 2018, 3 commits
-
-
Committed by Thomas Gleixner
Having everything in nospec-branch.h creates a hell of dependencies when adding the prctl-based switching mechanism. Move everything which is not required in nospec-branch.h to spec-ctrl.h and fix up the includes in the relevant files. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Konrad Rzeszutek Wilk
Expose the CPUID.7.EDX[31] bit to the guest, and also guard against various combinations of SPEC_CTRL MSR values. The handling of the MSR (to take into account the host value of SPEC_CTRL Bit(2)) is taken care of in the patch "KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS". Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Konrad Rzeszutek Wilk
A guest may modify the SPEC_CTRL MSR from the value used by the kernel. Since the kernel doesn't use IBRS, this means a value of zero is what is needed in the host. But 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to the other bits as reserved, so the kernel should respect the boot-time SPEC_CTRL value and use that. This allows dealing with future extensions to the SPEC_CTRL interface, if any at all. Note: this uses wrmsrl() instead of native_wrmsrl(). It does not make any difference, as paravirt will overwrite the callq *0xfff.. with the wrmsrl assembler code. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
-