commit 3c7670d5 authored by Liran Alon, committed by Greg Kroah-Hartman

KVM: VMX: Update shared MSRs to be saved/restored on MSR_EFER.LMA changes

[ Upstream commit f48b4711 ]

When guest transitions from/to long-mode by modifying MSR_EFER.LMA,
the list of shared MSRs to be saved/restored on guest<->host
transitions is updated (See vmx_set_efer() call to setup_msrs()).

On every entry to guest, vcpu_enter_guest() calls
vmx_prepare_switch_to_guest(). This function should also take care
of setting the shared MSRs to be saved/restored. However, the
function does nothing in case we are already running with loaded
guest state (vmx->loaded_cpu_state != NULL).

This means that even when guest modifies MSR_EFER.LMA which results
in updating the list of shared MSRs, it isn't being taken into account
by vmx_prepare_switch_to_guest() because it happens while we are
running with loaded guest state.

To fix the above-mentioned issue, add a flag to mark that the list of
shared MSRs has been updated, and modify vmx_prepare_switch_to_guest()
to set shared MSRs when running with host state *OR* when the list of
shared MSRs has been updated.

Note that this issue was mistakenly introduced by commit
678e315e ("KVM: vmx: add dedicated utility to access guest's
kernel_gs_base") because previously vmx_set_efer() always called
vmx_load_host_state() which resulted in vmx_prepare_switch_to_guest() to
set shared MSRs.

Fixes: 678e315e ("KVM: vmx: add dedicated utility to access guest's kernel_gs_base")
Reported-by: Eyal Moscovici <eyal.moscovici@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 8038f92d
@@ -962,6 +962,7 @@ struct vcpu_vmx {
 	struct shared_msr_entry *guest_msrs;
 	int nmsrs;
 	int save_nmsrs;
+	bool guest_msrs_dirty;
 	unsigned long host_idt_base;
 #ifdef CONFIG_X86_64
 	u64 msr_host_kernel_gs_base;
@@ -2874,6 +2875,20 @@ static void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 
 	vmx->req_immediate_exit = false;
 
+	/*
+	 * Note that guest MSRs to be saved/restored can also be changed
+	 * when guest state is loaded. This happens when guest transitions
+	 * to/from long-mode by setting MSR_EFER.LMA.
+	 */
+	if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {
+		vmx->guest_msrs_dirty = false;
+		for (i = 0; i < vmx->save_nmsrs; ++i)
+			kvm_set_shared_msr(vmx->guest_msrs[i].index,
+					   vmx->guest_msrs[i].data,
+					   vmx->guest_msrs[i].mask);
+	}
+
 	if (vmx->loaded_cpu_state)
 		return;
 
@@ -2934,11 +2949,6 @@ static void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 		vmcs_writel(HOST_GS_BASE, gs_base);
 		host_state->gs_base = gs_base;
 	}
-
-	for (i = 0; i < vmx->save_nmsrs; ++i)
-		kvm_set_shared_msr(vmx->guest_msrs[i].index,
-				   vmx->guest_msrs[i].data,
-				   vmx->guest_msrs[i].mask);
 }
 
 static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
@@ -3418,6 +3428,7 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 			move_msr_up(vmx, index, save_nmsrs++);
 
 	vmx->save_nmsrs = save_nmsrs;
+	vmx->guest_msrs_dirty = true;
 
 	if (cpu_has_vmx_msr_bitmap())
 		vmx_update_msr_bitmap(&vmx->vcpu);