Commit 17763049 authored by Sean Christopherson, committed by Zheng Zengkai

KVM: VMX: Disable preemption when probing user return MSRs

stable inclusion
from stable-5.10.38
commit 31f29749ee970c251b3a7e5b914108425940d089
bugzilla: 51875
CVE: NA

--------------------------------

commit 5104d7ff upstream.

Disable preemption when probing a user return MSR via RDMSR/WRMSR.  If
the MSR holds a different value per logical CPU, the WRMSR could corrupt
the host's value if KVM is preempted between the RDMSR and WRMSR, and
then rescheduled on a different CPU.

Opportunistically land the helper in common x86; SVM will use the helper
in a future commit.

Fixes: 4be53410 ("KVM: VMX: Initialize vmx->guest_msrs[] right after allocation")
Cc: stable@vger.kernel.org
Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210504171734.1434054-6-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent f3a0af70
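For context, the sketch below annotates the pre-patch probe sequence from vmx_create_vcpu() with the preemption window the commit message describes. It is an illustration only, not code from this patch; the function name is hypothetical, while rdmsr_safe()/wrmsr_safe() are the existing x86 helpers the old code used.

/*
 * Illustration only (not part of this patch): the old probe sequence,
 * annotated with the race described in the commit message above.
 */
static bool vmx_probe_uret_msr_unsafe(u32 index)        /* hypothetical name */
{
        u32 data_low, data_high;

        if (rdmsr_safe(index, &data_low, &data_high) < 0)   /* read on, say, CPU0 */
                return false;

        /*
         * Preemption is enabled here: if the task is migrated to CPU1 before
         * the write below runs, CPU0's value is written into CPU1's copy of
         * the MSR.  For an MSR whose value differs per logical CPU, that
         * corrupts the host's value on CPU1.  kvm_probe_user_return_msr()
         * closes this window by wrapping the RDMSR/WRMSR pair in
         * preempt_disable()/preempt_enable().
         */
        return wrmsr_safe(index, data_low, data_high) >= 0; /* may run on CPU1 */
}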
@@ -1682,6 +1682,7 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
 		    unsigned long icr, int op_64_bit);
 
 void kvm_define_user_return_msr(unsigned index, u32 msr);
+int kvm_probe_user_return_msr(u32 msr);
 int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
 
 u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc);
......
@@ -6877,12 +6877,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) {
 		u32 index = vmx_uret_msrs_list[i];
-		u32 data_low, data_high;
 		int j = vmx->nr_uret_msrs;
 
-		if (rdmsr_safe(index, &data_low, &data_high) < 0)
-			continue;
-		if (wrmsr_safe(index, data_low, data_high) < 0)
+		if (kvm_probe_user_return_msr(index))
 			continue;
 
 		vmx->guest_uret_msrs[j].slot = i;
......
@@ -365,6 +365,22 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
 	}
 }
 
+int kvm_probe_user_return_msr(u32 msr)
+{
+	u64 val;
+	int ret;
+
+	preempt_disable();
+	ret = rdmsrl_safe(msr, &val);
+	if (ret)
+		goto out;
+	ret = wrmsrl_safe(msr, val);
+out:
+	preempt_enable();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_probe_user_return_msr);
+
 void kvm_define_user_return_msr(unsigned slot, u32 msr)
 {
 	BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS);
......
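Since the commit message notes that SVM will adopt the helper in a future commit, here is a minimal sketch of how an SVM-side caller might probe its user return MSR list with the new helper. The list and function names (svm_uret_msrs_list, svm_probe_user_return_msrs) are hypothetical placeholders, not part of this series; only kvm_probe_user_return_msr() is introduced by this patch.

/*
 * Hypothetical sketch of a future SVM-side caller of the new common helper.
 */
static const u32 svm_uret_msrs_list[] = {               /* hypothetical list */
	MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK,
};

static void svm_probe_user_return_msrs(void)            /* hypothetical caller */
{
	int i;

	for (i = 0; i < ARRAY_SIZE(svm_uret_msrs_list); i++) {
		/*
		 * The helper disables preemption around the RDMSR/WRMSR pair,
		 * so the probe cannot migrate between the read and the write
		 * and clobber another CPU's per-CPU MSR value.
		 */
		if (kvm_probe_user_return_msr(svm_uret_msrs_list[i]))
			pr_info("MSR 0x%x not usable as a user return MSR\n",
				svm_uret_msrs_list[i]);
	}
}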