Commit fe48bc79, authored by Marc Zyngier, committed by Yang Yingliang

KVM: arm64: Ensure I-cache isolation between vcpus of a same VM

mainline inclusion
from mainline-v5.12-rc3
commit 01dc9262
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4BLL0
CVE: NA

--------------------------------

It recently became apparent that the ARMv8 architecture has interesting
rules regarding attributes being used when fetching instructions
if the MMU is off at Stage-1.

In this situation, the CPU is allowed to fetch from the PoC and
allocate into the I-cache (unless the memory is mapped with
the XN attribute at Stage-2).

If we transpose this to vcpus sharing a single physical CPU,
it is possible for a vcpu running with its MMU off to influence
another vcpu running with its MMU on, as the latter is expected to
fetch from the PoU (and self-patching code doesn't flush below that
level).
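
For reference, the cache maintenance that self-patching guest code typically performs stops at the PoU, along the lines of the sketch below (illustrative only, not part of this patch; the helper name is made up, and the sequence mirrors the usual __flush_icache_range()-style clean+invalidate):

/*
 * Illustrative sketch: publish one patched instruction to the
 * instruction stream. Both cache operations stop at the PoU, so a
 * stale line allocated into the I-cache from the PoC (by a vcpu
 * running with its MMU off) is not removed by this.
 */
static inline void sync_patched_insn(unsigned long addr)
{
	asm volatile("dc	cvau, %0\n"	/* clean D-cache line to PoU */
		     "dsb	ish\n"
		     "ic	ivau, %0\n"	/* invalidate I-cache line to PoU */
		     "dsb	ish\n"
		     "isb\n"
		     : : "r" (addr) : "memory");
}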

In order to solve this, reuse the vcpu-private TLB invalidation
code to apply the same policy to the I-cache, nuking it every time
the vcpu runs on a physical CPU that ran another vcpu of the same
VM in the past.

This involves renaming __kvm_tlb_flush_local_vmid() to
__kvm_flush_cpu_context(), and inserting a local i-cache invalidation
there.
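
Pieced together from the arm64 hunks below, the renamed helper ends up looking roughly as follows (a sketch reconstructed from this diff; __tlb_switch_to_guest()/__tlb_switch_to_host() are the pre-existing VMID switch helpers of this kernel tree, and only the "ic iallu" line is new):

void __hyp_text __kvm_flush_cpu_context(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
	unsigned long flags;

	/* Switch to the requesting vcpu's VMID */
	__tlb_switch_to_guest()(kvm, &flags);

	__tlbi(vmalle1);		/* local TLB invalidation for this VMID */
	asm volatile("ic iallu");	/* new: local I-cache invalidation */
	dsb(nsh);
	isb();

	__tlb_switch_to_host()(kvm, flags);
}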

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210303164505.68492-1-maz@kernel.org
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Parent: b4e67acc
...
@@ -66,9 +66,9 @@ extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
 extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_context(struct kvm_vcpu *vcpu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
...
@@ -56,7 +56,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 	__kvm_tlb_flush_vmid(kvm);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+void __hyp_text __kvm_flush_cpu_context(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
...
@@ -63,9 +63,9 @@ extern char __kvm_hyp_init_end[];
 extern char __kvm_hyp_vector[];
 
 extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_context(struct kvm_vcpu *vcpu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
...
@@ -149,7 +149,7 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
 	__tlb_switch_to_host()(kvm, flags);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+void __hyp_text __kvm_flush_cpu_context(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
 	unsigned long flags;
...
@@ -158,6 +158,7 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	__tlb_switch_to_guest()(kvm, &flags);
 
 	__tlbi(vmalle1);
+	asm volatile("ic iallu");
 	dsb(nsh);
 	isb();
...
@@ -424,11 +424,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	cpu_data = this_cpu_ptr(&kvm_host_data);
 
 	/*
+	 * We guarantee that both TLBs and I-cache are private to each
+	 * vcpu. If detecting that a vcpu from the same VM has
+	 * previously run on the same physical CPU, call into the
+	 * hypervisor code to nuke the relevant contexts.
+	 *
 	 * We might get preempted before the vCPU actually runs, but
 	 * over-invalidation doesn't affect correctness.
 	 */
 	if (*last_ran != vcpu->vcpu_id) {
-		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
+		kvm_call_hyp(__kvm_flush_cpu_context, vcpu);
 		*last_ran = vcpu->vcpu_id;
 	}
...
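
For context, the hunk above relies on a per-physical-CPU record of the last vcpu of each VM that ran there. Assuming that record is the kvm->arch.last_vcpu_ran per-CPU field (the field itself is not visible in this diff, so treat the name as an assumption), the load path behaves roughly like this:

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/* Assumed field: per-VM, per-CPU id of the last vcpu that ran here */
	int *last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);

	/*
	 * Another vcpu of this VM was the last to run on this physical
	 * CPU: its TLB entries and (with this patch) I-cache lines may
	 * still be live, so wipe both before this vcpu executes. Getting
	 * preempted before actually running only over-invalidates, which
	 * is harmless.
	 */
	if (*last_ran != vcpu->vcpu_id) {
		kvm_call_hyp(__kvm_flush_cpu_context, vcpu);
		*last_ran = vcpu->vcpu_id;
	}

	/* ... remainder of kvm_arch_vcpu_load() ... */
}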