Commit 7bfdf217 authored by Lan Tianyu, committed by Paolo Bonzini

KVM/x86: Call smp_wmb() before increasing tlbs_dirty

Update spte before increasing tlbs_dirty to make sure no tlb flush
is lost after spte is zapped. This pairs with the barrier in
kvm_flush_remote_tlbs().
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Parent a30a0509
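For reference, the pairing described in the commit message works roughly as in the minimal userspace sketch below: the zap side stores the spte, issues a write barrier, and only then bumps tlbs_dirty, while the flush side reads tlbs_dirty before flushing and clears only the count it observed. This is an illustration using C11 atomics in place of the kernel's smp_wmb()/smp_mb(); the names zap_spte and flush_remote_tlbs are hypothetical stand-ins, not the kernel's actual implementation.

/*
 * Minimal userspace sketch (not kernel code) of the ordering pattern:
 * a zap counted in tlbs_dirty is always visible to whoever performs
 * the flush, so no TLB flush is lost.  All names are illustrative.
 */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long tlbs_dirty;
static _Atomic unsigned long spte;	/* stand-in for a shadow PTE */

static void zap_spte(void)
{
	atomic_store_explicit(&spte, 0, memory_order_relaxed);	/* "drop_spte" */
	atomic_thread_fence(memory_order_release);		/* plays the role of smp_wmb() */
	atomic_fetch_add_explicit(&tlbs_dirty, 1, memory_order_relaxed);
}

static void flush_remote_tlbs(void)
{
	long dirty = atomic_load_explicit(&tlbs_dirty, memory_order_relaxed);

	atomic_thread_fence(memory_order_acquire);		/* pairs with the release above */
	printf("flushing TLBs, dirty count %ld\n", dirty);	/* stands in for sending flush IPIs */

	/* Only clear the zaps we know were flushed; later ones stay counted. */
	atomic_compare_exchange_strong(&tlbs_dirty, &dirty, 0);
}

int main(void)
{
	zap_spte();
	flush_remote_tlbs();
	return 0;
}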
@@ -960,6 +960,12 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return 0;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
+			/*
+			 * Update spte before increasing tlbs_dirty to make
+			 * sure no tlb flush is lost after spte is zapped; see
+			 * the comments in kvm_flush_remote_tlbs().
+			 */
+			smp_wmb();
 			vcpu->kvm->tlbs_dirty++;
 			continue;
 		}
@@ -975,6 +981,11 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
+			/*
+			 * The same as above where we are doing
+			 * prefetch_invalid_gpte().
+			 */
+			smp_wmb();
 			vcpu->kvm->tlbs_dirty++;
 			continue;
 		}