Commit 66d73e12 authored by Peter Feiner, committed by Paolo Bonzini

KVM: X86: MMU: no mmu_notifier_seq++ in kvm_age_hva

The MMU notifier sequence number keeps GPA->HPA mappings in sync when
GPA->HPA lookups are done outside of the MMU lock (e.g., in
tdp_page_fault). Since kvm_age_hva doesn't change GPA->HPA, it's
unnecessary to increment the sequence number.
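
For context, the sequence number is consumed by the lock-free fault path roughly as follows. This is a simplified sketch of the retry pattern used around tdp_page_fault in mmu.c of this era, paraphrased for illustration rather than quoted verbatim:

/*
 * Sketch: validating a GPA->HPA lookup done outside the MMU lock
 * against mmu_notifier_seq (simplified, not exact kernel code).
 */
	mmu_seq = vcpu->kvm->mmu_notifier_seq;
	smp_rmb();

	pfn = gfn_to_pfn(vcpu->kvm, gfn);	/* GPA->HPA lookup, mmu_lock not held */

	spin_lock(&vcpu->kvm->mmu_lock);
	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
		/* a notifier ran meanwhile; pfn may be stale, retry the fault */
		goto out_unlock;
	/* ... install the SPTE for pfn ... */

Because aging only clears accessed information and leaves the host-side GPA->HPA translation intact, a concurrent fault that completes with the old pfn is still correct; bumping the counter in kvm_age_hva would only force spurious retries here.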
Signed-off-by: Peter Feiner <pfeiner@google.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Parent c63e4563
@@ -1660,17 +1660,9 @@ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
 	 * This has some overhead, but not as much as the cost of swapping
 	 * out actively used pages or breaking up actively used hugepages.
 	 */
-	if (!shadow_accessed_mask) {
-		/*
-		 * We are holding the kvm->mmu_lock, and we are blowing up
-		 * shadow PTEs. MMU notifier consumers need to be kept at bay.
-		 * This is correct as long as we don't decouple the mmu_lock
-		 * protected regions (like invalidate_range_start|end does).
-		 */
-		kvm->mmu_notifier_seq++;
+	if (!shadow_accessed_mask)
 		return kvm_handle_hva_range(kvm, start, end, 0,
 					    kvm_unmap_rmapp);
-	}
 
 	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
 }