    KVM: Remove dirty handling from gfn_to_pfn_cache completely · cf1d88b3
    Committed by David Woodhouse
    It isn't OK to cache the dirty status of a page in internal structures
    for an indefinite period of time.
    
    Any time a vCPU exits the run loop to userspace might be its last; the
    VMM might do its final check of the dirty log, flush the last remaining
    dirty pages to the destination and complete a live migration. If we
    have internal 'dirty' state which doesn't get flushed until the vCPU
    is finally destroyed on the source after migration is complete, then
    we have lost data because that will escape the final copy.
    
    This problem already exists with the use of kvm_vcpu_unmap() to mark
    pages dirty in e.g. VMX nesting.
    
    Note that the actual Linux MM already considers the page to be dirty
    since we have a writeable mapping of it. This is just about the KVM
    dirty logging.
    
    For the nesting-style use cases (KVM_GUEST_USES_PFN) we will need to
    track which gfn_to_pfn_caches have been used and explicitly mark the
    corresponding pages dirty before returning to userspace. But we would
    have needed external tracking of that anyway, rather than walking the
    full list of GPCs to find those belonging to this vCPU which are dirty.
    
    So let's rely *solely* on that external tracking, and keep it simple
    rather than laying a tempting trap for callers to fall into.
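
    The external-tracking scheme described above can be sketched as a small
    userspace model (not kernel code; all names and structures here are
    illustrative stand-ins, assumed for the example): the cache itself keeps
    no dirty flag, the vCPU remembers which caches it wrote through, and it
    marks those pages dirty in the log explicitly on every return to
    userspace, so a final dirty-log pass during live migration sees them.

    ```c
    /* Userspace model of the commit's design. The cache struct has no
     * 'dirty' field -- that is the point of the change; dirtiness is
     * tracked externally, per vCPU, and flushed eagerly. */
    #include <assert.h>
    #include <stdint.h>

    #define MAX_USED 8

    /* Mock per-memslot dirty bitmap (one bit per guest frame). */
    static uint64_t dirty_bitmap;

    struct gpc {            /* stand-in for gfn_to_pfn_cache */
        uint64_t gfn;
        /* deliberately no 'dirty' member */
    };

    /* Per-vCPU external tracking of caches written since the last exit. */
    struct vcpu {
        struct gpc *used[MAX_USED];
        int nr_used;
    };

    static void mark_page_dirty(uint64_t gfn)
    {
        dirty_bitmap |= 1ULL << gfn;
    }

    /* Record a write through a cache; remember it for later flushing. */
    static void vcpu_write_via_gpc(struct vcpu *v, struct gpc *c)
    {
        for (int i = 0; i < v->nr_used; i++)
            if (v->used[i] == c)
                return;        /* already tracked */
        v->used[v->nr_used++] = c;
    }

    /* Called on every return to userspace: flush dirty state immediately,
     * rather than caching it until the vCPU is destroyed. */
    static void vcpu_flush_dirty(struct vcpu *v)
    {
        for (int i = 0; i < v->nr_used; i++)
            mark_page_dirty(v->used[i]->gfn);
        v->nr_used = 0;
    }
    ```

    Because the flush happens on each exit, there is no window where dirty
    state lives only in a long-lived internal structure that the VMM's final
    dirty-log check cannot see.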
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <20220303154127.202856-3-dwmw2@infradead.org>