    KVM: PPC: Book3S HV: Unify dirty page map between HPT and radix · e641a317
    Committed by Paul Mackerras
    Currently, the HPT code in HV KVM maintains a dirty bit per guest page
    in the rmap array, whether or not dirty page tracking has been enabled
    for the memory slot.  In contrast, the radix code maintains a dirty
    bit per guest page in memslot->dirty_bitmap, and only does so when
    dirty page tracking has been enabled.
    
    This changes the HPT code to maintain the dirty bits in the memslot
    dirty_bitmap like radix does.  This results in slightly less code
    overall, and will mean that we do not lose the dirty bits when
    transitioning between HPT and radix mode in future.
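
    As a rough user-space sketch (illustrative structure and function
    names, not KVM's own code), the two schemes differ only in where the
    per-page dirty state lives: a word per guest page in an rmap-style
    array, versus one bit per guest page in a bitmap hung off the
    memslot, which exists only while dirty tracking is enabled:

        #include <stdio.h>
        #include <stdlib.h>

        #define PAGE_DIRTY      0x1UL

        struct memslot {
                unsigned long base_gfn;
                unsigned long npages;
                unsigned long *rmap;            /* one word per page (old HPT style) */
                unsigned long *dirty_bitmap;    /* one bit per page (radix style) */
        };

        /* Old HPT scheme: the dirty flag is kept in the page's rmap entry. */
        static void mark_dirty_rmap(struct memslot *slot, unsigned long gfn)
        {
                slot->rmap[gfn - slot->base_gfn] |= PAGE_DIRTY;
        }

        /* Unified scheme: the dirty bit is set in the memslot bitmap, if any. */
        static void mark_dirty_bitmap(struct memslot *slot, unsigned long gfn)
        {
                unsigned long idx = gfn - slot->base_gfn;

                if (slot->dirty_bitmap)         /* only allocated when tracking is on */
                        slot->dirty_bitmap[idx / 64] |= 1UL << (idx % 64);
        }

        int main(void)
        {
                struct memslot slot = { .base_gfn = 0x100, .npages = 256 };

                slot.rmap = calloc(slot.npages, sizeof(unsigned long));
                slot.dirty_bitmap = calloc(slot.npages / 64, sizeof(unsigned long));
                mark_dirty_rmap(&slot, 0x105);
                mark_dirty_bitmap(&slot, 0x105);
                printf("rmap[5] = %#lx, bitmap[0] = %#lx\n",
                       slot.rmap[5], slot.dirty_bitmap[0]);
                free(slot.rmap);
                free(slot.dirty_bitmap);
                return 0;
        }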
    
    There is one minor change to behaviour as a result.  With HPT, when
    dirty tracking was enabled for a memslot, we would previously clear
    all the dirty bits at that point (both in the HPT entries and in the
    rmap arrays), meaning that a KVM_GET_DIRTY_LOG ioctl immediately
    following would show no pages as dirty (assuming no vcpus have run
    in the meantime).  With this change, the dirty bits on HPT entries
    are not cleared at the point where dirty tracking is enabled, so
    KVM_GET_DIRTY_LOG would show as dirty any guest pages that are
    resident in the HPT and dirty.  This is consistent with what happens
    on radix.
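
    To make the behaviour difference concrete, here is a hedged sketch
    (hypothetical types and names, assuming a software dirty flag per
    HPT entry) of the kind of harvest a dirty-log query performs: any
    resident entry still carrying its dirty bit is folded into the
    bitmap, so a page dirtied before tracking was enabled is reported
    on the first query:

        #include <stdio.h>

        #define HPTE_DIRTY      0x1UL

        struct hpt_entry {
                unsigned long gfn;
                unsigned long flags;
        };

        /* Fold dirty bits left in resident HPT entries into the slot bitmap. */
        static void harvest_dirty(struct hpt_entry *hpt, int n,
                                  unsigned long base_gfn, unsigned long *bitmap)
        {
                for (int i = 0; i < n; i++) {
                        if (hpt[i].flags & HPTE_DIRTY) {
                                unsigned long idx = hpt[i].gfn - base_gfn;

                                bitmap[idx / 64] |= 1UL << (idx % 64);
                                hpt[i].flags &= ~HPTE_DIRTY;    /* clear once reported */
                        }
                }
        }

        int main(void)
        {
                /* Page 0x10 was dirtied before tracking was enabled and is
                 * still resident in the HPT with its dirty bit set. */
                struct hpt_entry hpt[] = { { 0x10, HPTE_DIRTY }, { 0x11, 0 } };
                unsigned long bitmap[1] = { 0 };

                harvest_dirty(hpt, 2, 0x10, bitmap);
                printf("first dirty-log word: %#lx\n", bitmap[0]);      /* 0x1 */
                return 0;
        }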
    
    This also fixes a bug in the mark_pages_dirty() function for radix
    (in the sense that the function no longer exists).  In the case where
    a large page of 64 normal pages or more is marked dirty, the
    addressing of the dirty bitmap was incorrect and could write past
    the end of the bitmap.  Fortunately this case was never hit in
    practice because a 2MB large page is only 32 x 64kB pages, and we
    don't support backing the guest with 1GB huge pages at this point.
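
    For illustration only (this is not the kernel's replacement code),
    marking a run of npages consecutive pages dirty has to address the
    bitmap word by word once the run reaches 64 pages: whole 64-bit
    words can be filled directly, with only the remainder handled bit
    by bit:

        #include <stdio.h>
        #include <string.h>

        #define BITS_PER_LONG   64

        /* Set npages consecutive bits in map, starting at bit index i. */
        static void set_dirty_run(unsigned long *map, unsigned long i,
                                  unsigned long npages)
        {
                if (npages >= BITS_PER_LONG && (i % BITS_PER_LONG) == 0) {
                        /* Whole-word case: a huge page of 64+ small pages. */
                        memset((char *)map + i / 8, 0xff,
                               npages / BITS_PER_LONG * sizeof(unsigned long));
                        i += npages / BITS_PER_LONG * BITS_PER_LONG;
                        npages %= BITS_PER_LONG;
                }
                for (; npages; npages--, i++)
                        map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
        }

        int main(void)
        {
                unsigned long map[4] = { 0 };

                /* A 2MB page backed by 64kB pages is a run of only 32 bits;
                 * a longer run spans several bitmap words. */
                set_dirty_run(map, 0, 32);
                set_dirty_run(map, 64, 128);
                printf("%#lx %#lx %#lx %#lx\n", map[0], map[1], map[2], map[3]);
                return 0;
        }
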
    Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
book3s_64_mmu_hv.c