    KVM: ppc: directly insert shadow mappings into the hardware TLB · 7924bd41
    Hollis Blanchard committed
    Formerly, we maintained a per-vcpu shadow TLB, and on every entry to the
    guest we would load this array into the hardware TLB. This consumed 1280
    bytes of memory (64 entries of 16 bytes, plus a struct page pointer for
    each), and also required some assembly to loop over the array on every
    entry.
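
    For concreteness, a minimal sketch of the kind of per-vcpu state this
    describes (field and type names are illustrative rather than the exact
    kernel definitions; sizes assume the 32-bit 440 target, where a pointer
    is 4 bytes):

        #define PPC44x_TLB_SIZE 64      /* the 440 has 64 TLB entries */

        /* One shadow entry: tid plus the three TLB words = 16 bytes. */
        struct kvmppc_44x_tlbe {
                u32 tid;        /* only the low 8 bits are used */
                u32 word0;
                u32 word1;
                u32 word2;
        };

        /* Old scheme: a full in-memory copy of the hardware TLB per vcpu.
         * 64 * 16 bytes of entries + 64 * 4 bytes of page pointers
         * = 1024 + 256 = 1280 bytes, reloaded on every guest entry. */
        struct kvmppc_44x_shadow_tlb {
                struct kvmppc_44x_tlbe shadow_tlb[PPC44x_TLB_SIZE];
                struct page *shadow_pages[PPC44x_TLB_SIZE];
        };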
    
    Instead of saving a copy in memory, we can just store shadow mappings
    directly into the hardware TLB, accepting that the host kernel will
    clobber these as part of the normal 440 TLB round robin. With this
    approach we need less than half the memory, and we have decreased the
    exit handling time for all guest exits, at the cost of an increased
    number of TLB misses because the host overwrites some guest entries.
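
    The replacement bookkeeping, roughly along the lines of what the new
    kvm_44x.h header holds (names approximate the real structures and build
    on the sketch above): per hardware TLB slot we keep only enough state to
    release the backing host page when the host evicts a shadow mapping.

        /* New scheme: no in-memory copy of the TLB contents. For each
         * hardware TLB slot, remember only which host page backs it, so
         * the page can be released when round-robin replacement clobbers
         * the entry. Illustratively, 64 * 8 = 512 bytes of shadow state
         * instead of 1280. */
        struct kvmppc_44x_shadow_ref {
                struct page *page;      /* host page backing this slot */
                u16 gtlb_index;         /* guest TLB entry it came from */
                u8 writeable;           /* mapped writable? (dirty tracking) */
                u8 tid;                 /* translation ID it was installed with */
        };

        struct kvmppc_vcpu_44x {
                /* Unmodified copy of the guest's TLB. */
                struct kvmppc_44x_tlbe guest_tlb[PPC44x_TLB_SIZE];
                /* What backs each hardware TLB slot. */
                struct kvmppc_44x_shadow_ref shadow_refs[PPC44x_TLB_SIZE];
        };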
    
    These savings will increase on processors with larger TLBs, or on
    processors that implement intelligent flush instructions like tlbivax
    (which avoid the need to walk arrays in software).
    
    Beyond the memory savings and the code simplification, we also have a
    greater chance of leaving other host userspace mappings in the TLB,
    instead of forcing all subsequent tasks to re-fault all their mappings.
    Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>