    KVM: x86: optimize PKU branching in kvm_load_{guest|host}_xsave_state · 945024d7
    Committed by Jon Kohler
    kvm_load_{guest|host}_xsave_state handles xsave on vm entry and exit,
    part of which is managing memory protection key state. The latest
    arch.pkru is updated with a rdpkru, and if it does not match the base
    host_pkru (which happens about 70% of the time), we issue a __write_pkru.
    
    To improve performance, implement the following optimizations:
     1. Reorder if conditions prior to wrpkru in both
        kvm_load_{guest|host}_xsave_state.
    
        Flip the ordering of the || condition so that XFEATURE_MASK_PKRU is
        checked first; when instrumented in our environment, it appeared to
        be always true and is less overall work than kvm_read_cr4_bits.
    
        For kvm_load_guest_xsave_state, hoist arch.pkru != host_pkru ahead
        one position. When instrumented, I saw this be true roughly ~70% of
        the time vs the other conditions which were almost always true.
        With this change, we avoid the third condition check ~30% of the time.
    
 2. Wrap the PKU sections with CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS,
    so that if the user compiles out this feature, these branches are
    removed entirely.
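
Both optimizations above can be illustrated with a small, self-contained sketch. This is not the actual kvm_load_guest_xsave_state code: the struct, field names, and the cr4_pke_enabled() stub are hypothetical stand-ins for vcpu->arch state and kvm_read_cr4_bits, used only to show the short-circuit ordering and the Kconfig guard.

```c
#include <stdbool.h>

/* Stand-in for the Kconfig option; defined here so the PKU branch compiles in. */
#define CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS 1

static int expensive_checks;	/* counts how often the costlier check runs */

/* Stand-in for kvm_read_cr4_bits(vcpu, X86_CR4_PKE), the costlier condition. */
static bool cr4_pke_enabled(void)
{
	expensive_checks++;
	return true;
}

/* Hypothetical slice of vcpu->arch state. */
struct vcpu_state {
	bool xcr0_has_pkru;	/* XFEATURE_MASK_PKRU bit of xcr0 */
	unsigned int pkru;
	unsigned int host_pkru;
};

static void load_guest_pkru(struct vcpu_state *v)
{
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
	/*
	 * Reordered per the commit message: the ~70%-true inequality gates
	 * the rest, and the almost-always-true XFEATURE_MASK_PKRU test runs
	 * before the costlier CR4 read, which is therefore usually skipped.
	 */
	if (v->pkru != v->host_pkru &&
	    (v->xcr0_has_pkru || cr4_pke_enabled()))
		;	/* write_pkru(v->pkru) in the real code */
#endif
}
```

With the Kconfig symbol undefined, the preprocessor drops the branch entirely, so no PKU code is emitted at all.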
    Signed-off-by: Jon Kohler <jon@nutanix.com>
    Message-Id: <20220324004439.6709-1-jon@nutanix.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>