1. 05 Jun 2013, 1 commit
    •
      KVM: MMU: fast invalidate all pages · 5304b8d3
      Authored by Xiao Guangrong
      The current kvm_mmu_zap_all is really slow: it holds mmu-lock while
      walking and zapping all shadow pages one by one, and it also needs to
      zap every guest page's rmap and every shadow page's parent spte list.
      Things get worse as the guest uses more memory or vcpus, so this does
      not scale.
      
      In this patch, we introduce a faster way to invalidate all shadow pages.
      KVM maintains a global mmu valid generation-number, stored in
      kvm->arch.mmu_valid_gen, and every shadow page stores the current global
      generation-number in sp->mmu_valid_gen when it is created.
      
      When KVM needs to zap all shadow page sptes, it simply increases the
      global generation-number and then reloads the root shadow pages on all
      vcpus. Each vcpu then creates a new shadow page table according to the
      current generation-number, which ensures the old pages are no longer
      used. The obsolete pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
      are then zapped using a lock-break technique.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      5304b8d3
  2. 22 Mar 2013, 1 commit
  3. 13 Mar 2013, 1 commit
  4. 20 Sep 2012, 3 commits
    •
      KVM: MMU: Optimize is_last_gpte() · 6fd01b71
      Authored by Avi Kivity
      Instead of branchy code that depends on the level, gpte.ps, and the mmu
      configuration, prepare everything in a bitmap during mode changes and
      look it up at runtime.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      6fd01b71
    •
      KVM: MMU: Optimize pte permission checks · 97d64b78
      Authored by Avi Kivity
      walk_addr_generic() permission checks are a maze of branchy code, which is
      performed four times per lookup.  It depends on the type of access, efer.nxe,
      cr0.wp, cr4.smep, and in the near future, cr4.smap.
      
      Optimize this away by precalculating all variants and storing them in a
      bitmap.  The bitmap is recalculated when rarely-changing variables change
      (cr0, cr4) and is indexed by the often-changing variables (page fault error
      code, pte access permissions).
      
      The permission check is moved to the end of the loop, otherwise an SMEP
      fault could be reported as a false positive, when PDE.U=1 but PTE.U=0.
      Noted by Xiao Guangrong.
      
      The result is short, branch-free code.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      97d64b78
    •
      KVM: MMU: Push clean gpte write protection out of gpte_access() · 8ea667f2
      Authored by Avi Kivity
      gpte_access() computes the access permissions of a guest pte and also
      write-protects clean gptes.  This is wrong when we are servicing a
      write fault (since we'll be setting the dirty bit momentarily) but
      correct when instantiating a speculative spte, or when servicing a
      read fault (since we'll want to trap a following write in order to
      set the dirty bit).
      
      It doesn't seem to hurt in practice, but in order to make the code
      readable, push the write protection out of gpte_access() and into
      a new protect_clean_gpte() which is called explicitly when needed.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      8ea667f2
  5. 24 Jul 2011, 2 commits
  6. 24 Oct 2010, 5 commits
  7. 01 Mar 2010, 5 commits
  8. 10 Sep 2009, 2 commits
  9. 10 Jun 2009, 1 commit
  10. 24 Mar 2009, 1 commit
  11. 20 Jul 2008, 1 commit
    •
      KVM: MMU: Fix false flooding when a pte points to page table · 1b7fcd32
      Authored by Avi Kivity
      The KVM MMU tries to detect when a speculative pte update is not
      actually used by a demand fault, by checking the accessed bit of the
      shadow pte.  If the shadow pte has not been accessed, we deem the page
      table flooded and remove the shadow page table, allowing further pte
      updates to proceed without emulation.
      
      However, if the pte itself points at a page table and is only used for
      write operations, the accessed bit will never be set, since all
      accesses happen through the emulator.
      
      This is exactly what happens with kscand on old (2.4.x) HIGHMEM kernels.
      The kernel points a kmap_atomic() pte at a page table, and then
      proceeds with read-modify-write operations to look at the dirty and accessed
      bits.  We get a false flood trigger on the kmap ptes, which results in the
      mmu spending all its time setting up and tearing down shadows.
      
      Fix by setting the shadow accessed bit on emulated accesses.
      Signed-off-by: Avi Kivity <avi@qumranet.com>
      1b7fcd32
  12. 04 May 2008, 2 commits
  13. 27 Apr 2008, 1 commit
  14. 31 Jan 2008, 1 commit
  15. 30 Jan 2008, 2 commits