1. 22 Mar 2013, 1 commit
  2. 13 Mar 2013, 1 commit
  3. 20 Sep 2012, 3 commits
    • KVM: MMU: Optimize is_last_gpte() · 6fd01b71
      Avi Kivity committed:
      Instead of branchy code depending on level, gpte.ps, and mmu configuration,
      prepare everything in a bitmap during mode changes and look it up during
      runtime.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
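      The idea in this commit can be sketched roughly as follows. This is an illustrative reconstruction, not the actual kernel code: the struct, bit layout, and function names (`mmu_ctx`, `update_last_pte_bitmap`, `PT_PAGE_SIZE_BIT`) are invented for the sketch. A bitmap over (level, PS-bit) combinations is rebuilt on mode changes, so the per-lookup check becomes a single table lookup:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PT_PAGE_SIZE_BIT 7              /* PS bit position in a guest pte (illustrative) */

/* Sketch of a per-mmu context; the real struct kvm_mmu holds far more state. */
struct mmu_ctx {
    uint8_t last_pte_bitmap;            /* one bit per (level, ps) combination */
    bool pse;                           /* large pages enabled in this mode */
};

/* Recomputed only on mode changes (cr0/cr4/efer writes), not per lookup. */
static void update_last_pte_bitmap(struct mmu_ctx *mmu)
{
    uint8_t map = 1 << 0;               /* level-1 ptes are always last */
    if (mmu->pse)
        map |= 1 << (1 | (1 << 2));     /* level 2 with PS=1: index = (level-1) | ps<<2 */
    mmu->last_pte_bitmap = map;
}

/* The runtime check is now branch-free: one shift, one mask. */
static bool is_last_gpte(const struct mmu_ctx *mmu, unsigned level, uint64_t gpte)
{
    unsigned index = (level - 1) |
                     (unsigned)(((gpte >> PT_PAGE_SIZE_BIT) & 1) << 2);
    return mmu->last_pte_bitmap & (1u << index);
}
```

      The point is that all the branches on level, gpte.ps, and mmu configuration happen once, at bitmap-build time, instead of on every walk.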
    • KVM: MMU: Optimize pte permission checks · 97d64b78
      Avi Kivity committed:
      walk_addr_generic()'s permission checks are a maze of branchy code,
      performed four times per lookup.  They depend on the type of access,
      efer.nxe, cr0.wp, cr4.smep, and in the near future, cr4.smap.
      
      Optimize this away by precalculating all variants and storing them in a
      bitmap.  The bitmap is recalculated when rarely-changing variables change
      (cr0, cr4) and is indexed by the often-changing variables (page fault error
      code, pte access permissions).
      
      The permission check is moved to the end of the loop, otherwise an SMEP
      fault could be reported as a false positive, when PDE.U=1 but PTE.U=0.
      Noted by Xiao Guangrong.
      
      The result is short, branch-free code.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
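      The precalculation described above can be sketched as follows. This is a simplified illustration with invented names and a reduced bit set (no nx/smep handling), not the kernel's actual `update_permission_bitmask()`: fault outcomes for every (error-code, pte-access) pair are computed once when cr0/cr4 change, and the hot path becomes a table lookup:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative pte access bits */
#define ACC_WRITE 1u
#define ACC_USER  2u
/* Page-fault error code bits (subset, illustrative layout) */
#define PFERR_WRITE 1u
#define PFERR_USER  2u

struct mmu_ctx {
    /* permissions[pfec] is a bitmap over the four pte-access combinations:
     * a set bit means that access with that pte access faults. */
    uint8_t permissions[4];
    bool cr0_wp;
};

/* Rebuilt when rarely-changing state (cr0, cr4) changes: enumerate all
 * (pfec, pte_access) pairs and record whether each combination faults. */
static void update_permission_bitmask(struct mmu_ctx *mmu)
{
    for (unsigned pfec = 0; pfec < 4; pfec++) {
        uint8_t map = 0;
        bool w = pfec & PFERR_WRITE, u = pfec & PFERR_USER;
        for (unsigned acc = 0; acc < 4; acc++) {
            bool fault = false;
            if (u && !(acc & ACC_USER))
                fault = true;           /* user access to a supervisor pte */
            if (w && !(acc & ACC_WRITE) && (u || mmu->cr0_wp))
                fault = true;           /* write to a read-only pte */
            if (fault)
                map |= 1u << acc;
        }
        mmu->permissions[pfec] = map;
    }
}

/* Per-lookup check, indexed by the often-changing variables: short,
 * branch-free code. */
static bool permission_fault(const struct mmu_ctx *mmu,
                             unsigned pfec, unsigned pte_access)
{
    return (mmu->permissions[pfec] >> pte_access) & 1;
}
```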
    • KVM: MMU: Push clean gpte write protection out of gpte_access() · 8ea667f2
      Avi Kivity committed:
      gpte_access() computes the access permissions of a guest pte and also
      write-protects clean gptes.  This is wrong when we are servicing a
      write fault (since we'll be setting the dirty bit momentarily) but
      correct when instantiating a speculative spte, or when servicing a
      read fault (since we'll want to trap a following write in order to
      set the dirty bit).
      
      It doesn't seem to hurt in practice, but in order to make the code
      readable, push the write protection out of gpte_access() and into
      a new protect_clean_gpte() which is called explicitly when needed.
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
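      The split can be sketched like this. The bit positions and names here are illustrative, not the kernel's: `gpte_access()` now has no side effect, and the clean-gpte write protection is an explicit, separately-callable step used only when the caller wants the next write to trap:

```c
#include <assert.h>
#include <stdint.h>

#define PT_WRITABLE (1ull << 1)   /* illustrative gpte bit layout */
#define PT_USER     (1ull << 2)
#define PT_DIRTY    (1ull << 6)

#define ACC_WRITE 1u
#define ACC_USER  2u

/* Pure function now: computes access rights only, no hidden
 * write-protect side effect. */
static unsigned gpte_access(uint64_t gpte)
{
    unsigned access = 0;
    if (gpte & PT_WRITABLE) access |= ACC_WRITE;
    if (gpte & PT_USER)     access |= ACC_USER;
    return access;
}

/* Called explicitly where a clean gpte must trap the next write (read
 * faults, speculative sptes) -- but skipped on write faults, where the
 * dirty bit is about to be set anyway. */
static unsigned protect_clean_gpte(unsigned access, uint64_t gpte)
{
    if (!(gpte & PT_DIRTY))
        access &= ~ACC_WRITE;
    return access;
}
```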
  4. 24 Jul 2011, 2 commits
  5. 24 Oct 2010, 5 commits
  6. 01 Mar 2010, 5 commits
  7. 10 Sep 2009, 2 commits
  8. 10 Jun 2009, 1 commit
  9. 24 Mar 2009, 1 commit
  10. 20 Jul 2008, 1 commit
    • KVM: MMU: Fix false flooding when a pte points to page table · 1b7fcd32
      Avi Kivity committed:
      The KVM MMU tries to detect when a speculative pte update is not actually
      used by a demand fault, by checking the accessed bit of the shadow pte.  If
      the shadow pte has not been accessed, we deem that page table flooded and
      remove the shadow page table, allowing further pte updates to proceed
      without emulation.
      
      However, if the pte itself points at a page table and is only used for write
      operations, the accessed bit will never be set, since all accesses happen
      through the emulator.
      
      This is exactly what happens with kscand on old (2.4.x) HIGHMEM kernels.
      The kernel points a kmap_atomic() pte at a page table, and then
      proceeds with read-modify-write operations to look at the dirty and accessed
      bits.  We get a false flood trigger on the kmap ptes, which results in the
      mmu spending all its time setting up and tearing down shadows.
      
      Fix by setting the shadow accessed bit on emulated accesses.
      Signed-off-by: Avi Kivity <avi@qumranet.com>
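      The fix can be sketched as follows. This is a minimal illustration with an invented spte bit name and helper names, not the kernel's real flood detector: the heuristic uses the accessed bit, and the fix is to set that bit on emulated accesses too, so emulator-only page tables (like the kmap ptes above) stop looking flooded:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPTE_ACCESSED (1ull << 5)   /* illustrative accessed-bit position */

/* The fix: called on every emulated access to a guest pte, so tables
 * touched only through the emulator still register as accessed. */
static void mark_spte_accessed(uint64_t *spte)
{
    *spte |= SPTE_ACCESSED;
}

/* Flood heuristic (simplified): a shadow pte never accessed since the
 * last speculative update suggests the shadow page is being flooded
 * with writes and should be torn down. */
static bool detect_write_flooding(uint64_t spte)
{
    return !(spte & SPTE_ACCESSED);
}
```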
  11. 04 May 2008, 2 commits
  12. 27 Apr 2008, 1 commit
  13. 31 Jan 2008, 1 commit
  14. 30 Jan 2008, 2 commits