1. 03 Mar, 2016 (1 commit)
  2. 26 Nov, 2015 (1 commit)
    • KVM: x86: MMU: Consolidate BUG_ON checks for reverse-mapped sptes · 77fbbbd2
      Authored by Takuya Yoshikawa
      At some call sites of rmap_get_first() and rmap_get_next(), a BUG_ON is
      placed right after the call to detect unrelated sptes which must not be
      found in the reverse-mapping list.
      
      Move this check into rmap_get_first/next() so that all call sites, not
      just the users of the for_each_rmap_spte() macro, are checked in the
      same way (a simplified model follows this entry).
      
      One thing to keep in mind is that kvm_mmu_unlink_parents() also uses
      rmap_get_first() to handle parent sptes.  The change does not break it
      because parent sptes are present, not mmio-sptes, at least until
      drop_parent_pte() actually unlinks them.
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
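
      To make the consolidation concrete, here is a minimal user-space model
      of the idea, not the kernel code: rmap_list, the fixed-size spte array,
      and assert() are simplified stand-ins for the kernel's pte_list_desc
      chains and BUG_ON.  It only illustrates how hoisting the present-spte
      check into rmap_get_first()/rmap_get_next() covers every caller, not
      just users of the for_each_rmap_spte() macro.

      /*
       * Minimal model: the "spte must be shadow-present" sanity check lives
       * inside the getters, so every caller is covered.  Types and helpers
       * are simplified stand-ins, not the kernel's.
       */
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define SPTE_PRESENT (1ull << 11)      /* stand-in for the present bit */

      struct rmap_list {
          uint64_t *sptes[4];                /* simplified fixed-size list */
          int nr;
      };

      struct rmap_iterator {
          struct rmap_list *list;
          int pos;
      };

      static int is_shadow_present_pte(uint64_t spte)
      {
          return (spte & SPTE_PRESENT) != 0;
      }

      /* Consolidated check: the kernel uses BUG_ON() here. */
      static uint64_t *rmap_get_first(struct rmap_list *list,
                                      struct rmap_iterator *iter)
      {
          uint64_t *sptep;

          if (!list->nr)
              return NULL;

          iter->list = list;
          iter->pos = 0;
          sptep = list->sptes[0];
          assert(is_shadow_present_pte(*sptep));
          return sptep;
      }

      static uint64_t *rmap_get_next(struct rmap_iterator *iter)
      {
          uint64_t *sptep;

          if (++iter->pos >= iter->list->nr)
              return NULL;

          sptep = iter->list->sptes[iter->pos];
          assert(is_shadow_present_pte(*sptep));
          return sptep;
      }

      #define for_each_rmap_spte(list, iter, sptep)          \
          for (sptep = rmap_get_first(list, iter); sptep;    \
               sptep = rmap_get_next(iter))

      int main(void)
      {
          uint64_t a = SPTE_PRESENT | 0x1000, b = SPTE_PRESENT | 0x2000;
          struct rmap_list list = { .sptes = { &a, &b }, .nr = 2 };
          struct rmap_iterator iter;
          uint64_t *sptep;

          /* No BUG_ON needed at the call site any more. */
          for_each_rmap_spte(&list, &iter, sptep)
              printf("spte = %#llx\n", (unsigned long long)*sptep);
          return 0;
      }
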
  3. 05 Jun, 2015 (1 commit)
  4. 20 May, 2015 (1 commit)
  5. 11 May, 2015 (1 commit)
  6. 03 Sep, 2014 (1 commit)
    • kvm: fix potentially corrupt mmio cache · ee3d1570
      Authored by David Matlack
      vcpu exits and memslot mutations can run concurrently as long as the
      vcpu does not acquire the slots mutex. Thus it is theoretically possible
      for memslots to change underneath a vcpu that is handling an exit.
      
      If we increment the memslot generation number again after
      synchronize_srcu_expedited(), vcpus can safely cache the memslot
      generation without maintaining a single rcu_dereference through an
      entire vm exit.  This matters because much of the x86/kvm code does not
      maintain a single rcu_dereference of the current memslots during each
      exit.
      
      We can prevent the following case:
      
         vcpu (CPU 0)                             | thread (CPU 1)
      --------------------------------------------+--------------------------
      1  vm exit                                  |
      2  srcu_read_unlock(&kvm->srcu)             |
      3  decide to cache something based on       |
           old memslots                           |
      4                                           | change memslots
                                                  | (increments generation)
      5                                           | synchronize_srcu(&kvm->srcu);
      6  retrieve generation # from new memslots  |
      7  tag cache with new memslot generation    |
      8  srcu_read_unlock(&kvm->srcu)             |
      ...                                         |
         <action based on cache occurs even       |
          though the caching decision was based   |
          on the old memslots>                    |
      ...                                         |
         <action *continues* to occur until next  |
          memslot generation change, which may    |
          be never>                               |
                                                  |
      
      By incrementing the generation after synchronizing with kvm->srcu readers,
      we ensure that the generation retrieved in (6) will become invalid soon
      after (8).
      
      Keeping the existing increment is not strictly necessary, but we keep it
      and simply move it, for consistency, from update_memslots to
      install_new_memslots.  It invalidates old cached MMIOs immediately,
      instead of waiting for the end of synchronize_srcu_expedited, which
      makes the code more clearly correct in case CPU 1 is preempted right
      after synchronize_srcu() returns.
      
      To avoid halving the generation space in SPTEs, always presume that the
      low bit of the generation is zero when reconstructing a generation number
      out of an SPTE.  This effectively disables MMIO caching in SPTEs during
      the call to synchronize_srcu_expedited.  Using the low bit this way is
      somewhat like a seqcount: the protected thing is a cache, and instead of
      retrying we simply punt if we observe the low bit to be 1 (a simplified
      model of the scheme follows this entry).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
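
      As a reading aid, here is a minimal user-space model of the generation
      handling described above, not the kernel code: memslots, the helper
      names, and the bit layout are simplified stand-ins.  The updater sets
      the low generation bit before publishing the new slots and bumps the
      generation again after the grace period; MMIO spte tags always drop
      bit 0, so a tag taken during the update window, or by a vcpu racing as
      in the table above, can never match again.

      #include <stdio.h>

      struct memslots {
          unsigned int generation;
          /* slot array omitted */
      };

      static void synchronize_readers(void)
      {
          /* stand-in for synchronize_srcu_expedited(&kvm->srcu) */
      }

      static struct memslots *install_new_memslots(struct memslots **current_slots,
                                                   struct memslots *slots)
      {
          struct memslots *old = *current_slots;

          /*
           * Low bit set: an MMIO spte tagged from now on can never match,
           * because generations reconstructed from sptes have bit 0 clear.
           */
          slots->generation = old->generation | 1;
          *current_slots = slots;        /* rcu_assign_pointer() in the kernel */

          synchronize_readers();

          /*
           * Second increment, after the grace period: a vcpu that raced with
           * the update and cached the new generation sees it go stale.
           */
          slots->generation++;
          return old;
      }

      /* Generation as stored in / read back from an MMIO spte: bit 0 dropped. */
      static unsigned int mmio_spte_gen(unsigned int gen)
      {
          return gen & ~1u;
      }

      static int mmio_cache_hit(unsigned int spte_gen, unsigned int cur_gen)
      {
          return spte_gen == cur_gen;
      }

      int main(void)
      {
          struct memslots a = { .generation = 4 }, b = { .generation = 0 };
          struct memslots *cur = &a;
          unsigned int racy_tag;

          /*
           * Steps 6-7 of the table above: a racing vcpu tags its MMIO cache
           * with the generation of the freshly published slots.  Bit 0 is
           * dropped when the tag is stored, so it reads back as 4, not 5.
           */
          racy_tag = mmio_spte_gen(a.generation | 1);

          install_new_memslots(&cur, &b);    /* generation ends up at 6 */

          printf("generation=%u racy_tag=%u hit=%d\n",
                 cur->generation, racy_tag,
                 mmio_cache_hit(racy_tag, cur->generation));   /* hit=0 */
          return 0;
      }
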
  7. 27 Jun, 2013 (6 commits)
  8. 19 Jun, 2013 (1 commit)
  9. 14 Jan, 2013 (1 commit)
  10. 07 Mar, 2012 (1 commit)
  11. 12 Jul, 2011 (1 commit)
  12. 07 May, 2011 (1 commit)
  13. 31 Mar, 2011 (1 commit)
  14. 01 Aug, 2010 (5 commits)
  15. 19 May, 2010 (1 commit)
  16. 17 May, 2010 (2 commits)