1. 06 Aug, 2012 (9 commits)
  2. 26 Jul, 2012 (2 commits)
  3. 23 Jul, 2012 (2 commits)
  4. 20 Jul, 2012 (4 commits)
  5. 19 Jul, 2012 (1 commit)
  6. 04 Jul, 2012 (1 commit)
  7. 03 Jul, 2012 (1 commit)
  8. 18 Jun, 2012 (1 commit)
  9. 06 Jun, 2012 (1 commit)
  10. 05 Jun, 2012 (1 commit)
  11. 17 May, 2012 (1 commit)
    • KVM: MMU: Don't use RCU for lockless shadow walking · c142786c
      By Avi Kivity
      Using RCU for lockless shadow walking can increase the amount of memory
      in use by the system, since RCU grace periods are unpredictable.  We also
      have an unconditional write to a shared variable (reader_counter), which
      isn't good for scaling.
      
      Replace that with a scheme similar to x86's get_user_pages_fast(): disable
      interrupts during lockless shadow walk to force the freer
      (kvm_mmu_commit_zap_page()) to wait for the TLB flush IPI to find the
      processor with interrupts enabled.
      
      We also add a new vcpu->mode, READING_SHADOW_PAGE_TABLES, to prevent
      kvm_flush_remote_tlbs() from avoiding the IPI.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
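The interrupt-disabling scheme can be modeled outside the kernel: treat each CPU's "interrupts disabled while walking" state as a flag, and let the freer's TLB-flush IPI complete only once no CPU is walking with interrupts off. This is a hedged, single-threaded sketch of the idea only, not kernel code; all names here (lockless_walk_begin, tlb_flush_ipi_done, and so on) are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Model: true while a CPU walks shadow page tables with interrupts off. */
static bool walk_irqs_off[NR_CPUS];

/* Reader side: disabling interrupts pins the shadow pages for the walk. */
static void lockless_walk_begin(int cpu) { walk_irqs_off[cpu] = true; }
static void lockless_walk_end(int cpu)   { walk_irqs_off[cpu] = false; }

/*
 * Freer side (kvm_mmu_commit_zap_page() in the real code): the TLB-flush
 * IPI is only acknowledged by CPUs with interrupts enabled, so freeing
 * cannot complete while any walker still has them disabled.
 */
static bool tlb_flush_ipi_done(void)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (walk_irqs_off[cpu])
            return false;
    return true;
}
```

The point of the model is the ordering guarantee: the freer never reclaims a shadow page while a reader can still see it, without any shared counter write on the read side.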
  12. 01 May, 2012 (1 commit)
  13. 24 Apr, 2012 (1 commit)
  14. 20 Apr, 2012 (1 commit)
    • KVM: Fix page-crossing MMIO · f78146b0
      By Avi Kivity
      MMIO accesses that are split across a page boundary are currently broken:
      the code does not expect to be aborted by the exit to userspace for the
      first MMIO fragment.
      
      This patch fixes the problem by generalizing the current code for handling
      16-byte MMIOs to handle a number of "fragments", and changes the MMIO
      code to create those fragments.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
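The fragment scheme is, at its core, address arithmetic: split one guest-physical access at page boundaries into per-page pieces that can each be completed independently, surviving an exit to userspace in between. A hedged user-space sketch under assumed names (struct mmio_fragment exists in the patch; mmio_split and the 4 KiB page size here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

struct mmio_fragment {
    uint64_t gpa;        /* guest-physical address of this piece */
    unsigned int len;    /* bytes in this piece, never crossing a page */
};

/*
 * Split an MMIO access at page boundaries. Returns the number of
 * fragments written to frag[]; each fragment lies within one page.
 */
static int mmio_split(uint64_t gpa, unsigned int len,
                      struct mmio_fragment *frag, int max_frags)
{
    int n = 0;
    while (len && n < max_frags) {
        unsigned int in_page = PAGE_SIZE - (unsigned int)(gpa & (PAGE_SIZE - 1));
        unsigned int now = len < in_page ? len : in_page;
        frag[n].gpa = gpa;
        frag[n].len = now;
        gpa += now;
        len -= now;
        n++;
    }
    return n;
}
```

For example, a 16-byte access at 0x1ff8 yields two 8-byte fragments, one ending at the page boundary and one starting at 0x2000.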
  15. 12 Apr, 2012 (1 commit)
    • KVM: unmap pages from the iommu when slots are removed · 32f6daad
      By Alex Williamson
      We've been adding new mappings, but not destroying old mappings.
      This can lead to a page leak as pages are pinned using
      get_user_pages, but only unpinned with put_page if they still
      exist in the memslots list on vm shutdown.  A memslot that is
      destroyed while an iommu domain is enabled for the guest will
      therefore result in an elevated page reference count that is
      never cleared.
      
      Additionally, without this fix, the iommu is only programmed
      with the first translation for a gpa.  This can result in
      peer-to-peer errors if a mapping is destroyed and replaced by a
      new mapping at the same gpa as the iommu will still be pointing
      to the original, pinned memory address.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
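The leak can be seen as an unbalanced reference count: mapping a slot into the iommu pins its pages (get_user_pages), so destroying the slot must drop that pin (put_page), or the count never returns to its base. A hedged toy model of the balance, with every name invented for illustration:

```c
#include <assert.h>

/* Model one guest page's struct-page reference count. */
static int page_refcount = 1;   /* base reference held elsewhere */
static int iommu_mapped;

static void get_user_pages_model(void) { page_refcount++; }  /* pin */
static void put_page_model(void)       { page_refcount--; }  /* unpin */

/* Mapping a memslot into the iommu pins its pages. */
static void iommu_map_slot(void)
{
    get_user_pages_model();
    iommu_mapped = 1;
}

/*
 * The fix: destroying a memslot while an iommu domain is active must
 * unmap it and drop the pin; before the patch this step was missing,
 * leaving an elevated refcount and a stale translation at the gpa.
 */
static void iommu_unmap_slot(void)
{
    if (iommu_mapped) {
        put_page_model();
        iommu_mapped = 0;
    }
}
```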
  16. 08 Apr, 2012 (5 commits)
  17. 20 Mar, 2012 (1 commit)
  18. 08 Mar, 2012 (4 commits)
  19. 05 Mar, 2012 (2 commits)
    • KVM: Move gfn_to_memslot() to kvm_host.h · 9d4cba7f
      By Paul Mackerras
      This moves __gfn_to_memslot() and search_memslots() from kvm_main.c to
      kvm_host.h to reduce the code duplication caused by the need for
      non-modular code in arch/powerpc/kvm/book3s_hv_rm_mmu.c to call
      gfn_to_memslot() in real mode.
      
      Rather than putting gfn_to_memslot() itself in a header, which would
      lead to increased code size, this puts __gfn_to_memslot() in a header.
      Then, the non-modular uses of gfn_to_memslot() are changed to call
      __gfn_to_memslot() instead.  This way there is only one place in the
      source code that needs to be changed should the gfn_to_memslot()
      implementation need to be modified.
      
      On powerpc, the Book3S HV style of KVM has code that is called from
      real mode which needs to call gfn_to_memslot() and thus needs this.
      (Module code is allocated in the vmalloc region, which can't be
      accessed in real mode.)
      
      With this, we can remove builtin_gfn_to_memslot() from book3s_hv_rm_mmu.c.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
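The shape of the change is a lookup made `static inline` in a header so non-modular callers can use it directly. A hedged sketch of such a slot search over (base_gfn, npages) ranges; the struct fields match the kernel's naming, but search_memslots_model and its linear scan are simplified for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;

struct kvm_memory_slot {
    gfn_t base_gfn;          /* first guest frame number in the slot */
    unsigned long npages;    /* slot length in pages */
};

/*
 * Header-style inline lookup: find the slot containing gfn, or NULL.
 * Being inline (rather than a call into kvm_main.c) is what lets
 * built-in real-mode code use it without touching module memory.
 */
static inline struct kvm_memory_slot *
search_memslots_model(struct kvm_memory_slot *slots, int nslots, gfn_t gfn)
{
    for (int i = 0; i < nslots; i++)
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return &slots[i];
    return NULL;
}
```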
    • KVM: Add barriers to allow mmu_notifier_retry to be used locklessly · a355aa54
      By Paul Mackerras
      This adds an smp_wmb in kvm_mmu_notifier_invalidate_range_end() and an
      smp_rmb in mmu_notifier_retry() so that mmu_notifier_retry() will give
      the correct answer when called without kvm->mmu_lock being held.
      
      PowerPC Book3S HV KVM wants to use a bitlock per guest page rather than
      a single global spinlock in order to improve the scalability of updates
      to the guest MMU hashed page table, and so needs this.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
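The retry protocol pairs a count with a sequence number: invalidation raises the count, and completion bumps the sequence before dropping the count. A caller snapshots the sequence, does its work, then retries if the count is nonzero or the sequence moved. A hedged user-space model using C11 atomics, where release/acquire ordering stands in for the kernel's smp_wmb/smp_rmb; mmu_notifier_retry_model and the helper names are invented:

```c
#include <assert.h>
#include <stdatomic.h>

/* Model of kvm->mmu_notifier_count and kvm->mmu_notifier_seq. */
static atomic_int  mmu_notifier_count;
static atomic_long mmu_notifier_seq;

static void invalidate_range_start(void)
{
    atomic_fetch_add(&mmu_notifier_count, 1);
}

static void invalidate_range_end(void)
{
    /* Bump seq before dropping count; the release here plays the
     * role of the smp_wmb added to the real range_end(). */
    atomic_fetch_add(&mmu_notifier_seq, 1);
    atomic_fetch_sub_explicit(&mmu_notifier_count, 1, memory_order_release);
}

/* Nonzero means: an invalidation raced with us, redo the update. */
static int mmu_notifier_retry_model(long seq_snapshot)
{
    /* The acquire here plays the role of the smp_rmb in
     * mmu_notifier_retry(), so lockless callers see a consistent
     * (count, seq) pair. */
    if (atomic_load_explicit(&mmu_notifier_count, memory_order_acquire))
        return 1;
    return atomic_load(&mmu_notifier_seq) != seq_snapshot;
}
```

The barrier pairing is what makes the lockless read safe: a caller that misses the raised count is guaranteed to see the bumped sequence instead, so it can never conclude "no race" when one occurred.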