1. 27 Aug 2012, 1 commit
  2. 07 Jul 2012, 1 commit
  3. 04 Jul 2012, 1 commit
  4. 03 Jul 2012, 1 commit
  5. 18 Jun 2012, 1 commit
  6. 05 Jun 2012, 2 commits
  7. 01 May 2012, 1 commit
  8. 24 Apr 2012, 1 commit
  9. 12 Apr 2012, 1 commit
    • KVM: unmap pages from the iommu when slots are removed · 32f6daad
      Committed by Alex Williamson
      We've been adding new mappings, but not destroying old mappings.
      This can lead to a page leak as pages are pinned using
      get_user_pages, but only unpinned with put_page if they still
      exist in the memslots list on vm shutdown.  A memslot that is
      destroyed while an iommu domain is enabled for the guest will
      therefore result in an elevated page reference count that is
      never cleared.
      
      Additionally, without this fix, the iommu is only programmed
      with the first translation for a gpa.  This can result in
      peer-to-peer errors if a mapping is destroyed and replaced by a
      new mapping at the same gpa, as the iommu will still point to
      the original, pinned memory address.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
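      The pin/unpin imbalance described above can be modeled outside the
      kernel. Below is a minimal user-space sketch in plain C (not the
      kernel code; struct page, iommu_map(), and iommu_unmap() here are
      illustrative stand-ins for get_user_pages()/put_page() and the
      real iommu map/unmap paths) showing why a slot destroyed without
      an unmap leaves the refcount elevated:

      #include <stdio.h>

      /* Illustrative stand-in for the kernel's struct page refcount. */
      struct page { int refcount; };

      static void iommu_map(struct page *p)   { p->refcount++; } /* pins, like get_user_pages() */
      static void iommu_unmap(struct page *p) { p->refcount--; } /* unpins, like put_page() */

      int main(void)
      {
          struct page pg = { .refcount = 1 };

          iommu_map(&pg);   /* slot added: page pinned for the iommu domain */

          /* Before the fix: the slot is destroyed with no unmap, so the
           * refcount stays elevated forever and the page cannot be freed.
           * The fix unmaps (unpins) when the slot is removed: */
          iommu_unmap(&pg);

          printf("refcount after slot removal: %d\n", pg.refcount); /* back to 1 */
          return 0;
      }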
  10. 08 Apr 2012, 4 commits
  11. 08 Mar 2012, 7 commits
  12. 05 Mar 2012, 4 commits
  13. 01 Feb 2012, 1 commit
    • KVM: Fix __set_bit() race in mark_page_dirty() during dirty logging · 50e92b3c
      Committed by Takuya Yoshikawa
      Because some callers do not take mmu_lock before calling
      mark_page_dirty(), the __set_bit() inside it can run concurrently
      on the same word of the dirty bitmap; since __set_bit() is a
      non-atomic read-modify-write, one racing update can be lost and
      only one bit ends up set.
      
      This problem is hard to reproduce because when we reach
      mark_page_dirty() starting from, e.g., tdp_page_fault(), mmu_lock
      is already held during __direct_map(): making kvm-unit-tests'
      dirty log api test write to two pages concurrently was not enough
      to trigger it for this reason.
      
      We confirmed that the race can actually occur by using
      spin_is_locked() to check whether some callers reach
      mark_page_dirty() without holding mmu_lock: they most likely came
      from kvm_write_guest_page().
      
      To fix this race, this patch changes the bit operation to the
      atomic set_bit(): note that nr_dirty_pages also suffers from the
      race, but we do not need an exactly correct count for now.
      Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
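      As a sketch of why the atomic version closes the race, here is a
      small self-contained C11 program (a user-space model, not the
      kernel code): the non-atomic variant is a separate load/or/store
      that can lose a concurrent update, while atomic_fetch_or() plays
      the role of the kernel's set_bit():

      #include <stdio.h>
      #include <stdatomic.h>

      static unsigned long dirty_word;           /* plain bitmap word  */
      static _Atomic unsigned long atomic_word;  /* atomic bitmap word */

      /* Non-atomic read-modify-write, like __set_bit(): two CPUs doing
       * this concurrently on the same word can each read the old value,
       * and one of the two bits is lost on store. */
      static void racy_set_bit(int nr)
      {
          dirty_word |= 1UL << nr;
      }

      /* Single atomic RMW, like the kernel's set_bit(): no update is
       * lost even without mmu_lock. */
      static void safe_set_bit(int nr)
      {
          atomic_fetch_or(&atomic_word, 1UL << nr);
      }

      int main(void)
      {
          racy_set_bit(3);
          safe_set_bit(5);
          printf("racy=%#lx atomic=%#lx\n",
                 dirty_word, atomic_load(&atomic_word));
          return 0;
      }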
  14. 27 Dec 2011, 12 commits
  15. 26 Sep 2011, 1 commit
    • KVM: Intelligent device lookup on I/O bus · 743eeb0b
      Committed by Sasha Levin
      Currently the method of dealing with an IO operation on a bus (PIO/MMIO)
      is to call the read or write callback for each device registered
      on the bus until we find a device which handles it.
      
      Since the number of devices on a bus can be significant due to ioeventfds
      and coalesced MMIO zones, this leads to a lot of overhead on each IO
      operation.
      
      Instead of registering devices, we now register ranges, each of
      which points to a device. Lookup is done using an efficient
      bsearch instead of a linear search.
      
      A performance test was conducted by comparing the exit count per
      second with 200 ioeventfds created on one byte while the guest
      continuously accesses a different byte (triggering usermode
      exits). Before the patch the guest achieved 259k exits per
      second; after the patch it does 274k exits per second.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
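      The range-based lookup can be sketched with libc's bsearch(); the
      struct and field names below are illustrative, not the kernel's,
      but the comparator shows the trick: it returns 0 whenever the
      address falls inside a registered [addr, addr+len) range, so a
      sorted array of ranges gives an O(log n) device lookup per exit:

      #include <stdio.h>
      #include <stdlib.h>

      struct io_range {
          unsigned long addr;   /* start of the range */
          unsigned long len;    /* length of the range */
          int dev_id;           /* stand-in for the device pointer */
      };

      /* bsearch comparator: treat a key address as "equal" to any
       * range that contains it. */
      static int range_cmp(const void *key, const void *elem)
      {
          unsigned long a = *(const unsigned long *)key;
          const struct io_range *r = elem;

          if (a < r->addr)
              return -1;
          if (a >= r->addr + r->len)
              return 1;
          return 0;             /* address falls inside this range */
      }

      int main(void)
      {
          struct io_range bus[] = {   /* must stay sorted by addr */
              { 0x1000, 0x10, 1 },
              { 0x2000, 0x04, 2 },
              { 0x3000, 0x08, 3 },
          };
          unsigned long addr = 0x2002;
          struct io_range *hit = bsearch(&addr, bus, 3, sizeof(bus[0]),
                                         range_cmp);

          printf("addr %#lx -> dev %d\n", addr, hit ? hit->dev_id : -1);
          return 0;
      }

      Keeping the array sorted at registration time is what makes the
      per-exit lookup logarithmic rather than linear in the device count.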
  16. 24 Jul 2011, 1 commit
    • KVM: MMU: mmio page fault support · ce88decf
      Committed by Xiao Guangrong
      The idea is from Avi:
      
      | We could cache the result of a miss in an spte by using a reserved bit, and
      | checking the page fault error code (or seeing if we get an ept violation or
      | ept misconfiguration), so if we get repeated mmio on a page, we don't need to
      | search the slot list/tree.
      | (https://lkml.org/lkml/2011/2/22/221)
      
      When a page fault is caused by mmio, we cache the info in the
      shadow page table and also set the reserved bits in the shadow
      page table, so if the mmio access happens again we can quickly
      identify it and emulate it directly.
      
      Searching for an mmio gfn in the memslots is heavy since we need
      to walk all memslots; this feature reduces that cost and also
      avoids walking the guest page table for the soft mmu.
      
      [jan: fix operator precedence issue]
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
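      A rough user-space model of the caching idea follows (the bit
      layout here is invented for illustration; the real code derives
      the reserved-bit mask from the CPU's physical address width): a
      memslot miss is recorded by writing an spte with a reserved bit
      set and the gfn cached inside it, so a repeated mmio fault is
      recognized from the spte alone, with no slot search:

      #include <stdio.h>

      #define SPTE_MMIO_MASK  (1ULL << 51)   /* illustrative reserved bit */

      /* Cache an mmio miss: set the reserved bit and stash the gfn. */
      static unsigned long long mark_mmio_spte(unsigned long long gfn)
      {
          return SPTE_MMIO_MASK | (gfn << 12);
      }

      /* Fast-path check on a later fault: is this a cached mmio spte? */
      static int is_mmio_spte(unsigned long long spte)
      {
          return (spte & SPTE_MMIO_MASK) != 0;
      }

      int main(void)
      {
          unsigned long long spte = mark_mmio_spte(0xfee00ULL);

          if (is_mmio_spte(spte))   /* repeated fault: emulate directly */
              printf("mmio fault, emulate directly (gfn=%#llx)\n",
                     (spte & ~SPTE_MMIO_MASK) >> 12);
          return 0;
      }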