1. 08 Mar, 2012 (3 commits)
  2. 05 Mar, 2012 (5 commits)
  3. 01 Feb, 2012 (1 commit)
    • KVM: Fix __set_bit() race in mark_page_dirty() during dirty logging · 50e92b3c
      Committed by Takuya Yoshikawa
      It is possible for the __set_bit() in mark_page_dirty() to be called
      simultaneously on the same region of memory, which may result in only
      one of the bits actually being set, because some callers do not take
      mmu_lock before mark_page_dirty().
      
      This problem is hard to reproduce because when we reach mark_page_dirty()
      starting from, e.g., tdp_page_fault(), mmu_lock is held during
      __direct_map():  making kvm-unit-tests' dirty log api test write to two
      pages concurrently did not trigger it for this reason.
      
      We confirmed that the race can actually happen by using spin_is_locked()
      to check whether some callers really reach mark_page_dirty() without
      holding mmu_lock:  these were probably calls from kvm_write_guest_page().
      
      To fix this race, this patch changes the bit operation to the atomic
      version:  note that nr_dirty_pages also suffers from the race, but we
      do not need an exactly correct count for now.
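
      Below is a minimal sketch of the idea, assuming the usual kernel
      bitops; the exact function body changed by the patch may differ:

          /* __set_bit() is a non-atomic read-modify-write: two CPUs
           * dirtying bits in the same bitmap word can lose an update.
           * set_bit() uses a locked operation, so callers that do not
           * hold mmu_lock are safe. */
          static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
                                              gfn_t gfn)
          {
              if (memslot && memslot->dirty_bitmap) {
                  unsigned long rel_gfn = gfn - memslot->base_gfn;

                  /* was: __set_bit(rel_gfn, memslot->dirty_bitmap); */
                  set_bit(rel_gfn, memslot->dirty_bitmap);
              }
          }
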
      Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  4. 13 Jan, 2012 (1 commit)
  5. 27 Dec, 2011 (14 commits)
  6. 26 Dec, 2011 (1 commit)
    • KVM: Device assignment permission checks · 3d27e23b
      Committed by Alex Williamson
      Only allow KVM device assignment to attach to devices which:
      
       - Are not bridges
       - Have BAR resources (assume others are special devices)
       - The user has permissions to use
      
      Assigning a bridge is a configuration error; it's not supported and
      typically doesn't result in the behavior the user expects anyway.
      Devices without BAR resources are typically chipset components that
      also don't have host drivers.  We don't want users to hold such devices
      captive or cause system problems by fencing them off into an iommu
      domain.  We determine "permission to use" by testing whether the user
      has access to the PCI sysfs resource files.  By default a normal user
      will not have access to these files, so access to them is a good
      indication that an administration agent has granted the user access
      to the device.
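
      A hedged sketch of such a probe; the sysfs path layout is real, but
      the function name and surrounding details are illustrative rather
      than the exact code from the patch:

          #include <linux/namei.h>
          #include <linux/pci.h>
          #include <linux/slab.h>

          static int probe_sysfs_permissions(struct pci_dev *dev)
          {
              int i, r;
              bool bar_found = false;

              /* One sysfs file per BAR: /sys/bus/pci/devices/<dev>/resource<N> */
              for (i = PCI_STD_RESOURCES; i <= PCI_STD_RESOURCE_END; i++) {
                  char *kpath;
                  struct path path;

                  if (!pci_resource_len(dev, i))
                      continue;    /* BAR not implemented */

                  kpath = kasprintf(GFP_KERNEL,
                                    "/sys/bus/pci/devices/%s/resource%d",
                                    pci_name(dev), i);
                  if (!kpath)
                      return -ENOMEM;

                  r = kern_path(kpath, LOOKUP_FOLLOW, &path);
                  kfree(kpath);
                  if (r)
                      return r;

                  /* A normal user fails here unless an administration
                   * agent has granted access to the resource files. */
                  r = inode_permission(path.dentry->d_inode,
                                       MAY_READ | MAY_WRITE | MAY_ACCESS);
                  path_put(&path);
                  if (r)
                      return r;

                  bar_found = true;
              }

              /* No BAR resources: assume a special device; refuse. */
              return bar_found ? 0 : -EPERM;
          }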
      
      [Yang Bai: add missing #include]
      [avi: fix comment style]
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Yang Bai <hamo.by@gmail.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  7. 25 Dec, 2011 (1 commit)
  8. 10 Nov, 2011 (1 commit)
    • iommu/core: split mapping to page sizes as supported by the hardware · 7d3002cc
      Committed by Ohad Ben-Cohen
      When mapping a memory region, split it into the page sizes supported
      by the IOMMU hardware.  Always prefer bigger pages, when possible,
      in order to reduce TLB pressure.
      
      The logic to do that is now added to the IOMMU core, so neither the iommu
      drivers themselves nor users of the IOMMU API have to duplicate it.
      
      This allows a more lenient granularity of mappings: traditionally the
      IOMMU API took the 'order' of a page as the mapping size and let the
      low-level iommu drivers handle the mapping directly.  Now that the
      IOMMU core can split arbitrary memory regions into pages, that
      limitation can be removed, so users no longer have to split regions
      themselves.
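
      A sketch of the splitting loop, assuming each driver advertises its
      supported sizes in an ops->pgsize_bitmap field (illustrative, not
      necessarily the exact mainline code):

          int iommu_map(struct iommu_domain *domain, unsigned long iova,
                        phys_addr_t paddr, size_t size, int prot)
          {
              int ret = 0;

              while (size) {
                  unsigned long pgsize, addr_merge = iova | paddr;
                  unsigned int pgsize_idx;

                  /* Biggest page that still fits the remaining size... */
                  pgsize_idx = __fls(size);

                  /* ...and that both iova and paddr are aligned to. */
                  if (addr_merge) {
                      unsigned int align_idx = __ffs(addr_merge);
                      pgsize_idx = min(pgsize_idx, align_idx);
                  }

                  /* Keep only sizes the hardware supports, then pick
                   * the biggest remaining one. */
                  pgsize = ((1UL << (pgsize_idx + 1)) - 1) &
                           domain->ops->pgsize_bitmap;
                  BUG_ON(!pgsize);
                  pgsize = 1UL << __fls(pgsize);

                  ret = domain->ops->map(domain, iova, paddr, pgsize, prot);
                  if (ret)
                      break;

                  iova  += pgsize;
                  paddr += pgsize;
                  size  -= pgsize;
              }

              return ret;
          }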
      
      Currently the supported page sizes are advertised once and then
      remain static.  That works well for OMAP and MSM, but would probably
      not fly well with Intel's hardware, where the page size capabilities
      may differ between several DMA remapping devices.
      
      register_iommu() currently sets a default pgsize behavior, so we can convert
      the IOMMU drivers in subsequent patches. After all the drivers
      are converted, the temporary default settings will be removed.
      
      Mainline users of the IOMMU API (kvm and omap-iovmm) are adapted
      to deal with bytes instead of page order.
      
      Many thanks to Joerg Roedel <Joerg.Roedel@amd.com> for significant review!
      Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
      Cc: David Brown <davidb@codeaurora.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Joerg Roedel <Joerg.Roedel@amd.com>
      Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
      Cc: KyongHo Cho <pullip.cho@samsung.com>
      Cc: Hiroshi DOYU <hdoyu@nvidia.com>
      Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Cc: kvm@vger.kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  9. 01 Nov, 2011 (2 commits)
  10. 21 Oct, 2011 (2 commits)
  11. 26 Sep, 2011 (6 commits)
  12. 24 Sep, 2011 (1 commit)
  13. 24 Jul, 2011 (2 commits)
    • KVM: IOMMU: Disable device assignment without interrupt remapping · 3f68b031
      Committed by Alex Williamson
      IOMMU interrupt remapping support provides a further layer of
      isolation for device assignment by preventing arbitrary interrupt-block
      DMA writes by a malicious guest from reaching the host.  By default,
      we should require that the platform provides interrupt remapping
      support, with an opt-in mechanism for the existing behavior.
      
      Both AMD IOMMU and Intel VT-d2 hardware support interrupt
      remapping; however, we currently only have software support on
      the Intel side.  Users wishing to re-enable device assignment
      when interrupt remapping is not supported on the platform can
      use the "allow_unsafe_assigned_interrupts=1" module option.
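
      A hedged sketch of wiring up that opt-in; the parameter name comes
      from the commit text, while the checking function and its placement
      are simplified assumptions:

          #include <linux/iommu.h>
          #include <linux/module.h>

          static bool allow_unsafe_assigned_interrupts;
          module_param_named(allow_unsafe_assigned_interrupts,
                             allow_unsafe_assigned_interrupts, bool,
                             S_IRUGO | S_IWUSR);
          MODULE_PARM_DESC(allow_unsafe_assigned_interrupts,
              "Enable device assignment on platforms without interrupt remapping support.");

          /* Called while setting up the IOMMU domain for an assigned device. */
          static int check_intr_remapping(struct iommu_domain *domain)
          {
              if (!allow_unsafe_assigned_interrupts &&
                  !iommu_domain_has_cap(domain, IOMMU_CAP_INTR_REMAP)) {
                  printk(KERN_WARNING
                         "No interrupt remapping support, refusing device assignment\n");
                  return -EPERM;
              }
              return 0;
          }
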
      
      [avi: break long lines]
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: MMU: mmio page fault support · ce88decf
      Committed by Xiao Guangrong
      The idea is from Avi:
      
      | We could cache the result of a miss in an spte by using a reserved bit, and
      | checking the page fault error code (or seeing if we get an ept violation or
      | ept misconfiguration), so if we get repeated mmio on a page, we don't need to
      | search the slot list/tree.
      | (https://lkml.org/lkml/2011/2/22/221)
      
      When a page fault is caused by mmio, we cache the info in the shadow
      page table and also set reserved bits there, so if the same mmio
      access happens again we can quickly identify it and emulate it
      directly.
      
      Searching for an mmio gfn in the memslots is heavy since we need to
      walk all memslots; this feature reduces that cost, and also avoids
      walking the guest page table for the soft mmu.
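
      A sketch of the caching scheme with illustrative mask values (the
      real reserved-bit layout is CPU- and version-specific):

          /* Reserved bits tagging an spte as a cached mmio entry;
           * the value here is illustrative only. */
          #define MMIO_SPTE_MASK    (3ULL << 52)

          /* On a miss, cache the mmio gfn and access bits in the spte. */
          static u64 make_mmio_spte(gfn_t gfn, unsigned int access)
          {
              return MMIO_SPTE_MASK | ((u64)gfn << PAGE_SHIFT) | access;
          }

          /* On a later fault, the reserved-bit pattern identifies the
           * cached mmio spte, so we emulate directly without searching
           * the memslots or walking the guest page table. */
          static bool is_mmio_spte(u64 spte)
          {
              return (spte & MMIO_SPTE_MASK) == MMIO_SPTE_MASK;
          }

          static gfn_t get_mmio_spte_gfn(u64 spte)
          {
              return (spte & ~MMIO_SPTE_MASK) >> PAGE_SHIFT;
          }
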
      
      [jan: fix operator precedence issue]
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>