1. 28 Feb 2022 (2 commits)
  2. 20 Dec 2021 (1 commit)
  3. 17 Dec 2021 (4 commits)
  4. 27 Nov 2021 (1 commit)
      iommu/vt-d: Fix unmap_pages support · 86dc40c7
      Alex Williamson authored
      When supporting only the .map and .unmap callbacks of iommu_ops,
      the IOMMU driver can make assumptions about the size and alignment
      used for mappings based on the driver provided pgsize_bitmap.  VT-d
      previously used essentially PAGE_MASK for this bitmap as any power
      of two mapping was acceptably filled by native page sizes.
      
      However, with the .map_pages and .unmap_pages interface we're now
      getting page-size and count arguments.  If we simply combine these
      as (page-size * count) and make use of the previous map/unmap
      functions internally, any size and alignment assumptions are very
      different.
      
      As an example, a given vfio device assignment VM will often create
      a 4MB mapping at IOVA pfn [0x3fe00 - 0x401ff].  On a system that
      does not support IOMMU super pages, the unmap_pages interface will
      ask to unmap 1024 4KB pages at the base IOVA.  dma_pte_clear_level()
      will recurse down to level 2 of the page table where the first half
      of the pfn range exactly matches the entire pte level.  We clear the
      pte, increment the pfn by the level size, but (oops) the next pte is
      on a new page, so we exit the loop and pop back up a level.  When we
      then update the pfn based on that higher level, we seem to assume
      that the previous pfn value was at the start of the level.  In this
      case the level size is 256K pfns, which we add to the base pfn and
      get a result of 0x7fe00, which is clearly greater than 0x401ff,
      so we're done.  Meanwhile we never cleared the ptes for the remainder
      of the range.  When the VM remaps this range, we're overwriting valid
      ptes and the VT-d driver complains loudly, as described in the user
      report linked below.
      
      The fix for this seems relatively simple: if each iteration of the
      loop in dma_pte_clear_level() is assumed to clear to the end of the
      level pte page, then our next pfn should be calculated from level_pfn
      rather than our working pfn.
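
      A minimal, user-space sketch of the arithmetic above (not the
      driver code itself): level_size() and level_mask() here are
      simplified stand-ins for the driver's helpers, assuming VT-d's
      9-bit stride per paging level, so a level-3 entry spans 0x40000
      (256K) pfns.  It contrasts the old advance from the working pfn
      with an advance computed from level_pfn.

      #include <stdio.h>

      /* 9 bits of page-table index per level in the VT-d format. */
      #define LEVEL_STRIDE 9

      /* Span of one pte at a given level, in 4KB pfns (level 1 = 1 pfn,
       * level 2 = 512 pfns, level 3 = 0x40000 pfns). */
      static unsigned long level_size(int level)
      {
              return 1UL << ((level - 1) * LEVEL_STRIDE);
      }

      /* Mask that rounds a pfn down to the start of its level entry. */
      static unsigned long level_mask(int level)
      {
              return ~(level_size(level) - 1);
      }

      int main(void)
      {
              unsigned long last_pfn = 0x401ff;  /* end of the 4MB unmap      */
              unsigned long pfn = 0x3fe00;       /* working pfn after popping */
              int level = 3;                     /* back at the 256K-pfn level */
              unsigned long level_pfn = pfn & level_mask(level);  /* 0x0 */
              unsigned long next;

              /* Old advance: 0x3fe00 + 0x40000 = 0x7fe00 > last_pfn, so the
               * loop ends and pfns 0x40000-0x401ff are never cleared. */
              next = pfn + level_size(level);
              printf("old next pfn: 0x%lx (%s 0x%lx)\n",
                     next, next <= last_pfn ? "<=" : ">", last_pfn);

              /* Advance from level_pfn: 0x0 + 0x40000 = 0x40000 <= last_pfn,
               * so the loop continues into the second half of the range. */
              next = level_pfn + level_size(level);
              printf("new next pfn: 0x%lx (%s 0x%lx)\n",
                     next, next <= last_pfn ? "<=" : ">", last_pfn);

              return 0;
      }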
      
      Fixes: 3f34f125 ("iommu/vt-d: Implement map/unmap_pages() iommu_ops callback")
      Reported-by: Ajay Garg <ajaygargnsit@gmail.com>
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
      Link: https://lore.kernel.org/all/20211002124012.18186-1-ajaygargnsit@gmail.com/
      Link: https://lore.kernel.org/r/163659074748.1617923.12716161410774184024.stgit@omen
      Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
      Link: https://lore.kernel.org/r/20211126135556.397932-3-baolu.lu@linux.intel.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  5. 18 Oct 2021 (6 commits)
  6. 19 Aug 2021 (5 commits)
  7. 18 Aug 2021 (2 commits)
  8. 26 Jul 2021 (6 commits)
  9. 14 Jul 2021 (2 commits)
  10. 25 Jun 2021 (1 commit)
  11. 18 Jun 2021 (1 commit)
  12. 10 Jun 2021 (9 commits)