Commit acea0018 authored by David Woodhouse

intel-iommu: Defer the iotlb flush and iova free for intel_unmap_sg() too.

I see no reason why we did this _only_ in intel_unmap_page().
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Parent 3d39cecc
@@ -2815,11 +2815,18 @@ static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sglist,
 	/* free page tables */
 	dma_pte_free_pagetable(domain, start_pfn, last_pfn);
 
-	iommu_flush_iotlb_psi(iommu, domain->id, start_pfn,
-			      (last_pfn - start_pfn + 1));
-
-	/* free iova */
-	__free_iova(&domain->iovad, iova);
+	if (intel_iommu_strict) {
+		iommu_flush_iotlb_psi(iommu, domain->id, start_pfn,
+				      last_pfn - start_pfn + 1);
+		/* free iova */
+		__free_iova(&domain->iovad, iova);
+	} else {
+		add_unmap(domain, iova);
+		/*
+		 * queue up the release of the unmap to save the 1/6th of the
+		 * cpu used up by the iotlb flush operation...
+		 */
+	}
 }
 
 static int intel_nontranslate_map_sg(struct device *hddev,