Commit febb82c2, authored by Nadav Amit, committed by Joerg Roedel

iommu: Factor iommu_iotlb_gather_is_disjoint() out

Refactor iommu_iotlb_gather_add_page() and factor out the logic that
detects whether IOTLB gather range and a new range are disjoint. To be
used by the next patch that implements different gathering logic for
AMD.

Note that updating gather->pgsize unconditionally does not affect
correctness as the function had (and has) an invariant, in which
gather->pgsize always represents the flushing granularity of its range.
Arguably, “size" should never be zero, but lets assume for the matter of
discussion that it might.

If "size" equals to "gather->pgsize", then the assignment in question
has no impact.

Otherwise, if "size" is non-zero, then iommu_iotlb_sync() would
initialize the size and range (see iommu_iotlb_gather_init()), and the
invariant is kept.

Otherwise, "size" is zero, and "gather" already holds a range, so
gather->pgsize is non-zero and (gather->pgsize && gather->pgsize !=
size) is true. Therefore, again, iommu_iotlb_sync() would be called and
initialize the size.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Link: https://lore.kernel.org/r/20210723093209.714328-5-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Parent 3136895c
@@ -497,6 +497,28 @@ static inline void iommu_iotlb_sync(struct iommu_domain *domain,
 	iommu_iotlb_gather_init(iotlb_gather);
 }
 
+/**
+ * iommu_iotlb_gather_is_disjoint - Checks whether a new range is disjoint
+ *
+ * @gather: TLB gather data
+ * @iova: start of page to invalidate
+ * @size: size of page to invalidate
+ *
+ * Helper for IOMMU drivers to check whether a new range and the gathered range
+ * are disjoint. For many IOMMUs, flushing the IOMMU in this case is better
+ * than merging the two, which might lead to unnecessary invalidations.
+ */
+static inline
+bool iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
+				    unsigned long iova, size_t size)
+{
+	unsigned long start = iova, end = start + size - 1;
+
+	return gather->end != 0 &&
+		(end + 1 < gather->start || start > gather->end + 1);
+}
+
 /**
  * iommu_iotlb_gather_add_range - Gather for address-based TLB invalidation
  * @gather: TLB gather data
@@ -533,20 +555,16 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 					       struct iommu_iotlb_gather *gather,
 					       unsigned long iova, size_t size)
 {
-	unsigned long start = iova, end = start + size - 1;
-
 	/*
 	 * If the new page is disjoint from the current range or is mapped at
 	 * a different granularity, then sync the TLB so that the gather
 	 * structure can be rewritten.
 	 */
-	if (gather->pgsize != size ||
-	    end + 1 < gather->start || start > gather->end + 1) {
-		if (gather->pgsize)
-			iommu_iotlb_sync(domain, gather);
-		gather->pgsize = size;
-	}
+	if ((gather->pgsize && gather->pgsize != size) ||
+	    iommu_iotlb_gather_is_disjoint(gather, iova, size))
+		iommu_iotlb_sync(domain, gather);
 
+	gather->pgsize = size;
 	iommu_iotlb_gather_add_range(gather, iova, size);
 }