arm64: tlbflush: Ensure start/end of address range are aligned to stride
mainline inclusion
from mainline-5.2
commit: 01d57485fcdb9f9101a10a18e32d5f8b023cab86
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA
--------------------------------------------------
Since commit 3d65b6bbc01e ("arm64: tlbi: Set MAX_TLBI_OPS to
PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to
perform more than PTRS_PER_PTE invalidation instructions in a single
call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather
code does not ensure that the end address of the range is rounded-up
to the stride when freeing intermediate page tables in pXX_free_tlb(),
which defeats our range checking.

Align the bounds passed into __flush_tlb_range().
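
For context, the core of the fix is two rounding operations at the top of
__flush_tlb_range() in arch/arm64/include/asm/tlbflush.h. The sketch below
is simplified from the shape of the mainline commit, with the per-entry
invalidation loop abbreviated:

    static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                         unsigned long start, unsigned long end,
                                         unsigned long stride, bool last_level)
    {
        unsigned long asid = ASID(vma->vm_mm);
        unsigned long addr;

        /*
         * Round the range out to the invalidation granule, so that the
         * MAX_TLBI_OPS comparison below reflects the true number of TLBI
         * instructions the loop would issue, even when the caller (e.g.
         * the mmu_gather code via pXX_free_tlb()) passes in bounds that
         * are not stride-aligned.
         */
        start = round_down(start, stride);
        end = round_up(end, stride);

        if ((end - start) >= (MAX_TLBI_OPS * stride)) {
            flush_tlb_mm(vma->vm_mm);
            return;
        }

        /* ... per-entry TLBI loop walks [start, end) in steps of stride ... */
    }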
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>