From b492ff8c1e16b09017bf2d5133b317d664e89ead Mon Sep 17 00:00:00 2001
From: Yicong Yang
Date: Wed, 23 Aug 2023 18:27:03 +0800
Subject: [PATCH] mm/tlbbatch: introduce arch_flush_tlb_batched_pending()

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7U78A
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=4fd62a4892a3f576f424eb86e4eda510038934a9

-------------------------------------------

Currently we flush the mm in flush_tlb_batched_pending() to avoid a race
between reclaim unmapping pages via batched TLB flush and
mprotect/munmap/etc. Other architectures like arm64 may only need a
synchronization barrier (dsb) here rather than a full mm flush. So add
arch_flush_tlb_batched_pending() to allow an arch-specific implementation.
This intends no functional change on x86, which still performs a full mm
flush.

Link: https://lkml.kernel.org/r/20230717131004.12662-4-yangyicong@huawei.com
Signed-off-by: Yicong Yang
Reviewed-by: Catalin Marinas
Cc: Anshuman Khandual
Cc: Arnd Bergmann
Cc: Barry Song
Cc: Darren Hart
Cc: Jonathan Cameron
Cc: Jonathan Corbet
Cc: Kefeng Wang
Cc: lipeifeng
Cc: Mark Rutland
Cc: Mel Gorman
Cc: Nadav Amit
Cc: Peter Zijlstra
Cc: Punit Agrawal
Cc: Ryan Roberts
Cc: Steven Miao
Cc: Will Deacon
Cc: Xin Hao
Cc: Zeng Tao
Signed-off-by: Andrew Morton

Conflicts:
	mm/rmap.c
	arch/x86/include/asm/tlbflush.h

Signed-off-by: Jinjiang Tu
---
 arch/x86/include/asm/tlbflush.h | 5 +++++
 mm/rmap.c                       | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 1404f43a5986..1f1e5add0b97 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -270,6 +270,11 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
 
+static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+{
+	flush_tlb_mm(mm);
+}
+
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #endif /* !MODULE */

diff --git a/mm/rmap.c b/mm/rmap.c
index 38eafa951fe0..816db3edc116 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -677,7 +677,7 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	if (data_race(mm->tlb_flush_batched)) {
-		flush_tlb_mm(mm);
+		arch_flush_tlb_batched_pending(mm);
 		/*
 		 * Do not allow the compiler to re-order the clearing of