Commit b492ff8c authored by Yicong Yang, committed by Jinjiang Tu

mm/tlbbatch: introduce arch_flush_tlb_batched_pending()

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7U78A
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=4fd62a4892a3f576f424eb86e4eda510038934a9

-------------------------------------------

Currently we flush the whole mm in flush_tlb_batched_pending() to avoid a
race between reclaim, which unmaps pages via a batched TLB flush, and
mprotect/munmap/etc.  Other architectures such as arm64 may only need a
synchronization barrier (dsb) here rather than a full mm flush.  So add
arch_flush_tlb_batched_pending() to allow an arch-specific implementation.
There is no intended functional change on x86, which still performs a
full mm flush.
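
As an illustration of what the hook enables, an arm64 override (left to a
follow-up patch, not part of this one) could reduce the pending flush to a
barrier.  A minimal sketch, assuming the TLB invalidations were already
issued when the PTEs were unmapped and only need to be waited on:

	/* sketch of a possible arm64 override, not included in this patch */
	static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
	{
		/* wait for the TLBIs issued at unmap time to complete */
		dsb(ish);
	}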

Link: https://lkml.kernel.org/r/20230717131004.12662-4-yangyicong@huawei.com
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Barry Song <baohua@kernel.org>
Cc: Barry Song <v-songbaohua@oppo.com>
Cc: Darren Hart <darren@os.amperecomputing.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: lipeifeng <lipeifeng@oppo.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xin Hao <xhao@linux.alibaba.com>
Cc: Zeng Tao <prime.zeng@hisilicon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Conflicts:
	mm/rmap.c
	arch/x86/include/asm/tlbflush.h
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Parent 8d8d353f
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -270,6 +270,11 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
 
+static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+{
+	flush_tlb_mm(mm);
+}
+
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #endif /* !MODULE */
...
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -677,7 +677,7 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	if (data_race(mm->tlb_flush_batched)) {
-		flush_tlb_mm(mm);
+		arch_flush_tlb_batched_pending(mm);
 		/*
 		 * Do not allow the compiler to re-order the clearing of
...