Commit 721c21c1 authored by Will Deacon, committed by Linus Torvalds

mm: mmu_gather: use tlb->end != 0 only for TLB invalidation

When batching up address ranges for TLB invalidation, we check tlb->end
!= 0 to indicate that some pages have actually been unmapped.

As of commit f045bbb9 ("mmu_gather: fix over-eager
tlb_flush_mmu_free() calling"), we use the same check for freeing these
pages in order to avoid a performance regression where we call
free_pages_and_swap_cache even when no pages are actually queued up.

Unfortunately, the range could have been reset (tlb->end = 0) by
tlb_end_vma, which has been shown to cause memory leaks on arm64.
Furthermore, investigation into these leaks revealed that the fullmm
case on task exit no longer invalidates the TLB, by virtue of tlb->end
== 0 (in 3.18, need_flush would have been set).

This patch resolves the problem by reverting commit f045bbb9, using
instead tlb->local.nr as the predicate for page freeing in
tlb_flush_mmu_free and ensuring that tlb->end is initialised to a
non-zero value in the fullmm case.
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Dave Hansen <dave@sr71.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent eaa27f34
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -136,8 +136,12 @@ static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 
 static inline void __tlb_reset_range(struct mmu_gather *tlb)
 {
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
+	if (tlb->fullmm) {
+		tlb->start = tlb->end = ~0;
+	} else {
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
+	}
 }
 
 /*
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -235,6 +235,9 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -247,7 +250,7 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	for (batch = &tlb->local; batch; batch = batch->next) {
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
 	}
@@ -256,9 +259,6 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }