Commit 85ae87c1 authored by Tejun Heo

percpu: fix too lazy vunmap cache flushing

In pcpu_unmap(), flushing virtual cache on vunmap can't be delayed as
the page is going to be returned to the page allocator.  Only TLB
flushing can be put off such that vmalloc code can handle it lazily.
Fix it.

[ Impact: fix subtle virtual cache flush bug ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Parent f234012f
mm/percpu.c
@@ -549,14 +549,14 @@ static void pcpu_free_area(struct pcpu_chunk *chunk, int freeme)
  * @chunk: chunk of interest
  * @page_start: page index of the first page to unmap
  * @page_end: page index of the last page to unmap + 1
- * @flush: whether to flush cache and tlb or not
+ * @flush_tlb: whether to flush tlb or not
  *
  * For each cpu, unmap pages [@page_start,@page_end) out of @chunk.
  * If @flush is true, vcache is flushed before unmapping and tlb
  * after.
  */
 static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
-		       bool flush)
+		       bool flush_tlb)
 {
 	unsigned int last = num_possible_cpus() - 1;
 	unsigned int cpu;
@@ -569,9 +569,8 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
 	 * the whole region at once rather than doing it for each cpu.
 	 * This could be an overkill but is more scalable.
 	 */
-	if (flush)
-		flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
-				   pcpu_chunk_addr(chunk, last, page_end));
+	flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
+			   pcpu_chunk_addr(chunk, last, page_end));
 
 	for_each_possible_cpu(cpu)
 		unmap_kernel_range_noflush(
@@ -579,7 +578,7 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
 			(page_end - page_start) << PAGE_SHIFT);
 
 	/* ditto as flush_cache_vunmap() */
-	if (flush)
+	if (flush_tlb)
 		flush_tlb_kernel_range(pcpu_chunk_addr(chunk, 0, page_start),
 				       pcpu_chunk_addr(chunk, last, page_end));
 }
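
The rule the patch enforces is worth stating on its own: on an architecture with a virtually indexed cache, the cache flush must precede the unmap, because the backing pages become reusable by the page allocator the moment the mapping is gone, whereas a TLB flush only prevents stale lookups and can therefore be deferred or batched. Below is a minimal sketch of that ordering, not code from the patch: teardown_kernel_mapping() and its start/end arguments are hypothetical, while flush_cache_vunmap(), unmap_kernel_range_noflush() and flush_tlb_kernel_range() are the same kernel interfaces the patch uses.

#include <linux/vmalloc.h>	/* unmap_kernel_range_noflush() */
#include <asm/cacheflush.h>	/* flush_cache_vunmap() */
#include <asm/tlbflush.h>	/* flush_tlb_kernel_range() */

/* Hypothetical helper: tear down a kernel mapping whose backing pages
 * are about to be returned to the page allocator. */
static void teardown_kernel_mapping(unsigned long start, unsigned long end)
{
	/* Must happen before the unmap and cannot be deferred: a dirty
	 * line in a virtually indexed cache could otherwise be written
	 * back after the pages have been reused for something else. */
	flush_cache_vunmap(start, end);

	unmap_kernel_range_noflush(start, end - start);

	/* A stale TLB entry can only hand out a translation for an
	 * address nobody should touch anymore; it cannot corrupt the
	 * freed pages, so this flush may be batched or left to the
	 * lazy flushing done by the vmalloc code. */
	flush_tlb_kernel_range(start, end);
}

This mirrors the vmalloc convention the commit message refers to: flush the virtual cache eagerly in the unmap path and leave TLB invalidation to the lazy purge machinery.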