Commit 76382b5b authored by Markus Pietrek, committed by Paul Mundt

sh: Ensure all PG_dcache_dirty pages are written back.

As part of the cache rework, an address-aliasing optimization was added,
but it fails on certain mappings, leaving pages with PG_dcache_dirty set
that never write back their dcache lines. This patch reverts to the
earlier behaviour of simply always writing back when the dirty bit is
set.
Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Parent 9503e891
@@ -133,12 +133,8 @@ void __update_cache(struct vm_area_struct *vma,
 	page = pfn_to_page(pfn);
 	if (pfn_valid(pfn)) {
 		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
-		if (dirty) {
-			unsigned long addr = (unsigned long)page_address(page);
-
-			if (pages_do_alias(addr, address & PAGE_MASK))
-				__flush_purge_region((void *)addr, PAGE_SIZE);
-		}
+		if (dirty)
+			__flush_purge_region(page_address(page), PAGE_SIZE);
 	}
 }
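For context, a minimal sketch of how __update_cache() reads after this change. The body inside the pfn_valid() check is taken from the hunk above; the function signature matches the hunk header, while the early boot_cpu_data.dcache.n_aliases bail-out and the local variable declarations are reconstructed from typical arch/sh/mm/cache.c context and may differ slightly from the exact tree this commit applies to.

/*
 * Sketch of __update_cache() (arch/sh/mm/cache.c) after this change.
 * Lines outside the hunk above are reconstructed context, not verbatim
 * from the tree this commit applies to.
 */
void __update_cache(struct vm_area_struct *vma,
		    unsigned long address, pte_t pte)
{
	struct page *page;
	unsigned long pfn = pte_pfn(pte);

	/* Assumed early exit: nothing to do if the dcache has no aliases. */
	if (!boot_cpu_data.dcache.n_aliases)
		return;

	page = pfn_to_page(pfn);
	if (pfn_valid(pfn)) {
		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
		if (dirty)
			/*
			 * Always purge the kernel mapping's dcache lines when
			 * the page is marked dirty; the pages_do_alias() test
			 * removed above could skip this writeback on some
			 * mappings.
			 */
			__flush_purge_region(page_address(page), PAGE_SIZE);
	}
}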