Submitted by Steven Price
stable inclusion
from stable-v5.10.142
commit 47a73e5e6ba42e42db2d1400478b2e132e25ceb4
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6CSFH
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=47a73e5e6ba42e42db2d1400478b2e132e25ceb4

--------------------------------

[ Upstream commit 8782fb61 ]

The mmap lock protects the page walker from changes to the page tables
during the walk.  However a read lock is insufficient to protect those
areas which don't have a VMA as munmap() detaches the VMAs before
downgrading to a read lock and actually tearing down PTEs/page tables.

For users of walk_page_range() the solution is to simply call pte_hole()
immediately without checking the actual page tables when a VMA is not
present.  We now never call __walk_page_range() without a valid vma.

For walk_page_range_novma() the locking requirements are tightened to
require the mmap write lock to be taken, and then walking the pgd
directly with 'no_vma' set.

This in turn means that all page walkers either have a valid vma, or
it's that special 'novma' case for page table debugging.  As a result,
all the odd '(!walk->vma && !walk->no_vma)' tests can be removed.

Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
Reviewed-by: Zheng Zengkai <zhengzengkai@huawei.com>
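
For illustration only, below is a minimal user-space C sketch of the two rules
the commit message describes: ranges not covered by a VMA are reported through
pte_hole() without ever inspecting page tables, and the no-VMA walk is only
permitted while the lock is held exclusively.  All names here (struct vma,
struct walk_ops, struct mm, walk_range(), walk_range_novma(), the 4096-byte
step) are simplified stand-ins invented for this sketch, not the kernel's
actual mm/pagewalk.c code.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct vma {                        /* stand-in for struct vm_area_struct */
	unsigned long start, end;
	const struct vma *next;     /* sorted, non-overlapping list */
};

struct walk_ops {                   /* stand-in for struct mm_walk_ops */
	void (*pte_entry)(unsigned long addr);
	void (*pte_hole)(unsigned long start, unsigned long end);
};

struct mm {                         /* stand-in for struct mm_struct */
	const struct vma *vmas;
	bool write_locked;          /* models mmap_lock held for write */
};

/* Rule 1 (walk_page_range): entries are only visited inside a VMA; any
 * gap is reported via pte_hole() without touching page tables at all. */
static void walk_range(const struct mm *mm, unsigned long start,
		       unsigned long end, const struct walk_ops *ops)
{
	unsigned long addr = start;
	const struct vma *vma = mm->vmas;

	while (addr < end) {
		while (vma && vma->end <= addr)
			vma = vma->next;

		if (!vma || vma->start >= end) {        /* no VMA left in range */
			if (ops->pte_hole)
				ops->pte_hole(addr, end);
			return;
		}
		if (addr < vma->start) {                /* gap before next VMA */
			if (ops->pte_hole)
				ops->pte_hole(addr, vma->start);
			addr = vma->start;
		}
		for (; addr < vma->end && addr < end; addr += 4096)
			if (ops->pte_entry)             /* safe: covered by a VMA */
				ops->pte_entry(addr);
	}
}

/* Rule 2 (walk_page_range_novma): walking without VMAs is a debugging
 * facility and is only legal with the lock held for write. */
static void walk_range_novma(const struct mm *mm, unsigned long start,
			     unsigned long end, const struct walk_ops *ops)
{
	assert(mm->write_locked);   /* mirrors mmap_assert_write_locked() */
	for (unsigned long addr = start; addr < end; addr += 4096)
		if (ops->pte_entry)
			ops->pte_entry(addr);
}

static void show_entry(unsigned long a) { printf("entry 0x%lx\n", a); }
static void show_hole(unsigned long s, unsigned long e)
{
	printf("hole  0x%lx-0x%lx\n", s, e);
}

int main(void)
{
	const struct vma v2 = { 0x5000, 0x7000, NULL };
	const struct vma v1 = { 0x1000, 0x2000, &v2 };
	struct mm mm = { &v1, false };
	const struct walk_ops ops = { show_entry, show_hole };

	walk_range(&mm, 0x0, 0x8000, &ops);        /* read-lock style walk */

	mm.write_locked = true;
	walk_range_novma(&mm, 0x0, 0x2000, &ops);  /* requires the write lock */
	return 0;
}

The assert() stands in for the tightened requirement on walk_page_range_novma():
holding the lock exclusively becomes a hard invariant of the no-VMA walk rather
than a convention left to callers.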