Commit 1acf2e04 authored by Davidlohr Bueso, committed by Linus Torvalds

mm/nommu: share the i_mmap_rwsem

The shrinking/truncate logic can call nommu_shrink_inode_mappings() to
verify that any shared mappings of the inode in question aren't broken
(dead zone).  AFAICT the only user is ramfs, which calls it to handle the
size-change attribute.

Pretty much a no-brainer to share the lock.
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent d28eb9c8
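For illustration, here is a minimal C sketch of the read-side pattern the patch switches to. It mirrors the first hunk of the diff below; the wrapper name check_dead_zone() is hypothetical and only stands in for the VMA walk performed inside nommu_shrink_inode_mappings(), so this is a kernel-context sketch rather than a standalone program:

#include <linux/fs.h>		/* i_mmap_lock_read()/i_mmap_unlock_read() */
#include <linux/mm.h>		/* vma_interval_tree_foreach(), VM_SHARED */
#include <linux/errno.h>	/* ETXTBSY */

/*
 * Hypothetical helper: walk the mappings of @inode that overlap the
 * page range [@low, @high] (the "dead zone").  The i_mmap interval tree
 * is only traversed, never modified, so the shared (read) side of
 * i_mmap_rwsem is sufficient.
 */
static int check_dead_zone(struct inode *inode, pgoff_t low, pgoff_t high)
{
	struct vm_area_struct *vma;
	int ret = 0;

	i_mmap_lock_read(inode->i_mapping);	/* was i_mmap_lock_write() */
	vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, low, high) {
		if (vma->vm_flags & VM_SHARED) {
			/* a shared mapping overlaps the dead zone */
			ret = -ETXTBSY;
			break;
		}
	}
	i_mmap_unlock_read(inode->i_mapping);	/* was i_mmap_unlock_write() */

	return ret;
}

Because the walk only reads the interval tree, concurrent readers of i_mmap_rwsem can now proceed in parallel; paths that modify the tree still take i_mmap_lock_write() and remain exclusive.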
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -2094,14 +2094,14 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
 	high = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
 	down_write(&nommu_region_sem);
-	i_mmap_lock_write(inode->i_mapping);
+	i_mmap_lock_read(inode->i_mapping);
 
 	/* search for VMAs that fall within the dead zone */
 	vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, low, high) {
 		/* found one - only interested if it's shared out of the page
 		 * cache */
 		if (vma->vm_flags & VM_SHARED) {
-			i_mmap_unlock_write(inode->i_mapping);
+			i_mmap_unlock_read(inode->i_mapping);
 			up_write(&nommu_region_sem);
 			return -ETXTBSY; /* not quite true, but near enough */
 		}
@@ -2113,8 +2113,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
 	 * we don't check for any regions that start beyond the EOF as there
 	 * shouldn't be any
 	 */
-	vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap,
-				  0, ULONG_MAX) {
+	vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, 0, ULONG_MAX) {
 		if (!(vma->vm_flags & VM_SHARED))
 			continue;
 
@@ -2129,7 +2128,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
 		}
 	}
 
-	i_mmap_unlock_write(inode->i_mapping);
+	i_mmap_unlock_read(inode->i_mapping);
 	up_write(&nommu_region_sem);
 	return 0;
 }