- 04 October 2022, 1 commit
-
Submitted by Vishal Moola (Oracle)
Removes 3 calls to compound_head().

Link: https://lkml.kernel.org/r/20220905214557.868606-1-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 27 September 2022, 2 commits
-
Submitted by Yang Yang
Once upon a time, we only supported accounting the thrashing of the page cache. Then Joonsoo introduced workingset detection for anonymous pages, and we gained the ability to account their thrashing too [1].

For page cache thrashing accounting, there is no suitable place to do it at the fs level, such as swap_readpage(), so we have to do it in folio_wait_bit_common(). For anonymous page thrashing accounting, we have to do it in both swap_readpage() and folio_wait_bit_common(). Like PSI, thrashing accounting should therefore support re-entrance detection.

This patch prepares for complete thrashing accounting, and is based on the patch "filemap: make the accounting of thrashing more consistent".

[1] commit aae466b0 ("mm/swap: implement workingset detection for anonymous LRU")

Link: https://lkml.kernel.org/r/20220815071134.74551-1-yang.yang29@zte.com.cn
Signed-off-by: Yang Yang <yang.yang29@zte.com.cn>
Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
Reviewed-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Reviewed-by: wangyong <wang.yong12@zte.com.cn>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
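A minimal sketch of the re-entrance detection idea: the outermost accounting call opens a timing window, and nested calls (here, swap_readpage() reaching folio_wait_bit_common()) become no-ops. This is a self-contained user-space model; in the kernel the flag would live in task_struct, and all names (`in_thrashing`, `thrashing_start`) are illustrative, not the kernel's.

```c
#include <stdbool.h>
#include <stdio.h>

/* Per-task flag, modeled here with a thread-local. */
static _Thread_local bool in_thrashing;

/* Returns true only if this call actually opened an accounting window. */
static bool thrashing_start(void)
{
	if (in_thrashing)
		return false;	/* already accounting: nested call is a no-op */
	in_thrashing = true;
	/* ... record start timestamp here ... */
	return true;
}

static void thrashing_end(bool started)
{
	if (!started)
		return;		/* only the outermost caller closes the window */
	/* ... account elapsed time here ... */
	in_thrashing = false;
}

int main(void)
{
	bool outer = thrashing_start();	/* e.g. swap_readpage() */
	bool inner = thrashing_start();	/* e.g. folio_wait_bit_common() */
	printf("outer=%d inner=%d\n", outer, inner);	/* prints outer=1 inner=0 */
	thrashing_end(inner);
	thrashing_end(outer);
	return 0;
}
```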
-
Submitted by Yang Yang
Once upon a time, we only supported accounting the thrashing of the page cache. Then Joonsoo introduced workingset detection for anonymous pages, and we gained the ability to account their thrashing too [1]. So let delayacct account the thrashing of both the page cache and anonymous pages; this makes the code more consistent and simpler.

[1] commit aae466b0 ("mm/swap: implement workingset detection for anonymous LRU")

Link: https://lkml.kernel.org/r/20220805033838.1714674-1-yang.yang29@zte.com.cn
Signed-off-by: Yang Yang <yang.yang29@zte.com.cn>
Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Yang <yang.yang29@zte.com.cn>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 12 September 2022, 3 commits
-
Submitted by Vishal Moola (Oracle)
All callers of find_get_pages_contig() have been removed, so it is no longer needed.

Link: https://lkml.kernel.org/r/20220824004023.77310-8-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Chris Mason <clm@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Submitted by Vishal Moola (Oracle)
Patch series "Convert to filemap_get_folios_contig()", v3.

This patch series replaces find_get_pages_contig() with filemap_get_folios_contig().

This patch (of 7):

This function is meant to replace find_get_pages_contig(). Unlike find_get_pages_contig(), filemap_get_folios_contig() no longer takes a target number of pages to find; it returns up to 15 contiguous folios. To be more consistent with filemap_get_folios(), filemap_get_folios_contig() now also updates the start index passed in, and takes an end index.

Link: https://lkml.kernel.org/r/20220824004023.77310-1-vishal.moola@gmail.com
Link: https://lkml.kernel.org/r/20220824004023.77310-2-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
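Roughly how a caller uses the calling convention described above: the batch fills with up to 15 contiguous folios per call, and the start index is advanced for the next call. This is a minimal sketch against the interface as described, not code from any particular filesystem; locking and error handling are elided.

```c
struct folio_batch fbatch;
pgoff_t start = 0, end = (pgoff_t)-1;
unsigned int i, nr;

folio_batch_init(&fbatch);
/* Each call fills the batch with up to 15 contiguous folios and
 * updates start past the last folio returned; it stops early at
 * the first gap in the range. */
while ((nr = filemap_get_folios_contig(mapping, &start, end, &fbatch))) {
	for (i = 0; i < nr; i++) {
		struct folio *folio = fbatch.folios[i];
		/* ... operate on folio ... */
	}
	folio_batch_release(&fbatch);	/* drop the batch's references */
}
```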
-
Submitted by Shaoqin Huang
Replace three calls to compound_head() with one.

Link: https://lkml.kernel.org/r/20220809023256.178194-1-shaoqin.huang@intel.com
Signed-off-by: Shaoqin Huang <shaoqin.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
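The pattern behind this patch and the similar one above: each legacy page-flag helper on a possibly-tail page calls compound_head() internally, so converting to a folio once and using folio helpers removes the repeated calls. A hypothetical before/after; the function bodies are illustrative, not taken from filemap.c.

```c
/* Before: three hidden compound_head() calls, one per helper. */
static void mark_page_old(struct page *page)
{
	if (PageDirty(page))			/* compound_head() #1 */
		ClearPageReferenced(page);	/* compound_head() #2 */
	SetPageWorkingset(page);		/* compound_head() #3 */
}

/* After: convert once, then operate on the folio directly. */
static void mark_page_new(struct page *page)
{
	struct folio *folio = page_folio(page);

	if (folio_test_dirty(folio))
		folio_clear_referenced(folio);
	folio_set_workingset(folio);
}
```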
-
- 30 July 2022, 1 commit
-
Submitted by Miaohe Lin
Restructure the logic in filemap_write_and_wait_range to simplify the code and make it more consistent with file_write_and_wait_range. No functional change intended.

Link: https://lkml.kernel.org/r/20220627132351.55680-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 25 July 2022, 1 commit
-
Submitted by Jens Axboe
If we're creating a page cache page with FGP_CREAT but FGP_NOWAIT is set, we should dial back the gfp flags to avoid frivolous blocking, which is trivial to hit in low memory conditions:

[   10.117661]  __schedule+0x8c/0x550
[   10.118305]  schedule+0x58/0xa0
[   10.118897]  schedule_timeout+0x30/0xdc
[   10.119610]  __wait_for_common+0x88/0x114
[   10.120348]  wait_for_completion+0x1c/0x24
[   10.121103]  __flush_work.isra.0+0x16c/0x19c
[   10.121896]  flush_work+0xc/0x14
[   10.122496]  __drain_all_pages+0x144/0x218
[   10.123267]  drain_all_pages+0x10/0x18
[   10.123941]  __alloc_pages+0x464/0x9e4
[   10.124633]  __folio_alloc+0x18/0x3c
[   10.125294]  __filemap_get_folio+0x17c/0x204
[   10.126084]  iomap_write_begin+0xf8/0x428
[   10.126829]  iomap_file_buffered_write+0x144/0x24c
[   10.127710]  xfs_file_buffered_write+0xe8/0x248
[   10.128553]  xfs_file_write_iter+0xa8/0x120
[   10.129324]  io_write+0x16c/0x38c
[   10.129940]  io_issue_sqe+0x70/0x1cc
[   10.130617]  io_queue_sqe+0x18/0xfc
[   10.131277]  io_submit_sqes+0x5d4/0x600
[   10.131946]  __arm64_sys_io_uring_enter+0x224/0x600
[   10.132752]  invoke_syscall.constprop.0+0x70/0xc0
[   10.133616]  do_el0_svc+0xd0/0x118
[   10.134238]  el0_svc+0x78/0xa0

Clear the IO, FS, and reclaim flags, mark the allocation as GFP_NOWAIT, and add __GFP_NOWARN to avoid polluting dmesg with pointless allocation failures. A caller with FGP_NOWAIT must be expected to handle the resulting -EAGAIN return and retry from a suitable context without NOWAIT set.

Reviewed-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
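A sketch of the gfp adjustment this describes; the exact expression in the kernel may differ. GFP_KERNEL is the union of __GFP_RECLAIM, __GFP_IO and __GFP_FS, so masking it off strips the flags that permit blocking, while GFP_NOWAIT re-allows kswapd wakeups and __GFP_NOWARN suppresses failure warnings.

```c
if (fgp_flags & FGP_NOWAIT) {
	gfp &= ~GFP_KERNEL;		/* clear IO, FS and reclaim flags */
	gfp |= GFP_NOWAIT | __GFP_NOWARN;
}
```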
-
- 29 June 2022, 6 commits
-
Submitted by Matthew Wilcox (Oracle)
By passing ->read_folio to filemap_read_folio(), we can use filemap_read_folio() in do_read_cache_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
If the call to filler() returns AOP_TRUNCATED_PAGE, we need to retry the page cache lookup.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
No functionality change intended; this simply moves code around to disentangle the function a little.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
All callers of find_get_pages_range(), pagevec_lookup_range() and pagevec_lookup() have now been removed.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian Brauner (Microsoft) <brauner@kernel.org>
-
Submitted by Matthew Wilcox (Oracle)
This is the equivalent of find_get_pages() but fills a folio_batch instead of an array of pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian Brauner (Microsoft) <brauner@kernel.org>
-
Submitted by Matthew Wilcox (Oracle)
These functions have no more users, so delete them.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
-
- 21 June 2022, 2 commits
-
Submitted by Matthew Wilcox (Oracle)
If a read races with an invalidation followed by another read, it is possible for a folio to be replaced with a higher-order folio. If that happens, we'll see a sibling entry for the new folio in the next iteration of the loop. This manifests as a NULL pointer dereference while holding the RCU read lock.

Handle this by simply returning. The next call will find the new folio and handle it correctly. The other ways of handling this rare race are more complex and it's just not worth it.

Reported-by: Dave Chinner <david@fromorbit.com>
Reported-by: Brian Foster <bfoster@redhat.com>
Debugged-by: Brian Foster <bfoster@redhat.com>
Tested-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Fixes: cbd59c48 ("mm/filemap: use head pages in generic_file_buffered_read")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
We had an off-by-one error which meant that we never marked the first page in a read as accessed. This was visible as a slowdown when re-reading a file, as pages were being evicted from cache too soon. In reviewing this code, we noticed a second bug where a multi-page folio would be marked as accessed multiple times when doing reads that were less than the size of the folio.

Abstract the comparison of whether two file positions are in the same folio into a new function, fixing both of these bugs.

Reported-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
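One way to write such a helper, as a sketch: two byte positions fall in the same folio exactly when they agree on all bits above the folio's size shift, where folio_shift() is PAGE_SHIFT plus the folio's order. The name pos_same_folio is how the idea reads; treat it as illustrative rather than a quote of the kernel source.

```c
static inline bool pos_same_folio(loff_t pos1, loff_t pos2,
				  struct folio *folio)
{
	/* Same folio iff the positions match once the in-folio
	 * offset bits are shifted away. */
	return (pos1 >> folio_shift(folio)) == (pos2 >> folio_shift(folio));
}
```

With this, "mark accessed when crossing into a new folio" becomes a single comparison of the previous and current read positions, which fixes both the off-by-one and the repeated marking within one large folio.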
-
- 10 June 2022, 1 commit
-
Submitted by Matthew Wilcox (Oracle)
After we have unlocked the mmap_lock for I/O, the file is pinned, but the VMA is not. Checking this flag after that can be a use-after-free. It's not a terribly interesting use-after-free, as it can only read one bit, and it's used to decide whether to read 2MB or 4MB. But it upsets the automated tools and it's generally bad practice anyway, so let's fix it.

Reported-by: syzbot+5b96d55e5b54924c77ad@syzkaller.appspotmail.com
Fixes: 4687fdbb ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
- 13 May 2022, 1 commit
-
Submitted by Peter Xu
This patch still does not use pte markers in any way; however, it teaches the core mm about the pte marker idea. For example, handle_pte_marker() is introduced, which will parse and handle all pte marker faults.

Many of the changes are more about commenting things up, so that we know there's the possibility of a pte marker showing up, and why we don't need special code for those cases.

[peterx@redhat.com: userfaultfd.c needs swapops.h]
Link: https://lkml.kernel.org/r/YmRlVj3cdizYJsr0@xz-m1.local
Link: https://lkml.kernel.org/r/20220405014833.14015-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 10 May 2022, 9 commits
-
Submitted by Matthew Wilcox (Oracle)
All implementations now use free_folio, so we can delete the callers and the method.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
Include documentation and convert the callers to use ->free_folio as well as ->freepage.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
All but two of the callers already have a folio; pass a folio into try_to_free_buffers(). This removes the last user of cancel_dirty_page(), so remove that wrapper function too.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
-
Submitted by Matthew Wilcox (Oracle)
All users are now converted to release_folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
-
Submitted by Matthew Wilcox (Oracle)
This replaces aops->releasepage. Update the documentation, and call it if it exists.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
-
Submitted by Matthew Wilcox (Oracle)
Now that filler_t and aops->read_folio() have the same type, we can decide which one to use at the top of the function, and cache ->read_folio in the filler parameter.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
By making filler_t the same as read_folio, we can use the same function for both in gfs2. We can push the use of folios down one more level in jffs2 and nfs. We also increase type safety for future users of the various read_cache_page() family of functions by forcing the parameter to be a pointer to struct file (or NULL).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>
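A sketch of the type unification this describes; the exact kernel declarations may differ, and the "before" shape is an assumption for illustration. Once filler_t has the same shape as ->read_folio, an address_space op can be passed wherever a filler is expected, and the compiler enforces the struct file * parameter.

```c
/* Before (roughly): the filler took an opaque pointer, so any
 * cast-compatible function could be smuggled in. */
typedef int old_filler_t(void *data, struct folio *folio);

/* After: same shape as aops->read_folio, so either can be used,
 * and the first argument must be a struct file * (or NULL). */
typedef int filler_t(struct file *file, struct folio *folio);

struct address_space_operations {
	int (*read_folio)(struct file *file, struct folio *folio);
	/* ... */
};
```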
-
Submitted by Matthew Wilcox (Oracle)
With all implementations of aops->readpage converted to aops->read_folio, we can stop checking whether it's set and remove the member from aops.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
Change all the callers of ->readpage to call ->read_folio in preference, if it exists. This is a transitional duplication, and will be removed by the end of the series.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
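The transitional call-site pattern, as a sketch: prefer the folio-based op when the filesystem provides it, and fall back to the legacy page-based op otherwise. The wrapper name here is hypothetical; only the dispatch shape is taken from the description above.

```c
static int filemap_call_read(struct file *file, struct folio *folio)
{
	const struct address_space_operations *aops =
		folio->mapping->a_ops;

	if (aops->read_folio)			/* new-style op, if present */
		return aops->read_folio(file, folio);
	return aops->readpage(file, &folio->page);	/* legacy fallback */
}
```

Once every filesystem implements ->read_folio, the fallback branch and the NULL check can go, which is exactly what the commit above this one does.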
-
- 09 May 2022, 2 commits
-
Submitted by Matthew Wilcox (Oracle)
These wrappers have no more users; remove them.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Matthew Wilcox (Oracle)
There are no more aop flags left, so remove the parameter.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
- 16 April 2022, 1 commit
-
Submitted by Hugh Dickins
Chuck Lever reported fsx-based xfstests generic 075 091 112 127 failing when a 5.18-rc1 NFS server exports tmpfs: bisected to a recent tmpfs change.

Whilst nfsd_splice_action() does contain some questionable handling of repeated pages, and Chuck was able to work around it there, history from Mark Hemment makes clear that there might be similar dangers elsewhere: it was not a good idea for me to pass ZERO_PAGE down to unknown actors.

Revert shmem_file_read_iter() to using ZERO_PAGE for holes only when iter_is_iovec(); in other cases, use the more natural iov_iter_zero() instead of copy_page_to_iter(). We would use iov_iter_zero() throughout, but the x86 clear_user() is not nearly so well optimized as copy to user (dd of a 1T sparse tmpfs file takes 57 seconds rather than 44 seconds).

And now pagecache_init() does not need to SetPageUptodate(ZERO_PAGE(0)), which had caused boot failure on arm noMMU STM32F7 and STM32H7 boards.

Link: https://lkml.kernel.org/r/9a978571-8648-e830-5735-1f4748ce2e30@google.com
Fixes: 56a8c8eb ("tmpfs: do not allocate pages on read")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Patrice CHOTARD <patrice.chotard@foss.st.com>
Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Tested-by: Chuck Lever III <chuck.lever@oracle.com>
Cc: Mark Hemment <markhemm@googlemail.com>
Cc: Patrice CHOTARD <patrice.chotard@foss.st.com>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
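The hole-read branch this describes, as an illustrative fragment rather than the literal shmem code: user iovecs get the well-optimized copy path via ZERO_PAGE, while other iterator types (pipes, kernel buffers, splice destinations) are zeroed directly so that ZERO_PAGE never escapes to unknown actors.

```c
if (iter_is_iovec(to))
	/* plain user buffers: copy_to_user is faster than clear_user */
	ret = copy_page_to_iter(ZERO_PAGE(0), offset, nr, to);
else
	/* pipes, kernel buffers, etc.: zero directly */
	ret = iov_iter_zero(nr, to);
```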
-
- 02 April 2022, 2 commits
-
Submitted by Matthew Wilcox (Oracle)
We can extract both the file pointer and the pos from the iocb. This simplifies each caller, as well as allowing generic_perform_write() to see more of the iocb contents in the future.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
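A sketch of the simplification: instead of callers passing file and pos alongside the iocb, the callee derives both from the iocb itself via its ki_filp and ki_pos fields. The body here is a placeholder; only the parameter change is taken from the description above.

```c
ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i)
{
	struct file *file = iocb->ki_filp;	/* file backing the I/O */
	loff_t pos = iocb->ki_pos;		/* current file position */

	/* ... the write loop proceeds as before, using file and pos ... */
	return 0;	/* placeholder for the real return value */
}
```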
-
Submitted by Matthew Wilcox (Oracle)
All filesystems have now been converted to use ->readahead, so remove the ->readpages operation and fix all the comments that used to refer to it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 25 March 2022, 4 commits
-
Submitted by Andreas Gruenbacher
When part of the user buffer passed to generic_perform_write() or iomap_file_buffered_write() cannot be faulted in for reading, the entire write currently fails. The correct behavior would be to write all the data that can be written, up to the point of failure.

Commit a6294593 ("iov_iter: Turn iov_iter_fault_in_readable into fault_in_iov_iter_readable") gave us the information needed, so fix the page prefaulting in generic_perform_write() and iomap_write_iter() to only bail out when no pages could be faulted in.

We already factor in that pages that are faulted in may no longer be resident by the time they are accessed. Paging out pages has the same effect as not faulting in those pages in the first place, so the code can already deal with that.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
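A sketch of the prefault change, assuming the semantics named in the referenced commit: fault_in_iov_iter_readable() returns how many of the requested bytes could not be faulted in, so a zero result means full success and a result equal to `bytes` means nothing was faulted in. Variable names follow the description above.

```c
/* Before: any shortfall aborted the whole write. */
if (unlikely(fault_in_iov_iter_readable(i, bytes))) {
	status = -EFAULT;
	break;
}

/* After: only bail out when no bytes at all could be faulted in;
 * a partial fault-in still lets a shorter copy make progress. */
if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
	status = -EFAULT;
	break;
}
```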
-
Submitted by Hugh Dickins
filemap_unaccount_folio() has a WARN_ON_ONCE(folio_test_dirty(folio)). It is good to warn of late dirtying on a persistent filesystem, but late dirtying on tmpfs can only lose data which is expected to be thrown away; and it's a pity if that warning comes ONCE on tmpfs, then hides others which really matter. Make it conditional on mapping_cap_writeback().

Cleanup: then folio_account_cleaned() no longer needs to check that for itself, and so no longer needs to know the mapping.

Link: https://lkml.kernel.org/r/b5a1106c-7226-a5c6-ad41-ad4832cae1f@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Hugh Dickins
The page_mapcount_reset() when folio_mapped() while mapping_exiting() was devised long before there were huge or compound pages in the cache. It is still valid for small pages, but it's not at all clear what's right to check and reset on large pages. Just don't try when folio_test_large().

Link: https://lkml.kernel.org/r/879c4426-4122-da9c-1a86-697f2c9a083@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Hugh Dickins
The PG_waiters bit is not included in PAGE_FLAGS_CHECK_AT_FREE, and vmscan.c's free_unref_page_list() callers rely on that not to generate bad_page() alerts. So it is redundant and misleading for __page_cache_release(), put_pages_list() and release_pages() (and presumably the copy-and-pasted free_zone_device_page()) to make a special point of clearing it (as the "__" implies, it could only safely be used on the freeing path).

Delete __ClearPageWaiters(). Remark on this in one of the "possible" comments in folio_wake_bit(), and delete the superfluous comments.

Link: https://lkml.kernel.org/r/3eafa969-5b1a-accf-88fe-318784c791a@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 23 March 2022, 2 commits
-
Submitted by Hugh Dickins
Mikulas asked in [1]: "Do we still need commit a0ee5ec5 ('tmpfs: allocate on read when stacked')?"

Lukas noticed this unusual behavior of a loop device backed by tmpfs in [2].

Normally, shmem_file_read_iter() copies the ZERO_PAGE when reading holes; but if it looks like it might be a read for "a stacking filesystem", it allocates actual pages to the page cache, and even marks them as dirty. And reads from the loop device do satisfy the test that is used.

This oddity was added for an old version of unionfs, to help limit its usage to the limited size of the tmpfs mount involved; but around the same time as the tmpfs mod went in (2.6.25), unionfs was reworked to proceed differently, and the mod was kept just in case others needed it.

Do we still need it? I cannot answer with more certainty than "Probably not". It's nasty enough that we really should try to delete it; but if a regression is reported somewhere, then we might have to revert later.

It's not quite as simple as just removing the test (as Mikulas did): xfstests generic/013 hung because splice from tmpfs failed on a page that was not up-to-date and whose page mapping was unset. That can be fixed just by marking the ZERO_PAGE as Uptodate, which of course it is: do so in pagecache_init() - it might be useful to others than tmpfs.

My intention, though, was to stop using the ZERO_PAGE here altogether: surely iov_iter_zero() is better for this case? Sadly not: it relies on clear_user(), and the x86 clear_user() is slower than its copy_user() [3]. But while we are still using the ZERO_PAGE, let's stop dirtying its struct page cacheline with unnecessary get_page() and put_page().

Link: https://lore.kernel.org/linux-mm/alpine.LRH.2.02.2007210510230.6959@file01.intranet.prod.int.rdu2.redhat.com/ [1]
Link: https://lore.kernel.org/linux-mm/20211126075100.gd64odg2bcptiqeb@work/ [2]
Link: https://lore.kernel.org/lkml/2f5ca5e4-e250-a41c-11fb-a7f4ebc7e1c9@google.com/ [3]
Link: https://lkml.kernel.org/r/90bc5e69-9984-b5fa-a685-be55f2b64b@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reported-by: Lukas Czerner <lczerner@redhat.com>
Acked-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Zdenek Kabelac <zkabelac@redhat.com>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Borislav Petkov <bp@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Miaohe Lin
It's unused now. Remove it and clean up the relevant comment.

Link: https://lkml.kernel.org/r/20220208134149.47299-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 March 2022, 2 commits
-
Submitted by Matthew Wilcox (Oracle)
If the VM_HUGEPAGE flag is set, attempt to allocate PMD-sized folios during readahead, even if we have no history of readahead being successful.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-
Submitted by Matthew Wilcox (Oracle)
do_page_cache_ra() was being exposed for the benefit of do_sync_mmap_readahead(). Switch it over to page_cache_ra_order(), partly because it's a better interface but mostly for the benefit of the next patch.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
-