- 30 September 2022 (2 commits)

Submitted by Ritesh Harjani (IBM)
submit_bh/submit_bh_wbc are non-blocking functions which just submit the bio and return. A caller of submit_bh/submit_bh_wbc needs to wait on the buffer until I/O completion and then check the buffer head's b_state field to know whether there was any I/O error. Hence there is no need for these functions to have any return value; even now they always return 0. Drop the return value and make the return type void to avoid any confusion.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Link: https://lore.kernel.org/r/cb66ef823374cdd94d2d03083ce13de844fffd41.1660788334.git.ritesh.list@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
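A hedged sketch of the prototype change implied by the description (the authoritative declarations live in include/linux/buffer_head.h); callers wait with wait_on_buffer(bh) and then test buffer_uptodate(bh):

    /* Before: a return value that was always 0. */
    int submit_bh(blk_opf_t opf, struct buffer_head *bh);

    /* After: fire-and-forget submission; I/O errors surface later
     * through bh->b_state once the buffer completes. */
    void submit_bh(blk_opf_t opf, struct buffer_head *bh);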

Submitted by Ritesh Harjani (IBM)
submit_bh always returns 0. This patch drops the useless return value of submit_bh from __sync_dirty_buffer(). Once all submit_bh callers are cleaned up, its return type can be made void.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/a98a6ddfac68f73d684c2724952e825bc1f4d238.1660788334.git.ritesh.list@gmail.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>

- 3 August 2022 (2 commits)

Submitted by Christoph Hellwig
All callers are gone, so remove the now dead code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Submitted by Matthew Wilcox (Oracle)
We can cache this information in a local variable instead of communicating from one part of the function to another via folio flags.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

- 16 July 2022 (1 commit)

Submitted by Bart Van Assche
Bring the ll_rw_block() kernel-doc header back in sync with the function prototype.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 1420c4a5 ("fs/buffer: Combine two submit_bh() and ll_rw_block() arguments")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20220715184735.2326034-2-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>

- 15 July 2022 (2 commits)

Submitted by Bart Van Assche
Both submit_bh() and ll_rw_block() accept a request operation type and request flags as their first two arguments. Micro-optimize these two functions by combining these first two arguments into a single argument. This patch does not change the behavior of any of the modified code.
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Acked-by: Song Liu <song@kernel.org> (for the md changes)
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20220714180729.1065367-48-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
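A sketch of the combined calling convention described above (parameter names are illustrative):

    /* Before: operation and flags travel as two separate arguments. */
    int submit_bh(int op, int op_flags, struct buffer_head *bh);

    /* After: one argument carries both, e.g.
     * submit_bh(REQ_OP_WRITE | REQ_SYNC, bh); */
    int submit_bh(blk_opf_t opf, struct buffer_head *bh);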

Submitted by Bart Van Assche
Improve static type checking by using the new blk_opf_t type for block layer request flags. Change WRITE into REQ_OP_WRITE. This patch does not change any functionality since REQ_OP_WRITE == WRITE == 1.
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20220714180729.1065367-47-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
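blk_opf_t is a sparse-checked bitwise type, so mixing it with plain integers now warns at static-analysis time; a minimal illustration (the typedef below matches include/linux/blk_types.h as of this series, but treat it as a sketch):

    typedef __u32 __bitwise blk_opf_t;   /* REQ_OP_* plus REQ_* flags */

    blk_opf_t opf = REQ_OP_WRITE;        /* was the bare constant WRITE (== 1) */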

- 29 June 2022 (2 commits)

Submitted by Matthew Wilcox (Oracle)
If a buffer is completed with an error, its uptodate flag will be clear, so the page_uptodate variable will have been set to 0. There's no need to check PageError here.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Submitted by Matthew Wilcox (Oracle)
Use a folio throughout this function.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christian Brauner (Microsoft) <brauner@kernel.org>

- 10 May 2022 (5 commits)

Submitted by Matthew Wilcox (Oracle)
All callers now have a folio.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>

Submitted by Matthew Wilcox (Oracle)
All but two of the callers already have a folio; pass a folio into try_to_free_buffers(). This removes the last user of cancel_dirty_page(), so remove that wrapper function too.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
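The resulting shape of the API, sketched under the assumption that the return type was switched to bool in the same conversion (see include/linux/buffer_head.h for the authoritative declaration):

    bool try_to_free_buffers(struct folio *folio);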

Submitted by Matthew Wilcox (Oracle)
With all implementations of aops->readpage converted to aops->read_folio, we can stop checking whether it's set and remove the member from aops.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Submitted by Matthew Wilcox (Oracle)
This function is NOT converted to handle large folios, so include an assert that the filesystem isn't passing one in. Otherwise, use the folio functions instead of the page functions where they exist. Convert all filesystems which use block_read_full_page().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Submitted by Matthew Wilcox (Oracle)
Change all the callers of ->readpage to call ->read_folio in preference, if it exists. This is a transitional duplication, and will be removed by the end of the series.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

- 9 May 2022 (7 commits)

Submitted by Matthew Wilcox (Oracle)
- Calculate iblock directly instead of using a while loop
- Move has_buffers to the end to remove a backwards jump
- Use __filemap_get_folio() instead of grab_cache_page(), which removes a spurious FGP_ACCESSED flag
- Eliminate the length and pos variables
- Use folio APIs where they exist
A sketch of the lookup swap appears below.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
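A hedged sketch of the lookup swap mentioned in the third bullet (the GFP choice and flag combination are illustrative):

    /* Before: grab_cache_page() implies FGP_ACCESSED, a spurious
     * reference hint for a truncation path. */
    struct page *page = grab_cache_page(mapping, index);

    /* After: request exactly the semantics needed. */
    struct folio *folio = __filemap_get_folio(mapping, index,
                    FGP_LOCK | FGP_CREAT, mapping_gfp_mask(mapping));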

Submitted by Matthew Wilcox (Oracle)
Pass a folio instead of a page to aops->is_dirty_writeback(). Convert both implementations and the caller.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

Submitted by Matthew Wilcox (Oracle)
pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

Submitted by Matthew Wilcox (Oracle)
There are no more aop flags left, so remove the parameter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

Submitted by Matthew Wilcox (Oracle)
There are no more aop flags left, so remove the parameter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

Submitted by Matthew Wilcox (Oracle)
There are no more aop flags left, so remove the parameter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

Submitted by Matthew Wilcox (Oracle)
There are no more aop flags left, so remove the parameter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>

- 2 April 2022 (1 commit)

Submitted by Matthew Wilcox (Oracle)
This flag is no longer used, so remove it.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>

- 23 March 2022 (1 commit)

Submitted by Minchan Kim
Check lru_cache_disabled under bh_lru_lock. Otherwise, the race below is possible, and migration of pages containing a buffer_head fails:

    CPU 0                          CPU 1
    bh_lru_install
      lru_cache_disabled == false
                                   lru_cache_disable
                                     atomic_inc(&lru_disable_count)
                                     invalidate_bh_lrus_cpu of CPU 0
                                       bh_lru_lock
                                       __invalidate_bh_lrus
                                       bh_lru_unlock
      bh_lru_lock
      install the bh
      bh_lru_unlock

When this race happens, a CMA allocation fails, which is critical for workloads that depend on CMA.
Link: https://lkml.kernel.org/r/20220308180709.2017638-1-minchan@kernel.org
Fixes: 8cc621d2 ("mm: fs: invalidate BH LRU during page migration")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: John Dias <joaodias@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
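A minimal sketch of the fix as described, moving the check inside the lock (the real function in fs/buffer.c carries more detail):

    static void bh_lru_install(struct buffer_head *bh)
    {
            /* ... */
            bh_lru_lock();

            /*
             * Re-check under the lock: a concurrent lru_cache_disable()
             * may have run since any earlier unlocked test, and installing
             * the bh now would make its page unmigratable.
             */
            if (lru_cache_disabled()) {
                    bh_lru_unlock();
                    return;
            }

            /* ... install bh into the per-CPU LRU ... */
            bh_lru_unlock();
    }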

- 17 March 2022 (1 commit)

Submitted by Matthew Wilcox (Oracle)
Convert all callers; mostly this is just changing the aops to point at it, but a few implementations need a little more work.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Tested-by: Mike Marshall <hubcap@omnibond.com> # orangefs
Tested-by: David Howells <dhowells@redhat.com> # afs

- 15 March 2022 (2 commits)

Submitted by Matthew Wilcox (Oracle)
Remove special-casing of a NULL invalidatepage, since there is no more block_invalidatepage.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Tested-by: Mike Marshall <hubcap@omnibond.com> # orangefs
Tested-by: David Howells <dhowells@redhat.com> # afs

Submitted by Matthew Wilcox (Oracle)
Since the uptodate property is maintained on a per-folio basis, the is_partially_uptodate method should also take a folio. Fix the types at the same time so it's clear that it returns true/false and takes the count in bytes, not blocks.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Tested-by: Mike Marshall <hubcap@omnibond.com> # orangefs
Tested-by: David Howells <dhowells@redhat.com> # afs
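The type change the description implies, sketched (see include/linux/fs.h for the authoritative declaration):

    /* Before: page-based, int return, counts in an ambiguous unit. */
    int (*is_partially_uptodate)(struct page *, unsigned long from,
                                 unsigned long count);

    /* After: folio-based, explicit bool, byte counts. */
    bool (*is_partially_uptodate)(struct folio *, size_t from, size_t count);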

- 8 March 2022 (1 commit)

Submitted by Christoph Hellwig
With the NVMe support for this gone, there are no consumers of these hints left, so remove them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220304175556.407719-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

- 2 February 2022 (1 commit)

Submitted by Christoph Hellwig
Pass the block_device and operation that we plan to use this bio for to bio_alloc to optimize the assignment. NULL/0 can be passed, both for the passthrough case on a raw request_queue and to temporarily avoid refactoring some nasty code. Also move the gfp_mask argument after the nr_vecs argument for a much more logical calling convention matching what most of the kernel does.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20220124091107.642561-18-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
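A hedged sketch of the calling-convention change in a buffer_head submission path (the exact flags vary by call site):

    /* Before: allocate first, then assign bdev and opf by hand. */
    struct bio *bio = bio_alloc(GFP_NOIO, 1);
    bio_set_dev(bio, bh->b_bdev);
    bio->bi_opf = REQ_OP_WRITE;

    /* After: bdev and operation are passed at allocation time. */
    struct bio *bio = bio_alloc(bh->b_bdev, 1, REQ_OP_WRITE, GFP_NOIO);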

- 17 December 2021 (1 commit)

Submitted by Matthew Wilcox (Oracle)
There are no plans to convert buffer_head infrastructure to use large folios, but __block_write_begin_int() is called from iomap, and it's more convenient and less error-prone if we pass in a folio from iomap. It also has a nice saving of almost 200 bytes of code from removing repeated calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
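Roughly the resulting prototype (a sketch; the const iomap qualifier comes from an earlier commit in this log):

    int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
                    get_block_t *get_block, const struct iomap *iomap);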

- 19 October 2021 (2 commits)

Submitted by Christoph Hellwig
No need to convert from bdev to inode and back.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211018101130.1838532-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Submitted by Christoph Hellwig
Use the proper helper to read the block device size.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20211018101130.1838532-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
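Presumably the helper is bdev_nr_bytes(), added earlier in the same series; a sketch of the idiom it replaces:

    /* Before: detour through the backing inode. */
    loff_t size = i_size_read(bdev->bd_inode);

    /* After: ask the block layer directly. */
    loff_t size = bdev_nr_bytes(bdev);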

- 25 September 2021 (1 commit)

Submitted by Minchan Kim
The kernel test robot reported a regression in fio.write_iops [1] with commit 8cc621d2 ("mm: fs: invalidate BH LRU during page migration"). Since lru_add_drain is called frequently, invalidating the bh_lrus there could increase the bh_lrus cache miss ratio, which requires more I/O in the end. This patch moves the bh_lrus invalidation from the hot path (e.g., zap_page_range, pagevec_release) to the cold path (i.e., lru_add_drain_all, lru_cache_disable). Zhengjun Xing confirmed "I test the patch, the regression reduced to -2.9%".
[1] https://lore.kernel.org/lkml/20210520083144.GD14190@xsang-OptiPlex-9020/
[2] 8cc621d2, mm: fs: invalidate BH LRU during page migration
Link: https://lkml.kernel.org/r/20210907212347.1977686-1-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: kernel test robot <oliver.sang@intel.com>
Reviewed-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Tested-by: "Xing, Zhengjun" <zhengjun.xing@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
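A loose sketch of the shape of the change in mm/swap.c; the helper name follows the upstream patch, but the argument conventions here are assumptions:

    /*
     * Called only from the cold paths (lru_add_drain_all(),
     * lru_cache_disable()): drain the pagevec LRUs and this CPU's
     * buffer_head LRU together, instead of invalidating bh_lrus in
     * every hot-path lru_add_drain() call.
     */
    static void lru_add_and_bh_lrus_drain(void)
    {
            lru_add_drain();
            invalidate_bh_lrus_cpu(smp_processor_id());
    }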

- 17 August 2021 (1 commit)

Submitted by Christoph Hellwig
__block_write_begin_int never modifies the passed-in iomap, so mark it const.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>

- 13 July 2021 (1 commit)

Submitted by Eric W. Biederman
The bdflush system call has been deprecated for a very long time. Recently Michael Schmitz tested [1] and found that the last known caller of the bdflush system call is unaffected by its removal. Since the code is not needed, delete it.
[1] https://lkml.kernel.org/r/36123b5d-daa0-6c2b-f2d4-a942f069fd54@gmail.com
Link: https://lkml.kernel.org/r/87sg10quue.fsf_-_@disp2133
Tested-by: Michael Schmitz <schmitzmic@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Cyril Hrubis <chrubis@suse.cz>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

- 30 June 2021 (2 commits)

Submitted by Matthew Wilcox (Oracle)
Patch series "Further set_page_dirty cleanups". Prompted by Christoph's recent patches, here are some more patches to improve the state of set_page_dirty(). They're all from the folio tree, so they've been tested to a certain extent. This patch (of 6): Nothing in __set_page_dirty() is specific to buffer_head, so move it to mm/page-writeback.c. That removes the only caller of account_page_dirtied() outside of page-writeback.c, so make it static. Link: https://lkml.kernel.org/r/20210615162342.1669332-1-willy@infradead.org Link: https://lkml.kernel.org/r/20210615162342.1669332-2-willy@infradead.orgSigned-off-by: NMatthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: NChristoph Hellwig <hch@lst.de> Reviewed-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jan Kara <jack@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dan Williams <dan.j.williams@intel.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>

Submitted by Christoph Hellwig
Patch series "remove the implicit .set_page_dirty default". This series cleans up a few lose ends around ->set_page_dirty, most importantly removes the default to the buffer head based on if no method is wired up. This patch (of 3): __set_page_dirty is only used by built-in code. Link: https://lkml.kernel.org/r/20210614061512.3966143-1-hch@lst.de Link: https://lkml.kernel.org/r/20210614061512.3966143-2-hch@lst.deSigned-off-by: NChristoph Hellwig <hch@lst.de> Reviewed-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: NJan Kara <jack@suse.cz> Reviewed-by: NMatthew Wilcox (Oracle) <willy@infradead.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>

- 6 May 2021 (1 commit)

Submitted by Minchan Kim
Pages containing buffer_heads that are in one of the per-CPU buffer_head LRU caches will be pinned and thus cannot be migrated. This can prevent CMA allocations from succeeding, which are often used on platforms with co-processors (such as a DSP) that can only use physically contiguous memory. It can also prevent memory hot-unplugging from succeeding, which involves migrating at least MIN_MEMORY_BLOCK_SIZE bytes of memory, which ranges from 8 MiB to 1 GiB based on the architecture in use. Correspondingly, invalidate the BH LRU caches before a migration starts and stop any buffer_head from being cached in the LRU caches, until migration has finished.
Link: https://lkml.kernel.org/r/20210319175127.886124-3-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Reported-by: Laura Abbott <labbott@kernel.org>
Tested-by: Oliver Sang <oliver.sang@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: John Dias <joaodias@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 22 March 2021 (1 commit)

Submitted by Mikulas Patocka
This patch replaces a loop with a "tzcnt" instruction.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
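The message doesn't show the code, but a change of this shape in grow_buffers() fits the description; __ffs() compiles to a single tzcnt/bsf on x86 (variables are illustrative):

    /* Before: derive log2(PAGE_SIZE / size) by looping. */
    sizebits = -1;
    do {
            sizebits++;
    } while ((size << sizebits) < PAGE_SIZE);

    /* After: one count-trailing-zeros; size is a power of two. */
    sizebits = PAGE_SHIFT - __ffs(size);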

- 25 February 2021 (2 commits)

Submitted by Johannes Weiner
alloc_page_buffers() currently uses get_mem_cgroup_from_page() for charging the buffers to the page owner, which does an rcu-protected page->memcg lookup and acquires a reference. But buffer allocation has the page lock held throughout, which pins the page and thereby the memcg; neither rcu protection nor holding an extra reference during the allocation is necessary. Use a raw page_memcg() instead. This was the last user of get_mem_cgroup_from_page(); delete it.
Link: https://lkml.kernel.org/r/20210209190126.97842-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
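A sketch of the substitution described (the set_active_memcg() scoping mirrors typical charging code; treat details as illustrative):

    /* Before: rcu-protected lookup that also takes a reference. */
    memcg = get_mem_cgroup_from_page(page);
    old_memcg = set_active_memcg(memcg);
    /* ... allocate the buffer_heads ... */
    set_active_memcg(old_memcg);
    mem_cgroup_put(memcg);

    /* After: the locked page pins its memcg; a raw lookup suffices. */
    memcg = page_memcg(page);
    old_memcg = set_active_memcg(memcg);
    /* ... allocate the buffer_heads ... */
    set_active_memcg(old_memcg);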

Submitted by Yang Guo
clear_buffer_new() is used to clear a buffer head's "new" state. When PAGE_SIZE is 64K, most buffer heads in the list do not need to be cleared, yet clear_buffer_new() performs an expensive atomic modification unconditionally. Check the buffer head before clearing it, as __block_write_begin_int() does, which is good for performance.
Link: https://lkml.kernel.org/r/1612332890-57918-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Yang Guo <guoyang2@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
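A minimal sketch of the test-before-clear pattern described (the surrounding loop over the page's buffer heads is omitted):

    /* Unconditional: an atomic RMW even when the bit is already clear. */
    clear_buffer_new(bh);

    /* Cheaper: test first, pay for the atomic only when needed. */
    if (buffer_new(bh))
            clear_buffer_new(bh);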