- 10 April 2021, 2 commits
-
-
Committed by Gao Xiang

When picking up inplace I/O pages, they should be traversed in reverse order to align with the traversal order of file-backed online pages. Also, the index should be updated together when preloading compressed pages.

Previously, only page-sized pclustersize was supported, so this was not a problem. Also rename `compressedpages' to `icpage_ptr' to reflect its functionality.

Link: https://lore.kernel.org/r/20210407043927.10623-5-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
Committed by Gao Xiang

Since multiple pcluster sizes could be used at once, the number of compressed pages will become a variable factor. It's necessary to introduce slab pools rather than a single slab cache now.

This limits the pclustersize to 1M (Z_EROFS_PCLUSTER_MAX_SIZE) and gets rid of the obsolete EROFS_FS_CLUSTER_PAGE_LIMIT, which has no use now.

Link: https://lore.kernel.org/r/20210407043927.10623-4-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
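As a hedged illustration only (every `example_*` name and the size classes below are assumptions for this sketch, not the actual fs/erofs code), the per-size slab pool idea looks roughly like this:

```c
/* Hedged sketch: one slab cache per pcluster size class. */
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct example_pcluster {
	unsigned int pclusterpages;		/* number of compressed pages */
	struct page *compressed_pages[];	/* flexible array, sized per pool */
};

/* illustrative size classes of 1, 2, 4 and 8 compressed pages */
static const unsigned int example_sizes[] = { 1, 2, 4, 8 };
static const char *const example_names[] = {
	"example_pcluster-1", "example_pcluster-2",
	"example_pcluster-4", "example_pcluster-8",
};
static struct kmem_cache *example_pools[ARRAY_SIZE(example_sizes)];

static int __init example_pools_init(void)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(example_sizes); ++i) {
		example_pools[i] = kmem_cache_create(example_names[i],
				sizeof(struct example_pcluster) +
				example_sizes[i] * sizeof(struct page *),
				0, SLAB_RECLAIM_ACCOUNT, NULL);
		if (!example_pools[i])
			return -ENOMEM;
	}
	return 0;
}

/* pick the smallest pool that can hold @nrpages compressed pages */
static struct example_pcluster *example_alloc_pcluster(unsigned int nrpages)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(example_sizes); ++i) {
		struct example_pcluster *pcl;

		if (nrpages > example_sizes[i])
			continue;
		pcl = kmem_cache_zalloc(example_pools[i], GFP_NOFS);
		if (!pcl)
			return ERR_PTR(-ENOMEM);
		pcl->pclusterpages = nrpages;
		return pcl;
	}
	return ERR_PTR(-EINVAL);	/* larger than the biggest size class */
}
```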
-
- 07 April 2021, 1 commit
-
-
Committed by Gao Xiang

The formal big pcluster design is actually more powerful and flexible than the previous idea, in which pclustersize was fixed as power-of-2 blocks, which was obviously inefficient and space-wasting. Instead, pclustersize can now be set independently for each pcluster, so various pcluster sizes can also be used together in one file if mkfs wants (for example, according to data type and/or compression ratio).

Let's get rid of the previous physical_clusterbits[] setting (also note that the corresponding on-disk fields are still 0 for now). Therefore, head1/2 can be used for at most 2 different algorithms in one file, and again pclustersize is now independent of these.

Link: https://lore.kernel.org/r/20210407043927.10623-2-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 03 April 2021, 1 commit
-
-
Committed by Ruiqi Gong

zmap.c: s/correspoinding/corresponding
zdata.c: s/endding/ending

Link: https://lore.kernel.org/r/20210331093920.31923-1-gongruiqi1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Ruiqi Gong <gongruiqi1@huawei.com>
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 29 March 2021, 3 commits
-
-
Committed by Gao Xiang

Add a missing case which could cause unnecessary page allocation instead of directly using inplace I/O, which increases the runtime memory footprint.

In detail, consider an online file-backed page whose right half is chosen to be cached (e.g. the end page of a readahead request) while some of its data doesn't exist in the managed cache, so the pcluster will definitely be kept in the submission chain. (IOWs, it cannot be decompressed without I/O, e.g., due to the bypass queue.) Currently, DELAYEDALLOC/TRYALLOC cases can be downgraded to NOINPLACE, which stops online pages from using inplace I/O. After this patch, unneeded page allocations won't be observed in pickup_page_for_submission() anymore.

Link: https://lore.kernel.org/r/20210321183227.5182-1-hsiangkao@aol.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
Committed by Huang Jianan

Sync decompression was introduced to get rid of additional kworker scheduling overhead. But there is no such overhead in non-atomic contexts, so it is better to turn off sync decompression there to avoid making the current thread wait in z_erofs_runqueue.

Link: https://lore.kernel.org/r/20210317035448.13921-3-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
Committed by Huang Jianan

z_erofs_decompressqueue_endio may not be executed in an atomic context, for example, when dm-verity is turned on. In this scenario, data can be decompressed directly to get rid of the additional kworker scheduling overhead.

Link: https://lore.kernel.org/r/20210317035448.13921-2-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
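A minimal sketch of that decision, assuming made-up `example_*` names (the real logic sits around z_erofs_decompressqueue_endio and its kickoff helper):

```c
/* Hedged sketch: decompress inline when the completion context allows sleeping. */
#include <linux/workqueue.h>

struct example_decompressqueue {
	struct work_struct work;
	/* ... pending pclusters, error state, etc. ... */
};

static struct workqueue_struct *example_workqueue;

/* the actual decompression backend; body elided in this sketch */
static void example_decompress_queue(struct example_decompressqueue *q)
{
}

static void example_decompress_kickoff(struct example_decompressqueue *q,
				       bool may_be_atomic)
{
	if (may_be_atomic) {
		/* end_io may run in irq/atomic context: defer to a kworker */
		queue_work(example_workqueue, &q->work);
		return;
	}
	/* e.g. under dm-verity's own workqueue: decompress right away */
	example_decompress_queue(q);
}
```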
-
- 11 March 2021, 1 commit
-
-
Committed by Christoph Hellwig

Ever since the addition of multipage bio_vecs BIO_MAX_PAGES has been horribly confusingly misnamed. Rename it to BIO_MAX_VECS to stop confusing users of the bio API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20210311110137.1132391-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 09 December 2020, 1 commit
-
-
Committed by Gao Xiang

Try to forcibly switch to inplace I/O under low-memory scenarios in order to avoid direct memory reclaim due to cached page allocation.

Link: https://lore.kernel.org/r/20201209123717.12430-1-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 08 December 2020, 3 commits
-
-
Committed by Gao Xiang

Simplify try_to_claim_pcluster() by directly using cmpxchg() here (the retry loop caused more overhead). Also, move the chain loop detection in and rename it to z_erofs_try_to_claim_pcluster().

Link: https://lore.kernel.org/r/20201208095834.3133565-3-hsiangkao@redhat.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
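For illustration, the claim-by-cmpxchg pattern looks roughly like the sketch below (hypothetical names throughout; not the actual z_erofs_try_to_claim_pcluster()):

```c
/*
 * Hedged sketch: claim a pcluster for the current submission chain with a
 * single cmpxchg().  EXAMPLE_PCLUSTER_NIL and example_pcl are illustrative.
 */
#include <linux/atomic.h>

#define EXAMPLE_PCLUSTER_NIL	((void *)0)

struct example_pcl {
	void *next;	/* next pcluster in the submission chain */
};

static bool example_try_to_claim_pcluster(struct example_pcl *pcl,
					  void *owned_head)
{
	/* if ->next is still NIL, install @owned_head and own the pcluster */
	return cmpxchg(&pcl->next, EXAMPLE_PCLUSTER_NIL, owned_head) ==
	       EXAMPLE_PCLUSTER_NIL;
}
```

If the cmpxchg() does not observe NIL, another thread claimed the pcluster first and no retry is needed, which is what makes the old loop unnecessary.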
-
Committed by Gao Xiang

Previously, there was some concern about calling add_to_page_cache_lru() with page->mapping == Z_EROFS_MAPPING_STAGING (!= NULL). Since page->private is used instead now, partially revert commit 5ddcee1f ("erofs: get rid of __stagingpage_alloc helper") with some adaptation for simplicity.

Link: https://lore.kernel.org/r/20201208095834.3133565-2-hsiangkao@redhat.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
Committed by Gao Xiang

Previously, we played around with a magical page->mapping for short-lived temporary pages since we needed to identify different types of pages in the same pcluster, but both invalidated and short-lived temporary pages can have page->mapping == NULL. It was considered safe because such temporary pages are all non-LRU / non-movable pages.

This patch uses a specific page->private value to identify short-lived pages instead, so it won't rely on page->mapping anymore. Details are described in "compress.h" as well.

Link: https://lore.kernel.org/r/20201208095834.3133565-1-hsiangkao@redhat.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
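A hedged sketch of the idea, with made-up constant and helper names (the real markers are defined in fs/erofs/compress.h):

```c
/*
 * Hedged sketch: tag short-lived temporary pages through page->private so
 * they can be told apart from invalidated pages even though both may have
 * page->mapping == NULL.  The magic value below is illustrative.
 */
#include <linux/mm.h>

#define EXAMPLE_SHORTLIVED_PAGE		((unsigned long)-1)

static void example_set_shortlived(struct page *page)
{
	set_page_private(page, EXAMPLE_SHORTLIVED_PAGE);
}

static bool example_is_shortlived(struct page *page)
{
	return page_private(page) == EXAMPLE_SHORTLIVED_PAGE;
}
```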
-
- 04 November 2020, 1 commit
-
-
Committed by Gao Xiang

pcluster should only be set up for managed pages, not temporary pages. Since it currently uses page->mapping for identification, the impact is minor for now.

[ Update: Vladimir reported that the kernel log becomes polluted because PAGE_FLAGS_CHECK_AT_FREE flag(s) are set if the page allocation debug option is enabled. ]

Link: https://lore.kernel.org/r/20201022145724.27284-1-hsiangkao@aol.com
Fixes: 5ddcee1f ("erofs: get rid of __stagingpage_alloc helper")
Cc: <stable@vger.kernel.org> # 5.5+
Tested-by: Vladimir Zapolskiy <vladimir@tuxera.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 19 September 2020, 3 commits
-
-
Committed by Gao Xiang

Let's add REQ_RAHEAD flag so it'd be easier to identify readahead I/O requests in blktrace.

Reviewed-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20200919072730.24989-3-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
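For reference, tagging a read bio as readahead only touches bi_opf; in the sketch below, the helper name is hypothetical while REQ_OP_READ, REQ_RAHEAD and submit_bio() are the real kernel symbols:

```c
/* Hedged sketch: mark readahead bios so blktrace shows them as "RA" requests. */
#include <linux/bio.h>
#include <linux/blk_types.h>

static void example_submit_read(struct bio *bio, bool is_readahead)
{
	bio->bi_opf = REQ_OP_READ;
	if (is_readahead)
		bio->bi_opf |= REQ_RAHEAD;
	submit_bio(bio);
}
```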
-
Committed by Gao Xiang

should_decompress_synchronously() has only a single condition for now, so fold it in instead.

Reviewed-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20200919072730.24989-2-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
Committed by Gao Xiang

The variable `err' in z_erofs_submit_queue() isn't useful here; remove it.

Reviewed-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20200919072730.24989-1-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 18 September 2020, 1 commit
-
-
Committed by Chao Yu

After commit 0615090c ("erofs: convert compressed files from readpages to readahead"), add_to_page_cache_lru() was moved into mm code, so in the call path below no page will be cached into the @pagepool list or grabbed from it:

- z_erofs_readpage
  - z_erofs_do_read_page
    - preload_compressed_pages
      - erofs_allocpage

Let's get rid of this unneeded @pagepool parameter.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20200917011821.22767-1-yuchao0@huawei.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 03 August 2020, 2 commits
-
-
Committed by Gao Xiang

The documentation [1] says that WQ_CPU_INTENSIVE is "meaningless" for unbound wq. I remove this flag from the places where an unbound queue is allocated. This is supposed to improve code readability.

[1] https://www.kernel.org/doc/html/latest/core-api/workqueue.html#flags

Signed-off-by: Maksym Planeta <mplaneta@os.inf.tu-dresden.de>
[ Gao Xiang: since the original treewide patch [2] hasn't been merged yet, handling the EROFS part only for the next cycle. ]
[2] https://lore.kernel.org/r/20200213141823.2174236-1-mplaneta@os.inf.tu-dresden.de
Link: https://lore.kernel.org/r/20200731024049.16495-1-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
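A hedged sketch of the resulting allocation (the queue name and max_active value here are illustrative, not the actual erofs ones):

```c
/*
 * Hedged sketch: WQ_CPU_INTENSIVE only matters for per-CPU workqueues, so
 * an unbound queue can simply drop it.
 */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_unzip_wq;

static int __init example_wq_init(void)
{
	/* previously: WQ_UNBOUND | WQ_HIGHPRI | WQ_CPU_INTENSIVE */
	example_unzip_wq = alloc_workqueue("example_unzipd",
					   WQ_UNBOUND | WQ_HIGHPRI, 0);
	return example_unzip_wq ? 0 : -ENOMEM;
}
```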
-
Committed by Alexander A. Klimov

Rationale: Reduces attack surface on kernel devs opening the links for MITM as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
            If both the HTTP and HTTPS versions return 200 OK and serve the same content:
              Replace HTTP with HTTPS.

Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Link: https://lore.kernel.org/r/20200713130944.34419-1-grandmaster@al2klimov.de
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 17 July 2020, 1 commit
-
-
Committed by Kees Cook

Using uninitialized_var() is dangerous as it papers over real bugs[1] (or can in the future), and suppresses unrelated compiler warnings (e.g. "unused variable"). If the compiler thinks it is uninitialized, either simply initialize the variable or make compiler changes.

In preparation for removing[2] the[3] macro[4], remove all remaining needless uses with the following script:

    git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
        xargs perl -pi -e \
            's/\buninitialized_var\(([^\)]+)\)/\1/g;
             s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'

drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid pathological white-space.

No outstanding warnings were found building allmodconfig with GCC 9.3.0 for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64, alpha, and m68k.

[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/

Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 03 June 2020, 2 commits
-
-
Committed by Matthew Wilcox (Oracle)

Use the new readahead operation in erofs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Acked-by: Gao Xiang <gaoxiang25@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Link: http://lkml.kernel.org/r/20200414150233.24495-20-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
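As a rough, hedged sketch of the general shape of such a ->readahead hook (not the actual erofs implementation, which queues pages for decompression and completes them asynchronously):

```c
/*
 * Hedged sketch of an address_space_operations ->readahead hook.  Pages
 * returned by readahead_page() come locked with an extra reference; the
 * filesystem unlocks and releases them once the data is ready.
 */
#include <linux/fs.h>
#include <linux/pagemap.h>

static void example_readahead(struct readahead_control *rac)
{
	struct page *page;

	while ((page = readahead_page(rac))) {
		/* queue @page for read or decompression I/O here ... */
		unlock_page(page);	/* in real code: done at I/O completion */
		put_page(page);
	}
}

static const struct address_space_operations example_aops = {
	.readahead	= example_readahead,
};
```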
-
Committed by Matthew Wilcox (Oracle)

Use the new readahead operation in erofs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Acked-by: Gao Xiang <gaoxiang25@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Link: http://lkml.kernel.org/r/20200414150233.24495-19-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 May 2020, 1 commit
-
-
Committed by Chao Yu

Convert erofs to use the new internal mount API, as the old one will be obsoleted and removed. This allows greater flexibility in communication of mount parameters between userspace, the VFS and the filesystem. See Documentation/filesystems/mount_api.txt for more information.

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20200529104836.17843-1-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
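A hedged sketch of the plumbing the new mount API expects (a parameter table plus .parse_param and .get_tree hooks), using illustrative names and a single made-up option rather than the real erofs parameter set:

```c
/* Hedged sketch of new-mount-API plumbing; everything "example_*" is made up. */
#include <linux/fs.h>
#include <linux/fs_context.h>
#include <linux/fs_parser.h>

enum { Opt_example_cache };

static const struct fs_parameter_spec example_fs_parameters[] = {
	fsparam_u32("cache_strategy", Opt_example_cache),
	{}
};

static int example_fill_super(struct super_block *sb, struct fs_context *fc)
{
	/* read the on-disk superblock and set up @sb here ... */
	return 0;
}

static int example_parse_param(struct fs_context *fc,
			       struct fs_parameter *param)
{
	struct fs_parse_result result;
	int opt = fs_parse(fc, example_fs_parameters, param, &result);

	if (opt < 0)
		return opt;
	/* opt == Opt_example_cache: stash result.uint_32 in fc->fs_private */
	return 0;
}

static int example_get_tree(struct fs_context *fc)
{
	return get_tree_bdev(fc, example_fill_super);
}

static const struct fs_context_operations example_context_ops = {
	.parse_param	= example_parse_param,
	.get_tree	= example_get_tree,
};
```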
-
- 03 March 2020, 1 commit
-
-
Committed by Gao Xiang

XArray has friendly APIs and will replace the old radix tree in the near future. This conversion makes use of __xa_cmpxchg when inserting an item that was just inserted by another thread. In detail, instead of redoing the whole lookup as we did for the old radix tree, it tries to legitimize the current in-tree item in the XArray, which is more effective.

In addition, naming is rather a challenge for a non-English speaker like me. The basic idea of workstn is to provide a runtime sparse array with items arranged in physical block number order. Such items (previously called workgroups) can be used to record compressed clusters or for later new features. However, neither workgroup nor workstn seems a good name from whatever point of view, so I'd like to rename them pslot and managed_pslots to stand for physical slots. This patch handles the second as a part of the radix tree conversion.

Cc: Matthew Wilcox <willy@infradead.org>
Link: https://lore.kernel.org/r/20200220024642.91529-1-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
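A hedged sketch of the insert-or-reuse pattern (illustrative names; the erofs code itself uses __xa_cmpxchg under xa_lock rather than the locked xa_cmpxchg shown here):

```c
/*
 * Hedged sketch: if another thread already inserted an entry at @index,
 * xa_cmpxchg() hands it back so we can adopt ("legitimize") it instead of
 * redoing a full lookup.
 */
#include <linux/err.h>
#include <linux/xarray.h>

static DEFINE_XARRAY(example_managed_pslots);

static void *example_register_pslot(unsigned long index, void *newp)
{
	void *oldp = xa_cmpxchg(&example_managed_pslots, index, NULL, newp,
				GFP_KERNEL);

	if (xa_is_err(oldp))
		return ERR_PTR(xa_err(oldp));	/* e.g. -ENOMEM */
	if (!oldp)
		return newp;	/* we won the race; @newp is now in-tree */
	return oldp;		/* someone beat us to it; reuse their entry */
}
```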
-
- 21 January 2020, 2 commits
-
-
Committed by Gao Xiang

A label and extra variables will be eliminated, which is cleaner.

Link: https://lore.kernel.org/r/20200121064819.139469-1-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
Committed by Gao Xiang

No need to introduce such a separate helper since cache strategy compile configs were changed into runtime options in v5.4. No logic changes.

Link: https://lore.kernel.org/r/20200121064747.138987-1-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
- 07 January 2020, 2 commits
-
-
Committed by Vladimir Zapolskiy

All workgroups are registered with the tag value set to 0, so to simplify the erofs_register_workgroup() interface the tag argument can be removed, since only that value is sent down to the function body.

Signed-off-by: Vladimir Zapolskiy <vladimir@tuxera.com>
Link: https://lore.kernel.org/r/20200102120118.14979-3-vladimir@tuxera.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
Committed by Vladimir Zapolskiy

It is feasible to simplify the erofs_find_workgroup() interface by removing an unused function argument. While formally the argument is used in the function itself, its assigned value is ignored on the caller side.

Signed-off-by: Vladimir Zapolskiy <vladimir@tuxera.com>
Link: https://lore.kernel.org/r/20200102120118.14979-2-vladimir@tuxera.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
- 24 November 2019, 4 commits
-
-
Committed by Gao Xiang

VLE was an old informal name for fixed-sized output compression which came from the published ATC'19 paper [1]. Drop those old annotations since erofs can handle all encoded clusters on a block-aligned basis, which is wider than fixed-sized output compression after the larger clustersize feature is fully implemented. Unaligned encoding won't be considered in EROFS since it's not friendly to inplace I/O and perhaps decompression inplace.

a) Fixed-sized output compression with 16KB pcluster:
    ___________________________________
   |xxxxxxxx|xxxxxxxx|xxxxxxxx|xxxxxxxx|
   |___ 0___|___ 1___|___ 2___|___ 3___| physical blocks

b) Block-aligned fixed-sized input compression with 16KB pcluster:
    ___________________________________
   |xxxxxxxx|xxxxxxxx|xxxxxxxx|xxx00000|
   |___ 0___|___ 1___|___ 2___|___ 3___| physical blocks

c) Block-unaligned fixed-sized input compression with 16KB compression unit:
    ____________________________________________
   |..xxxxxx|xxxxxxxx|xxxxxxxx|xxxxxxxx|x.......|
   |___ 0___|___ 1___|___ 2___|___ 3___|___ 4___| physical blocks

Refine better names for those as well.

[1] https://www.usenix.org/conference/atc19/presentation/gao

Link: https://lore.kernel.org/r/20191108033733.63919-1-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
Committed by Gao Xiang

Tasks waiting on I/O for sync decompression should be marked as being in IO wait state.

Link: https://lore.kernel.org/r/20191008125616.183715-5-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
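A minimal sketch of the idea with made-up names for the erofs-specific parts; io_wait_event() accounts the sleep as iowait, unlike plain wait_event():

```c
/* Hedged sketch: wait for outstanding decompression I/O in IO-wait state. */
#include <linux/atomic.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_io_wq);

static void example_wait_for_decompression(atomic_t *pending_bios)
{
	/* shows up as iowait in CPU accounting while the task sleeps */
	io_wait_event(example_io_wq, !atomic_read(pending_bios));
}
```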
-
Committed by Gao Xiang

Previously, both z_erofs_unzip_io and z_erofs_unzip_io_sb recorded decompression queues for the backend to use. The only difference is that z_erofs_unzip_io is used for on-stack sync decompression so it doesn't have a super_block field (since the caller can pass it in its context), but that increases complexity with only a pointer saved.

Rename z_erofs_unzip_io to z_erofs_decompressqueue with a fixed super_block member and kill the other entirely; it can also fall back to sync decompression on memory allocation failure.

Link: https://lore.kernel.org/r/20191008125616.183715-4-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
Committed by Gao Xiang

The open code is now much cleaner due to iterative development.

Link: https://lore.kernel.org/r/20191124025217.12345-1-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
- 16 October 2019, 2 commits
-
-
Committed by Gao Xiang

After commit 4279f3f9 ("staging: erofs: turn cache strategies into mount options"), cache strategies were changed into mount options rather than the old build configs. Let's kill the now-useless code for the obsoleted build options.

Link: https://lore.kernel.org/r/20191008125616.183715-2-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
Committed by Gao Xiang

- change return value to int since collection is already returned within the collector.
- better function naming.

Link: https://lore.kernel.org/r/20191008125616.183715-1-gaoxiang25@huawei.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
- 01 October 2019, 1 commit
-
-
Committed by Gao Xiang

Fix a recent cleanup patch. The noio (bypass) chain is handled asynchronously against the submit chain, therefore inplace I/O or pagevec cannot be applied to such pages. Add a detailed comment for this as well.

Fixes: 97e86a85 ("staging: erofs: tidy up decompression frontend")
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Link: https://lore.kernel.org/r/20190922100434.229340-1-gaoxiang25@huawei.com
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
-
- 06 September 2019, 5 commits
-
-
Committed by Gao Xiang

Add prefix "erofs_" to these functions and print sb->s_id as a prefix to erofs_{err, info} so that the user knows which file system is affected.

Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Link: https://lore.kernel.org/r/20190904020912.63925-23-gaoxiang25@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Gao Xiang

As Christoph suggested [1], "Please just use plain kmalloc everywhere and let the normal kernel error injection code take care of injeting any errors."

[1] https://lore.kernel.org/r/20190829102426.GE20598@infradead.org/

Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Link: https://lore.kernel.org/r/20190904020912.63925-20-gaoxiang25@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Gao Xiang

Add erofs_ prefix to free_inode, alloc_inode, ...

Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Link: https://lore.kernel.org/r/20190904020912.63925-19-gaoxiang25@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Gao Xiang

As Christoph pointed out [1], "Why is there __submit_bio which really just obsfucates what is going on? Also why is __submit_bio using bio_set_op_attrs instead of opencode it as the comment right next to it asks you to?" Let's use submit_bio directly instead.

[1] https://lore.kernel.org/r/20190830162812.GA10694@infradead.org/

Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Link: https://lore.kernel.org/r/20190904020912.63925-18-gaoxiang25@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Gao Xiang

As Christoph pointed out [1], "erofs_grab_bio tries to handle a bio_alloc failure, except that the function will not actually fail due the mempool backing it." Sorry about the useless code; fix it now and localize erofs_grab_bio [2].

[1] https://lore.kernel.org/r/20190830162812.GA10694@infradead.org/
[2] https://lore.kernel.org/r/20190902122016.GL15931@infradead.org/

Reported-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Link: https://lore.kernel.org/r/20190904020912.63925-16-gaoxiang25@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-