- 24 Oct 2021, 1 commit
-
-
By Andreas Gruenbacher

Add a done_before argument to iomap_dio_rw that indicates how much of the request has already been transferred. When the request succeeds, we report that done_before additional bytes were transferred. This is useful for finishing a request asynchronously when part of the request has already been completed synchronously. We'll use that to allow iomap_dio_rw to be used with page faults disabled: when a page fault occurs while submitting a request, we synchronously complete the part of the request that has already been submitted. The caller can then take care of the page fault and call iomap_dio_rw again for the rest of the request, passing in the number of bytes already transferred.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
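The retry pattern this enables can be sketched in plain C. Everything below (`struct request`, `dio_rw_sketch`, the fault simulation) is a hypothetical stand-in for the real iomap and caller code paths, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of one iomap_dio_rw attempt: a "page fault" may stop
 * the transfer partway through, and done_before bytes from earlier attempts
 * are folded into the reported total, as the commit message describes. */
struct request {
	size_t total;    /* bytes the caller asked for */
	size_t fault_at; /* simulate a fault after this many new bytes */
};

static size_t dio_rw_sketch(const struct request *req, size_t done_before)
{
	size_t remaining = req->total - done_before;
	size_t now = remaining;

	/* only the first attempt hits the simulated fault */
	if (done_before == 0 && req->fault_at < remaining)
		now = req->fault_at;

	return done_before + now; /* the report includes prior progress */
}
```

A caller would handle the fault after the first attempt, then retry with the bytes already transferred passed back in as done_before, so the final return value covers the whole request.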
-
- 23 Sep 2021, 2 commits
-
-
By Yue Hu

Currently, the whole index range will only be compacted as 4B if compacted_4b_initial > totalidx, so the calculated compacted_2b is worthless in that case and may waste CPU resources. There is no need to update compacted_4b_initial as mkfs does, since it is used to fulfill the alignment of the first compacted_2b pack and already handles the case above. Also clarify compacted_4b_end here: it is used for the last lclusters which don't fit in the previous compacted_2b packs. Some messages are from Xiang.

Link: https://lore.kernel.org/r/20210914035915.1190-1-zbestahu@gmail.com
Signed-off-by: Yue Hu <huyue2@yulong.com>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
[ Gao Xiang: it's enough to use "compacted_4b_initial < totalidx". ]
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Gao Xiang

An unsupported chunk format should be checked with "if (vi->chunkformat & ~EROFS_CHUNK_FORMAT_ALL)". Found when testing with a 4k-byte blockmap (although currently mkfs uses the inode chunk indexes format by default).

Link: https://lore.kernel.org/r/20210922095141.233938-1-hsiangkao@linux.alibaba.com
Fixes: c5aa903a ("erofs: support reading chunk-based uncompressed files")
Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
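The mask test generalizes to any feature-bits field: any bit outside the known set marks the value as unsupported. A minimal sketch (the bit values below are placeholders, not the actual EROFS on-disk definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative known-bits mask; everything outside it is an unknown,
 * unsupported chunk-format bit. Values are placeholders. */
#define CHUNK_FORMAT_BLKBITS_MASK 0x001fu
#define CHUNK_FORMAT_INDEXES      0x0020u
#define CHUNK_FORMAT_ALL \
	(CHUNK_FORMAT_BLKBITS_MASK | CHUNK_FORMAT_INDEXES)

static int chunkformat_supported(uint16_t chunkformat)
{
	/* reject if any bit outside the known set is set */
	return !(chunkformat & ~CHUNK_FORMAT_ALL);
}
```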
-
- 25 Aug 2021, 1 commit
-
-
By Gao Xiang

Dan reported a new smatch warning [1]: "fs/erofs/inode.c:210 erofs_read_inode() error: double free of 'copied'". Due to the new chunk-based format handling logic, the error path can be reached after kfree(copied). Set "copied = NULL" after kfree(copied) to fix this.

[1] https://lore.kernel.org/r/202108251030.bELQozR7-lkp@intel.com

Link: https://lore.kernel.org/r/20210825120757.11034-1-hsiangkao@linux.alibaba.com
Fixes: c5aa903a ("erofs: support reading chunk-based uncompressed files")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
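The same defensive pattern works in userspace C, since free(NULL), like kfree(NULL), is defined as a no-op. A small sketch:

```c
#include <assert.h>
#include <stdlib.h>

/* Reset the pointer right after freeing so that a shared error path
 * which frees again becomes harmless instead of a double free. */
static void release(char **buf)
{
	free(*buf); /* free(NULL) is a no-op */
	*buf = NULL;
}
```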
-
- 20 Aug 2021, 2 commits
-
-
By Gao Xiang

Add runtime support for chunk-based uncompressed files described in the previous patch.

Link: https://lore.kernel.org/r/20210820100019.208490-2-hsiangkao@linux.alibaba.com
Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Gao Xiang

Currently, uncompressed data except for tail-packing inline is consecutive on disk. In order to support chunk-based data deduplication, add a new corresponding inode data layout. In the future, the data source of chunks can be either (un)compressed.

Link: https://lore.kernel.org/r/20210820100019.208490-1-hsiangkao@linux.alibaba.com
Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
- 19 Aug 2021, 3 commits
-
-
By Miklos Szeredi

Add an rcu argument to the ->get_acl() callback to allow get_cached_acl_rcu() to call the ->get_acl() method in the next patch.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
-
By Gao Xiang

This adds fiemap support for both uncompressed and compressed files by using the iomap infrastructure.

Link: https://lore.kernel.org/r/20210813052931.203280-3-hsiangkao@linux.alibaba.com
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Gao Xiang

Previously, there was no need to get the full decompressed length since EROFS supports partial decompression. However, for some other cases such as fiemap, the full decompressed length is necessary for iomap to work properly. This patch adds a way to get the full decompressed length. Note that it takes more metadata overhead and should be avoided if possible in performance-sensitive scenarios.

Link: https://lore.kernel.org/r/20210818152231.243691-1-hsiangkao@linux.alibaba.com
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
- 11 Aug 2021, 2 commits
-
-
By Yue Hu

The mapping is not used at all; remove it and update the related code.

Link: https://lore.kernel.org/r/20210810072416.1392-1-zbestahu@gmail.com
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Yue Hu

We already have the wrapper function to identify a managed page.

Link: https://lore.kernel.org/r/20210810065450.1320-1-zbestahu@gmail.com
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
- 10 Aug 2021, 3 commits
-
-
By Gao Xiang

Since tail-packing inline is now supported by iomap, let's convert all EROFS uncompressed data I/O to iomap, which is pretty straightforward.

Link: https://lore.kernel.org/r/20210805003601.183063-4-hsiangkao@linux.alibaba.com
Cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Gao Xiang

DAX is quite useful for some VM use cases in order to save guest memory significantly with a minimal, lightweight EROFS. In preparation for such use cases, add preliminary DAX support for non-tailpacking regular files for now. Tested with DRAM-emulated PMEM and the EROFS image generated by "mkfs.erofs -Enoinline_data enwik9.fsdax.img enwik9".

Link: https://lore.kernel.org/r/20210805003601.183063-3-hsiangkao@linux.alibaba.com
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Huang Jianan

Add iomap support for non-tailpacking uncompressed data in order to support DIO and DAX. Direct I/O is useful in certain scenarios for uncompressed files. For example, double pagecache can be avoided by direct I/O when a loop device is used for uncompressed files containing an upper-layer compressed filesystem. This adds iomap DIO support for non-tailpacking cases first; tail-packing inline files are handled in a follow-up patch.

Link: https://lore.kernel.org/r/20210805003601.183063-2-hsiangkao@linux.alibaba.com
Cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
- 08 Jun 2021, 3 commits
-
-
By Gao Xiang

- Remove my outdated, misleading email address;
- Get rid of all unnecessary trailing newlines added by accident.

Link: https://lore.kernel.org/r/20210602160634.10757-1-xiang@kernel.org
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
-
By Yue Hu

The variable `occupied' has no effect in z_erofs_attach_page(), which is the only caller of z_erofs_pagevec_enqueue().

Link: https://lore.kernel.org/r/20210419102623.2015-1-zbestahu@gmail.com
Signed-off-by: Yue Hu <huyue2@yulong.com>
Reviewed-by: Gao Xiang <xiang@kernel.org>
Signed-off-by: Gao Xiang <xiang@kernel.org>
-
By Wei Yongjun

'ret' will be overwritten to 0 if erofs_sb_has_sb_chksum() returns true, so 0 will be returned in some error handling cases. Fix it to return the negative error code -EINVAL instead of 0.

Link: https://lore.kernel.org/r/20210519141657.3062715-1-weiyongjun1@huawei.com
Fixes: b858a484 ("erofs: support superblock checksum")
Cc: stable <stable@vger.kernel.org> # 5.5+
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Gao Xiang <xiang@kernel.org>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <xiang@kernel.org>
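This bug class can be reduced to a few lines of C; the helpers below are illustrative stand-ins, not the erofs code itself:

```c
#include <assert.h>
#include <errno.h>

/* Minimal reduction of the bug: a boolean helper result is assigned into
 * `ret`, clobbering the pending -EINVAL, so the caller sees success even
 * though an earlier check failed. */
static int buggy(int check_failed, int has_feature)
{
	int ret = check_failed ? -EINVAL : 0;

	ret = has_feature; /* oops: the error code is overwritten */
	return ret;
}

static int fixed(int check_failed, int has_feature)
{
	if (check_failed)
		return -EINVAL; /* propagate the error before reusing ret */
	return has_feature;
}
```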
-
- 13 May 2021, 1 commit
-
-
By Gao Xiang

If the 1st NONHEAD lcluster of a pcluster is not of CBLKCNT lcluster type but a HEAD or PLAIN type instead, its pclustersize _must_ be 1 lcluster (since its uncompressed size is < 2 lclusters), as illustrated below:

       HEAD     HEAD / PLAIN    lcluster type
   ____________ ____________
  |_:__________|_________:__|   file data (uncompressed)
   .          .
  .____________.
  |____________|                pcluster data (compressed)

Such an on-disk case was explained before [1] but was missed in the runtime implementation. It can be observed by manually generating a 1 lcluster-sized pcluster with 2 lclusters (thus CBLKCNT doesn't exist). Let's fix it now.

[1] https://lore.kernel.org/r/20210407043927.10623-1-xiang@kernel.org

Link: https://lore.kernel.org/r/20210510064715.29123-1-xiang@kernel.org
Fixes: cec6e93b ("erofs: support parsing big pcluster compress indexes")
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <xiang@kernel.org>
-
- 10 Apr 2021, 9 commits
-
-
By Gao Xiang

Enable COMPR_CFGS and BIG_PCLUSTER since the implementations are all properly settled.

Link: https://lore.kernel.org/r/20210407043927.10623-11-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Prior to big pcluster, there was only one compressed page, so it was easy to map. However, when big pcluster is enabled, more work needs to be done to handle multiple compressed pages. In detail,

- (maptype 0) if there is only one compressed page + no need to copy inplace I/O, just map it directly as we did before;
- (maptype 1) if there are more compressed pages + no need to copy inplace I/O, vmap such compressed pages instead;
- (maptype 2) if inplace I/O needs to be copied, use per-CPU buffers for decompression then.

Another thing is how to detect whether inplace decompression is feasible or not (it's still quite easy for non-big pclusters): apart from the inplace margin calculation, the inplace I/O page reuse order also needs to be considered for each compressed page. Currently, if the compressed page is the xth page, it shouldn't be reused as [0 ... nrpages_out - nrpages_in + x], otherwise a full copy will be triggered. Although there are some extra optimization ideas for this, I'd like to make big pcluster work correctly first, and obviously it can be further optimized later since it has nothing to do with the on-disk format at all.

Link: https://lore.kernel.org/r/20210407043927.10623-10-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
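The reuse-order rule stated above can be captured as a small predicate; `inplace_reuse_ok` is a hypothetical helper for illustration, not the kernel's actual check:

```c
#include <assert.h>

/* Per the rule above: the xth compressed page must not be reused as any
 * output page indexed in [0 ... nrpages_out - nrpages_in + x]; reusing
 * it that early would clobber data the decompressor still needs, so a
 * full copy would have to be triggered instead. */
static int inplace_reuse_ok(int x, int out_index,
			    int nrpages_in, int nrpages_out)
{
	return out_index > nrpages_out - nrpages_in + x;
}
```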
-
By Gao Xiang

Different from non-compact indexes, several lclusters are packed as a compact form at once, and a unique base blkaddr is stored for each pack, so each lcluster index takes less space on average (e.g. 2 bytes for COMPACT_2B). Btw, that is also why the BIG_PCLUSTER switch should be consistent for compact head0/1.

Prior to big pcluster, the size of all pclusters was 1 lcluster. Therefore, when a new HEAD lcluster was scanned, blkaddr would be bumped by 1 lcluster. However, that no longer works for big pcluster since we don't actually know the compressed size of pclusters in advance (before reading the CBLKCNT lcluster). So, instead, let the blkaddr of each pack be the blkaddr of the first pcluster with a valid CBLKCNT. In detail,

1) if CBLKCNT starts at the pack, the first valid pcluster is itself, e.g.
   _____________________________________________________________
  |_CBLKCNT0_|_NONHEAD_| .. |_HEAD_|_CBLKCNT1_| ... |_HEAD_| ...
  ^ = blkaddr base          ^ += CBLKCNT0           ^ += CBLKCNT1

2) if CBLKCNT doesn't start at the pack, the first valid pcluster is the next pcluster, e.g.
   _________________________________________________________
  | NONHEAD_| .. |_HEAD_|_CBLKCNT0_| ... |_HEAD_|_HEAD_| ...
  ^ = blkaddr base      ^ += CBLKCNT0           ^ += 1

When a CBLKCNT is found, blkaddr is increased by CBLKCNT lclusters; if a new HEAD is found immediately instead, blkaddr is bumped by 1 (see the picture above). Also note that if CBLKCNT is the end of the pack, instead of storing delta1 (the distance to the next HEAD lcluster) as normal NONHEADs do, it still uses the compressed block count (delta0), since delta1 can be calculated indirectly but the block count can't. Adjust the decoding logic to fit big pcluster compact indexes as well.

Link: https://lore.kernel.org/r/20210407043927.10623-9-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

When the INCOMPAT_BIG_PCLUSTER sb feature is enabled, legacy compress indexes will also have the same on-disk header as compact indexes to keep per-file configurations instead of leaving it zeroed. If ADVISE_BIG_PCLUSTER is set for a file, CBLKCNT will be loaded for each pcluster in this file by parsing the 1st non-head lcluster.

Link: https://lore.kernel.org/r/20210407043927.10623-8-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Adjust per-CPU buffers on demand since the big pcluster definition is now available. Also, bail out on unsupported pcluster sizes according to Z_EROFS_PCLUSTER_MAX_SIZE.

Link: https://lore.kernel.org/r/20210407043927.10623-7-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Big pcluster indicates that the size of compressed data for each physical pcluster is no longer fixed at block size, but can be more than 1 block (more accurately, 1 logical pcluster). When the big pcluster feature is enabled for head0/1, delta0 of the 1st non-head lcluster index will keep the block count of this pcluster in lcluster size instead of 1. Otherwise, the compressed size of the pcluster is 1 lcluster if the pcluster has no non-head lcluster index. Also note that the BIG_PCLUSTER feature reuses COMPR_CFGS since it depends on COMPR_CFGS and will be released together.

Link: https://lore.kernel.org/r/20210407043927.10623-6-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

When picking up inplace I/O pages, they should be traversed in reverse order, in line with the traversal order of file-backed online pages. Also, the index should be updated together when preloading compressed pages. Previously, only page-sized pclustersize was supported, so there was no problem at all. Also rename `compressedpages' to `icpage_ptr' to reflect its functionality.

Link: https://lore.kernel.org/r/20210407043927.10623-5-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Since multiple pcluster sizes can be used at once, the number of compressed pages becomes a variable factor. It's necessary to introduce slab pools rather than a single slab cache now. This limits the pclustersize to 1M (Z_EROFS_PCLUSTER_MAX_SIZE) and gets rid of the obsolete EROFS_FS_CLUSTER_PAGE_LIMIT, which has no use now.

Link: https://lore.kernel.org/r/20210407043927.10623-4-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

To deal with the cases in which inplace decompression is infeasible for some inplace I/O, per-CPU buffers were introduced to get rid of page allocation latency and thrashing for low-latency decompression algorithms such as lz4. For the big pcluster feature, introduce multipage per-CPU buffers to keep such inplace I/O pclusters temporarily as well, but note that per-CPU pages are only consecutive virtually. When a new big pcluster fs is mounted, its max pclustersize will be read and per-CPU buffers can be grown if needed. Shrinking adjustable per-CPU buffers is more complex (because we don't know if such a size is still in use), so currently just release them all when unloading.

Link: https://lore.kernel.org/r/20210409190630.19569-1-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 07 Apr 2021, 1 commit
-
-
By Gao Xiang

The formal big pcluster design is actually more powerful / flexible than the previous idea, in which pclustersize was fixed as power-of-2 blocks, which was obviously inefficient and space-wasting. Instead, pclustersize can now be set independently for each pcluster, so various pcluster sizes can also be used together in one file if mkfs wants (for example, according to data type and/or compression ratio).

Let's get rid of the previous physical_clusterbits[] setting (also notice that the corresponding on-disk fields are still 0 for now). Therefore, head1/2 can be used for at most 2 different algorithms in one file, and again, pclustersize is now independent of these.

Link: https://lore.kernel.org/r/20210407043927.10623-2-xiang@kernel.org
Acked-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 03 Apr 2021, 1 commit
-
-
By Ruiqi Gong

zmap.c: s/correspoinding/corresponding
zdata.c: s/endding/ending

Link: https://lore.kernel.org/r/20210331093920.31923-1-gongruiqi1@huawei.com
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Ruiqi Gong <gongruiqi1@huawei.com>
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 29 Mar 2021, 10 commits
-
-
By Gao Xiang

Add a bitmap for available compression algorithms and a variable-sized on-disk table for compression options, which follows the end of the super block, in preparation for the upcoming big pcluster and LZMA algorithm. To parse the compression options, the bitmap is scanned bit by bit. For each available algorithm, there is data accompanied by a 2-byte `length' field (which is enough for most cases; otherwise, entire fs blocks should be used). With such an available-algorithm bitmap, the kernel itself can also refuse to mount a filesystem if any unsupported compression algorithm exists. Note that the COMPR_CFGS feature will be enabled together with BIG_PCLUSTER.

Link: https://lore.kernel.org/r/20210329100012.12980-1-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Introduce z_erofs_lz4_cfgs to store all lz4 configurations. Currently it's only max_distance, but it will be used for new features later.

Link: https://lore.kernel.org/r/20210329012308.28743-4-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Huang Jianan

lz4 uses LZ4_DISTANCE_MAX to record history preservation. When using rolling decompression, a block with a higher compression ratio will cause a larger memory allocation (up to 64k). It may cause a large resource burden in extreme cases on devices with small memory and a large number of concurrent I/Os, so appropriately reducing this value can improve performance.

Decreasing this value will reduce the compression ratio (except when input_size < LZ4_DISTANCE_MAX). But considering that erofs currently only supports 4k output, reducing this value will not significantly reduce the compression benefits. The maximum value of LZ4_DISTANCE_MAX defined by lz4 is 64k, and we can only reduce this value. An old kernel simply can't reduce the memory allocation during rolling decompression without affecting the decompression result.

Link: https://lore.kernel.org/r/20210329012308.28743-3-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
[ Gao Xiang: introduce struct erofs_sb_lz4_info for configurations. ]
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Introduce erofs_sb_has_xxx() helpers to make long feature checks short, especially for the later big pcluster & LZMA features.

Link: https://lore.kernel.org/r/20210329012308.28743-2-hsiangkao@aol.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
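The wrapper pattern can be sketched with a generator macro. The struct layout, feature names, and bit values below are placeholders for illustration, not the real EROFS definitions:

```c
#include <assert.h>

struct sb_info { unsigned int feature_incompat; };

#define FEATURE_COMPR_CFGS   0x00000001u
#define FEATURE_BIG_PCLUSTER 0x00000002u

/* One macro expansion per feature bit yields a short, readable predicate
 * instead of an open-coded mask test at every call site. */
#define SB_FEATURE_FUNC(name, flag)                       \
	static int sb_has_##name(const struct sb_info *s) \
	{ return !!(s->feature_incompat & (flag)); }

SB_FEATURE_FUNC(compr_cfgs, FEATURE_COMPR_CFGS)
SB_FEATURE_FUNC(big_pcluster, FEATURE_BIG_PCLUSTER)
```

Call sites then read `sb_has_big_pcluster(sbi)` rather than repeating the mask expression.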
-
By Gao Xiang

If any unknown i_format fields are set (which may belong to some new incompat inode features), mark such an inode as unsupported, just in case any new incompat i_format fields are added in the future.

Link: https://lore.kernel.org/r/20210329003614.6583-1-hsiangkao@aol.com
Fixes: 431339ba ("staging: erofs: add inode operations")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Yue Hu

Currently, erofs_map_blocks() is called only from erofs_{bmap, read_raw_page}, which are all for uncompressed files, so the compression branch in erofs_map_blocks() is pointless. Let's remove it and use erofs_map_blocks_flatmode() directly. Also update the related comments.

Link: https://lore.kernel.org/r/20210325071008.573-1-zbestahu@gmail.com
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Gao Xiang

Add a missing case which could cause unnecessary page allocation instead of directly using inplace I/O, which increases the runtime extra memory footprint. In detail: considering an online file-backed page, the right half of the page is chosen to be cached (e.g. the end page of a readahead request) and some of its data doesn't exist in the managed cache, so the pcluster will definitely be kept in the submission chain. (IOWs, it cannot be decompressed without I/O, e.g. due to the bypass queue.) Currently, DELAYEDALLOC/TRYALLOC cases can be downgraded as NOINPLACE and stop online pages from inplace I/O. After this patch, unneeded page allocations won't be observed in pickup_page_for_submission() anymore.

Link: https://lore.kernel.org/r/20210321183227.5182-1-hsiangkao@aol.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Huang Jianan

Sync decompression was introduced to get rid of additional kworker scheduling overhead. But there is no such overhead in non-atomic contexts, so it is better to turn off sync decompression there to avoid the current thread waiting in z_erofs_runqueue.

Link: https://lore.kernel.org/r/20210317035448.13921-3-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Huang Jianan

z_erofs_decompressqueue_endio may not be executed in atomic context, for example, when dm-verity is turned on. In this scenario, data can be decompressed directly to get rid of the additional kworker scheduling overhead.

Link: https://lore.kernel.org/r/20210317035448.13921-2-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
By Huang Jianan

Currently, err would be treated as an I/O error. Therefore, it'd be better to ensure memory allocation succeeds during rolling decompression to avoid such an I/O error. In the long term, we might consider adding another !Uptodate case for this.

Link: https://lore.kernel.org/r/20210316031515.90954-1-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
-
- 11 Mar 2021, 1 commit
-
-
By Christoph Hellwig

Ever since the addition of multipage bio_vecs, BIO_MAX_PAGES has been horribly confusingly misnamed. Rename it to BIO_MAX_VECS to stop confusing users of the bio API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20210311110137.1132391-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-