- 03 Aug, 2022: 40 commits
-
Committed by Md Haris Iqbal

The structure rnbd_srv_session maintains both a list and an xarray of rnbd_srv_dev. There is no need to keep both, as either one can serve the purpose. Since one of the places where rnbd_srv_dev is looked up via rnbd_srv_session is the IO path, an xarray serves us better than a list traversal. Hence remove sess_dev_list from rnbd_srv_session and replace its uses with the xarray.

Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Link: https://lore.kernel.org/r/20220707143122.460362-3-haris.iqbal@ionos.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
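For illustration, a minimal sketch of the lookup this enables on the IO path (the xarray field and helper names here are assumptions, not the exact driver code):

    #include <linux/xarray.h>

    /* Constant-time, RCU-safe lookup by device id, replacing an O(n)
     * list_for_each_entry() walk of sess_dev_list. */
    static struct rnbd_srv_dev *rnbd_lookup_dev(struct xarray *devs_xa,
                                                u32 device_id)
    {
        return xa_load(devs_xa, device_id);
    }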
-
Committed by Md Haris Iqbal

After setting keep_id, if the mutex trylock fails, keep_id stays set for the rest of the sess_dev lifetime. Therefore, set keep_id to true only after mutex_trylock() succeeds, so that a trylock failure doesn't touch keep_id.

Fixes: b168e1d8 ("block/rnbd-srv: Prevent a deadlock generated by accessing sysfs in parallel")
Cc: gi-oh.kim@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Link: https://lore.kernel.org/r/20220707143122.460362-2-haris.iqbal@ionos.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
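The fix amounts to reordering two statements; schematically (simplified, not the literal driver hunk):

    /* Before (buggy): keep_id is set even when the trylock fails. */
    sess_dev->keep_id = true;
    if (!mutex_trylock(&sess->lock))
        return;

    /* After (fixed): keep_id is only set once the lock is held. */
    if (!mutex_trylock(&sess->lock))
        return;
    sess_dev->keep_id = true;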
-
Committed by Hannes Reinecke

Each authentication step is required to be completed within the KATO interval (or two minutes if not set). So add a workqueue function to reset the transaction ID and the expected next protocol step; this will automatically invalidate the next authentication command referring to the terminated authentication.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
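A sketch of the expiry pattern, assuming a delayed_work hung off the submission queue (the struct and field names are illustrative, not the exact nvmet code):

    #include <linux/workqueue.h>

    struct auth_sq {
        struct delayed_work auth_expired_work;
        int dhchap_tid;   /* current transaction ID */
        u8 dhchap_step;   /* expected next protocol step */
    };

    static void auth_expired_work_fn(struct work_struct *work)
    {
        struct auth_sq *sq = container_of(to_delayed_work(work),
                                          struct auth_sq, auth_expired_work);
        /* Invalidate the transaction: a late command carrying the old
         * transaction ID no longer matches and gets rejected. */
        sq->dhchap_tid = -1;
        sq->dhchap_step = 0;
    }

    static void arm_auth_expiry(struct auth_sq *sq, unsigned int kato_secs)
    {
        /* Re-arm on every step; default to two minutes if KATO is unset. */
        mod_delayed_work(system_wq, &sq->auth_expired_work,
                         (kato_secs ?: 120) * HZ);
    }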
-
Committed by Hannes Reinecke

Implement Diffie-Hellman key exchange using FFDHE groups for NVMe In-Band Authentication. This patch adds a new host configfs attribute 'dhchap_dhgroup' to select the FFDHE group to use.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006. This patch adds three additional configfs entries, 'dhchap_key', 'dhchap_ctrl_key', and 'dhchap_hash', to the 'host' configfs directory. The 'dhchap_key' and 'dhchap_ctrl_key' entries need to be in the ASCII format specified in NVMe Base Specification v2.0, section 8.13.5.8 'Secret representation'. 'dhchap_hash' defaults to 'hmac(sha256)' and can be written to in order to switch to a different HMAC algorithm.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Some fabrics commands can be sent via io queues, so add a new function nvmet_parse_fabrics_io_cmd() and rename the existing nvmet_parse_fabrics_cmd() to nvmet_parse_fabrics_admin_cmd().

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Implement Diffie-Hellman key exchange using FFDHE groups for NVMe In-Band Authentication.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006. This patch adds two new fabric options: 'dhchap_secret' to specify the pre-shared key (in ASCII representation according to NVMe 2.0, section 8.13.5.8 'Secret representation') and 'dhchap_ctrl_secret' to specify the pre-shared controller key for bi-directional authentication of both the host and the controller. Re-authentication can be triggered by writing the PSK into the new controller sysfs attribute 'dhchap_secret' or 'dhchap_ctrl_secret'.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
[axboe: fold in clang build fix]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

The 'connect' command might fail with NVME_SC_AUTH_REQUIRED, so we should be decoding this error, too.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Add new definitions for NVMe In-band authentication as defined in the NVMe Base Specification v2.0.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Hannes Reinecke

Add RFC4648-compliant base64 encoding and decoding routines, based on the base64url encoding in fs/crypto/fname.c.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
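Usage follows the usual encode/decode shape; a hedged sketch assuming the interface this patch introduces, i.e. int base64_encode(const u8 *src, int len, char *dst) and int base64_decode(const char *src, int len, u8 *dst):

    #include <linux/base64.h>

    /* Encode a binary DH-HMAC-CHAP secret for its ASCII representation.
     * base64_encode() returns the number of characters written; the
     * caller sizes dst for ceil(len * 4 / 3) plus a NUL terminator. */
    static int encode_secret(const u8 *key, int key_len, char *dst)
    {
        int n = base64_encode(key, key_len, dst);

        dst[n] = '\0';
        return n;
    }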
-
Committed by Hannes Reinecke

Add a helper function to determine if a given key-agreement protocol primitive is supported.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
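The probe mirrors the existing crypto_has_ahash(); a sketch assuming the signature crypto_has_kpp(const char *alg_name, u32 type, u32 mask) added here (the algorithm string is illustrative):

    #include <crypto/kpp.h>

    /* Bail out early if the kernel lacks the DH primitive needed
     * for the negotiated FFDHE group. */
    if (!crypto_has_kpp("ffdhe2048(dh)", 0, 0))
        return -EOPNOTSUPP;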
-
Committed by Hannes Reinecke

Add a helper function to determine if a given synchronous hash is supported.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
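Analogous to the kpp probe above; a sketch under the same hedge (assumed signature crypto_has_shash(const char *alg_name, u32 type, u32 mask)):

    #include <crypto/hash.h>

    /* Reject a negotiated HMAC that this kernel cannot provide. */
    if (!crypto_has_shash("hmac(sha256)", 0, 0))
        return -EOPNOTSUPP;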
-
Committed by Sagi Grimberg

A helper now exists; no need to open-code the same functionality.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Chaitanya Kulkarni

The only caller of __nvme_submit_sync_cmd() with a qid value not equal to NVME_QID_ANY is nvmf_connect_io_queues(), where the qid value is always set to > 0. [1]

[1] __nvme_submit_sync_cmd() callers with a qid parameter:

    Caller                          | qid parameter
    ------------------------------------------------------
    nvme_fc_connect_io_queues()
      nvmf_connect_io_queue()       | qid > 0
    nvme_rdma_start_io_queues()
      nvme_rdma_start_queue()
        nvmf_connect_io_queues()    | qid > 0
    nvme_tcp_start_io_queues()
      nvme_tcp_start_queue()
        nvmf_connect_io_queues()    | qid > 0
    nvme_loop_connect_io_queues()
      nvmf_connect_io_queues()      | qid > 0

When the qid parameter of __nvme_submit_sync_cmd() is > 0 from the above callers, we use blk_mq_alloc_request_hctx(), whose last argument is forced to 0 by a conditional operator whenever qid is 0, see line 1002:

     991 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
     992                 union nvme_result *result, void *buffer, unsigned bufflen,
     993                 int qid, int at_head, blk_mq_req_flags_t flags)
     994 {
     995         struct request *req;
     996         int ret;
     997
     998         if (qid == NVME_QID_ANY)
     999                 req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
    1000         else
    1001                 req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
    1002                                 qid ? qid - 1 : 0);

But the qid parameter of __nvme_submit_sync_cmd() is never 0 from the caller list above (see [1]), and all the other callers of __nvme_submit_sync_cmd() use NVME_QID_ANY as the qid value:

    1. nvme_submit_sync_cmd()
    2. nvme_features()
    3. nvme_sec_submit()
    4. nvmf_reg_read32()
    5. nvmf_reg_read64()
    6. nvmf_reg_write32()
    7. nvmf_connect_admin_queue()

Remove the conditional operator and pass qid - 1 directly in the call to blk_mq_alloc_request_hctx().

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
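For clarity, the post-patch branch then reads (a sketch of the resulting hunk):

    else
            /* qid > 0 is guaranteed by every caller that reaches this
             * branch, so `qid ? qid - 1 : 0` collapses to `qid - 1`. */
            req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
                                            qid - 1);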
-
Committed by Chaitanya Kulkarni

The function __nvme_submit_sync_cmd() has the following list of callers, all of which set the timeout value to 0:

    Callers                      | Timeout value
    ------------------------------------------------
    nvme_submit_sync_cmd()       | 0
    nvme_features()              | 0
    nvme_sec_submit()            | 0
    nvmf_reg_read32()            | 0
    nvmf_reg_read64()            | 0
    nvmf_reg_write32()           | 0
    nvmf_connect_admin_queue()   | 0
    nvmf_connect_io_queue()      | 0

Remove the timeout function parameter from __nvme_submit_sync_cmd() and adjust the rest of the code accordingly.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Michael Kelley

In the NVM Express Revision 1.4 spec, Figure 145 describes possible values for an AER with event type "Error" (value 000b). For a Persistent Internal Error (value 03h), the host should perform a controller reset. Add support for this error using the code that already exists for doing a controller reset. As part of this support, introduce two utility functions for parsing the AER type and subtype. This new support was tested in a lab environment where we can generate the persistent internal error on demand, and observe both the Linux side and the NVMe controller side to verify that the controller reset is performed.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
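A sketch of the two parsing helpers; the masks follow the AER completion result dword layout (bits 2:0 = event type, bits 15:8 = event information), while the helper names themselves are illustrative:

    static inline u32 nvme_aer_type(u32 result)
    {
        return result & 0x7;
    }

    static inline u32 nvme_aer_subtype(u32 result)
    {
        return (result & 0xff00) >> 8;
    }

    /* Event type 0h (Error) with subtype 03h (Persistent Internal
     * Error) then routes into the existing controller-reset path. */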
-
Committed by Xiang wangx

Delete the redundant word 'be'.

Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

No need to check the return value; make it return void.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-9-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

Let's change the parameter type to 'sector_t'; then we don't need to cast it in rnbd_clt_resize_dev_store(), and update rnbd_clt_resize_disk() too.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-8-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

Currently, process_msg_open_rsp() checks whether the capacity changed before calling rnbd_clt_change_capacity(). Since the check also makes sense for rnbd_clt_resize_dev_store(), let's move it into the function.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-7-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

While at it, let's re-arrange the struct to remove holes. Before, pahole reports:

    /* size: 232, cachelines: 4, members: 17 */
    /* sum members: 224, holes: 2, sum holes: 8 */
    /* last cacheline: 40 bytes */

After the change, the report becomes:

    /* size: 224, cachelines: 4, members: 17 */
    /* last cacheline: 32 bytes */

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-6-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

Previously, both map and remap trigger rnbd_clt_set_dev_attr() to set some members in rnbd_clt_dev, such as wc, fua and logical_block_size, but those members are only useful in the map scenario, given that setup_request_queue() is only called from the path rnbd_clt_map_device() -> rnbd_client_setup_device(). Since rnbd_clt_map_device() frees rsp after rnbd_client_setup_device(), we can pass rsp to rnbd_client_setup_device() and its callees, which means the queue's attributes can be set directly from the relevant members of rsp instead of from rnbd_clt_dev. After that, we can remove 11 members from rnbd_clt_dev, and we don't need rnbd_clt_set_dev_attr() either.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-5-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

The member is not needed, since we can call get_disk_ro() to achieve the same goal.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-4-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
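The replacement is a one-liner; sketched here with a hypothetical gendisk pointer dev->gd:

    #include <linux/blkdev.h>

    /* Query the gendisk directly instead of caching a read_only
     * flag in struct rnbd_clt_dev. */
    if (get_disk_ro(dev->gd))
        return -EPERM;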
-
Committed by Guoqing Jiang

For the map scenario, rsp is freed in two places: 1. msg_open_conf() frees rsp if rtrs_clt_request() returns 0. 2. Otherwise, rsp is freed by the call sites of rtrs_clt_request(). Now we'd like to control the full lifecycle of rsp in rnbd_clt_map_device(); with that, it is feasible to pass rsp to rnbd_client_setup_device() in the next commit. For case 1, it is possible to free rsp from the caller of send_usr_msg() because of the synchronization on iu->comp.wait. And we put iu later in rnbd_clt_map_device() to ensure the order of releasing rsp and iu.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-3-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Guoqing Jiang

Let's open code it in rnbd_clt_map_device(); then we can use information from rsp to set up the gendisk and request_queue in the next commits. After that, we can remove some members (wc, fua and max_hw_sectors, etc.) from struct rnbd_clt_dev.

Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Link: https://lore.kernel.org/r/20220706133152.12058-2-guoqing.jiang@linux.dev
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Christophe JAILLET

Use bitmap_zalloc()/bitmap_free() instead of hand-writing them. It is less verbose and improves the semantics.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/7c4d3116ba843fc4a8ae557dd6176352a6cd0985.1656864320.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Jens Axboe <axboe@kernel.dk>
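For reference, the idiom this converts to (a generic sketch, not the exact driver hunk):

    #include <linux/bitmap.h>

    /* Before: kcalloc(BITS_TO_LONGS(nbits), sizeof(long), GFP_KERNEL)
     * paired with a bare kfree(). After: one call that states intent. */
    unsigned long *map = bitmap_zalloc(nbits, GFP_KERNEL);
    if (!map)
        return -ENOMEM;
    /* ... use the bitmap ... */
    bitmap_free(map);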
-
Committed by Zhang Jiaming

There are two spelling mistakes in comments. Fix them.

Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

The block layer defaults the maximum number of segments to 128, which means requests tend to get split at around 512KB, depending on how many pages can be merged. There's no such restriction in the raid5 code, so increase the limit to USHRT_MAX so that larger requests can be sent as one.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
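A sketch of the tweak, assuming the standard queue-limit setter (the exact raid5 call site may differ):

    #include <linux/blkdev.h>

    /* raid5 imposes no per-request segment restriction of its own,
     * so lift the block layer's default of 128 segments. */
    blk_queue_max_segments(mddev->queue, USHRT_MAX);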
-
Committed by Logan Gunthorpe

Add a debug print for raid5_make_request() so that each request is printed, and add the logical sector number to the debug print in __add_stripe_bio().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

raid5_make_request() loops through every page in the request, finds the appropriate stripe and adds the bio for that page to the disk. This causes a great deal of contention on the hash_lock and extra work, since each stripe must be found once for every data disk. The number of times a stripe must be found can be reduced by pivoting raid5_make_request() so that it loops through every stripe and then loops through every disk in that stripe to see if a bio must be added. This reduces the number of times the hash lock must be taken by a factor equal to the number of data disks. To accomplish this, the logical sectors that have already been added must be tracked. Tracking them is done with a bitmap: the bits for all pages are set at the start of the request, and each bit is cleared once the bio is added to a stripe. Finding the next sector to be done is then just a call to find_first_bit(), so that sectors that have been done can simply be skipped. One minor downside is that the maximum sectors for a request must be limited so that the bitmap can be appropriately sized on the stack. This limit is arbitrarily chosen to be 256 stripe pages, which works out to 1MB if PAGE_SIZE == DEFAULT_STRIPE_SIZE. This doesn't actually restrict the maximum request further, since the default block queue settings are used, which restrict the number of segments to 128 (resulting in request sizes of approximately 512KB).

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
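A schematic of the bitmap-driven pivot (identifiers hypothetical, heavily simplified from the real code):

    #include <linux/bitmap.h>

    #define MAX_REQ_STRIPES 256  /* caps the on-stack bitmap (~1MB request) */

    static void make_request_pivoted(unsigned int stripe_cnt)
    {
        DECLARE_BITMAP(pending, MAX_REQ_STRIPES);

        /* Mark every stripe page of the request as still to do. */
        bitmap_set(pending, 0, stripe_cnt);

        while (!bitmap_empty(pending, stripe_cnt)) {
            unsigned int s = find_first_bit(pending, stripe_cnt);

            /* Look the stripe_head up once, add the bio for every
             * data disk it covers, then mark this sector done. */
            clear_bit(s, pending);
        }
    }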
-
Committed by Logan Gunthorpe

When testing whether a previous stripe has had reshape expand past it, use the earliest or latest logical sector in all the disks for that stripe head. This will allow adding multiple disks at a time in a subsequent patch. To do this more cleanly, refactor the check into a helper function called stripe_ahead_of_reshape().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

Factor out two helper functions from add_stripe_bio(): one to check for overlap (stripe_bio_overlaps()), and one to actually add the bio to the stripe (__add_stripe_bio()). The latter function will always succeed. This will be useful in the next patch, so that overlap can be checked for multiple disks before adding any of them.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

When batching, every stripe head has to find the previous stripe head to add to the batch list. This involves taking the hash lock, which is highly contended during IO. Instead of finding the previous stripe_head each time, store a reference to the previous stripe_head in a pointer, so that the contended lock does not have to be taken another time. The reference to the previous stripe must be released before scheduling and waiting for work to get done. Otherwise, it can hold up raid5_activate_delayed() and deadlock.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

The for loop with the retry label can be more cleanly expressed as a while loop by moving the logical_sector increment into the success path. No functional changes intended.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

Now that prepare_to_wait() isn't in the way, move read_seqcount_begin() into make_stripe_request(). No functional changes intended.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

prepare_to_wait() can reasonably be called after schedule() instead of setting a flag and preparing in the next loop iteration. This means that prepare_to_wait() will be called before read_seqcount_begin(), but there shouldn't be any reason that the order matters here. On the first iteration of the loop, prepare_to_wait() is already called first.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

Factor out the inner loop of raid5_make_request() into its own helper called make_stripe_request(). The helper returns a number of statuses: SUCCESS, RETRY, SCHEDULE_AND_RETRY and FAIL. This makes the code a bit easier to understand and allows the SCHEDULE_AND_RETRY path to be made common. A context structure is added to contain do_flush; it will be used more in subsequent patches for state that needs to be kept outside the loop. No functional changes intended. This will be cleaned up further in subsequent patches to untangle the gen_lock and do_prepare logic.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
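The status set maps naturally onto an enum plus a small context struct; a sketch of the shape (actual identifiers in the patch may differ):

    /* Return values for make_stripe_request(); illustrative sketch. */
    enum stripe_result {
        STRIPE_SUCCESS = 0,
        STRIPE_RETRY,
        STRIPE_SCHEDULE_AND_RETRY,
        STRIPE_FAIL,
    };

    /* Per-request state kept outside the per-stripe loop. */
    struct stripe_request_ctx {
        bool do_flush;
    };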
-
Committed by Logan Gunthorpe

Both uses of find_stripe() require a fairly complicated dance to increment the reference count. Move this into a common find_get_stripe() helper. No functional changes intended.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Committed by Logan Gunthorpe

stripe_add_to_batch_list() is better done in the loop in make_request() instead of inside add_stripe_bio(). This is clearer and allows for storing the batch_head state outside the loop in a subsequent patch. The call to add_stripe_bio() in retry_aligned_read() is for reads, and batching only applies to writes, so it's impossible for batching to happen at that call site. No functional changes intended.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-