1. 20 Jan, 2022: 1 commit
  2. 17 Dec, 2021: 1 commit
  3. 16 Dec, 2021: 1 commit
  4. 27 Oct, 2021: 1 commit
  5. 25 Oct, 2021: 1 commit
  6. 20 Oct, 2021: 1 commit
  7. 18 Oct, 2021: 8 commits
  8. 25 Sep, 2021: 1 commit
  9. 03 Sep, 2021: 1 commit
  10. 24 Aug, 2021: 4 commits
  11. 23 Aug, 2021: 1 commit
  12. 03 Aug, 2021: 2 commits
  13. 24 Jun, 2021: 1 commit
  14. 09 May, 2021: 1 commit
  15. 04 May, 2021: 1 commit
    • bio: limit bio max size · cd2c7545
      Committed by Changheun Lee
      The bio size can grow up to 4GB when multi-page bvec is enabled,
      but this sometimes leads to inefficient behavior. In the case of
      large-chunk direct I/O (a 32MB chunk read in user space), all
      pages for the 32MB are merged into one bio structure if the pages'
      physical addresses are contiguous. The submit is delayed until the
      merge completes, so the bio max size should be limited to a proper size.
      
      When a 32MB chunk read with the direct I/O option comes from
      userspace, current kernel behavior in the do_direct_IO() loop is
      as shown in the timeline below.
      
        | bio merge for 32MB. total 8,192 pages are merged.
        | total elapsed time is over 2ms.
       |------------------ ... ----------------------->|
                                                        | 8,192 pages merged into one bio.
                                                        | at this point, the first bio submit is done.
                                                        | the bio is split into 32 read requests and issued.
                                                       |--------------->
                                                        |--------------->
                                                         |--------------->
                                                                    ......
                                                                         |--------------->
                                                                          |--------------->|
                                total 19ms elapsed to complete 32MB read done from device. |
      
      If the bio max size is limited to 1MB, the behavior changes as below.
      
        | bio merge for 1MB. 256 pages are merged into each bio.
        | 32 bios in total will be made.
        | total elapsed time is still over 2ms.
        | but the first bio submit happens fast, after about 100us.
       |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
             | 256 pages merged into one bio.
             | at this point, the first bio submit is done.
             | and 1 read request is issued per bio.
            |--------------->
                 |--------------->
                      |--------------->
                                            ......
                                                       |--------------->
                                                        |--------------->|
              total 17ms elapsed to complete 32MB read done from device. |
      
      As a result, read requests are issued sooner when the bio max size
      is limited. With the current kernel's multi-page bvec behavior, a
      super-large bio can be created, which delays issuing the first I/O
      request.
      Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Link: https://lore.kernel.org/r/20210503095203.29076-1-nanich.lee@samsung.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  16. 12 Apr, 2021: 2 commits
  17. 01 Apr, 2021: 1 commit
    • block: only update parent bi_status when bio fail · 3edf5346
      Committed by Yufen Yu
      For multiple split bios, if one of the bios fails, the whole I/O
      should return an error to the application. But we found a race
      between bio_integrity_verify_fn and bio completion that returns
      I/O success to the application even after one of the bios has
      failed. The race is as follows:
      
      split bio(READ)          kworker
      
      nvme_complete_rq
      blk_update_request //split error=0
        bio_endio
          bio_integrity_endio
            queue_work(kintegrityd_wq, &bip->bip_work);
      
                               bio_integrity_verify_fn
                               bio_endio //split bio
                                __bio_chain_endio
                                   if (!parent->bi_status)
      
                                     <interrupt entry>
                                     nvme_irq
                                       blk_update_request //parent error=7
                                       req_bio_endio
                                          bio->bi_status = 7 //parent bio
                                     <interrupt exit>
      
                                     parent->bi_status = 0
                              parent->bi_end_io() // return bi_status=0
      
      The bio has been split in two: split and parent. When the split
      bio completes, it relies on a kworker to run its endio, and
      bio_integrity_verify_fn can be interrupted by the parent bio's
      completion irq handler. The parent bio->bi_status that was set in
      the irq handler is then overwritten by the kworker.
      
      In fact, even without the above race, we also need to consider
      concurrency between multiple split bio completions updating the
      same parent bi_status. Normally, multiple split bios are issued
      to the same hctx and complete from the same irq vector. But if
      the queue map is updated between the split bios, they may
      complete on different hw queues and different irq vectors, and
      the concurrent updates of the parent bi_status may corrupt the
      final status.
      Suggested-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20210331115359.1125679-1-yuyufen@huawei.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  18. 25 Mar, 2021: 1 commit
  19. 11 Mar, 2021: 1 commit
  20. 09 Feb, 2021: 1 commit
  21. 08 Feb, 2021: 8 commits