1. 01 Jun, 2023  1 commit
  2. 08 Mar, 2023  1 commit
  3. 03 Nov, 2022  2 commits
  4. 31 May, 2022  1 commit
  5. 17 Feb, 2022  1 commit
  6. 10 Dec, 2021  1 commit
  7. 06 Dec, 2021  1 commit
  8. 19 Oct, 2021  2 commits
  9. 06 Jul, 2021  1 commit
  10. 06 Oct, 2020  3 commits
  11. 02 Sep, 2020  3 commits
  12. 17 Jul, 2020  1 commit
    • block: improve discard bio alignment in __blkdev_issue_discard() · 9b15d109
      Authored by Coly Li
      This patch improves discard bio splitting for address and size alignment in
      __blkdev_issue_discard(). Aligned discard bios may help the underlying
      device controller perform better discard and internal garbage
      collection, and avoid unnecessary internal fragmentation.
      
      The current discard bio split algorithm in __blkdev_issue_discard() may
      leave non-discarded fragments on the device even when the discard bio's
      LBA and size are both aligned to the device's discard granularity.
      
      Here are the steps to reproduce the problem:
      - On a VMWare ESXi 6.5 update3 installation, create a 51GB virtual disk
        with thin mode and give it to a Linux virtual machine.
      - Inside the Linux virtual machine, if the 50GB virtual disk shows up as
        /dev/sdb, fill the first 50GB with data:
              # dd if=/dev/zero of=/dev/sdb bs=4096 count=13107200
      - Discard the 50GB range from offset 0 on /dev/sdb:
              # blkdiscard /dev/sdb -o 0 -l 53687091200
      - Observe the underlying mapping status of the device
              # sg_get_lba_status /dev/sdb -m 1048 --lba=0
        descriptor LBA: 0x0000000000000000  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000000000800  blocks: 16773120  deallocated
        descriptor LBA: 0x0000000000fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000001000000  blocks: 8386560  deallocated
        descriptor LBA: 0x00000000017ff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000001800000  blocks: 8386560  deallocated
        descriptor LBA: 0x0000000001fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000002000000  blocks: 8386560  deallocated
        descriptor LBA: 0x00000000027ff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000002800000  blocks: 8386560  deallocated
        descriptor LBA: 0x0000000002fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000003000000  blocks: 8386560  deallocated
        descriptor LBA: 0x00000000037ff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000003800000  blocks: 8386560  deallocated
        descriptor LBA: 0x0000000003fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000004000000  blocks: 8386560  deallocated
        descriptor LBA: 0x00000000047ff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000004800000  blocks: 8386560  deallocated
        descriptor LBA: 0x0000000004fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000005000000  blocks: 8386560  deallocated
        descriptor LBA: 0x00000000057ff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000005800000  blocks: 8386560  deallocated
        descriptor LBA: 0x0000000005fff800  blocks: 2048  mapped (or unknown)
        descriptor LBA: 0x0000000006000000  blocks: 6291456  deallocated
        descriptor LBA: 0x0000000006600000  blocks: 0  deallocated
      
      Although the discard bio starts at LBA 0 and has a size of 50<<30 bytes,
      which is perfectly aligned to the discard granularity, the above list
      shows that many 1MB (2048-sector) internal fragments remain unexpectedly.
      
      The problem is in __blkdev_issue_discard(): the split algorithm produces
      bio sizes that are not aligned to the discard granularity.
      
       25 int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
       26                 sector_t nr_sects, gfp_t gfp_mask, int flags,
       27                 struct bio **biop)
       28 {
       29         struct request_queue *q = bdev_get_queue(bdev);
         [snipped]
       56
       57         while (nr_sects) {
       58                 sector_t req_sects = min_t(sector_t, nr_sects,
       59                                 bio_allowed_max_sectors(q));
       60
       61                 WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
       62
       63                 bio = blk_next_bio(bio, 0, gfp_mask);
       64                 bio->bi_iter.bi_sector = sector;
       65                 bio_set_dev(bio, bdev);
       66                 bio_set_op_attrs(bio, op, 0);
       67
       68                 bio->bi_iter.bi_size = req_sects << 9;
       69                 sector += req_sects;
       70                 nr_sects -= req_sects;
         [snipped]
       79         }
       80
       81         *biop = bio;
       82         return 0;
       83 }
       84 EXPORT_SYMBOL(__blkdev_issue_discard);
      
      At lines 58-59, to discard a 50GB range, req_sects is set to the return
      value of bio_allowed_max_sectors(q), which is 8388607 sectors. In the
      above case the discard granularity is 2048 sectors, so although the
      start LBA and discard length are aligned to the discard granularity,
      req_sects never has a chance to be aligned to it. This is why
      still-mapped 2048-sector fragments remain in every 4 or 8 GB range.
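
      As a quick check of the arithmetic above (assuming a 512-byte logical
      block size, as in this example; the small program below is purely
      illustrative and not part of the patch):

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
              /* bio_allowed_max_sectors(): UINT_MAX rounded down to the
               * logical block size (512 bytes here), in 512-byte sectors. */
              uint64_t req_sects = UINT32_MAX >> 9;   /* 8388607 sectors */
              uint64_t granularity = 2048;            /* discard granularity in sectors */

              printf("full granularity units per bio: %llu\n",
                     (unsigned long long)(req_sects / granularity));  /* 4095 */
              printf("unaligned tail per bio: %llu sectors\n",
                     (unsigned long long)(req_sects % granularity));  /* 2047 */
              return 0;
      }

      Each maximum-sized split therefore covers 4095 full 2048-sector units
      (about 4GB) plus a 2047-sector tail that falls short of a full
      granularity unit and cannot be deallocated, which matches the roughly
      1MB mapped fragments in the listing above.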
      
      If req_sects at line 58 is instead set to a value that is aligned to
      discard_granularity and close to UINT_MAX, then all subsequent split
      bios inside the device driver are (almost all) aligned to the
      discard_granularity of the device queue, and the 2048-sector
      still-mapped fragments disappear.
      
      This patch introduces bio_aligned_discard_max_sectors() to return the
      value which is aligned to q->limits.discard_granularity and closest to
      UINT_MAX, and replaces bio_allowed_max_sectors() with this new routine
      to choose a more suitable split bio length, as sketched below.
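
      Based on that description, the new helper can be expected to look
      roughly like the following minimal sketch (round_down() and
      SECTOR_SHIFT are existing kernel helpers; the exact placement and
      comment wording here are assumptions):

      /*
       * Sketch: the largest sector count whose byte size is a multiple of
       * q->limits.discard_granularity (which is in bytes) and still fits
       * in UINT_MAX bytes.
       */
      static inline unsigned int bio_aligned_discard_max_sectors(
                                      struct request_queue *q)
      {
              return round_down(UINT_MAX, q->limits.discard_granularity) >>
                              SECTOR_SHIFT;
      }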
      
      But we still need to handle the situation where the discard start LBA is
      not aligned to q->limits.discard_granularity; otherwise, even when the
      length is aligned, the current code may still leave a 2048-sector
      fragment around every 4GB range. Therefore, when calculating req_sects,
      the start LBA of the discard range is checked first (including the
      partition offset); if it is not aligned to the discard granularity, the
      first split location is chosen so that the following bio has bi_sector
      aligned to the discard granularity. Then there will be no still-mapped
      fragments in the middle of the discard range. A sketch of this
      calculation follows.
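
      A minimal sketch of that req_sects calculation inside the while loop
      might look like the following; get_start_sect() is used here to obtain
      the partition offset, which is an assumption about how the offset is
      fetched rather than the exact code of the patch:

              sector_t req_sects;
              /* LBA on the whole device, i.e. including the partition offset. */
              sector_t sector_mapped = sector + get_start_sect(bdev);
              sector_t granularity_aligned_lba = round_up(sector_mapped,
                              q->limits.discard_granularity >> SECTOR_SHIFT);

              if (granularity_aligned_lba == sector_mapped)
                      /* Aligned start: use the largest length that is itself
                       * a multiple of the discard granularity. */
                      req_sects = min_t(sector_t, nr_sects,
                                        bio_aligned_discard_max_sectors(q));
              else
                      /* Unaligned start: end the first split bio at the next
                       * granularity boundary so that every following bio
                       * starts at an aligned bi_sector. */
                      req_sects = min_t(sector_t, nr_sects,
                                        granularity_aligned_lba - sector_mapped);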
      
      The above is how this patch improves discard bio alignment in
      __blkdev_issue_discard(). With this patch, after discarding with the
      same command line mentioned previously, sg_get_lba_status returns:
      descriptor LBA: 0x0000000000000000  blocks: 106954752  deallocated
      descriptor LBA: 0x0000000006600000  blocks: 0  deallocated
      
      We can see there is no 2048-sector segment anymore; everything is clean.
      Reported-and-tested-by: Acshai Manoj <acshai.manoj@microfocus.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Enzo Matsumiya <ematsumiya@suse.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  13. 09 Jul, 2020  1 commit
  14. 02 Jul, 2020  1 commit
  15. 01 Jul, 2020  3 commits
  16. 29 Jun, 2020  1 commit
  17. 24 Jun, 2020  2 commits
    • block: create the request_queue debugfs_dir on registration · 85e0cbbb
      Authored by Luis Chamberlain
      We were only creating the request_queue debugfs_dir for
      make_request block drivers (multiqueue), never for
      request-based block drivers. We did this because we were only
      creating additional, non-blktrace debugfs files in that directory
      for make_request drivers. However, since blktrace *always* creates
      that directory anyway, we special-cased the use of that directory
      in blktrace. Besides being an eyesore, this exposes
      request-based block drivers to the same fragile debugfs
      race that used to exist with make_request block drivers:
      if we start adding files to that directory, we can later
      hit a race resulting in a double removal of dentries on the
      directory if we don't handle this carefully in blktrace.
      
      Instead, just simplify things by always creating the request_queue
      debugfs_dir on request_queue registration. Also rename the mutex to
      reflect the fact that it is used outside of the blktrace context.
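
      A minimal sketch of the kind of change this describes, done at queue
      registration time (the surrounding function, mutex name, and directory
      naming here are assumptions rather than the exact diff):

              /* in blk_register_queue(), sketch only */
              mutex_lock(&q->debugfs_mutex);
              q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent),
                                                  blk_debugfs_root);
              mutex_unlock(&q->debugfs_mutex);
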
      Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq: move failure injection out of blk_mq_complete_request · 15f73f5b
      Authored by Christoph Hellwig
      Move the call to blk_should_fake_timeout out of blk_mq_complete_request
      and into the drivers, skipping call sites that are obvious error
      handlers, and remove the now superfluous blk_mq_force_complete_rq helper.
      This ensures we don't keep injecting errors into completions that just
      terminate the Linux request after the hardware has been reset or the
      command has been aborted.
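
      After this change, a driver's completion path ends up with a pattern
      roughly like the sketch below (the request variable name is
      illustrative):

              /* driver completion path, sketch only */
              if (!blk_should_fake_timeout(req->q))
                      blk_mq_complete_request(req);
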
      Reviewed-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  18. 05 Jun, 2020  1 commit
  19. 30 May, 2020  1 commit
  20. 27 May, 2020  3 commits
  21. 19 May, 2020  4 commits
  22. 14 May, 2020  1 commit
    • block: Inline encryption support for blk-mq · a892c8d5
      Authored by Satya Tangirala
      We must have some way of letting a storage device driver know what
      encryption context it should use for en/decrypting a request. However,
      it's the upper layers (like the filesystem/fscrypt) that know about and
      manage encryption contexts. As such, when an upper layer submits a bio
      to the block layer, and this bio eventually reaches a device driver with
      support for inline encryption, the device driver needs to have been
      told the encryption context for that bio.
      
      We want to communicate the encryption context from the upper layer to the
      storage device along with the bio, when the bio is submitted to the block
      layer. To do this, we add a struct bio_crypt_ctx to struct bio, which can
      represent an encryption context (note that we can't use the bi_private
      field in struct bio to do this because that field does not function to pass
      information across layers in the storage stack). We also introduce various
      functions to manipulate the bio_crypt_ctx and make the bio/request merging
      logic aware of the bio_crypt_ctx.
      
      We also make changes to blk-mq to make it handle bios with encryption
      contexts. blk-mq can merge many bios into the same request. These bios need
      to have contiguous data unit numbers (the necessary changes to blk-merge
      are also made to ensure this) - as such, it suffices to keep the data unit
      number of just the first bio, since that's all a storage driver needs to
      infer the data unit number to use for each data block in each bio in a
      request. blk-mq keeps track of the encryption context to be used for all
      the bios in a request with the request's rq_crypt_ctx. When the first bio
      is added to an empty request, blk-mq will program the encryption context
      of that bio into the request_queue's keyslot manager, and store the
      returned keyslot in the request's rq_crypt_ctx. All the functions to
      operate on encryption contexts are in blk-crypto.c.
      
      Upper layers only need to call bio_crypt_set_ctx with the encryption key,
      algorithm and data_unit_num; they don't have to worry about getting a
      keyslot for each encryption context, as blk-mq/blk-crypto handles that.
      Blk-crypto also makes it possible for request-based layered devices like
      dm-rq to make use of inline encryption hardware by cloning the
      rq_crypt_ctx and programming a keyslot in the new request_queue when
      necessary.
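
      A rough sketch of how an upper layer might attach an encryption context
      to a bio before submitting it; the prototype shown here (key pointer,
      64-bit data unit number array, gfp flags) and the variable names are
      assumptions based on the description above:

              /* upper-layer submission path, sketch only; "key" is a
               * struct blk_crypto_key prepared by the upper layer and
               * "first_dun" is the data unit number of the first block. */
              u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };

              bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
              submit_bio(bio);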
      
      Note that any user of the block layer can submit bios with an
      encryption context, such as filesystems, device-mapper targets, etc.
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  23. 13 May, 2020  2 commits
  24. 25 Apr, 2020  1 commit
  25. 21 Apr, 2020  1 commit