1. 06 Jun 2020, 5 commits
  2. 23 May 2020, 1 commit
  3. 21 May 2020, 8 commits
  4. 20 May 2020, 1 commit
  5. 15 May 2020, 7 commits
  6. 25 Mar 2020, 1 commit
  7. 08 Jan 2020, 1 commit
    • dm zoned: support zone sizes smaller than 128MiB · b3996295
      Dmitry Fomichev authored
      dm-zoned is observed to log failed kernel assertions and not work
      correctly when operating against a device with a zone size smaller
      than 128MiB (e.g. 32768 bits per 4K block). The reason is that the
      bitmap size per zone is calculated as zero with such a small zone
      size. Fix this problem and also make the code related to zone bitmap
      management be able to handle per zone bitmaps smaller than a single
      block.
      
      A dm-zoned-tools patch is required to properly format dm-zoned devices
      with zone sizes smaller than 128MiB.
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
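      A quick way to see the arithmetic problem described above is the
      following minimal user-space C sketch (macro and variable names are
      made up for the example, not the real dm-zoned symbols): with 4K
      metadata blocks, one block holds 32768 bitmap bits, so a 32MiB zone
      (8192 data blocks, hence 8192 bits) makes the plain shift round down
      to zero bitmap blocks; clamping to at least one block, and tracking
      how many bits of that block a zone actually uses, avoids this.

      #include <stdio.h>

      #define BLOCK_SHIFT     12                      /* 4 KiB metadata blocks */
      #define BLOCK_SIZE      (1u << BLOCK_SHIFT)
      #define BITS_PER_BLOCK  (BLOCK_SIZE << 3)       /* 32768 bits per 4K block */

      int main(void)
      {
          /* A 32 MiB zone holds 8192 data blocks, i.e. needs 8192 bitmap bits. */
          unsigned long zone_nr_blocks = (32ul << 20) >> BLOCK_SHIFT;

          /* Broken: 8192 >> 15 == 0, so no bitmap space is reserved at all. */
          unsigned long nr_bitmap_blocks = zone_nr_blocks >> (BLOCK_SHIFT + 3);

          /* Fixed: reserve at least one block and remember how many bits of
           * that block this zone actually uses. */
          unsigned long fixed_nr_bitmap_blocks =
              nr_bitmap_blocks ? nr_bitmap_blocks : 1;
          unsigned long zone_bits_per_block =
              zone_nr_blocks < BITS_PER_BLOCK ? zone_nr_blocks : BITS_PER_BLOCK;

          printf("broken=%lu fixed=%lu bits_per_block=%lu\n",
                 nr_bitmap_blocks, fixed_nr_bitmap_blocks, zone_bits_per_block);
          return 0;
      }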
  8. 13 Nov 2019, 1 commit
  9. 07 Nov 2019, 2 commits
  10. 21 Aug 2019, 1 commit
  11. 16 Aug 2019, 4 commits
  12. 17 Jul 2019, 1 commit
    • dm zoned: fix zone state management race · 3b8cafdd
      Damien Le Moal authored
      dm-zoned uses the zone flag DMZ_ACTIVE to indicate that a zone of the
      backend device is being actively read or written and so cannot be
      reclaimed. This flag is set as long as the zone atomic reference
      counter is not 0. When this atomic is decremented and reaches 0 (e.g.
      on BIO completion), the active flag is cleared and set again whenever
      the zone is reused and BIO issued with the atomic counter incremented.
      These 2 operations (atomic inc/dec and flag set/clear) are however not
      always executed atomically under the target metadata mutex lock and
      this causes the warning:
      
      WARN_ON(!test_bit(DMZ_ACTIVE, &zone->flags));
      
      in dmz_deactivate_zone() to be displayed. This problem is regularly
      triggered with xfstests generic/209, generic/300, generic/451 and
      xfs/077 with XFS being used as the file system on the dm-zoned target
      device. Similarly, xfstests ext4/303, ext4/304, generic/209 and
      generic/300 trigger the warning with ext4 use.
      
      This problem can be easily fixed by simply removing the DMZ_ACTIVE flag
      and managing the "ACTIVE" state by directly looking at the reference
      counter value. To do so, the functions dmz_activate_zone() and
      dmz_deactivate_zone() are changed to inline functions respectively
      calling atomic_inc() and atomic_dec(), while the dmz_is_active() macro
      is changed to an inline function calling atomic_read().
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Reported-by: Masato Suzuki <masato.suzuki@wdc.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
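      As a rough illustration of the fix described above, the zone state
      accessors collapse into thin wrappers around the reference counter,
      so "active" simply means "refcount != 0" and there is no separate
      flag that can fall out of sync. This is a sketch only; the struct
      layout and field name (refcount) are assumptions, not the literal
      patch.

      #include <linux/atomic.h>
      #include <linux/types.h>

      struct dm_zone {
          atomic_t refcount;            /* BIOs currently using this zone */
          /* ... */
      };

      static inline void dmz_activate_zone(struct dm_zone *zone)
      {
          atomic_inc(&zone->refcount);
      }

      static inline void dmz_deactivate_zone(struct dm_zone *zone)
      {
          atomic_dec(&zone->refcount);
      }

      static inline bool dmz_is_active(struct dm_zone *zone)
      {
          return atomic_read(&zone->refcount) != 0;
      }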
  13. 12 Jul 2019, 1 commit
  14. 19 Apr 2019, 1 commit
    • dm zoned: Fix zone report handling · 7aedf75f
      Damien Le Moal authored
      The function blkdev_report_zones() returns success even if no zone
      information is reported (empty report). Empty zone reports can only
      happen if the report start sector passed exceeds the device capacity.
      The conditions for this to happen are either a bug in the caller code,
      or a change in the device that forced the low-level driver to change
      the device capacity to a value that is lower than the report start
      sector. This situation includes a failed disk revalidation resulting in
      the disk capacity being changed to 0.
      
      If this change happens while dm-zoned is in its initialization phase
      executing dmz_init_zones(), this function may enter an infinite loop
      and hang the system. To avoid this, add a check to disallow empty zone
      reports and bail out early. Also fix the function dmz_update_zone() to
      make sure that the report for the requested zone was correctly obtained.
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Shaun Tancheff <shaun@tancheff.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
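      The defensive check the commit adds can be pictured as below. This is
      an illustrative sketch only: the helper names (dmz_report_zones,
      dmz_process_report), the constant and the struct fields are
      assumptions, not the real dm-zoned code or the blkdev_report_zones()
      signature. The key point is that a successful call reporting zero
      zones must end the loop with an error instead of retrying forever.

      static int dmz_init_zones_sketch(struct dmz_metadata *zmd)
      {
          sector_t sector = 0;
          unsigned int nr_zones;
          int ret;

          while (sector < zmd->dev_capacity) {
              nr_zones = DMZ_REPORT_NR_ZONES;
              ret = dmz_report_zones(zmd, sector, zmd->zones_buf, &nr_zones);
              if (ret)
                  return ret;
              /* An empty report used to spin here forever; bail out. */
              if (!nr_zones)
                  return -EIO;

              sector += dmz_process_report(zmd, zmd->zones_buf, nr_zones);
          }
          return 0;
      }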
  15. 19 Oct 2018, 2 commits
    • dm zoned: fix various dmz_get_mblock() issues · 3d4e7383
      Damien Le Moal authored
      dmz_fetch_mblock() called from dmz_get_mblock() has a race since the
      allocation of the new metadata block descriptor and its insertion in
      the cache rbtree with the READING state is not atomic. Two different
      contexts requesting the same block may end up each adding two different
      descriptors of the same block to the cache.
      
      Another problem for this function is that the BIO for processing the
      block read is allocated after the metadata block descriptor is inserted
      in the cache rbtree. If the BIO allocation fails, the metadata block
      descriptor is freed without first being removed from the rbtree.
      
      Fix the first problem by checking again if the requested block is not in
      the cache right before inserting the newly allocated descriptor,
      atomically under the mblk_lock spinlock. The second problem is fixed by
      simply allocating the BIO before inserting the new block in the cache.
      
      Finally, since dmz_fetch_mblock() also increments a block reference
      counter, rename the function to dmz_get_mblock_slow(). To be symmetric
      and clear, also rename dmz_lookup_mblock() to dmz_get_mblock_fast() and
      increment the block reference counter directly in that function rather
      than in dmz_get_mblock().
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
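      The race fix follows the classic "allocate outside the lock, re-check
      under the lock" pattern. The sketch below illustrates it; the helper
      names (dmz_alloc_mblock, dmz_free_mblock, dmz_get_mblock_fast,
      dmz_insert_mblock, dmz_submit_mblock_read) and struct fields are
      assumptions for the example, not the actual patch.

      static struct dmz_mblock *dmz_get_mblock_slow_sketch(struct dmz_metadata *zmd,
                                                           sector_t mblk_no)
      {
          struct dmz_mblock *mblk, *found;
          struct bio *bio;

          /* Allocate the descriptor and the read BIO before touching the
           * cache, so a failed BIO allocation never leaves a descriptor
           * dangling in the rbtree. */
          mblk = dmz_alloc_mblock(zmd, mblk_no);
          if (!mblk)
              return NULL;
          bio = bio_alloc(GFP_NOIO, 1);
          if (!bio) {
              dmz_free_mblock(zmd, mblk);
              return NULL;
          }

          spin_lock(&zmd->mblk_lock);
          /* Re-check: another context may have inserted the same block
           * while we were allocating outside the lock. */
          found = dmz_get_mblock_fast(zmd, mblk_no);
          if (found) {
              spin_unlock(&zmd->mblk_lock);
              dmz_free_mblock(zmd, mblk);
              bio_put(bio);
              return found;
          }
          dmz_insert_mblock(zmd, mblk);   /* state: READING */
          spin_unlock(&zmd->mblk_lock);

          dmz_submit_mblock_read(zmd, mblk, bio);
          return mblk;
      }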
    • dm zoned: fix metadata block ref counting · 33c2865f
      Damien Le Moal authored
      Since the ref field of struct dmz_mblock is always used with the
      spinlock of struct dmz_metadata locked, there is no need to use an
      atomic_t type. Change the type of the ref field to an unsigned
      integer.
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
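      In sketch form the simplification looks as follows (field and helper
      names are assumptions for the example): because every manipulation of
      ref already runs under mblk_lock, a plain integer is sufficient and
      the atomic_t machinery only adds cost.

      struct dmz_mblock {
          unsigned int ref;             /* was: atomic_t ref */
          /* ... */
      };

      static void dmz_mblock_get_sketch(struct dmz_metadata *zmd,
                                        struct dmz_mblock *mblk)
      {
          lockdep_assert_held(&zmd->mblk_lock);
          mblk->ref++;
      }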
  16. 17 Jan 2018, 1 commit
  17. 24 Aug 2017, 1 commit
    • block: replace bi_bdev with a gendisk pointer and partitions index · 74d46992
      Christoph Hellwig authored
      This way we don't need a block_device structure to submit I/O.  The
      block_device has different life time rules from the gendisk and
      request_queue and is usually only available when the block device node
      is open.  Other callers need to explicitly create one (e.g. the lightnvm
      passthrough code, or the new nvme multipathing code).
      
      For the actual I/O path all that we need is the gendisk, which exists
      once per block device.  But given that the block layer also does
      partition remapping we additionally need a partition index, which is
      used for said remapping in generic_make_request.
      
      Note that all the block drivers generally want request_queue or
      sometimes the gendisk, so this removes a layer of indirection all
      over the stack.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
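      In sketch form (exact field and helper names are assumptions drawn
      from the description above, not verified against the patch), the bio
      now identifies its target by gendisk plus partition index rather than
      by a block_device pointer, and generic_make_request uses that index
      for partition remapping:

      struct bio_sketch {
          struct gendisk  *bi_disk;     /* replaces bio->bi_bdev */
          u8               bi_partno;   /* partition index for remapping */
          /* ... */
      };

      /* A driver that only holds a gendisk can target a bio without ever
       * instantiating a block_device, e.g. for whole-disk I/O: */
      static void bio_set_disk_sketch(struct bio_sketch *bio, struct gendisk *disk)
      {
          bio->bi_disk = disk;
          bio->bi_partno = 0;
      }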
  18. 27 Jul 2017, 1 commit