1. 16 Jul, 2015 (1 commit)
    • dm thin: stay in out-of-data-space mode once no_space_timeout expires · bcc696fa
      Authored by Mike Snitzer
      This fixes an issue where running out of data space would cause the
      thin-pool's metadata to become read-only.  There was no reason to make
      metadata read-only -- calling set_pool_mode() with PM_READ_ONLY was a
      misguided way to error all queued and future write IOs.  We can
      accomplish the same by remaining in PM_OUT_OF_DATA_SPACE mode but with
      error_if_no_space enabled.
      
      Otherwise, the use of PM_READ_ONLY could cause a race where commit() was
      started before the PM_READ_ONLY transition but dm_pool_commit_metadata()
      would go on to fail because the block manager had transitioned to
      read-only.  The return of -EPERM from dm_pool_commit_metadata(), due to
      attempting to commit while in read-only mode, caused the thin-pool to
      set 'needs_check' via metadata_operation_failed().  This needless
      cascade of failures made life more difficult for users than necessary.
      Reported-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
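
      A minimal user-space sketch of the mode handling described above (the
      pool struct, handle_write() helper and main() driver are illustrative
      stand-ins, not the actual dm-thin code):

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        enum pool_mode { PM_WRITE, PM_OUT_OF_DATA_SPACE, PM_READ_ONLY, PM_FAIL };

        struct pool {
                enum pool_mode mode;
                bool error_if_no_space;    /* error bios instead of queueing them */
        };

        /* Once no_space_timeout expires: stay out-of-data-space, start erroring. */
        static void no_space_timeout_expired(struct pool *p)
        {
                p->mode = PM_OUT_OF_DATA_SPACE;    /* not PM_READ_ONLY */
                p->error_if_no_space = true;
        }

        static int handle_write(const struct pool *p)
        {
                if (p->mode == PM_OUT_OF_DATA_SPACE)
                        return p->error_if_no_space ? -ENOSPC : 0 /* queue the bio */;
                return 0;
        }

        int main(void)
        {
                struct pool p = { PM_OUT_OF_DATA_SPACE, false };

                no_space_timeout_expired(&p);
                printf("write -> %d\n", handle_write(&p));  /* -ENOSPC; metadata untouched */
                return 0;
        }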
  2. 06 Jul, 2015 (1 commit)
  3. 12 Jun, 2015 (2 commits)
    • dm thin: fail messages with EOPNOTSUPP when pool cannot handle messages · fd467696
      Authored by Mike Snitzer
      Use the EOPNOTSUPP, rather than EINVAL, error code when a user attempts
      to send the pool a message it cannot handle.  Otherwise userspace is led
      to believe the message failed due to an invalid argument.
      Reported-by: Zdenek Kabelac <zkabelac@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
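
      Roughly the distinction being drawn, as a hedged sketch (the
      pool_can_handle_messages flag and handle_message() helper are made-up
      stand-ins for the target's message path):

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Distinguish "pool can't take messages right now" from "bad argument". */
        static int handle_message(bool pool_can_handle_messages, const char *msg)
        {
                if (!pool_can_handle_messages)
                        return -EOPNOTSUPP;     /* was -EINVAL, which misled userspace */
                if (!msg || !msg[0])
                        return -EINVAL;         /* a genuinely invalid argument */
                return 0;
        }

        int main(void)
        {
                printf("%d\n", handle_message(false, "create_thin 0"));  /* -EOPNOTSUPP */
                return 0;
        }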
    • dm thin: range discard support · 34fbcf62
      Authored by Joe Thornber
      Previously, REQ_DISCARD bios were split into block-sized chunks
      before submission to the thin target.  There are a couple of issues with
      this:
      
       - If the block size is small, a large discard request can
         get broken up into a great many bios which is both slow and causes
         a lot of memory pressure.
      
       - The thin pool block size and the discard granularity for the
         underlying data device need to be compatible if we want to pass down
         the discard.
      
      This patch relaxes the block size granularity for thin devices.  It
      makes use of the recent range locking added to the bio_prison to
      quiesce a whole range of thin blocks before unmapping them.  Once a
      thin range has been unmapped the discard can then be passed down to
      the data device for those sub ranges where the data blocks are no
      longer used (i.e. they weren't shared in the first place).
      
      This patch also doesn't make any apologies about open-coding portions
      of block core as a means of supporting async discard completions in the
      near term -- if/when late bio splitting lands it'll all get cleaned up.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
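
      A small sketch of the alignment arithmetic behind passing a large
      discard down only for the whole thin-pool blocks it covers (the block
      size and sector numbers are made up, and the bio_prison range locking
      the patch relies on is omitted):

        #include <stdint.h>
        #include <stdio.h>

        /* Given a discard of sectors [begin, end), find the range of whole
         * pool blocks it covers; only those can be unmapped and passed down. */
        static void covered_blocks(uint64_t begin, uint64_t end,
                                   uint64_t sectors_per_block,
                                   uint64_t *block_begin, uint64_t *block_end)
        {
                *block_begin = (begin + sectors_per_block - 1) / sectors_per_block;
                *block_end = end / sectors_per_block;
        }

        int main(void)
        {
                uint64_t b, e;

                covered_blocks(100, 4000, 128, &b, &e);  /* 128-sector (64KiB) blocks */
                printf("unmap blocks [%llu, %llu)\n",
                       (unsigned long long)b, (unsigned long long)e);
                return 0;
        }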
  4. 30 May, 2015 (2 commits)
  5. 22 May, 2015 (1 commit)
    • block: remove management of bi_remaining when restoring original bi_end_io · 326e1dbb
      Authored by Mike Snitzer
      Commit c4cf5261 ("bio: skip atomic inc/dec of ->bi_remaining for
      non-chains") regressed all existing callers that followed this pattern:
       1) saving a bio's original bi_end_io
       2) wiring up an intermediate bi_end_io
       3) restoring the original bi_end_io from intermediate bi_end_io
       4) calling bio_endio() to execute the restored original bi_end_io
      
      The regression was due to BIO_CHAIN only ever getting set if
      bio_inc_remaining() is called.  For the above pattern it isn't set until
      step 3 above (step 2 would've needed to establish BIO_CHAIN).  As such
      the first bio_endio(), in step 2 above, never decremented __bi_remaining
      before calling the intermediate bi_end_io -- leaving __bi_remaining with
      the value 1 instead of 0.  When bio_inc_remaining() occurred during step
      3 it brought it to a value of 2.  When the second bio_endio() was
      called, in step 4 above, it should've called the original bi_end_io but
      it didn't because there was an extra reference that wasn't dropped (due
      to atomic operations being optimized away since BIO_CHAIN wasn't set
      upfront).
      
      Fix this issue by removing the __bi_remaining management complexity for
      all callers that use the above pattern -- bio_chain() is the only
      interface that _needs_ to be concerned with __bi_remaining.  For the
      above pattern callers just expect the bi_end_io they set to get called!
      Remove bio_endio_nodec() and also remove all bio_inc_remaining() calls
      that aren't associated with the bio_chain() interface.
      
      Also, the bio_inc_remaining() interface has been made local to bio.c.
      
      Fixes: c4cf5261 ("bio: skip atomic inc/dec of ->bi_remaining for non-chains")
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
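
      The save/wire-up/restore pattern in question, reduced to a user-space
      sketch with plain function pointers (struct bio here is a stand-in with
      just an end_io callback; the point is that the caller only saves and
      restores the callback, with no __bi_remaining bookkeeping):

        #include <stdio.h>

        struct bio {
                void (*end_io)(struct bio *);
        };

        static void original_end_io(struct bio *b)
        {
                (void)b;
                printf("original end_io ran\n");        /* step 4's expected result */
        }

        static struct { void (*saved_end_io)(struct bio *); } hook;

        static void intermediate_end_io(struct bio *b)
        {
                b->end_io = hook.saved_end_io;          /* step 3: restore original */
                b->end_io(b);                           /* step 4: just call it */
        }

        int main(void)
        {
                struct bio b = { original_end_io };

                hook.saved_end_io = b.end_io;           /* step 1: save original */
                b.end_io = intermediate_end_io;         /* step 2: wire intermediate */
                b.end_io(&b);                           /* completion path */
                return 0;
        }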
  6. 06 May, 2015 (1 commit)
  7. 27 Feb, 2015 (1 commit)
    • dm thin: fix to consistently zero-fill reads to unprovisioned blocks · 5f027a3b
      Authored by Joe Thornber
      It was always intended that a read to an unprovisioned block would return
      zeroes regardless of whether the pool is in read-only or read-write
      mode.  thin_bio_map() was inconsistent in its handling of such reads
      when the pool is in read-only mode; it now properly zero-fills the bios
      it returns in response to unprovisioned block reads.
      
      Eliminate thin_bio_map()'s special read-only mode handling of -ENODATA
      and just allow the IO to be deferred to the worker which will result in
      pool->process_bio() handling the IO (which already properly zero-fills
      reads to unprovisioned blocks).
      Reported-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
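
      The intended behaviour, as a trivial sketch (zero_fill_read() is an
      illustrative stand-in for the zeroing done on the worker's
      process_bio() path, not the driver's actual function):

        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        /* Reads of unprovisioned blocks return zeroes, read-only pool or not. */
        static void zero_fill_read(char *buf, size_t len, bool block_provisioned)
        {
                if (!block_provisioned)
                        memset(buf, 0, len);    /* regardless of pool mode */
                /* else: copy from the mapped data block (omitted) */
        }

        int main(void)
        {
                char buf[8] = "garbage";

                zero_fill_read(buf, sizeof(buf), false);
                printf("first byte: %d\n", buf[0]);     /* 0 */
                return 0;
        }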
  8. 10 Feb, 2015 (1 commit)
  9. 28 Jan, 2015 (1 commit)
  10. 18 Dec, 2014 (3 commits)
    • dm thin: fix crash by initializing thin device's refcount and completion earlier · 2b94e896
      Authored by Marc Dionne
      Commit 80e96c54 ("dm thin: do not allow thin device activation
      while pool is suspended") delayed the initialization of a new thin
      device's refcount and completion until after this new thin was added
      to the pool's active_thins list and the pool lock was released.  This
      opened a race with a worker thread that walks the list and calls
      thin_get/put: it could see the refcount go to 0 and call complete,
      freezing up the system and giving the oops below:
      
       kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
       kernel: IP: [<ffffffff810d360b>] __wake_up_common+0x2b/0x90
      
       kernel: Call Trace:
       kernel: [<ffffffff810d3683>] __wake_up_locked+0x13/0x20
       kernel: [<ffffffff810d3dc7>] complete+0x37/0x50
       kernel: [<ffffffffa0595c50>] thin_put+0x20/0x30 [dm_thin_pool]
       kernel: [<ffffffffa059aab7>] do_worker+0x667/0x870 [dm_thin_pool]
       kernel: [<ffffffff816a8a4c>] ? __schedule+0x3ac/0x9a0
       kernel: [<ffffffff810b1aef>] process_one_work+0x14f/0x400
       kernel: [<ffffffff810b206b>] worker_thread+0x6b/0x490
       kernel: [<ffffffff810b2000>] ? rescuer_thread+0x260/0x260
       kernel: [<ffffffff810b6a7b>] kthread+0xdb/0x100
       kernel: [<ffffffff810b69a0>] ? kthread_create_on_node+0x170/0x170
       kernel: [<ffffffff816ad7ec>] ret_from_fork+0x7c/0xb0
       kernel: [<ffffffff810b69a0>] ? kthread_create_on_node+0x170/0x170
      
      Set the thin device's initial refcount and initialize the completion
      before adding it to the pool's active_thins list in thin_ctr().
      Signed-off-by: Marc Dionne <marc.dionne@your-file-system.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
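
      The ordering rule the fix enforces, sketched with C11 atomics (the thin
      struct, list and thin_ctr() shape below are simplified stand-ins; the
      real code also performs the list insertion under the pool's lock):

        #include <stdatomic.h>
        #include <stdio.h>

        struct thin {
                atomic_int refcount;
                int completion_done;    /* stand-in for a struct completion */
                struct thin *next;      /* stand-in for the active_thins list */
        };

        static struct thin *active_thins;

        static void thin_ctr(struct thin *tc)
        {
                /* Initialize refcount and completion BEFORE publishing... */
                atomic_init(&tc->refcount, 1);
                tc->completion_done = 0;

                /* ...only then add to the list a worker thread may be walking. */
                tc->next = active_thins;
                active_thins = tc;
        }

        int main(void)
        {
                struct thin tc;

                thin_ctr(&tc);
                printf("refcount = %d\n", atomic_load(&tc.refcount));   /* 1 */
                return 0;
        }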
    • dm thin: fix missing out-of-data-space to write mode transition if blocks are released · 2c43fd26
      Authored by Joe Thornber
      Discard bios and thin device deletion have the potential to release data
      blocks.  If the thin-pool is in out-of-data-space mode, and blocks were
      released, transition the thin-pool back to full write mode.
      
      The correct time to do this is just after the thin-pool metadata commit.
      It cannot be done before the commit because the space maps will not
      allow immediate reuse of the data blocks in case there's a rollback
      following power failure.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
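
      A hedged sketch of the ordering: released blocks only become reusable
      once the metadata commit succeeds, and only then does the pool leave
      out-of-data-space mode (the fields and helpers below are illustrative,
      not the driver's):

        #include <stdbool.h>
        #include <stdio.h>

        enum pool_mode { PM_WRITE, PM_OUT_OF_DATA_SPACE };

        struct pool {
                enum pool_mode mode;
                bool blocks_released_since_commit;
        };

        static int commit_metadata(struct pool *p)
        {
                (void)p;    /* space maps allow reuse of freed blocks only after this */
                return 0;
        }

        static void maybe_resume_writes(struct pool *p)
        {
                if (p->mode == PM_OUT_OF_DATA_SPACE && p->blocks_released_since_commit) {
                        p->mode = PM_WRITE;     /* back to full write mode */
                        p->blocks_released_since_commit = false;
                }
        }

        int main(void)
        {
                struct pool p = { PM_OUT_OF_DATA_SPACE, true };

                if (commit_metadata(&p) == 0)
                        maybe_resume_writes(&p);    /* just after the commit */
                printf("%s\n", p.mode == PM_WRITE ? "PM_WRITE" : "PM_OUT_OF_DATA_SPACE");
                return 0;
        }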
    • dm thin: fix inability to discard blocks when in out-of-data-space mode · 45ec9bd0
      Authored by Joe Thornber
      When the pool was in PM_OUT_OF_DATA_SPACE mode, its process_prepared_discard
      function pointer was incorrectly being set to
      process_prepared_discard_passdown rather than process_prepared_discard.
      
      This incorrect function pointer meant the discard was being passed down
      but was not affecting the mapping.  As such, any discard issued in an
      attempt to reclaim blocks would not successfully free data space.
      Reported-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
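
      The bug boiled down to wiring up the wrong callback for the mode; a
      minimal sketch of the dispatch (the two handlers are stubs whose
      behaviour is only what the message above describes):

        #include <stdio.h>

        struct pool;
        typedef void (*discard_fn)(struct pool *);

        static void process_prepared_discard(struct pool *p)
        {
                (void)p;
                puts("update the mapping, freeing the data block");
        }

        static void process_prepared_discard_passdown(struct pool *p)
        {
                (void)p;
                puts("pass the discard down only; mapping untouched");
        }

        struct pool { discard_fn process_prepared_discard; };

        int main(void)
        {
                struct pool p;

                /* Before the fix, out-of-data-space mode wired up the wrong handler: */
                p.process_prepared_discard = process_prepared_discard_passdown;
                p.process_prepared_discard(&p);         /* data space never reclaimed */

                /* After the fix: */
                p.process_prepared_discard = process_prepared_discard;
                p.process_prepared_discard(&p);
                return 0;
        }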
  11. 22 Nov, 2014 (1 commit)
    • dm thin: fix pool_io_hints to avoid looking at max_hw_sectors · d200c30e
      Authored by Mike Snitzer
      Simplify the pool_io_hints code that works to establish a max_sectors
      value that is a power-of-2 factor of the thin-pool's blocksize.  The
      biggest associated improvement is that the DM thin-pool is no longer
      concerning itself with the data device's max_hw_sectors when adjusting
      max_sectors.
      
      This fixes the relative fragility of the original "dm thin: adjust
      max_sectors_kb based on thinp blocksize" commit that only became
      apparent when testing was performed using a DM thin-pool on top of a
      virtio_blk device.  One proposed upstream patch detailed the problems
      inherent in virtio_blk: https://lkml.org/lkml/2014/11/20/611
      
      So even though virtio_blk incorrectly set its max_hw_sectors, it actually
      helped make it clear that we need DM thinp to be tolerant of any future
      Linux driver that incorrectly sets max_hw_sectors.
      
      We only need to be concerned with modifying the thin-pool device's
      max_sectors limit if it is smaller than the thin-pool's blocksize.  In
      this case the value of max_sectors does become a limiting factor when
      upper layers (e.g. filesystems) construct their bios.  But if the
      hardware can support IOs larger than the thin-pool's blocksize the user
      is encouraged to adjust the thin-pool's data device's max_sectors
      accordingly -- doing so will enable the thin-pool to inherit the
      established user-defined max_sectors.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
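
      A sketch of the arithmetic: max_sectors is only touched when it is
      smaller than the thin-pool's blocksize, and then it is reduced to a
      power of two that divides the blocksize evenly (the helper names and
      example values are illustrative):

        #include <stdint.h>
        #include <stdio.h>

        /* Largest power of two <= x (x > 0). */
        static uint32_t rounddown_pow_of_two32(uint32_t x)
        {
                uint32_t p = 1;

                while (p <= x / 2)
                        p <<= 1;
                return p;
        }

        static uint32_t adjust_max_sectors(uint32_t max_sectors, uint32_t sectors_per_block)
        {
                if (max_sectors >= sectors_per_block)
                        return max_sectors;              /* nothing to do */

                max_sectors = rounddown_pow_of_two32(max_sectors);
                while (sectors_per_block % max_sectors)  /* make it a factor of the blocksize */
                        max_sectors >>= 1;
                return max_sectors;
        }

        int main(void)
        {
                /* e.g. a 504-sector limit vs a 1024-sector (512KiB) pool block */
                printf("%u\n", adjust_max_sectors(504, 1024));  /* 256 */
                return 0;
        }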
  12. 20 Nov, 2014 (2 commits)
  13. 13 Nov, 2014 (2 commits)
  14. 11 Nov, 2014 (14 commits)
  15. 05 Nov, 2014 (1 commit)
  16. 02 Aug, 2014 (3 commits)
  17. 12 Jun, 2014 (1 commit)
    • dm thin: update discard_granularity to reflect the thin-pool blocksize · 09869de5
      Authored by Lukas Czerner
      DM thinp already checks whether the discard_granularity of the data
      device is a factor of the thin-pool block size.  But when using the
      dm-thin-pool's discard passdown support, DM thinp was not selecting the
      max of the underlying data device's discard_granularity and the
      thin-pool's block size.
      
      Update set_discard_limits() to set discard_granularity to the max of
      these values.  This enables blkdev_issue_discard() to properly align the
      discards that are sent to the DM thin device on a full block boundary.
      As such each discard will now cover an entire DM thin-pool block and the
      block will be reclaimed.
      Reported-by: Zdenek Kabelac <zkabelac@redhat.com>
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
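
      The selection described above, as a one-line sketch (the byte values
      are made up):

        #include <stdint.h>
        #include <stdio.h>

        static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

        int main(void)
        {
                uint32_t data_dev_discard_granularity = 4096;   /* bytes */
                uint32_t pool_block_size = 64 * 1024;           /* bytes, 64KiB blocks */

                /* set_discard_limits(): take the larger of the two so that
                 * blkdev_issue_discard() aligns discards to whole pool blocks. */
                uint32_t discard_granularity =
                        max_u32(data_dev_discard_granularity, pool_block_size);

                printf("discard_granularity = %u\n", discard_granularity);
                return 0;
        }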
  18. 04 Jun, 2014 (2 commits)