1. 10 February 2015, 5 commits
  2. 29 January 2015, 1 commit
  3. 18 December 2014, 4 commits
    • dm: fix missed error code if .end_io isn't implemented by target_type · 5164bece
      Committed by zhendong chen
      In bio-based DM's clone_endio(), when the target_type doesn't implement
      .end_io (e.g. linear), r will always be initialized to 0.  So if a
      WRITE SAME bio fails, WRITE SAME will not be disabled as intended.
      
      Fix this by initializing r to error, rather than 0, in clone_endio().
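      
      A minimal sketch of the fix, abridged from clone_endio() in
      drivers/md/dm.c of that era (the endio return-value handling is
      omitted):
      
        static void clone_endio(struct bio *bio, int error)
        {
                int r = error;  /* was "int r = 0;", so targets without
                                   .end_io left a failure looking like success */
                struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
                struct dm_io *io = tio->io;
                struct mapped_device *md = io->md;
                dm_endio_fn endio = tio->ti->type->end_io;
      
                if (endio)
                        r = endio(tio->ti, bio, error);
      
                /* with r stuck at 0, a failed WRITE SAME never reached this */
                if (unlikely(r == -EREMOTEIO && (bio->bi_rw & REQ_WRITE_SAME) &&
                             !bdev_get_queue(bio->bi_bdev)->limits.max_write_same_sectors))
                        disable_write_same(md);
      
                free_tio(md, tio);
                dec_pending(io, error);
        }
      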
      Signed-off-by: Alex Chen <alex.chen@huawei.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Fixes: 7eee4ae2 ("dm: disable WRITE SAME if it fails")
      Cc: stable@vger.kernel.org
      5164bece
    • dm thin: fix crash by initializing thin device's refcount and completion earlier · 2b94e896
      Committed by Marc Dionne
      Commit 80e96c54 ("dm thin: do not allow thin device activation
      while pool is suspended") delayed the initialization of a new thin
      device's refcount and completion until after the new thin had been
      added to the pool's active_thins list and the pool lock released.
      This opens a race with a worker thread that walks the list and calls
      thin_get/put, sees the refcount go to 0 and calls complete, freezing
      up the system and giving the oops below:
      
       kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
       kernel: IP: [<ffffffff810d360b>] __wake_up_common+0x2b/0x90
      
       kernel: Call Trace:
       kernel: [<ffffffff810d3683>] __wake_up_locked+0x13/0x20
       kernel: [<ffffffff810d3dc7>] complete+0x37/0x50
       kernel: [<ffffffffa0595c50>] thin_put+0x20/0x30 [dm_thin_pool]
       kernel: [<ffffffffa059aab7>] do_worker+0x667/0x870 [dm_thin_pool]
       kernel: [<ffffffff816a8a4c>] ? __schedule+0x3ac/0x9a0
       kernel: [<ffffffff810b1aef>] process_one_work+0x14f/0x400
       kernel: [<ffffffff810b206b>] worker_thread+0x6b/0x490
       kernel: [<ffffffff810b2000>] ? rescuer_thread+0x260/0x260
       kernel: [<ffffffff810b6a7b>] kthread+0xdb/0x100
       kernel: [<ffffffff810b69a0>] ? kthread_create_on_node+0x170/0x170
       kernel: [<ffffffff816ad7ec>] ret_from_fork+0x7c/0xb0
       kernel: [<ffffffff810b69a0>] ? kthread_create_on_node+0x170/0x170
      
      Set the thin device's initial refcount and initialize the completion
      before adding it to the pool's active_thins list in thin_ctr().
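      
      A sketch of the corrected ordering in thin_ctr(), abridged from
      drivers/md/dm-thin.c (field names per the 3.19-era code):
      
        /* make the thin fully initialized before the worker can see it */
        atomic_set(&tc->refcount, 1);
        init_completion(&tc->can_destroy);
      
        spin_lock_irqsave(&tc->pool->lock, flags);
        list_add_tail_rcu(&tc->list, &tc->pool->active_thins);
        spin_unlock_irqrestore(&tc->pool->lock, flags);
      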
      Signed-off-by: Marc Dionne <marc.dionne@your-file-system.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      2b94e896
    • dm thin: fix missing out-of-data-space to write mode transition if blocks are released · 2c43fd26
      Committed by Joe Thornber
      Discard bios and thin device deletion have the potential to release data
      blocks.  If the thin-pool is in out-of-data-space mode, and blocks were
      released, transition the thin-pool back to full write mode.
      
      The correct time to do this is just after the thin-pool metadata commit.
      It cannot be done before the commit because the space maps will not
      allow immediate reuse of the data blocks in case there's a rollback
      following power failure.
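      
      A sketch of the fix, abridged from drivers/md/dm-thin.c: after a
      successful metadata commit, check whether data blocks have become
      free and, if so, switch back to write mode:
      
        static void check_for_space(struct pool *pool)
        {
                int r;
                dm_block_t nr_free;
      
                if (get_pool_mode(pool) != PM_OUT_OF_DATA_SPACE)
                        return;
      
                r = dm_pool_get_free_block_count(pool->pmd, &nr_free);
                if (r)
                        return;
      
                if (nr_free)
                        set_pool_mode(pool, PM_WRITE);
        }
      
      check_for_space() is called from the pool's commit path, right after
      the metadata commit succeeds.
      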
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      2c43fd26
    • dm thin: fix inability to discard blocks when in out-of-data-space mode · 45ec9bd0
      Committed by Joe Thornber
      When the pool was in PM_OUT_OF_SPACE mode its process_prepared_discard
      function pointer was incorrectly being set to
      process_prepared_discard_passdown rather than process_prepared_discard.
      
      This incorrect function pointer meant the discard was being passed
      down, but not affecting the mapping.  As such, any discard issued in
      an attempt to reclaim blocks would not successfully free data space.
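      
      A sketch of the corrected assignment in set_pool_mode(), abridged
      from drivers/md/dm-thin.c:
      
        case PM_OUT_OF_DATA_SPACE:
                ...
                /* was process_prepared_discard_passdown, which passed the
                 * discard to the data device but never updated the
                 * thin-pool mapping, so no data space was reclaimed */
                pool->process_prepared_discard = process_prepared_discard;
                break;
      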
      Reported-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      45ec9bd0
  4. 11 December 2014, 1 commit
    • md: Check MD_RECOVERY_RUNNING as well as ->sync_thread. · f851b60d
      Committed by NeilBrown
      A recent change to md started the ->sync_thread asynchronously from
      a work_queue rather than synchronously.  This means that there can
      be a small window between the time when MD_RECOVERY_RUNNING is set
      and when ->sync_thread is set.
      
      So code that checks ->sync_thread might conclude that the thread has
      not been started and (because a lock is held) will not be started.
      That conclusion is no longer valid.
      
      Most of those places are best fixed by testing MD_RECOVERY_RUNNING
      as well.  To make this completely reliable, we wake_up(&resync_wait)
      after clearing that flag as well as after clearing ->sync_thread.
      
      Other places are better served by flushing the relevant workqueue
      to ensure that if the sync thread was starting, it has now started.
      This is particularly appropriate when we are about to stop the
      sync thread.
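      
      A sketch of both patterns (abridged from drivers/md/md.c; md_misc_wq
      is the workqueue that runs the deferred md_start_sync work):
      
        /* checks must now consider the flag as well as the pointer */
        if (mddev->sync_thread ||
            test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
                /* resync/recovery is running, or about to start */
        }
      
        /* before stopping the sync thread, make sure a pending
         * md_start_sync work item has actually started it */
        flush_workqueue(md_misc_wq);
      
        /* and when the flag is cleared, wake waiters just as when
         * ->sync_thread is cleared */
        clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
        wake_up(&resync_wait);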
      
      Fixes: ac05f256
      Signed-off-by: NeilBrown <neilb@suse.de>
      f851b60d
  5. 03 December 2014, 2 commits
    • md: fix semicolon.cocci warnings · 7d7e64f2
      Committed by kbuild test robot
      drivers/md/md.c:7175:43-44: Unneeded semicolon
      
       Removes unneeded semicolon.
      
      Generated by: scripts/coccinelle/misc/semicolon.cocci
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      7d7e64f2
    • md/raid5: fetch_block must fetch all the blocks handle_stripe_dirtying wants. · 108cef3a
      Committed by NeilBrown
      It is critical that fetch_block() and handle_stripe_dirtying()
      are consistent in their analysis of what needs to be loaded.
      Otherwise raid5 can wait forever for a block that won't be loaded.
      
      Currently when writing to a RAID5 that is resyncing, to a location
      beyond the resync offset, handle_stripe_dirtying chooses a
      reconstruct-write cycle, but fetch_block() assumes a
      read-modify-write, and a lockup can happen.
      
      So treat that case just like RAID6 in fetch_block() too, matching
      handle_stripe_dirtying: RAID6 always does reconstruct-write.
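      
      A sketch of the strengthened condition in fetch_block(), abridged
      from drivers/md/raid5.c (only the relevant clause shown): a write
      landing at or beyond the resync point (mddev->recovery_cp) is now
      treated like RAID6, so all blocks needed for a reconstruct-write
      get fetched:
      
        ((sh->raid_conf->level == 6 ||
          sh->sector >= sh->raid_conf->mddev->recovery_cp) &&
         s->failed && s->to_write &&
         s->to_write < sh->raid_conf->raid_disks - 2 &&
         (!test_bit(R5_Insync, &dev->flags) ||
          test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)))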
      
      This bug was introduced when the behaviour of handle_stripe_dirtying
      was changed in 3.7, so the patch is suitable for any kernel since,
      though it will need careful merging for some versions.
      
      Cc: stable@vger.kernel.org (v3.7+)
      Fixes: a7854487
      Reported-by: Henry Cai <henryplusplus@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      108cef3a
  6. 02 December 2014, 12 commits
  7. 24 November 2014, 3 commits
  8. 22 November 2014, 1 commit
    • dm thin: fix pool_io_hints to avoid looking at max_hw_sectors · d200c30e
      Committed by Mike Snitzer
      Simplify the pool_io_hints code that works to establish a max_sectors
      value that is a power-of-2 factor of the thin-pool's blocksize.  The
      biggest associated improvement is that the DM thin-pool is no longer
      concerning itself with the data device's max_hw_sectors when adjusting
      max_sectors.
      
      This fixes the relative fragility of the original "dm thin: adjust
      max_sectors_kb based on thinp blocksize" commit that only became
      apparent when testing was performed using a DM thin-pool on top of a
      virtio_blk device.  One proposed upstream patch detailed the problems
      inherent in virtio_blk: https://lkml.org/lkml/2014/11/20/611
      
      So even though virtio_blk incorrectly set its max_hw_sectors, it
      actually helped make it clear that we need DM thinp to be tolerant of
      any future Linux driver that incorrectly sets max_hw_sectors.
      
      We only need to be concerned with modifying the thin-pool device's
      max_sectors limit if it is smaller than the thin-pool's blocksize.  In
      this case the value of max_sectors does become a limiting factor when
      upper layers (e.g. filesystems) construct their bios.  But if the
      hardware can support IOs larger than the thin-pool's blocksize the user
      is encouraged to adjust the thin-pool's data device's max_sectors
      accordingly -- doing so will enable the thin-pool to inherit the
      established user-defined max_sectors.
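      
      A sketch of the simplified logic, abridged from pool_io_hints() in
      drivers/md/dm-thin.c (is_factor(a, b) is a small helper returning
      true when b divides a):
      
        /*
         * Only adjust max_sectors when it is smaller than the pool's
         * blocksize; round it down until it is a power-of-2 factor of
         * sectors_per_block.
         */
        if (limits->max_sectors < pool->sectors_per_block) {
                while (!is_factor(pool->sectors_per_block, limits->max_sectors)) {
                        if ((limits->max_sectors & (limits->max_sectors - 1)) == 0)
                                limits->max_sectors--;
                        limits->max_sectors = rounddown_pow_of_two(limits->max_sectors);
                }
        }
      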
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      d200c30e
  9. 20 November 2014, 5 commits
    • dm thin: suspend/resume active thin devices when reloading thin-pool · 583024d2
      Committed by Mike Snitzer
      Before this change it was expected that userspace would first suspend
      all active thin devices, reload/resize the thin-pool target, then resume
      all active thin devices.  Now the thin-pool suspend/resume will trigger
      the suspend/resume of all active thins via appropriate calls to
      dm_internal_suspend and dm_internal_resume.
      
      Store the mapped_device for each thin device in struct thin_c to make
      these calls possible.
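      
      A sketch of the suspend side, abridged from drivers/md/dm-thin.c
      (thin_md is the mapped_device newly stored in struct thin_c; the
      resume side mirrors it with dm_internal_resume):
      
        static void pool_suspend_active_thins(struct pool *pool)
        {
                struct thin_c *tc;
      
                /* suspend all active thin devices */
                tc = get_first_thin(pool);
                while (tc) {
                        dm_internal_suspend_noflush(tc->thin_md);
                        tc = get_next_thin(pool, tc);
                }
        }
      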
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      583024d2
    • dm: enhance internal suspend and resume interface · ffcc3936
      Committed by Mike Snitzer
      Rename dm_internal_{suspend,resume} to dm_internal_{suspend,resume}_fast
      -- dm-stats will continue using these methods to avoid all the extra
      suspend/resume logic that is not needed in order to quickly flush IO.
      
      Introduce dm_internal_suspend_noflush() variant that actually calls the
      mapped_device's target callbacks -- otherwise target-specific hooks are
      avoided (e.g. dm-thin's thin_presuspend and thin_postsuspend).  Common
      code between dm_internal_{suspend_noflush,resume} and
      dm_{suspend,resume} was factored out as __dm_{suspend,resume}.
      
      Update dm_internal_{suspend_noflush,resume} to always take and release
      the mapped_device's suspend_lock.  Also update dm_{suspend,resume} to be
      aware of potential for DM_INTERNAL_SUSPEND_FLAG to be set and respond
      accordingly by interruptibly waiting for the DM_INTERNAL_SUSPEND_FLAG to
      be cleared.  Add lockdep annotation to dm_suspend() and dm_resume().
      
      The existing DM_SUSPEND_FLAG remains unchanged.
      DM_INTERNAL_SUSPEND_FLAG is set by dm_internal_suspend_noflush() and
      cleared by dm_internal_resume().
      
      Both DM_SUSPEND_FLAG and DM_INTERNAL_SUSPEND_FLAG may be set if a device
      was already suspended when dm_internal_suspend_noflush() was called --
      this can be thought of as a "nested suspend".  A "nested suspend" can
      occur with legacy userspace dm-thin code that might suspend all active
      thin volumes before suspending the pool for resize.
      
      But otherwise, in the normal dm-thin-pool suspend case moving forward:
      the thin-pool will have DM_SUSPEND_FLAG set and all active thins from
      that thin-pool will have DM_INTERNAL_SUSPEND_FLAG set.
      
      Also add DM_INTERNAL_SUSPEND_FLAG to status report.  This new
      DM_INTERNAL_SUSPEND_FLAG state is being reported to assist with
      debugging (e.g. 'dmsetup info' will report an internally suspended
      device accordingly).
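      
      A sketch of the nested-suspend handling, abridged from
      __dm_internal_suspend() in drivers/md/dm.c (in the code the flag is
      the DMF_SUSPENDED_INTERNALLY bit in md->flags; suspend_lock is held):
      
        if (dm_suspended_md(md)) {
                /* device already suspended by userspace: just layer the
                 * internal-suspend state on top -- a "nested suspend" */
                set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
                return;
        }
      
        map = rcu_dereference_protected(md->map,
                                        lockdep_is_held(&md->suspend_lock));
        __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE);
        set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
        dm_table_postsuspend_targets(map);
      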
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      ffcc3936
    • dm thin: do not allow thin device activation while pool is suspended · 80e96c54
      Committed by Mike Snitzer
      Otherwise IO could be issued to the pool while it is suspended.
      
      Care was taken to properly interlock between the thin and thin-pool
      targets when accessing the pool's 'suspended' flag.  The thin_ctr will
      not add a new thin device to the pool's active_thins list if the pool
      is suspended.
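      
      A sketch of the check in thin_ctr(), abridged from
      drivers/md/dm-thin.c (locking and cleanup omitted):
      
        if (tc->pool->suspended) {
                ti->error = "Unable to activate thin device while pool is suspended";
                r = -EINVAL;
                goto bad;
        }
      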
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      80e96c54
    • dm: add presuspend_undo hook to target_type · d67ee213
      Committed by Mike Snitzer
      The DM thin-pool target now must undo the changes performed during
      pool_presuspend(), so introduce a presuspend_undo hook in target_type.
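      
      The new hook sits alongside the existing suspend callbacks in struct
      target_type (include/linux/device-mapper.h):
      
        typedef void (*dm_presuspend_undo_fn) (struct dm_target *ti);
      
        struct target_type {
                ...
                dm_presuspend_fn presuspend;
                dm_presuspend_undo_fn presuspend_undo;  /* undo a presuspend that
                                                           won't be followed by a
                                                           postsuspend */
                dm_postsuspend_fn postsuspend;
                ...
        };
      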
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      d67ee213
    • dm: return earlier from dm_blk_ioctl if target doesn't implement .ioctl · 4d341d82
      Committed by Mike Snitzer
      No point checking if the device is suspended if the current target
      doesn't even implement .ioctl.
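      
      A sketch of the reordering in dm_blk_ioctl(), abridged from
      drivers/md/dm.c:
      
        tgt = dm_table_get_target(map, 0);
        if (!tgt->type->ioctl)
                goto out;       /* nothing to call: return before the
                                   suspended-device check */
      
        if (dm_suspended_md(md)) {
                r = -EAGAIN;
                goto out;
        }
      
        r = tgt->type->ioctl(tgt, cmd, arg);
      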
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      4d341d82
  10. 17 November 2014, 1 commit
    • md: Always set RECOVERY_NEEDED when clearing RECOVERY_FROZEN · 45eaf45d
      Committed by NeilBrown
      md_check_recovery will skip any recovery and also clear
      MD_RECOVERY_NEEDED if MD_RECOVERY_FROZEN is set.
      So when we clear _FROZEN, we must set _NEEDED and ensure that
      md_check_recovery gets run.
      Otherwise we could miss out on something that is needed.
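      
      A sketch, abridged from drivers/md/md.c:
      
        if (test_and_clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) {
                /* _FROZEN suppressed the 'recovery-needed' processing, so
                 * reassert it and kick the thread that runs
                 * md_check_recovery() */
                set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
                md_wakeup_thread(mddev->thread);
        }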
      
      In particular, this can make it impossible to remove a failed
      device from an array if the 'recovery-needed' processing didn't
      happen.
      Suitable for stable kernels since 3.13.
      
      Cc: stable@vger.kernel.org (3.13+)
      Reported-and-tested-by: Joe Lawrence <joe.lawrence@stratus.com>
      Fixes: 30b8feb7
      Signed-off-by: NeilBrown <neilb@suse.de>
      45eaf45d
  11. 13 November 2014, 3 commits
  12. 11 November 2014, 2 commits
    • dm cache: improve discard support · 7ae34e77
      Committed by Joe Thornber
      Safely allow the discard blocksize to be larger than the cache
      blocksize by using the bio prison's range locking support.  This also
      improves discard performance considerably because larger discards are
      issued to the dm-cache device.  The discard blocksize was always
      intended to be greater than the cache blocksize, but until now that
      wasn't implemented safely.
      
      Also, by safely restoring the ability to have discard blocksize larger
      than cache blocksize we're able to significantly reduce the memory used
      for the cache's discard bitset.  Before, with a small discard blocksize,
      the discard bitset could get quite large because its size is a function
      of the discard blocksize and the origin device's size.  For example,
      previously, using a 32KB cache blocksize with a 40TB origin resulted in
      1280MB of incore memory use for the discard bitset!  Now, the discard
      blocksize is scaled up accordingly to ensure the discard bitset is
      capped at 2**14 bits, or 16KB.
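      
      A sketch of the scaling, abridged from drivers/md/dm-cache-target.c:
      
        /*
         * The discard blocksize must be at least the cache blocksize, and
         * the origin must be covered by no more than 2^14 discard blocks.
         */
        #define MAX_DISCARD_BLOCKS (1 << 14)
      
        static bool too_many_discard_blocks(sector_t discard_block_size,
                                            sector_t origin_size)
        {
                (void) sector_div(origin_size, discard_block_size);
                return origin_size > MAX_DISCARD_BLOCKS;
        }
      
        static sector_t calculate_discard_block_size(sector_t cache_block_size,
                                                     sector_t origin_size)
        {
                sector_t discard_block_size = cache_block_size;
      
                if (origin_size)
                        while (too_many_discard_blocks(discard_block_size, origin_size))
                                discard_block_size *= 2;
      
                return discard_block_size;
        }
      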
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      7ae34e77
    • dm cache: revert "prevent corruption caused by discard_block_size > cache_block_size" · 08b18451
      Committed by Joe Thornber
      This reverts commit d132cc6d because we
      actually do want to allow the discard blocksize to be larger than the
      cache blocksize.  Further dm-cache discard changes will make this
      possible.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      08b18451