1. 19 March 2012 (4 commits)
    • md/raid1,raid10: avoid deadlock during resync/recovery. · d6b42dcb
      NeilBrown committed
      If RAID1 or RAID10 is used under LVM or some other stacking
      block device, it is possible to enter a deadlock during
      resync or recovery.
      This can happen if the upper level block device creates
      two requests to the RAID1 or RAID10.  The first request gets
      processed, blocks recovery, and queues requests for the
      underlying devices in current->bio_list.  A resync request
      then starts, which will wait for those requests and block new IO.
      
      But then the second request to the RAID1/10 will be attempted
      and it cannot progress until the resync request completes,
      which cannot progress until the underlying device requests complete,
      which are on a queue behind that second request.
      
      So allow that second request to proceed even though there is
      a resync request about to start.
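
      A simplified sketch of the resulting wait in raid1's wait_barrier()
      (raid10 gets the analogous change; trimmed from the driver, so the
      surrounding details may differ):

          spin_lock_irq(&conf->resync_lock);
          if (conf->barrier) {
                  conf->nr_waiting++;
                  /* Wait for the barrier to drop, UNLESS there are
                   * already pending requests (so the barrier cannot
                   * rise completely) and this thread still has bios
                   * queued in current->bio_list - those must be
                   * submitted to let nr_pending drop.
                   */
                  wait_event_lock_irq(conf->wait_barrier,
                                      !conf->barrier ||
                                      (conf->nr_pending &&
                                       current->bio_list &&
                                       !bio_list_empty(current->bio_list)),
                                      conf->resync_lock, /* no cmd */);
                  conf->nr_waiting--;
          }
          conf->nr_pending++;
          spin_unlock_irq(&conf->resync_lock);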
      
      This is suitable for any -stable kernel.
      
      Cc: stable@vger.kernel.org
      Reported-by: Ray Morris <support@bettercgi.com>
      Tested-by: Ray Morris <support@bettercgi.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/bitmap: ensure to load bitmap when creating via sysfs. · 4474ca42
      NeilBrown committed
      When commit 69e51b44 (md/bitmap: separate out loading a bitmap...)
      created bitmap_load, it neglected to call bitmap_load after
      bitmap_create when a bitmap is created through the sysfs interface.
      So if a bitmap is added this way, memory is not allocated properly
      and the kernel can crash.
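
      A sketch of the fix in the sysfs store path (simplified from the
      bitmap location handler, with some error handling trimmed):

          mddev->bitmap_info.offset = offset;
          rv = bitmap_create(mddev);
          if (rv == 0)
                  /* the missing call: read the bitmap from disk and
                   * finish allocating the in-memory structures */
                  rv = bitmap_load(mddev);
          if (rv) {
                  bitmap_destroy(mddev);
                  mddev->bitmap_info.offset = 0;
          }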
      
      This is suitable for any -stable release since 2.6.35.
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: don't set md arrays to readonly on shutdown. · c744a65c
      NeilBrown committed
      It seems that with recent kernels, writeback can still be in
      progress while shutdown is happening, and consequently data can
      be written after the md reboot notifier switches all arrays to
      read-only.  This causes a BUG.
      
      So don't switch them to read-only - just mark them clean and
      set 'safemode' to '2', which means that immediately after any
      write the array will be switched back to 'clean'.
      
      This could result in the shutdown happening while the array is
      marked dirty, thus forcing a resync on reboot.  However if you
      reboot without performing a "sync" first, you get to keep both halves.
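
      A sketch of the new notifier behaviour (simplified; it relies on
      the __md_stop_writes() helper, which flushes and marks the array
      clean without switching it read-only):

          for_each_mddev(mddev, tmp) {
                  if (mddev_trylock(mddev)) {
                          if (mddev->pers)
                                  /* flush and mark clean - stay writable */
                                  __md_stop_writes(mddev);
                          /* safemode == 2: drop back to 'clean'
                           * immediately after any later write */
                          mddev->safemode = 2;
                          mddev_unlock(mddev);
                  }
          }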
      
      This is suitable for any stable kernel (though there might be some
      conflicts with obvious fixes in earlier kernels).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: allow re-add to failed arrays. · dc10c643
      NeilBrown committed
      When an array has failed (some data is inaccessible), there is no
      point attempting to add a spare, as the data could not possibly be
      recovered.
      
      However there may be value in re-adding a recently removed device:
      e.g. if there is a write-intent bitmap and it is clear, then access
      to the data could be restored by this action.
      
      So don't reject a re-add to a failed array for RAID10 and RAID5
      (the only array types that check for a failed array).
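
      A sketch of the relaxed check as in raid5_add_disk() (raid10 is
      analogous; simplified from the driver):

          /* A re-added device remembers its previous slot in
           * rdev->saved_raid_disk; only a brand-new spare
           * (saved_raid_disk < 0) is pointless on a failed array.
           */
          if (rdev->saved_raid_disk < 0 && has_failed(conf))
                  /* no point adding a device */
                  return -EINVAL;
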
      Signed-off-by: NeilBrown <neilb@suse.de>
  2. 13 March 2012 (5 commits)
  3. 11 March 2012 (1 commit)
  4. 10 March 2012 (7 commits)
  5. 09 March 2012 (13 commits)
  6. 08 March 2012 (10 commits)