1. 13 June 2013, 4 commits
    • md/raid1,5,10: Disable WRITE SAME until a recovery strategy is in place · 5026d7a9
      Committed by H. Peter Anvin
      There are cases where the kernel will believe that the WRITE SAME
      command is supported by a block device which does not, in fact,
      support WRITE SAME.  This currently happens for SATA drives behind a
      SAS controller, but there are probably a hundred other ways that can
      happen, including drive firmware bugs.
      
      After receiving an error for WRITE SAME, the block layer will retry the
      request as a plain write of zeroes, but mdraid considers the failure
      fatal and marks the drive as failed.  This has the effect that all the
      mirrors containing a specific set of data are each offlined in very
      rapid succession, resulting in data loss.
      
      However, just bouncing the request back up to the block layer isn't
      ideal either, because the whole initial request-retry sequence should
      be inside the write bitmap fence, which probably means that md needs
      to do its own conversion of WRITE SAME to write zero.
      
      Until the failure scenario has been sorted out, disable WRITE SAME for
      raid1, raid5, and raid10.
      
      [neilb: added raid5]
      
      This patch is appropriate for any -stable since 3.7 when write_same
      support was added.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      5026d7a9
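      A minimal sketch of what such a disable amounts to, assuming the
      block-layer helper blk_queue_max_write_same_sectors() that came in
      with WRITE SAME support in 3.7 (illustrative, not the literal patch):

          #include <linux/blkdev.h>
          /* struct mddev comes from drivers/md/md.h */

          /* Advertise a WRITE SAME limit of zero so the block layer never
           * issues the command to this md personality; this would be
           * called from the personality's run() method. */
          static void disable_write_same(struct mddev *mddev)
          {
                  if (mddev->queue)
                          blk_queue_max_write_same_sectors(mddev->queue, 0);
          }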
    • md/raid1,raid10: use freeze_array in place of raise_barrier in various places. · e2d59925
      Committed by NeilBrown
      Various places in raid1 and raid10 are calling raise_barrier when they
      really should call freeze_array.
      The former is only intended to be called from "make_request".
      The latter has extra checks for 'nr_queued' and makes a call to
      flush_pending_writes(), so it is safe to call it from within the
      management thread.
      
      Using raise_barrier will sometimes deadlock.  Using freeze_array
      should not.
      
      As 'freeze_array' currently expects one request to be pending (in
      handle_read_error - the only previous caller), we need to pass
      it the number of pending requests (extra) to ignore.
      
      The deadlock was made particularly noticeable by commits
      050b6615 (raid10) and 6b740b8d (raid1) which
      appeared in 3.4, so the fix is appropriate for any -stable
      kernel since then.
      
      This patch probably won't apply directly to some early kernels and
      will need to be applied by hand.
      
      Cc: stable@vger.kernel.org
      Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      e2d59925
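      A rough sketch of what freeze_array() with the new 'extra' argument
      looks like in raid1 terms (field names per drivers/md/raid1.c of that
      era; treat this as illustrative rather than the exact patch):

          static void freeze_array(struct r1conf *conf, int extra)
          {
                  /* Quiesce all IO; a caller that already owns 'extra'
                   * pending requests passes that count so the wait can
                   * still complete. */
                  spin_lock_irq(&conf->resync_lock);
                  conf->barrier++;
                  conf->nr_waiting++;
                  wait_event_lock_irq_cmd(conf->wait_barrier,
                                  conf->nr_pending == conf->nr_queued + extra,
                                  conf->resync_lock,
                                  flush_pending_writes(conf));
                  spin_unlock_irq(&conf->resync_lock);
          }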
    • md/raid1: consider WRITE as successful only if at least one non-Faulty and non-rebuilding drive completed it. · 3056e3ae
      Committed by Alex Lyakas
      Without that fix, the following scenario could happen:
      
      - RAID1 with drives A and B; drive B was freshly-added and is rebuilding
      - Drive A fails
      - A WRITE request arrives at the array. It is failed by drive A, so
      r1_bio is marked as R1BIO_WriteError, but the rebuilding drive B
      succeeds in writing it, so the same r1_bio is marked as
      R1BIO_Uptodate.
      - r1_bio arrives at handle_write_finished, badblocks are disabled,
      md_error()->error() does nothing because we don't fail the last drive
      of raid1
      - raid_end_bio_io() calls call_bio_endio()
      - As a result, in call_bio_endio():
              if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
                      clear_bit(BIO_UPTODATE, &bio->bi_flags);
      this code doesn't clear the BIO_UPTODATE flag, and the whole master
      WRITE succeeds, back to the upper layer.
      
      So we returned success to the upper layer, even though we had written
      the data onto the rebuilding drive only. But when we want to read the
      data back, we would not read from the rebuilding drive, so this data
      is lost.
      
      [neilb - applied identical change to raid10 as well]
      
      This bug can result in lost data, so it is suitable for any
      -stable kernel.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Alex Lyakas <alex@zadarastorage.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      3056e3ae
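      Conceptually the fix gates the Uptodate bit on the state of the drive
      that completed the write; a sketch of that test in the
      write-completion path (names per raid1.c, not the literal diff):

          /* In raid1_end_write_request(): only a device that is both
           * In_sync (fully rebuilt) and not Faulty may declare the
           * master WRITE successful. */
          if (test_bit(In_sync, &rdev->flags) &&
              !test_bit(Faulty, &rdev->flags))
                  set_bit(R1BIO_Uptodate, &r1_bio->state);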
    • md: md_stop_writes() should always freeze recovery. · 6b6204ee
      Committed by NeilBrown
      __md_stop_writes() will currently sometimes freeze recovery.
      So any caller must be ready for that to happen, and indeed they are.
      
      However, if __md_stop_writes() doesn't freeze recovery, then
      a recovery could start before mddev_suspend() is called, which
      could be awkward.  This can particularly cause problems for dm-raid.
      
      So change __md_stop_writes() to always freeze recovery.  This is safe
      and more predictable.
      Reported-by: Brassow Jonathan <jbrassow@redhat.com>
      Tested-by: Brassow Jonathan <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      6b6204ee
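      "Freezing recovery" here means setting MD_RECOVERY_FROZEN
      unconditionally rather than only on some paths; roughly:

          /* Sketch of the change in __md_stop_writes(): freeze first,
           * so no recovery can start between here and a later
           * mddev_suspend() from callers such as dm-raid. */
          set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);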
  2. 30 May 2013, 1 commit
  3. 20 May 2013, 1 commit
    • dm thin: fix metadata dev resize detection · 610bba8b
      Committed by Alasdair G Kergon
      Fix detection of the need to resize the dm thin metadata device.
      
      The code incorrectly tried to extend the metadata device when it
      didn't need to due to a merging error with patch 24347e95 ("dm thin:
      detect metadata device resizing").
      
        device-mapper: transaction manager: couldn't open metadata space map
        device-mapper: thin metadata: tm_open_with_sm failed
        device-mapper: thin: aborting transaction failed
        device-mapper: thin: switching pool to failure mode
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      610bba8b
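      A hypothetical sketch of the intended check (helper and variable
      names assumed, not taken from the actual dm-thin code):

          /* Only extend the metadata device when it has actually grown;
           * the bad merge made this path fire when no growth occurred. */
          if (metadata_dev_size <= sb_metadata_dev_size)
                  return 0;       /* nothing to resize */

          return dm_pool_resize_metadata_dev(pool->pmd, metadata_dev_size);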
  4. 15 May 2013, 3 commits
    • bcache: Fix error handling in init code · f59fce84
      Committed by Kent Overstreet
      This code appears to have rotted... fix various bugs and do some
      refactoring.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      f59fce84
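      The commit message gives no specifics, but init-path fixes like this
      usually come down to strictly ordered goto unwinding; an illustrative
      skeleton with hypothetical step names:

          static int __init bcache_init(void)
          {
                  int ret;

                  ret = register_step_a();        /* hypothetical sub-init */
                  if (ret)
                          goto err;
                  ret = register_step_b();        /* hypothetical sub-init */
                  if (ret)
                          goto err_undo_a;
                  return 0;

          err_undo_a:
                  unregister_step_a();            /* unwind in reverse order */
          err:
                  return ret;
          }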
    • bcache: drop "select CLOSURES" · bbb1c3b5
      Committed by Paul Bolle
      The Kconfig entry for BCACHE selects CLOSURES. But there's no Kconfig
      symbol CLOSURES. That symbol was used in development versions of bcache,
      but was removed when the closures code was no longer provided as a
      kernel library. It can safely be dropped.
      Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
      bbb1c3b5
    • bcache: Fix incompatible pointer type warning · 867e1162
      Committed by Emil Goode
      The release function pointer in struct block_device_operations
      should point to a function declared to return void.
      
      Sparse warnings:
      
      drivers/md/bcache/super.c:656:27: warning:
      	incorrect type in initializer (different base types)
      	drivers/md/bcache/super.c:656:27:
      	expected void ( *release )( ... )
      	drivers/md/bcache/super.c:656:27:
      	got int ( static [toplevel] *<noident> )( ... )
      
      drivers/md/bcache/super.c:656:2: warning:
      	initialization from incompatible pointer type [enabled by default]
      
      drivers/md/bcache/super.c:656:2: warning:
      	(near initialization for ‘bcache_ops.release’) [enabled by default]
      Signed-off-by: Emil Goode <emilgoode@gmail.com>
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      867e1162
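      In 3.10 block_device_operations.release() returns void, so the fix
      is purely a matter of the declaration; a simplified sketch
      (identifiers here are illustrative):

          #include <linux/blkdev.h>

          /* release must return void to match the struct's field type:
           * void (*release)(struct gendisk *, fmode_t); */
          static void sketch_release(struct gendisk *disk, fmode_t mode)
          {
                  /* nothing to release in this sketch */
          }

          static const struct block_device_operations sketch_ops = {
                  .release = sketch_release,
          };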
  5. 10 May 2013, 20 commits
  6. 07 May 2013, 1 commit
  7. 01 May 2013, 2 commits
  8. 30 April 2013, 3 commits
  9. 25 April 2013, 1 commit
  10. 24 April 2013, 4 commits
    • DM RAID: Add message/status support for changing sync action · be83651f
      Committed by Jonathan Brassow
      This patch adds a message interface to dm-raid to allow the user to more
      finely control the sync actions being performed by the MD driver.  This
      gives the user the ability to initiate "check" and "repair" (i.e. scrubbing).
      Two additional fields have been appended to the status output to provide more
      information about the type of sync action occurring and the results of those
      actions, specifically: <sync_action> and <mismatch_cnt>.  These new fields
      will always be populated.  This is essentially the device-mapper way of doing
      what MD controls through the 'sync_action' sysfs file and shows through the
      'mismatch_cnt' sysfs file.
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      be83651f
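      Internally, a message handler of this kind maps the message string
      onto MD recovery flags; a hedged sketch of the dispatch (only the
      MD_RECOVERY_* bits and md_wakeup_thread() are real kernel names):

          static int raid_message_sketch(struct mddev *mddev, char *action)
          {
                  if (!strcasecmp(action, "check"))
                          set_bit(MD_RECOVERY_CHECK, &mddev->recovery);
                  else if (strcasecmp(action, "repair"))
                          return -EINVAL;         /* unknown action */

                  /* both "check" and "repair" request a user-driven sync */
                  set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
                  set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
                  md_wakeup_thread(mddev->thread);
                  return 0;
          }

      From user space this would be driven with something like
      "dmsetup message <raid-device> 0 check".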
    • MD: Export 'md_reap_sync_thread' function · a91d5ac0
      Committed by Jonathan Brassow
      Make 'md_reap_sync_thread' available to other files, specifically dm-raid.c.
      - rename reap_sync_thread to md_reap_sync_thread
      - move the function after md_check_recovery to match md.h declaration placement
      - export md_reap_sync_thread
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      a91d5ac0
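      Exporting a previously static md function follows a fixed pattern;
      roughly (the _GPL variant of the export may apply instead):

          /* drivers/md/md.c */
          void md_reap_sync_thread(struct mddev *mddev)
          {
                  /* former reap_sync_thread() body, unchanged */
          }
          EXPORT_SYMBOL(md_reap_sync_thread);

          /* drivers/md/md.h */
          extern void md_reap_sync_thread(struct mddev *mddev);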
    • md: don't update metadata when stopping a read-only array. · b6d428c6
      Committed by NeilBrown
      Read-only arrays should stay that way as much as possible.
      Updating the metadata - which could be triggered by a re-add
      while assembling the array - should be avoided.
      Signed-off-by: NeilBrown <neilb@suse.de>
      b6d428c6
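      The guard amounts to checking mddev->ro before writing superblocks
      out; an illustrative sketch (placement simplified):

          /* Skip the superblock write-out while the array is read-only;
           * just remember that an update is pending for later. */
          if (mddev->ro)
                  set_bit(MD_CHANGE_DEVS, &mddev->flags);
          else
                  md_update_sb(mddev, 1);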
    • md: Allow devices to be re-added to a read-only array. · 7ceb17e8
      Committed by NeilBrown
      When assembling an array incrementally we might want to make
      it available when "enough" devices are present, but maybe
      not "all" devices are present.
      If the remaining devices appear before the array is actually used,
      they should be added transparently.
      
      We do this by using the "read-auto" mode where the array acts like
      it is read-only until a write request arrives.
      
      Currently, an add-device request switches a read-auto array to active.
      This means that only one device can be added after the array is first
      made read-auto.  This isn't a problem for RAID5, but is not ideal for
      RAID6 or RAID10.
      Also we don't really want to switch the array to read-auto at all
      when re-adding a device as this doesn't really imply any change.
      
      So:
       - remove the "md_update_sb()" call from add_new_disk().  This isn't
         really needed as just adding a disk doesn't require a metadata
         update.  Instead, just set MD_CHANGE_DEVS.  This will effect a
         metadata update soon enough, once the array is not read-only.
      
       - Allow the ADD_NEW_DISK ioctl to succeed without activating a
         read-auto array, provided the MD_DISK_SYNC flag is set.
         In this case, the device will be rejected if it cannot be added
         with the correct device number, or has an incorrect event count.
      
       - Teach remove_and_add_spares() to be careful about adding spares
         when the array is read-only (or read-mostly) - only add devices
         that are thought to be in-sync, and only do it if the array is
         in-sync itself.
      
       - In md_check_recovery, use remove_and_add_spares in the read-only
         case, rather than open coding just the 'remove' part of it.
      Reported-by: Martin Wilck <mwilck@arcor.de>
      Signed-off-by: NeilBrown <neilb@suse.de>
      7ceb17e8
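      The "be careful about adding spares" rule above might look like this
      inside remove_and_add_spares() (a conceptual sketch, not the literal
      patch):

          struct md_rdev *rdev;

          rdev_for_each(rdev, mddev) {
                  if (rdev->raid_disk >= 0)
                          continue;       /* already active in the array */
                  /* While read-only, only attach devices believed to be
                   * in-sync, and only if the array itself is in-sync. */
                  if (mddev->ro &&
                      (!mddev->in_sync || !test_bit(In_sync, &rdev->flags)))
                          continue;
                  /* ...otherwise hand the device to the personality's
                   * hot-add path as before... */
          }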