1. 09 December 2016, 2 commits
    • dm raid: fix discard support regression · 11e29684
      Committed by Heinz Mauelshagen
      Commit ecbfb9f1 ("dm raid: add raid level takeover support") moved the
      configure_discard_support() call from raid_ctr() to raid_preresume().
      
      Enabling/disabling discard _must_ happen during table load (through the
      .ctr hook).  Fix this regression by moving the
      configure_discard_support() call back to raid_ctr().
      
      Fixes: ecbfb9f1 ("dm raid: add raid level takeover support")
      Cc: stable@vger.kernel.org # 4.8+
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
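
      A minimal sketch of where the call belongs, assuming only the
      functions named above (argument parsing and error handling elided;
      this shows the shape of the fix, not the verbatim kernel diff):

      static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
      {
              struct raid_set *rs;

              /* ... argument parsing and raid_set construction elided ... */

              /*
               * Discard support must be decided while the table loads:
               * the stacked queue limits are already fixed by the time
               * .preresume runs, so deferring this call is too late.
               */
              configure_discard_support(rs);

              return 0;
      }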
    • dm raid: don't allow "write behind" with raid4/5/6 · affa9d28
      Committed by Heinz Mauelshagen
      Remove CTR_FLAG_MAX_WRITE_BEHIND from raid4/5/6's valid ctr flags.
      
      Only the md raid1 personality supports setting a maximum number
      of "write behind" write IOs on any legs set to "write mostly".
      "write mostly" enhances throughput with slow links/disks.
      
      Technically, the "write behind" value is a write-intent bitmap
      property that is respected only by the raid1 personality.  It caps
      how many "write behind" writes to "write mostly" raid1 mirror legs
      may be delayed, and reads are avoided on such legs.

      No other MD personality supported via dm-raid makes use of "write
      behind", so setting this property is superfluous; it wouldn't cause
      harm, but it is correct to reject it.
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
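
      In sketch form, the rejection falls out of dm-raid's per-level
      valid-ctr-flags masks checked at table-load time (the mask contents
      below are abridged and illustrative; only CTR_FLAG_MAX_WRITE_BEHIND
      and the check's shape follow the commit):

      /* Only the raid1 mask carries CTR_FLAG_MAX_WRITE_BEHIND (abridged). */
      #define RAID1_VALID_FLAGS   (CTR_FLAG_WRITE_MOSTLY | CTR_FLAG_MAX_WRITE_BEHIND)
      #define RAID456_VALID_FLAGS (CTR_FLAG_SYNC | CTR_FLAG_REBUILD)

      static int rs_check_for_valid_flags(struct raid_set *rs)
      {
              /* __valid_flags() returns the mask for rs's raid level. */
              if (rs->ctr_flags & ~__valid_flags(rs)) {
                      rs->ti->error = "Invalid flags combination";
                      return -EINVAL;
              }

              return 0;
      }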
  2. 21 November 2016, 1 commit
    • dm raid: correct error messages on old metadata validation · 453c2a89
      Committed by Heinz Mauelshagen
      When the dm-raid target (version 1.9.1) gets takeover/reshape requests
      on devices whose old superblock format does not support such
      conversions, it rejects them in super_init_validation() but logs a
      bogus error message (e.g. a "reshape" message when a takeover was
      requested).

      While at it, add messages for disk add/remove and stripe-sectors
      reshape requests, use the newer rs_{takeover,reshape}_requested() API,
      address a raid10 false positive when checking array positions, and
      remove rs_set_new() because the device members are already set up
      properly.
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
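
      The corrected checks in sketch form (rs_takeover_requested() and
      rs_reshape_requested() are the helpers named above; the message
      strings are illustrative, not the exact kernel text):

      /* In super_init_validation(), on a pre-1.9.0 superblock: */
      if (rs_takeover_requested(rs)) {
              DMERR("takeover not supported by old superblock format");
              return -EINVAL;
      }

      if (rs_reshape_requested(rs)) {
              DMERR("reshape not supported by old superblock format");
              return -EINVAL;
      }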
  3. 18 October 2016, 1 commit
  4. 12 October 2016, 1 commit
  5. 17 August 2016, 4 commits
    • dm raid: support raid0 with missing metadata devices · 9e7d9367
      Committed by Heinz Mauelshagen
      The raid0 MD personality does not start an array with any of its
      data devices missing.
      
      dm-raid was removing data/metadata device pairs unconditionally if it
      failed to read a superblock off the respective metadata device of such
      pair, resulting in failure to start arrays with the raid0 personality.
      
      Avoid removing any data/metadata device pairs in the case of raid0
      (e.g. the lvm2 segment type 'raid0_meta'), thus allowing MD to start
      the array.
      
      Also, avoid region size validation for raid0.
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
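
      The control flow in sketch form (rt_is_raid0() follows dm-raid's
      naming; superblock_read_failed() and clear_data_metadata_pair() are
      hypothetical stand-ins for the surrounding analysis code):

      /* For each data/metadata device pair during superblock analysis: */
      if (superblock_read_failed(rdev)) {
              /*
               * raid0 cannot run with members missing, so keep the pair
               * and let MD start the array as-is.
               */
              if (rt_is_raid0(rs->raid_type))
                      continue;

              /* Other levels: clear the pair; MD starts degraded. */
              clear_data_metadata_pair(rdev);
      }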
    • dm raid: enhance attempt_restore_of_faulty_devices() to support more devices · a3c06a38
      Committed by Heinz Mauelshagen
      attempt_restore_of_faulty_devices() was limited to 64 devices when
      identifying failed devices, but it should support the new maximum of
      253.  It clears any revivable devices via an MD personality hot-remove
      and hot-add cycle to allow for their recovery.

      Address this by using the existing functions to retrieve and update
      the failed-devices bitfield members in the dm-raid superblocks on all
      RAID devices, and by checking all of those members for any devices to
      clear.

      While at it, don't call attempt_restore_of_faulty_devices() for any MD
      personality that doesn't provide disk hot-add/remove methods (i.e.
      raid0, as of now), because such personalities don't support reviving
      failed disks.
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
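
      A sketch of the widened bitfield handling (DISKS_ARRAY_ELEMS and
      sb_retrieve_failed_devices() follow the commit's description; treat
      the exact signatures as assumptions).  253 devices no longer fit in a
      single 64-bit failed_devices word, so every word must be scanned:

      uint64_t failed_devices[DISKS_ARRAY_ELEMS];  /* ceil(253 / 64) words */
      int i;

      /* Gather failed_devices + extended_failed_devices from the sb. */
      sb_retrieve_failed_devices(sb, failed_devices);

      for (i = 0; i < rs->raid_disks; i++)
              if (failed_devices[i / 64] & (1ULL << (i % 64)))
                      ;  /* candidate for hot-remove/hot-add revival */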
    • dm raid: fix restoring of failed devices regression · 31e10a41
      Committed by Heinz Mauelshagen
      'lvchange --refresh RaidLV' causes a mapped-device suspend/resume
      cycle aimed at restoring and resyncing the device after transient
      device failures.  This failed because the RT_FLAG_RS_RESUMED flag was
      always cleared in the suspend path, so the device restore wasn't
      performed in the resume path.

      Fix this by no longer clearing RT_FLAG_RS_RESUMED in the suspend path
      and by resuming unconditionally.  Also remove a superfluous comment
      from raid_resume().
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
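
      The shape of the fix, sketched (the suspend call shown stands in for
      the real suspend path; this is not the verbatim diff):

      static void raid_postsuspend(struct dm_target *ti)
      {
              struct raid_set *rs = ti->private;

              /*
               * RT_FLAG_RS_RESUMED is no longer cleared here, so the
               * following raid_resume() always performs the restore work.
               */
              mddev_suspend(&rs->md);
      }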
    • dm raid: fix frozen recovery regression · a4423287
      Committed by Heinz Mauelshagen
      On LVM2 conversions via lvconvert(8), the target keeps mapped devices
      in a frozen state when requesting that RAID devices be resynchronized.
      This applies to, e.g., adding legs to a raid1 device or taking over
      from raid0 to raid4 when the rebuild flag is set on the new raid1 legs
      or on the added dedicated parity stripe.

      Fix frozen recovery for reshaping as well.
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
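
      The core of the fix, sketched with standard MD primitives (its exact
      placement inside dm-raid is an assumption): clear the frozen bit
      before kicking recovery so MD actually resynchronizes the new legs
      instead of sitting idle:

      /* Un-freeze and kick off recovery/resync. */
      clear_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
      set_bit(MD_RECOVERY_NEEDED, &rs->md.recovery);
      md_wakeup_thread(rs->md.thread);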
  6. 04 August 2016, 2 commits
  7. 03 August 2016, 1 commit
  8. 19 July 2016, 27 commits
  9. 17 June 2016, 1 commit