1. 22 May 2012, 26 commits
  2. 21 May 2012, 10 commits
    • md/raid10: split out interpretation of layout to a separate function. · deb200d0
      NeilBrown committed
      We will soon be interpreting the layout (and chunksize etc) from
      multiple places to support reshape.  So split it out into a separate
      function.
      Signed-off-by: NeilBrown <neilb@suse.de>
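
      As a point of reference, a minimal sketch of what such a layout-decoding
      helper can look like.  The struct and function names here are illustrative,
      not the kernel's; the bit assignments follow the documented md RAID10
      'layout' encoding (low byte = near copies, next byte = far copies,
      bit 16 = far-offset mode).

      	/* Illustrative sketch: decode a RAID10 'layout' word into copy counts. */
      	struct layout_sketch {
      		int near_copies;   /* copies striped across adjacent devices   */
      		int far_copies;    /* copies placed far apart on each device   */
      		int far_offset;    /* non-zero selects the 'offset' far layout */
      	};

      	static int decode_layout(int layout, struct layout_sketch *g)
      	{
      		g->near_copies = layout & 0xff;
      		g->far_copies  = (layout >> 8) & 0xff;
      		g->far_offset  = layout & (1 << 16);

      		/* Refuse layout bits this sketch does not understand. */
      		if (layout >> 17)
      			return -1;
      		return 0;
      	}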
    • md/raid10: Introduce 'prev' geometry to support reshape. · f8c9e74f
      NeilBrown committed
      When RAID10 supports reshape it will need a 'previous' and a 'current'
      geometry, so introduce that here.
      Use the 'prev' geometry when before the reshape_position, and the
      current 'geo' when beyond it.  At other times, use both as
      appropriate.
      
      For now, both are identical (and reshape_position is never set).
      
      When we use the 'prev' geometry, we must use the old data_offset.
      When we use the current (and a reshape is happening) we must use
      the new_data_offset.
      Signed-off-by: NeilBrown <neilb@suse.de>
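
      A rough sketch of the selection rule described above.  Types and names are
      illustrative and the reshape-direction subtleties are ignored; the point is
      only that sectors below reshape_position use the previous geometry and the
      old data_offset, while sectors beyond it use the current geometry and
      new_data_offset.

      	typedef unsigned long long sector_t;

      	struct reshape_state {
      		sector_t reshape_position;   /* boundary between the two layouts */
      		sector_t data_offset;        /* per-device offset, old layout    */
      		sector_t new_data_offset;    /* per-device offset, new layout    */
      	};

      	/* Returns 1 if the 'prev' geometry applies and stores the matching
      	 * per-device data offset in *offset; returns 0 for the current 'geo'. */
      	static int use_prev_geometry(const struct reshape_state *rs,
      				     sector_t sector, sector_t *offset)
      	{
      		if (sector < rs->reshape_position) {
      			*offset = rs->data_offset;
      			return 1;
      		}
      		*offset = rs->new_data_offset;
      		return 0;
      	}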
    • md: use resync_max_sectors for reshape as well as resync. · c804cdec
      NeilBrown committed
      Some resync type operations need to act on the address space of the
      device, others on the address space of the array.
      
      This only affects RAID10, which sets resync_max_sectors to the array
      size (it defaults to the device size); that is currently used for
      resync only.  However, reshape of a RAID10 must be done against the
      array size, not the device size, so change the code to use resync_max_sectors
      for both the resync and the reshape cases.
      This does not affect RAID5 or RAID1, just RAID10.
      Signed-off-by: NeilBrown <neilb@suse.de>
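
      A simplified sketch of the distinction being made (the enum and names are
      assumptions, not the kernel's code): resync and reshape run over the array's
      address space, recovery over the device's.

      	/* Illustrative only: pick the end point for a sync-type operation. */
      	enum sync_op { SYNC_RESYNC, SYNC_RESHAPE, SYNC_RECOVERY };

      	static unsigned long long sync_end(enum sync_op op,
      					   unsigned long long resync_max_sectors,
      					   unsigned long long dev_sectors)
      	{
      		switch (op) {
      		case SYNC_RESYNC:
      		case SYNC_RESHAPE:
      			return resync_max_sectors;   /* array address space  */
      		default:
      			return dev_sectors;          /* device address space */
      		}
      	}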
    • md: teach sync_page_io about new_data_offset. · 1fdd6fc9
      NeilBrown committed
      Some code in raid1 and raid10 uses sync_page_io to
      read/write pages when responding to read errors.
      As we will shortly support changing data_offset for
      raid10, this function must understand new_data_offset.
      
      So add that understanding.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: collect some geometry fields into a dedicated structure. · 5cf00fcd
      NeilBrown committed
      We will shortly be adding reshape support for RAID10, which will
      require it to have two concurrent geometries (before and after).
      To make that easier, collect most geometry fields into 'struct geom'
      and access them from there.  Then we will more easily be able to add
      a second set of fields.
      
      Note that 'copies' is not in this struct and so cannot be changed.
      There is little need to change this number and doing so is a lot
      more difficult as it requires reallocating more things.
      So leave it out for now.
      Signed-off-by: NeilBrown <neilb@suse.de>
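
      As a sketch, the grouped fields might look roughly like this.  This is a
      plausible reconstruction, not the verbatim kernel struct, and the comments
      are approximate.

      	typedef unsigned long long sector_t;

      	/* Everything here influences how array sectors map onto member devices,
      	 * which is why keeping two instances (before and after) makes an
      	 * in-place reshape possible.  As the commit notes, the number of
      	 * 'copies' stays outside this struct. */
      	struct geom_sketch {
      		int      raid_disks;   /* member devices in this layout          */
      		int      near_copies;  /* copies striped across adjacent devices */
      		int      far_copies;   /* copies placed far apart on each device */
      		int      far_offset;   /* non-zero for the 'offset' far layout   */
      		sector_t stride;       /* spacing between far copies             */
      		int      chunk_shift;  /* log2 of the chunk size in sectors      */
      		sector_t chunk_mask;   /* chunk size in sectors, minus one       */
      	};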
    • md/raid5: allow for change in data_offset while managing a reshape. · b5254dd5
      NeilBrown committed
      The important issue here is incorporating the difference in data_offset
      into calculations concerning when we might need to overwrite data
      that is still thought to be valid.
      
      To this end we find the minimum offset difference across all devices
      and add that where appropriate.
      Signed-off-by: NeilBrown <neilb@suse.de>
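
      A minimal sketch of the 'minimum offset difference' idea.  The struct and
      names are illustrative, and the real code also folds the reshape direction
      into the comparison, which is omitted here.

      	struct rdev_sketch {
      		long long data_offset;
      		long long new_data_offset;
      	};

      	/* Smallest per-device (new_data_offset - data_offset); this margin
      	 * feeds the "may we overwrite this region yet?" checks. */
      	static long long min_offset_diff(const struct rdev_sketch *rdevs, int n)
      	{
      		long long min_diff = 0;
      		int i;

      		for (i = 0; i < n; i++) {
      			long long diff = rdevs[i].new_data_offset
      				       - rdevs[i].data_offset;
      			if (i == 0 || diff < min_diff)
      				min_diff = diff;
      		}
      		return min_diff;
      	}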
    • md/raid5: Use correct data_offset for all IO. · 05616be5
      NeilBrown committed
      As there can now be two different data_offsets - an 'old' and
      a 'new' - we need to carefully choose between them.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: add possibility to change data-offset for devices. · c6563a8c
      NeilBrown committed
      When reshaping we can avoid a costly intermediate backup by
      changing the 'start' address of the array on the device
      (if there is enough room).
      
      So as a first step, allow such a change to be requested
      through sysfs, and recorded in v1.x metadata.
      
      (As we didn't previously check that all 'pad' fields were zero,
       we need a new FEATURE flag for this.
       We now (belatedly) check that all remaining 'pad' fields are
       zero to avoid a repeat of this.)
      
      The new data offset must be requested separately for each device.
      This allows each to have a different change in the data offset.
      This is not likely to be used often but as data_offset can be
      set per-device, new_data_offset should be too.
      
      This patch also removes the 'acknowledged' arg to rdev_set_badblocks as
      it is never used and never will be.  At the same time we add a new
      arg ('in_new') which is currently always zero but will be used soon.
      
      When a reshape finishes we will need to update the data_offset
      and rdev->sectors.  So provide an exported function to do that.
      Signed-off-by: NeilBrown <neilb@suse.de>
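
      A hedged sketch of what recording a per-device offset change involves.  The
      field names and the feature-bit value below are assumptions for illustration,
      not the on-disk v1.x definitions; the point is that the new offset is stored
      per device and guarded by a feature flag, because older kernels never checked
      that spare 'pad' space was zero.

      	#include <stdint.h>

      	#define SKETCH_FEATURE_NEW_OFFSET  (1u << 6)   /* hypothetical bit */

      	struct sb_sketch {
      		uint64_t data_offset;      /* sectors from device start to data */
      		uint64_t new_data_offset;  /* offset once the reshape completes */
      		uint32_t feature_map;      /* NEW_OFFSET set when valid         */
      	};

      	/* Reading side: honour new_data_offset only if the writer declared it. */
      	static uint64_t effective_new_offset(const struct sb_sketch *sb)
      	{
      		if (sb->feature_map & SKETCH_FEATURE_NEW_OFFSET)
      			return sb->new_data_offset;
      		return sb->data_offset;
      	}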
    • md: allow a reshape operation to be reversed. · 2c810cdd
      NeilBrown committed
      Currently a reshape operation always progresses from the start
      of the array to the end unless the number of devices is being
      reduced, in which case it progresses in the opposite direction.
      
      To reverse a partial reshape which changes the number of devices
      you can stop the array and re-assemble with the raid-disks numbers
      reversed, and the reshape will undo itself.
      
      However for a reshape that does not change the number of devices
      it is not possible to reverse the reshape in the middle - you have to
      wait until it completes.
      
      So add a 'reshape_direction' attribute which is either 'forwards' or
      'backwards' and can be explicitly set when delta_disks is zero.
      
      This will become more important when we allow the data_offset to
      change in a reshape.  Then the explicit statement of what direction is
      being used will be more useful.
      
      This can be enabled in raid5 trivially as it already supports
      reverse reshape and just needs to use a different trigger to request it.
      Signed-off-by: NeilBrown <neilb@suse.de>
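
      A minimal sketch of how such an attribute handler might parse its input.
      The function name is illustrative; locking and the checks that delta_disks
      is zero and that no reshape is already running are omitted.

      	#include <string.h>

      	/* Map the two strings the commit describes onto a 'backwards' flag. */
      	static int parse_reshape_direction(const char *buf, int *reshape_backwards)
      	{
      		/* Prefix match, so a trailing newline from sysfs is tolerated. */
      		if (strncmp(buf, "forwards", 8) == 0) {
      			*reshape_backwards = 0;
      			return 0;
      		}
      		if (strncmp(buf, "backwards", 9) == 0) {
      			*reshape_backwards = 1;
      			return 0;
      		}
      		return -1;   /* unrecognised direction */
      	}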
    • md: using GFP_NOIO to allocate bio for flush request · b5e1b8ce
      Shaohua Li committed
      A flush request is usually issued in the transaction-commit code path, so
      using GFP_KERNEL to allocate memory for the flush-request bio falls into
      the classic deadlock: the allocation may block on reclaim, which may in
      turn need to write to the very device the flush is meant for.
      
      This is suitable for any -stable kernel to which it applies as it
      avoids a possible deadlock.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
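
      A sketch of the pattern, using the two-argument bio_alloc() of that kernel
      era; the helper name is hypothetical, and the md code actually goes through
      its own bio allocator, but the flag choice is the point being illustrated.

      	#include <linux/bio.h>
      	#include <linux/gfp.h>

      	/* GFP_KERNEL may recurse into reclaim and issue I/O that cannot finish
      	 * until this very flush completes; GFP_NOIO forbids the allocator from
      	 * starting new I/O to satisfy the request. */
      	static struct bio *alloc_flush_bio_sketch(void)
      	{
      		return bio_alloc(GFP_NOIO, 0);
      	}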
  3. 19 May 2012, 1 commit
    • md/raid10: fix transcription error in calc_sectors conversion. · b0d634d5
      NeilBrown committed
      The old code was
      		sector_div(stride, fc);
      the new code was
      		sector_div(size, conf->near_copies);
      
      'size' is right (the stride variable wasn't really needed), but
      'fc' means 'far_copies', and that is an important difference.
      
      Signed-off-by: NeilBrown <neilb@suse.de>       
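
      For readers unfamiliar with the helper being quoted: sector_div() divides a
      sector count in place and returns the remainder.  A userspace-style model of
      its effect (not the kernel macro itself):

      	static unsigned long long sector_div_sketch(unsigned long long *sectors,
      						    unsigned int divisor)
      	{
      		unsigned long long remainder = *sectors % divisor;

      		*sectors /= divisor;
      		return remainder;
      	}

      So the two quoted lines differ in which copy count the per-device size is
      divided by, which is exactly the transcription error being fixed.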
  4. 17 May 2012, 2 commits
    • MD: Add del_timer_sync to mddev_suspend (fix nasty panic) · 0d9f4f13
      Jonathan Brassow committed
      Use del_timer_sync to remove the timer before mddev_suspend finishes.
      
      We don't want a timer going off after an mddev_suspend is called.  This is
      especially true with device-mapper, since it can call the destructor function
      immediately following a suspend.  This frees (kfree) the structures
      upon which the timer depends, resulting in a very ugly panic.
      Therefore, we add a del_timer_sync to mddev_suspend to prevent this.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
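
      A sketch of the fix being described.  The struct is trimmed down, and the
      choice of safemode_timer as the timer involved is an assumption for
      illustration; what matters is that del_timer_sync() both removes the timer
      and waits for any handler that has already fired to finish.

      	#include <linux/timer.h>

      	struct mddev_sketch {
      		struct timer_list safemode_timer;
      		/* ... */
      	};

      	static void mddev_suspend_sketch(struct mddev_sketch *mddev)
      	{
      		/* ... quiesce incoming I/O ... */
      		del_timer_sync(&mddev->safemode_timer);
      		/* ... wait for in-flight requests to drain ... */
      	}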
    • md/raid10: set dev_sectors properly when resizing devices in array. · 6508fdbf
      NeilBrown committed
      raid10 stores dev_sectors in 'conf' separately from the one in
      'mddev' because it can have a very significant effect on block
      addressing and so needs to be updated carefully.
      
      However raid10_resize isn't updating it at all!
      
      To update it correctly, we need to make sure it is a proper
      multiple of the chunk size, taking various details of the layout
      into account.
      This calculation is currently done in setup_conf, so split it out
      from there and call it from raid10_resize as well.
      Then set conf->dev_sectors properly.
      Signed-off-by: NeilBrown <neilb@suse.de>
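
      An illustrative sketch of the kind of rounding involved (simplified; the
      real calc_sectors() also derives the stride and other layout details):
      trim a per-device size to a whole number of chunks and far-copy sections.

      	typedef unsigned long long sector_t;

      	static sector_t calc_dev_sectors_sketch(int chunk_shift, int far_copies,
      						sector_t size)
      	{
      		size >>= chunk_shift;          /* whole chunks only       */
      		size /= far_copies;            /* whole far sections only */
      		return (size * far_copies) << chunk_shift;
      	}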
  5. 04 May 2012, 1 commit