1. 31 March 2009, 17 commits
    • md/raid5: change raid5_compute_sector and stripe_to_pdidx to take a 'previous' argument · 112bf897
      Authored by NeilBrown
      This is similar to the recent change to get_active_stripe.
      There is no functional change, just some rearrangement to make
      future patches cleaner.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: simplify interface for init_stripe and get_active_stripe · b5663ba4
      Authored by NeilBrown
      Rather than passing 'pd_idx' and 'disks' to these functions, just pass
      'previous', which tells whether to use the 'previous' or 'current'
      geometry during a reshape, and let init_stripe calculate
      disks and pd_idx and anything else it might need.
      
      This is not a substantial simplification and even adds a division.
      However we will shortly be adding more complexity to init_stripe
      to handle more interesting 'reshape' activities, and without this
      change, the interface to these functions would get very complex.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Represent raid device size in sectors. · dd8ac336
      Authored by Andre Noll
      This patch renames the "size" field of struct mdk_rdev_s to
      "sectors" and changes this field to store sectors instead of
      blocks.
      
      All users of this field, linear.c, raid0.c and md.c, are fixed up
      accordingly, which gets rid of many multiplications and divisions.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
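
      A minimal sketch of the unit change (the sizes are made up; only the
      field names mirror the patch):

	#include <stdio.h>

	typedef unsigned long long sector_t;	/* 512-byte sectors, as in the kernel */

	int main(void)
	{
		unsigned long long size_blocks = 78150656ULL;	/* old field: 1K blocks */

		/* Before: every user converted at the call site. */
		sector_t sectors = size_blocks * 2;

		/* After: rdev->sectors stores this value directly, so the
		 * scattered "* 2" and "/ 2" pairs disappear. */
		printf("%llu sectors\n", sectors);
		return 0;
	}
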
    • md: Make mddev->size sector-based. · 58c0fed4
      Authored by Andre Noll
      This patch renames the "size" field of struct mddev_s to "dev_sectors"
      and stores the number of 512-byte sectors instead of the number of
      1K-blocks in it.
      
      All users of that field, including raid levels 1, 4-6 and 10, are
      adjusted accordingly.  This simplifies the code a bit because it lets
      us get rid of a couple of divisions/multiplications by two.
      
      In order to make checkpatch happy, some minor coding style issues
      have also been addressed.  In particular, size_store() now uses
      strict_strtoull() instead of simple_strtoull().
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
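
      The parsing change can be pictured with a userspace analogue of
      strict_strtoull() (a sketch; strict_parse_ull is a hypothetical
      stand-in, and the kernel helper also tolerates a trailing newline):

	#include <stdio.h>
	#include <stdlib.h>

	/* Unlike simple_strtoull(), reject trailing junk rather than
	 * silently stopping at it. */
	static int strict_parse_ull(const char *s, unsigned long long *res)
	{
		char *end;

		*res = strtoull(s, &end, 10);
		if (end == s || (*end != '\0' && *end != '\n'))
			return -22;	/* -EINVAL */
		return 0;
	}

	int main(void)
	{
		unsigned long long v;

		printf("\"1024\"  -> %d\n", strict_parse_ull("1024", &v));
		printf("\"1024x\" -> %d\n", strict_parse_ull("1024x", &v));
		return 0;
	}
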
    • md: be more consistent about setting WriteMostly flag when adding a drive to an array · 575a80fa
      Authored by NeilBrown
      When a drive is added to an array using ADD_NEW_DISK, there are two
      places we can get certain flags from:  the metadata on the disk or the
      flags passed through the IOCTL.
      
      For the WriteMostly flag (aka MD_DISK_WRITEMOSTLY) we take the value
      from either of those sources depending on whether it is set (i.e. we
      effectively 'or' the two sources together).
      
      This makes it awkward to clear, and is at best inconsistent.
      
      As documented code (in mdadm) requires that setting
      MD_DISK_WRITEMOSTLY in the ioctl be effective, we resolve the
      inconsistency by always using the value of this flag from the ioctl,
      and ignoring the value on disk.
      Signed-off-by: NeilBrown <neilb@suse.de>
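
      A minimal sketch of the resulting rule (userspace stand-ins;
      MD_DISK_WRITEMOSTLY is bit 9 in the md user-space ABI, while the rdev
      struct here is hypothetical):

	#include <stdio.h>

	#define MD_DISK_WRITEMOSTLY 9	/* bit number from the md user-space ABI */
	#define WriteMostly 0		/* illustrative in-kernel flag bit */

	struct rdev { unsigned long flags; };

	/* The ioctl's view wins; the on-disk flag is ignored. */
	static void apply_ioctl_flags(struct rdev *rdev, unsigned int ioctl_state)
	{
		if (ioctl_state & (1u << MD_DISK_WRITEMOSTLY))
			rdev->flags |= 1ul << WriteMostly;
		else
			rdev->flags &= ~(1ul << WriteMostly);	/* no 'or' with disk */
	}

	int main(void)
	{
		struct rdev r = { .flags = 1ul << WriteMostly };	/* disk says write-mostly */

		apply_ioctl_flags(&r, 0);				/* ioctl clears it */
		printf("WriteMostly=%lu\n", (r.flags >> WriteMostly) & 1);
		return 0;
	}
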
    • md: occasionally checkpoint drive recovery to reduce duplicate effort after a crash · 97e4f42d
      Authored by NeilBrown
      Version 1.x metadata has the ability to record the status of a
      partially completed drive recovery.
      However we only update that record on a clean shutdown.
      It would be nice to update it on unclean shutdowns too, particularly
      when using a bitmap that removes much of the 'sync' effort after an
      unclean shutdown.
      
      One complication with checkpointing recovery is that we only know
      where we are up to in terms of IO requests started, not which ones
      have completed.  And we need to know what has completed to record
      how much is recovered.  So occasionally pause the recovery until all
      submitted requests are completed, then update the record of where
      we are up to.
      
      When we have a bitmap, we already do that pause occasionally to keep
      the bitmap up-to-date.  So enhance that code to record the recovery
      offset and schedule a superblock update.
      And when there is no bitmap, just pause 16 times during the resync to
      do a checkpoint.
      '16' is a fairly arbitrary number.  But we don't really have any good
      way to judge how often is acceptable, and it seems like a reasonable
      number for now.
      Signed-off-by: NeilBrown <neilb@suse.de>
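
      The pacing can be sketched like this (numbers invented; in the kernel
      the checkpoint step waits for in-flight requests, records the recovery
      offset and schedules a superblock update):

	#include <stdio.h>

	typedef unsigned long long sector_t;

	int main(void)
	{
		sector_t max_sectors = 1000000, last_checkpoint = 0;

		for (sector_t done = 0; done <= max_sectors; done += 37000) {
			if (done - last_checkpoint >= max_sectors / 16) {
				/* pause until submitted IO completes, then
				 * record how far recovery has verifiably got */
				printf("checkpoint at %llu\n", done);
				last_checkpoint = done;
			}
		}
		return 0;
	}
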
    • md: move md_k.h from include/linux/raid/ to drivers/md/ · 43b2e5d8
      Authored by NeilBrown
      It really is nicer to keep related code together.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: move lots of #include lines out of .h files and into .c · bff61975
      Authored by NeilBrown
      This makes the includes more explicit, and is preparation for moving
      md_k.h to drivers/md/md.h.
      
      Remove include/linux/raid/md.h as its only remaining use was to
      #include other files.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: move LEVEL_* definition from md_k.h to md_u.h · 8b2b5c21
      Authored by NeilBrown
      ...as they are part of the user-space interface.
      Also move MdpMinorShift in there so we can remove duplication.
      
      Lastly move mdp_major in.  It is less obviously part of the user-space
      interface, but do_mounts_md.c uses it, and it is acting a bit like
      user-space.
      Signed-off-by: NeilBrown <neilb@suse.de>
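
      For reference, the moved definitions look roughly like this in md_u.h
      afterwards (quoted from memory of that era's headers, so treat the
      exact values as illustrative):

	#define LEVEL_MULTIPATH	(-4)
	#define LEVEL_LINEAR	(-1)
	#define LEVEL_FAULTY	(-5)

	/* 2^6 = 64 minors per partitionable (mdp) device */
	#define MdpMinorShift	6

	extern int mdp_major;	/* allocated at runtime, used by do_mounts_md.c */
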
    • md: move headers out of include/linux/raid/ · ef740c37
      Authored by Christoph Hellwig
      Move the headers with the local structures for the disciplines and
      bitmap.h into drivers/md/ so that they are more easily grepable for
      hacking and not far away.  md.h is left where it is for now as there
      are some uses from the outside.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • cleanup drivers/md/Makefile · 2a40a8ae
      Authored by Christoph Hellwig
      Use the -y variables instead of the old -objs so we can easily add
      conditional objects to the modules.  Also always use += to add
      subobjects, which avoids problems when additional objects are placed
      elsewhere in the file.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: stop defining MAJOR_NR · 3dbd8c2e
      Authored by Christoph Hellwig
      MAJOR_NR was only required for magic in linux/blk.h in 2.4 or earlier
      kernels, so no need to keep it around.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • MD data integrity support · 3f9d99c1
      Authored by Martin K. Petersen
      md: Add support for data integrity to MD
      
      If all subdevices support the same protection format, the MD device is
      flagged as integrity capable.
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
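
      The matching rule can be sketched with stand-in types (the struct here
      is hypothetical; in the kernel, profiles are compared via the block
      layer's blk_integrity_compare()):

	#include <stdio.h>
	#include <string.h>

	/* Stand-in for a device's integrity profile. */
	struct integrity_profile { const char *name; int tuple_size; };

	static int profiles_match(const struct integrity_profile *a,
				  const struct integrity_profile *b)
	{
		return a->tuple_size == b->tuple_size && !strcmp(a->name, b->name);
	}

	int main(void)
	{
		struct integrity_profile subdevs[] = {
			{ "T10-DIF-TYPE1-CRC", 8 },
			{ "T10-DIF-TYPE1-CRC", 8 },
		};
		int capable = 1;

		for (int i = 1; i < 2; i++)	/* all must match the first */
			if (!profiles_match(&subdevs[0], &subdevs[i]))
				capable = 0;

		printf("array integrity-capable: %d\n", capable);
		return 0;
	}
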
    • md: write bitmap information to devices that are undergoing recovery. · 355a43e6
      Authored by NeilBrown
      When we add some spares to an array and start recovery, and we have
      a bitmap which is stored 'internally' on all devices, we call
      bitmap_write_all to make sure the bitmap is correct on the new
      device(s).
      However that doesn't work, as write_sb_page only writes to
      'In_sync' devices, and devices undergoing recovery are not
      'In_sync' until recovery finishes.
      
      So extend write_sb_page (actually next_active_rdev) to include devices
      that are under recovery.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: never clear bit from the write-intent bitmap when the array is degraded. · d0a4bb49
      Authored by NeilBrown
      
      It is safe to clear a bit from the write-intent bitmap for a raid1
      if we know the data has been written to all devices, which is
      what the current test does.
      
      But it is not always safe to update the 'events_cleared' counter in
      that case.  This is because one request could complete successfully
      after some other request has partially failed.
      
      So simply disable the clearing and updating of events_cleared whenever
      the array is degraded.  This might end up not clearing some bits that
      could safely be cleared, but it is the safest approach.
      
      Note that the bug fixed here did not risk corrupting data by letting
      the array get out-of-sync.  Rather it meant that when a device is
      removed and re-added to the array, it might incorrectly require a full
      recovery rather than just recovering based on the bitmap.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Allow write-intent bitmaps to have chunksize < PAGE_SIZE · 1187cf0a
      Authored by NeilBrown
      md currently insists that the chunk size used for write-intent
      bitmaps (the amount of data that corresponds to one chunk)
      be at least one page.
      
      The reason for this restriction is lost in the mists of time,
      but a review of the code (and a vague memory) suggests that the only
      problem would be related to resync.  Resync tries very hard to
      work in multiples of a page, but also needs to sync with units
      of a bitmap_chunk too.
      
      This connection comes out in the bitmap_start_sync call.
      
      So change bitmap_start_sync to always work in multiples of a page.
      If the bitmap chunk size is less than one page, we flag multiple
      chunks as 'syncing' and generally make them all appear to the
      resync routines like one chunk.
      
      All other code either already works with data ranges that could
      span multiple chunks, or explicitly only cares about a single chunk.
      Signed-off-by: Neil Brown <neilb@suse.de>
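
      The wrapper idea can be sketched as follows (one_chunk_sync is a
      hypothetical stand-in for the old per-chunk bitmap_start_sync
      behaviour):

	#include <stdio.h>

	typedef unsigned long long sector_t;
	#define PAGE_SECTORS (4096 >> 9)	/* 8 sectors per 4K page */

	/* How many sectors one bitmap chunk covers from 'offset'. */
	static sector_t one_chunk_sync(sector_t offset, sector_t chunk_sectors)
	{
		return chunk_sectors - (offset % chunk_sectors);
	}

	/* Merge chunks until at least a page is covered, so resync still
	 * sees page-sized units. */
	static sector_t page_aligned_start_sync(sector_t offset,
						sector_t chunk_sectors)
	{
		sector_t blocks = 0;

		while (blocks < PAGE_SECTORS)
			blocks += one_chunk_sync(offset + blocks, chunk_sectors);
		return blocks;
	}

	int main(void)
	{
		/* a 1K bitmap chunk (2 sectors) < one page (8 sectors) */
		printf("%llu sectors flagged\n", page_aligned_start_sync(0, 2));
		return 0;
	}
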
    • md: Fix is_mddev_idle test (again). · eea1bf38
      Authored by NeilBrown
      There are two problems with is_mddev_idle.
      
      1/ sync_io is 'atomic_t' and hence 'int'.  curr_events and all the
         rest are 'long'.
         So if sync_io were to wrap on a 64bit host, the value of
         curr_events would go very negative suddenly, and take a very
         long time to return to positive.
      
         So do all calculations as 'int'.  That gives us plenty of precision
         for what we need.
      
      2/ To initialise rdev->last_events we simply call is_mddev_idle, on
         the assumption that it will make sure that last_events is in a
         suitable range.  It used to do this, but now it does not.
         So now we need to be more explicit about initialisation.
      Signed-off-by: NeilBrown <neilb@suse.de>
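
      Point 1 can be demonstrated in userspace (assuming an LP64 host, and
      doing the wrap in unsigned arithmetic to keep the demo free of signed
      overflow):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		int before = INT_MAX;				/* just before wrap */
		int after  = (int)((unsigned int)INT_MAX + 1u);	/* wrapped: INT_MIN */

		/* Mixing the int counter into 'long' arithmetic: huge bogus delta. */
		long bad_delta = (long)after - (long)before;

		/* Doing the subtraction modulo 2^32, as the fix does: delta of 1. */
		int good_delta = (int)((unsigned int)after - (unsigned int)before);

		printf("bad=%ld good=%d\n", bad_delta, good_delta);
		return 0;
	}
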
  2. 25 February 2009, 3 commits
    • md: avoid races when stopping resync. · 73d5c38a
      Authored by NeilBrown
      There has been a race in raid10 and raid1 for a long time
      which has only recently started showing up due to a scheduler change.
      
      When a sync_read request finishes, as soon as reschedule_retry
      is called, another thread can mark the resync request as having
      completed, so md_do_sync can finish, ->stop can be called, and
      ->conf can be freed.  So using conf after reschedule_retry is not
      safe.
      
      Similarly, when finishing a sync_write, calling md_done_sync must be
      the last thing we do, as it allows a chain of events which will free
      conf and other data structures.
      
      The first of these requires action in raid10.c.
      The second requires action in raid1.c and raid10.c.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
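
      The ordering rule reduces to this shape (a sketch with invented names;
      done() stands in for md_done_sync, which can let conf be freed):

	#include <stdlib.h>
	#include <stdio.h>

	struct conf { int pending; };

	static void done(struct conf *conf) { free(conf); }	/* may free conf */

	static void finish_sync_write(struct conf *conf)
	{
		conf->pending--;		/* every use of conf comes first ... */
		printf("pending=%d\n", conf->pending);
		done(conf);			/* ... the freeing call strictly last */
	}

	int main(void)
	{
		struct conf *c = malloc(sizeof(*c));

		if (!c)
			return 1;
		c->pending = 1;
		finish_sync_write(c);
		return 0;
	}
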
    • md/raid10: Don't call bitmap_cond_end_sync when we are doing recovery. · 78200d45
      Authored by NeilBrown
      For raid1/4/5/6, resync (fixing inconsistencies between devices) is
      very similar to recovery (rebuilding a failed device onto a spare).
      They both walk through the device addresses in order.
      
      For raid10 it can be quite different.  Resync follows the 'array'
      address, and makes sure all copies are the same.  Recovery walks
      through 'device' addresses and recreates each missing block.
      
      The 'bitmap_cond_end_sync' function allows the write-intent bitmap
      (when present) to be updated to reflect a partially completed resync.
      It makes assumptions which mean that it does not work correctly for
      raid10 recovery at all.
      
      In particular, it can cause bitmap-directed recovery of a raid10 to
      not recover some of the blocks that need to be recovered.
      
      So move the call to bitmap_cond_end_sync into the resync path, rather
      than being in the common "resync or recovery" path.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid10: Don't skip more than 1 bitmap-chunk at a time during recovery. · 09b4068a
      Authored by NeilBrown
      When doing recovery on a raid10 with a write-intent bitmap, we only
      need to recover chunks that are flagged in the bitmap.
      
      However if we choose to skip a chunk because it isn't flagged, the
      code currently skips the whole raid10 chunk, and thus might not
      recover some blocks that need recovering.
      
      This patch fixes it.
      
      In case that is confusing, it might help to understand that there
      is a 'raid10 chunk size' which guides how data is distributed across
      the devices, and a 'bitmap chunk size' which says how much data
      corresponds to a single bit in the bitmap.
      
      This bug only affects cases where the bitmap chunk size is smaller
      than the raid10 chunk size.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
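
      A sketch of the fixed skip granularity (sizes invented; the point is
      that clean regions advance by one bitmap chunk, never a whole raid10
      chunk):

	#include <stdio.h>

	typedef unsigned long long sector_t;

	int main(void)
	{
		const sector_t bitmap_chunk = 128;	/* sectors; raid10 chunk is 1024 */
		int dirty[16] = { [3] = 1, [9] = 1 };	/* bitmap chunks needing recovery */

		/* Advancing one bitmap chunk at a time visits chunks 3 and 9;
		 * the buggy code jumped a whole raid10 chunk (8 bitmap chunks
		 * here) past a clean chunk and could fly over them. */
		for (sector_t pos = 0; pos < 16 * bitmap_chunk; pos += bitmap_chunk)
			if (dirty[pos / bitmap_chunk])
				printf("recover sectors %llu..%llu\n",
				       pos, pos + bitmap_chunk - 1);
		return 0;
	}
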
  3. 18 February 2009, 1 commit
  4. 06 February 2009, 3 commits
    • md: Ensure an md array never has too many devices. · de01dfad
      Authored by NeilBrown
      Each different metadata format supported by md supports a
      different maximum number of devices.
      We really should be enforcing this maximum in the kernel, but
      we aren't quite doing that properly.
      
      We currently only enforce it at the 'hot_add' point, which is an
      older interface that is not used by current userspace.
      
      We need to also enforce it at 'add_new_disk' time for active arrays
      and at 'do_md_run' time when starting a new array.
      
      So move the test from 'hot_add' into 'bind_rdev_to_array', which is
      called from both 'hot_add' and 'add_new_disk', and add a new
      test in 'analyse_sbs', which is called from 'do_md_run'.
      
      This bug (or missing feature) has been around "forever" and so
      the patch is suitable for any -stable that is currently maintained.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Fix a bug in linear.c causing which_dev() to return the wrong device. · 852c8bf4
      Authored by Andre Noll
      ab5bd5cb introduced the following
      bug in linear software raid for large arrays on 32 bit machines:
      
      which_dev() computes the device holding a given sector by shifting
      down the sector number to a 32 bit range, dividing by the array
      spacing and looking up the resulting index in the hash table of
      the array.
      
      Because the computed index might be slightly too small, a loop at
      the end of which_dev() increases the index until the given sector
      actually falls into the range of the device associated with that index.
      
      The changes of the above-mentioned commit caused this loop to check
      whether the _index_ rather than the sector number is small enough,
      effectively bypassing the loop and thus possibly returning the wrong
      device.
      
      As reported by Simon Kirby, this leads to errors such as
      
      	linear_make_request: Sector 2340486136 out of bounds on dev sdi: 156301312 sectors, offset 2109870464
      
      Fix this bug by introducing a local variable for the index so that
      the variable containing the passed sector is left unchanged.
      
      Cc: stable@kernel.org
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
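
      The fix amounts to keeping the loop's comparison on the sector while
      advancing a separate index (a reconstruction with an invented device
      table, not the actual linear.c code):

	#include <stdio.h>

	typedef unsigned long long sector_t;

	struct dev_info { sector_t start, sectors; };

	/* The hash gives an index that may be slightly too small; walk
	 * forward comparing the *sector* against each device's range
	 * (comparing the index instead was the bug). */
	static struct dev_info *which_dev(struct dev_info *tab, int n,
					  sector_t sector)
	{
		int idx = 0;	/* stand-in for the hash-table lookup */

		while (idx < n - 1 &&
		       sector >= tab[idx].start + tab[idx].sectors)
			idx++;
		return &tab[idx];
	}

	int main(void)
	{
		struct dev_info tab[] = { {0, 100}, {100, 100}, {200, 100} };

		printf("sector 150 -> dev starting at %llu\n",
		       which_dev(tab, 3, 150)->start);
		return 0;
	}
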
    • md: Allow read error in a single drive raid1 to be passed up. · 4706b349
      Authored by NeilBrown
      If a raid1 only has a single working device and gets a read error,
      we choose to simply return that error up to the filesystem (or whatever)
      rather than failing the whole array.
      
      However the code doesn't quite do that.  We attempt a read balance,
      which allocates the same drive, so we retry the read indefinitely.
      
      Instead: if read_balance in the error case chooses the same drive that
      just failed, treat it as a failure and don't retry.
      Signed-off-by: NeilBrown <neilb@suse.de>
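
      The resulting logic is tiny (a sketch with hypothetical stand-ins for
      read_balance and the error path):

	#include <stdio.h>

	/* With one working drive, balancing can only pick that drive again. */
	static int read_balance(int working_disk) { return working_disk; }

	static int handle_read_error(int failed_disk, int working_disk)
	{
		int disk = read_balance(working_disk);

		if (disk == failed_disk)
			return -1;	/* pass the error up, don't retry */
		return disk;		/* otherwise retry on another copy */
	}

	int main(void)
	{
		printf("retry target: %d\n", handle_read_error(0, 0)); /* -1 */
		return 0;
	}
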
  5. 09 January 2009, 16 commits
    • md: don't retry recovery of raid1 that fails due to error on source drive. · 4044ba58
      Authored by NeilBrown
      If a raid1 has only one working drive and it has a sector which
      gives an error on read, then an attempt to recover onto a spare will
      fail, but as the single remaining drive is not removed from the
      array, the recovery will be immediately re-attempted, resulting
      in an infinite recovery loop.
      
      So detect this situation and don't retry recovery once an error
      on the lone remaining drive is detected.
      
      Allow recovery to be retried once every time a spare is added
      in case the problem wasn't actually a media error.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Allow md devices to be created by name. · efeb53c0
      Authored by NeilBrown
      Using sequential numbers to identify md devices is somewhat artificial.
      Using names can be a lot more user-friendly.
      
      Also, creating md devices by opening the device special file is a bit
      awkward.
      
      So this patch provides a new option for creating and naming devices.
      
      Writing a name such as "md_home" to
          /sys/module/md_mod/parameters/new_array
      will cause an array with that name to be created.  It will appear in
      /sys/block/, /proc/partitions and /proc/mdstat as 'md_home'.
      It will have an arbitrary minor number allocated.
      
      md devices that are created by an open are destroyed on the last
      close when the device is inactive.
      Named md devices will not be destroyed until the array
      is explicitly stopped, either with the STOP_ARRAY ioctl or by
      writing 'clear' to /sys/block/md_XXXX/md/array_state.
      
      The name of the array must start with 'md_' to avoid conflict with
      other devices.
      Signed-off-by: NeilBrown <neilb@suse.de>
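
      Usage then looks like this from a C program (a sketch; it needs root
      and the md_mod module loaded):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/module/md_mod/parameters/new_array", "w");

		if (!f) { perror("new_array"); return 1; }
		fputs("md_home", f);	/* the name must start with "md_" */
		fclose(f);		/* 'md_home' then shows up in /proc/mdstat */
		return 0;
	}
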
    • md: make devices disappear when they are no longer needed. · d3374825
      Authored by NeilBrown
      Currently md devices, once created, never disappear until the module
      is unloaded.  This is essentially because the gendisk holds a
      reference to the mddev, and the mddev holds a reference to the
      gendisk: a circular reference.
      
      If we drop the reference from mddev to gendisk, then we need to ensure
      that the mddev is destroyed when the gendisk is destroyed.  However it
      is not possible to hook into the gendisk destruction process to enable
      this.
      
      So we drop the reference from the gendisk to the mddev and destroy the
      gendisk when the mddev gets destroyed.  However this has a
      complication.
      Between the call
         __blkdev_get->get_gendisk->kobj_lookup->md_probe
      and the call
         __blkdev_get->md_open
      
      there is no obvious way to hold a reference on the mddev any more, so
      unless something is done, it will disappear and gendisk will be
      destroyed prematurely.
      
      Also, once we decide to destroy the mddev, there will be an unlockable
      moment before the gendisk is unlinked (blk_unregister_region) during
      which a new reference to the gendisk can be created.  We need to
      ensure that this reference can not be used.  i.e. the ->open must
      fail.
      
      So:
       1/  in md_probe we set a flag in the mddev (hold_active) which
           indicates that the array should be treated as active, even
           though there are no references, and no appearance of activity.
           This is cleared by md_release when the device is closed if it
           is no longer needed.
           This ensures that the gendisk will survive between md_probe and
           md_open.
      
       2/  In md_open we check if the mddev we expect to open matches
           the gendisk that we did open.
           If there is a mismatch we return -ERESTARTSYS and modify
           __blkdev_get to retry from the top in that case.
      In the -ERESTARTSYS case we make sure to wait until
           the old gendisk (that we succeeded in opening) is really gone so
           we loop at most once.
      
      Some udev configurations will always open an md device when it first
      appears.  If we allow an md device that was just created by an open
      to disappear on an immediate close, then this can race with such udev
      configurations and result in an infinite loop of the device being
      opened and closed, then re-opened due to the 'ADD' event from the
      first open, then closed, and so on.
      So we make sure an md device, once created by an open, remains active
      at least until some md 'ioctl' has been made on it.  This means that
      all normal usage of md devices will allow them to disappear promptly
      when not needed, but the worst that an incorrect usage will do is
      cause an inactive md device to be left in existence (it can easily be
      removed).
      
      As an array can be stopped by writing to a sysfs attribute
        echo clear > /sys/block/mdXXX/md/array_state
      we need to use scheduled work for deleting the gendisk and other
      kobjects.  This allows us to wait for any pending gendisk deletion to
      complete by simply calling flush_scheduled_work().
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: centralise all freeing of an 'mddev' in 'md_free' · a21d1504
      Authored by NeilBrown
      md_free is the .release handler for the md kobj_type.
      So it makes sense to release all the objects referenced by
      the mddev in there, rather than just prior to calling kobject_put
      for what we think is the last time.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: move allocation of ->queue from mddev_find to md_probe · 8b765398
      Authored by NeilBrown
      It is more balanced to just do simple initialisation in mddev_find,
      which allocates and links a new md device, and leave all the
      more sophisticated allocation to md_probe (which calls mddev_find).
      md_probe already allocated the gendisk.  It should allocate the
      queue too.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: need another print_sb for mdp_superblock_1 · cd2ac932
      Authored by Cheng Renquan
      md_print_devices is called in two code paths: MD_BUG(...), and md_ioctl
      with PRINT_RAID_DEBUG.  It dumps out information about all in-use md
      devices.
      
      However, it wrongly processes the two types of superblock as one:
      
      The header file <linux/raid/md_p.h> defines two types of superblock:
      struct mdp_superblock_s (typedef'ed as mdp_super_t) for md with
      metadata 0.90, and struct mdp_superblock_1 for md with metadata
      1.0 and later.
      
      These two types of superblock are very different, but the
      md_print_devices code processed both as mdp_super_t, which would
      lead to wrong information dumps like:
      
      	[ 6742.345877]
      	[ 6742.345887] md:	**********************************
      	[ 6742.345890] md:	* <COMPLETE RAID STATE PRINTOUT> *
      	[ 6742.345892] md:	**********************************
      	[ 6742.345896] md1: <ram7><ram6><ram5><ram4>
      	[ 6742.345907] md: rdev ram7, SZ:00065472 F:0 S:1 DN:3
      	[ 6742.345909] md: rdev superblock:
      	[ 6742.345914] md:  SB: (V:0.90.0) ID:<42ef13c7.598c059a.5f9f1645.801e9ee6> CT:4919856d
      	[ 6742.345918] md:     L5 S00065472 ND:4 RD:4 md1 LO:2 CS:65536
      	[ 6742.345922] md:     UT:4919856d ST:1 AD:4 WD:4 FD:0 SD:0 CSUM:b7992907 E:00000001
      	[ 6742.345924]      D  0:  DISK<N:0,(1,8),R:0,S:6>
      	[ 6742.345930]      D  1:  DISK<N:1,(1,10),R:1,S:6>
      	[ 6742.345933]      D  2:  DISK<N:2,(1,12),R:2,S:6>
      	[ 6742.345937]      D  3:  DISK<N:3,(1,14),R:3,S:6>
      	[ 6742.345942] md:     THIS:  DISK<N:3,(1,14),R:3,S:6>
      	...
      	[ 6742.346058] md0: <ram3><ram2><ram1><ram0>
      	[ 6742.346067] md: rdev ram3, SZ:00065472 F:0 S:1 DN:3
      	[ 6742.346070] md: rdev superblock:
      	[ 6742.346073] md:  SB: (V:1.0.0) ID:<369aad81.00000000.00000000.00000000> CT:9a322a9c
      	[ 6742.346077] md:     L-1507699579 S976570180 ND:48 RD:0 md0 LO:65536 CS:196610
      	[ 6742.346081] md:     UT:00000018 ST:0 AD:131048 WD:0 FD:8 SD:0 CSUM:00000000 E:00000000
      	[ 6742.346084]      D  0:  DISK<N:-1,(-1,-1),R:-1,S:-1>
      	[ 6742.346089]      D  1:  DISK<N:-1,(-1,-1),R:-1,S:-1>
      	[ 6742.346092]      D  2:  DISK<N:-1,(-1,-1),R:-1,S:-1>
      	[ 6742.346096]      D  3:  DISK<N:-1,(-1,-1),R:-1,S:-1>
      	[ 6742.346102] md:     THIS:  DISK<N:0,(0,0),R:0,S:0>
      	...
      	[ 6742.346219] md:	**********************************
      	[ 6742.346221]
      
      Here md1 is metadata 0.90.0, and md0 is metadata 1.2.
      
      With additional code to distinguish the two superblock types, this
      patch generates dump information like:
      
      	[ 7906.755790]
      	[ 7906.755799] md:	**********************************
      	[ 7906.755802] md:	* <COMPLETE RAID STATE PRINTOUT> *
      	[ 7906.755804] md:	**********************************
      	[ 7906.755808] md1: <ram7><ram6><ram5><ram4>
      	[ 7906.755819] md: rdev ram7, SZ:00065472 F:0 S:1 DN:3
      	[ 7906.755821] md: rdev superblock (MJ:0):
      	[ 7906.755826] md:  SB: (V:0.90.0) ID:<3fca7a0d.a612bfed.5f9f1645.801e9ee6> CT:491989f3
      	[ 7906.755830] md:     L5 S00065472 ND:4 RD:4 md1 LO:2 CS:65536
      	[ 7906.755834] md:     UT:491989f3 ST:1 AD:4 WD:4 FD:0 SD:0 CSUM:00fb52ad E:00000001
      	[ 7906.755836]      D  0:  DISK<N:0,(1,8),R:0,S:6>
      	[ 7906.755842]      D  1:  DISK<N:1,(1,10),R:1,S:6>
      	[ 7906.755845]      D  2:  DISK<N:2,(1,12),R:2,S:6>
      	[ 7906.755849]      D  3:  DISK<N:3,(1,14),R:3,S:6>
      	[ 7906.755855] md:     THIS:  DISK<N:3,(1,14),R:3,S:6>
      	...
      	[ 7906.755972] md0: <ram3><ram2><ram1><ram0>
      	[ 7906.755981] md: rdev ram3, SZ:00065472 F:0 S:1 DN:3
      	[ 7906.755984] md: rdev superblock (MJ:1):
      	[ 7906.755989] md:  SB: (V:1) (F:0) Array-ID:<5fbcf158:55aa:5fbe:9a79:1e939880dcbd>
      	[ 7906.755990] md:    Name: "DG5:0" CT:1226410480
      	[ 7906.755998] md:       L5 SZ130944 RD:4 LO:2 CS:128 DO:24 DS:131048 SO:8 RO:0
      	[ 7906.755999] md:     Dev:00000003 UUID: 9194d744:87f7:a448:85f2:7497b84ce30a
      	[ 7906.756001] md:       (F:0) UT:1226410480 Events:0 ResyncOffset:-1 CSUM:0dbcd829
      	[ 7906.756003] md:         (MaxDev:384)
      	...
      	[ 7906.756113] md:	**********************************
      	[ 7906.756116]
      
      The md0 (metadata 1.2) information is now dumped exactly according to
      struct mdp_superblock_1.
      Signed-off-by: Cheng Renquan <crquan@gmail.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: use list_for_each_entry macro directly · 159ec1fc
      Authored by Cheng Renquan
      The rdev_for_each macro defined in <linux/raid/md_k.h> is identical to
      list_for_each_entry_safe from <linux/list.h>, so it should be defined
      in terms of list_for_each_entry_safe instead of reinventing the wheel.
      
      But some of those iterations don't really need the safe version; a
      direct list_for_each_entry is enough, and it saves a temp variable
      (tmp) in every function that used rdev_for_each.
      
      In this patch, most rdev_for_each loops are replaced by
      list_for_each_entry, saving many tmp variables; the safe version is
      used only where list_del is called to delete an entry.
      Signed-off-by: Cheng Renquan <crquan@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
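
      After the change the macro reduces to the generic safe iterator
      (quoted from md_k.h of this era; treat as illustrative):

	/* 'tmp' exists only so the body may list_del the current rdev */
	#define rdev_for_each(rdev, tmp, mddev) \
		list_for_each_entry_safe(rdev, tmp, &((mddev)->disks), same_set)

      Loops that never delete can instead use
      list_for_each_entry(rdev, &mddev->disks, same_set) directly and drop
      the tmp variable entirely.
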
    • md: raid0: make hash_spacing and preshift sector-based. · ccacc7d2
      Authored by Andre Noll
      This patch renames the hash_spacing and preshift members of struct
      raid0_private_data to spacing and sector_shift respectively and
      changes the semantics as follows:
      
      We always have spacing = 2 * hash_spacing.  In case
      sizeof(sector_t) > sizeof(u32) we also have sector_shift = preshift + 1,
      while sector_shift = preshift = 0 otherwise.
      
      Note that the values of nb_zone and zone are unaffected by these changes
      because in the sector_div() preceding the assignment of these two
      variables both arguments double.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Represent the size of strip zones in sectors. · 83838ed8
      Authored by Andre Noll
      This completes the block -> sector conversion of struct strip_zone.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0 create_strip_zones(): Add KERN_INFO/KERN_ERR to printk's. · 0825b87a
      Authored by Andre Noll
      This patch consists only of these trivial changes.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0 create_strip_zones(): Make two local variables sector-based. · 6b8796cc
      Authored by Andre Noll
      current_offset and curr_zone_offset stored the corresponding offsets
      as 1K quantities.  Rename them to current_start and curr_zone_start
      to match the naming of struct strip_zone and store the offsets as
      sector counts.
      
      Also, add KERN_INFO to the printk() affected by this change to make
      checkpatch happy.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Represent zone->zone_offset in sectors. · 6199d3db
      Authored by Andre Noll
      For the same reason as in the previous patch, rename it from zone_offset
      to zone_start.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Represent device offset in sectors. · 019c4e2f
      Authored by Andre Noll
      Rename zone->dev_offset to zone->dev_start to make sure all users
      have been converted.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0_make_request(): Replace local variable block by sector. · e0f06868
      Authored by Andre Noll
      This change already simplifies the code a bit.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0_make_request(): Remove local variable chunk_size. · a4712005
      Authored by Andre Noll
      We might as well use chunk_sects instead.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0_make_request(): Replace chunksize_bits by chunksect_bits. · 1b7fdf8f
      Authored by Andre Noll
      As ffz(~(2 * x)) = ffz(~x) + 1, we have
      
      	chunksect_bits = chunksize_bits + 1.
      
      Fix up all users accordingly.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
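
      Since ffz(~x) is just the index of the lowest set bit of x, the
      identity is easy to check (using GCC's __builtin_ctz as a userspace
      stand-in for the kernel's ffz(~x)):

	#include <stdio.h>

	int main(void)
	{
		unsigned int chunk_kb    = 64;			/* a 64K chunk, in 1K blocks */
		unsigned int chunk_sects = chunk_kb * 2;	/* the same chunk, in sectors */

		/* Doubling shifts the lone set bit up by one, hence
		 * chunksect_bits = chunksize_bits + 1. */
		printf("chunksize_bits=%d chunksect_bits=%d\n",
		       __builtin_ctz(chunk_kb), __builtin_ctz(chunk_sects));
		return 0;
	}
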