1. 11 Jul 2013, 1 commit
    • dm: add switch target · 9d0eb0ab
      Authored by Jim Ramsay
      dm-switch is a new target that maps I/O to underlying block devices
      efficiently when there is a large number of fixed-size address regions
      but no simple pattern that would allow a compact mapping representation
      such as the one dm-stripe uses.
      
      Though we have developed this target for a specific storage device, Dell
      EqualLogic, we have made an effort to keep it as general purpose as
      possible in the hope that others may benefit.
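      
      As a rough illustration (the constructor layout and the
      'set_region_mappings' message below follow our reading of the target's
      documentation; the device paths, sizes and region assignments are
      made-up assumptions), a switch device spread over two paths with
      128-sector regions might be created and then partially re-pointed like
      this:
      
      dmsetup create switch0 --table \
              "0 41943040 switch 2 128 0 /dev/sdb 0 /dev/sdc 0"
      # route regions 0 and 7 to path 1 via a target message
      dmsetup message switch0 0 set_region_mappings 0:1 7:1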
      
      Originally developed by Jim Ramsay. Simplified by Mikulas Patocka.
      Signed-off-by: Jim Ramsay <jim_ramsay@dell.com>
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      9d0eb0ab
  2. 24 Mar 2013, 1 commit
  3. 02 Mar 2013, 4 commits
  4. 28 Feb 2013, 1 commit
    • md: remove CONFIG_MULTICORE_RAID456 · 51acbcec
      Authored by NeilBrown
      This doesn't seem to actually help and we have an alternate
      multi-threading approach waiting in the wings, so just get
      rid of this config option and associated code.
      
      As a bonus, we remove one use of CONFIG_EXPERIMENTAL
      
      Cc: Dan Williams <djbw@fb.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
      51acbcec
  5. 13 Oct 2012, 1 commit
  6. 02 Aug 2012, 1 commit
  7. 27 Jul 2012, 1 commit
  8. 29 Mar 2012, 3 commits
  9. 01 Nov 2011, 2 commits
    • dm: add thin provisioning target · 991d9fa0
      Authored by Joe Thornber
      Initial EXPERIMENTAL implementation of device-mapper thin provisioning
      with snapshot support.  The 'thin' target is used to create instances of
      the virtual devices that are hosted in the 'thin-pool' target.  The
      thin-pool target provides data sharing among devices.  This sharing is
      made possible using the persistent-data library in the previous patch.
      
      The main highlight of this implementation, compared to the previous
      implementation of snapshots, is that it allows many virtual devices to
      be stored on the same data volume, simplifying administration and
      allowing sharing of data between volumes (thus reducing disk usage).
      
      Another big feature is support for arbitrary depth of recursive
      snapshots (snapshots of snapshots of snapshots ...).  The previous
      implementation of snapshots did this by chaining together lookup tables,
      and so performance was O(depth).  This new implementation uses a single
      data structure so we don't get this degradation with depth.
      
      For further information and examples of how to use this, please read
      Documentation/device-mapper/thin-provisioning.txt
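      
      To give a feel for how the two targets fit together, here is a hedged
      sketch (device paths, sizes and block counts are illustrative
      assumptions; the document above is authoritative): a pool is built from
      a metadata device and a data device, a thin volume is provisioned with a
      message to the pool, and the volume is then exposed via a 'thin' table:
      
      # pool: metadata dev, data dev, 128-sector data blocks, low-water mark
      dmsetup create pool --table \
              "0 20971520 thin-pool /dev/sdb1 /dev/sdb2 128 32768"
      # provision a thin volume with internal id 0, then map 1 GiB of it
      dmsetup message /dev/mapper/pool 0 "create_thin 0"
      dmsetup create thin0 --table "0 2097152 thin /dev/mapper/pool 0"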
      Signed-off-by: Joe Thornber <thornber@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      991d9fa0
    • dm: add bufio · 95d402f0
      Authored by Mikulas Patocka
      The dm-bufio interface allows you to do cached I/O on devices,
      holding recently-read blocks in memory and performing delayed writes.
      
      We don't use the buffer cache or page cache already present in the kernel, because:
      * we need to handle block sizes larger than a page
      * we can't allocate memory when performing reads, or we'd risk deadlocks
      
      Currently, when a cache is required, we limit its size to a fraction of
      available memory.  Usage can be viewed and changed in
      /sys/module/dm_bufio/parameters/.
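      
      For example (the parameter name and value here are assumptions about
      the module's tunables rather than something stated above), the cache
      ceiling could be inspected and capped at 64 MiB from a shell:
      
      cat /sys/module/dm_bufio/parameters/max_cache_size_bytes
      echo 67108864 > /sys/module/dm_bufio/parameters/max_cache_size_bytes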
      
      The first user is thin provisioning, but more dm users are planned.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      95d402f0
  10. 02 Aug 2011, 1 commit
    • dm raid: support metadata devices · b12d437b
      Authored by Jonathan Brassow
      Add to dm-raid the ability to parse and use metadata devices.  The
      metadata devices are not strictly required, but without them many
      features of RAID are unavailable.  They are used to store a superblock
      and bitmap.
      
      The role, or position in the array, of each device must be recorded in
      its superblock.  This is to help with fault handling, array reshaping,
      and sanity checks.  RAID 4/5/6 devices must be loaded in a specific order:
      in this way, the 'array_position' field helps validate the correctness
      of the mapping when it is loaded.  It can be used during reshaping to
      identify which devices are added/removed.  Fault handling is impossible
      without this field.  For example, when a device fails it is recorded in
      the superblock.  If this is a RAID1 device and the offending device is
      removed from the array, there must be a way during subsequent array
      assembly to determine that the failed device was the one removed.  This
      is done by correlating the 'array_position' field and the bit-field
      variable 'failed_devices'.
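      
      For illustration, here is a hedged sketch of the resulting table (all
      device numbers are invented): the raid4 example in the original
      'dm: raid456 basic support' commit used '-' placeholders in the
      metadata slots, whereas each data device can now be paired with a
      metadata device instead:
      
      0 1960893648 raid \
              raid4 1 2048 \
              5 7:0 8:17 7:1 8:33 7:2 8:49 7:3 8:65 7:4 8:81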
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      b12d437b
  11. 24 Mar 2011, 1 commit
  12. 14 Jan 2011, 1 commit
    • dm: raid456 basic support · 9d09e663
      Authored by NeilBrown
      This patch is the skeleton for the DM target that will be
      the bridge from DM to MD (initially RAID456 and later RAID1).  It
      provides a way to use device-mapper interfaces to the MD RAID456
      drivers.
      
      As with all device-mapper targets, the nominal public interfaces are the
      constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO
      and STATUSTYPE_TABLE).  The CTR table looks like the following:
      
      1: <s> <l> raid \
      2:	<raid_type> <#raid_params> <raid_params> \
      3:	<#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>
      
      Line 1 contains the standard first three arguments to any device-mapper
      target - the start, length, and target type fields.  The target type in
      this case is "raid".
      
      Line 2 contains the arguments that define the particular raid
      type/personality/level, the required arguments for that raid type, and
      any optional arguments.  Possible raid types include: raid4, raid5_la,
      raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc.  (again, raid1 is
      planned for the future.)  The list of required and optional parameters
      is the same for all the current raid types.  The required parameters are
      positional, while the optional parameters are given as key/value pairs.
      The possible parameters are as follows:
       <chunk_size>		Chunk size in sectors.
       [[no]sync]		Force/Prevent RAID initialization
       [rebuild <idx>]	Rebuild the drive indicated by the index
       [daemon_sleep <ms>]	Time between bitmap daemon work to clear bits
       [min_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_write_behind <value>]		See '-write-behind=' (man mdadm)
       [stripe_cache <sectors>]		Stripe cache size for higher RAIDs
      
      Line 3 contains the list of devices that compose the array in
      metadata/data device pairs.  If no metadata device is used for a given
      position, a '-' is given in the metadata slot.  If a drive has failed or
      is missing at creation time, a '-' can be given for both the metadata
      and data drives for that position.
      
      Examples:
      # RAID4 - 4 data drives, 1 parity
      # No metadata devices specified to hold superblock/bitmap info
      # Chunk size of 1MiB
      # (Lines separated for easy reading)
      0 1960893648 raid \
      	raid4 1 2048 \
      	5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
      
      # RAID4 - 4 data drives, 1 parity (no metadata devices)
      # Chunk size of 1MiB, force RAID initialization,
      #	min recovery rate at 20 kiB/sec/disk
      0 1960893648 raid \
              raid4 4 2048 min_recovery_rate 20 sync \
              5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
      
      Performing a 'dmsetup table' should display the CTR table used to
      construct the mapping (with possible reordering of optional
      parameters).
      
      Performing a 'dmsetup status' will yield information on the state and
      health of the array.  The output is as follows:
      1: <s> <l> raid \
      2:	<raid_type> <#devices> <1 health char for each dev> <resync_ratio>
      
      Line 1 is standard DM output.  Line 2 is best shown by example:
      	0 1960893648 raid raid4 5 AAAAA 2/490221568
      Here we can see that the RAID type is raid4, there are 5 devices - all
      of which are 'A'live - and the array's recovery is 2/490221568 complete.
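      
      For completeness, a hedged example of loading the first table above and
      then querying it (the name 'my_raid4' is arbitrary and the output will
      vary with the actual devices):
      
      dmsetup create my_raid4 --table \
              "0 1960893648 raid raid4 1 2048 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81"
      dmsetup table my_raid4
      dmsetup status my_raid4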
      
      Cc: linux-raid@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      9d09e663
  13. 18 May 2010, 1 commit
  14. 14 Dec 2009, 1 commit
  15. 30 Oct 2009, 1 commit
  16. 29 Oct 2009, 1 commit
  17. 30 Aug 2009, 3 commits
    • md/raid456: distribute raid processing over multiple cores · 07a3b417
      Authored by Dan Williams
      Now that the resources to handle stripe_head operations are allocated
      percpu it is possible for raid5d to distribute stripe handling over
      multiple cores.  This conversion also adds a call to cond_resched() in
      the non-multicore case to prevent one core from getting monopolized for
      raid operations.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      07a3b417
    • md/raid6: asynchronous raid6 operations · ac6b53b6
      Authored by Dan Williams
      [ Based on an original patch by Yuri Tikhonov ]
      
      The raid_run_ops routine uses the asynchronous offload API and the
      stripe_operations member of a stripe_head to carry out xor+pq+copy
      operations asynchronously, outside the lock.
      
      The operations performed by RAID-6 are the same as in the RAID-5 case,
      except that STRIPE_OP_PREXOR operations are not supported.  All the
      others are supported:
      STRIPE_OP_BIOFILL
       - copy data into request buffers to satisfy a read request
      STRIPE_OP_COMPUTE_BLK
       - generate missing blocks (1 or 2) in the cache from the other blocks
      STRIPE_OP_BIODRAIN
       - copy data out of request buffers to satisfy a write request
      STRIPE_OP_RECONSTRUCT
       - recalculate parity for new data that has entered the cache
      STRIPE_OP_CHECK
       - verify that the parity is correct
      
      The flow is the same as in the RAID-5 case, and reuses some routines, namely:
      1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
      2/ ops_complete_compute (updated to set up to 2 targets uptodate)
      3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)
      
      [neilb@suse.de: fixes to get it to pass mdadm regression suite]
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      ac6b53b6
    • async_tx: raid6 recovery self test · cb3c8299
      Authored by Dan Williams
      Port drivers/md/raid6test/test.c to use the async raid6 recovery
      routines.  This is meant as a unit test for raid6 acceleration drivers.
      In addition to the 16-drive test case, this implements tests for the
      4-disk and 5-disk special cases (DMA devices cannot generically handle
      fewer than 2 sources), and adds a test for the D+Q case.
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      cb3c8299
  18. 22 Jun 2009, 3 commits
  19. 31 Mar 2009, 2 commits
    • md: remove CONFIG_MD_RAID_RESHAPE config option. · 2cffc4a0
      Authored by NeilBrown
      This was only needed when the code was experimental.  Most of it
      is well tested now, so the option is no longer useful.
      Signed-off-by: NeilBrown <neilb@suse.de>
      2cffc4a0
    • md/raid6: move raid6 data processing to raid6_pq.ko · f701d589
      Authored by Dan Williams
      Move the raid6 data processing routines into a standalone module
      (raid6_pq) to prepare them to be called from async_tx wrappers and other
      non-md drivers/modules.  This precludes a circular dependency of raid456
      needing the async modules for data processing while those modules in
      turn depend on raid456 for the base level synchronous raid6 routines.
      
      To support this move:
      1/ The exportable definitions in raid6.h move to include/linux/raid/pq.h
      2/ The raid6_call, recovery calls, and table symbols are exported
      3/ Extra #ifdef __KERNEL__ statements to enable the userspace raid6test to
         compile
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      f701d589
  20. 12 Oct 2008, 2 commits
  21. 15 Jul 2008, 1 commit
  22. 05 Jun 2008, 2 commits
  23. 08 Feb 2008, 1 commit
  24. 21 Dec 2007, 1 commit
  25. 20 Oct 2007, 2 commits
  26. 25 Aug 2007, 1 commit