1. 02 Mar 2013, 3 commits
  2. 13 Oct 2012, 1 commit
  3. 02 Aug 2012, 1 commit
  4. 27 Jul 2012, 1 commit
  5. 29 Mar 2012, 3 commits
  6. 01 Nov 2011, 2 commits
    • dm: add thin provisioning target · 991d9fa0
      Committed by Joe Thornber
      Initial EXPERIMENTAL implementation of device-mapper thin provisioning
      with snapshot support.  The 'thin' target is used to create instances of
      the virtual devices that are hosted in the 'thin-pool' target.  The
      thin-pool target provides data sharing among devices.  This sharing is
      made possible using the persistent-data library in the previous patch.
      
      The main highlight of this implementation, compared to the previous
      implementation of snapshots, is that it allows many virtual devices to
      be stored on the same data volume, simplifying administration and
      allowing sharing of data between volumes (thus reducing disk usage).
      
      Another big feature is support for arbitrary depth of recursive
      snapshots (snapshots of snapshots of snapshots ...).  The previous
      implementation of snapshots did this by chaining together lookup tables,
      and so performance was O(depth).  This new implementation uses a single
      data structure so we don't get this degradation with depth.
      
      For further information and examples of how to use this, please read
      Documentation/device-mapper/thin-provisioning.txt
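      
      As a hedged illustration of that usage (device names and sector counts
      below are hypothetical; the authoritative walk-through is the
      documentation file above), a pool and one thin volume can be set up
      roughly like this:
      
      # <start> <len> thin-pool <metadata dev> <data dev> <block size> <low water mark>
      dmsetup create pool --table \
          "0 20971520 thin-pool /dev/sdb1 /dev/sdb2 128 32768"
      # allocate thin device id 0 inside the pool, then activate it
      dmsetup message /dev/mapper/pool 0 "create_thin 0"
      dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"
      # recursive snapshots: suspend the origin, then e.g.
      # dmsetup message /dev/mapper/pool 0 "create_snap 1 0"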
      Signed-off-by: Joe Thornber <thornber@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      991d9fa0
    • dm: add bufio · 95d402f0
      Committed by Mikulas Patocka
      The dm-bufio interface allows you to do cached I/O on devices,
      holding recently-read blocks in memory and performing delayed writes.
      
      We don't use the buffer cache or page cache already present in the kernel, because:
      * we need to handle block sizes larger than a page
      * we can't allocate memory to perform reads or we'd have deadlocks
      
      Currently, when a cache is required, we limit its size to a fraction of
      available memory.  Usage can be viewed and changed in
      /sys/module/dm_bufio/parameters/.
      
      The first user is thin provisioning, but more dm users are planned.
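      
      As a rough sketch of that interface (the parameter names are an
      assumption from this era's module and should be checked against the
      running kernel), the cache limit can be inspected and tuned like so:
      
      cat /sys/module/dm_bufio/parameters/current_allocated_bytes
      # cap the cache at 64 MiB
      echo 67108864 > /sys/module/dm_bufio/parameters/max_cache_size_bytes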
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      95d402f0
  7. 02 Aug 2011, 1 commit
    • dm raid: support metadata devices · b12d437b
      Committed by Jonathan Brassow
      Add the ability for dm-raid to parse and use metadata devices.  Although
      not strictly required, many RAID features are unavailable without
      metadata devices; they are used to store a superblock and bitmap.
      
      The role, or position in the array, of each device must be recorded in
      its superblock.  This is to help with fault handling, array reshaping,
      and sanity checks.  RAID 4/5/6 devices must be loaded in a specific order:
      in this way, the 'array_position' field helps validate the correctness
      of the mapping when it is loaded.  It can be used during reshaping to
      identify which devices are added/removed.  Fault handling is impossible
      without this field.  For example, when a device fails it is recorded in
      the superblock.  If this is a RAID1 device and the offending device is
      removed from the array, there must be a way during subsequent array
      assembly to determine that the failed device was the one removed.  This
      is done by correlating the 'array_position' field and the bit-field
      variable 'failed_devices'.
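      
      For illustration (device numbers are hypothetical), a raid5 table that
      supplies a dedicated metadata device for each data device pairs them
      as <meta_dev> <dev> in the device list:
      
      0 1960893648 raid \
              raid5_ls 1 64 \
              3 254:0 254:1 254:2 254:3 254:4 254:5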
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      b12d437b
  8. 24 Mar 2011, 1 commit
  9. 14 Jan 2011, 1 commit
    • dm: raid456 basic support · 9d09e663
      Committed by NeilBrown
      This patch is the skeleton for the DM target that will be
      the bridge from DM to MD (initially RAID456 and later RAID1).  It
      provides a way to use device-mapper interfaces to the MD RAID456
      drivers.
      
      As with all device-mapper targets, the nominal public interfaces are the
      constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO
      and STATUSTYPE_TABLE).  The CTR table looks like the following:
      
      1: <s> <l> raid \
      2:	<raid_type> <#raid_params> <raid_params> \
      3:	<#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>
      
      Line 1 contains the standard first three arguments to any device-mapper
      target - the start, length, and target type fields.  The target type in
      this case is "raid".
      
      Line 2 contains the arguments that define the particular raid
      type/personality/level, the required arguments for that raid type, and
      any optional arguments.  Possible raid types include: raid4, raid5_la,
      raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc.  (Again, raid1 is
      planned for the future.)  The list of required and optional parameters
      is the same for all the current raid types.  The required parameters are
      positional, while the optional parameters are given as key/value pairs.
      The possible parameters are as follows:
       <chunk_size>		Chunk size in sectors.
       [[no]sync]		Force/Prevent RAID initialization
       [rebuild <idx>]	Rebuild the drive indicated by the index
       [daemon_sleep <ms>]	Time between bitmap daemon work to clear bits
       [min_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
       [max_write_behind <value>]		See '-write-behind=' (man mdadm)
       [stripe_cache <sectors>]		Stripe cache size for higher RAIDs
      
      Line 3 contains the list of devices that compose the array in
      metadata/data device pairs.  If the metadata is stored separately, a '-'
      is given for the metadata device position.  If a drive has failed or is
      missing at creation time, a '-' can be given for both the metadata and
      data drives for a given position.
      
      Examples:
      # RAID4 - 4 data drives, 1 parity
      # No metadata devices specified to hold superblock/bitmap info
      # Chunk size of 1MiB
      # (Lines separated for easy reading)
      0 1960893648 raid \
      	raid4 1 2048 \
      	5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
      
      # RAID4 - 4 data drives, 1 parity (no metadata devices)
      # Chunk size of 1MiB, force RAID initialization,
      #	min recovery rate at 20 kiB/sec/disk
      0 1960893648 raid \
              raid4 4 2048 min_recovery_rate 20 sync \
              5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
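      
      Such a table is loaded through the usual device-mapper interface; for
      example (mapping name hypothetical), the first table above can be
      activated with:
      
      dmsetup create my_raid4 --table \
              "0 1960893648 raid raid4 1 2048 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81"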
      
      Performing a 'dmsetup table' should display the CTR table used to
      construct the mapping (with possible reordering of optional
      parameters).
      
      Performing a 'dmsetup status' will yield information on the state and
      health of the array.  The output is as follows:
      1: <s> <l> raid \
      2:	<raid_type> <#devices> <1 health char for each dev> <resync_ratio>
      
      Line 1 is standard DM output.  Line 2 is best shown by example:
      	0 1960893648 raid raid4 5 AAAAA 2/490221568
      Here we can see the RAID type is raid4, there are 5 devices - all of
      which are 'A'live, and recovery of the array is 2/490221568 complete.
      
      Cc: linux-raid@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      9d09e663
  10. 18 May 2010, 1 commit
  11. 14 Dec 2009, 1 commit
  12. 30 Oct 2009, 1 commit
  13. 29 Oct 2009, 1 commit
  14. 30 Aug 2009, 3 commits
    • md/raid456: distribute raid processing over multiple cores · 07a3b417
      Committed by Dan Williams
      Now that the resources to handle stripe_head operations are allocated
      percpu it is possible for raid5d to distribute stripe handling over
      multiple cores.  This conversion also adds a call to cond_resched() in
      the non-multicore case to prevent one core from getting monopolized for
      raid operations.
      
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      07a3b417
    • md/raid6: asynchronous raid6 operations · ac6b53b6
      Committed by Dan Williams
      [ Based on an original patch by Yuri Tikhonov ]
      
      The raid_run_ops routine uses the asynchronous offload api and
      the stripe_operations member of a stripe_head to carry out xor+pq+copy
      operations asynchronously, outside the lock.
      
      The operations performed by RAID-6 are the same as in the RAID-5 case,
      except that STRIPE_OP_PREXOR operations are not supported.  All the
      others are supported:
      STRIPE_OP_BIOFILL
       - copy data into request buffers to satisfy a read request
      STRIPE_OP_COMPUTE_BLK
       - generate missing blocks (1 or 2) in the cache from the other blocks
      STRIPE_OP_BIODRAIN
       - copy data out of request buffers to satisfy a write request
      STRIPE_OP_RECONSTRUCT
       - recalculate parity for new data that has entered the cache
      STRIPE_OP_CHECK
       - verify that the parity is correct
      
      The flow is the same as in the RAID-5 case, and reuses some routines, namely:
      1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
      2/ ops_complete_compute (updated to set up to 2 targets uptodate)
      3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)
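      
      As a hedged sketch of how a caller drives P/Q generation through this
      API (function signatures follow the 2009 async_tx rework; page
      allocation and error handling are omitted):
      
      #include <linux/async_tx.h>
      
      static void pq_done(void *ref)
      {
      	/* completion callback; runs from a tasklet on offload engines */
      }
      
      /* blocks[disks-2] receives P, blocks[disks-1] receives Q */
      static void gen_pq(struct page **blocks, int disks, size_t len,
      		   addr_conv_t *scribble)
      {
      	struct async_submit_ctl submit;
      
      	init_async_submit(&submit, ASYNC_TX_ACK, NULL, pq_done, NULL,
      			  scribble);
      	async_gen_syndrome(blocks, 0, disks, len, &submit);
      	async_tx_issue_pending_all();
      }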
      
      [neilb@suse.de: fixes to get it to pass mdadm regression suite]
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      ac6b53b6
    • async_tx: raid6 recovery self test · cb3c8299
      Committed by Dan Williams
      Port drivers/md/raid6test/test.c to use the async raid6 recovery
      routines.  This is meant as a unit test for raid6 acceleration drivers.
      In addition to the 16-drive test case, it implements tests for the
      4-disk and 5-disk special cases (DMA devices cannot generically handle
      fewer than 2 sources), and adds a test for the D+Q case.
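      
      A hedged sketch of the recovery entry points the test exercises
      (prototypes as found in crypto/async_tx of that era; setup and result
      verification omitted):
      
      #include <linux/async_tx.h>
      
      /* recover two lost data blocks from the remaining data plus P and Q */
      struct dma_async_tx_descriptor *
      async_raid6_2data_recov(int disks, size_t bytes, int faila, int failb,
      			struct page **ptrs, struct async_submit_ctl *submit);
      
      /* recover one lost data block together with P (the D+Q case) */
      struct dma_async_tx_descriptor *
      async_raid6_datap_recov(int disks, size_t bytes, int faila,
      			struct page **ptrs, struct async_submit_ctl *submit);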
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      cb3c8299
  15. 22 Jun 2009, 3 commits
  16. 31 Mar 2009, 2 commits
    • md: remove CONFIG_MD_RAID_RESHAPE config option. · 2cffc4a0
      Committed by NeilBrown
      This was only needed when the code was experimental.  Most of it
      is well tested now, so the option is no longer useful.
      Signed-off-by: NeilBrown <neilb@suse.de>
      2cffc4a0
    • md/raid6: move raid6 data processing to raid6_pq.ko · f701d589
      Committed by Dan Williams
      Move the raid6 data processing routines into a standalone module
      (raid6_pq) to prepare them to be called from async_tx wrappers and other
      non-md drivers/modules.  This avoids a circular dependency: raid456
      would need the async modules for data processing, while those modules
      would in turn depend on raid456 for the base-level synchronous raid6
      routines.
      
      To support this move:
      1/ The exportable definitions in raid6.h move to include/linux/raid/pq.h
      2/ The raid6_call, recovery calls, and table symbols are exported
      3/ Extra #ifdef __KERNEL__ statements to enable the userspace raid6test to
         compile
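      
      As a minimal sketch of calling the now-exported interface from another
      module (assuming kernel context; ptrs[] must hold disks page-sized
      buffers):
      
      #include <linux/raid/pq.h>
      
      /* ptrs[0..disks-3] are data, ptrs[disks-2] is P, ptrs[disks-1] is Q */
      static void make_syndrome(int disks, size_t bytes, void **ptrs)
      {
      	raid6_call.gen_syndrome(disks, bytes, ptrs);
      }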
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      f701d589
  17. 12 Oct 2008, 2 commits
  18. 15 Jul 2008, 1 commit
  19. 05 Jun 2008, 2 commits
  20. 08 Feb 2008, 1 commit
  21. 21 Dec 2007, 1 commit
  22. 20 Oct 2007, 2 commits
  23. 25 Aug 2007, 1 commit
  24. 18 Jul 2007, 1 commit
  25. 13 Jul 2007, 3 commits
    • async_tx: add the async_tx api · 9bc89cd8
      Committed by Dan Williams
      The async_tx api provides methods for describing a chain of asynchronous
      bulk memory transfers/transforms with support for inter-transactional
      dependencies.  It is implemented as a dmaengine client that smooths over
      the details of different hardware offload engine implementations.  Code
      that is written to the api can optimize for asynchronous operation and the
      api will fit the chain of operations to the available offload resources. 
       
      	I imagine that any piece of ADMA hardware would register with the
      	'async_*' subsystem, and a call to async_X would be routed as
      	appropriate, or be run in-line. - Neil Brown
      
      async_tx exploits the capabilities of struct dma_async_tx_descriptor to
      provide an api of the following general format:
      
      struct dma_async_tx_descriptor *
      async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
      			dma_async_tx_callback cb_fn, void *cb_param)
      {
      	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
      	struct dma_device *device = chan ? chan->device : NULL;
      	int int_en = cb_fn ? 1 : 0;
      	struct dma_async_tx_descriptor *tx = device ?
      		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;
      
      	if (tx) { /* run <operation> asynchronously */
      		...
      		tx->tx_set_dest(addr, tx, index);
      		...
      		tx->tx_set_src(addr, tx, index);
      		...
      		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
      	} else { /* run <operation> synchronously */
      		...
      		<operation>
      		...
      		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
      	}
      
      	return tx;
      }
      
      async_tx_find_channel() returns a capable channel from its pool.  The
      channel pool is organized as a per-cpu array of channel pointers.  The
      async_tx_rebalance() routine is tasked with managing these arrays.  In the
      uniprocessor case async_tx_rebalance() tries to spread responsibility
      evenly over channels of similar capabilities.  For example if there are two
      copy+xor channels, one will handle copy operations and the other will
      handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
      operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
      channel0 while cpu1 gets copy channel 1 and xor channel 1.  When a
      dependency is specified async_tx_find_channel defaults to keeping the
      operation on the same channel.  A xor->copy->xor chain will stay on one
      channel if it supports both operation types, otherwise the transaction will
      transition between a copy and a xor resource.
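      
      A hedged usage sketch of such a dependency chain (2007-era signatures;
      buffer setup is omitted and the names are hypothetical):
      
      struct dma_async_tx_descriptor *tx;
      
      /* xor the sources into xor_dest */
      tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
      	       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
      /* the copy depends on the xor completing */
      tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
      		  ASYNC_TX_DEP_ACK, tx, NULL, NULL);
      /* the final xor depends on the copy; the callback fires on completion */
      tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
      	       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK | ASYNC_TX_DEP_ACK,
      	       tx, chain_complete, NULL);
      async_tx_issue_pending_all();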
      
      Currently the raid5 implementation in the MD raid456 driver has been
      converted to the async_tx api.  A driver for the offload engines on the
      Intel Xscale series of I/O processors, iop-adma, is provided in a later
      commit.  With the iop-adma driver and async_tx, raid456 is able to offload
      copy, xor, and xor-zero-sum operations to hardware engines.
       
      On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
      improvement) and sequential reads to a degraded array (40 - 55%
      improvement).  For the other cases performance was roughly equal, +/- a few
      percentage points.  On a x86-smp platform the performance of the async_tx
      implementation (in synchronous mode) was also +/- a few percentage points
      of the original implementation.  According to 'top' on iop342 CPU
      utilization drops from ~50% to ~15% during a 'resync' while the speed
      according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
       
      The tiobench command line used for testing was: tiobench --size 2048
      --block 4096 --block 131072 --dir /mnt/raid --numruns 5
      * iop342 had 1GB of memory available
      
      Details:
      * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
        async_tx_find_channel a static inline routine that always returns NULL
      * when a callback is specified for a given transaction an interrupt will
        fire at operation completion time and the callback will occur in a
        tasklet.  If the channel does not support interrupts then a live
        polling wait will be performed
      * the api is written as a dmaengine client that requests all available
        channels
      * In support of dependencies the api implicitly schedules channel-switch
        interrupts.  The interrupt triggers the cleanup tasklet which causes
        pending operations to be scheduled on the next channel
      * Xor engines treat an xor destination address differently than a software
        xor routine.  To the software routine the destination address is an implied
        source, whereas engines treat it as a write-only destination.  This patch
        modifies the xor_blocks routine to take an explicit destination address
        to mirror the hardware.
      
      Changelog:
      * fixed a leftover debug print
      * don't allow callbacks in async_interrupt_cond
      * fixed xor_block changes
      * fixed usage of ASYNC_TX_XOR_DROP_DEST
      * drop dma mapping methods, suggested by Chris Leech
      * printk warning fixups from Andrew Morton
      * don't use inline in C files, Adrian Bunk
      * select the API when MD is enabled
      * BUG_ON xor source counts <= 1
      * implicitly handle hardware concerns like channel switching and
        interrupts, Neil Brown
      * remove the per operation type list, and distribute operation capabilities
        evenly amongst the available channels
      * simplify async_tx_find_channel to optimize the fast path
      * introduce the channel_table_initialized flag to prevent early calls to
        the api
      * reorganize the code to mimic crypto
      * include mm.h as not all archs include it in dma-mapping.h
      * make the Kconfig options non-user visible, Adrian Bunk
      * move async_tx under crypto since it is meant as 'core' functionality, and
        the two may share algorithms in the future
      * move large inline functions into c files
      * checkpatch.pl fixes
      * gpl v2 only correction
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
      9bc89cd8
    • xor: make 'xor_blocks' a library routine for use with async_tx · 685784aa
      Committed by Dan Williams
      The async_tx api tries to use a dma engine for an operation, but will
      fall back to an optimized software routine otherwise.  Xor support is
      implemented using the raid5 xor routines, which are moved to a common
      area for organizational purposes.
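      
      A minimal sketch of the resulting library call (assuming kernel
      context; dest acts as both an implied source and the destination):
      
      #include <linux/raid/xor.h>
      
      void *srcs[2] = { src1, src2 };
      /* dest ^= src1 ^ src2, over 'bytes' bytes, using the calibrated routine */
      xor_blocks(2, bytes, dest, srcs);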
      
      The following fixes are also made:
      * rename xor_block => xor_blocks, suggested by Adrian Bunk
      * ensure that xor.o initializes before md.o in the built-in case
      * checkpatch.pl fixes
      * mark calibrate_xor_blocks __init, Adrian Bunk
      
      Cc: Adrian Bunk <bunk@stusta.de>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      685784aa
    • dm mpath: rdac · dd172d72
      Committed by Chandra Seetharaman
      This patch supports LSI/Engenio devices in RDAC mode.  Like dm-emc,
      it requires userspace support.  In your multipath.conf file you must have:
      
      path_checker            rdac
      hardware_handler        "1 rdac"
      prio_callout		"/sbin/mpath_prio_tpc /dev/%n"
      
      You must also have an updated multipath-tools release that includes
      rdac support.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd172d72