1. 24 Jul 2007: 1 commit
  2. 22 Jul 2007: 1 commit
  3. 20 Jul 2007: 3 commits
  4. 18 Jul 2007: 8 commits
  5. 13 Jul 2007: 27 commits
    • md: remove raid5 compute_block and compute_parity5 · f6dff381
      Committed by Dan Williams
      Replaced by raid5_run_ops.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - request io processing in raid5_run_ops · 830ea016
      Committed by Dan Williams
      I/O submission requests were already handled outside of the stripe lock in
      handle_stripe.  Now that handle_stripe is only tasked with finding work,
      this logic belongs in raid5_run_ops.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - add request/completion logic for async expand ops · f0a50d37
      Committed by Dan Williams
      When a stripe is being expanded, bulk copying takes place to move the data
      from the old stripe to the new.  Since raid5_run_ops only operates on one
      stripe at a time these bulk copies are handled in-line under the stripe
      lock.  In the dma offload case we poll for the completion of the operation.
      
      After the data has been copied into the new stripe the parity needs to be
      recalculated across the new disks.  We reuse the existing postxor
      functionality to carry out this calculation.  By setting STRIPE_OP_POSTXOR
      without setting STRIPE_OP_BIODRAIN the completion path in handle_stripe
      can differentiate expand operations from normal write operations.
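
      A minimal sketch of that completion-path test (the STRIPE_OP_* flag
      names are from this series; the surrounding code is illustrative):

      	/* postxor without a biodrain marks an expand, not a write */
      	static int postxor_was_expand(struct stripe_head *sh)
      	{
      		return test_bit(STRIPE_OP_POSTXOR, &sh->ops.complete) &&
      		       !test_bit(STRIPE_OP_BIODRAIN, &sh->ops.complete);
      	}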
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - add request/completion logic for async read ops · b5e98d65
      Committed by Dan Williams
      When a read bio is attached to the stripe and the corresponding block is
      marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy
      the data from the stripe cache to the bio buffer.  handle_stripe flags the
      blocks to be operated on with the R5_Wantfill flag.  If new read requests
      arrive while raid5_run_ops is running they will not be handled until
      handle_stripe is scheduled to run again.
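
      A minimal sketch of the flagging step (R5_Wantfill, R5_UPTODATE and
      the dev->toread member are names from this series; the loop itself is
      illustrative):

      	int i, disks = sh->disks;

      	for (i = disks; i--; ) {
      		struct r5dev *dev = &sh->dev[i];

      		if (dev->toread && test_bit(R5_UPTODATE, &dev->flags))
      			set_bit(R5_Wantfill, &dev->flags);
      	}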
      
      Changelog:
      * cleanup to_read and to_fill accounting
      * do not fail reads that have reached the cache
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - add request/completion logic for async check ops · e89f8962
      Committed by Dan Williams
      Check operations are scheduled when the array is being resynced or an
      explicit 'check/repair' command was sent to the array.  Previously check
      operations would destroy the parity block in the cache such that even if
      parity turned out to be correct the parity block would be marked
      !R5_UPTODATE at the completion of the check.  When the operation can be
      carried out by a dma engine the assumption is that it can check parity as a
      read-only operation.  If raid5_run_ops notices that the check was handled
      by hardware it will preserve the R5_UPTODATE status of the parity disk.
      
      When a check operation determines that the parity needs to be repaired we
      reuse the existing compute block infrastructure to carry out the operation.
      Repair operations imply an immediate write back of the data, so to
      differentiate a repair from a normal compute operation the
      STRIPE_OP_MOD_REPAIR_PD flag is added.
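
      A minimal sketch of how a failed check might turn into a repair (the
      flag names are from this series; the trigger condition and the use of
      ops.target are illustrative):

      	/* parity was wrong: recompute it and flag the write back */
      	set_bit(STRIPE_OP_COMPUTE_BLK, &sh->ops.pending);
      	set_bit(STRIPE_OP_MOD_REPAIR_PD, &sh->ops.pending);
      	sh->ops.target = sh->pd_idx;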
      
      Changelog:
      * remove test_and_set/test_and_clear BUG_ONs, Neil Brown
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - add request/completion logic for async compute ops · f38e1219
      Committed by Dan Williams
      handle_stripe will compute a block when a backing disk has failed, or when
      it determines it can save a disk read by computing the block from all the
      other up-to-date blocks.
      
      Previously a block would be computed under the lock and subsequent logic in
      handle_stripe could use the newly up-to-date block.  With the raid5_run_ops
      implementation the compute operation is carried out at a later time outside
      the lock.  To preserve the old functionality we take advantage of the
      dependency chain feature of async_tx to flag the block as R5_Wantcompute
      and then let other parts of handle_stripe operate on the block as if it
      were up-to-date.  raid5_run_ops guarantees that the block will be ready
      before it is used in another operation.
      
      However, this only works in cases where the compute and the dependent
      operation are scheduled at the same time.  If a previous call to
      handle_stripe sets the R5_Wantcompute flag there is no facility to pass the
      async_tx dependency chain across successive calls to raid5_run_ops.  The
      req_compute variable protects against this case.
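
      A minimal sketch of the request side (R5_Wantcompute and req_compute
      are names from this series; the surrounding bookkeeping is
      illustrative):

      	/* schedule the compute and let dependent operations treat
      	 * the target block as if it were already up to date
      	 */
      	set_bit(STRIPE_OP_COMPUTE_BLK, &sh->ops.pending);
      	set_bit(R5_Wantcompute, &sh->dev[target].flags);
      	s->req_compute = 1;
      	s->uptodate++;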
      
      Changelog:
      * remove the req_compute BUG_ON
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: handle_stripe5 - add request/completion logic for async write ops · e33129d8
      Committed by Dan Williams
      After handle_stripe5 decides whether it wants to perform a
      read-modify-write or a reconstruct write, it calls
      handle_write_operations5.  A read-modify-write operation will perform an
      xor subtraction of the blocks marked with the R5_Wantprexor flag, copy the
      new data into the stripe (biodrain) and perform a postxor operation across
      all up-to-date blocks to generate the new parity.  A reconstruct write is run
      when all blocks are already up-to-date in the cache so all that is needed
      is a biodrain and postxor.
      
      On the completion path STRIPE_OP_PREXOR will be set if the operation was a
      read-modify-write.  The STRIPE_OP_BIODRAIN flag is used in the completion
      path to differentiate write-initiated postxor operations from
      expansion-initiated postxor operations.  Completion of a write triggers i/o
      to the drives.
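
      A minimal sketch of the request logic described above (flag names
      from this series; the rmw/rcw decision is illustrative):

      	if (rmw < rcw)
      		/* read-modify-write: subtract the old data first */
      		set_bit(STRIPE_OP_PREXOR, &sh->ops.pending);
      	/* both flavors drain new data in and recompute parity */
      	set_bit(STRIPE_OP_BIODRAIN, &sh->ops.pending);
      	set_bit(STRIPE_OP_POSTXOR, &sh->ops.pending);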
      
      Changelog:
      * make the 'rcw' parameter to handle_write_operations5 a simple flag, Neil Brown
      * remove test_and_set/test_and_clear BUG_ONs, Neil Brown
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: common infrastructure for running operations with raid5_run_ops · d84e0f10
      Committed by Dan Williams
      All the handle_stripe operations that are to be transitioned to use
      raid5_run_ops need a method to coherently gather work under the stripe-lock
      and hand that work off to raid5_run_ops.  The 'get_stripe_work' routine
      runs under the lock to read all the bits in sh->ops.pending that do not
      have the corresponding bit set in sh->ops.ack.  This modified 'pending'
      bitmap is then passed to raid5_run_ops for processing.
      
      The transition from 'ack' to 'completion' does not need similar protection
      as the existing release_stripe infrastructure will guarantee that
      handle_stripe will run again after a completion bit is set, and
      handle_stripe can tolerate a sh->ops.complete bit being set while the lock
      is held.
      
      A call to async_tx_issue_pending_all() is added to raid5d to kick the
      offload engines once all pending stripe operation work has been submitted.
      This enables batching of the submission and completion of operations.
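
      A minimal sketch of that gather step, assuming the bitmaps described
      above (the real routine also does some accounting):

      	/* called under sh->lock */
      	static unsigned long get_stripe_work(struct stripe_head *sh)
      	{
      		unsigned long pending = sh->ops.pending & ~sh->ops.ack;

      		sh->ops.ack |= pending;
      		return pending;
      	}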
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • md: raid5_run_ops - run stripe operations outside sh->lock · 91c00924
      Committed by Dan Williams
      When the raid acceleration work was proposed, Neil laid out the following
      attack plan:
      
      1/ move the xor and copy operations outside spin_lock(&sh->lock)
      2/ find/implement an asynchronous offload api
      
      The raid5_run_ops routine uses the asynchronous offload api (async_tx) and
      the stripe_operations member of a stripe_head to carry out xor+copy
      operations asynchronously, outside the lock.
      
      To perform operations outside the lock a new set of state flags is needed
      to track new requests, in-flight requests, and completed requests.  In this
      new model handle_stripe is tasked with scanning the stripe_head for work,
      updating the stripe_operations structure, and finally dropping the lock and
      calling raid5_run_ops for processing.  The following flags outline the
      requests that handle_stripe can make of raid5_run_ops:
      
      STRIPE_OP_BIOFILL
       - copy data into request buffers to satisfy a read request
      STRIPE_OP_COMPUTE_BLK
       - generate a missing block in the cache from the other blocks
      STRIPE_OP_PREXOR
       - subtract existing data as part of the read-modify-write process
      STRIPE_OP_BIODRAIN
       - copy data out of request buffers to satisfy a write request
      STRIPE_OP_POSTXOR
       - recalculate parity for new data that has entered the cache
      STRIPE_OP_CHECK
       - verify that the parity is correct
      STRIPE_OP_IO
       - submit i/o to the member disks (note this was already performed outside
         the stripe lock, but it made sense to add it as an operation type)
      
      The flow is:
      1/ handle_stripe sets STRIPE_OP_* in sh->ops.pending
      2/ raid5_run_ops reads sh->ops.pending, sets sh->ops.ack, and submits the
         operation to the async_tx api
      3/ async_tx triggers the completion callback routine to set
         sh->ops.complete and release the stripe
      4/ handle_stripe runs again to finish the operation and optionally submit
         new operations that were previously blocked
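
      A minimal sketch of the dispatch this flow implies (the ops_run_*
      helpers exist in the patch; their wiring here is illustrative):

      	static void raid5_run_ops(struct stripe_head *sh, unsigned long pending)
      	{
      		struct dma_async_tx_descriptor *tx = NULL;

      		if (test_bit(STRIPE_OP_BIOFILL, &pending))
      			ops_run_biofill(sh);
      		if (test_bit(STRIPE_OP_COMPUTE_BLK, &pending))
      			tx = ops_run_compute5(sh, pending);
      		if (test_bit(STRIPE_OP_PREXOR, &pending))
      			tx = ops_run_prexor(sh, tx);
      		if (test_bit(STRIPE_OP_BIODRAIN, &pending))
      			tx = ops_run_biodrain(sh, tx);
      		if (test_bit(STRIPE_OP_POSTXOR, &pending))
      			ops_run_postxor(sh, tx);
      		if (test_bit(STRIPE_OP_CHECK, &pending))
      			ops_run_check(sh);
      		if (test_bit(STRIPE_OP_IO, &pending))
      			ops_run_io(sh);
      	}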
      
      Note this patch just defines raid5_run_ops, subsequent commits (one per
      major operation type) modify handle_stripe to take advantage of this
      routine.
      
      Changelog:
      * removed ops_complete_biodrain in favor of ops_complete_postxor and
        ops_complete_write.
      * removed the raid5_run_ops workqueue
      * call bi_end_io for reads in ops_complete_biofill, saves a call to
        handle_stripe
      * explicitly handle the 2-disk raid5 case (xor becomes memcpy), Neil Brown
      * fix race between async engines and bi_end_io call for reads, Neil Brown
      * remove unnecessary spin_lock from ops_complete_biofill
      * remove test_and_set/test_and_clear BUG_ONs, Neil Brown
      * remove explicit interrupt handling for channel switching, this feature
        was absorbed (i.e. it is now implicit) by the async_tx api
      * use return_io in ops_complete_biofill
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • raid5: replace custom debug PRINTKs with standard pr_debug · 45b4233c
      Committed by Dan Williams
      Replaces PRINTK with pr_debug, and kills the RAID5_DEBUG definition in
      favor of the global DEBUG definition.  To get local debug messages just add
      '#define DEBUG' to the top of the file.
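
      For example (illustrative):

      	/* at the top of drivers/md/raid5.c, before the includes */
      	#define DEBUG

      	pr_debug("%s: stripe %llu\n", __FUNCTION__,
      		 (unsigned long long)sh->sector);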
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • raid5: refactor handle_stripe5 and handle_stripe6 (v3) · a4456856
      Committed by Dan Williams
      handle_stripe5 and handle_stripe6 have very deep logic paths handling the
      various states of a stripe_head.  By introducing the 'stripe_head_state'
      and 'r6_state' objects, large portions of the logic can be moved to
      sub-routines.
      
      'struct stripe_head_state' consumes all of the automatic variables
      that previously stood alone in handle_stripe5 and handle_stripe6.
      'struct r6_state' contains the handle_stripe6-specific variables such
      as p_failed and q_failed.
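
      Illustrative shapes only; the real structures carry more fields:

      	struct stripe_head_state {
      		int syncing, expanding, expanded;
      		int locked, uptodate, to_read, to_write, failed, written;
      		int failed_num;
      	};

      	struct r6_state {
      		int p_failed, q_failed, failed_num[2];
      	};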
      
      One of the nice side effects of the 'stripe_head_state' change is that it
      allows for further reductions in code duplication between raid5 and raid6.
      The following new routines are shared between raid5 and raid6:
      
      	handle_completed_write_requests
      	handle_requests_to_failed_array
      	handle_stripe_expansion
      
      Changes:
      * v2: fixed 'conf->raid_disk-1' for the raid6 'handle_stripe_expansion' path
      * v3: removed the unused 'dirty' field from struct stripe_head_state
      * v3: coalesced open coded bi_end_io routines into return_io()
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • async_tx: add the async_tx api · 9bc89cd8
      Committed by Dan Williams
      The async_tx api provides methods for describing a chain of asynchronous
      bulk memory transfers/transforms with support for inter-transactional
      dependencies.  It is implemented as a dmaengine client that smooths over
      the details of different hardware offload engine implementations.  Code
      that is written to the api can optimize for asynchronous operation and the
      api will fit the chain of operations to the available offload resources. 
       
      	I imagine that any piece of ADMA hardware would register with the
      	'async_*' subsystem, and a call to async_X would be routed as
      	appropriate, or be run in-line. - Neil Brown
      
      async_tx exploits the capabilities of struct dma_async_tx_descriptor to
      provide an api of the following general format:
      
      struct dma_async_tx_descriptor *
      async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
      			dma_async_tx_callback cb_fn, void *cb_param)
      {
      	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
      	struct dma_device *device = chan ? chan->device : NULL;
      	int int_en = cb_fn ? 1 : 0;
      	struct dma_async_tx_descriptor *tx = device ?
      		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;
      
      	if (tx) { /* run <operation> asynchronously */
      		...
      		tx->tx_set_dest(addr, tx, index);
      		...
      		tx->tx_set_src(addr, tx, index);
      		...
      		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
      	} else { /* run <operation> synchronously */
      		...
      		<operation>
      		...
      		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
      	}
      
      	return tx;
      }
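
      A hedged usage example of chaining a copy into an xor (the buffer
      names are placeholders; flag names and argument order follow this
      patchset):

      	struct dma_async_tx_descriptor *tx;

      	/* copy new data into the stripe cache, then recompute parity;
      	 * the xor is ordered after the copy via the depend_tx argument
      	 */
      	tx = async_memcpy(dest_pg, src_pg, 0, 0, PAGE_SIZE,
      			  0, NULL, NULL, NULL);
      	tx = async_xor(parity_pg, blocks, 0, count, PAGE_SIZE,
      		       ASYNC_TX_XOR_ZERO_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
      		       tx, complete_fn, ctx);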
      
      async_tx_find_channel() returns a capable channel from its pool.  The
      channel pool is organized as a per-cpu array of channel pointers.  The
      async_tx_rebalance() routine is tasked with managing these arrays.  In the
      uniprocessor case async_tx_rebalance() tries to spread responsibility
      evenly over channels of similar capabilities.  For example if there are two
      copy+xor channels, one will handle copy operations and the other will
      handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
      operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
      channel0 while cpu1 gets copy channel1 and xor channel1.  When a
      dependency is specified async_tx_find_channel defaults to keeping the
      operation on the same channel.  An xor->copy->xor chain will stay on one
      channel if it supports both operation types, otherwise the transaction will
      transition between a copy and a xor resource.
      
      Currently the raid5 implementation in the MD raid456 driver has been
      converted to the async_tx api.  A driver for the offload engines on the
      Intel Xscale series of I/O processors, iop-adma, is provided in a later
      commit.  With the iop-adma driver and async_tx, raid456 is able to offload
      copy, xor, and xor-zero-sum operations to hardware engines.
       
      On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
      improvement) and sequential reads to a degraded array (40 - 55%
      improvement).  For the other cases performance was roughly equal, +/- a few
      percentage points.  On an x86-smp platform the performance of the async_tx
      implementation (in synchronous mode) was also +/- a few percentage points
      of the original implementation.  According to 'top' on iop342 CPU
      utilization drops from ~50% to ~15% during a 'resync' while the speed
      according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
       
      The tiobench command line used for testing was: tiobench --size 2048
      --block 4096 --block 131072 --dir /mnt/raid --numruns 5
      * iop342 had 1GB of memory available
      
      Details:
      * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
        async_tx_find_channel a static inline routine that always returns NULL
      * when a callback is specified for a given transaction an interrupt will
        fire at operation completion time and the callback will occur in a
        tasklet.  If the channel does not support interrupts then a live
        polling wait will be performed
      * the api is written as a dmaengine client that requests all available
        channels
      * In support of dependencies the api implicitly schedules channel-switch
        interrupts.  The interrupt triggers the cleanup tasklet which causes
        pending operations to be scheduled on the next channel
      * Xor engines treat an xor destination address differently than a software
        xor routine.  To the software routine the destination address is an implied
        source, whereas engines treat it as a write-only destination.  This patch
        modifies the xor_blocks routine to take an explicit destination address
        to mirror the hardware.
      
      Changelog:
      * fixed a leftover debug print
      * don't allow callbacks in async_interrupt_cond
      * fixed xor_block changes
      * fixed usage of ASYNC_TX_XOR_DROP_DEST
      * drop dma mapping methods, suggested by Chris Leech
      * printk warning fixups from Andrew Morton
      * don't use inline in C files, Adrian Bunk
      * select the API when MD is enabled
      * BUG_ON xor source counts <= 1
      * implicitly handle hardware concerns like channel switching and
        interrupts, Neil Brown
      * remove the per operation type list, and distribute operation capabilities
        evenly amongst the available channels
      * simplify async_tx_find_channel to optimize the fast path
      * introduce the channel_table_initialized flag to prevent early calls to
        the api
      * reorganize the code to mimic crypto
      * include mm.h as not all archs include it in dma-mapping.h
      * make the Kconfig options non-user visible, Adrian Bunk
      * move async_tx under crypto since it is meant as 'core' functionality, and
        the two may share algorithms in the future
      * move large inline functions into c files
      * checkpatch.pl fixes
      * gpl v2 only correction
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: NeilBrown <neilb@suse.de>
    • xor: make 'xor_blocks' a library routine for use with async_tx · 685784aa
      Committed by Dan Williams
      The async_tx api tries to use a dma engine for an operation, but will fall
      back to an optimized software routine otherwise.  Xor support is
      implemented using the raid5 xor routines.  For organizational purposes this
      routine is moved to a common area.
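
      A minimal usage sketch (xor_blocks xors 'count' source buffers into
      the destination; the buffer names are placeholders):

      	#include <linux/raid/xor.h>

      	void *srcs[2] = { src1, src2 };

      	/* dest ^= src1 ^ src2: 'count' sources of 'bytes' each */
      	xor_blocks(2, PAGE_SIZE, dest, srcs);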
      
      The following fixes are also made:
      * rename xor_block => xor_blocks, suggested by Adrian Bunk
      * ensure that xor.o initializes before md.o in the built-in case
      * checkpatch.pl fixes
      * mark calibrate_xor_blocks __init, Adrian Bunk
      
      Cc: Adrian Bunk <bunk@stusta.de>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • dm mpath: rdac · dd172d72
      Committed by Chandra Seetharaman
      This patch supports LSI/Engenio devices in RDAC mode. Like dm-emc
      it requires userspace support. In your multipath.conf file you must have:
      
      path_checker            rdac
      hardware_handler        "1 rdac"
      prio_callout		"/sbin/mpath_prio_tpc /dev/%n"
      
      You must also have an updated multipath-tools release that includes
      rdac support.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm raid1: handle log failure · fc1ff958
      Committed by Jonathan Brassow
      When writing to a mirror, the log must be updated first.  Failure
      to update the log could result in the log not properly reflecting
      the state of the mirror if the machine should crash.
      
      We change the return type of the rh_flush function to give us
      the ability to check if a log write was successful.  If the
      log write was unsuccessful, we fail the writes to avoid the
      case where the log does not properly reflect the state of the
      mirror.
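
      A minimal sketch of the new contract, assuming the region hash keeps
      a pointer to its dirty log:

      	/* propagate the log flush result instead of discarding it */
      	static int rh_flush(struct region_hash *rh)
      	{
      		return rh->log->type->flush(rh->log);
      	}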
      
      A follow-up patch - which depends on the ability to requeue I/Os to
      core device-mapper - will requeue the I/Os for retry (allowing the
      mirror to be reconfigured).
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm raid1: handle resync failures · f44db678
      Committed by Jonathan Brassow
      Device-mapper mirroring currently takes a best effort approach to
      recovery - failures during mirror synchronization are completely ignored.
      This means that regions are marked 'in-sync' and 'clean' and removed
      from the hash list.  Future reads and writes that query the region
      will incorrectly interpret the region as in-sync.
      
      This patch handles failures during the recovery process.  If a failure
      occurs, the region is marked as 'not-in-sync' (aka RH_NOSYNC) and added
      to a new list 'failed_recovered_regions'.
      
      Regions on the 'failed_recovered_regions' list are not marked as 'clean'
      upon removal from the list.  Furthermore, if the DM_RAID1_HANDLE_ERRORS
      flag is set, the region is marked as 'not-in-sync'.  This action prevents
      any future read-balancing from choosing an invalid device because of the
      'not-in-sync' status.
      
      If "handle_errors" is not specified when creating a mirror (leaving the
      DM_RAID1_HANDLE_ERRORS flag unset), failures will be ignored exactly as they
      would be without this patch.  This is to preserve backwards compatibility with
      user-space tools, such as 'pvmove'.  However, since future read-balancing
      policies will rely on the correct sync status of a region, a user must choose
      "handle_errors" when using read-balancing.
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm: add ratelimit logging macros · d0d444c7
      Committed by Jonathan Brassow
      Add ratelimit extension to dm logging macros.
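
      A sketch of one such macro, in the style of the existing DMERR and
      assuming printk_ratelimit() as the throttle:

      	#define DMERR_LIMIT(f, arg...) \
      		do { \
      			if (printk_ratelimit()) \
      				printk(KERN_ERR DM_NAME ": " \
      				       DM_MSG_PREFIX ": " f "\n", ## arg); \
      		} while (0)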
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm: disable barriers · 07a83c47
      Committed by Stefan Bader
      This patch causes device-mapper to reject any barrier requests.  This is done
      since most of the targets won't handle this correctly anyway.  So until the
      situation improves it is better to reject these requests in the first place.
      Since barrier requests won't get to the targets, the checks there can be
      removed.
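
      A minimal sketch of the rejection, using the bio interface of this
      era (bio_endio still took a byte count):

      	/* refuse barrier requests before they reach any target */
      	if (unlikely(bio_barrier(bio))) {
      		bio_endio(bio, bio->bi_size, -EOPNOTSUPP);
      		return 0;
      	}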
      
      Cc: stable@kernel.org
      Signed-off-by: Stefan Bader <shbader@de.ibm.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm raid1: clear region outside spinlock · 943317ef
      Committed by Jonathan Brassow
      A clear_region function is permitted to block (in practice, rare) but gets
      called in rh_update_states() with a spinlock held.
      
      The bits being marked and cleared by the above functions are used
      to update the on-disk log, but are never read directly.  We can
      perform these operations outside the spinlock since the
      bits are only changed within one thread, viz.:
         - mark_region in rh_inc()
         - clear_region in rh_update_states().
      
      So, we grab the clean_regions list items via list_splice() within the
      spinlock and defer clear_region() until we iterate over the list for
      deletion - similar to how the recovered_regions list is already handled.
      We then move the flush() call down to ensure it encapsulates the changes
      which are done by the later calls to clear_region().
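
      A minimal sketch of the pattern (field names follow dm-raid1's region
      hash; the iteration details are illustrative):

      	struct list_head clean;
      	struct region *reg;

      	INIT_LIST_HEAD(&clean);

      	/* grab the list under the lock ... */
      	spin_lock_irq(&rh->region_lock);
      	list_splice_init(&rh->clean_regions, &clean);
      	spin_unlock_irq(&rh->region_lock);

      	/* ... and let clear_region block outside it */
      	list_for_each_entry(reg, &clean, list)
      		rh->log->type->clear_region(rh->log, reg->key);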
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm snapshot: permit invalid activation · 0764147b
      Committed by Milan Broz
      Allow invalid snapshots to be activated instead of failing.
      
      This allows userspace to reinstate any given snapshot state - for
      example after an unscheduled reboot - and clean up the invalid snapshot
      at its leisure.
      
      Cc: stable@kernel.org
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm snapshot: fix invalidation deadlock · fcac03ab
      Committed by Milan Broz
      Process persistent exception store metadata IOs in a separate thread.
      
      A snapshot may become invalid while inside generic_make_request().
      A synchronous write is then needed to update the metadata while still
      inside that function.  Since the introduction of
      md-dm-reduce-stack-usage-with-stacked-block-devices.patch this has to
      be performed by a separate thread to avoid deadlock.
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm io: fix panic on large request · 596f138e
      Committed by Jun'ichi Nomura
      bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
      Use bio_get_nr_vecs() to get an estimate of the maximum number.
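
      A minimal sketch of the fix (variable names are illustrative):

      	/* never ask the bioset for more vecs than the queue allows */
      	num_bvecs = min_t(int, num_bvecs, bio_get_nr_vecs(where->bdev));
      	bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, _bios);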
      
      Cc: stable@kernel.org
      Signed-off-by: N"Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
      Signed-off-by: NAlasdair G Kergon <agk@redhat.com>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      596f138e
    • dm raid1: fix status · c95bc206
      Committed by Milan Broz
      Fix mirror status line broken in dm-log-report-fault-status.patch:
        - space missing between two words
        - placeholder ("0") required for compatibility with a subsequent patch
        - incorrect offset parameter
      
      Cc: stable@kernel.org
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm: remove duplicate module name from error msgs · 0cd33124
      Committed by Alasdair G Kergon
      Remove explicit module name from messages as the macro now includes it
      automatically.
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm delay: cleanup · ac818646
      Committed by Alasdair G Kergon
      Use setup_timer().
      Replace semaphore with mutex.
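
      Illustrative before/after for the timer part (the field and handler
      names are assumed):

      	init_timer(&dc->delay_timer);
      	dc->delay_timer.function = handle_delayed_timer;
      	dc->delay_timer.data = (unsigned long)dc;

      	/* becomes */
      	setup_timer(&dc->delay_timer, handle_delayed_timer,
      		    (unsigned long)dc);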
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm: use kmem_cache macro · 028867ac
      Committed by Alasdair G Kergon
      Use new KMEM_CACHE() macro and make the newly-exposed structure names more
      meaningful.  Also remove some superfluous casts and inlines (let a modern
      compiler be the judge).
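
      For example (the cache variable name is assumed):

      	/* the cache name and size are derived from the struct itself */
      	_io_cache = KMEM_CACHE(dm_io, 0);
      	if (!_io_cache)
      		return -ENOMEM;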
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dm: bio_list prefetch removal · 79e15ae4
      Committed by Alasdair G Kergon
      Remove dubious prefetch from bio_list_for_each() macro.
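
      After the change the macro reduces to a plain list walk (sketch):

      	#define bio_list_for_each(bio, bl) \
      		for (bio = (bl)->head; bio; bio = bio->bi_next)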
      
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>