1. 05 Mar, 2008 1 commit
  2. 07 Feb, 2008 3 commits
  3. 25 Jan, 2008 1 commit
  4. 18 Dec, 2007 2 commits
  5. 30 Nov, 2007 1 commit
  6. 15 Nov, 2007 2 commits
  7. 30 Oct, 2007 1 commit
  8. 19 Oct, 2007 5 commits
  9. 17 Oct, 2007 7 commits
  10. 27 Aug, 2007 1 commit
  11. 15 Aug, 2007 1 commit
    • [IOAT]: Remove redundant struct member to avoid descriptor cache miss · 54a09feb
      Shannon Nelson authored
       The layout of struct ioat_desc_sw is non-optimal and causes an extra
       cache miss for every descriptor processed.  By tightening up the struct
       layout and removing one member, we pull the fields used in the
       speedpath closer together and get a little better performance.
      
      
      Before:
      -------
       struct ioat_desc_sw {
       	struct ioat_dma_descriptor * hw;                 /*     0     8 */
       	struct list_head           node;                 /*     8    16 */
       	int                        tx_cnt;               /*    24     4 */

       	/* XXX 4 bytes hole, try to pack */

       	dma_addr_t                 src;                  /*    32     8 */
       	__u32                      src_len;              /*    40     4 */

       	/* XXX 4 bytes hole, try to pack */

       	dma_addr_t                 dst;                  /*    48     8 */
       	__u32                      dst_len;              /*    56     4 */

       	/* XXX 4 bytes hole, try to pack */

       	/* --- cacheline 1 boundary (64 bytes) --- */
       	struct dma_async_tx_descriptor async_tx;         /*    64   144 */
       	/* --- cacheline 3 boundary (192 bytes) was 16 bytes ago --- */

       	/* size: 208, cachelines: 4 */
       	/* sum members: 196, holes: 3, sum holes: 12 */
       	/* last cacheline: 16 bytes */
       };	/* definitions: 1 */
      
      
      After:
      ------
      
       struct ioat_desc_sw {
       	struct ioat_dma_descriptor * hw;                 /*     0     8 */
       	struct list_head           node;                 /*     8    16 */
       	int                        tx_cnt;               /*    24     4 */
       	__u32                      len;                  /*    28     4 */
       	dma_addr_t                 src;                  /*    32     8 */
       	dma_addr_t                 dst;                  /*    40     8 */
       	struct dma_async_tx_descriptor async_tx;         /*    48   144 */
       	/* --- cacheline 3 boundary (192 bytes) --- */

       	/* size: 192, cachelines: 3 */
       };	/* definitions: 1 */
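
       Read back as a declaration, the tightened layout above corresponds
       roughly to the following sketch (reconstructed from the pahole output,
       not quoted from the patch itself):

       struct ioat_desc_sw {
       	struct ioat_dma_descriptor *hw;	/* hardware descriptor */
       	struct list_head node;		/* descriptor chain */
       	int tx_cnt;
       	__u32 len;	/* one length replaces src_len/dst_len */
       	dma_addr_t src;
       	dma_addr_t dst;
       	struct dma_async_tx_descriptor async_tx;
       };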
       Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      54a09feb
  12. 31 Jul, 2007 1 commit
  13. 17 Jul, 2007 1 commit
    • dma-mapping: prevent dma dependent code from linking on !HAS_DMA archs · 1b0fac45
      Dan Williams authored
      Continuing the work started in 411f0f3e ...
      
       This enables code with a dma path that compiles away to build without
       requiring additional code factoring.  It also prevents code that calls
       dma_alloc_coherent and dma_free_coherent from linking, whereas previously
       such code would hit a BUG() at run time.  Finally, it allows archs that set
       !HAS_DMA to delete their asm/dma-mapping.h file.
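
       A minimal sketch of that behavior, assuming the usual dma-mapping
       prototypes (the #ifdef arrangement below is illustrative, not the
       literal patch):

       #ifdef CONFIG_HAS_DMA
       #include <asm/dma-mapping.h>
       #else
       /* Declarations only: callers removed by dead-code elimination still
        * compile, while any surviving caller fails at link time instead of
        * reaching a run-time BUG(). */
       void *dma_alloc_coherent(struct device *dev, size_t size,
       			 dma_addr_t *dma_handle, gfp_t flag);
       void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
       		       dma_addr_t dma_handle);
       #endif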
      
      Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: John W. Linville <linville@tuxdriver.com>
      Cc: Kyle McMartin <kyle@parisc-linux.org>
      Cc: James Bottomley <James.Bottomley@SteelEye.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: <geert@linux-m68k.org>
      Cc: <zippel@linux-m68k.org>
      Cc: <spyro@f2s.com>
      Cc: <ysato@users.sourceforge.jp>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1b0fac45
  14. 13 Jul, 2007 5 commits
    • ioatdma: add the unisys "i/oat" pci vendor/device id · 3039f073
      Dan Williams authored
      Cc: John Magolan <john.magolan@unisys.com>
       Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      3039f073
    • dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines · c2110923
      Dan Williams authored
       The Intel(R) IOP series of i/o processors integrates an XScale core with
      raid acceleration engines.  The capabilities per platform are:
      
      iop219:
       (2) copy engines
      iop321:
       (2) copy engines
       (1) xor and block fill engine
      iop33x:
       (2) copy and crc32c engines
       (1) xor, xor zero sum, pq, pq zero sum, and block fill engine
      iop34x (iop13xx):
       (2) copy, crc32c, xor, xor zero sum, and block fill engines
       (1) copy, crc32c, xor, xor zero sum, pq, pq zero sum, and block fill engine
      
      The driver supports the features of the async_tx api:
      * asynchronous notification of operation completion
       * implicit (interrupt triggered) handling of inter-channel transaction
        dependencies
      
       The driver adapts to the platform it is running on by two methods:
      1/ #include <asm/arch/adma.h> which defines the hardware specific
         iop_chan_* and iop_desc_* routines as a series of static inline
         functions
      2/ The private platform data attached to the platform_device defines the
         capabilities of the channels
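
       As a rough sketch of what those two hooks amount to (only the
       iop_chan_*/iop_desc_* naming comes from the description above; the
       specific identifiers, offsets, and fields below are illustrative
       assumptions):

       /* 1/ the arch header provides hardware-specific helpers as static
        *    inlines, along the lines of: */
       static inline u32 iop_chan_get_status(struct iop_adma_chan *chan)
       {
       	return __raw_readl(chan->mmr_base + 0x4 /* status reg, assumed */);
       }

       /* 2/ platform data attached to the platform_device advertises what
        *    each channel can do (field names assumed): */
       struct iop_adma_platform_data {
       	int hw_id;			/* which engine on this platform */
       	dma_cap_mask_t cap_mask;	/* copy, xor, crc32c, ... */
       	size_t pool_size;		/* descriptor pool allocation */
       };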
      
      20070626: Callbacks are run in a tasklet.  Given the recent discussion on
      LKML about killing tasklets in favor of workqueues I did a quick conversion
      of the driver.  Raid5 resync performance dropped from 50MB/s to 30MB/s, so
      the tasklet implementation remains until a generic softirq interface is
      available.
      
      Changelog:
      * fixed a slot allocation bug in do_iop13xx_adma_xor that caused too few
      slots to be requested eventually leading to data corruption
      * enabled the slot allocation routine to attempt to free slots before
      returning -ENOMEM
      * switched the cleanup routine to solely use the software chain and the
      status register to determine if a descriptor is complete.  This is
      necessary to support other IOP engines that do not have status writeback
      capability
      * make the driver iop generic
      * modified the allocation routines to understand allocating a group of
      slots for a single operation
      * added a null xor initialization operation for the xor only channel on
      iop3xx
      * support xor operations on buffers larger than the hardware maximum
      * split the do_* routines into separate prep, src/dest set, submit stages
      * added async_tx support (dependent operations initiation at cleanup time)
      * simplified group handling
      * added interrupt support (callbacks via tasklets)
      * brought the pending depth inline with ioat (i.e. 4 descriptors)
      * drop dma mapping methods, suggested by Chris Leech
      * don't use inline in C files, Adrian Bunk
      * remove static tasklet declarations
      * make iop_adma_alloc_slots easier to read and remove chances for a
        corrupted descriptor chain
      * fix locking bug in iop_adma_alloc_chan_resources, Benjamin Herrenschmidt
      * convert capabilities over to dma_cap_mask_t
      * fixup sparse warnings
      * add descriptor flush before iop_chan_enable
      * checkpatch.pl fixes
      * gpl v2 only correction
      * move set_src, set_dest, submit to async_tx methods
      * move group_list and phys to async_tx
      
      Cc: Russell King <rmk@arm.linux.org.uk>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      c2110923
    • async_tx: add the async_tx api · 9bc89cd8
      Dan Williams authored
      The async_tx api provides methods for describing a chain of asynchronous
      bulk memory transfers/transforms with support for inter-transactional
      dependencies.  It is implemented as a dmaengine client that smooths over
      the details of different hardware offload engine implementations.  Code
      that is written to the api can optimize for asynchronous operation and the
      api will fit the chain of operations to the available offload resources. 
       
      	I imagine that any piece of ADMA hardware would register with the
      	'async_*' subsystem, and a call to async_X would be routed as
      	appropriate, or be run in-line. - Neil Brown
      
      async_tx exploits the capabilities of struct dma_async_tx_descriptor to
      provide an api of the following general format:
      
      struct dma_async_tx_descriptor *
      async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
      			dma_async_tx_callback cb_fn, void *cb_param)
      {
      	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
      	struct dma_device *device = chan ? chan->device : NULL;
      	int int_en = cb_fn ? 1 : 0;
      	struct dma_async_tx_descriptor *tx = device ?
      		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;
      
      	if (tx) { /* run <operation> asynchronously */
      		...
      		tx->tx_set_dest(addr, tx, index);
      		...
      		tx->tx_set_src(addr, tx, index);
      		...
      		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
      	} else { /* run <operation> synchronously */
      		...
      		<operation>
      		...
      		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
      	}
      
      	return tx;
      }
      
      async_tx_find_channel() returns a capable channel from its pool.  The
      channel pool is organized as a per-cpu array of channel pointers.  The
      async_tx_rebalance() routine is tasked with managing these arrays.  In the
      uniprocessor case async_tx_rebalance() tries to spread responsibility
      evenly over channels of similar capabilities.  For example if there are two
      copy+xor channels, one will handle copy operations and the other will
      handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
       operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
       channel0 while cpu1 gets copy channel1 and xor channel1.  When a
      dependency is specified async_tx_find_channel defaults to keeping the
      operation on the same channel.  A xor->copy->xor chain will stay on one
      channel if it supports both operation types, otherwise the transaction will
      transition between a copy and a xor resource.
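
       A condensed sketch of that selection logic (the table layout and
       helper names here are assumptions for illustration, not the literal
       implementation):

       static struct dma_chan *
       async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
       		      enum dma_transaction_type tx_type)
       {
       	/* keep a dependent operation on its parent's channel if that
       	 * channel is capable of the requested operation type */
       	if (depend_tx &&
       	    dma_has_cap(tx_type, depend_tx->chan->device->cap_mask))
       		return depend_tx->chan;

       	/* otherwise use the channel that async_tx_rebalance() assigned
       	 * to this cpu for the operation type (layout assumed) */
       	return per_cpu_ptr(channel_table[tx_type], smp_processor_id())->chan;
       }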
      
      Currently the raid5 implementation in the MD raid456 driver has been
      converted to the async_tx api.  A driver for the offload engines on the
      Intel Xscale series of I/O processors, iop-adma, is provided in a later
      commit.  With the iop-adma driver and async_tx, raid456 is able to offload
      copy, xor, and xor-zero-sum operations to hardware engines.
       
      On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
      improvement) and sequential reads to a degraded array (40 - 55%
      improvement).  For the other cases performance was roughly equal, +/- a few
      percentage points.  On a x86-smp platform the performance of the async_tx
      implementation (in synchronous mode) was also +/- a few percentage points
      of the original implementation.  According to 'top' on iop342 CPU
      utilization drops from ~50% to ~15% during a 'resync' while the speed
      according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
       
      The tiobench command line used for testing was: tiobench --size 2048
      --block 4096 --block 131072 --dir /mnt/raid --numruns 5
      * iop342 had 1GB of memory available
      
      Details:
      * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
        async_tx_find_channel a static inline routine that always returns NULL
      * when a callback is specified for a given transaction an interrupt will
        fire at operation completion time and the callback will occur in a
         tasklet.  If the channel does not support interrupts then a live
        polling wait will be performed
      * the api is written as a dmaengine client that requests all available
        channels
      * In support of dependencies the api implicitly schedules channel-switch
        interrupts.  The interrupt triggers the cleanup tasklet which causes
        pending operations to be scheduled on the next channel
      * Xor engines treat an xor destination address differently than a software
        xor routine.  To the software routine the destination address is an implied
        source, whereas engines treat it as a write-only destination.  This patch
         modifies the xor_blocks routine to take an explicit destination
         address to mirror the hardware (see the prototype sketched below).
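
       The prototype implied by that change would look roughly like this
       (reconstructed from the description, so treat it as an approximation):

       /* dest is a write-only output and is no longer implicitly treated as
        * the first source, matching what xor offload engines expect */
       void xor_blocks(unsigned int src_count, unsigned int bytes,
       		void *dest, void **srcs);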
      
      Changelog:
      * fixed a leftover debug print
      * don't allow callbacks in async_interrupt_cond
      * fixed xor_block changes
      * fixed usage of ASYNC_TX_XOR_DROP_DEST
      * drop dma mapping methods, suggested by Chris Leech
      * printk warning fixups from Andrew Morton
      * don't use inline in C files, Adrian Bunk
      * select the API when MD is enabled
      * BUG_ON xor source counts <= 1
      * implicitly handle hardware concerns like channel switching and
        interrupts, Neil Brown
      * remove the per operation type list, and distribute operation capabilities
        evenly amongst the available channels
      * simplify async_tx_find_channel to optimize the fast path
      * introduce the channel_table_initialized flag to prevent early calls to
        the api
      * reorganize the code to mimic crypto
      * include mm.h as not all archs include it in dma-mapping.h
      * make the Kconfig options non-user visible, Adrian Bunk
      * move async_tx under crypto since it is meant as 'core' functionality, and
        the two may share algorithms in the future
      * move large inline functions into c files
      * checkpatch.pl fixes
      * gpl v2 only correction
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Acked-by: NeilBrown <neilb@suse.de>
      9bc89cd8
    • dmaengine: make clients responsible for managing channels · d379b01e
      Dan Williams authored
      The current implementation assumes that a channel will only be used by one
      client at a time.  In order to enable channel sharing the dmaengine core is
      changed to a model where clients subscribe to channel-available-events.
      Instead of tracking how many channels a client wants and how many it has
       received, the core just broadcasts the available channels and lets the
      clients optionally take a reference.  The core learns about the clients'
      needs at dma_event_callback time.
      
      In support of multiple operation types, clients can specify a capability
      mask to only be notified of channels that satisfy a certain set of
      capabilities.
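
       In rough terms, a client under this model looks like the sketch below;
       the callback signature, state values, and field names are assumptions
       based on the description and the changelog (ack/dup/nak):

       static enum dma_state_client
       my_event_callback(struct dma_client *client, struct dma_chan *chan,
       		  enum dma_state state)
       {
       	switch (state) {
       	case DMA_RESOURCE_AVAILABLE:
       		/* take a reference only if this channel is wanted */
       		return DMA_ACK;
       	case DMA_RESOURCE_REMOVED:
       		return DMA_ACK;	/* drop our reference */
       	default:
       		return DMA_NAK;	/* not interested */
       	}
       }

       static struct dma_client my_client = {
       	.event_callback	= my_event_callback,
       	/* set cap_mask (e.g. dma_cap_set(DMA_MEMCPY, ...)) before
       	 * registering so only suitable channels are reported */
       };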
      
      Changelog:
      * removed DMA_TX_ARRAY_INIT, no longer needed
      * dma_client_chan_free -> dma_chan_release: switch to global reference
        counting only at device unregistration time, before it was also happening
        at client unregistration time
      * clients now return dma_state_client to dmaengine (ack, dup, nak)
      * checkpatch.pl fixes
      * fixup merge with git-ioat
      
      Cc: Chris Leech <christopher.leech@intel.com>
       Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Acked-by: David S. Miller <davem@davemloft.net>
      d379b01e
    • dmaengine: refactor dmaengine around dma_async_tx_descriptor · 7405f74b
      Dan Williams authored
       The current dmaengine interface defines multiple routines per operation,
      i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page etc.  Adding
      more operation types (xor, crc, etc) to this model would result in an
      unmanageable number of method permutations.
      
      	Are we really going to add a set of hooks for each DMA engine
      	whizbang feature?
      		- Jeff Garzik
      
      The descriptor creation process is refactored using the new common
      dma_async_tx_descriptor structure.  Instead of per driver
      do_<operation>_<dest>_to_<src> methods, drivers integrate
      dma_async_tx_descriptor into their private software descriptor and then
      define a 'prep' routine per operation.  The prep routine allocates a
      descriptor and ensures that the tx_set_src, tx_set_dest, tx_submit routines
      are valid.  Descriptor creation and submission becomes:
      
      struct dma_device *dev;
      struct dma_chan *chan;
      struct dma_async_tx_descriptor *tx;
      
      tx = dev->device_prep_dma_<operation>(chan, len, int_flag)
      tx->tx_set_src(dma_addr_t, tx, index /* for multi-source ops */)
      tx->tx_set_dest(dma_addr_t, tx, index)
      tx->tx_submit(tx)
      
      In addition to the refactoring, dma_async_tx_descriptor also lays the
       groundwork for defining cross-channel-operation dependencies, and a
      callback facility for asynchronous notification of operation completion.
      
      Changelog:
      * drop dma mapping methods, suggested by Chris Leech
      * fix ioat_dma_dependency_added, also caught by Andrew Morton
      * fix dma_sync_wait, change from Andrew Morton
      * uninline large functions, change from Andrew Morton
      * add tx->callback = NULL to dmaengine calls to interoperate with async_tx
        calls
      * hookup ioat_tx_submit
      * convert channel capabilities to a 'cpumask_t like' bitmap
      * removed DMA_TX_ARRAY_INIT, no longer needed
      * checkpatch.pl fixes
      * make set_src, set_dest, and tx_submit descriptor specific methods
      * fixup git-ioat merge
      * move group_list and phys to dma_async_tx_descriptor
      
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: Chris Leech <christopher.leech@intel.com>
       Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Acked-by: David S. Miller <davem@davemloft.net>
      7405f74b
  15. 12 Jul, 2007 5 commits
  16. 29 Jun, 2007 1 commit
    • IOATDMA: fix section mismatches · 92504f79
      Randy Dunlap authored
      Rename struct pci_driver data so that false section mismatch warnings won't
      be produced.
      
      Sam, ISTM that depending on variable names is the weakest & worst part of
      modpost section checking.  Should __init_refok work here?  I got build
      errors when I tried to use it, probably because the struct pci_driver probe
      and remove methods are not marked "__init_refok".
      
      WARNING: drivers/dma/ioatdma.o(.data+0x10): Section mismatch: reference to .init.text: (between 'ioat_pci_drv' and 'ioat_pci_tbl')
      WARNING: drivers/dma/ioatdma.o(.data+0x14): Section mismatch: reference to .exit.text: (between 'ioat_pci_drv' and 'ioat_pci_tbl')
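
       Since modpost's whitelisting is name-based (as lamented above), the fix
       amounts to renaming the variable to something modpost recognizes, e.g. a
       name ending in "_driver".  The field values in this sketch are
       illustrative, not quoted from the patch:

       /* before: "ioat_pci_drv" does not match modpost's name patterns, so
        * its references from .data into .init.text/.exit.text are flagged */
       static struct pci_driver ioat_pci_drv = { /* ... */ };

       /* after: a recognized name suppresses the false warning */
       static struct pci_driver ioat_pci_driver = {
       	.name		= "ioatdma",
       	.id_table	= ioat_pci_tbl,
       	.probe		= ioat_probe,
       	.remove		= __devexit_p(ioat_remove),
       };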
       Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
       Acked-by: Chris Leech <christopher.leech@intel.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      92504f79
  17. 10 May, 2007 1 commit
  18. 17 Mar, 2007 1 commit