  1. 11 Dec 2009 (1 commit)
  2. 09 Sep 2009 (5 commits)
    • dmaengine: kill tx_list · 08031727
      Committed by Dan Williams
      The tx_list attribute of struct dma_async_tx_descriptor is common to
      most, but not all, dma driver implementations.  None of the upper-level
      code (dmaengine/async_tx) uses it, so allow drivers to implement it
      locally if they need it.  This saves sizeof(struct list_head) bytes for
      drivers that do not manage descriptors with a linked list (e.g., ioatdma
      v2,3).
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      
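      A minimal user-space sketch of the layout change described above: the shared
      descriptor no longer carries a list head, and a driver that still chains
      descriptors embeds its own link in a private wrapper.  The struct and field
      names here are illustrative stand-ins, not the kernel definitions.

        #include <stdio.h>

        struct core_tx_descriptor {          /* stand-in for dma_async_tx_descriptor */
            int cookie;
            void (*callback)(void *param);
            void *callback_param;
        };

        struct driver_desc {                 /* hypothetical driver-private descriptor */
            struct core_tx_descriptor txd;   /* shared, list-free core */
            struct driver_desc *next;        /* driver-local chaining, only if needed */
        };

        int main(void)
        {
            /* Drivers that never chain descriptors no longer pay for an unused list. */
            printf("core descriptor: %zu bytes\n", sizeof(struct core_tx_descriptor));
            printf("chaining driver descriptor: %zu bytes\n", sizeof(struct driver_desc));
            return 0;
        }
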
    • dmaengine, async_tx: support alignment checks · 83544ae9
      Committed by Dan Williams
      Some engines have transfer size and address alignment restrictions.  Add
      a per-operation alignment property to struct dma_device that the async
      routines and dmatest can use to check alignment capabilities.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
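
      A small sketch of the kind of check this enables, assuming the per-operation
      capability is expressed as a power-of-two shift that source offset, destination
      offset and length must all honour; the structure and helper names are
      illustrative, not the kernel's.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        struct dma_caps {
            unsigned char copy_align;   /* required memcpy alignment, as a log2 shift */
            unsigned char xor_align;    /* required xor alignment, as a log2 shift */
        };

        static bool op_is_aligned(unsigned char shift, size_t src_off,
                                  size_t dst_off, size_t len)
        {
            size_t mask = ((size_t)1 << shift) - 1;

            return ((src_off | dst_off | len) & mask) == 0;
        }

        int main(void)
        {
            struct dma_caps caps = { .copy_align = 2, .xor_align = 3 };

            printf("4-byte aligned copy ok: %d\n",
                   op_is_aligned(caps.copy_align, 0, 8, 64));
            printf("odd-length copy ok:     %d\n",
                   op_is_aligned(caps.copy_align, 0, 8, 65));
            return 0;
        }
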
    • dmaengine: cleanup unused transaction types · 9308add6
      Committed by Dan Williams
      No drivers currently implement these operation types, so they can be
      deleted.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • dmaengine, async_tx: add a "no channel switch" allocator · 138f4c35
      Committed by Dan Williams
      Channel switching is problematic for some dmaengine drivers as the
      architecture precludes separating the ->prep from ->submit.  In these
      cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
      the async_tx allocator to only return channels that support all of the
      required asynchronous operations.
      
      For example, MD_RAID456=y selects support for asynchronous xor, xor
      validate, pq, pq validate, and memcpy.  When
      ASYNC_TX_DISABLE_CHANNEL_SWITCH=y, any channel with all of these
      capabilities is marked DMA_ASYNC_TX, allowing async_tx_find_channel() to
      quickly locate compatible channels with the guarantee that dependency
      chains will remain on one channel.  When
      ASYNC_TX_DISABLE_CHANNEL_SWITCH=n, async_tx_find_channel() may select
      channels that lead to operation chains needing to cross channel
      boundaries using the async_tx channel switch capability.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
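
      The selection rule reads naturally as a capability-mask test; the sketch below
      models it with illustrative bit names (not the kernel's enum
      dma_transaction_type): with channel switching disabled, only a channel carrying
      every required capability is eligible, so a dependency chain never has to hop
      between channels.

        #include <stdbool.h>
        #include <stdio.h>

        enum cap {
            CAP_MEMCPY  = 1 << 0,
            CAP_XOR     = 1 << 1,
            CAP_XOR_VAL = 1 << 2,
            CAP_PQ      = 1 << 3,
            CAP_PQ_VAL  = 1 << 4,
        };

        static bool channel_eligible(unsigned chan_caps, unsigned required)
        {
            return (chan_caps & required) == required;
        }

        int main(void)
        {
            unsigned raid456_needs = CAP_MEMCPY | CAP_XOR | CAP_XOR_VAL |
                                     CAP_PQ | CAP_PQ_VAL;
            unsigned xor_only_chan = CAP_MEMCPY | CAP_XOR | CAP_XOR_VAL;

            printf("xor-only channel eligible: %d\n",
                   channel_eligible(xor_only_chan, raid456_needs));
            printf("full-featured channel eligible: %d\n",
                   channel_eligible(raid456_needs, raid456_needs));
            return 0;
        }
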
    • dmaengine: add fence support · 0403e382
      Committed by Dan Williams
      Some engines optimize operation by reading ahead in the descriptor chain
      such that descriptor2 may start execution before descriptor1 completes.
      If descriptor2 depends on the result from descriptor1 then a fence is
      required (on descriptor2) to disable this optimization.  The async_tx
      API could implicitly identify dependencies via the 'depend_tx'
      parameter, but that would constrain cases where the dependency chain
      only specifies a completion order rather than a data dependency.  So,
      provide an ASYNC_TX_FENCE flag to explicitly identify data dependencies.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
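
      A toy model of the fence semantics (flag and struct names are hypothetical
      stand-ins for ASYNC_TX_FENCE and the descriptor): without the fence the engine
      may start a descriptor before its predecessor completes; with it set on the
      dependent descriptor, read-ahead is disabled.

        #include <stdbool.h>
        #include <stdio.h>

        #define DESC_FLAG_FENCE (1u << 0)   /* models ASYNC_TX_FENCE on descriptor2 */

        struct desc {
            unsigned flags;
            const char *name;
        };

        /* May the engine begin 'next' before its predecessor has fully completed? */
        static bool may_read_ahead(const struct desc *next)
        {
            return !(next->flags & DESC_FLAG_FENCE);
        }

        int main(void)
        {
            struct desc completion_order_only = { .flags = 0, .name = "descriptor2" };
            struct desc data_dependent = { .flags = DESC_FLAG_FENCE,
                                           .name = "descriptor2 (fenced)" };

            printf("%s may start early: %d\n", completion_order_only.name,
                   may_read_ahead(&completion_order_only));
            printf("%s may start early: %d\n", data_dependent.name,
                   may_read_ahead(&data_dependent));
            return 0;
        }
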
  3. 30 Aug 2009 (2 commits)
    • async_tx: add support for asynchronous GF multiplication · b2f46fd8
      Committed by Dan Williams
      [ Based on an original patch by Yuri Tikhonov ]
      
      This adds support for doing asynchronous GF multiplication by adding
      two additional functions to the async_tx API:
      
       - async_gen_syndrome() does simultaneous XOR and Galois field
         multiplication of sources.

       - async_syndrome_val() validates the given source buffers against
         known P and Q values.
      
      When a request is made to run async_pq against more than the hardware
      maximum number of supported sources we need to reuse the previous
      generated P and Q values as sources into the next operation.  Care must
      be taken to remove Q from P' and P from Q'.  For example, to perform a
      5-source pq operation with hardware that only supports 4 sources at a
      time, the following approach is taken:
      
      p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
      p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))
      
      p' = p + q + q + src4 = p + src4
      q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4
      
      Note: 4 is the minimum acceptable maxpq; otherwise we punt to the
      synchronous software path.
      
      The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
      sources (in the above manner) and fill the remaining slots up to maxpq
      with the new sources/coefficients.
      
      Note 1: Some devices have native support for P+Q continuation and can skip
      this extra work.  Devices with this capability can advertise it with
      dma_set_maxpq.  It is up to each driver how to handle the
      DMA_PREP_CONTINUE flag.
      
      Note 2: The API supports disabling the generation of P when generating
      Q; this is ignored by the synchronous path but is implemented by some
      dma devices to save unnecessary writes.  In this case the continuation
      algorithm is simplified to only reuse Q as a source.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
      Signed-off-by: Ilya Yanok <yanok@emcraft.com>
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
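
      The continuation identity above can be checked numerically.  The following is a
      self-contained user-space model (an illustrative sketch, not the kernel's
      async_pq code) that computes P/Q over five sources directly and via the
      two-step continuation, using GF(2^8) with the RAID-6 polynomial 0x11d, and
      asserts that both paths agree.

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NDISKS 5
        #define LEN    64

        static uint8_t gfmul(uint8_t a, uint8_t b)
        {
            uint8_t r = 0;

            while (b) {
                if (b & 1)
                    r ^= a;
                b >>= 1;
                a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);   /* reduce mod 0x11d */
            }
            return r;
        }

        /* p[i] = XOR of the sources, q[i] = sum of coef[k]*src[k][i] over GF(2^8) */
        static void pq(uint8_t *p, uint8_t *q, uint8_t *src[], const uint8_t *coef,
                       int nsrc, int len)
        {
            for (int i = 0; i < len; i++) {
                uint8_t pv = 0, qv = 0;

                for (int k = 0; k < nsrc; k++) {
                    pv ^= src[k][i];
                    qv ^= gfmul(coef[k], src[k][i]);
                }
                p[i] = pv;
                q[i] = qv;
            }
        }

        int main(void)
        {
            static uint8_t d[NDISKS][LEN];
            uint8_t p[LEN], q[LEN], p2[LEN], q2[LEN], pd[LEN], qd[LEN];
            const uint8_t coef5[] = { 0x01, 0x02, 0x04, 0x08, 0x10 };
            const uint8_t cont[]  = { 0x00, 0x01, 0x00, 0x10 };

            for (int k = 0; k < NDISKS; k++)
                for (int i = 0; i < LEN; i++)
                    d[k][i] = (uint8_t)rand();

            /* Reference: P/Q over all five sources in one pass. */
            uint8_t *all[] = { d[0], d[1], d[2], d[3], d[4] };
            pq(pd, qd, all, coef5, 5, LEN);

            /* Continuation: four sources first, then reuse p and q as sources. */
            uint8_t *first4[] = { d[0], d[1], d[2], d[3] };
            pq(p, q, first4, coef5, 4, LEN);

            uint8_t *step2[] = { p, q, q, d[4] };
            pq(p2, q2, step2, cont, 4, LEN);

            for (int i = 0; i < LEN; i++)
                assert(p2[i] == pd[i] && q2[i] == qd[i]);
            puts("continuation matches the direct P/Q computation");
            return 0;
        }
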
    • async_tx: add sum check flags · ad283ea4
      Committed by Dan Williams
      Replace the flat zero_sum_result with a collection of flags to contain
      the P (xor) zero-sum result and the soon-to-be-utilized Q (RAID-6
      Reed-Solomon syndrome) zero-sum result.  Use the SUM_CHECK_ namespace
      instead of DMA_ since these flags will be used on platforms without
      dma zero-sum support.
      Reviewed-by: Andre Noll <maan@systemlinux.org>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
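
      A sketch of the flag style this describes; the two result-bit names follow the
      message, but the exact values and surrounding types are assumptions.

        #include <stdio.h>

        /* A set bit means the corresponding zero-sum check failed. */
        enum sum_check_flags {
            SUM_CHECK_P_RESULT = 1 << 0,   /* P (xor) parity mismatch */
            SUM_CHECK_Q_RESULT = 1 << 1,   /* Q (Reed-Solomon syndrome) mismatch */
        };

        int main(void)
        {
            unsigned result = SUM_CHECK_Q_RESULT;   /* e.g. only the Q check failed */

            printf("P ok: %d, Q ok: %d\n",
                   !(result & SUM_CHECK_P_RESULT),
                   !(result & SUM_CHECK_Q_RESULT));
            return 0;
        }
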
  4. 13 May 2009 (1 commit)
  5. 09 Apr 2009 (1 commit)
  6. 27 Mar 2009 (1 commit)
  7. 26 Mar 2009 (2 commits)
  8. 19 Feb 2009 (1 commit)
  9. 12 Feb 2009 (1 commit)
  10. 07 Feb 2009 (1 commit)
  11. 20 Jan 2009 (2 commits)
  12. 11 Jan 2009 (1 commit)
  13. 07 Jan 2009 (12 commits)
  14. 06 Jan 2009 (1 commit)
  15. 18 Jul 2008 (1 commit)
  16. 09 Jul 2008 (4 commits)
    • dmaengine: Add slave DMA interface · dc0ee643
      Committed by Haavard Skinnemoen
      This patch adds the necessary interfaces to the DMA Engine framework
      to use functionality found on most embedded DMA controllers: DMA from
      and to I/O registers with hardware handshaking.
      
      In this context, hardware handshaking means that the peripheral that
      owns the I/O registers in question is able to tell the DMA controller
      when more data is available for reading, or when there is room for
      more data to be written. This usually happens internally on the chip,
      but these signals may also be exported outside the chip for things
      like IDE DMA, etc.
      
      A new struct dma_slave is introduced. This contains information that
      the DMA engine driver needs to set up slave transfers to and from a
      slave device. Most engines supporting DMA slave transfers will want to
      extend this structure with controller-specific parameters.  This
      additional information is usually passed from the platform/board code
      through the client driver.
      
      A "slave" pointer is added to the dma_client struct. This must point
      to a valid dma_slave structure iff the DMA_SLAVE capability is
      requested.  The DMA engine driver may use this information in its
      device_alloc_chan_resources hook to configure the DMA controller for
      slave transfers from and to the given slave device.
      
      A new operation for preparing slave DMA transfers is added to struct
      dma_device. This takes a scatterlist and returns a single descriptor
      representing the whole transfer.
      
      Another new operation for terminating all pending transfers is added as
      well. The latter is needed because there may be errors outside the scope
      of the DMA Engine framework that may require DMA operations to be
      terminated prematurely.
      
      DMA Engine drivers may extend the dma_device, dma_chan and/or
      dma_slave_descriptor structures to allow controller-specific
      operations. The client driver can detect such extensions by looking at
      the DMA Engine's struct device, or it can request a specific DMA
      Engine device by setting the dma_dev field in struct dma_slave.
      
      dmaslave interface changes since v4:
        * Fix checkpatch errors
        * Fix changelog (there are no slave descriptors anymore)
      
      dmaslave interface changes since v3:
        * Use dma_data_direction instead of a new enum
        * Submit slave transfers as scatterlists
        * Remove the DMA slave descriptor struct
      
      dmaslave interface changes since v2:
        * Add a dma_dev field to struct dma_slave. If set, the client can
          only be bound to the DMA controller that corresponds to this
          device.  This allows controller-specific extensions of the
          dma_slave structure; if the device matches, the controller may
          safely assume its extensions are present.
        * Move reg_width into struct dma_slave as there are currently no
          users that need to be able to set the width on a per-transfer
          basis.
      
      dmaslave interface changes since v1:
        * Drop the set_direction and set_width descriptor hooks. Pass the
          direction and width to the prep function instead.
        * Declare a dma_slave struct with fixed information about a slave,
          i.e. register addresses, handshake interfaces and such.
        * Add pointer to a dma_slave struct to dma_client. Can be NULL if
          the DMA_SLAVE capability isn't requested.
        * Drop the set_slave device hook since the alloc_chan_resources hook
          now has enough information to set up the channel for slave
          transfers.
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
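
      An illustrative user-space model of the structures this changelog describes:
      fixed per-slave information (I/O register addresses, access width, optional
      binding to a specific controller) plus a hypothetical controller-specific
      extension.  Field and type names are assumptions drawn from the text above,
      not the exact kernel definitions.

        #include <stdint.h>
        #include <stdio.h>

        struct device;                      /* opaque stand-in for struct device */

        struct dma_slave {
            struct device *dma_dev;         /* if set, bind only to this controller */
            uint64_t tx_reg;                /* I/O register written on mem-to-device moves */
            uint64_t rx_reg;                /* I/O register read on device-to-mem moves */
            unsigned int reg_width;         /* register access width in bytes */
        };

        struct my_controller_slave {        /* hypothetical controller extension */
            struct dma_slave slave;
            unsigned int handshake_id;      /* hardware handshake line for this slave */
        };

        int main(void)
        {
            struct my_controller_slave uart_rx = {
                .slave = { .dma_dev = NULL, .rx_reg = 0xffe00c00, .reg_width = 1 },
                .handshake_id = 3,
            };

            printf("slave register width: %u byte(s), handshake line %u\n",
                   uart_rx.slave.reg_width, uart_rx.handshake_id);
            return 0;
        }
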
    • dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap · e1d181ef
      Committed by Dan Williams
      In some cases client code may need the dma-driver to skip the unmap of source
      and/or destination buffers.  Setting these flags indicates to the driver to
      skip the unmap step.  In this regard, async_xor is currently broken in
      that it allows the destination buffer to be unmapped while an operation
      is still in progress, i.e., when the number of sources exceeds the
      hardware channel's maximum (fixed in a subsequent patch).
      Acked-by: Saeed Bishara <saeed@marvell.com>
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
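
      A sketch of how a driver's completion path might honour these flags; the flag
      names come from the commit title, while the bit values and unmap helper here
      are stand-ins.

        #include <stdio.h>

        #define DMA_COMPL_SKIP_SRC_UNMAP  (1u << 0)   /* bit positions assumed */
        #define DMA_COMPL_SKIP_DEST_UNMAP (1u << 1)

        static void unmap_buffer(const char *which)
        {
            printf("unmapping %s buffer\n", which);
        }

        static void complete_descriptor(unsigned flags)
        {
            if (!(flags & DMA_COMPL_SKIP_SRC_UNMAP))
                unmap_buffer("source");
            if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP))
                unmap_buffer("destination");
        }

        int main(void)
        {
            /* An async_xor-style caller keeps the destination mapped across a chain. */
            complete_descriptor(DMA_COMPL_SKIP_DEST_UNMAP);
            return 0;
        }
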
    • dmaengine: Add dma_client parameter to device_alloc_chan_resources · 848c536a
      Committed by Haavard Skinnemoen
      A DMA controller capable of doing slave transfers may need to know a
      few things about the slave when preparing the channel. We don't want
      to add this information to struct dma_channel since the channel hasn't
      yet been bound to a client at this point.
      
      Instead, pass a reference to the client requesting the channel to the
      driver's device_alloc_chan_resources hook so that it can pick the
      necessary information from the dma_client struct by itself.
      
      [dan.j.williams@intel.com: fixed up fsldma and mv_xor]
      Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
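
      A simplified model of the hook change (types trimmed down, driver body
      hypothetical): the allocation callback now receives the requesting client, so
      a slave-capable driver can inspect client->slave while setting the channel up.

        #include <stdio.h>

        struct dma_slave { unsigned int reg_width; };

        struct dma_client {
            struct dma_slave *slave;        /* non-NULL only for DMA_SLAVE requests */
        };

        struct dma_chan {
            int (*alloc_chan_resources)(struct dma_chan *chan,
                                        struct dma_client *client);
        };

        static int my_alloc_chan_resources(struct dma_chan *chan,
                                           struct dma_client *client)
        {
            if (client && client->slave)
                printf("configuring channel for %u-byte slave accesses\n",
                       client->slave->reg_width);
            else
                printf("configuring channel for memory-to-memory use\n");
            return 1;                       /* e.g. number of descriptors allocated */
        }

        int main(void)
        {
            struct dma_slave s = { .reg_width = 2 };
            struct dma_client memcpy_client = { .slave = NULL };
            struct dma_client slave_client  = { .slave = &s };
            struct dma_chan chan = { .alloc_chan_resources = my_alloc_chan_resources };

            chan.alloc_chan_resources(&chan, &memcpy_client);
            chan.alloc_chan_resources(&chan, &slave_client);
            return 0;
        }
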
    • dmaengine: track the number of clients using a channel · 7cc5bf9a
      Committed by Dan Williams
      Haavard's dma-slave interface would like to test for exclusive access to a
      channel.  The standard channel refcounting is not sufficient in that it
      tracks more than just client references; it is also inaccurate, as
      reference counts are percpu until the channel is removed.
      
      This change also enables a future fix to deallocate resources when a client
      declines to use a capable channel.
      Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
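
      A toy model of the bookkeeping (names illustrative): with an exact per-channel
      client count, the exclusive-access test the slave interface needs becomes a
      simple comparison.

        #include <stdbool.h>
        #include <stdio.h>

        struct dma_chan {
            int client_count;               /* exact number of clients on the channel */
        };

        static bool grab_exclusive(struct dma_chan *chan)
        {
            if (chan->client_count)
                return false;               /* someone else already uses the channel */
            chan->client_count++;
            return true;
        }

        int main(void)
        {
            struct dma_chan chan = { .client_count = 0 };

            printf("first exclusive grab:  %d\n", grab_exclusive(&chan));
            printf("second exclusive grab: %d\n", grab_exclusive(&chan));
            return 0;
        }
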
  17. 22 Apr 2008 (1 commit)
  18. 18 Apr 2008 (2 commits)