1. 08 Jun, 2012 (1 commit)
  2. 01 Jun, 2012 (1 commit)
  3. 11 May, 2012 (1 commit)
  4. 06 Apr, 2012 (1 commit)
  5. 21 Mar, 2012 (2 commits)
  6. 13 Mar, 2012 (2 commits)
  7. 05 Mar, 2012 (1 commit)
    •
      BUG: headers with BUG/BUG_ON etc. need linux/bug.h · 187f1882
      Committed by Paul Gortmaker
      If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
      other BUG variant in a static inline (i.e. not in a #define) then
      that header really should be including <linux/bug.h> and not just
      expecting it to be implicitly present.
      
      We can make this change risk-free, since if the files using these
      headers didn't have exposure to linux/bug.h already, they would have
      been causing compile failures/warnings.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      187f1882
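The rule above can be illustrated with a small userspace analog; `assert()` stands in for `BUG_ON()`, and `checked_div` is a made-up example, not code from the patch:

```c
#include <assert.h>  /* explicit: this file itself uses assert() */

/* Userspace analog of the kernel rule: a file whose static inline
 * code calls assert()/BUG_ON() must include the header that defines
 * it, instead of relying on some other include pulling it in first.
 * If assert.h were only implicitly present, removing an unrelated
 * include elsewhere would break this compilation unit. */
static inline int checked_div(int num, int den)
{
    assert(den != 0);  /* kernel code would write BUG_ON(den == 0) */
    return num / den;
}
```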
  8. 22 Feb, 2012 (1 commit)
  9. 24 Dec, 2011 (1 commit)
    •
      dmaengine: add DMA_TRANS_NONE to dma_transfer_direction · 62268ce9
      Committed by Shawn Guo
      Before dma_transfer_direction was introduced to replace
      dma_data_direction, some dmaengine devices used DMA_NONE from
      dma_data_direction to communicate with their client drivers.
      The mxs-dma driver and its clients mxs-mmc and gpmi-nand are one
      such case.
      
      This patch adds DMA_TRANS_NONE to dma_transfer_direction and
      migrates the DMA_NONE usage in mxs-dma to it.
      
      It also fixes the compile warning below.
      
      CC      drivers/dma/mxs-dma.o
      drivers/dma/mxs-dma.c: In function ‘mxs_dma_prep_slave_sg’:
      drivers/dma/mxs-dma.c:420:16: warning: comparison between ‘enum dma_transfer_direction’ and ‘enum dma_data_direction’
      Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
      62268ce9
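The warning this commit fixes comes from comparing values of two distinct enum types. A small sketch of the idea; the `dma_data_direction` values match the kernel's, but treat the rest as an illustrative reconstruction rather than the exact kernel header:

```c
/* The old enum, used for DMA mapping direction. */
enum dma_data_direction {
    DMA_BIDIRECTIONAL = 0,
    DMA_TO_DEVICE = 1,
    DMA_FROM_DEVICE = 2,
    DMA_NONE = 3,
};

/* The new transfer-direction enum. Adding a dedicated DMA_TRANS_NONE
 * member means a driver checking for "no transfer" never has to
 * compare a dma_transfer_direction value against DMA_NONE from the
 * other enum, which is what triggered the -Wenum-compare warning. */
enum dma_transfer_direction {
    DMA_MEM_TO_MEM,
    DMA_MEM_TO_DEV,
    DMA_DEV_TO_MEM,
    DMA_DEV_TO_DEV,
    DMA_TRANS_NONE,  /* the member this commit adds */
};

/* With DMA_TRANS_NONE, the check stays within one enum type: */
static inline int is_no_transfer(enum dma_transfer_direction dir)
{
    return dir == DMA_TRANS_NONE;
}
```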
  10. 18 Nov, 2011 (1 commit)
    •
      DMAEngine: Define interleaved transfer request api · b14dab79
      Committed by Jassi Brar
      Define a new API that can be used for doing fancy data transfers,
      such as interleaved-to-contiguous copy and vice versa.
      Traditional SG-list based transfers tend to be very inefficient in
      cases where the interleave and chunk are only a few bytes, which
      calls for a very condensed API to convey the pattern of the transfer.
      This API supports all 4 variants of scatter-gather and contiguous
      transfer.
      
      Of course, this API cannot help transfers that don't lend themselves
      to DMA by nature, i.e., scattered tiny reads/writes with no periodic
      pattern.
      
      Also, since we now support SLAVE channels that might provide a
      device_prep_interleaved_dma callback instead of device_prep_slave_sg,
      remove the BUG_ON check.
      Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
      Acked-by: Barry Song <Baohua.Song@csr.com>
      [renamed dmaxfer_template to dma_interleaved_template
       did fixup after the enum dma_transfer_merge]
      Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
      b14dab79
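A simplified model of such an interleaved-transfer descriptor, assuming the general frame/chunk/gap scheme the commit describes; the field names here are simplified stand-ins, not the kernel's dma_interleaved_template definition:

```c
#include <stddef.h>

/* Each frame is a list of chunks; each chunk of payload is followed
 * by an inter-chunk gap (icg) that the engine skips. A whole
 * transfer is numf repetitions of the frame pattern, so a few words
 * describe what an SG list would need numf * frame_size entries for. */
struct chunk {
    size_t size;  /* bytes to copy */
    size_t icg;   /* bytes to skip after this chunk */
};

struct interleaved_xfer {
    size_t numf;               /* number of frames to transfer */
    size_t frame_size;         /* chunks per frame */
    const struct chunk *sgl;   /* the per-frame chunk pattern */
};

/* Address span covered by one frame on the interleaved side,
 * including the skipped gaps. */
static size_t frame_span(const struct interleaved_xfer *x)
{
    size_t span = 0;
    for (size_t i = 0; i < x->frame_size; i++)
        span += x->sgl[i].size + x->sgl[i].icg;
    return span;
}
```

For example, extracting two 4-byte samples out of every 32-byte frame needs only one two-chunk pattern plus a frame count.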
  11. 01 Nov, 2011 (1 commit)
    •
      linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h · a8efa9d6
      Committed by Paul Gortmaker
      The implicit presence of module.h and all its sub-includes was
      masking these implicit header usages:
      
      include/linux/dmaengine.h:684: warning: 'struct page' declared inside parameter list
      include/linux/dmaengine.h:684: warning: its scope is only this definition or declaration, which is probably not what you want
      include/linux/dmaengine.h:687: warning: 'struct page' declared inside parameter list
      include/linux/dmaengine.h:736:2: error: implicit declaration of function 'bitmap_zero'
      
      With input from Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      a8efa9d6
  12. 27 Oct, 2011 (1 commit)
  13. 16 Aug, 2011 (1 commit)
  14. 08 Aug, 2011 (1 commit)
  15. 22 Jun, 2011 (1 commit)
    •
      net: remove mm.h inclusion from netdevice.h · b7f080cf
      Committed by Alexey Dobriyan
      Remove the linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually).
      
      To prevent mm.h inclusion via other channels, also extract the
      "enum dma_data_direction" definition into a separate header. This tiny
      piece is what glues netdevice.h to mm.h via
      "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h".
      Removing mm.h from scatterlist.h was tried and found not feasible
      on most archs, so the link was cut off earlier in the chain.
      
      Hopefully people are OK with the tiny include file.
      
      Note that mm_types.h is still dragged in, but that is a separate story.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b7f080cf
  16. 31 Mar, 2011 (1 commit)
  17. 15 Jan, 2011 (1 commit)
  18. 03 Jan, 2011 (1 commit)
  19. 08 Oct, 2010 (3 commits)
    •
      async_tx: make async_tx channel switching opt-in · 5fc6d897
      Committed by Dan Williams
      The majority of drivers in drivers/dma/ will never establish cross
      channel operation chains and do not need the extra overhead in struct
      dma_async_tx_descriptor.  Make channel switching opt-in by default.
      
      Cc: Anatolij Gustschin <agust@denx.de>
      Cc: Ira Snyder <iws@ovro.caltech.edu>
      Cc: Linus Walleij <linus.walleij@stericsson.com>
      Cc: Saeed Bishara <saeed@marvell.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      5fc6d897
    •
      fsldma: improved DMA_SLAVE support · 968f19ae
      Committed by Ira Snyder
      Now that the generic DMAEngine API has support for scatterlist to
      scatterlist copying, the device_prep_slave_sg() portion of the
      DMA_SLAVE API is no longer necessary and has been removed.
      
      However, the device_control() portion of the DMA_SLAVE API is still
      useful to control device specific parameters, such as externally
      controlled DMA transfers and maximum burst length.
      
      A special dma_ctrl_cmd has been added to enable externally controlled
      DMA transfers. This is currently specific to the Freescale DMA
      controller, but can easily be made generic when another user is found.
      Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      968f19ae
    •
      dma: add support for scatterlist to scatterlist copy · a86ee03c
      Committed by Ira Snyder
      This adds support for scatterlist to scatterlist DMA transfers. A
      similar interface is exposed by the fsldma driver (through the DMA_SLAVE
      API) and by the ste_dma40 driver (through an exported function).
      
      This patch paves the way for making this type of copy operation a part
      of the generic DMAEngine API. Further patches will add support in
      individual drivers.
      Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      a86ee03c
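The core of a scatterlist-to-scatterlist copy is pairing up segments of unequal length from both lists. A CPU-side sketch of that pairing loop, with a plain `struct seg` standing in for the kernel's `struct scatterlist`:

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for struct scatterlist: one contiguous segment. */
struct seg {
    void *buf;
    size_t len;
};

/* Walk both lists in lockstep, each step copying
 * min(remaining src segment, remaining dst segment) bytes, then
 * advancing whichever segment was exhausted. A DMA driver builds
 * one hardware descriptor per step instead of calling memcpy. */
static void sg_to_sg_copy(struct seg *dst, size_t dst_n,
                          struct seg *src, size_t src_n)
{
    size_t di = 0, si = 0, doff = 0, soff = 0;

    while (di < dst_n && si < src_n) {
        size_t d_rem = dst[di].len - doff;
        size_t s_rem = src[si].len - soff;
        size_t n = d_rem < s_rem ? d_rem : s_rem;

        memcpy((char *)dst[di].buf + doff,
               (char *)src[si].buf + soff, n);
        doff += n;
        soff += n;
        if (doff == dst[di].len) { di++; doff = 0; }
        if (soff == src[si].len) { si++; soff = 0; }
    }
}
```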
  20. 06 Oct, 2010 (2 commits)
  21. 23 Sep, 2010 (1 commit)
  22. 05 Aug, 2010 (1 commit)
  23. 18 May, 2010 (2 commits)
  24. 27 Mar, 2010 (3 commits)
    •
      dmaengine: provide helper for setting txstate · bca34692
      Committed by Dan Williams
      A simple conditional struct filler to cut out some duplicated code.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      bca34692
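The shape of such a conditional struct filler, in the spirit of this commit's helper; the names approximate the kernel's dma_set_tx_state, so treat the exact signature as an assumption:

```c
#include <stddef.h>

typedef int dma_cookie_t;

struct tx_state {
    dma_cookie_t last;     /* last completed cookie */
    dma_cookie_t used;     /* last issued cookie */
    unsigned int residue;  /* bytes remaining in current transfer */
};

/* Callers that don't care about detailed status pass NULL and the
 * helper does nothing, so every driver's status callback drops its
 * own duplicated "if (state) { ... }" block. */
static inline void set_tx_state(struct tx_state *st, dma_cookie_t last,
                                dma_cookie_t used, unsigned int residue)
{
    if (st) {  /* fill only when the caller supplied storage */
        st->last = last;
        st->used = used;
        st->residue = residue;
    }
}
```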
    •
      DMAENGINE: generic channel status v2 · 07934481
      Committed by Linus Walleij
      Convert the device_is_tx_complete() operation on the
      DMA engine to a generic device_tx_status() operation which
      can return three states: DMA_TX_RUNNING, DMA_TX_COMPLETE,
      and DMA_TX_PAUSED.
      
      [dan.j.williams@intel.com: update for timberdale]
      Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
      Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Li Yang <leoli@freescale.com>
      Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Cc: Magnus Damm <damm@opensource.se>
      Cc: Liam Girdwood <lrg@slimlogic.co.uk>
      Cc: Joe Perches <joe@perches.com>
      Cc: Roland Dreier <rdreier@cisco.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      07934481
    •
      DMAENGINE: generic slave control v2 · c3635c78
      Committed by Linus Walleij
      Convert the device_terminate_all() operation on the
      DMA engine to a generic device_control() operation
      which can now optionally also support pausing and
      resuming DMA on a given channel. Implemented for the
      COH 901 318 DMAC as an example.
      
      [dan.j.williams@intel.com: update for timberdale]
      Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
      Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Li Yang <leoli@freescale.com>
      Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Cc: Magnus Damm <damm@opensource.se>
      Cc: Liam Girdwood <lrg@slimlogic.co.uk>
      Cc: Joe Perches <joe@perches.com>
      Cc: Roland Dreier <rdreier@cisco.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      c3635c78
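The pattern this commit introduces is one control callback taking a command enum rather than a dedicated function per action. A minimal sketch, with illustrative names rather than the exact kernel API:

```c
enum ctrl_cmd {
    CMD_TERMINATE_ALL,
    CMD_PAUSE,
    CMD_RESUME,
};

struct channel {
    int paused;
    int queued;  /* descriptors outstanding */
};

/* Returns 0 on success, -1 for an unsupported command, so a driver
 * may implement only a subset: terminate stays mandatory while
 * pause/resume remain optional, and new commands can be added
 * without changing every driver's operations table. */
static int device_control(struct channel *ch, enum ctrl_cmd cmd)
{
    switch (cmd) {
    case CMD_TERMINATE_ALL:
        ch->queued = 0;
        ch->paused = 0;
        return 0;
    case CMD_PAUSE:
        ch->paused = 1;
        return 0;
    case CMD_RESUME:
        ch->paused = 0;
        return 0;
    }
    return -1;
}
```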
  25. 01 Mar, 2010 (1 commit)
  26. 17 Feb, 2010 (1 commit)
    •
      percpu: add __percpu sparse annotations to what's left · a29d8b8e
      Committed by Tejun Heo
      Add __percpu sparse annotations to places which didn't make it in one
      of the previous patches.  All conversions are trivial.
      
      These annotations are to make sparse consider percpu variables to be
      in a different address space and warn if accessed without going
      through percpu accessors.  This patch doesn't affect normal builds.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Neil Brown <neilb@suse.de>
      a29d8b8e
  27. 11 Dec, 2009 (1 commit)
  28. 09 Sep, 2009 (5 commits)
    •
      dmaengine: kill tx_list · 08031727
      Committed by Dan Williams
      The tx_list attribute of struct dma_async_tx_descriptor is common to
      most, but not all dma driver implementations.  None of the upper level
      code (dmaengine/async_tx) uses it, so allow drivers to implement it
      locally if they need it.  This saves sizeof(struct list_head) bytes for
      drivers that do not manage descriptors with a linked list (e.g.: ioatdma
      v2,3).
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      
      08031727
    •
      dmaengine, async_tx: support alignment checks · 83544ae9
      Committed by Dan Williams
      Some engines have transfer size and address alignment restrictions.  Add
      a per-operation alignment property to struct dma_device that the async
      routines and dmatest can use to check alignment capabilities.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      83544ae9
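A sketch of such an alignment check: the device advertises its requirement as a power-of-two shift (shift 2 means 4-byte alignment), and callers verify source, destination, and length against it before offloading. The kernel stores similar per-operation shifts; this helper is a simplified stand-in:

```c
#include <stddef.h>
#include <stdint.h>

/* All three quantities must share the required alignment, so OR them
 * together and test the low bits in one pass. */
static int is_dma_aligned(unsigned int align_shift,
                          uintptr_t src, uintptr_t dst, size_t len)
{
    uintptr_t mask = ((uintptr_t)1 << align_shift) - 1;
    return ((src | dst | (uintptr_t)len) & mask) == 0;
}
```

Storing the requirement as a shift keeps the capability to one byte per operation type while still supporting any power-of-two boundary.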
    •
      dmaengine: cleanup unused transaction types · 9308add6
      Committed by Dan Williams
      No drivers currently implement these operation types, so they can be
      deleted.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      9308add6
    •
      dmaengine, async_tx: add a "no channel switch" allocator · 138f4c35
      Committed by Dan Williams
      Channel switching is problematic for some dmaengine drivers as the
      architecture precludes separating the ->prep from ->submit.  In these
      cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
      the async_tx allocator to only return channels that support all of the
      required asynchronous operations.
      
      For example MD_RAID456=y selects support for asynchronous xor, xor
      validate, pq, pq validate, and memcpy.  When
      ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
      capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
      quickly locate compatible channels with the guarantee that dependency
      chains will remain on one channel.  When
      ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
      channels that lead to operation chains that need to cross channel
      boundaries using the async_tx channel switch capability.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      138f4c35
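The capability filtering described above reduces to a bitmask cover test: each required operation is a bit, a channel advertises a mask, and a channel qualifies for dependency chains only if it covers every required bit. The bit names below are illustrative, not the kernel's dma_transaction_type values:

```c
enum {
    CAP_MEMCPY  = 1 << 0,
    CAP_XOR     = 1 << 1,
    CAP_XOR_VAL = 1 << 2,
    CAP_PQ      = 1 << 3,
    CAP_PQ_VAL  = 1 << 4,
};

/* With channel switching disabled, an operation chain must stay on
 * one channel, so only channels supporting all required ops qualify. */
static int channel_covers(unsigned int chan_caps, unsigned int required)
{
    return (chan_caps & required) == required;
}
```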
    •
      dmaengine: add fence support · 0403e382
      Committed by Dan Williams
      Some engines optimize operation by reading ahead in the descriptor chain
      such that descriptor2 may start execution before descriptor1 completes.
      If descriptor2 depends on the result from descriptor1 then a fence is
      required (on descriptor2) to disable this optimization.  The async_tx
      api could implicitly identify dependencies via the 'depend_tx'
      parameter, but that would constrain cases where the dependency chain
      only specifies a completion order rather than a data dependency.  So,
      provide an ASYNC_TX_FENCE to explicitly identify data dependencies.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      0403e382
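The fence is a per-descriptor submit flag telling a read-ahead engine not to start a descriptor until its predecessor's result is visible. A sketch of the ordering decision; the flag name mirrors ASYNC_TX_FENCE, while the struct and helper are illustrative:

```c
#include <stddef.h>

enum submit_flags {
    TX_FENCE = 1 << 0,  /* descriptor consumes its parent's result */
};

struct descriptor {
    unsigned int flags;
    struct descriptor *parent;  /* dependency, if any */
};

/* The engine may overlap execution with the parent only when no
 * fence is set: a bare dependency orders completion, a fenced one
 * also orders data visibility. */
static int may_start_early(const struct descriptor *d)
{
    return d->parent == NULL || !(d->flags & TX_FENCE);
}
```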