1. 04 Aug 2016, 1 commit
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Authored by Krzysztof Kozlowski
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer, so the pointer could point to const data.
      However, the attributes do not have to be a bitfield at all; a plain
      unsigned long will do fine:
      
      1. This is just simpler, both for reading the code and for setting
         attributes: instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), callers just set
         the bits.
      
      2. It brings safety and const-correctness checking, because the
         attributes are now passed by value (see the sketch below).
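
      As a rough illustration of the calling-convention change, here is a
      minimal before/after sketch; each half compiles only against its
      respective tree. The alloc_wc_buffer_*() wrappers and their arguments
      are hypothetical, while DEFINE_DMA_ATTRS(), dma_set_attr(),
      dma_alloc_attrs() and DMA_ATTR_WRITE_COMBINE are the dma-mapping
      interfaces the conversion touches:

          #include <linux/dma-mapping.h>

          /* Before the change: attributes live in a struct dma_attrs on
           * the stack, are set via a helper, and are passed by pointer. */
          static void *alloc_wc_buffer_old(struct device *dev, size_t size,
                                           dma_addr_t *handle)
          {
                  DEFINE_DMA_ATTRS(attrs);

                  dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
                  return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                         &attrs);
          }

          /* After the change: attributes are plain bits in an unsigned
           * long, OR-ed together right at the call site. */
          static void *alloc_wc_buffer_new(struct device *dev, size_t size,
                                           dma_addr_t *handle)
          {
                  return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                                         DMA_ATTR_WRITE_COMBINE);
          }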
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
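
      These rules are Coccinelle semantic patches. Assuming they were saved
      as, say, dma-attrs.cocci (a hypothetical file name), they could be
      applied tree-wide with an invocation along the lines of:

          spatch --sp-file dma-attrs.cocci --all-includes --in-place --dir .

      The // Options: --all-includes comment above records the extra flag
      the second rule needs in order to also match prototypes in headers.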
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 21 Jul 2016, 1 commit
  3. 30 May 2016, 1 commit
  4. 19 Mar 2016, 5 commits
  5. 21 Jan 2016, 1 commit
  6. 21 Aug 2015, 1 commit
  7. 20 Aug 2015, 1 commit
    • ARCv2: Support IO Coherency and permutations involving L1 and L2 caches · f2b0b25a
      Authored by Alexey Brodkin
      On an ARCv2 CPU, the following configurations can affect cache
      handling for data exchanged with peripherals via DMA:
       [1] Only L1 cache exists
       [2] Both L1 and L2 exist, but no IO coherency unit
       [3] L1, L2 caches and IO coherency unit exist
      
      The current implementation takes care of [1] and [2]. Moreover,
      support for [2] is implemented with a run-time check for SLC
      existence, which is not optimal.
      
      This patch introduces support for [3] and reworks how the DMA ops
      are used. Instead of doing a run-time check every time a particular
      DMA op is executed, we'll have 3 different implementations of the
      DMA ops and select the appropriate one during init.
      
      As for IOC support, we need to:
       [a] Implement empty DMA ops, because the IOC takes care of cache
           coherency for DMAed data
       [b] Route dma_alloc_coherent() via dma_alloc_noncoherent(). This is
           required to make the IOC work in the first place, and it also
           serves as an optimization, since LD/ST to coherent buffers can
           be serviced from the caches without going all the way to memory.
      A sketch of both points follows this list.
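
      As a loose sketch of both points (the function and variable names
      below are illustrative, not the actual arch/arc code; struct
      dma_map_ops and dma_alloc_noncoherent() are the kernel interfaces
      of that era):

          #include <linux/dma-mapping.h>

          /* Three ops variants, one per cache configuration; assumed to
           * be filled in elsewhere for this sketch. */
          extern struct dma_map_ops dma_ops_l1_only;   /* [1] */
          extern struct dma_map_ops dma_ops_l1_slc;    /* [2] */
          extern struct dma_map_ops dma_ops_ioc;       /* [3] */

          /* [a] With an IOC the hardware snoops DMA traffic, so the
           * per-operation cache-maintenance hooks become no-ops. */
          static void ioc_dma_sync_single_for_cpu(struct device *dev,
                          dma_addr_t handle, size_t size,
                          enum dma_data_direction dir)
          {
                  /* nothing to do: the IOC keeps caches coherent */
          }

          /* [b] Coherent allocations can stay cached: route them through
           * the noncoherent (cached) allocator instead of remapping the
           * buffer uncached. */
          static void *ioc_dma_alloc_coherent(struct device *dev,
                          size_t size, dma_addr_t *handle, gfp_t gfp)
          {
                  return dma_alloc_noncoherent(dev, size, handle, gfp);
          }

          /* One implementation is picked once, at init, instead of
           * re-checking the cache configuration in every DMA op. */
          static struct dma_map_ops *arc_dma_ops;

          static void __init arc_dma_init(bool ioc, bool slc)
          {
                  if (ioc)
                          arc_dma_ops = &dma_ops_ioc;
                  else if (slc)
                          arc_dma_ops = &dma_ops_l1_slc;
                  else
                          arc_dma_ops = &dma_ops_l1_only;
          }
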
      Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      [vgupta:
        - Added some comments about IOC gains
        - Marked dma ops as static
        - Massaged changelog a bit]
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  8. 06 Jul 2015, 1 commit
  9. 25 Jun 2015, 1 commit
  10. 19 Jun 2015, 1 commit
  11. 16 Feb 2013, 1 commit