1. 15 Jan 2018 (2 commits)
  2. 28 Jun 2017 (1 commit)
  3. 25 Jan 2017 (1 commit)
    • treewide: Constify most dma_map_ops structures · 5299709d
      Committed by Bart Van Assche
      Most dma_map_ops structures are never modified. Constify these
      structures such that these can be write-protected. This patch
      has been generated as follows:
      
      git grep -l 'struct dma_map_ops' |
        xargs -d\\n sed -i \
          -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
          -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
          -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
          -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
      sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
        $(git grep -l 'struct dma_map_ops intel_dma_ops');
      sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
        $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
      sed -i -e '/^struct vmd_dev {$/,/^};$/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
             -e '/^static void vmd_setup_dma_ops/,/^}$/ s/const \(struct dma_map_ops \*dest\)/\1/' \
             -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' \
          drivers/pci/host/*.c
      sed -i -e '/^void __init pci_iommu_alloc(void)$/,/^}$/ s/dma_ops->/intel_dma_ops./' arch/ia64/kernel/pci-dma.c
      sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' arch/ia64/sn/pci/pci_dma.c
      sed -i -e 's/(const struct dma_map_ops \*)//' drivers/misc/mic/bus/vop_bus.c
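      
      As a rough illustration of what the conversion does to a typical ops
      table (the names here are hypothetical, not from the patch):
      
          /* Before: the table lives in writable data. */
          static struct dma_map_ops example_dma_ops = {
                  .alloc    = example_alloc,
                  .map_page = example_map_page,
          };
      
          /* After: const lets the compiler place the table in .rodata,
           * where it can be write-protected. */
          static const struct dma_map_ops example_dma_ops = {
                  .alloc    = example_alloc,
                  .map_page = example_map_page,
          };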
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: x86@kernel.org
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      5299709d
  4. 15 Dec 2016 (1 commit)
  5. 04 Aug 2016 (1 commit)
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Committed by Krzysztof Kozlowski
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However the attributes do not have to be a bitfield.  Instead unsigned
      long will do fine:
      
      1. This is just simpler.  Both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), just set the bits.
      
      2. It is safer and allows const-correctness checking, because the
         attributes are passed by value.
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
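      
      For context, a hypothetical caller before and after the conversion
      (the driver-side variable names are illustrative):
      
          /* Before: attributes built on the stack, passed by pointer. */
          struct dma_attrs attrs;
          init_dma_attrs(&attrs);
          dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
          buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);
      
          /* After: attributes are plain bits in an unsigned long. */
          buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                                DMA_ATTR_WRITE_COMBINE);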
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00085f1e
  6. 21 Jan 2016 (1 commit)
  7. 04 Sep 2013 (1 commit)
  8. 07 Aug 2013 (4 commits)
    • tile PCI DMA: fix bug in non-page-aligned accessors · 02b67e09
      Committed by Chris Metcalf
      The code incorrectly masked with PAGE_OFFSET instead of
      PAGE_SIZE-1.  This only matters when trying to do a
      non-page-aligned DMA; it was noticed during code inspection.
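      
      The distinction, in a minimal sketch (not the actual tile code):
      
          /* PAGE_OFFSET is the kernel's virtual base address, not a mask.
           * The offset within a page comes from masking with PAGE_SIZE-1. */
          offset = addr & (PAGE_SIZE - 1);  /* correct */
          offset = addr & PAGE_OFFSET;      /* the bug being fixed */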
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      02b67e09
    • tile PCI RC: add dma_get_required_mask() · dc7d5cf2
      Committed by Chris Metcalf
      The standard kernel function dma_get_required_mask() uses the
      highest DRAM address to determine if 32-bit or 64-bit DMA addressing
      is needed.  This only works on architectures that have direct mapping
      between the PA and the PCI address space, i.e. those that don't have
      I/O TLBs or have an I/O TLB but choose to use direct mapping.  Neither
      of these is true for tilegx.  Whether to use 64-bit DMA should depend
      only on the PCI device's capability, not on the amount of DRAM
      installed, so we now advertise a 64-bit DMA mask unconditionally.
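      
      The shape of such an override is roughly the following (a sketch
      assuming the architecture opts in via ARCH_HAS_DMA_GET_REQUIRED_MASK;
      not the verbatim patch):
      
          /* Report 64-bit capability regardless of installed DRAM; the
           * device's own dma_mask still gates what actually gets used. */
          u64 dma_get_required_mask(struct device *dev)
          {
                  return DMA_BIT_MASK(64);
          }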
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      dc7d5cf2
    • 9b6846ce
    • tile: support LSI MEGARAID SAS HBA hybrid dma_ops · 803c874a
      Committed by Chris Metcalf
      The LSI MEGARAID SAS HBA suffers from the problem where it can do
      64-bit DMA to streaming buffers but not to consistent buffers.
      In other words, 64-bit DMA is used for disk data transfers and 32-bit
      DMA must be used for control message transfers. According to LSI,
      the firmware is not fully functional yet. This change implements a
      kind of hybrid dma_ops to support this.
      
      Note that on most other platforms, the 64-bit DMA addressing space is the
      same as the 32-bit DMA space and they overlap the physical memory space.
      No special arrangement is needed to support this kind of mixed DMA
      capability.  On TILE-Gx, the 64-bit DMA space is completely separate
      from the 32-bit DMA space.  Due to the use of the IOMMU, the 64-bit DMA
      space doesn't overlap the physical memory space.  On the other hand,
      the 32-bit DMA space overlaps the physical memory space under 4GB.
      The separate address spaces make it necessary to have separate dma_ops.
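      
      A sketch of the hybrid shape (the helper names are hypothetical; the
      dma_map_ops signatures match the kernel of this era):
      
          /* Consistent (control) buffers: force the 32-bit path. */
          static void *hybrid_alloc(struct device *dev, size_t size,
                                    dma_addr_t *handle, gfp_t gfp,
                                    struct dma_attrs *attrs)
          {
                  return dma32_alloc(dev, size, handle, gfp | GFP_DMA, attrs);
          }
      
          static struct dma_map_ops hybrid_dma_ops = {
                  .alloc    = hybrid_alloc,    /* 32-bit consistent DMA */
                  .map_page = dma64_map_page,  /* 64-bit streaming DMA  */
          };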
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      803c874a
  9. 19 Jul 2012 (3 commits)
    • tile pci: enable IOMMU to support DMA for legacy devices · 41bb38fc
      Committed by Chris Metcalf
      This change uses the TRIO IOMMU to map the PCI DMA space and physical
      memory at different addresses.  We also now use the dma_mapping_ops
      to provide support for non-PCI DMA, PCIe DMA (64-bit) and legacy PCI
      DMA (32-bit).  We use the kernel's software I/O TLB framework
      (i.e. bounce buffers) for the legacy 32-bit PCI device support since
      there are a limited number of TLB entries in the IOMMU and it is
      non-trivial to handle indexing, searching, matching, etc.  For 32-bit
      devices the performance impact of bounce buffers should not be a concern.
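      
      Conceptually, device setup then selects per-device ops along these
      lines (a sketch; the ops names are hypothetical):
      
          /* 64-bit-capable devices go through the IOMMU-mapped window;
           * legacy 32-bit devices fall back to swiotlb bounce buffers. */
          if (dma_supported(&pdev->dev, DMA_BIT_MASK(64)))
                  pdev->dev.archdata.dma_ops = &gx_pci_dma_ops;
          else
                  pdev->dev.archdata.dma_ops = &gx_legacy_pci_dma_ops;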
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      41bb38fc
    • arch/tile: enable ZONE_DMA for tilegx · eef015c8
      Committed by Chris Metcalf
      This is required for PCI root complex legacy support and USB OHCI root
      complex support.  With this change tilegx now supports allocating memory
      whose PA fits in 32 bits.
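      
      With ZONE_DMA populated, the generic idiom for such allocations now
      works on tilegx too (standard kernel usage, not from this patch):
      
          /* Ask for memory whose physical address fits in 32 bits. */
          struct page *page = alloc_page(GFP_KERNEL | GFP_DMA);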
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      eef015c8
    • tilegx pci: support I/O to arbitrarily-cached pages · bbaa22c3
      Committed by Chris Metcalf
      The tilegx PCI root complex support (currently only in linux-next)
      is limited to pages that are cache-homed in the default manner,
      i.e. "hash-for-home".  This change supports delivery of I/O data to
      pages that are cached in other ways (locally on a particular core,
      uncached, user-managed incoherent, etc.).
      
      A large part of the change is supporting flushing pages from cache
      on particular homes so that we can transition the data that we are
      delivering to or from the device appropriately.  The new homecache_finv*
      routines handle this.
      
      Some changes to page_table_range_init() were also required to make
      the fixmap code work correctly on tilegx; it hadn't been used there
      before.
      
      We also remove some stub mark_caches_evicted_*() routines that
      were just no-ops anyway.
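      
      Usage is along these lines (the exact signature of the new
      homecache_finv* routines is an assumption here):
      
          /* Flush-and-invalidate the page at its current home before the
           * device writes into it, so stale cache lines cannot shadow the
           * incoming DMA data. */
          homecache_finv_page(page);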
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      bbaa22c3
  10. 04 Dec 2011 (1 commit)
  11. 05 May 2011 (1 commit)
  12. 11 Mar 2011 (1 commit)
    • arch/tile: support 4KB page size as well as 64KB · 76c567fb
      Committed by Chris Metcalf
      The Tilera architecture traditionally supports 64KB page sizes
      to improve TLB utilization and improve performance when the
      hardware is being used primarily to run a single application.
      
      For more generic server scenarios, it can be beneficial to run
      with 4KB page sizes, so this commit allows that to be specified
      (by modifying the arch/tile/include/hv/pagesize.h header).
      
      As part of this change, we also re-worked the PTE management
      slightly so that PTE writes all go through a __set_pte() function
      where we can do some additional validation.  The set_pte_order()
      function was eliminated since the "order" argument wasn't being used.
      
      One bug uncovered was in the PCI DMA code, which wasn't properly
      flushing the specified range.  This was benign with 64KB pages,
      but with 4KB pages we were getting some larger flushes wrong.
      
      The per-cpu memory reservation code also needed updating to
      conform with the newer percpu stuff; before it always chose 64KB,
      and that was always correct, but with 4KB granularity we now have
      to pay closer attention and reserve the amount of memory that will
      be requested when the percpu code starts allocating.
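      
      The centralized PTE write path looks roughly like this (the check
      shown is illustrative; _PAGE_RESERVED_MASK is a hypothetical name):
      
          static inline void __set_pte(pte_t *ptep, pte_t pte)
          {
                  /* Catch obviously malformed PTEs before they land. */
                  BUG_ON(pte_val(pte) & _PAGE_RESERVED_MASK);
                  *ptep = pte;
          }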
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      76c567fb
  13. 05 Jun 2010 (2 commits)