1. 04 Sep, 2019 2 commits
  2. 03 Sep, 2019 1 commit
  3. 21 Aug, 2019 1 commit
  4. 11 Aug, 2019 1 commit
    • dma-mapping: fix page attributes for dma_mmap_* · 33dcb37c
      Committed by Christoph Hellwig
      All the way back to the introduction of dma_common_mmap we have
      defaulted to marking the pages as uncached. But this is wrong for
      DMA coherent devices. Later on, DMA_ATTR_WRITE_COMBINE also got
      incorrect treatment, as that flag is only treated specially on the
      alloc side, and only for non-coherent devices.
      
      Introduce a new dma_pgprot helper that handles the check for
      coherent devices, so that only the remapping cases ever reach
      arch_dma_mmap_pgprot and no aliasing of page attributes can
      happen. This makes the powerpc version of arch_dma_mmap_pgprot
      obsolete and simplifies the remaining ones.
      
      Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
      we'll phase it out soon.
      
      Fixes: 64ccc9c0 ("common: dma-mapping: add support for generic dma_mmap_* calls")
      Reported-by: Shawn Anastasio <shawn@anastas.io>
      Reported-by: Gavin Li <git@thegavinli.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
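      A minimal sketch of the helper described above, assuming the
      dev_is_dma_coherent() predicate and the
      CONFIG_ARCH_HAS_DMA_MMAP_PGPROT hook of that era; this is an
      illustration of the described behaviour, not the exact mainline
      code:

          #include <linux/dma-noncoherent.h>
          #include <linux/mm.h>

          /* Coherent devices keep the normal cacheable attributes; only
           * the non-coherent (remapped) cases ever reach the arch hook,
           * so no aliasing of page attributes can occur. */
          static pgprot_t dma_pgprot_sketch(struct device *dev, pgprot_t prot,
                                            unsigned long attrs)
          {
                  if (dev_is_dma_coherent(dev))
                          return prot;    /* leave it cacheable */
          #ifdef CONFIG_ARCH_HAS_DMA_MMAP_PGPROT
                  /* Remap case: the arch picks uncached, write-combine
                   * for DMA_ATTR_WRITE_COMBINE, and so on. */
                  return arch_dma_mmap_pgprot(dev, prot, attrs);
          #else
                  return pgprot_noncached(prot);
          #endif
          }
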
  5. 09 Aug, 2019 1 commit
    • iommu/dma: Handle SG length overflow better · ab2cbeb0
      Committed by Robin Murphy
      Since scatterlist dimensions are all unsigned ints, in the
      relatively rare cases where a device's max_segment_size is set to
      UINT_MAX, the "cur_len + s_length <= max_len" check in
      __finalise_sg() will always return true. As a result, the corner
      case of such a device mapping an excessively large scatterlist
      that is mergeable to a total length of 4GB or beyond can overflow
      the addition and leave a bogus, truncated dma_length in the
      resulting segment.
      
      As we already assume that any single segment must be no longer than
      max_len to begin with, this can easily be addressed by reshuffling the
      comparison.
      
      Fixes: 809eac54 ("iommu/dma: Implement scatterlist segment merging")
      Reported-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Tested-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
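      A standalone illustration of the overflow and of the reshuffled
      comparison (hypothetical values, not the kernel's
      __finalise_sg()):

          #include <stdio.h>
          #include <limits.h>

          int main(void)
          {
                  unsigned int max_len = UINT_MAX;        /* max_segment_size */
                  unsigned int cur_len = UINT_MAX - 100;  /* already merged */
                  unsigned int s_length = 4096;           /* next segment */

                  /* Buggy form: the addition wraps around to 3995, so the
                   * check passes and the merge silently truncates. */
                  printf("buggy: %d\n", cur_len + s_length <= max_len);

                  /* Reshuffled form: since cur_len <= max_len is already
                   * guaranteed, the subtraction cannot underflow, and the
                   * oversized merge is correctly rejected. */
                  printf("fixed: %d\n", s_length <= max_len - cur_len);
                  return 0;
          }
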
  6. 06 Aug, 2019 1 commit
  7. 19 Jun, 2019 1 commit
  8. 18 Jun, 2019 1 commit
    • iommu: Fix integer truncation · 29fcea8c
      Committed by Arnd Bergmann
      On 32-bit architectures, phys_addr_t may differ from dma_addr_t;
      it can be either smaller or bigger. This can lead to an overflow
      during an assignment, which clang warns about:
      
      drivers/iommu/dma-iommu.c:230:10: error: implicit conversion from 'dma_addr_t' (aka 'unsigned long long') to
            'phys_addr_t' (aka 'unsigned int') changes value from 18446744073709551615 to 4294967295 [-Werror,-Wconstant-conversion]
      
      Use phys_addr_t here because that is the type that the variable was
      declared as.
      
      Fixes: aadad097 ("iommu/dma: Reserve IOVA for PCIe inaccessible DMA address")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
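      A small userspace illustration of the conversion clang is
      flagging; the typedefs stand in for a 32-bit LPAE-style
      configuration and are hypothetical, not the dma-iommu code:

          #include <stdio.h>
          #include <stdint.h>

          typedef uint64_t dma_addr_t;    /* 64-bit DMA addresses */
          typedef uint32_t phys_addr_t;   /* 32-bit physical addresses */

          int main(void)
          {
                  /* The flagged pattern: a dma_addr_t constant assigned
                   * to a phys_addr_t changes value from
                   * 18446744073709551615 to 4294967295
                   * (-Wconstant-conversion). */
                  phys_addr_t end = (dma_addr_t)~0;
                  printf("truncated: %u\n", (unsigned int)end);

                  /* The fix: write the sentinel in the variable's own
                   * type, stating the intent and silencing the warning. */
                  phys_addr_t end_fixed = ~(phys_addr_t)0;
                  printf("intended:  %u\n", (unsigned int)end_fixed);
                  return 0;
          }
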
  9. 14 Jun, 2019 1 commit
  10. 03 Jun, 2019 1 commit
  11. 27 May, 2019 20 commits
  12. 15 May, 2019 1 commit
    • iommu/dma-iommu.c: convert to use vm_map_pages() · b0d0084f
      Committed by Souptick Joarder
      Convert to vm_map_pages() to map a range of kernel memory into a
      user vma.
      
      Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
      Cc: Pawel Osciak <pawel@osciak.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sandy Huang <hjc@rock-chips.com>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Thierry Reding <treding@nvidia.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
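      The conversion pattern, sketched for a hypothetical mmap handler
      (the old loop is kept under #if 0 for comparison; this is not the
      exact dma-iommu diff):

          #include <linux/mm.h>

          static int example_mmap_pages(struct vm_area_struct *vma,
                                        struct page **pages,
                                        unsigned long count)
          {
          #if 0   /* before: open-coded vm_insert_page() loop */
                  unsigned long uaddr = vma->vm_start;
                  unsigned long i;
                  int ret;

                  for (i = vma->vm_pgoff;
                       i < count && uaddr < vma->vm_end; i++) {
                          ret = vm_insert_page(vma, uaddr, pages[i]);
                          if (ret)
                                  return ret;
                          uaddr += PAGE_SIZE;
                  }
                  return 0;
          #else   /* after: one helper maps the whole range and also
                   * sanity-checks vma->vm_pgoff and the bounds */
                  return vm_map_pages(vma, pages, count);
          #endif
          }
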
  13. 07 May, 2019 1 commit
  14. 03 May, 2019 2 commits
  15. 24 Jan, 2019 1 commit
  16. 17 Dec, 2018 1 commit
  17. 11 Dec, 2018 1 commit
  18. 06 Dec, 2018 1 commit
  19. 01 Oct, 2018 1 commit
    • iommu/dma: Add support for non-strict mode · 2da274cd
      Committed by Zhen Lei
      With the flush queue infrastructure already abstracted into IOVA
      domains, hooking it up in iommu-dma is pretty simple. Since there is a
      degree of dependency on the IOMMU driver knowing what to do to play
      along, we key the whole thing off a domain attribute which will be set
      on default DMA ops domains to request non-strict invalidation. That way,
      drivers can indicate the appropriate support by acknowledging the
      attribute, and we can easily fall back to strict invalidation otherwise.
      
      The flush queue callback needs a handle on the iommu_domain which owns
      our cookie, so we have to add a pointer back to that, but neatly, that's
      also sufficient to indicate whether we're using a flush queue or not,
      and thus which way to release IOVAs. The only slight subtlety is
      switching __iommu_dma_unmap() from calling iommu_unmap() to explicit
      iommu_unmap_fast()/iommu_tlb_sync() so that we can elide the sync
      entirely in non-strict mode.

      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      [rm: convert to domain attribute, tweak comments and commit message]
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
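      A sketch of the unmap path described above, using the IOMMU/IOVA
      APIs of roughly that era; the real __iommu_dma_unmap() differs in
      detail, and the fq_domain flag stands in for the pointer back to
      the owning domain:

          #include <linux/iommu.h>
          #include <linux/iova.h>

          static void sketch_iommu_dma_unmap(struct iommu_domain *domain,
                                             struct iova_domain *iovad,
                                             bool non_strict,
                                             dma_addr_t iova, size_t size)
          {
                  unsigned long pfn = iova >> iova_shift(iovad);
                  unsigned long pages = size >> iova_shift(iovad);

                  /* iommu_unmap_fast() does not imply a TLB sync. */
                  WARN_ON(iommu_unmap_fast(domain, iova, size) != size);

                  if (non_strict) {
                          /* Defer invalidation: the flush queue frees the
                           * IOVA once a batched TLB flush has completed. */
                          queue_iova(iovad, pfn, pages, 0);
                  } else {
                          /* Strict mode: invalidate now, free immediately. */
                          iommu_tlb_sync(domain);
                          free_iova_fast(iovad, pfn, pages);
                  }
          }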