1. 21 Nov 2019 (1 commit)
  2. 04 Sep 2019 (2 commits)
  3. 03 Sep 2019 (1 commit)
  4. 30 Aug 2019 (1 commit)
  5. 21 Aug 2019 (1 commit)
  6. 11 Aug 2019 (1 commit)
    • dma-mapping: fix page attributes for dma_mmap_* · 33dcb37c
      Authored by Christoph Hellwig
      All the way back to introducing dma_common_mmap we've defaulted to
      marking the pages as uncached.  But this is wrong for DMA coherent
      devices.  Later on DMA_ATTR_WRITE_COMBINE also got incorrect treatment,
      as that flag is only treated specially on the alloc side for
      non-coherent devices.
      
      Introduce a new dma_pgprot helper that deals with the check for coherent
      devices so that only the remapping cases ever reach arch_dma_mmap_pgprot
      and we thus ensure no aliasing of page attributes happens, which makes
      the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the
      remaining ones.
      
      Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
      we'll phase it out soon.
      
      Fixes: 64ccc9c0 ("common: dma-mapping: add support for generic dma_mmap_* calls")
      Reported-by: Shawn Anastasio <shawn@anastas.io>
      Reported-by: Gavin Li <git@thegavinli.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
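
      A minimal sketch of the dma_pgprot() idea described above, assuming the
      coherence helper and arch hook of that era (dev_is_dma_coherent(),
      CONFIG_ARCH_HAS_DMA_MMAP_PGPROT); this illustrates the shape of the
      change and is simplified relative to the upstream code:

      /*
       * Sketch only: coherent devices keep their page protection untouched,
       * and only the non-coherent (remapped) cases ever reach the
       * architecture hook or fall back to an uncached mapping.
       */
      pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
      {
              if (dev_is_dma_coherent(dev))
                      return prot;    /* no aliasing of page attributes */
              if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_MMAP_PGPROT))
                      return arch_dma_mmap_pgprot(dev, prot, attrs);
              return pgprot_noncached(prot);
      }
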
  7. 09 Aug 2019 (1 commit)
    • iommu/dma: Handle SG length overflow better · ab2cbeb0
      Authored by Robin Murphy
      Since scatterlist dimensions are all unsigned ints, in the relatively
      rare cases where a device's max_segment_size is set to UINT_MAX, then
      the "cur_len + s_length <= max_len" check in __finalise_sg() will always
      return true. As a result, the corner case of such a device mapping an
      excessively large scatterlist which is mergeable to or beyond a total
      length of 4GB can lead to overflow and a bogus truncated dma_length in
      the resulting segment.
      
      As we already assume that any single segment must be no longer than
      max_len to begin with, this can easily be addressed by reshuffling the
      comparison.
      
      Fixes: 809eac54 ("iommu/dma: Implement scatterlist segment merging")
      Reported-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Tested-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
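
      The reshuffle amounts to moving cur_len to the other side of the
      comparison so the unsigned addition can no longer wrap; a minimal
      sketch of the before/after shape (surrounding merge logic omitted):

      /* Before: with max_len == UINT_MAX the unsigned sum can wrap around,
       * so the check passes even when the merged length exceeds 4GB. */
      if (cur_len + s_length <= max_len) {
              /* merge s into the current segment */
      }

      /* After: cur_len is already known to be <= max_len, so the
       * subtraction cannot underflow and the test stays exact. */
      if (max_len - cur_len >= s_length) {
              /* merge s into the current segment */
      }
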
  8. 06 Aug 2019 (1 commit)
  9. 24 Jul 2019 (1 commit)
    • iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes · a7d20dc1
      Authored by Will Deacon
      To permit batching of TLB flushes across multiple calls to the IOMMU
      driver's ->unmap() implementation, introduce a new structure for
      tracking the address range to be flushed and the granularity at which
      the flushing is required.
      
      This is hooked into the IOMMU API and its callers are updated to make use
      of the new structure. Subsequent patches will plumb this into the IOMMU
      drivers as well, but for now the gathering information is ignored.
      Signed-off-by: Will Deacon <will@kernel.org>
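
      Conceptually the gather structure only has to remember the range
      unmapped so far and the granule it was unmapped at; a sketch of that
      idea (field names are illustrative and may not match the final
      upstream layout exactly):

      /* Accumulates the to-be-invalidated range across several ->unmap()
       * calls so that one TLB flush can cover the whole batch. */
      struct iommu_iotlb_gather {
              unsigned long   start;   /* lowest IOVA unmapped so far */
              unsigned long   end;     /* highest IOVA unmapped so far */
              size_t          pgsize;  /* granularity of the invalidation */
      };

      static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather)
      {
              *gather = (struct iommu_iotlb_gather) {
                      .start = ULONG_MAX,   /* first unmap always narrows this */
              };
      }
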
  10. 19 Jun 2019 (1 commit)
  11. 18 Jun 2019 (1 commit)
    • iommu: Fix integer truncation · 29fcea8c
      Authored by Arnd Bergmann
      On 32-bit architectures, phys_addr_t may be different from dma_addr_t,
      both smaller and bigger. This can lead to an overflow during an assignment
      that clang warns about:
      
      drivers/iommu/dma-iommu.c:230:10: error: implicit conversion from 'dma_addr_t' (aka 'unsigned long long') to
            'phys_addr_t' (aka 'unsigned int') changes value from 18446744073709551615 to 4294967295 [-Werror,-Wconstant-conversion]
      
      Use phys_addr_t here because that is the type that the variable was
      declared as.
      
      Fixes: aadad097 ("iommu/dma: Reserve IOVA for PCIe inaccessible DMA address")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
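
      A minimal illustration of the truncation on a 32-bit configuration
      where dma_addr_t is 64-bit but phys_addr_t is 32-bit (the wrapper
      function is hypothetical, not the dma-iommu.c code itself):

      static void truncation_example(void)
      {
              phys_addr_t end;

              /* Truncated: ~(dma_addr_t)0 is 2^64 - 1, but only the low
               * 32 bits survive the assignment; clang flags this with
               * -Wconstant-conversion. */
              end = ~(dma_addr_t)0;

              /* Fixed: build the all-ones sentinel in the variable's own
               * type, as the commit above does. */
              end = ~(phys_addr_t)0;
      }
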
  12. 14 Jun 2019 (1 commit)
  13. 03 Jun 2019 (1 commit)
  14. 27 May 2019 (20 commits)
  15. 15 May 2019 (1 commit)
    • iommu/dma-iommu.c: convert to use vm_map_pages() · b0d0084f
      Authored by Souptick Joarder
      Convert to use vm_map_pages() to map range of kernel memory to user vma.
      
      Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
      Cc: Pawel Osciak <pawel@osciak.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sandy Huang <hjc@rock-chips.com>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Thierry Reding <treding@nvidia.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
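
      vm_map_pages() takes the page array and a page count and performs the
      per-page insertion and vma-size checks itself; a hedged sketch of the
      kind of conversion this implies (the wrapper functions are
      illustrative, not the actual dma-iommu.c code):

      /* Before: open-coded loop inserting one page at a time. */
      static int mmap_pages_open_coded(struct vm_area_struct *vma,
                                       struct page **pages, unsigned long count)
      {
              unsigned long uaddr = vma->vm_start;
              unsigned long i;
              int ret;

              for (i = 0; i < count; i++) {
                      ret = vm_insert_page(vma, uaddr, pages[i]);
                      if (ret)
                              return ret;
                      uaddr += PAGE_SIZE;
              }
              return 0;
      }

      /* After: let the core helper validate the range and do the loop. */
      static int mmap_pages_converted(struct vm_area_struct *vma,
                                      struct page **pages, unsigned long count)
      {
              return vm_map_pages(vma, pages, count);
      }
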
  16. 07 May 2019 (1 commit)
  17. 03 May 2019 (2 commits)
  18. 24 Jan 2019 (1 commit)
  19. 17 Dec 2018 (1 commit)