1. 27 May 2019, 20 commits
  2. 15 May 2019, 1 commit
    • iommu/dma-iommu.c: convert to use vm_map_pages() · b0d0084f
      Souptick Joarder authored
      Convert to use vm_map_pages() to map a range of kernel memory to a user
      vma (a sketch of the pattern follows this entry).
      
      Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
      Cc: Pawel Osciak <pawel@osciak.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sandy Huang <hjc@rock-chips.com>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Thierry Reding <treding@nvidia.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
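
      The conversion boils down to replacing a hand-rolled vm_insert_page() loop
      with a single vm_map_pages() call. Below is a minimal sketch of the
      before/after pattern under that assumption; the helper names are
      illustrative, not taken verbatim from the patch.

          #include <linux/mm.h>

          /* Before: insert each page into the user vma by hand. */
          static int dma_mmap_pages_legacy(struct vm_area_struct *vma,
                                           struct page **pages, unsigned long count)
          {
                  unsigned long uaddr = vma->vm_start;
                  unsigned long i;
                  int ret;

                  for (i = 0; i < count && uaddr < vma->vm_end; i++) {
                          ret = vm_insert_page(vma, uaddr, pages[i]);
                          if (ret)
                                  return ret;
                          uaddr += PAGE_SIZE;
                  }
                  return 0;
          }

          /* After: one call maps the whole range into the vma. */
          static int dma_mmap_pages(struct vm_area_struct *vma,
                                    struct page **pages, unsigned long count)
          {
                  return vm_map_pages(vma, pages, count);
          }

      One practical difference is that vm_map_pages() also honours vma->vm_pgoff
      and fails the request when the vma does not fit within the page array,
      instead of silently mapping only a prefix.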
  3. 07 May 2019, 1 commit
  4. 03 May 2019, 2 commits
  5. 24 Jan 2019, 1 commit
  6. 17 Dec 2018, 1 commit
  7. 11 Dec 2018, 1 commit
  8. 06 Dec 2018, 1 commit
  9. 01 Oct 2018, 1 commit
    • iommu/dma: Add support for non-strict mode · 2da274cd
      Zhen Lei authored
      With the flush queue infrastructure already abstracted into IOVA
      domains, hooking it up in iommu-dma is pretty simple. Since there is a
      degree of dependency on the IOMMU driver knowing what to do to play
      along, we key the whole thing off a domain attribute which will be set
      on default DMA ops domains to request non-strict invalidation. That way,
      drivers can indicate the appropriate support by acknowledging the
      attribute, and we can easily fall back to strict invalidation otherwise.
      
      The flush queue callback needs a handle on the iommu_domain which owns
      our cookie, so we have to add a pointer back to that, but neatly, that's
      also sufficient to indicate whether we're using a flush queue or not,
      and thus which way to release IOVAs. The only slight subtlety is
      switching __iommu_dma_unmap() from calling iommu_unmap() to explicit
      iommu_unmap_fast()/iommu_tlb_sync(), so that we can elide the sync
      entirely in non-strict mode (see the sketch after this entry).
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      [rm: convert to domain attribute, tweak comments and commit message]
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
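
      Concretely, the unmap path stops forcing an IOTLB sync on every call and
      only syncs when the domain is not using a flush queue. The fragment below
      is a minimal sketch of how that part of drivers/iommu/dma-iommu.c could
      look, assuming the cookie gained an fq_domain back-pointer as described
      above; the IOVA offset/alignment bookkeeping and the file-local helpers
      (struct iommu_dma_cookie, iommu_dma_free_iova()) are simplified.

          #include <linux/iommu.h>

          static void __iommu_dma_unmap(struct iommu_domain *domain,
                                        dma_addr_t dma_addr, size_t size)
          {
                  struct iommu_dma_cookie *cookie = domain->iova_cookie;

                  /* Unmap without an implicit IOTLB sync on every call. */
                  WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size);

                  /*
                   * Only sync immediately when no flush queue is in use;
                   * otherwise the queued flush catches up with us later.
                   */
                  if (!cookie->fq_domain)
                          iommu_tlb_sync(domain);

                  iommu_dma_free_iova(cookie, dma_addr, size);
          }

      Keying the decision on the back-pointer means the same field doubles as
      the "is a flush queue in use" flag, matching the description above.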
  10. 25 Sep 2018, 1 commit
  11. 28 Jul 2018, 1 commit
  12. 03 May 2018, 1 commit
  13. 14 Feb 2018, 1 commit
  14. 12 Oct 2017, 1 commit
    • iommu/iova: Make rcache flush optional on IOVA allocation failure · 538d5b33
      Tomasz Nowicki authored
      Since IOVA allocation failure is not an unusual case, we need to flush the
      CPUs' rcaches in the hope that we will succeed in the next round.
      
      However, it is useful to be able to decide whether the rcache flush step is
      needed, for two reasons:
      - Poor scalability. On a large system with ~100 CPUs, iterating over and
        flushing the rcache for each CPU becomes a serious bottleneck, so we may
        want to defer it.
      - free_cpu_cached_iovas() does not care about the max PFN we are interested
        in. Thus we may flush our rcaches and still get no new IOVA, as in the
        commonly used scenario:
      
          if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
              iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
      
          if (!iova)
              iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
      
         1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to
            give PCI devices a SAC address.
         2. alloc_iova() fails because the 32-bit space is full.
         3. The rcaches contain PFNs outside the 32-bit space, so
            free_cpu_cached_iovas() throws entries away for nothing and
            alloc_iova() fails again.
         4. The next alloc_iova_fast() call cannot take advantage of the rcache,
            since we have just defeated the caches. In this case we proceed via
            the slowest option.
      
      This patch reworks the flushed_rcache local flag into an additional function
      argument that controls the rcache flush step, and updates all callers to
      perform the flush only as a last resort (the resulting calling convention is
      sketched after this entry).
      Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Tested-by: Nate Watterson <nwatters@codeaurora.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
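
      After the rework the flush becomes the caller's decision via a new bool
      argument, so only the final, last-chance attempt pays the cost of flushing
      the per-CPU caches. Below is a minimal sketch of the resulting calling
      convention in the dma-iommu allocator, assuming the post-patch prototype
      alloc_iova_fast(iovad, size, limit_pfn, flush_rcache); the wrapper function
      itself is illustrative, not the exact code from the patch.

          #include <linux/dma-mapping.h>
          #include <linux/iova.h>
          #include <linux/pci.h>

          static dma_addr_t iova_alloc_sketch(struct iova_domain *iovad,
                                              struct device *dev,
                                              unsigned long iova_len,
                                              u64 dma_limit, unsigned long shift)
          {
                  unsigned long iova = 0;

                  /*
                   * First try: give PCI devices a SAC address, and do not
                   * flush the rcaches if this constrained attempt fails.
                   */
                  if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
                          iova = alloc_iova_fast(iovad, iova_len,
                                                 DMA_BIT_MASK(32) >> shift, false);

                  /* Last chance: full DMA limit, flushing rcaches if necessary. */
                  if (!iova)
                          iova = alloc_iova_fast(iovad, iova_len,
                                                 dma_limit >> shift, true);

                  return (dma_addr_t)iova << shift;
          }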
  15. 27 Sep 2017, 1 commit
  16. 20 Jun 2017, 1 commit
  17. 17 May 2017, 2 commits
  18. 03 Apr 2017, 2 commits