1. 27 May 2019, 15 commits
  2. 15 May 2019, 1 commit
    • iommu/dma-iommu.c: convert to use vm_map_pages() · b0d0084f
      Authored by Souptick Joarder
      Convert to use vm_map_pages() to map range of kernel memory to user vma.
      
      Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
      Cc: Pawel Osciak <pawel@osciak.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sandy Huang <hjc@rock-chips.com>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Thierry Reding <treding@nvidia.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0d0084f
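      For context, vm_map_pages() collapses the per-page vm_insert_page() loop that
      callers previously open-coded into a single call that also validates the vma
      size. A minimal sketch of the pattern, assuming a hypothetical buffer object
      and mmap handler (my_buf and my_mmap are illustrative names, not from the
      patch):

          #include <linux/mm.h>

          /* Illustrative buffer object; pages are assumed to be allocated elsewhere. */
          struct my_buf {
              struct page **pages;
              unsigned long nr_pages;
          };

          /* mmap handler body: map the whole page array into the user vma at once. */
          static int my_mmap(struct my_buf *buf, struct vm_area_struct *vma)
          {
              /* vm_map_pages() checks that the vma is large enough before mapping. */
              return vm_map_pages(vma, buf->pages, buf->nr_pages);
          }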
  3. 07 May 2019, 1 commit
  4. 03 May 2019, 2 commits
  5. 24 January 2019, 1 commit
  6. 17 December 2018, 1 commit
  7. 11 December 2018, 1 commit
  8. 06 December 2018, 1 commit
  9. 01 October 2018, 1 commit
    • iommu/dma: Add support for non-strict mode · 2da274cd
      Authored by Zhen Lei
      With the flush queue infrastructure already abstracted into IOVA
      domains, hooking it up in iommu-dma is pretty simple. Since there is a
      degree of dependency on the IOMMU driver knowing what to do to play
      along, we key the whole thing off a domain attribute which will be set
      on default DMA ops domains to request non-strict invalidation. That way,
      drivers can indicate the appropriate support by acknowledging the
      attribute, and we can easily fall back to strict invalidation otherwise.
      
      The flush queue callback needs a handle on the iommu_domain which owns
      our cookie, so we have to add a pointer back to that, but neatly, that's
      also sufficient to indicate whether we're using a flush queue or not,
      and thus which way to release IOVAs. The only slight subtlety is
      switching __iommu_dma_unmap() from calling iommu_unmap() to explicit
      iommu_unmap_fast()/iommu_tlb_sync() so that we can elide the sync
      entirely in non-strict mode.
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      [rm: convert to domain attribute, tweak comments and commit message]
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2da274cd
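      A minimal sketch of the unmap flow described above, using the 2018-era IOMMU
      API (iommu_unmap_fast()/iommu_tlb_sync(), before the later iotlb_gather
      argument); the cookie structure here is a simplified stand-in for the private
      one in dma-iommu.c, not a verbatim copy of the patch:

          #include <linux/iommu.h>
          #include <linux/iova.h>

          /* Simplified stand-in for the private iommu-dma cookie. */
          struct sketch_cookie {
              struct iova_domain iovad;
              struct iommu_domain *fq_domain; /* non-NULL iff a flush queue is in use */
          };

          static void sketch_dma_unmap(struct iommu_domain *domain,
                                       struct sketch_cookie *cookie,
                                       dma_addr_t dma_addr, size_t size)
          {
              struct iova_domain *iovad = &cookie->iovad;
              unsigned long pfn = iova_pfn(iovad, dma_addr);
              unsigned long pages = size >> iova_shift(iovad);

              /* Tear down the mapping without forcing an immediate TLB sync. */
              WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size);

              if (!cookie->fq_domain) {
                  /* Strict mode: sync now, then the IOVA is safe to reuse. */
                  iommu_tlb_sync(domain);
                  free_iova_fast(iovad, pfn, pages);
              } else {
                  /* Non-strict mode: defer both the invalidation and the IOVA reuse. */
                  queue_iova(iovad, pfn, pages, 0);
              }
          }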
  10. 25 September 2018, 1 commit
  11. 28 July 2018, 1 commit
  12. 03 May 2018, 1 commit
  13. 14 February 2018, 1 commit
  14. 12 October 2017, 1 commit
    • iommu/iova: Make rcache flush optional on IOVA allocation failure · 538d5b33
      Authored by Tomasz Nowicki
      Since IOVA allocation failure is not an unusual case, we need to flush the
      CPUs' rcaches in the hope that the next attempt will succeed.
      
      However, it is worth deciding whether the rcache flush step is needed at
      all, for two reasons:
      - Poor scalability. On a large system with ~100 CPUs, iterating over and
        flushing the rcache for each CPU becomes a serious bottleneck, so we may
        want to defer it.
      - free_cpu_cached_iovas() does not care about the max PFN we are interested
        in. Thus we may flush our rcaches and still get no new IOVA, as in the
        commonly used scenario:
      
          if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
              iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
      
          if (!iova)
              iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
      
         1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get
            PCI devices a SAC address
         2. alloc_iova() fails due to full 32-bit space
         3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
            throws entries away for nothing and alloc_iova() fails again
         4. The next alloc_iova_fast() call cannot take advantage of the rcache
            since we have just defeated the caches. In this case we fall back to
            the slowest path.
      
      This patch reworks the flushed_rcache local flag into an additional function
      argument that controls the rcache flush step. It also updates all users to
      perform the flush only as a last resort.
      Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Tested-by: Nate Watterson <nwatters@codeaurora.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      538d5b33
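      The resulting call pattern for the scenario above, with the rcache flush
      reserved for the last-chance attempt (a sketch of the intended usage; the
      new boolean is the added flush_rcache argument):

          /* First try below 32 bits without flushing, keeping the rcaches intact... */
          if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
              iova = alloc_iova_fast(iovad, iova_len,
                                     DMA_BIT_MASK(32) >> shift, false);

          /* ...then retry up to the full limit, flushing the rcaches as a last resort. */
          if (!iova)
              iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift, true);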
  15. 27 September 2017, 1 commit
  16. 20 June 2017, 1 commit
  17. 17 May 2017, 2 commits
  18. 03 April 2017, 3 commits
  19. 22 March 2017, 3 commits
    • iommu/dma: Make PCI window reservation generic · 273df963
      Authored by Robin Murphy
      Now that we're applying the IOMMU API reserved regions to our IOVA
      domains, we shouldn't need to privately special-case PCI windows, or
      indeed anything else which isn't specific to our iommu-dma layer.
      However, since those aren't IOMMU-specific either, rather than start
      duplicating code into IOMMU drivers let's transform the existing
      function into an iommu_get_resv_regions() helper that they can share.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      273df963
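      A sketch of how an IOMMU driver's get_resv_regions() callback can share the
      helper, assuming a driver-specific software MSI window (the base/length
      values and function name are illustrative, not from the patch; the header
      layout reflects the kernel of that era):

          #include <linux/iommu.h>
          #include <linux/dma-iommu.h>
          #include <linux/list.h>

          #define SKETCH_MSI_IOVA_BASE    0x08000000UL
          #define SKETCH_MSI_IOVA_LENGTH  0x00100000UL

          static void sketch_get_resv_regions(struct device *dev, struct list_head *head)
          {
              struct iommu_resv_region *region;

              /* Driver-specific software MSI window (values are made up for the sketch). */
              region = iommu_alloc_resv_region(SKETCH_MSI_IOVA_BASE,
                                               SKETCH_MSI_IOVA_LENGTH,
                                               IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO,
                                               IOMMU_RESV_SW_MSI);
              if (region)
                  list_add_tail(&region->list, head);

              /* Shared part: PCI bridge windows and the like come from iommu-dma. */
              iommu_dma_get_resv_regions(dev, head);
          }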
    • iommu/dma: Handle IOMMU API reserved regions · 7c1b058c
      Authored by Robin Murphy
      Now that it's simple to discover the necessary reservations for a given
      device/IOMMU combination, let's wire up the appropriate handling. Basic
      reserved regions and direct-mapped regions we simply have to carve out
      of IOVA space (the IOMMU core having already mapped the latter before
      attaching the device). For hardware MSI regions, we also pre-populate
      the cookie with matching msi_pages. That way, irqchip drivers which
      normally assume MSIs to require mapping at the IOMMU can keep working
      without having to special-case their iommu_dma_map_msi_msg() hook, or
      indeed be aware at all of quirks preventing the IOMMU from translating
      certain addresses.
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      7c1b058c
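      A sketch of the reservation walk this implies: every region reported for the
      device is carved out of the IOVA domain so the allocator can never hand it
      out (simplified; the real code additionally pre-populates MSI cookie pages
      for hardware MSI regions, which is omitted here):

          #include <linux/iommu.h>
          #include <linux/iova.h>
          #include <linux/list.h>

          static void sketch_reserve_resv_regions(struct device *dev,
                                                  struct iova_domain *iovad)
          {
              struct iommu_resv_region *region;
              LIST_HEAD(resv_regions);

              iommu_get_resv_regions(dev, &resv_regions);
              list_for_each_entry(region, &resv_regions, list) {
                  unsigned long lo = iova_pfn(iovad, region->start);
                  unsigned long hi = iova_pfn(iovad,
                                              region->start + region->length - 1);

                  /* reserve_iova() marks [lo, hi] as permanently allocated. */
                  reserve_iova(iovad, lo, hi);
              }
              iommu_put_resv_regions(dev, &resv_regions);
          }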
    • iommu/dma: Don't reserve PCI I/O windows · 938f1bbe
      Authored by Robin Murphy
      Even if a host controller's CPU-side MMIO windows into PCI I/O space do
      happen to leak into PCI memory space such that it might treat them as
      peer addresses, trying to reserve the corresponding I/O space addresses
      doesn't do anything to help solve that problem. Stop doing a silly thing.
      
      Fixes: fade1ec0 ("iommu/dma: Avoid PCI host bridge windows")
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      938f1bbe
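      In other words, the window walk only needs to care about memory-space
      resources. A sketch of the resulting behaviour (the function name is
      illustrative, not from the patch):

          #include <linux/pci.h>
          #include <linux/resource_ext.h>
          #include <linux/iova.h>

          static void sketch_reserve_pci_windows(struct pci_dev *pdev,
                                                 struct iova_domain *iovad)
          {
              struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
              struct resource_entry *window;

              resource_list_for_each_entry(window, &bridge->windows) {
                  /* Only memory-space windows can clash with DMA; skip I/O and bus ranges. */
                  if (resource_type(window->res) != IORESOURCE_MEM)
                      continue;

                  reserve_iova(iovad, iova_pfn(iovad, window->res->start),
                               iova_pfn(iovad, window->res->end));
              }
          }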
  20. 06 February 2017, 1 commit
    • iommu/dma: Remove bogus dma_supported() implementation · a1831bb9
      Authored by Robin Murphy
      Back when this was first written, dma_supported() was somewhat of a
      murky mess, with subtly different interpretations being relied upon in
      various places. The "does device X support DMA to address range Y?"
      uses assuming Y to be physical addresses, which motivated the current
      iommu_dma_supported() implementation and are alluded to in the comment
      therein, have since been cleaned up, leaving only the far less ambiguous
      "can device X drive address bits Y" usage internal to DMA API mask
      setting. As such, there is no reason to keep a slightly misleading
      callback which does nothing but duplicate the current default behaviour;
      we already constrain IOVA allocations to the iommu_domain aperture where
      necessary, so let's leave DMA mask business to architecture-specific
      code where it belongs.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      a1831bb9