- 27 May 2019, 7 commits
-
-
Submitted by Robin Murphy

Since we duplicate the find_vm_area() logic a few times in places where we only care about the pages, factor out a helper to abstract it.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: don't warn when not finding a region, as we'll rely on that later]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
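A minimal sketch of such a helper, paraphrased from the patch (find_vm_area() is the real API from <linux/vmalloc.h>; treat the exact shape as illustrative):

    static struct page **__iommu_dma_get_pages(void *cpu_addr)
    {
            struct vm_struct *area = find_vm_area(cpu_addr);

            /* No WARN on a miss: some callers probe speculatively. */
            if (!area || !area->pages)
                    return NULL;
            return area->pages;
    }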
-
Submitted by Robin Murphy

The remaining internal callsites don't care about having prototypes compatible with the relevant dma_map_ops callbacks, so the extra level of indirection just wastes space and complicates things.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Most of the callers don't care, and the couple that do already have the domain to hand for other reasons are in slow paths where the (trivial) overhead of a repeated lookup will be utterly immaterial.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: dropped the hunk touching iommu_dma_get_msi_page to avoid a conflict with another series]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
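A hedged sketch of the resulting shape, assuming the internal helpers of dma-iommu.c (iommu_get_dma_domain() is the real lookup; the cookie plumbing here is paraphrased, not the literal diff):

    static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
                                      size_t size, int prot)
    {
            /* The domain lookup now happens here, not in every caller. */
            struct iommu_domain *domain = iommu_get_dma_domain(dev);
            struct iommu_dma_cookie *cookie = domain->iova_cookie;
            struct iova_domain *iovad = &cookie->iovad;
            size_t iova_off = iova_offset(iovad, phys);
            dma_addr_t iova;

            size = iova_align(iovad, size + iova_off);
            iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
            if (!iova)
                    return DMA_MAPPING_ERROR;

            if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
                    iommu_dma_free_iova(cookie, iova, size);
                    return DMA_MAPPING_ERROR;
            }
            return iova + iova_off;
    }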
-
Submitted by Christoph Hellwig

Moving this function up to its unmap counterpart helps to keep related code together for the following changes.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

There is nothing really arm64-specific in the iommu_dma_ops implementation, so move it to dma-iommu.c and keep a lot of symbols self-contained. Note the implementation does depend on the DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the DMA_IOMMU support depend on it, but this will be relaxed soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Christoph Hellwig

arch_dma_prep_coherent can handle physically contiguous ranges larger than PAGE_SIZE just fine, which means we don't need a page-based iterator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
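Illustratively, the change amounts to the following (a sketch, not the literal diff; arch_dma_prep_coherent() takes a struct page * and a size in bytes):

    /* Before (sketch): flush the buffer one page at a time. */
    for (i = 0; i < size >> PAGE_SHIFT; i++)
            arch_dma_prep_coherent(page + i, PAGE_SIZE);

    /* After: one call covers the whole physically contiguous range. */
    arch_dma_prep_coherent(page, size);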
-
Submitted by Christoph Hellwig

We now have an arch_dma_prep_coherent architecture hook that is used for the generic DMA remap allocator, and we should use the same interface for the dma-iommu code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 May 2019, 1 commit
-
-
Submitted by Souptick Joarder

Convert to use vm_map_pages() to map a range of kernel memory to a user vma.

Link: http://lkml.kernel.org/r/80c3d220fc6ada73a88ce43ca049afb55a889258.1552921225.git.jrdr.linux@gmail.com
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Heiko Stuebner <heiko@sntech.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Pawel Osciak <pawel@osciak.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Sandy Huang <hjc@rock-chips.com>
Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Thierry Reding <treding@nvidia.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
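For reference, a hedged sketch of the conversion pattern (vm_insert_page() and vm_map_pages() are the real APIs from <linux/mm.h>; the surrounding variables are illustrative):

    /* Before (sketch): insert each page individually. */
    for (i = 0; i < count; i++) {
            ret = vm_insert_page(vma, uaddr, pages[i]);
            if (ret)
                    return ret;
            uaddr += PAGE_SIZE;
    }

    /* After: one call maps the array and validates vma size/pgoff. */
    return vm_map_pages(vma, pages, count);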
-
- 07 May 2019, 1 commit
-
-
Submitted by Srinath Mannam

The dma_ranges list field of the PCI host bridge structure has resource entries, in sorted order, representing address ranges allowed for DMA transfers. Process the list and reserve IOVA addresses that are not present in its resource entries (i.e. DMA memory holes) to prevent allocating IOVA addresses that cannot be accessed by PCI devices.

Based-on-a-patch-by: Oza Pawandeep <oza.oza@broadcom.com>
Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Oza Pawandeep <poza@codeaurora.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
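A hedged sketch of the reservation walk, assuming the list is sorted as described above (reserve_iova(), iova_pfn() and resource_list_for_each_entry() are real APIs; the helper name and edge handling are illustrative, not the upstream code verbatim):

    static void iova_reserve_dma_holes(struct pci_host_bridge *bridge,
                                       struct iova_domain *iovad)
    {
            struct resource_entry *window;
            phys_addr_t start = 0, end;

            resource_list_for_each_entry(window, &bridge->dma_ranges) {
                    end = window->res->start - window->offset;
                    /* Reserve the hole below this DMA-able window, if any. */
                    if (end > start)
                            reserve_iova(iovad, iova_pfn(iovad, start),
                                         iova_pfn(iovad, end - 1));
                    start = window->res->end - window->offset + 1;
            }
            /* Everything above the last window is off-limits too. */
            reserve_iova(iovad, iova_pfn(iovad, start),
                         iova_pfn(iovad, (phys_addr_t)~0));
    }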
-
- 03 May 2019, 2 commits
-
-
Submitted by Julien Grall

A recent change split iommu_dma_map_msi_msg() into two new functions. The old function was still implemented to avoid modifying all the callers at once. Now that all the callers have been reworked, iommu_dma_map_msi_msg() can be removed.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Julien Grall

On RT, iommu_dma_map_msi_msg() may be called from non-preemptible context. This will lead to a splat with CONFIG_DEBUG_ATOMIC_SLEEP, as the function uses spin_lock (which can sleep on RT).

iommu_dma_map_msi_msg() is used to map the MSI page in the IOMMU page tables and update the MSI message with the IOVA. Only the lookup of the MSI page needs to happen in preemptible context. As the MSI page cannot change over the lifecycle of the MSI interrupt, the lookup can be cached and re-used later on.

iommu_dma_map_msi_msg() is now split into two functions:
- iommu_dma_prepare_msi(): prepares the mapping in the IOMMU and stores the cookie in the msi_desc structure. This function should be called in preemptible context.
- iommu_dma_compose_msi_msg(): updates the MSI message with the IOVA when the device is behind an IOMMU.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
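A hedged sketch of how an irqchip consumes the split API (iommu_dma_prepare_msi() and iommu_dma_compose_msi_msg() are the real entry points; the callback names and doorbell address are illustrative):

    /* Preemptible path (e.g. activation): map and cache the doorbell. */
    static int my_msi_activate(struct irq_data *d)
    {
            phys_addr_t doorbell = 0x08090000;   /* hypothetical address */

            return iommu_dma_prepare_msi(irq_data_get_msi_desc(d), doorbell);
    }

    /* Atomic-safe path: compose the message, then swizzle in the IOVA. */
    static void my_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
    {
            msg->address_hi = 0;
            msg->address_lo = 0x08090000;
            msg->data = d->hwirq;
            iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg);
    }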
-
- 24 January 2019, 1 commit
-
-
Submitted by Shaokun Zhang

end_pfn is never used after commit <aa3ac946> ('iommu/iova: Make dma 32bit pfn implicit'), so clean it up.

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 17 December 2018, 1 commit
-
-
Submitted by Joerg Roedel

Use the new helpers dev_iommu_fwspec_get()/set() to access the dev->iommu_fwspec pointer. This makes it easier to move that pointer later into another struct.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
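A minimal sketch of the accessor-based pattern (dev_iommu_fwspec_get() is the real helper; the function name and ops comparison are hypothetical illustrations):

    static bool dev_uses_my_iommu(struct device *dev)
    {
            /* Was: dev->iommu_fwspec. The helper hides where it lives. */
            struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

            return fwspec && fwspec->ops == &my_iommu_ops; /* hypothetical */
    }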
-
- 11 December 2018, 1 commit
-
-
Submitted by Ganapatrao Kulkarni

Change function __iommu_dma_alloc_pages() to allocate pages for DMA from the respective device's NUMA node. The ternary operator that would otherwise be needed for alloc_pages_node() is tidied up along with this.

The motivation for this change is to have a page-allocation policy consistent with direct DMA mapping, which attempts to allocate pages local to the device, as mentioned in [1]. In addition, a marginal performance improvement has been observed for certain workloads: with this patch we saw a 0.9% average throughput improvement running tcrypt with the HiSilicon crypto engine. We also include a modification to use kvzalloc() in place of the kzalloc()/vzalloc() combination.

[1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html

Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[JPG: Added kvzalloc(), drop pages ** being device local, remove ternary operator, update message]
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
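A hedged sketch of the two ideas combined (alloc_pages_node(), dev_to_node() and kvzalloc() are real APIs; the helper name and error handling are illustrative):

    static struct page **alloc_dma_pages_node(struct device *dev,
                                              unsigned int count, gfp_t gfp)
    {
            struct page **pages;
            unsigned int i;

            /* kvzalloc() covers both the kmalloc and vmalloc cases. */
            pages = kvzalloc(count * sizeof(*pages), GFP_KERNEL);
            if (!pages)
                    return NULL;

            for (i = 0; i < count; i++) {
                    /* Prefer pages local to the device's NUMA node. */
                    pages[i] = alloc_pages_node(dev_to_node(dev), gfp, 0);
                    if (!pages[i]) {
                            while (i--)
                                    __free_page(pages[i]);
                            kvfree(pages);
                            return NULL;
                    }
            }
            return pages;
    }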
-
- 06 December 2018, 1 commit
-
-
Submitted by Christoph Hellwig

Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let the core dma-mapping code handle the rest.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
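In driver-facing terms the failure check is the standard one (sketch; dma_mapping_error() compares against the DMA_MAPPING_ERROR sentinel, so no per-driver ->mapping_error is needed):

    dma_addr_t addr = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);

    if (dma_mapping_error(dev, addr))   /* addr == DMA_MAPPING_ERROR */
            return -ENOMEM;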
-
- 01 October 2018, 1 commit
-
-
Submitted by Zhen Lei

With the flush queue infrastructure already abstracted into IOVA domains, hooking it up in iommu-dma is pretty simple. Since there is a degree of dependency on the IOMMU driver knowing what to do to play along, we key the whole thing off a domain attribute which will be set on default DMA ops domains to request non-strict invalidation. That way, drivers can indicate the appropriate support by acknowledging the attribute, and we can easily fall back to strict invalidation otherwise.

The flush queue callback needs a handle on the iommu_domain which owns our cookie, so we have to add a pointer back to that; but neatly, that's also sufficient to indicate whether we're using a flush queue or not, and thus which way to release IOVAs. The only slight subtlety is switching __iommu_dma_unmap() from calling iommu_unmap() to explicit iommu_unmap_fast()/iommu_tlb_sync() so that we can elide the sync entirely in non-strict mode.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
[rm: convert to domain attribute, tweak comments and commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
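A hedged sketch of the reworked unmap path, with the internal shape assumed from the description (fq_domain doubling as the "flush queue in use" flag; signatures per the iommu API of that era):

    static void __iommu_dma_unmap(struct iommu_domain *domain,
                                  dma_addr_t iova, size_t size)
    {
            struct iommu_dma_cookie *cookie = domain->iova_cookie;

            WARN_ON(iommu_unmap_fast(domain, iova, size) != size);
            if (!cookie->fq_domain)     /* strict mode: sync before reuse */
                    iommu_tlb_sync(domain);
            iommu_dma_free_iova(cookie, iova, size);
    }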
-
- 25 September 2018, 1 commit
-
-
Submitted by Robin Murphy

Most parts of iommu-dma already assume they are operating on a default domain set up by iommu_dma_init_domain(), and can be converted straight over to avoid the refcounting bottleneck. MSI page mappings may be in an unmanaged domain with an explicit MSI-only cookie, so retain the non-specific lookup; that's OK since they're far from a contended fast path either way.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 28 July 2018, 1 commit
-
-
Submitted by Robin Murphy

Take the new bus limit into account (when present) for IOVA allocations, to accommodate those SoCs which integrate off-the-shelf IP blocks with narrower interconnects, such that the link between a device output and an IOMMU input can truncate DMA addresses to even fewer bits than the native size of either block's interface would imply. Eventually it might make sense for the DMA core to apply this constraint up-front in dma_set_mask() and friends, but for now this seems like the least risky approach.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
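The clamping itself is tiny (a sketch; bus_dma_mask was the struct device field of that era, later renamed bus_dma_limit):

    u64 dma_limit = dma_get_mask(dev);

    /* Honour a narrower bus/interconnect limit when one is present. */
    if (dev->bus_dma_mask)
            dma_limit &= dev->bus_dma_mask;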
-
- 03 May 2018, 1 commit
-
-
Submitted by Shameer Kolothum

This pretty much reverts commit 273df963 ('iommu/dma: Make PCI window reservation generic') by moving the PCI window region reservation back into the dma-specific path, so that these regions don't get exposed via the IOMMU API interface. With this change, the vfio interface will report only iommu-specific reserved regions to user space.

Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Fixes: 273df963 ('iommu/dma: Make PCI window reservation generic')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 14 February 2018, 1 commit
-
-
Submitted by Shameer Kolothum

Modify iommu_dma_get_resv_regions() to include the GICv3 ITS region on ACPI-based ARM platforms, which may require HW MSI reservations.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 12 October 2017, 1 commit
-
-
Submitted by Tomasz Nowicki

Since IOVA allocation failure is not an unusual case, we need to flush the CPUs' rcaches in the hope that we will succeed in the next round. However, it is useful to be able to decide whether the rcache flush step is needed, for two reasons:
- Poor scalability. On a large system with ~100 CPUs, iterating over and flushing the rcache for each CPU becomes a serious bottleneck, so we may want to defer it.
- free_cpu_cached_iovas() does not care about the max PFN we are interested in. Thus we may flush our rcaches and still get no new IOVA, as in the commonly used scenario:

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
            iova = alloc_iova_fast(iovad, iova_len,
                                   DMA_BIT_MASK(32) >> shift);
    if (!iova)
            iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);

1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get PCI devices a SAC address.
2. alloc_iova() fails due to a full 32-bit space.
3. The rcaches contain PFNs outside the 32-bit space, so free_cpu_cached_iovas() throws entries away for nothing and alloc_iova() fails again.
4. The next alloc_iova_fast() call cannot take advantage of the rcache, since we have just defeated the caches. In this case we pick the slowest option to proceed.

This patch reworks the flushed_rcache local flag into an additional function argument that controls the rcache flush step, and updates all users to do the flush only as a last resort.

Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
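After the change, the dma-iommu caller presumably reads like this sketch (alloc_iova_fast() gains a bool flush_rcache argument; the surrounding variables are as in the scenario above):

    unsigned long iova = 0;

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
            iova = alloc_iova_fast(iovad, iova_len,
                                   DMA_BIT_MASK(32) >> shift, false);
    if (!iova)
            /* Last chance: allow the expensive per-CPU rcache flush. */
            iova = alloc_iova_fast(iovad, iova_len,
                                   dma_limit >> shift, true);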
-
- 27 September 2017, 1 commit
-
-
Submitted by Zhen Lei

Now that the cached node optimisation can apply to all allocations, the couple of users which were playing tricks with dma_32bit_pfn in order to benefit from it can stop doing so. Conversely, there is also no need for all the other users to explicitly calculate a 'real' 32-bit PFN, when init_iova_domain() can happily do that itself from the page granularity.

CC: Thierry Reding <thierry.reding@gmail.com>
CC: Jonathan Hunter <jonathanh@nvidia.com>
CC: David Airlie <airlied@linux.ie>
CC: Sudeep Dutt <sudeep.dutt@intel.com>
CC: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
[rm: use iova_shift(), rewrote commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
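Sketch of the caller-visible change (init_iova_domain() drops its trailing dma_32bit_pfn parameter; base_pfn and order are illustrative):

    /* Before (sketch): every caller computed an explicit 32-bit hint. */
    init_iova_domain(iovad, 1UL << order, base_pfn,
                     DMA_BIT_MASK(32) >> order);

    /* After: page granularity and start PFN are all that's needed. */
    init_iova_domain(iovad, 1UL << order, base_pfn);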
-
- 20 June 2017, 1 commit
-
-
Submitted by Christoph Hellwig

DMA_ERROR_CODE is not a public API and will go away soon. The dma-iommu driver already implements a proper ->mapping_error method, so it is only using the value internally. Add a new local define using the value from arm64, which is the only current user of dma-iommu.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 17 May 2017, 2 commits
-
-
Submitted by Robin Murphy

When walking the rbtree, the fact that iovad->start_pfn and limit_pfn are both inclusive limits creates an ambiguity once limit_pfn reaches the bottom of the address space and they overlap. Commit 5016bdb7 ('iommu/iova: Fix underflow bug in __alloc_and_insert_iova_range') fixed the worst side-effect of this, that of underflow wraparound leading to bogus allocations, but the remaining fallout is that any attempt to allocate start_pfn itself erroneously fails.

The cleanest way to resolve the ambiguity is to simply make limit_pfn an exclusive limit when inside the guts of the rbtree. Since we're working with PFNs, representing one past the top of the address space is always possible without fear of overflow, and elsewhere it just makes life a little more straightforward.

Reported-by: Aaron Sierra <asierra@xes-inc.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

When __iommu_dma_map() and iommu_dma_free_iova() are called from iommu_dma_get_msi_page(), various iova_*() helpers are still invoked in the process, which is unwise since they access a different member of the union (the iova_domain) from that which was last written, and there's no guarantee that sensible values will result anyway. Clean up the code paths that are valid for an MSI cookie to ensure we only do iova_domain-specific things when we're actually dealing with one.

Fixes: a44e6657 ('iommu/dma: Clean up MSI IOVA allocation')
Reported-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 03 April 2017, 3 commits
-
-
Submitted by Robin Murphy

With IOVA allocation suitably tidied up, we are finally free to opt in to the per-CPU caching mechanism. The caching alone can provide a modest improvement over walking the rbtree for weedier systems (iperf3 shows ~10% more ethernet throughput on an ARM Juno r1 constrained to a single 650MHz Cortex-A53), but the real gain will be in sidestepping the rbtree lock contention which larger ARM-based systems with lots of parallel I/O are starting to feel the pain of.

Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Now that allocation is suitably abstracted, our private alloc/free helpers can drive the trivial MSI cookie allocator directly as well, which lets us clean up its exposed guts from iommu_dma_map_msi_msg() and simplify things quite a bit.

Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

In preparation for some IOVA allocation improvements, clean up all the explicit struct iova usage such that all our mapping, unmapping and cleanup paths deal exclusively with addresses rather than implementation details. In the process, a few of the things we're touching get renamed for the sake of internal consistency.

Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 22 March 2017, 3 commits
-
-
Submitted by Robin Murphy

Now that we're applying the IOMMU API reserved regions to our IOVA domains, we shouldn't need to privately special-case PCI windows, or indeed anything else which isn't specific to our iommu-dma layer. However, since those aren't IOMMU-specific either, rather than start duplicating code into IOMMU drivers, let's transform the existing function into an iommu_get_resv_regions() helper that they can share.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Now that it's simple to discover the necessary reservations for a given device/IOMMU combination, let's wire up the appropriate handling. Basic reserved regions and direct-mapped regions we simply have to carve out of IOVA space (the IOMMU core having already mapped the latter before attaching the device). For hardware MSI regions, we also pre-populate the cookie with matching msi_pages. That way, irqchip drivers which normally assume MSIs to require mapping at the IOMMU can keep working without having to special-case their iommu_dma_map_msi_msg() hook, or indeed be aware at all of quirks preventing the IOMMU from translating certain addresses.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

Even if a host controller's CPU-side MMIO windows into PCI I/O space do happen to leak into PCI memory space such that it might treat them as peer addresses, trying to reserve the corresponding I/O space addresses doesn't do anything to help solve that problem. Stop doing a silly thing.

Fixes: fade1ec0 ('iommu/dma: Avoid PCI host bridge windows')
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 February 2017, 1 commit
-
-
Submitted by Robin Murphy

Back when this was first written, dma_supported() was somewhat of a murky mess, with subtly different interpretations being relied upon in various places. The "does device X support DMA to address range Y?" uses, assuming Y to be physical addresses, which motivated the current iommu_dma_supported() implementation and are alluded to in the comment therein, have since been cleaned up, leaving only the far less ambiguous "can device X drive address bits Y" usage internal to DMA API mask setting. As such, there is no reason to keep a slightly misleading callback which does nothing but duplicate the current default behaviour; we already constrain IOVA allocations to the iommu_domain aperture where necessary, so let's leave DMA mask business to architecture-specific code where it belongs.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 30 January 2017, 2 commits
-
-
Submitted by Robin Murphy

Whilst PCI devices may have 64-bit DMA masks, they still benefit from using 32-bit addresses wherever possible in order to avoid DAC (PCI) or longer address packets (PCIe), which may incur a performance overhead. Implement the same optimisation as other allocators by trying to get a 32-bit address first, only falling back to the full mask if that fails.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy

iommu_dma_init_domain() was originally written under the misconception that dma_32bit_pfn represented some sort of size limit for IOVA domains. Since the truth is almost the exact opposite of that, rework the logic and comments to reflect its real purpose of optimising lookups when allocating from a subset of the available 64-bit space.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 23 January 2017, 1 commit
-
-
Submitted by Robin Murphy

IOMMU domain users such as VFIO face a similar problem to DMA API ops with regard to mapping MSI messages in systems where the MSI write is subject to IOMMU translation. With the relevant infrastructure now in place for managed DMA domains, it's actually really simple for other users to piggyback off that and reap the benefits without giving up their own IOVA management, and without having to reinvent their own wheel in the MSI layer.

Allow such users to opt into automatic MSI remapping by dedicating a region of their IOVA space to a managed cookie, and extend the mapping routine to implement a trivial linear allocator in such cases, to avoid the needless overhead of a full-blown IOVA domain.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
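A hedged sketch of a user opting in (iommu_domain_alloc() and iommu_get_msi_cookie() are the real entry points; the function name and base address are illustrative):

    static int setup_unmanaged_domain_msi(void)
    {
            struct iommu_domain *domain = iommu_domain_alloc(&pci_bus_type);
            dma_addr_t msi_base = 0x08000000;  /* hypothetical reserved IOVA */

            if (!domain)
                    return -ENOMEM;
            /* Dedicate a slice of our IOVA space to automatic MSI mapping. */
            return iommu_get_msi_cookie(domain, msi_base);
    }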
-
- 19 January 2017, 1 commit
-
-
Submitted by Mitchel Humpherys

The newly added DMA_ATTR_PRIVILEGED is useful for creating mappings that are only accessible to privileged DMA engines. Implement it in dma-iommu.c so that the ARM64 DMA IOMMU mapper can make use of it.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
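Usage from a driver would presumably look like this sketch (dma_map_single_attrs() and DMA_ATTR_PRIVILEGED are the real API; queue_base/queue_size are hypothetical):

    /* Map a command queue so only the privileged DMA master sees it. */
    dma_addr_t q = dma_map_single_attrs(dev, queue_base, queue_size,
                                        DMA_TO_DEVICE, DMA_ATTR_PRIVILEGED);
    if (dma_mapping_error(dev, q))
            return -ENOMEM;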
-
- 14 November 2016, 1 commit
-
-
Submitted by Robin Murphy

With the new dma_{map,unmap}_resource() functions added to the DMA API for the benefit of cases like slave DMA, add suitable implementations to the arsenal of our generic layer. Since cache maintenance should not be a concern, these can both be standalone callback implementations without the need for arch code wrappers.

CC: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
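A sketch of the slave-DMA use case these serve (dma_map_resource()/dma_unmap_resource() are the real API; the helper name and FIFO address are illustrative):

    static int my_setup_fifo_dma(struct device *dev)
    {
            phys_addr_t fifo_phys = 0x40001000;  /* hypothetical device FIFO */
            dma_addr_t fifo_dma;

            fifo_dma = dma_map_resource(dev, fifo_phys, sizeof(u32),
                                        DMA_BIDIRECTIONAL, 0);
            if (dma_mapping_error(dev, fifo_dma))
                    return -EIO;
            /* ... program the DMA engine with fifo_dma ... */
            dma_unmap_resource(dev, fifo_dma, sizeof(u32),
                               DMA_BIDIRECTIONAL, 0);
            return 0;
    }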
-
- 16 September 2016, 2 commits
-
-
Submitted by Robin Murphy

With our DMA ops enabled for PCI devices, we should avoid allocating IOVAs which a host bridge might misinterpret as peer-to-peer DMA and lead to faults, corruption or other badness. To be safe, punch out holes for all of the relevant host bridge's windows when initialising a DMA domain for a PCI device.

CC: Marek Szyprowski <m.szyprowski@samsung.com>
CC: Inki Dae <inki.dae@samsung.com>
Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Robin Murphy

When an MSI doorbell is located downstream of an IOMMU, attaching devices to a DMA ops domain and switching on translation leads to a rude shock when their attempt to write to the physical address returned by the irqchip driver faults (or worse, writes into some already-mapped buffer) and no interrupt is forthcoming. Address this by adding a hook for relevant irqchip drivers to call from their compose_msi_msg() callback, to swizzle the physical address with an appropriately-mapped IOVA for any device attached to one of our DMA ops domains.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 August 2016, 1 commit
-
-
Submitted by Robin Murphy

Where a device driver has set a 64-bit DMA mask to indicate the absence of addressing limitations, we still need to ensure that we don't allocate IOVAs beyond the actual input size of the IOMMU. The reported aperture is the most reliable way we have of inferring that input address size, so use that to enforce a hard upper limit where available.

Fixes: 0db2e5d1 ('iommu: Implement common IOMMU ops for DMA mapping')
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
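The enforcement presumably amounts to a clamp like this sketch (the struct iommu_domain_geometry fields are real; the surrounding variable is illustrative):

    /* Never allocate IOVAs the IOMMU's input side can't represent. */
    if (domain->geometry.force_aperture)
            dma_limit = min_t(u64, dma_limit,
                              domain->geometry.aperture_end);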
-