- 25 Feb 2016, 9 commits
-
-
Authored by Marek Szyprowski
SYSMMU on some SoCs reports bogus values in the VERSION register. Force the hardware version to 1.0 for such controllers. This patch also moves reading the version register to the driver's probe() function. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Marek Szyprowski
This patch simplifies the code for handling flpdcache invalidation. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Marek Szyprowski
This patch rewrites the sysmmu_init_config() function to make it easier to read and understand. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Marek Szyprowski
This patch provides a new implementation of the page fault handling code. The new implementation is ready for future extensions. No functional changes have been made. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Marek Szyprowski
This patch changes some internal functions to have access to the state of the sysmmu device instead of only its registers. This will make the code ready for future extensions. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Marek Szyprowski
All clock API functions can be called on a NULL clock, so simplify the code by dropping the checks for master clock presence. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
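A minimal sketch of the pattern behind this cleanup (the field name is hypothetical, not the actual driver code): the common clock framework treats a NULL struct clk as a successful no-op, so the guard is redundant.

    /* Before: guard every call on the optional master clock. */
    if (data->clk_master)
            clk_enable(data->clk_master);

    /* After: clk_enable(NULL) simply returns 0, so call it directly. */
    clk_enable(data->clk_master);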
-
Authored by Marek Szyprowski
This patch replaces custom ARM-specific code for performing CPU cache flush operations with generic code based on DMA-mapping. The domain-management code is independent of any particular SYSMMU device, so the first registered SYSMMU device is used for the DMA-mapping calls. This simplification works fine because all SYSMMU controllers are in the same address space (where the DMA address equals the physical address) and the DMA-mapping calls are done mainly to flush the CPU cache so that changes become visible to the SYSMMU controllers. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
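A sketch of the idea, assuming (as the commit states) that DMA addresses equal physical addresses; the helper name is hypothetical:

    /* Make a freshly written page-table range visible to the SYSMMU.
     * The DMA-mapping call performs the CPU cache maintenance that the
     * removed ARM-specific code used to do by hand. */
    static void pgtable_flush(struct device *sysmmu, void *start, void *end)
    {
            dma_sync_single_for_device(sysmmu, virt_to_phys(start),
                                       (size_t)(end - start), DMA_TO_DEVICE);
    }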
-
Authored by Marek Szyprowski
This patch adds support for the DMA domain type. Such domains have a DMA cookie prepared and can be used by the generic DMA-IOMMU glue layer. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
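A sketch of what "cookie prepared" means in a driver's domain_alloc callback; iommu_get_dma_cookie() is the generic DMA-IOMMU helper, while the struct and function names below are hypothetical:

    struct my_domain {                          /* hypothetical driver type */
            struct iommu_domain domain;
            /* ... driver-private page-table state ... */
    };

    static struct iommu_domain *my_domain_alloc(unsigned int type)
    {
            struct my_domain *dom;

            if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
                    return NULL;

            dom = kzalloc(sizeof(*dom), GFP_KERNEL);
            if (!dom)
                    return NULL;

            /* DMA domains get an IOVA-allocator cookie for the generic
             * dma-iommu glue layer; unmanaged domains don't need one. */
            if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(&dom->domain)) {
                    kfree(dom);
                    return NULL;
            }

            return &dom->domain;
    }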
-
Authored by Marek Szyprowski
This patch replaces the custom code in the add_device implementation with a call to iommu_group_get_for_dev() and provides the needed callback. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
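Roughly what the simplified add_device path looks like (a sketch, not the exact driver code): iommu_group_get_for_dev() finds or allocates the group through the driver's device_group callback, so the driver no longer open-codes that logic.

    static int my_add_device(struct device *dev)
    {
            struct iommu_group *group;

            /* The core handles group lookup/creation and attachment. */
            group = iommu_group_get_for_dev(dev);
            if (IS_ERR(group))
                    return PTR_ERR(group);

            iommu_group_put(group);  /* drop the reference taken above */
            return 0;
    }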
-
- 15 Feb 2016, 1 commit
-
-
Authored by David Woodhouse
According to the VT-d specification we need to clear the PPR bit in the Page Request Status register when handling page requests, or the hardware won't generate any more interrupts. This wasn't actually necessary on SKL/KBL (which may well be the subject of a hardware erratum, although it's harmless enough). But other implementations do appear to get it right, and we only ever get one interrupt unless we clear the PPR bit. Reported-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
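The register handling this implies, sketched below; the PPR bit is write-1-to-clear per the VT-d specification, and the macro names follow the Intel IOMMU driver's conventions but should be treated as illustrative:

    /* Ack the Pending Page Request bit after draining the request
     * queue; without this write-1-to-clear, no further page-request
     * interrupts are generated. */
    writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);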
-
- 29 Jan 2016, 3 commits
-
-
Authored by Baoquan He
The commit below sets the alias DTE when the DTE of its peripheral is set up. However, a bug in that code sets the alias DTE incorrectly; correct it in this patch.

commit e25bfb56
Author: Joerg Roedel <jroedel@suse.de>
Date: Tue Oct 20 17:33:38 2015 +0200

    iommu/amd: Set alias DTE in do_attach/do_detach

Signed-off-by: Baoquan He <bhe@redhat.com> Tested-by: Mark Hounschell <markh@compro.net> Cc: stable@vger.kernel.org # v4.4 Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Jeremy McNicoll
Fix a simple typo when disabling the IOTLB on PCI(e) devices. Fixes: b16d0cb9 ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS") Cc: stable@vger.kernel.org # v4.4 Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com> Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Lada Trimasova
Trying to build a kernel for ARC with both CONFIG_COMPILE_TEST and CONFIG_IOMMU_IO_PGTABLE_LPAE enabled (e.g. as a result of "make allyesconfig") results in the following build failure:

| CC drivers/iommu/io-pgtable-arm.o
| linux/drivers/iommu/io-pgtable-arm.c: In function ‘__arm_lpae_alloc_pages’:
| linux/drivers/iommu/io-pgtable-arm.c:221:3: error: implicit declaration of function ‘dma_map_single’ [-Werror=implicit-function-declaration]
|   dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
|   ^
| linux/drivers/iommu/io-pgtable-arm.c:221:42: error: ‘DMA_TO_DEVICE’ undeclared (first use in this function)
|   dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
|   ^

Since IOMMU_IO_PGTABLE_LPAE depends on the DMA API, io-pgtable-arm.c should include linux/dma-mapping.h. This fixes the reported failure. Cc: Alexey Brodkin <abrodkin@synopsys.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Joerg Roedel <joro@8bytes.org> Signed-off-by: Lada Trimasova <ltrimas@synopsys.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
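The fix itself is the one-line include the message describes:

    #include <linux/dma-mapping.h>   /* dma_map_single(), DMA_TO_DEVICE */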
-
- 14 Jan 2016, 2 commits
-
-
Authored by CQ Tang
This is a 32-bit register. Apparently harmless on real hardware, but it causes justified warnings in simulation. Signed-off-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
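The class of fix this describes, as a sketch (the summary doesn't name the register, so REG_OFFSET below is a placeholder): match the MMIO accessor width to the register width.

    u32 val;

    /* Wrong: a 64-bit read of a 32-bit register also touches the
     * neighbouring bytes - harmless on real hardware, but flagged
     * by simulators. */
    /* val = readq(iommu->reg + REG_OFFSET); */

    /* Right: a 32-bit accessor for a 32-bit register. */
    val = readl(iommu->reg + REG_OFFSET);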
-
Authored by David Woodhouse
Holding mm_users works OK for graphics, which was the first user of SVM with VT-d. However, it works less well for other devices, where we actually do a mmap() from the file descriptor to which the SVM PASID state is tied. In this case, on process exit we end up with a recursive reference count:
- The MM remains alive until the file is closed and the driver's release() call ends up unbinding the PASID.
- The VMA corresponding to the mmap() remains intact until the MM is destroyed.
- Thus the file isn't closed, even when exit_files() runs, because the VMA is still holding a reference to it. And the MM remains alive…
To address this issue, we *stop* holding mm_users while the PASID is bound. We already hold mm_count by virtue of the MMU notifier, and that can be made to be sufficient. It means that for a period during process exit, the fun part of mmput() has happened and exit_mmap() has been called, so the MM is basically defunct. But the PGD still exists and the PASID is still bound to it. During this period we have to be very careful: exit_mmap() doesn't use mm->mmap_sem because it doesn't expect anyone else to be touching the MM (quite reasonably, since mm_users is zero). So we also need to fix the fault handler to just report failure if mm_users is already zero, and to temporarily bump mm_users while handling any faults. Additionally, exit_mmap() calls mmu_notifier_release() *before* it tears down the page tables, which is too early for us to flush the IOTLB for this PASID. And __mmu_notifier_release() removes every notifier from the list, so when exit_mmap() finally *does* tear down the mappings and clear the page tables, we don't get notified. So we work around this by clearing the PASID table entry in our MMU notifier release() callback. That way, the hardware *can't* get any pages back from the page tables before they get cleared. Hardware designers have confirmed that the resulting 'PASID not present' faults should be handled just as gracefully as 'page not present' faults, the important criterion being that they don't perturb the operation for any *other* PASID in the system. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
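A sketch of the fault-handler change described above (the structure and field names are illustrative, patterned on how such handlers pin the MM):

    /* The bound process may be exiting: if mm_users already dropped
     * to zero, the MM is defunct and the fault must simply fail. */
    if (!atomic_inc_not_zero(&svm->mm->mm_users))
            goto bad_req;

    /* ... resolve the fault against svm->mm, e.g. via
     *     handle_mm_fault(), while mm_users is pinned ... */

    mmput(svm->mm);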
-
- 07 Jan 2016, 4 commits
-
-
Authored by Joerg Roedel
Only check for an error once iommu->iommu_dev has been assigned, and only assign drhd->iommu when the function can't fail anymore. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Nicholas Krause
Add the proper check to alloc_iommu() to make sure that the call to iommu_device_create() completed successfully; if it did not, free the previously allocated resources and return the error code to the caller. Signed-off-by: Nicholas Krause <xerofoify@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
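The shape of the check, as a sketch; the arguments (iommu_groups) and the unwind label stand in for whatever alloc_iommu() actually sets up earlier:

    iommu->iommu_dev = iommu_device_create(NULL, iommu, iommu_groups,
                                           "%s", iommu->name);
    if (IS_ERR(iommu->iommu_dev)) {
            err = PTR_ERR(iommu->iommu_dev);
            goto err_unmap;          /* undo earlier mappings/IDs */
    }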
-
Authored by Robin Murphy
When mapping a non-page-aligned scatterlist entry, we copy the original offset to the output DMA address before aligning it to hand off to iommu_map_sg(), then later add the IOVA page address portion to get the final mapped address. However, when the IOVA page size is smaller than the CPU page size, it is the offset within the IOVA page that we want, not the offset within the CPU page, which can easily be larger than an IOVA page and thus result in an incorrect final address. Fix the bug by taking only the IOVA-aligned part of the offset as the basis of the DMA address, not the whole thing. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
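The arithmetic at the heart of the fix, sketched with illustrative names: only the offset within the IOVA page may seed the DMA address, because an IOVA page can be smaller than a CPU page.

    size_t iova_off = s->offset & (iova_pgsize - 1);

    /* Buggy: the full intra-CPU-page offset can exceed one IOVA page
     * when iova_pgsize < PAGE_SIZE, corrupting the final address. */
    /* sg_dma_address(s) = s->offset; */

    /* Fixed: keep only the intra-IOVA-page part. */
    sg_dma_address(s) = iova_off;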
-
Authored by Dan Carpenter
get_device_id() returns an unsigned short device id. It never fails and it never returns a negative value, so we can remove this condition. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
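The dead code this removes looks roughly like the following sketch: comparing an unsigned value against zero is always false.

    u16 devid = get_device_id(dev);

    if (devid < 0)          /* always false: devid is unsigned */
            return -ENODEV; /* dead branch removed by the patch */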
-
- 29 Dec 2015, 21 commits
-
-
Authored by Joerg Roedel
Preallocate between 4 and 8 apertures when a device gets its dma_mask. With more apertures we significantly reduce contention on the domain lock. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
First search for a non-contended aperture with trylock before spinning. Signed-off-by: Joerg Roedel <jroedel@suse.de>
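A sketch of the two-pass pattern (names illustrative): take free apertures opportunistically first, and only spin on contended locks if no uncontended aperture yields an address.

    /* Pass 1: only apertures whose bitmap lock is immediately free. */
    for (i = 0; i < APERTURE_MAX_RANGES; i++) {
            if (!spin_trylock_irqsave(&range[i]->bitmap_lock, flags))
                    continue;            /* contended: retry in pass 2 */
            address = alloc_from_range(range[i], pages);
            spin_unlock_irqrestore(&range[i]->bitmap_lock, flags);
            if (address != -1)
                    return address;
    }

    /* Pass 2: everything was contended or full; now spin for the locks. */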
-
Authored by Joerg Roedel
Make this pointer percpu so that we start searching for new addresses in the range where we last stopped, which has a higher probability of still being in the cache. Signed-off-by: Joerg Roedel <jroedel@suse.de>
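A minimal sketch of the per-CPU cursor idea (variable names illustrative, not the driver's actual layout):

    /* Each CPU resumes its search where it last stopped, so the bitmap
     * words it touches are likely still in that CPU's cache. */
    static DEFINE_PER_CPU(unsigned long, alloc_cursor);

    unsigned long start = this_cpu_read(alloc_cursor);
    unsigned long bit   = find_next_zero_bit(bitmap, bitmap_size, start);

    this_cpu_write(alloc_cursor, bit + pages);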
-
Authored by Joerg Roedel
Remove the long hold times of the domain->lock and rely on the bitmap_lock instead. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Make sure the aperture range is fully initialized before it is visible to the address allocator. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
This allows the page tables to be built up without holding any locks. As a consequence, it removes the need to pre-populate the dma_ops page tables. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
It really belongs there and not in __map_single. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Don't flush the IOMMU TLB when we free something behind the current next_bit pointer. Update the next_bit pointer instead, and let the flush happen on the next wraparound in the allocation path. Signed-off-by: Joerg Roedel <jroedel@suse.de>
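One plausible shape of the check (a sketch, not the actual patch): addresses behind the cursor cannot be handed out again before the allocator wraps, and the wraparound path already flushes.

    if (address >= range->next_bit) {
            /* Ahead of the cursor: could be reallocated before any
             * wraparound, so the IOTLB must be flushed now. */
            flush_iotlb_range(dom, address, pages);
    }
    /* Behind the cursor: skip the flush; the wraparound flush in the
     * allocation path will cover it before the range is reused. */

    bitmap_clear(range->bitmap, address, pages);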
-
Authored by Joerg Roedel
The flushing of IOMMU TLBs is now done on a per-range basis, so there is no need anymore for domain-wide flush tracking. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
This way we don't need to care about next_index wrapping around in dma_ops_alloc_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Instead of setting need_flush, do the flush directly in dma_ops_free_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
It points to the next aperture index to allocate from. We don't need the full address anymore because this is now tracked in struct aperture_range. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
The parameter is not needed because the value is part of the struct dma_ops_domain that is already passed in. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Since the allocator wraparound now happens in this function, flush the IOMMU TLB there too. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Instead of skipping to the next aperture, first try again in the current one. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
Moving it before the pte_pages array puts it into the same cache line as the spinlock and the bitmap array pointer. This should save a cache miss. Signed-off-by: Joerg Roedel <jroedel@suse.de>
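Illustratively (the field order and the choice of moved field are a sketch of the idea, not the exact struct): the hot allocation fields end up sharing one cache line, while the large, colder pte_pages array follows.

    struct aperture_range {
            spinlock_t bitmap_lock;      /* hot: taken on every alloc   */
            unsigned long offset;
            unsigned long next_bit;      /* moved up: now shares the    */
            unsigned long *bitmap;       /* cache line with the fields  */
                                         /* used on the alloc fast path */
            struct page *pte_pages[64];  /* cold, and large             */
    };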
-
Authored by Joerg Roedel
Make this a wrapper around iommu_ops_area_alloc() for now, and add more logic to this function later on. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
The page offset of the aperture must be passed instead of 0. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
This allows the bitmap_lock to be held only for a very short period of time. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
There may have been present PTEs which, in theory, could have made it into the IOMMU TLB. Flush the addresses out on the error path to make sure no stale entries remain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Authored by Joerg Roedel
This lock only protects the address allocation bitmap in one aperture. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-