- 13 Jul 2016, 6 commits
-
-
Committed by Joerg Roedel
Use the iommu-api map/unmap functions instead. This will be required anyway when the IOVA code is used for address allocation.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Make this function ready to be used in the DMA-API path. Reorder the parameters a bit while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It is used to reserve the dm-regions in the iova-tree.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Put the MSI range, the HT range and the MMIO ranges of PCI devices into that range, so that these addresses are not allocated for DMA. Copy this address list into every created dma_ops_domain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
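A minimal user-space sketch of the exclusion-range idea above, with made-up names (reserve_range, alloc_pages) and a plain byte array instead of the driver's real bitmap; reserved windows such as the MSI, HT and MMIO ranges are pre-marked as busy so the DMA address allocator can never hand them out:

    /* Hypothetical sketch of "exclusion ranges": addresses the DMA address
     * allocator must never return are pre-marked as busy in the bitmap. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define APERTURE_PAGES 1024          /* 1024 * 4 KiB = 4 MiB aperture */
    #define PAGE_SHIFT     12

    static uint8_t bitmap[APERTURE_PAGES];   /* 1 byte per page for simplicity */

    /* Mark [start, start+len) as unavailable for DMA allocation. */
    static void reserve_range(uint64_t start, uint64_t len)
    {
        uint64_t first = start >> PAGE_SHIFT;
        uint64_t last  = (start + len - 1) >> PAGE_SHIFT;

        for (uint64_t i = first; i <= last && i < APERTURE_PAGES; i++)
            bitmap[i] = 1;
    }

    /* First-fit allocation of 'pages' contiguous pages, skipping reserved ones. */
    static int64_t alloc_pages(unsigned int pages)
    {
        for (unsigned int i = 0; i + pages <= APERTURE_PAGES; i++) {
            unsigned int j;
            for (j = 0; j < pages && !bitmap[i + j]; j++)
                ;
            if (j == pages) {
                memset(&bitmap[i], 1, pages);
                return (int64_t)i << PAGE_SHIFT;
            }
        }
        return -1;
    }

    int main(void)
    {
        reserve_range(0x100000, 0x100000);   /* pretend this is an MMIO window */
        printf("got DMA address 0x%llx\n",
               (unsigned long long)alloc_pages(512));
        return 0;
    }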
-
Committed by Joerg Roedel
Use it later for allocating the IO virtual addresses.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The default domain for a device might also be identity-mapped. In this case the kernel would crash when unity mappings are defined for the device. Fix that by making sure the domain is a dma_ops domain.
Fixes: 0bb6e243 ('iommu/amd: Support IOMMU_DOMAIN_DMA type allocation')
Cc: stable@vger.kernel.org # v4.2+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 Jun 2016, 1 commit
-
-
Committed by Wan Zongshun
AMD has more drivers that will use ACPI and the platform bus driver later, and all of those devices need IOMMU support; the eMMC driver is one example. The latest AMD eMMC controller will use the sdhci-acpi.c driver, which relies on the platform bus to match device and driver and passes the 'dev' of struct platform_device as the map_sg parameter to the IOMMU driver for DMA requests, so the iommu-ops are needed on the platform bus.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 May 2016, 1 commit
-
-
Committed by Joerg Roedel
The statistics are not really used for anything and should be replaced by generic, per-device statistic counters. Remove the code for now.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 22 Apr 2016, 2 commits
-
-
Committed by Joerg Roedel
They will be needed there later.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Use the better 'var < 0' check.
Fixes: 7aba6cb9 ('iommu/amd: Make call-sites of get_device_id aware of its return value')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 Apr 2016, 1 commit
-
-
Committed by Dan Carpenter
"devid" needs to be signed for the error handling to work.
Fixes: b097d11a ('iommu/amd: Manage iommu_group for ACPI HID devices')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 11 Apr 2016, 1 commit
-
-
Committed by Joerg Roedel
Commit 61289cba ('iommu/amd: Remove old alias handling code') removed the old alias handling code from the AMD IOMMU driver because this is now handled by the IOMMU core code. But this also removed the handling of PCI aliases, which is not handled by the core code. This caused issues with PCI devices that have hidden PCIe-to-PCI bridges that rewrite the request-id. Fix this bug by re-introducing some of the functions removed by commit 61289cba and adding an alias field to 'struct iommu_dev_data'. This field carries the return value of the get_alias() function, and the code uses it instead of the amd_iommu_alias_table[] array.
Fixes: 61289cba ('iommu/amd: Remove old alias handling code')
Cc: stable@vger.kernel.org # v4.4+
Tested-by: Tomasz Golinski <tomaszg@math.uwb.edu.pl>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 07 Apr 2016, 5 commits
-
-
Committed by Wan Zongshun
The AMD UART DMA controller is an ACPI HID type device and its driver is based on the AMBA bus, so it also needs IOMMU support. This patch just sets the AMD IOMMU callbacks for the AMBA bus.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Wan Zongshun
This patch creates a new function for finding or creating an IOMMU group for an acpihid (ACPI Hardware ID) device. Acpihid devices with the same devid are put into the same group; they get the same domain id and share the same page table.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
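A rough user-space sketch of the "same devid, same group" behaviour described above (invented names, not the actual helper): look up an existing group by devid and reuse it, otherwise create a new one.

    /* Hypothetical sketch: a tiny registry that returns an existing group
     * for a known devid and creates one otherwise. */
    #include <stdio.h>
    #include <stdlib.h>

    struct group {
        int devid;
        int id;
    };

    static struct group *groups[64];
    static int ngroups;

    static struct group *get_or_create_group(int devid)
    {
        for (int i = 0; i < ngroups; i++)
            if (groups[i]->devid == devid)
                return groups[i];           /* reuse: same devid, same group */

        struct group *g = malloc(sizeof(*g));
        g->devid = devid;
        g->id = ngroups;
        groups[ngroups++] = g;
        return g;                           /* first device with this devid */
    }

    int main(void)
    {
        printf("group %d\n", get_or_create_group(0x40)->id);  /* creates group 0 */
        printf("group %d\n", get_or_create_group(0x40)->id);  /* reuses group 0  */
        printf("group %d\n", get_or_create_group(0x41)->id);  /* creates group 1 */
        return 0;
    }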
-
Committed by Wan Zongshun
The current IOMMU driver assumes that the downstream devices are PCI. With the newly added ACPI-HID IVHD device entry support, this is no longer true. This patch adds a device type check to distinguish the PCI and acpihid device code paths.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Wan Zongshun
This patch makes the call-sites of get_device_id aware of its return value.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Wan Zongshun
This patch introduces acpihid_map, which is used to store the new IVHD device entries extracted from the BIOS IVRS table. It also provides a utility function, add_acpi_hid_device(), to add this type of device to the map.
Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 01 Mar 2016, 1 commit
-
-
Committed by Joerg Roedel
Detach the device that is about to be removed from its domain (if it has one) to clear any related state such as the DTE entry and the device's ATS state.
Reported-by: Kelly Zytaruk <Kelly.Zytaruk@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 29 Jan 2016, 1 commit
-
-
Committed by Baoquan He
In the commit below, the alias DTE is set when its peripheral's DTE is set up. However, there is a bug that sets the alias DTE wrongly; correct it in this patch.
commit e25bfb56
Author: Joerg Roedel <jroedel@suse.de>
Date: Tue Oct 20 17:33:38 2015 +0200
    iommu/amd: Set alias DTE in do_attach/do_detach
Signed-off-by: Baoquan He <bhe@redhat.com>
Tested-by: Mark Hounschell <markh@compro.net>
Cc: stable@vger.kernel.org # v4.4
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 07 Jan 2016, 1 commit
-
-
Committed by Dan Carpenter
get_device_id() returns an unsigned short device id. It never fails and it never returns a negative value, so we can remove this condition.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 29 Dec 2015, 20 commits
-
-
Committed by Joerg Roedel
Preallocate between 4 and 8 apertures when a device gets its dma_mask. With more apertures we reduce the contention on the domain lock significantly.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
First search for a non-contended aperture with trylock before spinning.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
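A simplified pthread sketch of the trylock-first strategy (illustrative names, not the kernel code, which uses per-aperture spinlocks): take any uncontended lock without waiting, and only block if every aperture is currently busy.

    /* Simplified user-space sketch of "trylock before spinning".
     * Build with: cc -pthread trylock.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NR_APERTURES 4

    struct aperture {
        pthread_mutex_t lock;
        /* bitmap, next_bit, ... would live here in a real allocator */
    };

    static struct aperture apertures[NR_APERTURES] = {
        { PTHREAD_MUTEX_INITIALIZER }, { PTHREAD_MUTEX_INITIALIZER },
        { PTHREAD_MUTEX_INITIALIZER }, { PTHREAD_MUTEX_INITIALIZER },
    };

    static struct aperture *lock_some_aperture(void)
    {
        /* Pass 1: take any uncontended aperture without waiting. */
        for (int i = 0; i < NR_APERTURES; i++)
            if (pthread_mutex_trylock(&apertures[i].lock) == 0)
                return &apertures[i];

        /* Pass 2: everything was contended, so block on one of them. */
        pthread_mutex_lock(&apertures[0].lock);
        return &apertures[0];
    }

    int main(void)
    {
        struct aperture *ap = lock_some_aperture();
        printf("allocating from aperture %ld\n", (long)(ap - apertures));
        pthread_mutex_unlock(&ap->lock);
        return 0;
    }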
-
Committed by Joerg Roedel
Make this pointer percpu so that we start searching for new addresses in the range where we last stopped, which has a higher probability of still being in the cache.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
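A loose user-space analogy only: _Thread_local stands in for a per-CPU variable, names are made up, and locking is omitted (the real allocator protects the bitmap with a per-aperture lock). Each thread keeps its own search hint so it resumes where it last stopped instead of always starting from one shared position.

    /* Loose analogy for a percpu allocation hint. */
    #include <stdio.h>

    #define APERTURE_PAGES 1024

    static _Thread_local unsigned int next_index;   /* per-thread search hint */
    static unsigned char bitmap[APERTURE_PAGES];    /* shared allocation state */

    static long alloc_page(void)
    {
        for (unsigned int n = 0; n < APERTURE_PAGES; n++) {
            unsigned int i = (next_index + n) % APERTURE_PAGES;
            if (!bitmap[i]) {
                bitmap[i] = 1;
                next_index = i + 1;     /* remember where to continue next time */
                return i;
            }
        }
        return -1;                      /* aperture full */
    }

    int main(void)
    {
        long a = alloc_page();
        long b = alloc_page();
        printf("page %ld, then page %ld\n", a, b);
        return 0;
    }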
-
Committed by Joerg Roedel
Remove the long holding times of the domain->lock and rely on the bitmap_lock instead.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Make sure the aperture range is fully initialized before it is visible to the address allocator.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This allows the page-tables to be built up without holding any locks. As a consequence it removes the need to pre-populate the dma_ops page-tables.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It really belongs there and not in __map_single.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Don't flush the iommu tlb when we free something behind the current next_bit pointer. Update the next_bit pointer instead and let the flush happen on the next wraparound in the allocation path.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
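A hedged, single-threaded sketch of the deferred-flush idea with invented names: frees never flush by themselves, a free ahead of the cursor just pushes the cursor past the freed page so it cannot be reused immediately, and the stand-in IOTLB flush runs only when allocation wraps around and could start reusing freed addresses.

    /* Simplified sketch of deferring the IOTLB flush to the wraparound. */
    #include <stdio.h>

    #define PAGES 16

    static unsigned char bitmap[PAGES];
    static unsigned int  next_bit;                /* allocation cursor */

    static void iommu_flush_tlb(void)             /* stands in for the real flush */
    {
        printf("flushing IOTLB before reusing freed addresses\n");
    }

    static void free_page_idx(unsigned int i)
    {
        bitmap[i] = 0;
        /* Freed at or ahead of the cursor: push the cursor past it so the
         * page cannot be handed out again before the wraparound flush. */
        if (i + 1 > next_bit)
            next_bit = i + 1;
    }

    static long alloc_page(void)
    {
        for (unsigned int n = 0; n <= PAGES; n++) {
            if (next_bit >= PAGES) {              /* wraparound: flush now */
                iommu_flush_tlb();
                next_bit = 0;
            }
            unsigned int i = next_bit++;
            if (!bitmap[i]) {
                bitmap[i] = 1;
                return i;
            }
        }
        return -1;                                /* aperture is full */
    }

    int main(void)
    {
        long a = alloc_page();
        alloc_page();
        free_page_idx(a);                         /* no flush here */
        while (alloc_page() >= 0)                 /* flush happens on wraparound */
            ;
        return 0;
    }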
-
Committed by Joerg Roedel
The flushing of iommu tlbs is now done on a per-range basis, so there is no longer any need for domain-wide flush tracking.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This way we don't need to care about the next_index wrapping around in dma_ops_alloc_addresses.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Instead of setting need_flush, do the flush directly in dma_ops_free_addresses.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It points to the next aperture index to allocate from. We don't need the full address anymore because this is now tracked in struct aperture_range.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The parameter is not needed because the value is part of the struct dma_ops_domain that is already passed in.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Since the allocator wraparound happens in this function now, flush the iommu tlb there too.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Instead of skipping to the next aperture, first try again in the current one.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Moving it before the pte_pages array puts it into the same cache-line as the spin-lock and the bitmap array pointer. This should save a cache-miss.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Make this a wrapper around iommu_area_alloc() for now and add more logic to this function later on.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The page-offset of the aperture must be passed instead of 0.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This allows the bitmap_lock to be held only for a very short period of time.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
There have been present PTEs which in theory could have made it to the IOMMU TLB. Flush the addresses out on the error path to make sure no stale entries remain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-