- 29 December 2015, 7 commits
-
-
Committed by Joerg Roedel
Make this a wrapper around iommu_ops_area_alloc() for now and add more logic to this function later on. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The page-offset of the aperture must be passed instead of 0. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This allows keeping the bitmap_lock for only a very short period of time. Signed-off-by: Joerg Roedel <jroedel@suse.de>
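To picture the locking pattern this change aims for, here is a minimal C sketch (the struct and helper names are made up; only the lock-scope idea is taken from the commit): the expensive preparation happens outside the lock, and bitmap_lock covers just the bitmap update itself.

```c
#include <linux/spinlock.h>
#include <linux/bitmap.h>

/* Hypothetical aperture, reduced to what the sketch needs. */
struct example_aperture {
	spinlock_t	bitmap_lock;
	unsigned long	*bitmap;
	unsigned long	size;		/* number of bits */
};

static unsigned long example_alloc_range(struct example_aperture *ap,
					 unsigned int pages)
{
	unsigned long flags, index;

	/* Compute limits, alignment, etc. without holding the lock. */

	spin_lock_irqsave(&ap->bitmap_lock, flags);
	index = bitmap_find_next_zero_area(ap->bitmap, ap->size, 0,
					   pages, 0);
	if (index < ap->size)
		bitmap_set(ap->bitmap, index, pages);
	spin_unlock_irqrestore(&ap->bitmap_lock, flags);

	return index;	/* >= ap->size means failure */
}
```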
-
Committed by Joerg Roedel
There have been present PTEs which in theory could have made it to the IOMMU TLB. Flush the addresses out on the error path to make sure no stale entries remain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This lock only protects the address allocation bitmap in one aperture. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It is only used in this file anyway, so keep it there. Same with 'struct aperture_range'. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This prevents possible flooding of the kernel log. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 07 November 2015, 1 commit
-
-
Committed by Mel Gorman
mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd

__GFP_WAIT has been used to identify atomic context in callers that hold spinlocks or are in interrupts. They are expected to be high priority and have access to one of two watermarks lower than "min", which can be referred to as the "atomic reserve". __GFP_HIGH users get access to the first lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options were available. Some have abused __GFP_WAIT, leading to a situation where an optimistic allocation with a fallback option can access atomic reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic, cannot sleep and have no alternative. High priority users continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for background reclaim. __GFP_WAIT is redefined as a caller that is willing to enter direct reclaim and wake kswapd for background reclaim.

This patch then converts a number of sites:

o __GFP_ATOMIC is used by callers that are high priority and have memory pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall into this category, where kswapd will still be woken but atomic reserves are not used, as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the helper gfpflags_allow_blocking() where possible, because checking for __GFP_WAIT as was done historically can now trigger false positives. Some exceptions like dm-crypt.c exist, where the code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT and were depending on access to atomic reserves for inconspicuous reasons. In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of GFP flags instead of starting with something like GFP_KERNEL. They may now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless if it's missed in most cases, as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
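As a hedged illustration of the helper mentioned above (the surrounding function is invented; gfpflags_allow_blocking() is the real helper this patch introduces, returning true when __GFP_DIRECT_RECLAIM is set):

```c
#include <linux/gfp.h>
#include <linux/slab.h>

/* Hypothetical caller with a fallback option: consult the helper
 * instead of testing __GFP_WAIT, which would now false-positive. */
static void *example_get_buffer(size_t len, gfp_t gfp, void *emergency)
{
	void *buf;

	if (!gfpflags_allow_blocking(gfp))
		return emergency;	/* must not enter direct reclaim */

	buf = kmalloc(len, gfp);	/* may sleep and reclaim */
	return buf ? buf : emergency;
}
```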
-
- 22 October 2015, 1 commit
-
-
Committed by Joerg Roedel
Set the device_group call-back to pci_device_group() for the Intel VT-d and the AMD IOMMU driver. Signed-off-by: Joerg Roedel <jroedel@suse.de>
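As a sketch, wiring the generic helper into a driver's iommu_ops looks roughly like this (the ops variable is invented; pci_device_group() and the .device_group field are the iommu core interface of that era):

```c
#include <linux/iommu.h>

static const struct iommu_ops example_iommu_ops = {
	/* ... other call-backs elided ... */

	/* Let the core's generic PCI logic (aliases, multi-function
	 * quirks, DMA isolation) decide the iommu_group layout. */
	.device_group	= pci_device_group,
};
```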
-
- 21 October 2015, 8 commits
-
-
Committed by Joerg Roedel
The driver always uses a constant size for these buffers anyway, so there is no need to waste memory to store the sizes. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This mostly removes the code to create dev_data structures for alias device ids. They are not necessary anymore, as they were only created for device ids which have no struct pci_dev associated with them. But these device ids are handled in a simpler way now, so there is no need for this code anymore. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
With this we don't have to create dev_data entries for non-existent devices (which only exist as request-ids). Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
These functions rely on being called with IRQs disabled. Add a WARN_ON to detect early when it's not. Signed-off-by: Joerg Roedel <jroedel@suse.de>
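The pattern is simply an assertion at function entry; a minimal sketch (function name invented, WARN_ON() and irqs_disabled() are the kernel primitives involved):

```c
#include <linux/bug.h>
#include <linux/irqflags.h>

static void example_requires_irqs_off(void)
{
	/* Complain with a stack trace, but keep running, if a caller
	 * forgot to disable interrupts first. */
	WARN_ON(!irqs_disabled());

	/* ... work that must not race with interrupts ... */
}
```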
-
Committed by Joerg Roedel
This function is already called with IRQs disabled, so there is no need to disable them again. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The alias list is already handled by the iommu core code, so there is no need anymore to handle it in this part of the AMD IOMMU code. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The condition in the BUG_ON is an indicator of a BUG, but no reason to kill the code path. Turn it into a WARN_ON and bail out if it is hit. Signed-off-by: Joerg Roedel <jroedel@suse.de>
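A sketch of the conversion pattern (condition and function invented; WARN_ON() conveniently returns the condition's value, which makes the bail-out a one-liner):

```c
#include <linux/bug.h>
#include <linux/errno.h>

static int example_op(void *ctx)
{
	/* Was: BUG_ON(!ctx); -- now warn and back out instead of
	 * killing the whole code path. */
	if (WARN_ON(!ctx))
		return -EINVAL;

	/* ... normal path ... */
	return 0;
}
```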
-
Committed by Joerg Roedel
During device assignment/deassignment the flags in the DTE get lost, which might cause spurious faults, for example when the device tries to access the system management range. Fix this by not clearing the flags with the rest of the DTE. Reported-by: G. Richard Bellamy <rbellamy@pteradigm.com> Tested-by: G. Richard Bellamy <rbellamy@pteradigm.com> Cc: stable@vger.kernel.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 October 2015, 1 commit
-
-
Committed by Joerg Roedel
When a device group is detached from its domain, the iommu core code calls into the iommu driver to detach each device individually. Before this functionality went into the iommu core code, it was implemented in the drivers, also in the AMD IOMMU driver as the device alias handling code. This code is still present, as there might be aliases that don't exist as real PCI devices (and are therefore invisible to the iommu core code).

Unfortunately it might happen now that a device is unbound multiple times from its domain, first by the alias handling code and then by the iommu core code (or vice versa). This ends up in the do_detach function, which dereferences the dev_data->domain pointer. When the device is already detached, this pointer is NULL and we get a kernel oops.

Removing the alias code completely is not an option, as that would also remove the code which handles invisible aliases. The code could be simplified, but this is too big of a change outside the merge window. For now, just check the dev_data->domain pointer in do_detach and bail out if it is NULL.

Reported-by: Andreas Hartmann <andihartmann@freenet.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
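A sketch of the guard described above, with the driver's structures reduced to the one field that matters (layout abbreviated, everything but the do_detach name approximate):

```c
#include <linux/bug.h>

struct protection_domain;

struct iommu_dev_data {
	struct protection_domain *domain;	/* NULL when detached */
	/* ... other fields elided ... */
};

static void do_detach(struct iommu_dev_data *dev_data)
{
	/* The device may already have been unbound, either by the
	 * alias handling code or by the iommu core. */
	if (WARN_ON(!dev_data->domain))
		return;

	/* ... actual unbind from dev_data->domain ... */
}
```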
-
- 14 August 2015, 2 commits
-
-
Committed by Joerg Roedel
Found by a coccicheck script. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Allocate the irq data only in the loop. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 31 July 2015, 1 commit
-
-
Committed by Joerg Roedel
With the grouping of multi-function devices a non-ATS capable device might also end up in the same domain as an IOMMUv2 capable device. So handle this situation gracefully and don't consider it a bug anymore. Tested-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 30 July 2015, 4 commits
-
-
Committed by Joerg Roedel
Some AMD systems also have non-PCI devices which can do DMA. Those can't be handled by the AMD IOMMU, as the hardware can only handle PCI. These devices would end up with no dma_ops, as neither the per-device nor the global dma_ops will get set. SWIOTLB provides global dma_ops when it is active, so make sure there are global dma_ops too when swiotlb is disabled. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
In passthrough mode (iommu=pt) all devices are identity mapped. If a device does not support 64-bit DMA it might still need remapping. Make sure swiotlb is initialized to provide this remapping. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Since devices with IOMMUv2 functionality might be in the same group as devices without it, allow those devices in IOMMUv2 domains too. Otherwise attaching the group with the IOMMUv2 device to the domain will fail. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Remove the AMD IOMMU driver implementation for passthrough mode and rely on the new iommu core features for that. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 01 July 2015, 1 commit
-
-
Committed by Joerg Roedel
This function contains the common parts between the initialization of dma_ops_domains and usual protection domains. This also fixes a long-standing bug which was uncovered by recent changes, in which the api_lock was not initialized for dma_ops_domains. Reported-by: George Wang <xuw2015@gmail.com> Tested-by: George Wang <xuw2015@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 19 June 2015, 1 commit
-
-
Committed by Joerg Roedel
Make sure that we are skipping over large PTEs while walking the page-table tree. Cc: stable@kernel.org Fixes: 5c34c403 ("iommu/amd: Fix memory leak in free_pagetable") Signed-off-by: Joerg Roedel <jroedel@suse.de>
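The idea can be sketched generically: a large PTE maps its range directly, so the walker must not descend as if a next-level table sat below it. Everything here (bit layout, table width, level handling) is illustrative, not the AMD IOMMU's actual page-table format:

```c
#include <linux/io.h>
#include <linux/types.h>

#define EX_PTE_PRESENT	(1ULL << 0)
#define EX_PTE_LARGE	(1ULL << 1)	/* maps a large range directly */

static void example_walk(u64 *table, int level)
{
	int i;

	for (i = 0; i < 512; i++) {
		u64 pte = table[i];

		if (!(pte & EX_PTE_PRESENT))
			continue;
		if (pte & EX_PTE_LARGE)
			continue;	/* no subtable below a large PTE */
		if (level > 1)
			example_walk(phys_to_virt(pte & ~0xfffULL),
				     level - 1);
	}
}
```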
-
- 11 June 2015, 10 commits
-
-
Committed by Joerg Roedel
Without this patch only -ENOTSUPP is handled, but there are other possible errors. Handle them too. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This function can fail. Propagate any errors back to the initialization state machine. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The list_head and target_dev members are not used anymore. Remove them. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
With device initialization now done in the add_device call-back, there is no reason for this function anymore. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
A device that might be used for HSA needs to be in a direct mapped domain so that all DMA-API mappings stay alive when the IOMMUv2 stack is used. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Add support to allocate direct mapped domains through the IOMMU-API. Signed-off-by: Joerg Roedel <jroedel@suse.de>
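In iommu core terms a direct mapped domain is an identity domain; below is a hedged sketch of a driver-side domain_alloc call-back that accepts one (driver-specific setup elided, and real drivers embed iommu_domain in their own structure; the domain type constants are the core's):

```c
#include <linux/iommu.h>
#include <linux/slab.h>

static struct iommu_domain *example_domain_alloc(unsigned type)
{
	struct iommu_domain *dom;

	if (type != IOMMU_DOMAIN_UNMANAGED &&
	    type != IOMMU_DOMAIN_DMA &&
	    type != IOMMU_DOMAIN_IDENTITY)
		return NULL;

	dom = kzalloc(sizeof(*dom), GFP_KERNEL);
	if (!dom)
		return NULL;

	/* For IOMMU_DOMAIN_IDENTITY a driver would set up (or skip)
	 * page tables so that DMA addresses equal physical ones. */
	return dom;
}
```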
-
Committed by Joerg Roedel
This enables allocation of DMA-API default domains from the IOMMU core and switches allocation of the dma-api domain over to the IOMMU core too. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Implement these two iommu-ops call-backs to make use of the initialization and notifier features of the iommu core. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Add the get_dm_regions and put_dm_regions callbacks to the iommu_ops of the AMD IOMMU driver. Signed-off-by: Joerg Roedel <jroedel@suse.de>
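A sketch of what registering these call-backs looks like (stub bodies and names invented; the .get_dm_regions/.put_dm_regions fields and struct iommu_dm_region are the core's interface of that era, later renamed to reserved regions):

```c
#include <linux/iommu.h>

static void example_get_dm_regions(struct device *dev,
				   struct list_head *head)
{
	/* Add struct iommu_dm_region entries for ranges that must
	 * stay direct mapped for this device (e.g. unity ranges). */
}

static void example_put_dm_regions(struct device *dev,
				   struct list_head *head)
{
	/* Free the entries added above. */
}

static const struct iommu_ops example_iommu_ops = {
	/* ... other call-backs elided ... */
	.get_dm_regions	= example_get_dm_regions,
	.put_dm_regions	= example_put_dm_regions,
};
```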
-
- 02 June 2015, 1 commit
-
-
Committed by Joerg Roedel
This reverts commit 5fc872c7. The DMA-API does not strictly require that the memory returned by dma_alloc_coherent is zeroed out. For that another function (dma_zalloc_coherent) should be used. But all other x86 DMA-API implementations I checked zero out the memory, so some drivers rely on it and break when it is not. It seems the (driver-)world is not yet ready for this change, so revert it. Signed-off-by: Joerg Roedel <jroedel@suse.de>
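To make the contract concrete, a hedged sketch (function, device, size and handle are placeholders; dma_zalloc_coherent() existed alongside dma_alloc_coherent() in kernels of this era):

```c
#include <linux/dma-mapping.h>

static void *example_alloc(struct device *dev, size_t size,
			   dma_addr_t *handle)
{
	/* Per the DMA-API, contents here are NOT guaranteed zeroed,
	 * even though most x86 implementations zero them anyway. */
	void *buf = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);

	/* A driver that depends on zeroed memory should instead call:
	 *   buf = dma_zalloc_coherent(dev, size, handle, GFP_KERNEL);
	 */
	return buf;
}
```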
-
- 29 May 2015, 1 commit
-
-
Committed by Joerg Roedel
Handle this case to make sure boundary_size does not become 0 and trigger a BUG_ON later. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 24 April 2015, 1 commit
-
-
Committed by Jiang Liu
Move check of cfg->move_in_progress into send_cleanup_vector() to prepare for simplifying struct irq_cfg.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Tested-by: Joerg Roedel <jroedel@suse.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Cohen <david.a.cohen@linux.intel.com>
Cc: Sander Eikelenboom <linux@eikelenboom.it>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: iommu@lists.linux-foundation.org
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Joerg Roedel <joro@8bytes.org>
Link: http://lkml.kernel.org/r/1428978610-28986-26-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-