- 29 Dec 2015, 18 commits
-
-
Committed by Joerg Roedel
This allows building up the page-tables without holding any locks. As a consequence, it removes the need to pre-populate the dma_ops page-tables. Signed-off-by: Joerg Roedel <jroedel@suse.de>
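A minimal sketch of the lock-free pattern a change like this typically relies on: new page-table levels are installed with cmpxchg64(), so CPUs racing to populate the same slot resolve the race without a lock. All identifiers below (flag masks, helper name) are illustrative, not the driver's actual code.

```c
#include <linux/bitops.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/mm.h>

#define MY_PTE_PRESENT		BIT_ULL(0)		/* illustrative */
#define MY_PTE_ADDR_MASK	GENMASK_ULL(51, 12)	/* illustrative */

/* Return the next-level table behind *parent, allocating it locklessly. */
static u64 *fetch_or_alloc_table(u64 *parent)
{
	u64 old, new;
	u64 *page;

	old = READ_ONCE(*parent);
	if (old & MY_PTE_PRESENT)	/* another CPU was faster */
		return phys_to_virt(old & MY_PTE_ADDR_MASK);

	page = (u64 *)get_zeroed_page(GFP_ATOMIC);
	if (!page)
		return NULL;

	new = virt_to_phys(page) | MY_PTE_PRESENT;

	/* Install atomically; if we lose the race, free our page and
	 * use the winner's table instead. */
	if (cmpxchg64(parent, old, new) != old) {
		free_page((unsigned long)page);
		old = READ_ONCE(*parent);
		return phys_to_virt(old & MY_PTE_ADDR_MASK);
	}

	return page;
}
```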
-
Committed by Joerg Roedel
It really belongs there and not in __map_single. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Don't flush the IOMMU TLB when we free something behind the current next_bit pointer. Update the next_bit pointer instead and let the flush happen on the next wraparound in the allocation path. Signed-off-by: Joerg Roedel <jroedel@suse.de>
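A rough sketch of that deferral (simplified; the struct is abridged and the code is not a verbatim copy of the driver): if the freed range lies ahead of the allocation cursor, the cursor is pushed past it instead of flushing, because the range can only be handed out again after the next wraparound, and the wraparound path flushes anyway.

```c
#include <linux/bitmap.h>
#include <linux/spinlock.h>

struct aperture_range {			/* abridged for the sketch */
	spinlock_t	bitmap_lock;
	unsigned long	*bitmap;
	unsigned long	next_bit;	/* allocation cursor */
};

static void sketch_free_range(struct aperture_range *range,
			      unsigned long page_idx, unsigned int pages)
{
	unsigned long flags;

	spin_lock_irqsave(&range->bitmap_lock, flags);

	/* Freed range extends past the cursor: defer the IOMMU TLB flush
	 * by moving the cursor beyond it. */
	if (page_idx + pages > range->next_bit)
		range->next_bit = page_idx + pages;

	bitmap_clear(range->bitmap, page_idx, pages);
	spin_unlock_irqrestore(&range->bitmap_lock, flags);
}
```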
-
Committed by Joerg Roedel
The flushing of IOMMU TLBs is now done on a per-range basis, so there is no longer any need for domain-wide flush tracking. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This way we don't need to care about next_index wrapping around in dma_ops_alloc_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Instead of setting need_flush, do the flush directly in dma_ops_free_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It points to the next aperture index to allocate from. We don't need the full address anymore, because this is now tracked in struct aperture_range. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The parameter is not needed, because the value is part of the already passed-in struct dma_ops_domain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Since the allocator wraparound now happens in this function, flush the IOMMU TLB there too. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Instead of skipping to the next aperture, first try again in the current one. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Moving it before the pte_pages array puts it into the same cache-line as the spin-lock and the bitmap array pointer. This should save a cache-miss. Signed-off-by: Joerg Roedel <jroedel@suse.de>
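Expanding the abridged struct from the sketch above, the layout idea looks roughly like this (field set and array size illustrative):

```c
struct aperture_range {
	/* hot: all touched on every address allocation, one cache-line */
	spinlock_t	bitmap_lock;
	unsigned long	*bitmap;
	unsigned long	offset;
	unsigned long	next_bit;	/* moved before pte_pages */

	/* cold: only needed when new page-table pages are populated */
	u64		*pte_pages[64];
};
```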
-
Committed by Joerg Roedel
Make this a wrapper around iommu_ops_area_alloc() for now; more logic will be added to this function later on. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The page-offset of the aperture must be passed instead of 0. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This allows holding the bitmap_lock for only a very short period of time. Signed-off-by: Joerg Roedel <jroedel@suse.de>
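The resulting shape, roughly (a sketch reusing the abridged aperture_range from the earlier sketch; iommu_area_alloc() is the generic bitmap allocator from lib/iommu-helper.c): only the bitmap search runs under the lock, while TLB flushing, retries, and wraparound handling happen outside of it.

```c
#include <linux/iommu-helper.h>

static unsigned long sketch_alloc_range(struct aperture_range *range,
					unsigned long range_pages,
					unsigned int pages,
					unsigned long boundary_size)
{
	unsigned long address, flags;

	spin_lock_irqsave(&range->bitmap_lock, flags);
	address = iommu_area_alloc(range->bitmap, range_pages,
				   range->next_bit, pages,
				   0 /* shift */, boundary_size,
				   0 /* align_mask */);
	spin_unlock_irqrestore(&range->bitmap_lock, flags);

	return address;		/* (unsigned long)-1 on failure */
}
```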
-
Committed by Joerg Roedel
There may have been present PTEs which in theory could have made it into the IOMMU TLB. Flush the addresses out on the error path to make sure no stale entries remain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This lock only protects the address-allocation bitmap in one aperture. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
It is only used in this file anyway, so keep it there. The same goes for 'struct aperture_range'. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This prevents possible flooding of the kernel log. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 14 Dec 2015, 5 commits
-
-
Committed by Julia Lawall
This mmu_notifier_ops structure is never modified, so declare it as const, like the other mmu_notifier_ops structures. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Joerg Roedel <jroedel@suse.de>
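The shape of the change, as a sketch (handler name and ops table abridged): a table that is only ever read can be declared const and placed in read-only data.

```c
#include <linux/mmu_notifier.h>

static void sketch_mn_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
	/* driver-specific teardown elided */
}

/* const: the kernel never writes to this table after initialization */
static const struct mmu_notifier_ops sketch_mn_ops = {
	.release = sketch_mn_release,
};
```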
-
Committed by Joerg Roedel
Get rid of the three error paths that look the same and move error handling to a single place. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Instead of just checking for a write access, calculate the flags that are passed to handle_mm_fault() more precisely, and use the pre-defined macros. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Not doing so is a bug and might trigger a BUG_ON in handle_mm_fault(). So add the proper permission checks before calling into mm code. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
The handle_mm_fault() function expects the caller to do the access checks. Not doing so and calling the function with the wrong permissions is a bug (caught by a BUG_ON). So fix this bug by adding proper access checking to the I/O page-fault code in the AMD IOMMUv2 driver. Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
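The combined effect of these three patches, condensed into a sketch (the PPR_FAULT_* bit positions and helper names are illustrative, not the driver's exact definitions): derive the required VMA rights and precise fault flags from the PPR log entry, and never reach handle_mm_fault() on an access error.

```c
#include <linux/bitops.h>
#include <linux/mm.h>

#define PPR_FAULT_EXEC	BIT(1)		/* illustrative bit positions */
#define PPR_FAULT_READ	BIT(2)
#define PPR_FAULT_WRITE	BIT(5)

static bool sketch_access_error(struct vm_area_struct *vma, u16 ppr_flags)
{
	unsigned long requested = 0;

	if (ppr_flags & PPR_FAULT_EXEC)
		requested |= VM_EXEC;
	if (ppr_flags & PPR_FAULT_READ)
		requested |= VM_READ;
	if (ppr_flags & PPR_FAULT_WRITE)
		requested |= VM_WRITE;

	/* Any requested right the VMA does not grant is an access error. */
	return (requested & ~vma->vm_flags) != 0;
}

static int sketch_do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
			   unsigned long address, u16 ppr_flags)
{
	unsigned int flags = FAULT_FLAG_USER;	/* fault on behalf of a task */

	if (ppr_flags & PPR_FAULT_WRITE)
		flags |= FAULT_FLAG_WRITE;

	if (sketch_access_error(vma, ppr_flags))
		return VM_FAULT_SIGSEGV;	/* don't call handle_mm_fault() */

	return handle_mm_fault(mm, vma, address, flags);
}
```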
-
- 09 Nov 2015, 1 commit
-
-
Committed by Sebastian Ott
We use lazy allocation for translation-table entries but don't handle allocation (and other) failures during translation-table updates. Handle these failures and undo translation-table updates where it's meaningful. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 Nov 2015, 1 commit
-
-
Committed by Mel Gorman
mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd.

__GFP_WAIT has been used to identify atomic context in callers that hold spinlocks or are in interrupts. They are expected to be high priority and to have access to one of two watermarks lower than "min", which can be referred to as the "atomic reserve". __GFP_HIGH users get access to the first lower watermark and can be called the "high-priority reserve".

Over time, callers had a requirement to not block when fallback options were available. Some have abused __GFP_WAIT, leading to a situation where an optimistic allocation with a fallback option can access atomic reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic, cannot sleep, and have no alternative. High-priority users continue to use __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies callers that want to wake kswapd for background reclaim. __GFP_WAIT is redefined as a caller that is willing to enter direct reclaim and wake kswapd for background reclaim.

This patch then converts a number of sites:

- __GFP_ATOMIC is used by callers that are high priority and have memory pools for those requests. GFP_ATOMIC uses this flag.
- Callers that have a limited mempool to guarantee forward progress clear __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall into this category, where kswapd will still be woken but atomic reserves are not used, as there is a one-entry mempool to guarantee progress.
- Callers that are checking whether they are non-blocking should use the helper gfpflags_allow_blocking() where possible, because checking for __GFP_WAIT as was done historically can now trigger false positives. Some exceptions like dm-crypt.c exist, where the code intent is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag manipulations.
- Callers that built their own GFP flags instead of starting with GFP_KERNEL and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT and were depending on access to atomic reserves for inconspicuous reasons; in some cases it may be appropriate for them to use __GFP_HIGH. The second key hazard is callers that assembled their own combination of GFP flags instead of starting with something like GFP_KERNEL. They may now wish to specify __GFP_KSWAPD_RECLAIM; it's almost certainly harmless if it's missed in most cases, as other activity will wake kswapd.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Vitaly Wool <vitalywool@gmail.com> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
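One of the new helpers in action, as a hypothetical caller (the function and cache names are placeholders): gfpflags_allow_blocking() tests __GFP_DIRECT_RECLAIM, the flag that now actually means "this allocation may sleep".

```c
#include <linux/delay.h>
#include <linux/slab.h>

static void *fetch_object(struct kmem_cache *cache, gfp_t gfp)
{
	void *obj = kmem_cache_alloc(cache, gfp);

	/* On failure, only retry when the caller may block; testing
	 * __GFP_WAIT directly would now give false positives. */
	if (obj || !gfpflags_allow_blocking(gfp))
		return obj;

	msleep(20);			/* give reclaim a moment */
	return kmem_cache_alloc(cache, gfp);
}
```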
-
- 02 Nov 2015, 1 commit
-
-
Committed by Joerg Roedel
The function returns 0 on success, so check for the right value. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 28 Oct 2015, 1 commit
-
-
Committed by David Woodhouse
This is the downside of using bitfields in the struct definition rather than doing all the explicit masking and shifting. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
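A generic illustration of the hazard being alluded to (this is not the driver's real structure): a value assigned through a narrow bitfield is silently truncated, whereas explicit mask-and-shift code keeps the width visible at the assignment site.

```c
#include <linux/types.h>

struct demo_entry {
	u64 present :  1;
	u64 unused  : 11;
	u64 pfn     : 40;	/* only 40 bits survive an assignment */
	u64 flags   : 12;
};

static void demo_set_pfn(struct demo_entry *e, u64 phys)
{
	/* Bits 52 and above of phys are dropped without any warning. */
	e->pfn = phys >> 12;
}
```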
-
- 25 Oct 2015, 2 commits
-
-
Committed by David Woodhouse
When booted with intel_iommu=ecs_off, we were still allocating the PASID tables even though we couldn't actually use them. We really want to make the pasid_enabled() macro depend on ecs_enabled(). That is unfortunate, because currently they're the other way round to cope with the Broadwell/Skylake problems with ECS. Instead of having ecs_enabled() depend on pasid_enabled(), which was never something that made me happy anyway, make it depend in the normal case on the "broken PASID" bit 28 *not* being set. Then pasid_enabled() can depend on ecs_enabled(), as it should. And we also don't need to mess with it if we ever see an implementation that has some features requiring ECS (like PRI) but which *doesn't* have PASID support. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
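A paraphrased sketch of the resulting dependency (the macro composition is reconstructed from the description above; the driver's exact capability-accessor names may differ):

```c
/* ECS is usable when the hardware advertises it and the "broken
 * PASID" erratum bit (bit 28) is not set. */
#define sketch_ecs_enabled(iommu)				\
	(intel_iommu_ecs && ecap_ecs((iommu)->ecap) &&		\
	 !ecap_broken_pasid((iommu)->ecap))

/* PASID support now depends on ECS, not the other way round. */
#define sketch_pasid_enabled(iommu)				\
	(sketch_ecs_enabled(iommu) && ecap_pasid((iommu)->ecap))
```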
-
Committed by David Woodhouse
Not entirely clear why, but it seems we need to reserve PASID zero and flush it when we make a PASID entry present. Quite why we couldn't use the true PASID value isn't clear. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 23 Oct 2015, 2 commits
-
-
Committed by Joerg Roedel
Propagate the error value from ir_parse_ioapic_hpet_scope() in parse_ioapics_under_ir(), and clean up its calling loop. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Baoquan He
Adjust the return value of parse_ioapics_under_ir() so that a negative value represents failure and 0 represents success. This makes it consistent with other function implementations, and lets callers judge success with the usual if (!parse_ioapics_under_ir()) style. Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 22 Oct 2015, 9 commits
-
-
Committed by Scott Wood
Freescale's Layerscape ARM chips use the same structure. Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Joerg Roedel
Now that the iommu core's support for iommu groups is no longer PCI-centric, we can move default domain allocation to the bus-independent iommu_group_get_for_dev() function. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
All callers of iommu_group_get_for_dev() now provide a device_group call-back, so this fall-back is no longer needed. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This converts the ARM SMMU and SMMUv3 drivers to use the new device_group call-back. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Convert the fsl pamu driver to make use of the new device_group call-back. Cc: Varun Sethi <Varun.Sethi@freescale.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Set the device_group call-back to pci_device_group() for the Intel VT-d and AMD IOMMU drivers. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
This function can be used as a device_group call-back; it just allocates one iommu group per device. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
Rename that function to pci_device_group() and export it, so that IOMMU drivers can use it as their device_group call-back. Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Joerg Roedel
That call-back is currently unused; change it into a call-back function for finding the right IOMMU group for a device. This is a first step toward removing the hard-coded PCI dependency in the iommu-group code. Signed-off-by: Joerg Roedel <jroedel@suse.de>
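Taken together, this group of commits gives drivers a hook along these lines ("mydrv" is a placeholder and the ops table is abridged): PCI devices share pci_device_group()'s topology-aware grouping, while everything else can fall back to one group per device via generic_device_group().

```c
#include <linux/iommu.h>
#include <linux/pci.h>

static struct iommu_group *mydrv_device_group(struct device *dev)
{
	if (dev_is_pci(dev))
		return pci_device_group(dev);	/* shared PCI grouping rules */

	return generic_device_group(dev);	/* one group per device */
}

static const struct iommu_ops mydrv_ops = {
	.device_group	= mydrv_device_group,
	/* .attach_dev, .map, .unmap, ... elided */
};
```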
-