- 09 Aug 2009, 1 commit

Committed by David Woodhouse:
All callers of the former were also calling the latter, in one order or the other, and failing to correctly clean up if the second returned failure.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
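Judging from the 04 Jul 2009 entry below, the two functions are domain_add_dev_info() and domain_context_mapping(). A minimal sketch of the fold, with hypothetical link/unlink helpers standing in for the real bookkeeping:

    /* Sketch only: alloc_dev_info()/unlink_dev_info() are made-up stand-ins. */
    static int domain_add_dev_info(struct dmar_domain *domain,
                                   struct pci_dev *pdev, int translation)
    {
            struct device_domain_info *info;
            int ret;

            info = alloc_dev_info(domain, pdev);    /* link device into the domain */
            if (!info)
                    return -ENOMEM;

            ret = domain_context_mapping(domain, pdev, translation);
            if (ret)
                    unlink_dev_info(info);          /* one place to unwind on failure */
            return ret;
    }

Callers then make a single call and get correct cleanup for free, instead of pairing the two calls by hand.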
- 06 Aug 2009, 1 commit

Committed by Sheng Yang:
Two defects combined to make KVM device passthrough fail at random:
1. iommu_snooping was not initialized to zero when vm_iommu_init() was called, so it could hold a random value.
2. One line added by commit 2c2e2c38 ("IOMMU Identity Mapping Support") changed the code path so that it bypassed domain_update_iommu_cap() and also missed incrementing the domain iommu reference count. The latter is also likely to leak domains on repeated VMM assignment and deassignment.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
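A sketch of both fixes, assuming the 2009-era domain fields (iommu_snooping, iommu_count, iommu_bmp, iommu_lock); the exact call site and locals are illustrative:

    /* 1. Zero-initialise the flag instead of inheriting whatever was in memory. */
    domain->iommu_snooping = 0;

    /* 2. On device attach, restore the bookkeeping the identity-mapping
     *    change bypassed: bump the refcount and recompute the caps. */
    spin_lock_irqsave(&domain->iommu_lock, flags);
    if (!test_and_set_bit(iommu->seq_id, &domain->iommu_bmp)) {
            domain->iommu_count++;              /* the missed increment leaked domains */
            domain_update_iommu_cap(domain);    /* no longer bypassed */
    }
    spin_unlock_irqrestore(&domain->iommu_lock, flags);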
- 05 Aug 2009, 2 commits

Committed by Fenghua Yu:
The physical address passed to domain_pfn_mapping() should be rounded down to the start of the MM page, not the VT-d page. This issue causes a kernel panic on platforms with PAGE_SIZE > VTD_PAGE_SIZE, e.g. ia64.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
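A worked example of the difference, assuming ia64 with 16KiB MM pages against VT-d's fixed 4KiB pages (the address is made up):

    #define VTD_PAGE_SHIFT 12                 /* VT-d pages are always 4KiB */
    #define VTD_PAGE_MASK  (((u64)-1) << VTD_PAGE_SHIFT)

    u64 paddr = 0x10003000ULL;                /* 4KiB-aligned, not 16KiB-aligned */
    u64 vtd   = paddr & VTD_PAGE_MASK;        /* 0x10003000: start of the VT-d page */
    u64 mm    = paddr & PAGE_MASK;            /* 0x10000000: start of the MM page  */

Rounding to the VT-d page on such platforms hands domain_pfn_mapping() the wrong start address, which is what triggered the panic.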
-
Committed by Fenghua Yu:
In domain_sg_mapping(), use aligned_nrpages() instead of hand-coded rounding code for calculating the size of each sg elem. This means that on IA64 we correctly round up to the MM page size, not just to the VT-d page size. Also remove the incorrect mm_to_dma_pfn() when intel_map_sg() calls domain_sg_mapping() -- the 'size' variable is in VT-d pages already.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 04 Aug 2009, 1 commit

Committed by David Woodhouse:
This makes the hardware passthrough mode work a lot more like the software version, so that the behaviour of a kernel with 'iommu=pt' is the same whether the hardware supports passthrough or not. In particular:
- We use a single si_domain for the pass-through devices.
- 32-bit devices can be taken out of the pass-through domain so that they don't have to use swiotlb.
- Devices will work again after being removed from a KVM guest.
- A potential oops on OOM (in init_context_pass_through()) is fixed.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 20 Jul 2009, 1 commit

Committed by Dan Carpenter:
g_iommus is freed after we "goto error;". Found by smatch (http://repo.or.cz/w/smatch.git).
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 15 Jul 2009, 3 commits

Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
I see no reason why we did this _only_ in intel_unmap_page().
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
We only ever obtain this lock immediately before the iova_rbtree_lock, and release it immediately after the iova_rbtree_lock. So ditch it and just use iova_rbtree_lock.
[v2: Remove the lockdep bits this time too]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
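The shape of the change, sketched; 'alloc_lock' is a placeholder for the lock being removed, and the bracketing is as described above:

    /* Before: the outer lock exactly brackets the rbtree lock, protecting nothing extra. */
    spin_lock_irqsave(&iovad->alloc_lock, flags);      /* placeholder name */
    spin_lock(&iovad->iova_rbtree_lock);
    /* ... find a free range and insert the iova ... */
    spin_unlock(&iovad->iova_rbtree_lock);
    spin_unlock_irqrestore(&iovad->alloc_lock, flags);

    /* After: take the rbtree lock directly, moving the irq-save onto it. */
    spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
    /* ... find a free range and insert the iova ... */
    spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);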
- 09 Jul 2009, 1 commit

Committed by Sheng Yang:
After an API change, intel_iommu_unmap_range() introduced the assumption that its size parameter is non-zero; otherwise dma_pte_clear_range() would be handed an overflowed argument. But callers like KVM never made that assumption, so BUG()s were triggered. Fix it by returning early when size is 0.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
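The fix is a guard at the top; a sketch against the signature described above (parameter names assumed):

    static void intel_iommu_unmap_range(struct iommu_domain *domain,
                                        unsigned long iova, size_t size)
    {
            /* Callers such as KVM may legitimately pass size == 0; bail out
             * before the range arithmetic below turns it into a huge unmap. */
            if (!size)
                    return;

            /* ... existing unmap path ... */
    }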
- 08 Jul 2009, 1 commit

Committed by David Woodhouse:
We did before, in the end -- but it was at the bottom of a long stack of functions. Add an inline wrapper get_valid_domain_for_dev() which will use the cached one _first_ and only make the out-of-line call if it's not already set. This takes the average time taken for a 1-page intel_map_sg() from 5961 cycles to 4812 cycles on my Lenovo x200s test box -- a modest 20%.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
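A sketch of the wrapper; the cached-pointer location (dev->dev.archdata.iommu) is an assumption based on how this driver stashes per-device state:

    static inline struct dmar_domain *get_valid_domain_for_dev(struct pci_dev *dev)
    {
            struct device_domain_info *info = dev->dev.archdata.iommu;

            if (likely(info))                        /* fast path: cached domain  */
                    return info->domain;
            return __get_valid_domain_for_dev(dev);  /* slow path: full lookup    */
    }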
- 05 Jul 2009, 2 commits

Committed by David Woodhouse:
Our current strategy for pass-through mode is to put all devices into the 1:1 domain at startup (which is before we know what their dma_mask will be), and only _later_ take them out of that domain, if it turns out that they really can't address all of memory.

However, when there are a bunch of PCI devices behind a bridge, they all end up with the same source-id on their DMA transactions, and hence in the same IOMMU domain. This means that we _can't_ easily move them from the 1:1 domain into their own domain at runtime, because there might be DMA in-flight from their siblings.

So we have to adjust our pass-through strategy: for PCI devices not on the root bus, and for the bridges which will take responsibility for their transactions, we have to start up _out_ of the 1:1 domain, just in case.

This fixes the BUG() we see when we have 32-bit-capable devices behind a PCI-PCI bridge, and use the software identity mapping. It does mean that we might end up using 'normal' mapping mode for some devices which could actually live with the faster 1:1 mapping -- but this is only for PCI devices behind bridges, which presumably aren't the devices for which people are most concerned about performance.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
At boot time, the dma_mask won't have been set on any devices, so we assume that all devices will be 64-bit capable (and thus get a 1:1 map).
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 04 Jul 2009, 6 commits

Committed by David Woodhouse:
This should fix kernel.org bug #11821, where the dcdbas driver makes up a platform device and then uses dma_alloc_coherent() on it, in an attempt to get memory < 4GiB.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
We need to give people a little more time to fix the broken drivers. Re-introduce this, but tied in properly with the 'iommu=pt' support this time. Change the config option name and make it default to 'no' too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
We do this twice, and it's about to get more complicated. This makes the code slightly clearer about what it's doing, too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
When we reattach a device to the si_domain (because it's been removed from a VM), we weren't calling domain_context_mapping() to actually tell the hardware about that. We should really put the call to domain_context_mapping() into domain_add_dev_info() -- we never call the latter without also doing the former, and we can keep the error paths simple that way. But that's a cleanup which can wait for 2.6.32 now.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
We should check iommu_dummy() _first_, because that means it's attached to an iommu that we've just disabled completely. At the moment, we might try to put the device into the identity mapping domain.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
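A sketch of the corrected ordering; the enclosing function name and the identity-mapping flag here are assumptions:

    static int iommu_no_mapping(struct pci_dev *pdev)
    {
            if (iommu_dummy(pdev))
                    return 1;    /* its IOMMU is disabled: never touch mappings */

            if (!iommu_identity_mapping)
                    return 0;    /* only now consider the identity-map domain   */

            /* ... decide based on identity-domain membership ... */
            return 0;
    }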
-
Committed by David Woodhouse:
The aligned_nrpages() function rounds up to the next VM page, but returns its result as a number of DMA pages. Purely theoretical except on IA64, which doesn't boot with VT-d right now anyway.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
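Consistent with that description, the helper is essentially this sketch: round the (offset, size) pair up to whole MM pages, then express the count in VT-d pages:

    static inline unsigned long aligned_nrpages(unsigned long host_addr, size_t size)
    {
            host_addr &= ~PAGE_MASK;    /* keep only the offset within the MM page */
            return PAGE_ALIGN(host_addr + size) >> VTD_PAGE_SHIFT;
    }

On x86, PAGE_SHIFT == VTD_PAGE_SHIFT == 12 and the distinction vanishes; on IA64 the two differ, hence "purely theoretical" for now.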
- 02 Jul 2009, 6 commits

Committed by David Woodhouse:
Check dma_pte_present() and only free the page if there _is_ one. Kind of surprising that there was no warning about this.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
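The guard, sketched with the driver's PTE helpers:

    /* Follow and free the child page table only if the PTE actually points at one. */
    if (dma_pte_present(pte)) {
            free_pgtable_page(phys_to_virt(dma_pte_addr(pte)));
            dma_clear_pte(pte);
    }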
-
Committed by David Woodhouse:
On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
> I also _really_ hate how you do
>
>     (unsigned long)pte >> VTD_PAGE_SHIFT ==
>     (unsigned long)first_pte >> VTD_PAGE_SHIFT

Kill this, in favour of just looking to see if the incremented pte pointer has 'wrapped' onto the next page. Which means we have to check it _after_ incrementing it, not before.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
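The replacement test is cheap because a PTE pointer that has just wrapped onto the next page-table page is page-aligned. A sketch, with the check done after the increment as the message insists:

    pte++;
    if (!((unsigned long)pte & (VTD_PAGE_SIZE - 1))) {
            /* We crossed into the next page-table page: flush the run of
             * PTEs we just wrote and start accumulating a new one. */
            domain_flush_cache(domain, first_pte,
                               (void *)pte - (void *)first_pte);
            first_pte = NULL;
    }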
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
This would have found the bug in i386 pci_unmap_addr() a long time ago. We shouldn't just silently return without doing anything.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Since we're using cmpxchg64() anyway (because that's the only way to do an atomic 64-bit store on i386), we might as well ditch the extra locking and just use cmpxchg64() to ensure that we don't add the page twice.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
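The lock-free population step, sketched; pte->val and the DMA_PTE_* flags follow the driver, the rest is illustrative:

    void *tmp_page = alloc_pgtable_page();
    u64 pteval;

    if (!tmp_page)
            return NULL;

    pteval = (virt_to_phys(tmp_page) & VTD_PAGE_MASK) | DMA_PTE_READ | DMA_PTE_WRITE;
    if (cmpxchg64(&pte->val, 0ULL, pteval))
            free_pgtable_page(tmp_page);    /* lost the race: someone beat us to it */
    else
            domain_flush_cache(domain, pte, sizeof(*pte));

The cmpxchg succeeds only if the PTE was still clear, so two CPUs racing to populate the same slot can never both install a page.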
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 30 Jun 2009, 5 commits

Committed by David Woodhouse:
As with other functions, batch the CPU data cache flushes and don't keep recalculating PTE addresses.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
The loop condition was wrong -- we should free a PMD only if its _entire_ range is within the range we're intending to clear. The early-termination condition was right, but not the loop.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Instead of calling domain_pfn_mapping() repeatedly with single or small numbers of pages, just pass the sglist in. It can optimise the number of cache flushes like domain_pfn_mapping() does, and gives a huge speedup for large scatterlists.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
- 29 Jun 2009, 10 commits

Committed by David Woodhouse:
There's no need for the separate iommu_alloc_iova() function, and certainly not for it to be global. Remove the underscores while we're at it.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
As with dma_pte_clear_range(), don't keep flushing a single PTE at a time. And also micro-optimise the setting of PTE values rather than using the helper functions to do all the masking.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
It's a bit silly to repeatedly call domain_flush_cache() for each PTE individually, as we clear it. Instead, batch them up and flush a whole range at a time. We might as well refrain from recalculating the PTE address from scratch each time round the loop too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
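The batching pattern, sketched (the PTE-lookup helper name is assumed):

    first_pte = pte = dma_pfn_level_pte(domain, start_pfn, 1);   /* assumed lookup */
    while (start_pfn <= last_pfn) {
            dma_clear_pte(pte);     /* clear in place, no per-PTE flush   */
            start_pfn++;
            pte++;                  /* walk, don't re-derive the address  */
            if (!((unsigned long)pte & (VTD_PAGE_SIZE - 1)))
                    break;          /* end of this page-table page        */
    }
    /* One cache flush covers every PTE cleared above. */
    domain_flush_cache(domain, first_pte, (void *)pte - (void *)first_pte);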
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
This is fairly broken anyway -- it doesn't take hotplug into account. We should probably be checking page_is_ram() instead.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Most of its callers are having to shift for themselves anyway, so we might as well do it in iommu_flush_iotlb_psi().
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
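After the change the helper takes a pfn and a page count and shifts internally; a sketch of the new entry point:

    static void iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
                                      unsigned long pfn, unsigned int pages)
    {
            unsigned int mask = ilog2(__roundup_pow_of_two(pages));
            u64 addr = (u64)pfn << VTD_PAGE_SHIFT;   /* the shift callers used to do */

            /* ... issue the page-selective IOTLB invalidation for (addr, mask) ... */
    }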
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse:
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>