- 02 July 2009, 6 commits
-
Committed by David Woodhouse
Check dma_pte_present() and only free the page if there _is_ one. Kind of surprising that there was no warning about this. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
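A minimal user-space sketch of the pattern this fix describes: check that an entry is actually present before freeing what it points to. The struct layout, the "present" bits and the free helper are simplified stand-ins for the driver's dma_pte handling, not the actual intel-iommu code.

#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for the driver's struct dma_pte: one 64-bit entry. */
struct dma_pte {
	uint64_t val;
};

#define VTD_PAGE_SHIFT	12
#define VTD_PAGE_MASK	(~((1ULL << VTD_PAGE_SHIFT) - 1))

/* An entry is "present" when either of its read/write bits is set. */
static int dma_pte_present(struct dma_pte *pte)
{
	return (pte->val & 3) != 0;
}

static void *dma_pte_page(struct dma_pte *pte)
{
	return (void *)(uintptr_t)(pte->val & VTD_PAGE_MASK);
}

/*
 * The fix described above: only free the next-level page if the entry
 * actually points at one, instead of freeing unconditionally.
 */
static void free_next_level(struct dma_pte *pte)
{
	if (dma_pte_present(pte)) {
		free(dma_pte_page(pte));
		pte->val = 0;
	}
}

int main(void)
{
	struct dma_pte present = { 0 }, empty = { 0 };
	void *page = aligned_alloc(1 << VTD_PAGE_SHIFT, 1 << VTD_PAGE_SHIFT);

	if (!page)
		return 1;

	/* Pretend one entry maps a (page-aligned) next-level table. */
	present.val = (uint64_t)(uintptr_t)page | 3;

	free_next_level(&present);	/* frees the page */
	free_next_level(&empty);	/* does nothing: not present */
	return 0;
}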
-
Committed by David Woodhouse
On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
> I also _really_ hate how you do
>
>     (unsigned long)pte >> VTD_PAGE_SHIFT ==
>     (unsigned long)first_pte >> VTD_PAGE_SHIFT
Kill this, in favour of just looking to see if the incremented pte pointer has 'wrapped' onto the next page. Which means we have to check it _after_ incrementing it, not before. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
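A small self-contained sketch of the "check after incrementing" idea: since page-table entries are 8 bytes and page-table pages are 4KiB, the incremented pointer has wrapped onto a new page exactly when its offset within the page becomes zero. The array and loop below are illustrative, not the driver's clear-range loop.

#include <stdint.h>
#include <stdio.h>

struct dma_pte {
	uint64_t val;
};

#define VTD_PAGE_SHIFT	12
#define VTD_PAGE_SIZE	(1UL << VTD_PAGE_SHIFT)

/*
 * After "pte++", the pointer has crossed onto a new page-table page
 * exactly when its offset within the page is zero.  Checking this after
 * the increment replaces the old "same page as first_pte?" comparison.
 */
static int wrapped_to_next_page(struct dma_pte *pte)
{
	return ((uintptr_t)pte & (VTD_PAGE_SIZE - 1)) == 0;
}

int main(void)
{
	/* 512 eight-byte entries fill one 4KiB page-table page. */
	static struct dma_pte table[512] __attribute__((aligned(VTD_PAGE_SIZE)));
	struct dma_pte *pte = table;
	unsigned long entries = 0;

	do {
		pte->val = 0;	/* "clear" the entry */
		entries++;
		pte++;		/* increment first ... */
	} while (!wrapped_to_next_page(pte));	/* ... then test for the wrap */

	printf("cleared %lu entries before wrapping\n", entries);	/* 512 */
	return 0;
}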
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
This would have found the bug in i386 pci_unmap_addr() a long time ago. We shouldn't just silently return without doing anything. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
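A toy illustration of the behavioural change described here: warn loudly instead of silently returning when an unmap request doesn't match any mapping. The WARN_ON macro, the iova lookup and the unmap function are user-space stand-ins, not the driver's code.

#include <stdio.h>
#include <stdint.h>

/* User-space stand-in modelled on the kernel's WARN_ON(). */
#define WARN_ON(cond) \
	((cond) ? (fprintf(stderr, "WARNING: %s:%d: %s\n", \
			   __FILE__, __LINE__, #cond), 1) : 0)

struct iova { uint64_t pfn_lo, pfn_hi; };

static struct iova *find_iova(uint64_t pfn)
{
	(void)pfn;
	return NULL;	/* pretend the address was never mapped */
}

/*
 * Unmap path: instead of silently returning when the IOVA isn't found,
 * complain loudly so bogus unmap calls (like the old i386 pci_unmap_addr()
 * bug mentioned above) show up immediately.
 */
static void unmap(uint64_t pfn)
{
	struct iova *iova = find_iova(pfn);

	if (WARN_ON(!iova))
		return;

	/* ... tear down the mapping ... */
}

int main(void)
{
	unmap(0x1234);	/* triggers the warning */
	return 0;
}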
-
Committed by David Woodhouse
Since we're using cmpxchg64() anyway (because that's the only way to do an atomic 64-bit store on i386), we might as well ditch the extra locking and just use cmpxchg64() to ensure that we don't add the page twice. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
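A user-space model of the lockless insert this commit describes: a single compare-and-swap both detects a racing CPU and publishes the new page-table page, so no spinlock is needed. C11 atomic_compare_exchange_strong() stands in for the kernel's cmpxchg64(); the structure and bit layout are simplified.

#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

/* Entry value is a 64-bit word, updated atomically. */
struct dma_pte {
	_Atomic uint64_t val;
};

/*
 * Install a newly allocated next-level table into *pte, but only if the
 * entry is still empty.  If another CPU beat us to it, free our page and
 * keep whatever is already there -- the compare-and-swap both detects the
 * race and publishes the new entry, so no extra locking is required.
 */
static void install_pgtable_page(struct dma_pte *pte)
{
	void *new_page = aligned_alloc(4096, 4096);
	uint64_t expected = 0;
	uint64_t newval;

	if (!new_page)
		return;
	newval = (uint64_t)(uintptr_t)new_page | 3;	/* R/W bits */

	if (!atomic_compare_exchange_strong(&pte->val, &expected, newval)) {
		/* Lost the race: somebody already added the page. */
		free(new_page);
	}
}

int main(void)
{
	struct dma_pte pte = { 0 };

	install_pgtable_page(&pte);	/* installs our page */
	install_pgtable_page(&pte);	/* loses the "race", frees its page */

	free((void *)(uintptr_t)(atomic_load(&pte.val) & ~0xfffULL));
	return 0;
}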
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 30 June 2009, 5 commits
-
Committed by David Woodhouse
As with other functions, batch the CPU data cache flushes and don't keep recalculating PTE addresses. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
The loop condition was wrong -- we should free a PMD only if its _entire_ range is within the range we're intending to clear. The early-termination condition was right, but not the loop. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
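A sketch of the corrected containment test, assuming an illustrative 512-entries-per-page geometry: a directory page is freed only when the span it covers lies entirely inside the range being cleared, not merely when the two ranges overlap.

#include <stdint.h>
#include <stdio.h>

#define PTES_PER_PAGE	512UL

/*
 * A page-directory page covering [pd_start, pd_end] may only be freed
 * when that whole span lies inside the range [start_pfn, last_pfn]
 * being cleared.
 */
static int can_free_directory(uint64_t pd_index,
			      uint64_t start_pfn, uint64_t last_pfn)
{
	uint64_t pd_start = pd_index * PTES_PER_PAGE;
	uint64_t pd_end = pd_start + PTES_PER_PAGE - 1;

	/* Full containment, not mere overlap. */
	return start_pfn <= pd_start && pd_end <= last_pfn;
}

int main(void)
{
	/* Clearing pfns 100..1023: directory 0 (0..511) straddles the start
	 * of the range and must be kept; directory 1 (512..1023) is fully
	 * covered and can go. */
	printf("dir 0: %s\n", can_free_directory(0, 100, 1023) ? "free" : "keep");
	printf("dir 1: %s\n", can_free_directory(1, 100, 1023) ? "free" : "keep");
	return 0;
}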
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Instead of calling domain_pfn_mapping() repeatedly with single or small numbers of pages, just pass the sglist in. It can optimise the number of cache flushes like domain_pfn_mapping() does, and gives a huge speedup for large scatterlists. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
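A simplified sketch of mapping a whole scatterlist in one pass rather than one call per chunk: each chunk just appends PTEs, and one cache flush covers the whole run. The sg_chunk type and the map_sg()/flush_cache_range() names are invented for the example and are not the driver's.

#include <stdint.h>
#include <stdio.h>

/* Toy scatterlist element: a run of physically contiguous pages. */
struct sg_chunk {
	uint64_t phys_pfn;
	unsigned long nr_pages;
};

static uint64_t pte_table[1024];

static void flush_cache_range(void *start, void *end)
{
	printf("flush %zu bytes\n", (size_t)((char *)end - (char *)start));
}

/*
 * Map a whole scatterlist starting at IOVA pfn 'iov_pfn' in one pass:
 * every chunk appends PTEs to the same run, and the CPU cache flush
 * happens once for the whole run instead of once per chunk (or per page).
 */
static void map_sg(uint64_t iov_pfn, struct sg_chunk *sg, int nents)
{
	uint64_t *first_pte = &pte_table[iov_pfn];
	uint64_t *pte = first_pte;
	int i;
	unsigned long j;

	for (i = 0; i < nents; i++)
		for (j = 0; j < sg[i].nr_pages; j++)
			*pte++ = ((sg[i].phys_pfn + j) << 12) | 3;

	flush_cache_range(first_pte, pte);	/* one flush for everything */
}

int main(void)
{
	struct sg_chunk sg[] = {
		{ .phys_pfn = 0x100, .nr_pages = 4 },
		{ .phys_pfn = 0x800, .nr_pages = 2 },
	};

	map_sg(16, sg, 2);
	return 0;
}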
-
- 29 June 2009, 26 commits
-
Committed by David Woodhouse
There's no need for the separate iommu_alloc_iova() function, and certainly not for it to be global. Remove the underscores while we're at it. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
As with dma_pte_clear_range(), don't keep flushing a single PTE at a time. And also micro-optimise the setting of PTE values rather than using the helper functions to do all the masking. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
It's a bit silly to repeatedly call domain_flush_cache() for each PTE individually, as we clear it. Instead, batch them up and flush a whole range at a time. We might as well refrain from recalculating the PTE address from scratch each time round the loop too. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
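A minimal sketch of the batching described here: keep a running PTE pointer instead of recomputing the address each iteration, and issue one cache flush for the whole cleared run instead of one per entry. The table and the flush stub are stand-ins for the driver's structures.

#include <stdint.h>
#include <stdio.h>

static uint64_t pte_table[512];

/* Stand-in for domain_flush_cache(): flush one contiguous CPU-cache range. */
static void domain_flush_cache(void *start, size_t len)
{
	printf("flush %zu bytes\n", len);
}

/*
 * Clear a run of PTEs and flush the CPU data cache once for the whole run.
 * The PTE address is advanced with the loop rather than recalculated from
 * the IOVA each time round.
 */
static void clear_range(unsigned long first_index, unsigned long count)
{
	uint64_t *first_pte = &pte_table[first_index];
	uint64_t *pte = first_pte;
	unsigned long i;

	for (i = 0; i < count; i++)
		*pte++ = 0;

	domain_flush_cache(first_pte, (char *)pte - (char *)first_pte);
}

int main(void)
{
	clear_range(8, 64);	/* one flush covering 64 entries */
	return 0;
}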
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
This is fairly broken anyway -- it doesn't take hotplug into account. We should probably be checking page_is_ram() instead. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Most of its callers are having to shift for themselves anyway, so we might as well do it in iommu_flush_iotlb_psi(). Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
... and use it in the trivial cases; the other callers want individual (and bisectable) attention, since I screwed them up the first time... Make the BUG_ON() happen on too-large virtual address rather than physical address, too. That's the one we care about. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
No more masking and alignment; just use pfns. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Use an unaligned address for domain->max_addr. That algorithm isn't ideal anyway -- we should probably just look at the last iova in the tree. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
With some cleanup of intel_unmap_page(), intel_unmap_sg() and vm_domain_exit() to no longer play with 64-bit addresses. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Noting that this is now an _inclusive_ range. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
We're shifting the inputs for now, but that'll change... Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
Add some helpers for converting between VT-d and normal system pfns, since system pages can be larger than VT-d pages. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
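A sketch of what such conversion helpers look like, assuming an example configuration with 16KiB system pages and the fixed 4KiB VT-d page size; the helper names and shift values here are illustrative rather than quoted from the driver.

#include <stdio.h>

/*
 * System pages (PAGE_SHIFT) can be larger than VT-d pages
 * (VTD_PAGE_SHIFT, always 4KiB), e.g. with 16KiB system pages.
 * The helpers just shift by the difference.
 */
#define PAGE_SHIFT	14	/* example: 16KiB system pages */
#define VTD_PAGE_SHIFT	12	/* VT-d page-table granule is 4KiB */

static unsigned long mm_to_dma_pfn(unsigned long mm_pfn)
{
	return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
}

static unsigned long dma_to_mm_pfn(unsigned long dma_pfn)
{
	return dma_pfn >> (PAGE_SHIFT - VTD_PAGE_SHIFT);
}

int main(void)
{
	/* One 16KiB system page at pfn 5 covers four 4KiB VT-d pages. */
	printf("system pfn 5 -> dma pfn %lu\n", mm_to_dma_pfn(5));	/* 20 */
	printf("dma pfn 23 -> system pfn %lu\n", dma_to_mm_pfn(23));	/* 5 */
	return 0;
}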
-
Committed by David Woodhouse
There's no need for the GFX workaround now that we have 'iommu=pt' for the cases where people really care about performance. There's no need to have a special case for just one type of device. This also speeds up the iommu=pt path and reduces memory usage by setting up the si_domain _once_ and then using it for all devices, rather than giving each device its own private page tables. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by David Woodhouse
We'll want to do this to a _domain_ (the si_domain) rather than a PCI device. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
Committed by Yu Zhao
In caching mode, domain ID 0 is reserved for non-present-to-present mapping flushes. The device IOTLB doesn't need to be flushed in this case. Previously we were avoiding the flush for domain zero even if the IOMMU wasn't in caching mode and domain zero wasn't special. Signed-off-by: Yu Zhao <yu.zhao@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
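A compact illustration of the corrected condition, with an invented flush stub: the device-IOTLB flush is skipped for domain 0 only when the IOMMU is in caching mode; otherwise domain 0 is an ordinary domain and is flushed like any other.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative stub; the real code invokes the device-IOTLB (ATS)
 * invalidation routine here. */
static void flush_device_iotlb(unsigned int did)
{
	printf("device IOTLB flush for domain %u\n", did);
}

/*
 * Corrected condition: domain ID 0 is only special when the IOMMU is in
 * caching mode (where it is reserved for non-present-to-present flushes).
 * Without caching mode, domain 0 still needs its device IOTLB flushed.
 */
static void iotlb_flush(unsigned int did, bool caching_mode)
{
	if (!caching_mode || did != 0)
		flush_device_iotlb(did);
}

int main(void)
{
	iotlb_flush(0, true);	/* skipped: reserved domain in caching mode */
	iotlb_flush(0, false);	/* flushed: domain 0 is not special here */
	iotlb_flush(5, true);	/* flushed */
	return 0;
}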
-
- 26 June 2009, 1 commit
-
Committed by Chris Wright
Drop the e820 scanning and use the existing function for finding valid RAM regions to add to the 1:1 mapping. Signed-off-by: Chris Wright <chrisw@redhat.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 24 June 2009, 1 commit
-
Committed by Fenghua Yu
Identity mapping for the IOMMU defines a single domain that 1:1 maps all PCI devices to all usable memory. This reduces map/unmap overhead in the DMA API and improves IOMMU performance. On 10Gb network cards, Netperf shows no performance degradation compared to non-IOMMU performance. This method may lose some of the DMA remapping benefits, such as isolation. The patch sets up identity mapping for all PCI devices to all usable memory. In the DMA API, there is no overhead to maintain page tables, invalidate the IOTLB, flush caches, etc. 32-bit DMA devices don't use the identity mapping domain, so that they can still reach memory beyond 4GiB. With the kernel option iommu=pt, pass-through is tried first. If pass-through succeeds, the IOMMU goes into pass-through mode. If pass-through is not supported in hardware, or fails for whatever reason, the IOMMU falls back to identity mapping. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 18 May 2009, 1 commit
-
Committed by Yu Zhao
Enable the device IOTLB (i.e. ATS) for both the bare-metal and KVM environments. Signed-off-by: Yu Zhao <yu.zhao@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-