1. 30 Jun 2009, 2 commits
  2. 29 Jun 2009, 26 commits
  3. 26 Jun 2009, 1 commit
  4. 24 Jun 2009, 1 commit
    •
      IOMMU Identity Mapping Support (drivers/pci/intel_iommu.c) · 2c2e2c38
      Committed by Fenghua Yu
      Identity mapping for IOMMU defines a single domain to 1:1 map all PCI
      devices to all usable memory.
      
      This reduces map/unmap overhead in the DMA APIs and improves IOMMU
      performance. On 10Gb network cards, Netperf shows no performance
      degradation compared to non-IOMMU performance.
      
      This method may lose some of the DMA remapping benefits, such as isolation.
      
      The patch sets up identity mapping for all PCI devices to all usable
      memory. In the DMA API there is then no overhead for maintaining page
      tables, invalidating the IOTLB, flushing caches, etc.
      
      32-bit DMA devices do not use the identity-mapping domain, so that
      memory beyond 4 GiB stays reachable through remapping (a sketch of this
      per-device decision follows the entry).
      
      With the kernel option iommu=pt, pass-through is tried first. If the
      hardware supports pass-through, the IOMMU uses it; if pass-through is
      not supported in hardware or fails for any reason, the IOMMU falls back
      to identity mapping.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      2c2e2c38
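      A minimal, self-contained sketch of the per-device policy described in
      the entry above: a device shares the single 1:1 identity domain only if
      its DMA mask covers all usable memory, otherwise it keeps a private
      remapping domain. The struct and function names below are illustrative
      stand-ins, not actual intel-iommu symbols.

      #include <stdint.h>
      #include <stdio.h>

      /* Stand-in for a PCI device; only the DMA mask matters here. */
      struct pci_dev_sketch {
              uint64_t dma_mask;      /* highest address the device can DMA to */
      };

      /* A device may join the shared identity (1:1) domain only if it can
       * address every usable physical page; a 32-bit device keeps its own
       * remapping domain so buffers above 4 GiB remain reachable. */
      static int use_identity_domain(const struct pci_dev_sketch *dev,
                                     uint64_t top_of_usable_memory)
      {
              return dev->dma_mask >= top_of_usable_memory;
      }

      int main(void)
      {
              struct pci_dev_sketch nic64 = { .dma_mask = ~0ULL };         /* 64-bit DMA */
              struct pci_dev_sketch dev32 = { .dma_mask = 0xFFFFFFFFULL }; /* 32-bit DMA */
              uint64_t top = 8ULL << 30;                                   /* 8 GiB of RAM */

              printf("64-bit NIC -> identity map: %d\n", use_identity_domain(&nic64, top));
              printf("32-bit dev -> identity map: %d\n", use_identity_domain(&dev32, top));
              return 0;
      }
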
  5. 18 May 2009, 2 commits
  6. 11 May 2009, 5 commits
    •
      intel-iommu: PAE memory corruption fix · fd18de50
      Committed by David Woodhouse
      PAGE_MASK is 0xFFFFF000 on i386 -- even with PAE.
      
      So it's not sufficient to ensure that you use phys_addr_t or uint64_t
      everywhere you handle physical addresses -- you also have to avoid using
      the construct 'addr & PAGE_MASK', because that will strip the high 32
      bits of the address.
      
      This patch avoids that problem by using PHYSICAL_PAGE_MASK instead of
      PAGE_MASK where appropriate. It leaves '& PAGE_MASK' in a few instances
      that don't matter -- where it's being used on the virtual bus addresses
      we're dishing out, which are 32-bit anyway.
      
      Since PHYSICAL_PAGE_MASK is not present on other architectures, we have
      to define it (to PAGE_MASK) if it's not already defined.
      
      Maybe it would be better just to fix PAGE_MASK for i386/PAE?
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd18de50
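      A small stand-alone demonstration of the truncation described above. On
      i386 'unsigned long' is 32 bits, so the i386 PAGE_MASK value is modelled
      with uint32_t to make the behaviour reproducible on any host;
      PHYSICAL_PAGE_MASK mirrors the 64-bit-safe mask the patch introduces.
      The program itself is illustrative, not kernel code.

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT          12
      /* i386 PAGE_MASK: 'unsigned long' is 32 bits there, so the mask is
       * only 0xFFFFF000 even with PAE. */
      #define PAGE_MASK_I386      ((uint32_t)~((1U << PAGE_SHIFT) - 1))
      /* 64-bit-safe mask, as used by the fix. */
      #define PHYSICAL_PAGE_MASK  (~(((uint64_t)1 << PAGE_SHIFT) - 1))

      int main(void)
      {
              uint64_t phys = 0x1F5678ABCULL;   /* a physical address above 4 GiB */

              /* The 32-bit mask is zero-extended before the AND, so the high
               * bits of the PAE address are silently stripped. */
              printf("addr & PAGE_MASK (i386)   = 0x%llx\n",
                     (unsigned long long)(phys & PAGE_MASK_I386));
              printf("addr & PHYSICAL_PAGE_MASK = 0x%llx\n",
                     (unsigned long long)(phys & PHYSICAL_PAGE_MASK));
              return 0;
      }
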
    •
      c416daa9
    •
      intel-iommu: Fix tiny theoretical race in write-buffer flush. · 462b60f6
      Committed by David Woodhouse
      In iommu_flush_write_buffer() we read iommu->gcmd before taking the
      register_lock, and then we mask in the WBF bit and write it to the
      register.
      
      There is a tiny chance that something else could have _changed_
      iommu->gcmd before we take the lock, but after we read it. So we could
      be undoing that change.
      
      This could never actually have happened in practice, since nothing else
      changes that register at runtime -- aside from the write-buffer flush
      it is only ever touched at startup for enabling translation, etc.
      
      But worth fixing anyway.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      462b60f6
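      A simplified sketch of the fixed ordering: the cached gcmd value is read
      only after the register lock is held, so a concurrent update cannot be
      silently undone. A pthread mutex stands in for the IOMMU's
      register_lock and the hardware write is stubbed out; apart from the
      DMA_GCMD_WBF bit, the names are illustrative.

      #include <pthread.h>
      #include <stdint.h>

      #define DMA_GCMD_WBF  (1U << 27)        /* write-buffer flush command bit */

      struct iommu_sketch {
              pthread_mutex_t register_lock;
              uint32_t gcmd;                  /* software copy of the global command register */
      };

      /* Stub standing in for writel() to the hardware GCMD register. */
      static void write_gcmd(struct iommu_sketch *iommu, uint32_t val)
      {
              (void)iommu;
              (void)val;
      }

      /* Before the fix, iommu->gcmd was read before register_lock was taken,
       * so a concurrent change could be masked out and then overwritten.
       * Here the read-modify-write happens entirely under the lock. */
      static void iommu_flush_write_buffer_sketch(struct iommu_sketch *iommu)
      {
              uint32_t val;

              pthread_mutex_lock(&iommu->register_lock);
              val = iommu->gcmd | DMA_GCMD_WBF;
              write_gcmd(iommu, val);
              /* (the real code also waits here for the flush to complete) */
              pthread_mutex_unlock(&iommu->register_lock);
      }

      int main(void)
      {
              struct iommu_sketch iommu = {
                      .register_lock = PTHREAD_MUTEX_INITIALIZER,
                      .gcmd = 0,
              };

              iommu_flush_write_buffer_sketch(&iommu);
              return 0;
      }
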
    •
      intel-iommu: Clean up handling of "caching mode" vs. IOTLB flushing. · 1f0ef2aa
      Committed by David Woodhouse
      As we just did for context cache flushing, clean up the logic around
      whether we need to flush the iotlb or just the write-buffer, depending
      on caching mode.
      
      Fix the same bug in qi_flush_iotlb() that qi_flush_context() had -- it
      isn't supposed to be returning an error; it's supposed to be returning a
      flag which triggers a write-buffer flush.
      
      Remove some superfluous conditional write-buffer flushes, which could
      never have happened because they were not for non-present-to-present
      mapping changes anyway.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      1f0ef2aa
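      A hedged sketch of the decision this cleanup centralises: after a
      non-present entry becomes present, caching-mode hardware needs an IOTLB
      invalidation, while other hardware at most needs a write-buffer flush.
      The function pointers and type names below are stand-ins, not the real
      intel-iommu API.

      #include <stdbool.h>
      #include <stdio.h>

      struct iommu_caps_sketch {
              bool caching_mode;      /* CM bit: hardware may cache non-present entries */
              bool rwbf_required;     /* RWBF bit: write-buffer flush needed after updates */
      };

      /* Called after installing a mapping that was previously non-present. */
      static void flush_after_map(const struct iommu_caps_sketch *cap,
                                  void (*flush_iotlb)(void),
                                  void (*flush_write_buffer)(void))
      {
              if (cap->caching_mode)
                      flush_iotlb();              /* a stale "not present" IOTLB entry is possible */
              else if (cap->rwbf_required)
                      flush_write_buffer();       /* only the write buffer needs pushing out */
      }

      static void fake_iotlb_flush(void)        { puts("iotlb flush"); }
      static void fake_write_buffer_flush(void) { puts("write-buffer flush"); }

      int main(void)
      {
              struct iommu_caps_sketch caching = { .caching_mode = true,  .rwbf_required = true };
              struct iommu_caps_sketch plain   = { .caching_mode = false, .rwbf_required = true };

              flush_after_map(&caching, fake_iotlb_flush, fake_write_buffer_flush);
              flush_after_map(&plain,   fake_iotlb_flush, fake_write_buffer_flush);
              return 0;
      }
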
    •
      intel-iommu: Clean up handling of "caching mode" vs. context flushing. · 4c25a2c1
      Committed by David Woodhouse
      It really doesn't make a lot of sense to have some of the logic to
      handle caching vs. non-caching mode duplicated in qi_flush_context() and
      __iommu_flush_context(), while the return value indicates whether the
      caller should take other action which depends on the same thing.
      
      Especially since qi_flush_context() thought it was returning something
      entirely different anyway.
      
      This patch makes qi_flush_context() and __iommu_flush_context() both
      return void, removes the 'non_present_entry_flush' argument and makes
      the only call site which _set_ that argument to 1 do the right thing.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      4c25a2c1
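      An abbreviated before/after view of the interface change described
      above; the parameter lists are trimmed and the '_old' suffix is invented
      for contrast, so treat these as sketched prototypes rather than the
      exact ones.

      /* Forward declaration only; the real structure lives in the driver. */
      struct intel_iommu;

      /* Before: the flush primitive took non_present_entry_flush and returned
       * a value the caller had to interpret, duplicating the caching-mode
       * policy both inside and outside the function. */
      int __iommu_flush_context_old(struct intel_iommu *iommu,
                                    /* ...source-id, domain-id, granularity... */
                                    int non_present_entry_flush);

      /* After: the primitive just performs the flush. The single call site
       * that used to pass non_present_entry_flush=1 now checks the caching
       * mode capability itself and issues a write-buffer flush when needed. */
      void __iommu_flush_context(struct intel_iommu *iommu
                                 /* ...source-id, domain-id, granularity... */);
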
  7. 10 May 2009, 1 commit
  8. 01 May 2009, 1 commit
  9. 29 Apr 2009, 1 commit
    •
      Intel IOMMU Pass Through Support · 4ed0d3e6
      Committed by Fenghua Yu
      The patch adds the kernel parameter intel_iommu=pt to set up
      pass-through mode in the context mapping entry. This disables DMAR in
      the Linux kernel, but KVM still runs on VT-d and interrupt remapping
      still works.
      
      In this mode, the kernel uses swiotlb for DMA API functions, while the
      other VT-d functionality stays enabled for KVM. KVM always uses
      multi-level translation page tables in VT-d. By default, pass-through
      mode is disabled in the kernel.
      
      This is useful when people do not want to enable VT-d DMAR in the
      kernel but still want to use KVM and interrupt remapping, for reasons
      such as DMAR performance concerns or debugging.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Weidong Han <weidong@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      4ed0d3e6
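      A self-contained sketch of how a comma-separated intel_iommu= option
      string could be scanned for the new "pt" keyword; the parsing loop and
      variable handling below are illustrative, not the kernel's actual
      option parser.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      static bool iommu_pass_through;   /* set by "pt": context entries use pass-through */

      /* Scan a comma-separated option string such as "igfx_off,pt". */
      static void intel_iommu_setup_sketch(const char *str)
      {
              while (str && *str) {
                      if (!strncmp(str, "pt", 2))
                              iommu_pass_through = true;
                      str = strchr(str, ',');
                      if (str)
                              str++;          /* step past the comma to the next token */
              }
      }

      int main(void)
      {
              intel_iommu_setup_sketch("igfx_off,pt");
              printf("pass-through enabled: %d\n", iommu_pass_through);
              return 0;
      }
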