1. 03 Jun, 2021 · 3 commits
  2. 09 Apr, 2021 · 2 commits
  3. 19 Feb, 2021 · 1 commit
    • iommu/vt-d: Do not use flush-queue when caching-mode is on · 208b2434
      Committed by Nadav Amit
      stable inclusion
      from stable-5.10.14
      commit c4e8fa21a30be4656c58c209038b8f4270bf972a
      bugzilla: 48051
      
      --------------------------------
      
      commit 29b32839 upstream.
      
      When an Intel IOMMU is virtualized and a physical device is passed
      through to the VM, changes to the virtual IOMMU need to be propagated
      to the physical IOMMU. The hypervisor therefore needs to monitor PTE
      mappings in the IOMMU page-tables. The Intel specifications provide a
      "caching-mode" capability that a virtual IOMMU uses to report that the
      IOMMU is virtualized and that a TLB flush is needed after mapping, to
      allow the hypervisor to propagate virtual IOMMU mappings to the
      physical IOMMU. To the best of my knowledge, no real physical IOMMU
      reports "caching-mode" as turned on.
      
      Synchronizing the virtual and the physical IOMMU tables is expensive if
      the hypervisor is unaware which PTEs have changed, as the hypervisor is
      required to walk all the virtualized tables and look for changes.
      Consequently, domain flushes are much more expensive than page-specific
      flushes on virtualized IOMMUs with passthrough devices. The kernel
      therefore exploited the "caching-mode" indication to avoid domain
      flushing and use page-specific flushing in virtualized environments. See
      commit 78d5f0f5 ("intel-iommu: Avoid global flushes with caching
      mode.")
      
      This behavior changed after commit 13cf0174 ("iommu/vt-d: Make use
      of iova deferred flushing"). Now, when batched TLB flushing is used (the
      default), full TLB domain flushes are performed frequently, requiring
      the hypervisor to perform expensive synchronization between the virtual
      TLB and the physical one.
      
      Getting batched TLB flushes to use page-specific invalidations again in
      such circumstances is not easy, since the TLB invalidation scheme
      assumes that "full" domain TLB flushes are performed for scalability.
      
      Disable batched TLB flushes when caching-mode is on, as the performance
      benefit from using batched TLB invalidations is likely to be much
      smaller than the overhead of the virtual-to-physical IOMMU page-tables
      synchronization.
      
      Fixes: 13cf0174 ("iommu/vt-d: Make use of iova deferred flushing")
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Lu Baolu <baolu.lu@linux.intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: stable@vger.kernel.org
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Link: https://lore.kernel.org/r/20210127175317.1600473-1-namit@vmware.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
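      A minimal sketch of the check this fix describes, for reference. It
      assumes the driver's existing cap_caching_mode() capability helper and
      the intel_iommu_strict knob that selects page-selective (strict)
      invalidation over deferred flushing; the hook point in
      intel_iommu_init() is paraphrased, not the verbatim upstream hunk:

          /* Force strict (page-selective) invalidation when an IOMMU
           * reports caching mode, i.e. when it is almost certainly a
           * virtual IOMMU with a hypervisor watching its page-tables. */
          for_each_active_iommu(iommu, drhd) {
                  if (!intel_iommu_strict && cap_caching_mode(iommu->cap)) {
                          pr_warn("IOMMU batching disabled due to virtualization\n");
                          intel_iommu_strict = 1;
                  }
          }

      With intel_iommu_strict set, each unmap flushes the IOTLB immediately
      and page-selectively instead of being batched into the periodic
      full-domain flushes that are so costly under caching mode.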
  4. 09 Feb, 2021 · 1 commit
  5. 28 Jan, 2021 · 3 commits
  6. 27 Jan, 2021 · 3 commits
  7. 12 Jan, 2021 · 1 commit
  8. 26 Nov, 2020 · 1 commit
  9. 19 Nov, 2020 · 2 commits
  10. 18 Nov, 2020 · 1 commit
  11. 16 Nov, 2020 · 1 commit
  12. 13 Nov, 2020 · 1 commit
    • iommu/vt-d: Cure VF irqdomain hickup · ff828729
      Committed by Thomas Gleixner
      The recent changes to store the MSI irqdomain pointer in struct device
      missed that Intel DMAR does not register virtual function devices.
      Because of that, a VF device gets the plain PCI-MSI domain assigned and
      then issues compat MSI messages, which get caught by the interrupt
      remapping unit.
      
      Cure that by inheriting the irq domain from the physical function
      device.
      
      Ideally the irqdomain would be associated with the bus, but DMAR can
      have multiple units and therefore multiple irqdomains on a single bus.
      The VF 'bus' could of course inherit the domain from the PF, but that
      would be yet another x86 oddity.
      
      Fixes: 85a8dfc5 ("iommm/vt-d: Store irq domain in struct device")
      Reported-by: Jason Gunthorpe <jgg@nvidia.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Link: https://lore.kernel.org/r/draft-87eekymlpz.fsf@nanos.tec.linutronix.de
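      The cure boils down to copying the PF's MSI irqdomain pointer to the
      VF when the VF shows up. A minimal sketch, assuming the generic
      dev_get_msi_domain()/dev_set_msi_domain() accessors and the
      pdev->physfn back-pointer; the placement in the DMAR PCI bus notifier
      is paraphrased:

          #include <linux/device.h>
          #include <linux/pci.h>

          /* Make a virtual function use the same MSI irqdomain as its
           * physical function, so its MSI messages are composed in
           * remappable format rather than plain PCI-MSI compat format. */
          static void vf_inherit_msi_domain(struct pci_dev *pdev)
          {
                  dev_set_msi_domain(&pdev->dev,
                                     dev_get_msi_domain(&pdev->physfn->dev));
          }

      In this sketch the helper would run from the DMAR PCI bus notifier on
      BUS_NOTIFY_ADD_DEVICE for devices with pdev->is_virtfn set.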
  13. 03 Nov, 2020 · 3 commits
  14. 02 Nov, 2020 · 1 commit
  15. 19 Oct, 2020 · 1 commit
  16. 07 Oct, 2020 · 1 commit
  17. 06 Oct, 2020 · 2 commits
  18. 01 Oct, 2020 · 3 commits
  19. 25 Sep, 2020 · 1 commit
    • dma-mapping: add a new dma_alloc_pages API · efa70f2f
      Committed by Christoph Hellwig
      This API is the equivalent of alloc_pages, except that the returned
      memory is guaranteed to be DMA addressable by the passed-in device. The
      implementation will also be used to provide a more sensible replacement
      for the DMA_ATTR_NON_CONSISTENT flag.
      
      Additionally, dma_alloc_noncoherent is switched over to use
      dma_alloc_pages as its backend.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
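      A hedged usage sketch of the new API as described above. The
      dma_alloc_pages()/dma_free_pages() signatures follow the commit that
      introduced them; the example_dma_buffer() wrapper, its device pointer,
      and the size are illustrative placeholders:

          #include <linux/dma-mapping.h>
          #include <linux/gfp.h>

          /* Allocate pages guaranteed to be DMA addressable by 'dev',
           * hand the bus address to the device, then release them. */
          static int example_dma_buffer(struct device *dev, size_t size)
          {
                  dma_addr_t dma_handle;
                  struct page *page;

                  page = dma_alloc_pages(dev, size, &dma_handle,
                                         DMA_BIDIRECTIONAL, GFP_KERNEL);
                  if (!page)
                          return -ENOMEM;

                  /* ... program the device with dma_handle; CPU access
                   * goes through page_address(page) ... */

                  dma_free_pages(dev, size, page, dma_handle,
                                 DMA_BIDIRECTIONAL);
                  return 0;
          }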
  20. 24 Sep, 2020 · 2 commits
  21. 18 Sep, 2020 · 3 commits
  22. 16 Sep, 2020 · 3 commits