1. 15 Oct 2019 (3 commits)
    • iommu/dma-iommu: Handle deferred devices · 795bbbb9
      Committed by Tom Murphy
      Handle devices that defer their attach to the IOMMU in the dma-iommu API.
      Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
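      A minimal sketch of what handling a deferred attach means in practice. The helper below and its attach_deferred flag are illustrative, not the actual dma-iommu internals; only iommu_attach_device() is the real IOMMU API:

      #include <linux/device.h>
      #include <linux/iommu.h>

      /* Sketch only: before the DMA path creates a mapping for a device whose
       * IOMMU attach was postponed (e.g. in a kdump kernel), attach it now. */
      static int maybe_do_deferred_attach(struct device *dev,
                                          struct iommu_domain *domain,
                                          bool attach_deferred)
      {
              if (attach_deferred)
                      return iommu_attach_device(domain, dev);

              return 0;
      }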
    • iommu: Add gfp parameter to iommu_ops::map · 781ca2de
      Committed by Tom Murphy
      Add a gfp_t parameter to the iommu_ops::map function.
      Remove the needless locking in the AMD iommu driver.

      The iommu_ops::map function (or the iommu_map function which calls it)
      was always supposed to be sleepable (according to Joerg's comment in
      this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
      should probably have had a might_sleep() since it was written. However,
      the dma-iommu api can currently call iommu_map in an atomic context,
      which it shouldn't do. This doesn't cause any problems because every
      iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its
      iommu_ops::map function. But doing so wastes the memory allocator's
      atomic pools.
      Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
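      A sketch of the interface change described above: the map callback gains a gfp_t, so a driver can allocate its page-table pages with the caller's flags (GFP_KERNEL from sleepable paths) instead of hard-coding GFP_ATOMIC. The example_ names are illustrative; only the parameter list mirrors the change.

      #include <linux/gfp.h>
      #include <linux/iommu.h>

      /* Driver map callback after the change: the caller's gfp is passed
       * through to page-table allocations instead of a fixed GFP_ATOMIC. */
      static int example_iommu_map(struct iommu_domain *domain, unsigned long iova,
                                   phys_addr_t paddr, size_t size, int prot,
                                   gfp_t gfp)
      {
              u64 *pt_page = (u64 *)get_zeroed_page(gfp);

              if (!pt_page)
                      return -ENOMEM;

              /* ... walk/extend the page table and install the mapping ... */
              free_page((unsigned long)pt_page);   /* sketch: page not kept here */
              return 0;
      }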
    • iommu/amd: Remove unnecessary locking from AMD iommu driver · 37ec8eb8
      Committed by Tom Murphy
      With or without locking, it doesn't make sense for two writers to write
      to the same IOVA range at the same time. Even with locking there is
      still a race over which writer takes the lock first, so we can't be
      sure what the result will be. Locking makes the outcome more sane (the
      last writer's mapping wins), but it is still useless because we can't
      be sure which writer takes the lock last. Having two writers write to
      the same IOVA range at the same time is a fundamentally broken design.

      So we can remove the locking and work on the assumption that no two
      writers will ever write to the same IOVA range at the same time.
      
      The only exception is when we have to allocate a middle page in the page
      tables: a middle page can cover more than just the IOVA range a single
      writer has been allocated. However, this isn't an issue in the AMD driver
      because it can atomically allocate middle pages using cmpxchg64().
      Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
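      A minimal sketch of the lock-free middle-page install the last paragraph refers to; the helper name and the EXAMPLE_PTE_PRESENT bit are illustrative, not the AMD driver's actual code:

      #include <linux/atomic.h>
      #include <linux/gfp.h>
      #include <linux/mm.h>
      #include <asm/io.h>

      #define EXAMPLE_PTE_PRESENT 0x1ULL   /* illustrative "present" bit */

      /* Publish a new intermediate page-table page with cmpxchg64(); if
       * another CPU won the race, free ours and walk into its page. */
      static u64 *install_middle_page(u64 *slot, gfp_t gfp)
      {
              u64 *page = (u64 *)get_zeroed_page(gfp);
              u64 newval, oldval;

              if (!page)
                      return NULL;

              newval = virt_to_phys(page) | EXAMPLE_PTE_PRESENT;
              oldval = cmpxchg64(slot, 0ULL, newval);
              if (oldval != 0ULL) {
                      /* Lost the race: another writer's page is already in place. */
                      free_page((unsigned long)page);
                      return (u64 *)phys_to_virt(oldval & PAGE_MASK);
              }

              return page;
      }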
  2. 28 Sep 2019 (6 commits)
  3. 24 Sep 2019 (5 commits)
  4. 14 Sep 2019 (1 commit)
  5. 11 Sep 2019 (6 commits)
  6. 06 Sep 2019 (3 commits)
    • iommu/omap: Mark pm functions __maybe_unused · 96088a20
      Committed by Arnd Bergmann
      The runtime PM functions are unused when CONFIG_PM is disabled:
      
      drivers/iommu/omap-iommu.c:1022:12: error: unused function 'omap_iommu_runtime_suspend' [-Werror,-Wunused-function]
      static int omap_iommu_runtime_suspend(struct device *dev)
      drivers/iommu/omap-iommu.c:1064:12: error: unused function 'omap_iommu_runtime_resume' [-Werror,-Wunused-function]
      static int omap_iommu_runtime_resume(struct device *dev)
      
      Mark them as __maybe_unused to let gcc silently drop them
      instead of warning.
      
      Fixes: db8918f6 ("iommu/omap: streamline enable/disable through runtime pm callbacks")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Suman Anna <s-anna@ti.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
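      The usual shape of this kind of fix, sketched with illustrative example_ names: with CONFIG_PM disabled, SET_RUNTIME_PM_OPS() compiles to nothing, so the callbacks would otherwise trigger -Wunused-function.

      #include <linux/device.h>
      #include <linux/pm.h>
      #include <linux/pm_runtime.h>

      static int __maybe_unused example_runtime_suspend(struct device *dev)
      {
              /* power the device down */
              return 0;
      }

      static int __maybe_unused example_runtime_resume(struct device *dev)
      {
              /* power the device back up */
              return 0;
      }

      /* When CONFIG_PM=n these entries vanish; __maybe_unused keeps the
       * then-unreferenced callbacks from warning. */
      static const struct dev_pm_ops example_pm_ops = {
              SET_RUNTIME_PM_OPS(example_runtime_suspend,
                                 example_runtime_resume, NULL)
      };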
    • iommu/amd: Fix race in increase_address_space() · 754265bc
      Committed by Joerg Roedel
      After the conversion to lock-less dma-api calls, the
      increase_address_space() function can be invoked without any
      locking. Multiple CPUs could potentially race to increase
      the address space, leading to invalid domain->mode settings
      and invalid page tables. This has been happening in the wild
      under high IO load and memory pressure.

      Fix the race by locking this operation. The function is
      called infrequently, so this does not re-introduce a
      performance regression into the dma-api path.
      Reported-by: Qian Cai <cai@lca.pw>
      Fixes: 256e4621 ('iommu/amd: Make use of the generic IOVA allocator')
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
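      A hedged sketch of the pattern this fix uses: allocate outside the lock, then re-check and publish under a spinlock so concurrent callers cannot both bump domain->mode. Structure and field names are illustrative, not the driver's exact code.

      #include <linux/gfp.h>
      #include <linux/spinlock.h>
      #include <asm/io.h>

      struct example_domain {
              spinlock_t lock;      /* guards mode and pt_root updates */
              int mode;             /* current page-table depth */
              u64 *pt_root;         /* top-level page table */
      };

      static bool example_increase_address_space(struct example_domain *dom,
                                                 gfp_t gfp)
      {
              u64 *new_root = (u64 *)get_zeroed_page(gfp);
              unsigned long flags;
              bool grown = false;

              if (!new_root)
                      return false;

              spin_lock_irqsave(&dom->lock, flags);
              if (dom->mode < 6) {          /* re-check under the lock */
                      /* point the new root at the old table, then publish */
                      new_root[0] = virt_to_phys(dom->pt_root) | 0x1ULL;
                      dom->pt_root = new_root;
                      dom->mode   += 1;
                      new_root     = NULL;  /* ownership transferred */
                      grown        = true;
              }
              spin_unlock_irqrestore(&dom->lock, flags);

              free_page((unsigned long)new_root);   /* no-op if transferred */
              return grown;
      }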
    • iommu/amd: Flush old domains in kdump kernel · 36b7200f
      Committed by Stuart Hayes
      When devices are attached to the amd_iommu in a kdump kernel, the old device
      table entries (DTEs), which were copied from the crashed kernel, are
      overwritten with a new domain number. When the new DTE is written, the IOMMU
      is told to flush the DTE from its internal cache, but it is not told to flush
      the translation cache entries for the old domain number.

      Without this patch, AMD systems using the tg3 network driver fail when kdump
      tries to save the vmcore to a network target, showing network timeouts and
      (sometimes) IOMMU errors in the kernel log.

      This patch flushes the IOMMU translation cache entries for the old domain
      whenever a DTE gets overwritten with a new domain number.
      Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com>
      Fixes: 3ac3e5ee ('iommu/amd: Copy old trans table from old kernel')
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
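      The idea behind the fix, as a rough sketch; the DTE field layout and both example_ helpers are illustrative stand-ins (the real driver queues an invalidation command for the old domain id before the entry is rewritten):

      #include <linux/types.h>

      /* Stand-in for the driver's "flush all cached translations for a domain"
       * primitive. */
      static void example_flush_domain_tlb(u16 domid)
      {
              /* ... build and queue the invalidation command for @domid ... */
      }

      /* Before overwriting a DTE copied from the crashed kernel with a new
       * domain id, flush the translations cached under the old domain id. */
      static void example_set_dte(u64 *dte, u16 new_domid)
      {
              u16 old_domid = (u16)(dte[1] & 0xffffULL);   /* illustrative layout */

              if (old_domid && old_domid != new_domid)
                      example_flush_domain_tlb(old_domid);

              dte[1] = (dte[1] & ~0xffffULL) | new_domid;
      }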
  7. 05 Sep 2019 (2 commits)
  8. 04 Sep 2019 (3 commits)
  9. 03 Sep 2019 (4 commits)
  10. 30 Aug 2019 (7 commits)