1. 09 Aug, 2021 (1 commit)
  2. 26 Jul, 2021 (1 commit)
  3. 14 Jul, 2021 (1 commit)
  4. 25 Jun, 2021 (2 commits)
  5. 08 Jun, 2021 (2 commits)
  6. 07 Apr, 2021 (2 commits)
  7. 18 Mar, 2021 (1 commit)
  8. 17 Mar, 2021 (2 commits)
  9. 15 Mar, 2021 (2 commits)
  10. 04 Mar, 2021 (1 commit)
  11. 10 Feb, 2021 (1 commit)
  12. 28 Jan, 2021 (2 commits)
    • iommu: use the __iommu_attach_device() directly for deferred attach · 3ab65729
      Authored by Lianbo Jiang
      Currently, domain attach can be deferred from the iommu driver to the
      device driver, and when the iommu initializes, the devices on the bus
      are scanned and default groups are allocated.
      
      As a result, several devices can end up in the same group, as below:
      
      [    3.859417] pci 0000:01:00.0: Adding to iommu group 16
      [    3.864572] pci 0000:01:00.1: Adding to iommu group 16
      [    3.869738] pci 0000:02:00.0: Adding to iommu group 17
      [    3.874892] pci 0000:02:00.1: Adding to iommu group 17
      
      But when these devices are attached, a group is not allowed to contain
      more than one device; otherwise an error is returned. This conflicts
      with deferred attaching. Unfortunately, on my system two devices end up
      in the same group, for example:
      
      [    9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
      [    9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
      ...
      [   10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
      [   10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1
      
      This ultimately causes the tg3 driver to fail when it calls
      dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():
      
      [    9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
      [    9.754085] tg3: probe of 0000:01:00.0 failed with error -12
      [    9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
      [   10.043053] tg3: probe of 0000:01:00.1 failed with error -12
      [   10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
      [   10.334070] tg3: probe of 0000:02:00.0 failed with error -12
      [   10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
      [   10.622629] tg3: probe of 0000:02:00.1 failed with error -12
      
      Similar situations also occur with other drivers, such as bnxt_en. This
      can be reproduced easily in a kdump kernel when SME is active.
      
      Move the handling currently in iommu_dma_deferred_attach() into the
      iommu core code so that it can call __iommu_attach_device() directly
      instead of iommu_attach_device(); the external interface
      iommu_attach_device() is not suitable for handling this situation (see
      the sketch after this entry).
      Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
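      A minimal sketch of what the resulting core-side helper could look like,
      assuming the driver reports deferral through an is_attach_deferred()
      callback; this is a simplified illustration, not the verbatim kernel
      code:
      
          /* Sketch: let the IOMMU core perform the deferred attach itself,
           * bypassing the "one device per group" restriction enforced by the
           * external iommu_attach_device() interface. */
          int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
          {
                  const struct iommu_ops *ops = domain->ops;
      
                  if (unlikely(ops->is_attach_deferred &&
                               ops->is_attach_deferred(domain, dev)))
                          return __iommu_attach_device(domain, dev);
                  return 0;
          }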
    • dma-iommu: use static-key to minimize the impact in the fast-path · a8e8af35
      Authored by Lianbo Jiang
      Move the is_kdump_kernel() check out of iommu_dma_deferred_attach() and
      into iommu_dma_init(), and use a static key in the fast path to minimize
      the impact in the normal case (see the sketch after this entry).
      Co-developed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20210126115337.20068-2-lijiang@redhat.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
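      A rough sketch of the static-key pattern described above; the key name
      and the exact call sites are simplified assumptions rather than the
      verbatim upstream code:
      
          static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);
      
          static int __init iommu_dma_init(void)
          {
                  /* Decide once at boot: deferred attach only matters in a
                   * kdump kernel, so no per-mapping is_kdump_kernel() call. */
                  if (is_kdump_kernel())
                          static_branch_enable(&iommu_deferred_attach_enabled);
                  return iova_cache_get();
          }
          arch_initcall(iommu_dma_init);
      
          static int iommu_dma_deferred_attach(struct device *dev,
                                               struct iommu_domain *domain)
          {
                  /* Fast path: the branch is patched out when the key is off. */
                  if (static_branch_unlikely(&iommu_deferred_attach_enabled))
                          return iommu_deferred_attach(dev, domain);
                  return 0;
          }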
  13. 07 Jan, 2021 (1 commit)
  14. 09 Dec, 2020 (1 commit)
  15. 25 Nov, 2020 (4 commits)
  16. 06 Oct, 2020 (3 commits)
  17. 25 Sep, 2020 (2 commits)
  18. 18 Sep, 2020 (1 commit)
  19. 04 Sep, 2020 (2 commits)
  20. 14 Aug, 2020 (1 commit)
  21. 20 Apr, 2020 (1 commit)
    • dma-pool: add additional coherent pools to map to gfp mask · c84dc6e6
      Authored by David Rientjes
      The single atomic pool is allocated from the lowest zone possible since
      it is guaranteed to be applicable for any DMA allocation.
      
      Devices may allocate through the DMA API but not have a strict reliance
      on GFP_DMA memory.  Since the atomic pool will be used for all
      non-blockable allocations, returning all memory from ZONE_DMA may
      unnecessarily deplete the zone.
      
      Provision for multiple atomic pools that will map to the optimal gfp
      mask of the device.
      
      When allocating non-blockable memory, determine the optimal gfp mask of
      the device and use the appropriate atomic pool (a sketch of this
      selection follows this entry).
      
      The coherent DMA mask will remain the same between allocation and free
      and, thus, memory will be freed to the same atomic pool it was allocated
      from.
      
      __dma_atomic_pool_init() will be changed to return struct gen_pool *
      later once dynamic expansion is added.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
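      A hedged sketch of how the pool selection could be done; the pool
      variable names and the mask thresholds here are illustrative
      assumptions, not the exact upstream implementation:
      
          static struct gen_pool *atomic_pool_dma;     /* ZONE_DMA backed     */
          static struct gen_pool *atomic_pool_dma32;   /* ZONE_DMA32 backed   */
          static struct gen_pool *atomic_pool_kernel;  /* unrestricted memory */
      
          /* Sketch: pick the pool matching the device's coherent mask, so
           * ZONE_DMA is only drained by devices that actually require it.
           * The thresholds are simplified for illustration. */
          static struct gen_pool *dev_to_pool(struct device *dev)
          {
                  u64 mask = dev->coherent_dma_mask;
      
                  if (IS_ENABLED(CONFIG_ZONE_DMA) && mask < DMA_BIT_MASK(32))
                          return atomic_pool_dma;
                  if (IS_ENABLED(CONFIG_ZONE_DMA32) && mask < DMA_BIT_MASK(64))
                          return atomic_pool_dma32;
                  return atomic_pool_kernel;
          }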
  22. 04 Mar, 2020 (1 commit)
    • iommu/dma: Fix MSI reservation allocation · 65ac74f1
      Authored by Marc Zyngier
      The way cookie_init_hw_msi_region() allocates the iommu_dma_msi_page
      structures doesn't match the way iommu_put_dma_cookie() frees them.
      
      The former performs a single allocation of all the required structures,
      while the latter tries to free them one at a time. It doesn't quite
      work for the main use case (the GICv3 ITS where the range is 64kB)
      when the base granule size is 4kB.
      
      This leads to a nice slab corruption on teardown, which is easily
      observable by simply creating a VF on a SRIOV-capable device, and
      tearing it down immediately (no need to even make use of it).
      Fortunately, this only affects systems where the ITS isn't translated
      by the SMMU, which are both rare and non-standard.
      
      Fix it by allocating the iommu_dma_msi_page structures one at a time
      (see the sketch after this entry).
      
      Fixes: 7c1b058c ("iommu/dma: Handle IOMMU API reserved regions")
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: stable@vger.kernel.org
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
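      A simplified sketch of the fixed allocation pattern: one
      iommu_dma_msi_page per IOVA granule, matching the one-at-a-time freeing
      in iommu_put_dma_cookie(). Field and helper names are reconstructed
      assumptions rather than verbatim code:
      
          static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
                                               phys_addr_t start, phys_addr_t end)
          {
                  struct iova_domain *iovad = &cookie->iovad;
                  struct iommu_dma_msi_page *msi_page;
                  int i, num_pages;
      
                  start -= iova_offset(iovad, start);
                  num_pages = iova_align(iovad, end - start) >> iova_shift(iovad);
      
                  /* One allocation per granule, so teardown can kfree() each
                   * list entry individually without corrupting the slab. */
                  for (i = 0; i < num_pages; i++) {
                          msi_page = kmalloc(sizeof(*msi_page), GFP_KERNEL);
                          if (!msi_page)
                                  return -ENOMEM;
      
                          msi_page->phys = start;
                          msi_page->iova = start;
                          INIT_LIST_HEAD(&msi_page->list);
                          list_add(&msi_page->list, &cookie->msi_page_list);
                          start += iovad->granule;
                  }
                  return 0;
          }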
  23. 08 Jan, 2020 (1 commit)
  24. 19 Dec, 2019 (1 commit)
  25. 17 Dec, 2019 (1 commit)
  26. 22 Nov, 2019 (1 commit)
    • dma-mapping: treat dev->bus_dma_mask as a DMA limit · a7ba70f1
      Authored by Nicolas Saenz Julienne
      Using a mask to represent bus DMA constraints has a set of limitations,
      the biggest one being that it can only hold a power of two (minus one).
      The DMA mapping code is already aware of this and treats
      dev->bus_dma_mask as a limit. This quirk is already used by some
      architectures, although it is still rare.
      
      With the introduction of the Raspberry Pi 4 we've found a new contender
      for the use of bus DMA limits, as its PCIe bus can only address the
      lower 3GB of memory (of a total of 4GB). This is impossible to represent
      with a mask. To make things worse, the device-tree code rounds
      non-power-of-two bus DMA limits up to the next power of two, which is
      unacceptable in this case.
      
      In light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all
      over the tree and treat it as such. Note that dev->bus_dma_limit should
      contain the highest accessible DMA address (see the sketch after this
      entry).
      Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
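      A hedged sketch of how a limit (rather than a mask) can be combined with
      the device's own mask when checking addressability; this is a simplified
      dma_capable()-style check with some corner-case handling omitted:
      
          /* Sketch: the bus constraint is an inclusive upper bound, combined
           * with the device mask via min_not_zero(). A non-power-of-two limit
           * such as the Raspberry Pi 4's 3 GB PCIe window fits this model. */
          static inline bool dma_capable(struct device *dev, dma_addr_t addr,
                                         size_t size)
          {
                  dma_addr_t end = addr + size - 1;
      
                  if (!dev->dma_mask)
                          return false;
      
                  return end <= min_not_zero(*dev->dma_mask, dev->bus_dma_limit);
          }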
  27. 21 Nov, 2019 (1 commit)