1. 28 April 2022, 1 commit
    • iommu: Add DMA ownership management interfaces · 1ea2a07a
      Committed by Lu Baolu
      Multiple devices may be placed in the same IOMMU group because they
      cannot be isolated from each other. These devices must either be
      entirely under kernel control or userspace control, never a mixture.
      
      This adds DMA ownership management to the iommu core and exposes
      several interfaces to device drivers and to the device userspace
      assignment framework (i.e. VFIO), so that any conflict between user-
      and kernel-controlled DMA can be detected up front.
      
      The device driver oriented interfaces are,
      
      	int iommu_device_use_default_domain(struct device *dev);
      	void iommu_device_unuse_default_domain(struct device *dev);
      
      By calling iommu_device_use_default_domain(), the device driver tells
      the iommu layer that the device dma is handled through the kernel DMA
      APIs. The iommu layer will manage the IOVA and use the default domain
      for DMA address translation.
      
      The device user-space assignment framework oriented interfaces are,
      
      	int iommu_group_claim_dma_owner(struct iommu_group *group,
      					void *owner);
      	void iommu_group_release_dma_owner(struct iommu_group *group);
      	bool iommu_group_dma_owner_claimed(struct iommu_group *group);
      
      The device userspace assignment must be disallowed if the DMA owner
      claiming interface returns failure.
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      Signed-off-by: Kevin Tian <kevin.tian@intel.com>
      Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20220418005000.897664-2-baolu.lu@linux.intel.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  2. 28 February 2022, 6 commits
  3. 20 December 2021, 1 commit
  4. 20 August 2021, 1 commit
  5. 18 August 2021, 4 commits
  6. 09 August 2021, 1 commit
  7. 02 August 2021, 2 commits
    • iommu: Factor iommu_iotlb_gather_is_disjoint() out · febb82c2
      Committed by Nadav Amit
      Refactor iommu_iotlb_gather_add_page() and factor out the logic that
      detects whether the IOTLB gather range and a new range are disjoint.
      It will be used by the next patch, which implements different
      gathering logic for AMD.
      
      Note that updating gather->pgsize unconditionally does not affect
      correctness as the function had (and has) an invariant, in which
      gather->pgsize always represents the flushing granularity of its range.
      Arguably, "size" should never be zero, but let's assume for the sake
      of discussion that it might be.
      
      If "size" equals "gather->pgsize", then the assignment in question
      has no impact.
      
      Otherwise, if "size" is non-zero, then iommu_iotlb_sync() would
      initialize the size and range (see iommu_iotlb_gather_init()), and the
      invariant is kept.
      
      Otherwise, "size" is zero, and "gather" already holds a range, so
      gather->pgsize is non-zero and (gather->pgsize && gather->pgsize !=
      size) is true. Therefore, again, iommu_iotlb_sync() would be called and
      initialize the size.
      
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Jiajun Cao <caojiajun@vmware.com>
      Cc: Lu Baolu <baolu.lu@linux.intel.com>
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Link: https://lore.kernel.org/r/20210723093209.714328-5-namit@vmware.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
    • iommu: Improve iommu_iotlb_gather helpers · 3136895c
      Committed by Robin Murphy
      The Mediatek driver is not the only one which might want a basic
      address-based gathering behaviour, so although it's arguably simple
      enough to open-code, let's factor it out for the sake of cleanliness.
      Let's also take this opportunity to document the intent of these
      helpers for clarity.
      
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: Jiajun Cao <caojiajun@vmware.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Lu Baolu <baolu.lu@linux.intel.com>
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Link: https://lore.kernel.org/r/20210723093209.714328-4-namit@vmware.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  8. 26 July 2021, 3 commits
  9. 16 April 2021, 2 commits
  10. 07 April 2021, 17 commits
  11. 02 February 2021, 1 commit
  12. 28 January 2021, 1 commit
    • iommu: use the __iommu_attach_device() directly for deferred attach · 3ab65729
      Committed by Lianbo Jiang
      Currently, domain attach can be deferred from the IOMMU driver to
      the device driver, and when the IOMMU subsystem initializes, the
      devices on the bus are scanned and default groups are allocated.
      
      As a result, several devices can end up in the same group, as below:
      
      [    3.859417] pci 0000:01:00.0: Adding to iommu group 16
      [    3.864572] pci 0000:01:00.1: Adding to iommu group 16
      [    3.869738] pci 0000:02:00.0: Adding to iommu group 17
      [    3.874892] pci 0000:02:00.1: Adding to iommu group 17
      
      But when attaching these devices, iommu_attach_device() does not
      allow a group to contain more than one device and returns an error,
      which conflicts with deferred attaching. Unfortunately, on the
      affected system each group holds two devices, for example:
      
      [    9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
      [    9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
      ...
      [   10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
      [   10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1
      
      Finally, this causes the tg3 driver to fail when it calls
      dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():
      
      [    9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
      [    9.754085] tg3: probe of 0000:01:00.0 failed with error -12
      [    9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
      [   10.043053] tg3: probe of 0000:01:00.1 failed with error -12
      [   10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
      [   10.334070] tg3: probe of 0000:02:00.0 failed with error -12
      [   10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
      [   10.622629] tg3: probe of 0000:02:00.1 failed with error -12
      
      In addition, similar situations occur in other drivers, such as the
      bnxt_en driver. This can be reproduced easily in a kdump kernel when
      SME is active.
      
      Let's move the handling currently in iommu_dma_deferred_attach() into
      the iommu core code so that it can call the __iommu_attach_device()
      directly instead of the iommu_attach_device(). The external interface
      iommu_attach_device() is not suitable for handling this situation.
      Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>