1. 16 Aug 2017, 8 commits
    • iommu/vt-d: Allow to flush more than 4GB of device TLBs · c8acb28b
      Authored by Joerg Roedel
      The shift in qi_flush_dev_iotlb() is done on an int, which
      limits the mask to 32 bits. Make the mask 64 bits wide so
      that more than 4GB of address range can be flushed at once.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      c8acb28b
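
      As a standalone illustration (not the driver code itself): a left shift on a
      plain int overflows once the shift reaches bit 31, so an address mask covering
      a 4GB range cannot be built, while a 64-bit constant keeps the full mask.
      Variable names below are made up for the example.

      #include <stdio.h>
      #include <inttypes.h>

      int main(void)
      {
          unsigned int page_shift = 12;   /* 4KB pages */
          unsigned int order = 20;        /* 2^20 pages => a 4GB range */

          /*
           * Before (sketch): "(1 << (page_shift + order)) - 1" shifts a plain int,
           * which is undefined once the shift reaches bit 31, so masks covering
           * 4GB or more are silently truncated.
           *
           * After: a 64-bit constant keeps the full-width mask.
           */
          uint64_t addr_mask = (1ULL << (page_shift + order)) - 1;

          printf("mask for a 4GB flush: 0x%" PRIx64 "\n", addr_mask);
          return 0;
      }
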
    • iommu/amd: Make use of iova queue flushing · 9003d618
      Authored by Joerg Roedel
      Rip out the implementation in the AMD IOMMU driver and use
      the one in the common iova code instead.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      9003d618
    • iommu/iova: Add flush timer · 9a005a80
      Authored by Joerg Roedel
      Add a timer to flush entries from the Flush-Queues every
      10ms. This makes sure that no stale TLB entries remain for
      too long after an IOVA has been unmapped.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      9a005a80
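
      A minimal sketch of the timer arming described above, assuming today's
      timer_setup()/from_timer() interface (the original patch used the older timer
      API) and illustrative field and function names such as fq_timer, fq_timer_on
      and fq_flush_timeout_sketch; the real code lives in drivers/iommu/iova.c.
      Initialization would call timer_setup(&iovad->fq_timer, fq_flush_timeout_sketch, 0).

      #include <linux/timer.h>
      #include <linux/jiffies.h>
      #include <linux/atomic.h>

      #define IOVA_FQ_TIMEOUT_MS 10   /* flush at most 10ms after an unmap */

      struct iovad_sketch {
          struct timer_list fq_timer;
          atomic_t fq_timer_on;       /* is a flush already scheduled? */
      };

      static void fq_flush_timeout_sketch(struct timer_list *t)
      {
          struct iovad_sketch *iovad = from_timer(iovad, t, fq_timer);

          atomic_set(&iovad->fq_timer_on, 0);
          /* ... flush the IOMMU TLB and free all queued entries here ... */
      }

      /* Called when an IOVA is queued: arm the timer unless it is already pending. */
      static void fq_schedule_flush_sketch(struct iovad_sketch *iovad)
      {
          if (!atomic_read(&iovad->fq_timer_on) &&
              !atomic_xchg(&iovad->fq_timer_on, 1))
              mod_timer(&iovad->fq_timer,
                        jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT_MS));
      }
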
    • iommu/iova: Add locking to Flush-Queues · 8109c2a2
      Authored by Joerg Roedel
      The lock is taken from the same CPU most of the time. But
      having it allows the queue to be flushed from another CPU
      as well when necessary.
      
      This will be used by a timer to regularly flush any pending
      IOVAs from the Flush-Queues.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      8109c2a2
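
      A minimal sketch of the per-queue locking, with illustrative names:
      spin_lock_irqsave() makes the drain path safe whether it runs from the local
      unmap path or from the flush timer on another CPU.

      #include <linux/spinlock.h>

      struct fq_sketch {
          spinlock_t lock;
          /* ... ring of pending IOVA entries ... */
      };

      static void fq_drain_sketch(struct fq_sketch *fq)
      {
          unsigned long flags;

          /* Safe from the owning CPU or from the flush timer on another CPU. */
          spin_lock_irqsave(&fq->lock, flags);
          /* ... free entries whose TLB flush has already completed ... */
          spin_unlock_irqrestore(&fq->lock, flags);
      }
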
    • iommu/iova: Add flush counters to Flush-Queue implementation · fb418dab
      Authored by Joerg Roedel
      There are two counters:
      
      	* fq_flush_start_cnt  - Increased when a TLB flush
      	                        is started.
      
      	* fq_flush_finish_cnt - Increased when a TLB flush
      				is finished.
      
      The fq_flush_start_cnt is assigned to every Flush-Queue
      entry on its creation. When freeing entries from the
      Flush-Queue, the value in the entry is compared to the
      fq_flush_finish_cnt. The entry can only be freed when its
      value is less than the value of fq_flush_finish_cnt.
      
      The reason for these counters is to take advantage of IOMMU
      TLB flushes that happened on other CPUs. These already
      flushed the TLB for Flush-Queue entries on other CPUs so
      that they can already be freed without flushing the TLB
      again.
      
      This makes it less likely that the Flush-Queue is full and
      saves IOMMU TLB flushes.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      fb418dab
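
      A hedged sketch of the counter scheme: the two counter names follow the commit
      message, everything else (struct and function names) is illustrative.

      #include <linux/atomic.h>
      #include <linux/types.h>

      struct iovad_cnt_sketch {
          atomic64_t fq_flush_start_cnt;   /* bumped when a TLB flush starts   */
          atomic64_t fq_flush_finish_cnt;  /* bumped when a TLB flush finishes */
      };

      struct fq_entry_sketch {
          u64 counter;    /* fq_flush_start_cnt value when the entry was queued */
      };

      static void fq_entry_stamp(struct iovad_cnt_sketch *iovad,
                                 struct fq_entry_sketch *entry)
      {
          entry->counter = atomic64_read(&iovad->fq_flush_start_cnt);
      }

      /*
       * A later flush (possibly issued by another CPU) already covered this entry,
       * so it can be freed without issuing another TLB flush.
       */
      static bool fq_entry_freeable(struct iovad_cnt_sketch *iovad,
                                    struct fq_entry_sketch *entry)
      {
          return entry->counter < atomic64_read(&iovad->fq_flush_finish_cnt);
      }

      static void iommu_tlb_flush_all_sketch(struct iovad_cnt_sketch *iovad)
      {
          atomic64_inc(&iovad->fq_flush_start_cnt);
          /* ... issue the actual IOMMU TLB flush ... */
          atomic64_inc(&iovad->fq_flush_finish_cnt);
      }
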
    • iommu/iova: Implement Flush-Queue ring buffer · 19282101
      Authored by Joerg Roedel
      Add a function to add entries to the Flush-Queue ring
      buffer. If the buffer is full, call the flush-callback and
      free the entries.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      19282101
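
      A simplified sketch of the insert path with an illustrative fixed-size ring and
      flush callback; the real function also stamps each entry with the flush
      counters from the previous patch and frees the IOVAs it drops.

      #include <linux/types.h>

      #define FQ_SIZE_SKETCH 256

      struct fq_ring_sketch {
          unsigned int head, tail;
          struct {
              unsigned long iova_pfn;
              unsigned long pages;
          } entries[FQ_SIZE_SKETCH];
      };

      static bool fq_full_sketch(struct fq_ring_sketch *fq)
      {
          return ((fq->tail + 1) % FQ_SIZE_SKETCH) == fq->head;
      }

      static void fq_ring_add_sketch(struct fq_ring_sketch *fq,
                                     unsigned long iova_pfn, unsigned long pages,
                                     void (*flush_cb)(void))
      {
          /* No room left: flush the IOMMU TLB, then everything queued is free. */
          if (fq_full_sketch(fq)) {
              flush_cb();
              fq->head = fq->tail;    /* freeing of the IOVAs themselves elided */
          }

          fq->entries[fq->tail].iova_pfn = iova_pfn;
          fq->entries[fq->tail].pages = pages;
          fq->tail = (fq->tail + 1) % FQ_SIZE_SKETCH;
      }
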
    • iommu/iova: Add flush-queue data structures · 42f87e71
      Authored by Joerg Roedel
      This patch adds the basic data-structures to implement
      flush-queues in the generic IOVA code. It also adds the
      initialization and destroy routines for these data
      structures.
      
      The initialization routine is designed so that the use of
      this feature is optional for the users of IOVA code.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      42f87e71
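
      A rough sketch of the opt-in shape described above, with made-up names
      approximating the generic IOVA API: per-CPU queues are only allocated when a
      user supplies a flush callback, and teardown is a no-op for users that never did.

      #include <linux/percpu.h>
      #include <linux/errno.h>

      struct fq_percpu_sketch {
          unsigned int head, tail;
          /* ... ring of pending entries, lock, etc. ... */
      };

      struct iova_domain_sketch {
          struct fq_percpu_sketch __percpu *fq;   /* NULL unless opted in */
          void (*flush_cb)(void);
      };

      /* Users that want deferred flushing call this; everyone else skips it. */
      static int init_flush_queue_sketch(struct iova_domain_sketch *iovad,
                                         void (*flush_cb)(void))
      {
          iovad->fq = alloc_percpu(struct fq_percpu_sketch);
          if (!iovad->fq)
              return -ENOMEM;
          iovad->flush_cb = flush_cb;
          return 0;
      }

      static void free_flush_queue_sketch(struct iova_domain_sketch *iovad)
      {
          if (!iovad->fq)     /* flush-queues were never enabled */
              return;
          free_percpu(iovad->fq);
          iovad->fq = NULL;
      }
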
    • iommu/of: Fix of_iommu_configure() for disabled IOMMUs · da4b0275
      Authored by Robin Murphy
      Sudeep reports that the logic got slightly broken when a PCI iommu-map
      entry targets an IOMMU marked as disabled in DT, since of_pci_map_rid()
      succeeds in following a phandle, and of_iommu_xlate() doesn't return an
      error value, but we miss checking whether ops was actually non-NULL.
      Whilst this could be solved with a point fix in of_pci_iommu_init(), it
      suggests that all the juggling of ERR_PTR values through the ops pointer
      is proving rather too complicated for its own good, so let's instead
      simplify the whole flow (with a side-effect of eliminating the cause of
      the bug).
      
      The fact that we now rely on iommu_fwspec means that we no longer need
      to pass around an iommu_ops pointer at all - we can simply propagate a
      regular int return value until we know whether we have a viable IOMMU,
      then retrieve the ops from the fwspec if and when we actually need them.
      This makes everything a bit more uniform and certainly easier to follow.
      
      Fixes: d87beb74 ("iommu/of: Handle PCI aliases properly")
      Reported-by: Sudeep Holla <sudeep.holla@arm.com>
      Tested-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      da4b0275
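
      A hedged sketch of the simplified flow with illustrative function names: the
      translation helper returns a plain int, and the ops are only read back from
      the device's iommu_fwspec (a direct struct device member in this kernel era)
      once every step has succeeded, so a disabled IOMMU yields NULL rather than a
      bogus ops pointer.

      #include <linux/device.h>
      #include <linux/iommu.h>
      #include <linux/of.h>
      #include <linux/errno.h>

      /* Illustrative stand-in for the per-ID translation step. */
      static int of_iommu_xlate_sketch(struct device *dev,
                                       struct of_phandle_args *iommu_spec)
      {
          /* Returns 0 on success, -ENODEV for a missing/disabled IOMMU, etc. */
          return -ENODEV;
      }

      static const struct iommu_ops *
      of_iommu_configure_sketch(struct device *dev,
                                struct of_phandle_args *iommu_spec)
      {
          int err = of_iommu_xlate_sketch(dev, iommu_spec);

          /* Only trust the ops once every translation step actually succeeded. */
          if (err)
              return NULL;

          return dev->iommu_fwspec ? dev->iommu_fwspec->ops : NULL;
      }
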
  2. 10 Aug 2017, 4 commits
  3. 26 Jul 2017, 2 commits
    • iommu: Convert to using %pOF instead of full_name · 6bd4f1c7
      Authored by Rob Herring
      Now that we have a custom printf format specifier, convert users of
      full_name to use %pOF instead. This is preparation to remove storing
      of the full path string for each node.
      Signed-off-by: Rob Herring <robh@kernel.org>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-rockchip@lists.infradead.org
      Reviewed-by: Heiko Stuebner <heiko@sntech.de>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      6bd4f1c7
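
      A small before/after example of the conversion, using a hypothetical caller:
      %pOF makes printk emit the device_node's full path itself, so the pre-built
      full_name string is no longer needed.

      #include <linux/of.h>
      #include <linux/printk.h>

      static void report_missing_iommu(struct device_node *np)
      {
          /* Before: pr_err("no IOMMU found for node %s\n", np->full_name); */
          pr_err("no IOMMU found for node %pOF\n", np);   /* after */
      }
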
    • iommu/of: Handle PCI aliases properly · d87beb74
      Authored by Robin Murphy
      When a PCI device has DMA quirks, we need to ensure that an upstream
      IOMMU knows about all possible aliases, since the presence of a DMA
      quirk does not preclude the device still also emitting transactions
      (e.g. MSIs) on its 'real' RID. Similarly, the rules for bridge aliasing
      are relatively complex, and some bridges may only take ownership of
      transactions under particular transient circumstances, leading again to
      multiple RIDs potentially being seen at the IOMMU for the given device.
      
      Take all this into account in the OF code by translating every RID
      produced by the alias walk, not just whichever one comes out last.
      Happily, this also makes things tidy enough that we can reduce the
      number of both total lines of code, and confusing levels of indirection,
      by pulling the "iommus"/"iommu-map" parsing helpers back in-line again.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      d87beb74
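
      A rough sketch of the alias walk with illustrative callback and struct names;
      pci_for_each_dma_alias() is the real helper that invokes the callback once for
      the device's own RID and once for each alias, so every one of them can be
      translated through "iommu-map" rather than only the last.

      #include <linux/pci.h>
      #include <linux/of.h>

      struct alias_walk_sketch {
          struct device_node *np;    /* node carrying the "iommu-map" */
          int err;
      };

      /* Called once for the device's own RID and once for every alias it may use. */
      static int translate_one_rid_sketch(struct pci_dev *pdev, u16 alias, void *data)
      {
          struct alias_walk_sketch *info = data;

          /*
           * ... map 'alias' through info->np's "iommu-map"/"iommu-map-mask"
           * properties and record the resulting IOMMU stream ID for the device ...
           */
          return info->err;    /* returning non-zero stops the walk */
      }

      static int configure_pci_iommu_sketch(struct pci_dev *pdev,
                                            struct device_node *bridge_np)
      {
          struct alias_walk_sketch info = { .np = bridge_np };

          /* Translate every RID the device might emit, not just the final alias. */
          return pci_for_each_dma_alias(pdev, translate_one_rid_sketch, &info);
      }
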
  4. 23 Jul 2017, 2 commits
  5. 21 Jul 2017, 4 commits
  6. 20 Jul 2017, 20 commits