1. 12 Oct, 2019 (4 commits)
  2. 15 Jun, 2019 (1 commit)
  3. 13 Feb, 2019 (2 commits)
    • W
      iommu/arm-smmu-v3: Use explicit mb() when moving cons pointer · 710e1e56
      Committed by Will Deacon
      [ Upstream commit a868e8530441286342f90c1fd9c5f24de3aa2880 ]
      
      After removing an entry from a queue (e.g. reading an event in
      arm_smmu_evtq_thread()) it is necessary to advance the MMIO consumer
      pointer to free the queue slot back to the SMMU. A memory barrier is
      required here so that all reads targeting the queue entry have
      completed before the consumer pointer is updated.
      
      The implementation of queue_inc_cons() relies on a writel() to complete
      the previous reads, but this is incorrect because writel() is only
      guaranteed to complete prior writes. This patch replaces the call to
      writel() with an mb(); writel_relaxed() sequence, which gives us the
      read->write ordering which we require.
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      710e1e56
    • Z
      iommu/arm-smmu-v3: Avoid memory corruption from Hisilicon MSI payloads · 00b0fbb8
      Committed by Zhen Lei
      [ Upstream commit 84a9a75774961612d0c7dd34a1777e8f98a65abd ]
      
      The GITS_TRANSLATER MMIO doorbell register in the ITS hardware is
      architected to be 4 bytes in size, yet on hi1620 and earlier, Hisilicon
      have allocated the adjacent 4 bytes to carry some IMPDEF sideband
      information which results in an 8-byte MSI payload being delivered when
      signalling an interrupt:
      
      MSIAddr:
      	 |----4bytes----|----4bytes----|
      	 |    MSIData   |    IMPDEF    |
      
      This poses no problem for the ITS hardware because the adjacent 4 bytes
      are reserved in the memory map. However, when delivering MSIs to memory,
      as we do in the SMMUv3 driver for signalling the completion of a SYNC
      command, the extended payload will corrupt the 4 bytes adjacent to the
      "sync_count" member in struct arm_smmu_device. Fortunately, the current
      layout allocates these bytes to padding, but this is fragile and we
      should make this explicit.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      [will: Rewrote commit message and comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      00b0fbb8
  4. 10 Jan, 2019 (1 commit)
    • R
      iommu/arm-smmu-v3: Fix big-endian CMD_SYNC writes · 1817b2cc
      Committed by Robin Murphy
      commit 3cd508a8c1379427afb5e16c2e0a7c986d907853 upstream.
      
      When we insert the sync sequence number into the CMD_SYNC.MSIData field,
      we do so in CPU-native byte order, before writing out the whole command
      as explicitly little-endian dwords. Thus on big-endian systems, the SMMU
      will receive and write back a byteswapped version of sync_nr, which would
      be perfect if it were targeting a similarly-little-endian ITS, but since
      it's actually writing back to memory being polled by the CPUs, they're
      going to end up seeing the wrong thing.
      
      Since the SMMU doesn't care what the MSIData actually contains, the
      minimal-overhead solution is to simply add an extra byteswap initially,
      such that it then writes back the big-endian format directly.
      
      Cc: <stable@vger.kernel.org>
      Fixes: 37de98f8 ("iommu/arm-smmu-v3: Use CMD_SYNC completion MSI")
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1817b2cc
  5. 08 Aug, 2018 (1 commit)
  6. 27 Jul, 2018 (2 commits)
  7. 26 Jul, 2018 (1 commit)
  8. 10 Jul, 2018 (1 commit)
    • R
      iommu: Remove IOMMU_OF_DECLARE · ac6bbf0c
      Committed by Rob Herring
      Now that we use the driver core to stop deferred probe for missing
      drivers, IOMMU_OF_DECLARE can be removed.
      
      This is slightly less optimal than having a list of built-in drivers in
      that we'll now defer probe twice before giving up. This shouldn't have a
      significant impact on boot times as past discussions about deferred
      probe have given no evidence of deferred probe having a substantial
      impact.
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Kukjin Kim <kgene@kernel.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Frank Rowand <frowand.list@gmail.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-samsung-soc@vger.kernel.org
      Cc: linux-arm-msm@vger.kernel.org
      Cc: linux-rockchip@lists.infradead.org
      Cc: devicetree@vger.kernel.org
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Acked-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ac6bbf0c
  9. 27 Mar, 2018 (8 commits)
  10. 17 Jan, 2018 (1 commit)
  11. 03 Jan, 2018 (2 commits)
  12. 20 Oct, 2017 (10 commits)
    • R
      iommu/arm-smmu-v3: Use burst-polling for sync completion · 8ff0f723
      Committed by Robin Murphy
      While CMD_SYNC is unlikely to complete immediately such that we never go
      round the polling loop, with a lightly-loaded queue it may still do so
      long before the delay period is up. If we have no better completion
      notifier, use similar logic as we have for SMMUv2 to spin a number of
      times before each backoff, so that we have more chance of catching syncs
      which complete relatively quickly and avoid delaying unnecessarily.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8ff0f723
    • W
      iommu/arm-smmu-v3: Consolidate identical timeouts · a529ea19
      Committed by Will Deacon
      We have separate (identical) timeout values for polling for a queue to
      drain and waiting for an MSI to signal CMD_SYNC completion. In reality,
      we only wait for the command queue to drain if we're waiting on a sync,
      so just merge these two timeouts into a single constant.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a529ea19
    • W
      iommu/arm-smmu-v3: Split arm_smmu_cmdq_issue_sync in half · 49806599
      Committed by Will Deacon
      arm_smmu_cmdq_issue_sync is a little unwieldy now that it supports both
      MSI and event-based polling, so split it into two functions to make things
      easier to follow.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      49806599
    • R
      iommu/arm-smmu-v3: Use CMD_SYNC completion MSI · 37de98f8
      Committed by Robin Murphy
      As an IRQ, the CMD_SYNC interrupt is not particularly useful, not least
      because we often need to wait for sync completion within someone else's
      IRQ handler anyway. However, when the SMMU is both coherent and supports
      MSIs, we can have a lot more fun by not using it as an interrupt at all.
      Following the example suggested in the architecture and using a write
      targeting normal memory, we can let callers wait on a status variable
      outside the lock instead of having to stall the entire queue or even
      touch MMIO registers. Since multiple sync commands are guaranteed to
      complete in order, a simple incrementing sequence count is all we need
      to unambiguously support any realistic number of overlapping waiters.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      37de98f8
    • R
      iommu/arm-smmu-v3: Forget about cmdq-sync interrupt · dce032a1
      Committed by Robin Murphy
      The cmdq-sync interrupt is never going to be particularly useful, since
      for stage 1 DMA at least we'll often need to wait for sync completion
      within someone else's IRQ handler, thus have to implement polling
      anyway. Beyond that, the overhead of taking an interrupt, then still
      having to grovel around in the queue to figure out *which* sync command
      completed, doesn't seem much more attractive than simple polling either.
      
      Furthermore, if an implementation both has wired interrupts and supports
      MSIs, then we don't want to be taking the IRQ unnecessarily if we're
      using the MSI write to update memory. Let's just make life simpler by
      not even bothering to claim it in the first place.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      dce032a1
    • R
      iommu/arm-smmu-v3: Specialise CMD_SYNC handling · 2f657add
      Committed by Robin Murphy
      CMD_SYNC already has a bit of special treatment here and there, but as
      we're about to extend it with more functionality for completing outside
      the CMDQ lock, things are going to get rather messy if we keep trying to
      cram everything into a single generic command interface. Instead, let's
      break out the issuing of CMD_SYNC into its own specific helper where
      upcoming changes will have room to breathe.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f657add
    • R
      iommu/arm-smmu-v3: Correct COHACC override message · 2a22baa2
      Committed by Robin Murphy
      Slightly confusingly, when reporting a mismatch of the ID register
      value, we still refer to the IORT COHACC override flag as the
      "dma-coherent property" if we booted with ACPI. Update the message
      to be firmware-agnostic in line with SMMUv2.
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2a22baa2
    • Y
      iommu/arm-smmu-v3: Avoid ILLEGAL setting of STE.S1STALLD and CD.S · 9cff86fd
      Committed by Yisheng Xie
      According to the spec, it is ILLEGAL to set STE.S1STALLD if STALL_MODEL
      is not 0b00, which means we should not disable stall mode if stall or
      terminate mode is not configurable.
      
      Meanwhile, it is also ILLEGAL when STALL_MODEL == 0b10 && CD.S == 0,
      which means that if stall mode is forced we should always set CD.S.
      
      As per Jean-Philippe's suggestion, this patch introduces a feature bit,
      ARM_SMMU_FEAT_STALL_FORCE, meaning the SMMU only supports forced
      stalling. We can then avoid the ILLEGAL setting of STE.S1STALLD by
      checking ARM_SMMU_FEAT_STALL_FORCE.
      
      This patch keeps ARM_SMMU_FEAT_STALLS with the meaning "stalling is
      supported" (forced or configurable), to make future extension easy;
      e.g. we can check ARM_SMMU_FEAT_STALLS alone to decide whether to
      register a fault handler or enable a master's can_stall, etc., to
      support platform SVM.
      
      After this patch, the feature bits, STE.S1STALLD and CD.S are set as
      follows:
      
      STALL_MODEL  FEATURE                                              S1STALLD  CD.S
      0b00         ARM_SMMU_FEAT_STALLS                                 0b1       0b0
      0b01         !ARM_SMMU_FEAT_STALLS && !ARM_SMMU_FEAT_STALL_FORCE  0b0       0b0
      0b10         ARM_SMMU_FEAT_STALLS && ARM_SMMU_FEAT_STALL_FORCE    0b0       0b1
      Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9cff86fd
    • W
      iommu/arm-smmu-v3: Ensure we sync STE when only changing config field · 704c0382
      Committed by Will Deacon
      The SMMUv3 architecture permits caching of data structures deemed to be
      "reachable" by the SMMU, which includes STEs marked as invalid. When
      transitioning an STE to a bypass/fault configuration at init or detach
      time, we mistakenly elide the CMDQ_OP_CFGI_STE operation in some cases,
      therefore potentially leaving the old STE state cached in the SMMU.
      
      This patch fixes the problem by ensuring that we perform the
      CMDQ_OP_CFGI_STE operation irrespective of the validity of the previous
      STE.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Reported-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      704c0382
    • R
      iommu/arm-smmu: Remove ACPICA workarounds · 6948d4a7
      Committed by Robin Murphy
      Now that the kernel headers have synced with the relevant upstream
      ACPICA updates, it's time to clean up the temporary local definitions.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6948d4a7
  13. 02 Oct, 2017 (1 commit)
  14. 17 Aug, 2017 (1 commit)
  15. 24 Jun, 2017 (4 commits)