Commit 234035c8 authored by Jean-Philippe Brucker, committed by Zheng Zengkai

iommu/arm-smmu-v3: Enable broadcast TLB maintenance

maillist inclusion
category: feature
bugzilla: 51855
CVE: NA

Reference: https://jpbrucker.net/git/linux/commit/?h=sva/2021-03-01&id=027d704339ca0e1a89987eac24e60678f6844a56

---------------------------------------------

The SMMUv3 can handle invalidation targeted at TLB entries with shared
ASIDs. If the implementation supports broadcast TLB maintenance, enable it
and keep track of it in a feature bit. The SMMU will then be affected by
inner-shareable TLB invalidations from other agents.

A major side-effect of this change is that stage-2 translation contexts
are now affected by all invalidations by VMID. Since VMIDs are all shared
and the stage-2 page tables are not shared between CPU and SMMU, the only
ways to prevent over-invalidation are to either disable BTM or allocate
different VMIDs. This patch does not address the problem.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent c97e6be2
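
Before the diff, a rough standalone C model of the gating condition the probe path applies before setting ARM_SMMU_FEAT_BTM (illustrative only, not driver code; want_btm() and the test values are made up for this sketch, while the IDR0 bit positions come from the header hunk below): BTM is only enabled when IDR0 advertises it and either the CPU is not using VHE or the SMMU also implements HYP.

/*
 * Standalone model of the BTM gating added by this patch; illustrative
 * only, not the actual SMMUv3 driver code.
 */
#include <stdbool.h>
#include <stdio.h>

#define IDR0_HYP	(1 << 9)
#define IDR0_BTM	(1 << 5)

/* Mirrors the new probe check: if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP)) */
static bool want_btm(unsigned int idr0, bool cpu_uses_vhe)
{
	return (idr0 & IDR0_BTM) && (!cpu_uses_vhe || (idr0 & IDR0_HYP));
}

int main(void)
{
	/* VHE CPU, SMMU without HYP: its TLB entries would miss the
	 * EL2-E2H broadcast invalidations, so BTM stays off (prints 0). */
	printf("%d\n", want_btm(IDR0_BTM, true));
	/* VHE CPU, SMMU with HYP/E2H: BTM can be enabled (prints 1). */
	printf("%d\n", want_btm(IDR0_BTM | IDR0_HYP, true));
	return 0;
}

When the feature bit does get set, the reset path below correspondingly leaves CR2_PTM clear, so the SMMU participates in broadcast TLB maintenance.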
@@ -3556,11 +3556,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
 
 	/* CR2 (random crap) */
-	reg = CR2_PTM | CR2_RECINVSID;
+	reg = CR2_RECINVSID;
 	if (smmu->features & ARM_SMMU_FEAT_E2H)
 		reg |= CR2_E2H;
+	if (!(smmu->features & ARM_SMMU_FEAT_BTM))
+		reg |= CR2_PTM;
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
 
 	/* Stream table */
@@ -3669,6 +3672,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
 	u32 reg;
 	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+	bool vhe = cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN);
 
 	/* IDR0 */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
@@ -3721,10 +3725,19 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_HYP) {
 		smmu->features |= ARM_SMMU_FEAT_HYP;
-		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+		if (vhe)
 			smmu->features |= ARM_SMMU_FEAT_E2H;
 	}
 
+	/*
+	 * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
+	 * will create TLB entries for NH-EL1 world and will miss the
+	 * broadcasted TLB invalidations that target EL2-E2H world. Don't enable
+	 * BTM in that case.
+	 */
+	if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP))
+		smmu->features |= ARM_SMMU_FEAT_BTM;
+
 	/*
 	 * The coherency feature as set by FW is used in preference to the ID
 	 * register, but warn on mismatch.
...
@@ -33,6 +33,7 @@
 #define IDR0_ASID16		(1 << 12)
 #define IDR0_ATS		(1 << 10)
 #define IDR0_HYP		(1 << 9)
+#define IDR0_BTM		(1 << 5)
 #define IDR0_COHACC		(1 << 4)
 #define IDR0_TTF		GENMASK(3, 2)
 #define IDR0_TTF_AARCH64	2
...