Commit e6fab544 authored by Ard Biesheuvel, committed by Christoffer Dall

ARM/arm64: KVM: test properly for a PTE's uncachedness

The open-coded tests for checking whether a PTE maps a page as
uncached use a flawed '(pte_val(xxx) & CONST) != CONST' pattern,
which is not guaranteed to work since the type of a mapping is
not a set of mutually exclusive bits.

For HYP mappings, the type is an index into the MAIR table (i.e., the
index itself does not contain any information whatsoever about the
type of the mapping), and for stage-2 mappings it is a bit field where
normal memory and device types are defined as follows:

    #define MT_S2_NORMAL            0xf
    #define MT_S2_DEVICE_nGnRE      0x1

I.e., masking *and* comparing with the latter matches on the former,
and we have been getting lucky merely because the S2 device mappings
also have the PTE_UXN bit set, or we would misidentify memory mappings
as device mappings.
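
To make the consequence concrete with the values above: the device
attribute's bits are a subset of the normal-memory attribute's bits,
so the mask-and-compare matches normal memory as well. A standalone
illustration (plain C, not kernel code; only the two attribute values
quoted above are used):

    #include <stdio.h>

    #define MT_S2_NORMAL            0xf
    #define MT_S2_DEVICE_nGnRE      0x1

    int main(void)
    {
        unsigned long attr = MT_S2_NORMAL;  /* a normal-memory mapping */

        /* The flawed shape: 0xf & 0x1 == 0x1, so normal memory
         * "matches" the device attribute and the D-cache flush
         * would wrongly be skipped. */
        if ((attr & MT_S2_DEVICE_nGnRE) == MT_S2_DEVICE_nGnRE)
            printf("misidentified as a device mapping\n");

        return 0;
    }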

Since the unmap_range() code path (which contains one instance of the
flawed test) is used both for HYP mappings and stage-2 mappings, and
considering the difference between the two, it is non-trivial to fix
this by rewriting the tests in place, as it would involve passing
down the type of mapping through all the functions.

However, since HYP mappings and stage-2 mappings both deal with host
physical addresses, we can simply check whether the mapping is backed
by memory that is managed by the host kernel, and only perform the
D-cache maintenance if this is the case.
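
A minimal sketch of that check, matching the helper the patch below
introduces (pfn_valid() reports whether the host kernel has a struct
page for the PFN; how the PFN is obtained at each call site is
simplified to a plain 'pfn' variable here):

    static bool kvm_is_device_pfn(unsigned long pfn)
    {
        /* no struct page -> not host-managed memory -> device mapping */
        return !pfn_valid(pfn);
    }

    /* ... at unmap/flush time, only do D-cache maintenance for
     * mappings that are backed by host memory: */
    if (!kvm_is_device_pfn(pfn))
        kvm_flush_dcache_pte(old_pte);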

Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Parent 1ec21837
@@ -98,6 +98,11 @@ static void kvm_flush_dcache_pud(pud_t pud)
     __kvm_flush_dcache_pud(pud);
 }
 
+static bool kvm_is_device_pfn(unsigned long pfn)
+{
+    return !pfn_valid(pfn);
+}
+
 /**
  * stage2_dissolve_pmd() - clear and flush huge PMD entry
  * @kvm: pointer to kvm structure.
@@ -213,7 +218,7 @@ static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
             kvm_tlb_flush_vmid_ipa(kvm, addr);
 
             /* No need to invalidate the cache for device mappings */
-            if ((pte_val(old_pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
+            if (!kvm_is_device_pfn(__phys_to_pfn(addr)))
                 kvm_flush_dcache_pte(old_pte);
 
             put_page(virt_to_page(pte));
@@ -305,8 +310,7 @@ static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
 
     pte = pte_offset_kernel(pmd, addr);
     do {
-        if (!pte_none(*pte) &&
-            (pte_val(*pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
+        if (!pte_none(*pte) && !kvm_is_device_pfn(__phys_to_pfn(addr)))
             kvm_flush_dcache_pte(*pte);
     } while (pte++, addr += PAGE_SIZE, addr != end);
 }
@@ -1037,11 +1041,6 @@ static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
     return kvm_vcpu_dabt_iswrite(vcpu);
 }
 
-static bool kvm_is_device_pfn(unsigned long pfn)
-{
-    return !pfn_valid(pfn);
-}
-
 /**
  * stage2_wp_ptes - write protect PMD range
  * @pmd: pointer to pmd entry