Commit 005e975a authored by Marc Zyngier

arm64: KVM: Dynamically compute the HYP VA mask

As we're moving towards a much more dynamic way to compute our
HYP VA, let's express the mask in a slightly different way.

Instead of comparing the idmap position to the "low" VA mask,
we directly compute the mask by taking the idmap's
bit (VA_BITS - 1) into account.

No functional change.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Parent 11d76407
@@ -21,24 +21,19 @@
 #include <asm/insn.h>
 #include <asm/kvm_mmu.h>
 
-#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
-#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
-
 static u64 va_mask;
 
 static void compute_layout(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
-	unsigned long mask = HYP_PAGE_OFFSET_HIGH_MASK;
+	u64 hyp_va_msb;
 
-	/*
-	 * Activate the lower HYP offset only if the idmap doesn't
-	 * clash with it,
-	 */
-	if (idmap_addr > HYP_PAGE_OFFSET_LOW_MASK)
-		mask = HYP_PAGE_OFFSET_LOW_MASK;
+	/* Where is my RAM region? */
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
+	hyp_va_msb ^= BIT(VA_BITS - 1);
 
-	va_mask = mask;
+	va_mask  = GENMASK_ULL(VA_BITS - 2, 0);
+	va_mask |= hyp_va_msb;
 }
 
 static u32 compute_instruction(int n, u32 rd, u32 rn)
...
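
To make the new computation concrete, here is a minimal user-space sketch (not kernel code) that replays the two steps above. The VA_BITS value, the idmap address and the stand-in BIT()/GENMASK_ULL() helpers are assumptions chosen purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* Stand-in values and helpers -- assumptions for this sketch only. */
#define VA_BITS			48			/* assumed kernel VA space size */
#define BIT(n)			(1ULL << (n))
#define GENMASK_ULL(h, l)	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	/* Hypothetical physical address of __hyp_idmap_text_start. */
	uint64_t idmap_addr = 0x40210000ULL;
	uint64_t hyp_va_msb, va_mask;

	/* Take bit (VA_BITS - 1) of the idmap address... */
	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
	/* ...and invert it, so the HYP VA range lands in the half of
	 * the VA space that the idmap does not occupy. */
	hyp_va_msb ^= BIT(VA_BITS - 1);

	/* Keep the low (VA_BITS - 1) bits and force the inverted MSB. */
	va_mask  = GENMASK_ULL(VA_BITS - 2, 0);
	va_mask |= hyp_va_msb;

	printf("idmap_addr = 0x%016llx\n", (unsigned long long)idmap_addr);
	printf("va_mask    = 0x%016llx\n", (unsigned long long)va_mask);
	return 0;
}

With the assumed address above (bit 47 clear), the resulting mask has bit 47 set, so masking a kernel VA keeps the low 47 bits and places the HYP VA in the opposite half of the VA space from the idmap; this is the same effect the old HIGH/LOW mask selection achieved, now computed directly instead of via a comparison.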