Commit b5741bb3 authored by Mihai Caraman, committed by Alexander Graf

KVM: PPC: e500mc: Revert "add load inst fixup"

Commit 1d628af7 "add load inst fixup" made an attempt to handle
failures generated by reading the guest's current instruction. The fixup
code that was added only worked by chance, hiding the real issue.
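The reverted fixup relied on the kernel's exception-table mechanism: each instruction address that may fault is paired with a fixup address, and on a fault the handler resumes execution at the fixup instead of the faulting instruction. A minimal C model of that lookup (a toy sketch, not the kernel's actual API; the names `ex_entry` and `search_ex_table` are illustrative):

```c
#include <stddef.h>

/* Toy model of the kernel's __ex_table mechanism: pairs of
 * (potentially-faulting instruction address, fixup address). */
struct ex_entry {
    unsigned long insn;   /* address of the instruction that may fault */
    unsigned long fixup;  /* address to resume at if it does */
};

/* On a fault, the handler searches the table for the faulting IP;
 * a hit means execution resumes at the fixup, a miss means the
 * fault is fatal for the kernel. */
static unsigned long search_ex_table(const struct ex_entry *table,
                                     size_t n, unsigned long fault_ip)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].insn == fault_ip)
            return table[i].fixup;
    return 0; /* no fixup registered for this address */
}
```

In the reverted code, the `PPC_LONG 1b,3b` entry played exactly this role: label `1:` (the lwepx) was the potentially-faulting address and label `3:` the fixup.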

The Load External PID (lwepx) instruction, used by KVM to read guest
instructions, executes in a substituted guest translation context
(EPLC[EGS] = 1). As a consequence, lwepx's TLB error and data storage
interrupts must be handled by KVM, even though these interrupts
are generated from host context (MSR[GS] = 0), where lwepx is executed.

Currently, KVM hooks only interrupts generated from guest context
(MSR[GS] = 1), doing minimal checks on the fast path to avoid degrading
host performance. As a result, the host kernel handles lwepx
faults by searching for the faulting guest data address (loaded into DEAR)
in its own Logical Partition ID (LPID) 0 context. If a host translation
is found, execution returns to the lwepx instruction instead of the
fixup, leaving the host in an infinite loop.
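The failure mode can be sketched as a few lines of C (a toy model, not kernel code: the function name and retry budget are illustrative, and the value of KVM_INST_FETCH_FAILED is assumed here; only the control flow mirrors the bug described above):

```c
#include <stdbool.h>

/* Sentinel KVM stores when the guest instruction cannot be read
 * (value assumed for this model). */
#define KVM_INST_FETCH_FAILED 0xFFFFFFFFu

/* Toy model of the bug: lwepx faults because the guest mapping is not
 * visible in host context. If the host fault handler finds a translation
 * for the guest address in its own LPID 0 context, it returns to lwepx,
 * which faults again -- forever. Only when no host translation exists
 * does execution reach the fixup, which reports failure. */
static unsigned int fetch_last_inst(bool host_translation_found,
                                    int retry_budget)
{
    while (retry_budget-- > 0) {
        /* lwepx always faults in this scenario */
        if (!host_translation_found)
            return KVM_INST_FETCH_FAILED; /* fixup path: give up */
        /* host "resolved" the fault in LPID 0: retry lwepx */
    }
    return 0; /* model's stand-in for the real infinite loop */
}
```

With a host translation present the function only terminates because of the artificial retry budget; the real CPU has no such budget, hence the hang.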

Revert the commit "add load inst fixup". The lwepx issue will be
addressed in a subsequent patch without the need for fixup code.
Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Parent 34f754b9
@@ -29,7 +29,6 @@
 #include <asm/asm-compat.h>
 #include <asm/asm-offsets.h>
 #include <asm/bitsperlong.h>
-#include <asm/thread_info.h>
 
 #ifdef CONFIG_64BIT
 #include <asm/exception-64e.h>
@@ -164,32 +163,9 @@
 	PPC_STL	r30, VCPU_GPR(R30)(r4)
 	PPC_STL	r31, VCPU_GPR(R31)(r4)
 	mtspr	SPRN_EPLC, r8
-
-	/* disable preemption, so we are sure we hit the fixup handler */
-	CURRENT_THREAD_INFO(r8, r1)
-	li	r7, 1
-	stw	r7, TI_PREEMPT(r8)
-
 	isync
-
-	/*
-	 * In case the read goes wrong, we catch it and write an invalid value
-	 * in LAST_INST instead.
-	 */
-1:	lwepx	r9, 0, r5
-2:
-.section .fixup, "ax"
-3:	li	r9, KVM_INST_FETCH_FAILED
-	b	2b
-.previous
-.section __ex_table,"a"
-	PPC_LONG_ALIGN
-	PPC_LONG 1b,3b
-.previous
-
+	lwepx	r9, 0, r5
 	mtspr	SPRN_EPLC, r3
-	li	r7, 0
-	stw	r7, TI_PREEMPT(r8)
 	stw	r9, VCPU_LAST_INST(r4)
 	.endif