- 26 July 2022, 1 commit

Submitted by Kalesh Singh

In non-protected nVHE mode (non-pKVM) the host can directly access hypervisor memory, and unwinding of the hypervisor stacktrace is done from EL1 to save on memory for shared buffers. To unwind the hypervisor stack from EL1, the host needs to know the starting point for the unwind and information that will allow it to translate hypervisor stack addresses to the corresponding kernel addresses. This patch sets up this bookkeeping. It is made use of later in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-10-kaleshsingh@google.com

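As a rough sketch, the bookkeeping this commit describes amounts to sharing an unwind starting point plus enough information to translate hypervisor stack addresses. The struct and field names below are assumptions for illustration, not the kernel's actual definitions:

    /* Hypothetical sketch of the EL1-visible unwind state; names assumed. */
    struct hyp_stacktrace_info {
            unsigned long stack_base;           /* kernel VA of the hyp stack */
            unsigned long overflow_stack_base;  /* kernel VA of the overflow stack */
            unsigned long fp;                   /* frame pointer at hyp panic */
            unsigned long pc;                   /* program counter at hyp panic */
    };
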
- 06 July 2022, 1 commit

Submitted by Quentin Perret

Although harmless, the return statement in kvm_unexpected_el2_exception is rather confusing as the function itself has a void return type. The C standard is also pretty clear that "A return statement with an expression shall not appear in a function whose return type is void". Given that this return statement does not seem to add any actual value, let's not pointlessly violate the standard. Build-tested with GCC 10 and CLANG 13 for good measure; the disassembled code is identical with or without the return statement.

Fixes: e9ee186b ("KVM: arm64: Add kvm_extable for vaxorcism code")
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220705142310.3847918-1-qperret@google.com

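For illustration, a minimal example of the construct being removed; this is not the actual kernel function, just the same class of code:

    static void helper(void) { }

    static void example(void)
    {
            return helper();        /* "return" with an expression in a void
                                     * function; plain "helper();" conforms */
    }
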
- 09 June 2022, 2 commits

Submitted by Marc Zyngier

The KVM FP code uses a pair of flags to denote three states:

- FP_ENABLED set: the guest owns the FP state
- FP_HOST set: the host owns the FP state
- FP_ENABLED and FP_HOST clear: nobody owns the FP state at all

and both flags set is an illegal state, which nothing ever checks for... As it turns out, this isn't really a good match for flags, and we'd be better off if this was a simpler tristate, each state having a name that actually reflects the state:

- FP_STATE_FREE
- FP_STATE_HOST_OWNED
- FP_STATE_GUEST_OWNED

Kill the two flags, and move over to an enum encoding these three states. This results in less confusing code, and less risk of ending up in the uncharted territory of a 4th state if we forget to clear one of the two flags.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Reiji Watanabe <reijiw@google.com>

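A sketch of the resulting enum, using the state names given above (the exact kernel declaration may differ):

    /* Tristate FP ownership; state names taken from the commit text. */
    enum fp_state {
            FP_STATE_FREE,          /* nobody owns the FP state */
            FP_STATE_HOST_OWNED,    /* the host owns the FP state */
            FP_STATE_GUEST_OWNED,   /* the guest owns the FP state */
    };
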
Submitted by Marc Zyngier

The vcpu KVM_ARM64_FP_FOREIGN_FPSTATE flag tracks the thread's own TIF_FOREIGN_FPSTATE so that, just before running the vcpu, we can evaluate whether the FP regs contain something that is owned by the vcpu or not by updating the rest of the FP flags. We do this in the hypervisor code in order to make sure we're in a context where we are not interruptible. But we already have a hook in the run loop to generate this flag. We may as well update the FP flags directly and save the pointless flag tracking. Whilst we're at it, rename update_fp_enabled() to guest_owns_fp_regs() to indicate what the leftover of this helper actually does.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>

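With the tristate from the previous commit, the renamed helper plausibly reduces to a one-line check; the vcpu->arch.fp_state field name here is an assumption:

    /* Hedged sketch of the renamed helper; field name assumed. */
    static inline bool guest_owns_fp_regs(struct kvm_vcpu *vcpu)
    {
            return vcpu->arch.fp_state == FP_STATE_GUEST_OWNED;
    }
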
- 16 May 2022, 1 commit

Submitted by Marc Zyngier

Moving kvm_pmu_events into the vcpu (and referring to it) broke the somewhat unusual case where the kernel has no support for a PMU at all. In order to solve this, move things around a bit so that we can easily avoid referring to the pmu structure outside of PMU-aware code. As a bonus, pmu.c isn't compiled in when HW_PERF_EVENTS isn't selected.

Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/202205161814.KQHpOzsJ-lkp@intel.com

- 15 May 2022, 1 commit

Submitted by Fuad Tabba

Instead of the host accessing hyp data directly, pass the pmu events of the current cpu to hyp via the vcpu. This adds 64 bits (in two fields) to the vcpu that need to be synced before every vcpu run in nvhe and protected modes. However, it isolates the hypervisor from the host, which allows us to use pmu in protected mode in a subsequent patch. No visible side effects in behavior are intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220510095710.148178-4-tabba@google.com

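The commit states only the shape (64 bits in two fields); a sketch with assumed field names:

    /* Per-cpu PMU event masks synced into the vcpu before each run. */
    struct kvm_pmu_events {
            u32 events_host;        /* events counted while in the host */
            u32 events_guest;       /* events counted while in the guest */
    };
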
- 10 May 2022, 1 commit

Submitted by Oliver Upton

The pVM-specific FP/SIMD trap handler just calls straight into the generic trap handler. Avoid the indirection and just call the hyp handler directly. Note that the BUILD_BUG_ON() pattern is repeated in pvm_init_traps_aa64pfr0(), which is likely a better home for it. No functional change intended.

Signed-off-by: Oliver Upton <oupton@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220509162559.2387784-2-oupton@google.com

- 06 May 2022, 1 commit

Submitted by Randy Dunlap

Don't use begin-kernel-doc notation (/**) for comments that are not in kernel-doc format. This prevents these kernel-doc warnings:

arch/arm64/kvm/hyp/nvhe/switch.c:126: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
 * Disable host events, enable guest events
arch/arm64/kvm/hyp/nvhe/switch.c:146: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
 * Disable guest events, enable host events
arch/arm64/kvm/hyp/nvhe/switch.c:164: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
 * Handler for protected VM restricted exceptions.
arch/arm64/kvm/hyp/nvhe/switch.c:176: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
 * Handler for protected VM MSR, MRS or System instruction execution in AArch64.
arch/arm64/kvm/hyp/nvhe/switch.c:196: warning: Function parameter or member 'vcpu' not described in 'kvm_handle_pvm_fpsimd'
arch/arm64/kvm/hyp/nvhe/switch.c:196: warning: Function parameter or member 'exit_code' not described in 'kvm_handle_pvm_fpsimd'
arch/arm64/kvm/hyp/nvhe/switch.c:196: warning: expecting prototype for Handler for protected floating(). Prototype was for kvm_handle_pvm_fpsimd() instead

Fixes: 09cf57eb ("KVM: arm64: Split hyp/switch.c to VHE/nVHE")
Fixes: 1423afcb ("KVM: arm64: Trap access to pVM restricted features")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Cc: James Morse <james.morse@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220430050123.2844-1-rdunlap@infradead.org

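The fix itself is mechanical: drop the second asterisk so the comment is no longer parsed as kernel-doc. For example:

    /**                     <-- before: claims to be kernel-doc
     * Disable host events, enable guest events
     */

    /*                      <-- after: a plain block comment
     * Disable host events, enable guest events
     */
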
- 29 April 2022, 1 commit

Submitted by Kalesh Singh

The hypervisor stacks (for both nVHE Hyp mode and nVHE protected mode) are aligned such that any valid stack address has PAGE_SHIFT bit as 1. This allows us to conveniently check for overflow in the exception entry without corrupting any GPRs. We won't recover from a stack overflow, so panic the hypervisor.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220420214317.3303360-6-kaleshsingh@google.com

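Conceptually the check reduces to testing that single bit; the real check lives in the exception-entry assembly, so this C version is only an illustration:

    /* A valid hyp SP has bit PAGE_SHIFT set; a cleared bit means the
     * stack has overflowed into the guard region. Illustrative only. */
    static inline bool hyp_stack_overflowed(unsigned long sp)
    {
            return !(sp & BIT(PAGE_SHIFT));
    }
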
- 23 April 2022, 1 commit

Submitted by Mark Brown

SME defines two new traps which need to be enabled for guests to ensure that they can't use SME: one for the main SME operations, which mirrors the traps for SVE, and another for access to TPIDR2 in SCTLR_EL2. For VHE, manage SMEN along with ZEN in activate_traps() and the FP state management callbacks, along with SCTLR_EL2.EnTPIDR2. There is no existing dynamic management of SCTLR_EL2. For nVHE, manage TSM in activate_traps() along with the fine grained traps for TPIDR2 and SMPRI. There is no existing dynamic management of fine grained traps.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220419112247.711548-26-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 24 November 2021, 1 commit

Submitted by Marc Zyngier

Protected KVM is trying to turn AArch32 exceptions into an illegal exception entry. Unfortunately, it does that in a way that is a bit abrupt, and too early for PSTATE to be available. Instead, move it to the fixup code, which is a more reasonable place for it. This will also be useful for the NV code.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

- 23 November 2021, 1 commit

Submitted by Marc Zyngier

Now that we can track an equivalent of TIF_FOREIGN_FPSTATE, drop the mapping of current's thread_info at EL2.

Signed-off-by: Marc Zyngier <maz@kernel.org>

- 18 October 2021, 5 commits

Submitted by Marc Zyngier

Checking for pvm handling first means that we cannot handle ptrauth traps or apply any of the workarounds (GICv3 or TX2 #219). Flip the order around.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211013120346.2926621-12-maz@kernel.org

Submitted by Marc Zyngier

Passing a VM pointer around is odd, and results in extra work on VHE. Follow the rest of the design that uses the vcpu instead, and let the nVHE code look into the struct kvm as required.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211013120346.2926621-11-maz@kernel.org

Submitted by Marc Zyngier

Place kvm_handle_pvm_restricted() next to its little friends such as kvm_handle_pvm_sysreg(). This allows inject_undef64() to be made static.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211013120346.2926621-10-maz@kernel.org

Submitted by Marc Zyngier

kvm_fixed_config.h is pkvm specific, and would be better placed near its users. At the same time, include/nvhe/sys_regs.h is now almost empty. Merge the two into arch/arm64/kvm/hyp/include/nvhe/fixed_config.h.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211013120346.2926621-9-maz@kernel.org

Submitted by Marc Zyngier

Don't drop a potential SError when a guest gets caught red-handed running AArch32 code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211013120346.2926621-8-maz@kernel.org

- 11 October 2021, 5 commits

Submitted by Fuad Tabba

Protected KVM does not support protected AArch32 guests. However, it is possible for the guest to force run AArch32, potentially causing problems. Add an extra check so that if the hypervisor catches the guest doing that, it can prevent the guest from running again by resetting vcpu->arch.target and returning ARM_EXCEPTION_IL. If this were to happen, the VMM can try to fix it by re-initializing the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible for protected VMs.

Adapted from commit 22f55384 ("KVM: arm64: Handle Asymmetric AArch32 systems")

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211010145636.1950948-12-tabba@google.com

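A sketch of such a check on the exit path; vcpu_mode_is_32bit() and ARM_EXCEPTION_IL are real KVM identifiers, but the exact placement and surrounding structure here are assumed:

    /* Sketch: stop a protected guest caught running AArch32. */
    if (unlikely(kvm_vm_is_protected(vcpu->kvm) &&
                 vcpu_mode_is_32bit(vcpu))) {
            vcpu->arch.target = -1;         /* refuse further runs */
            *exit_code = ARM_EXCEPTION_IL;
    }
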
Submitted by Fuad Tabba

Trap accesses to restricted features for VMs running in protected mode. Accesses to feature registers are emulated, and only supported features are exposed to protected VMs. Accesses to restricted registers as well as restricted instructions are trapped, and an undefined exception is injected into the protected guests, i.e., with EC = 0x0 (unknown reason). This EC is the one used, according to the Arm Architecture Reference Manual, for unallocated or undefined system registers or instructions. Only affects the functionality of protected VMs. Otherwise, should not affect non-protected VMs when KVM is running in protected mode.

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211010145636.1950948-11-tabba@google.com

Submitted by Fuad Tabba

Add system register handlers for protected VMs. These cover Sys64 registers (including feature id registers), and debug. No functional change intended as these are not yet hooked in to the guest exit handlers introduced earlier. So when trapping is triggered, the exit handlers let the host handle it, as before.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211010145636.1950948-8-tabba@google.com

Submitted by Fuad Tabba

We need struct kvm to check for protected VMs to be able to pick the right handlers for them in subsequent patches.

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211010145636.1950948-5-tabba@google.com

Submitted by Marc Zyngier

Simplify the early exception handling by slicing the gigantic decoding tree into a more manageable set of functions, similar to what we have in handle_exit.c. This will also make the structure reusable for pKVM's own early exit handling.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20211010145636.1950948-4-tabba@google.com

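The resulting shape is a table of per-exception-class handlers, roughly as below; the typedef and handler names are assumptions patterned on the commits above:

    /* Sketch of a per-EC dispatch table; names and entries assumed. */
    typedef bool (*exit_handler_fn)(struct kvm_vcpu *vcpu, u64 *exit_code);

    static const exit_handler_fn hyp_exit_handlers[] = {
            [0 ... ESR_ELx_EC_MAX]  = NULL,
            [ESR_ELx_EC_SYS64]      = kvm_hyp_handle_sysreg,
            [ESR_ELx_EC_FP_ASIMD]   = kvm_hyp_handle_fpsimd,
            [ESR_ELx_EC_IABT_LOW]   = kvm_hyp_handle_iabt_low,
            [ESR_ELx_EC_DABT_LOW]   = kvm_hyp_handle_dabt_low,
    };
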
- 20 August 2021, 5 commits

Submitted by Fuad Tabba

Track the baseline guest value for cptr_el2 in struct kvm_vcpu_arch, similar to the other registers that control traps. Use this value when setting cptr_el2 for the guest. Currently this value is unchanged (CPTR_EL2_DEFAULT), but future patches will set trapping bits based on features supported for the guest. No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210817081134.2918285-9-tabba@google.com

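A minimal sketch of the pattern, assuming the new field is vcpu->arch.cptr_el2:

    /* Load the per-vcpu baseline instead of a global constant. */
    static void activate_traps_sketch(struct kvm_vcpu *vcpu)
    {
            u64 val = vcpu->arch.cptr_el2;  /* CPTR_EL2_DEFAULT for now */

            write_sysreg(val, cptr_el2);
    }
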
Submitted by Fuad Tabba

__init_el2_debug configures mdcr_el2 at initialization based on, among other things, available hardware support. Trap deactivation doesn't check that, so keep the initial value. No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210817081134.2918285-8-tabba@google.com

Submitted by Fuad Tabba

On deactivating traps, restore the value of mdcr_el2 from the newly created and preserved host value in the vcpu context, rather than directly reading the hardware register. Up until and including this patch the two values are the same, i.e., the hardware register and the vcpu one. A future patch will be changing the value of mdcr_el2 on activating traps, and this ensures that its value will be restored. No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210817081134.2918285-7-tabba@google.com

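As a sketch, assuming the preserved host copy is named vcpu->arch.mdcr_el2_host:

    /* Restore the saved host value rather than re-reading hardware. */
    static void deactivate_traps_sketch(struct kvm_vcpu *vcpu)
    {
            write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
    }
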
Submitted by Marc Zyngier

The protected mode relies on a separate helper to load the S2 context. Move over to the __load_guest_stage2() helper instead, and rename it to __load_stage2() to present a unified interface.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210806113109.2475-5-will@kernel.org

Submitted by Marc Zyngier

It is a bit awkward to use kern_hyp_va() in __load_guest_stage2(), especially as the helper is shared between VHE and nVHE. Instead, move the use of kern_hyp_va() into the nVHE code, and pass a pointer to the kvm->arch structure instead. Although this may look a bit awkward, it allows for some further simplification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210806113109.2475-4-will@kernel.org

- 15 May 2021, 1 commit

Submitted by Marc Zyngier

In order to make it easy to call __adjust_pc() from the EL1 code (in the case of nVHE), rename it to __kvm_adjust_pc() and move it out of line. No expected functional change.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org # 5.11

- 07 April 2021, 1 commit

Submitted by Suzuki K Poulose

For an nVHE host, the EL2 must allow the EL1&0 translation regime for TraceBuffer (MDCR_EL2.E2TB == 0b11). This must be saved/restored over a trip to the guest. Also, before entering the guest, we must flush any trace data if the TRBE was enabled. And we must prohibit the generation of trace while we are in EL1 by clearing the TRFCR_EL1. For VHE, the EL2 must prevent the EL1 access to the Trace Buffer. The MDCR_EL2 bit definitions for TRBE are available here:
https://developer.arm.com/documentation/ddi0601/2020-12/AArch64-Registers/

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210405164307.1720226-8-suzuki.poulose@arm.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>

- 19 March 2021, 2 commits

Submitted by Quentin Perret

When KVM runs in protected nVHE mode, make use of a stage 2 page-table to give the hypervisor some control over the host memory accesses. The host stage 2 is created lazily using large block mappings if possible, and will default to page mappings in the absence of a better solution. From this point on, memory accesses from the host to protected memory regions (e.g. not 'owned' by the host) are fatal and lead to hyp_panic().

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-36-qperret@google.com

Submitted by Quentin Perret

Move the registers relevant to host stage 2 enablement to kvm_nvhe_init_params to prepare the ground for enabling it in later patches.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-22-qperret@google.com

- 18 March 2021, 2 commits

Submitted by Marc Zyngier

Implement the SVE save/restore for nVHE, following a similar logic to that of the VHE implementation:

- the SVE state is switched on trap from EL1 to EL2
- no further changes to ZCR_EL2 occur as long as the guest isn't preempted or doesn't exit to userspace
- ZCR_EL2 is reset to its default value on the first SVE access from the host EL1, and ZCR_EL1 restored to the default guest value in vcpu_put()

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>

Submitted by Marc Zyngier

ZCR_EL2 controls the upper bound for ZCR_EL1, and is set to a potentially lower limit when the guest uses SVE. In order to restore the SVE state on the EL1 host, we must first reset ZCR_EL2 to its original value. To make it as lazy as possible on the EL1 host side, set the SVE trapping in place when exiting from the guest. On the first EL1 access to SVE, ZCR_EL2 will be restored to its full glory.

Suggested-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>

- 06 March 2021, 2 commits

Submitted by Andrew Scull

When panicking from the nVHE hyp and restoring the host context, x29 is expected to hold a pointer to the host context. This wasn't being done, so fix it to make sure a valid pointer to the host context is being used. Rather than passing a boolean indicating whether or not the host context should be restored, instead pass the pointer to the host context. NULL is passed to indicate that no context should be restored.

Fixes: a2e102e2 ("KVM: arm64: nVHE: Handle hyp panics")
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Scull <ascull@google.com>
[maz: partial rewrite to fit 5.12-rc1]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210219122406.1337626-1-ascull@google.com
Message-Id: <20210305185254.3730990-4-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Submitted by Suzuki K Poulose

The nVHE KVM hyp drains and disables the SPE buffer before entering the guest, as the EL1&0 translation regime is going to be loaded with that of the guest. But this operation is performed way too late, because:

- The owning translation regime of the SPE buffer is transferred to EL2 (MDCR_EL2_E2PB == 0).
- The guest Stage1 is loaded.

Thus the flush could use the host EL1 virtual address, but use the EL2 translations instead of host EL1, for writing out any cached data. Fix this by moving the SPE buffer handling early enough. The restore path is doing the right thing.

Fixes: 014c4c77 ("KVM: arm64: Improve debug register save/restore flow")
Cc: stable@vger.kernel.org
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210302120345.3102874-1-suzuki.poulose@arm.com
Message-Id: <20210305185254.3730990-2-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 04 December 2020, 1 commit

Submitted by David Brazdil

While protected KVM is installed, start trapping all host SMCs. For now these are simply forwarded to EL3, except PSCI CPU_ON/CPU_SUSPEND/SYSTEM_SUSPEND, which are intercepted and the hypervisor installed on newly booted cores. Create a new constant HCR_HOST_NVHE_PROTECTED_FLAGS with the new set of HCR flags to use while the nVHE vector is installed when the kernel was booted with the protected flag enabled. Switch back to the default HCR flags when switching back to the stub vector.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-26-dbrazdil@google.com

- 10 November 2020, 1 commit

Submitted by Marc Zyngier

In an effort to remove the vcpu PC manipulations from EL1 on nVHE systems, move kvm_skip_instr() to be HYP-specific. EL1's intent to increment PC post emulation is now signalled via a flag in the vcpu structure.

Signed-off-by: Marc Zyngier <maz@kernel.org>

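A sketch of the signalling, assuming the flag is named KVM_ARM64_INCREMENT_PC; hyp consumes the flag and adjusts the PC on the next guest entry:

    /* EL1 requests the PC increment instead of writing the PC itself. */
    static inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
    {
            vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
    }
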
- 29 October 2020, 1 commit

Submitted by Rob Herring

On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device load and a store exclusive or PAR_EL1 read can cause a deadlock. The workaround requires a DMB SY before and after a PAR_EL1 register read. In addition, it's possible an interrupt (doing a device read) or KVM guest exit could be taken between the DMB and PAR read, so we also need a DMB before returning from interrupt and before returning to a guest. A deadlock is still possible with the workaround, as KVM guests must also have the workaround; IOW, a malicious guest can deadlock an affected system. This workaround also depends on a firmware counterpart to enable the h/w to insert DMB SY after load and store exclusive instructions. See the errata document SDEN-1152370 v10 [1] for more information.

[1] https://static.docs.arm.com/101992/0010/Arm_Cortex_A77_MP074_Software_Developer_Errata_Notice_v10.pdf

Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/20201028182839.166037-2-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>

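The core of the workaround, bracketing the PAR_EL1 read with DMB SY, can be sketched as below; the kernel applies it conditionally through its alternatives framework rather than as an unconditional helper:

    /* DMB SY before and after the PAR_EL1 read, per the erratum text. */
    static inline u64 read_par_el1_safe(void)
    {
            u64 par;

            asm volatile("dmb sy\n\t"
                         "mrs %0, par_el1\n\t"
                         "dmb sy"
                         : "=r" (par) :: "memory");
            return par;
    }
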
- 30 September 2020, 2 commits

Submitted by David Brazdil

Host CPU context is stored in a global per-cpu variable `kvm_host_data`. In preparation for introducing an independent per-CPU region for nVHE hyp, create two separate instances of `kvm_host_data`, one for VHE and one for nVHE.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200922204910.7265-9-dbrazdil@google.com

Submitted by David Brazdil

Hyp keeps track of which cores require the SSBD callback by accessing a kernel-proper global variable. Create an nVHE symbol of the same name and copy the value from kernel proper to nVHE as KVM is being enabled on a core. Done in preparation for separating percpu memory owned by kernel proper and nVHE.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200922204910.7265-8-dbrazdil@google.com