- 28 Jul 2022, 1 commit

Committed by Marc Zyngier

The unwinding code doesn't really belong to the exit handling code. Instead, move it to a file (conveniently named stacktrace.c to confuse the reviewer), and move all the stacktrace-related stuff there. It will be joined by more code very soon.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-3-maz@kernel.org

- 26 Jul 2022, 2 commits

Committed by Kalesh Singh

Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound addresses from the shared stacktrace buffer. The nVHE hyp backtrace is dumped on hyp_panic(), before panicking the host.

[ 111.623091] kvm [367]: nVHE call trace:
[ 111.623215] kvm [367]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
[ 111.623448] kvm [367]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 111.623642] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
. . .
[ 111.640366] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640467] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640574] kvm [367]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[ 111.640676] kvm [367]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[ 111.640778] kvm [367]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
[ 111.640880] kvm [367]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
[ 111.640996] kvm [367]: ---[ end nVHE call trace ]---

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-18-kaleshsingh@google.com

Committed by Kalesh Singh

In non-protected nVHE mode, unwinds and dumps the hypervisor backtrace from EL1. This is possible because the host can directly access the hypervisor stack pages in non-protected mode. The nVHE backtrace is dumped on hyp_panic(), before panicking the host.

[ 101.498183] kvm [377]: nVHE call trace:
[ 101.498363] kvm [377]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
[ 101.499045] kvm [377]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 101.499498] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
. . .
[ 101.524929] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 101.525062] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 101.525195] kvm [377]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[ 101.525333] kvm [377]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[ 101.525468] kvm [377]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
[ 101.525602] kvm [377]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
[ 101.525745] kvm [377]: ---[ end nVHE call trace ]---

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-12-kaleshsingh@google.com

- 17 Jul 2022, 1 commit

Committed by Kalesh Singh

With CONFIG_RANDOMIZE_BASE=y, vmlinux addresses will resolve incorrectly from kallsyms. Fix this by adding the KASLR offset before printing the symbols.

Fixes: 6ccf9cb5 ("KVM: arm64: Symbolize the nVHE HYP addresses")
Reported-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220715235824.2549012-1-kaleshsingh@google.com

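For illustration, a minimal sketch of the fix described above, assuming a hypothetical print_nvhe_sym() helper; kaslr_offset() and the %pB kallsyms format specifier are existing kernel facilities:

    #include <linux/printk.h>
    #include <asm/memory.h>        /* kaslr_offset() */

    /* With CONFIG_RANDOMIZE_BASE=y, kallsyms knows the relocated
     * addresses, so add the KASLR offset to the link-time vmlinux
     * address before asking kallsyms to symbolize it. */
    static void print_nvhe_sym(unsigned long vmlinux_addr)
    {
            unsigned long addr = vmlinux_addr + kaslr_offset();

            printk("[<%016lx>] %pB\n", addr, (void *)addr);
    }
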
- 29 Jun 2022, 1 commit

Committed by Marc Zyngier

The host kernel uses the WFIT flag to remember that a vcpu has used this instruction and to wake it up as required. Move it to the state set, as nothing in the hypervisor uses this information.

Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

- 03 May 2022, 1 commit

Committed by Oliver Upton

In order to enable HCR_EL2.TID3 for AArch32 guests, KVM needs to handle traps where ESR_EL2.EC=0x8, which corresponds to an attempted VMRS access from an ID group register. Specifically, the MVFR{0-2} registers are accessed this way from AArch32. Conveniently, these registers are architecturally mapped to MVFR{0-2}_EL1 in AArch64. Furthermore, KVM already handles reads of these aliases in AArch64. Plumb VMRS read traps through to the general AArch64 system register handler.

Signed-off-by: Oliver Upton <oupton@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220503060205.2823727-5-oupton@google.com

- 30 Apr 2022, 2 commits

Committed by Alexandru Elisei

When userspace is debugging a VM, the kvm_debug_exit_arch part of the kvm_run struct contains arm64-specific debug information: the ESR_EL2 value, encoded in the field "hsr", and the address of the instruction that caused the exception, encoded in the field "far". Linux has moved to treating ESR_EL2 as a 64-bit register, but unfortunately kvm_debug_exit_arch.hsr cannot be changed, because that would change the memory layout of the struct on big endian machines:

Current layout:                       | Layout with "hsr" extended to 64 bits:
                                      |
offset 0: ESR_EL2[31:0] (hsr)         | offset 0: ESR_EL2[63:32] (hsr[63:32])
offset 4: padding                     | offset 4: ESR_EL2[31:0]  (hsr[31:0])
offset 8: FAR_EL2[63:0] (far)         | offset 8: FAR_EL2[63:0]  (far)

which breaks existing code. The padding is inserted by the compiler because the "far" field must be aligned to 8 bytes (each field must be naturally aligned - aapcs64 [1], page 18), and the struct itself must be aligned to 8 bytes (the struct must be aligned to the maximum alignment of its fields - aapcs64, page 18), which means that "hsr" must be aligned to 8 bytes, as it is the first field in the struct.

To avoid changing the struct size and layout for the existing fields, add a new field, "hsr_high", which replaces the existing padding. "hsr_high" will be used to hold the ESR_EL2[63:32] bits of the register. The memory layout, on both big and little endian machines, becomes:

offset 0: ESR_EL2[31:0] (hsr)
offset 4: ESR_EL2[63:32] (hsr_high)
offset 8: FAR_EL2[63:0] (far)

The padding that the compiler inserts for the current struct layout is uninitialized. To prevent an updated userspace running on an old kernel from mistaking the padding for a valid "hsr_high" value, add a new flag, KVM_DEBUG_ARCH_HSR_HIGH_VALID, to kvm_run->flags to let userspace know that "hsr_high" holds a valid ESR_EL2[63:32] value.

[1] https://github.com/ARM-software/abi-aa/releases/download/2021Q3/aapcs64.pdf

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220425114444.368693-6-alexandru.elisei@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

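A sketch of the resulting layout and the userspace-side check, using the field and flag names quoted above (illustrative, not the verbatim uapi header; the flag value is an assumption here):

    #include <linux/types.h>

    #define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0)  /* assumed value */

    struct kvm_debug_exit_arch {
            __u32 hsr;       /* ESR_EL2[31:0] */
            __u32 hsr_high;  /* ESR_EL2[63:32], occupies the old padding */
            __u64 far;       /* FAR_EL2, naturally aligned to 8 bytes */
    };

    /* Userspace must only trust hsr_high when the kernel set the flag;
     * otherwise it may be looking at an old kernel's uninitialized
     * padding. */
    static __u64 debug_exit_esr(__u64 run_flags,
                                const struct kvm_debug_exit_arch *arch)
    {
            __u64 esr = arch->hsr;

            if (run_flags & KVM_DEBUG_ARCH_HSR_HIGH_VALID)
                    esr |= (__u64)arch->hsr_high << 32;

            return esr;
    }
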
Committed by Alexandru Elisei

ESR_EL2 was defined as a 32-bit register in the initial release of the ARM Architecture Manual for Armv8-A, and was later extended to 64 bits, with bits [63:32] RES0. ARMv8.7 introduced FEAT_LS64, which makes use of bits [36:32]. KVM treats ESR_EL1 as a 64-bit register when saving and restoring the guest context, but ESR_EL2 is handled as a 32-bit register. Start treating ESR_EL2 as a 64-bit register to allow KVM to make use of the most significant 32 bits in the future. The type chosen to represent ESR_EL2 is u64, as that is consistent with the notation KVM overwhelmingly uses today and with how the rest of the registers are declared.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220425114444.368693-5-alexandru.elisei@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

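As a hedged illustration of what the wider type enables, the FEAT_LS64 bits become directly extractable; the mask below is derived from the bit range quoted above rather than from a named kernel macro:

    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    /* FEAT_LS64 uses ESR_EL2[36:32]; with a u64 ESR these bits survive. */
    static inline u64 esr_ls64_iss2(u64 esr)
    {
            return FIELD_GET(GENMASK_ULL(36, 32), esr);
    }
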
- 29 Apr 2022, 1 commit

Committed by Kalesh Singh

Reintroduce the __kvm_nvhe_ symbols in kallsyms, ignoring the local symbols in this namespace. The local symbols are not informative and can cause aliasing issues when symbolizing the addresses. With the necessary symbols now in kallsyms, we can symbolize nVHE addresses using the %p print format specifier:

[ 98.916444][ T426] kvm [426]: nVHE hyp panic at: [<ffffffc0096156fc>] __kvm_nvhe_overflow_stack+0x8/0x34!

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220420214317.3303360-7-kaleshsingh@google.com

- 20 Apr 2022, 2 commits

Committed by Marc Zyngier

For WFxT instructions used with very small delays, it is not unlikely that the deadline has already expired by the time we reach the WFx handling code. Check for this condition as soon as possible, and return to the guest immediately if we can.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220419182755.601427-7-maz@kernel.org

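A minimal sketch of the early check, assuming the deadline has already been fetched from the guest's WFxT operand (the accessor is not shown):

    #include <clocksource/arm_arch_timer.h>

    /* If the deadline already passed, there is nothing to wait for:
     * return to the guest instead of setting up the background timer. */
    static bool wfxt_deadline_expired(u64 deadline)
    {
            /* signed difference copes with counter wrap */
            return (s64)(deadline - arch_timer_read_counter()) <= 0;
    }
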
Committed by Marc Zyngier

When trapping a blocking WFIT instruction, take it into account when computing the deadline of the background timer. The state is tracked with a new vcpu flag, and is gated by a new CPU capability, which isn't currently enabled.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220419182755.601427-6-maz@kernel.org

- 18 Mar 2022, 1 commit

Committed by Julia Lawall

Fix various spelling mistakes in comments, detected with the help of Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220318103729.157574-24-Julia.Lawall@inria.fr

- 03 Feb 2022, 1 commit

Committed by James Morse

Prior to commit defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP"), when an SError is synchronised due to another exception, KVM handles the SError first. If the guest survives, the instruction that triggered the original exception is re-executed to handle the first exception. HVC is treated as a special case, as the instruction wouldn't normally be re-executed, since it's not a trap.

Commit defe21f4 didn't preserve the behaviour of the 'return 1' that skips the rest of handle_exit(). Since that commit, KVM will try to handle the SError and the original exception at the same time. When the exception was an HVC, fixup_guest_exit() has already rolled back ELR_EL2, meaning that if the guest has virtual SError masked, it will execute and handle the HVC twice. Restore the original behaviour.

Fixes: defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP")
Cc: stable@vger.kernel.org
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127122052.1584324-4-james.morse@arm.com

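A simplified sketch of the restored control flow, with names following handle_exit.c (the real function handles more cases):

    static int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
    {
            if (ARM_SERROR_PENDING(exception_index))
                    /*
                     * The SError (and, for a trapped HVC, the ELR_EL2
                     * rollback) was already dealt with in HYP; returning
                     * here skips the original exception so the HVC is
                     * not handled a second time.
                     */
                    return 1;

            return handle_trap_exceptions(vcpu);  /* normal exit path */
    }
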
- 08 Dec 2021, 2 commits

Committed by Sean Christopherson

Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting the actual "block" sequences into a separate helper (to be named kvm_vcpu_block()). x86 will use the standalone block-only path to handle non-halt cases where the vCPU is not runnable. Rename block_ns to halt_ns to match the new function name. No functional change intended.

Reviewed-by: David Matlack <dmatlack@google.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20211009021236.4122790-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Sean Christopherson

Move the put and reload of the vGIC out of the block/unblock callbacks and into a dedicated WFI helper. Functionally, this is nearly a nop, as the block hook is called at the very beginning of kvm_vcpu_block(), and the only code in kvm_vcpu_block() after the unblock hook is to update the halt-polling controls, i.e. can only affect the next WFI.

Back when the arch (un)blocking hooks were added by commits 3217f7c2 ("KVM: Add kvm_arch_vcpu_{un}blocking callbacks") and d35268da ("arm/arm64: KVM: arch_timer: Only schedule soft timer on vcpu_block"), the hooks were invoked only when KVM was about to "block", i.e. schedule out the vCPU. The use case at the time was to schedule a timer in the host based on the earliest timer in the guest, in order to wake the blocking vCPU when the emulated guest timer fired. Commit accb99bc ("KVM: arm/arm64: Simplify bg_timer programming") reworked the timer logic to be even more precise by waiting until the vCPU was actually scheduled out, and so moved the timer logic from the (un)blocking hooks to vcpu_load/put.

In the meantime, the hooks gained usage for enabling vGIC v4 doorbells in commit df9ba959 ("KVM: arm/arm64: GICv4: Use the doorbell interrupt as an unblocking source"), and added related logic for the VMCR in commit 5eeaf10e ("KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block"). Finally, commit 07ab0f8d ("KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence") hoisted the (un)blocking hooks so that they wrapped KVM's halt-polling logic in addition to the core "block" logic. In other words, the original need for arch hooks to take action _only_ in the block path is long since gone.

Cc: Oliver Upton <oupton@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20211009021236.4122790-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

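A hedged sketch of the dedicated WFI helper this describes: the vGIC put/load now brackets the halt directly instead of hiding in the (un)blocking hooks (v4 doorbell details elided):

    void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
    {
            /* Sync vGIC state out of the hardware before sleeping. */
            preempt_disable();
            kvm_vgic_put(vcpu);
            preempt_enable();

            kvm_vcpu_halt(vcpu);      /* halt-polling, then block */

            preempt_disable();
            kvm_vgic_load(vcpu);
            preempt_enable();
    }
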
- 07 Dec 2021, 1 commit

Committed by Mark Brown

The comment on the SVE trap handler in handle_exit.c says that it is a placeholder until we support SVE in guests. We now do that for both the VHE and nVHE cases, so we really shouldn't get here in any sort of standard case. Update the comment to be less immediately incorrect; the handling of such a situation is correct.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211025163232.3502052-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 26 Aug 2021, 1 commit

Committed by Raghavendra Rao Ananta

The switch-case for handling guest debug exceptions covers all the debug exception classes but, functionally, does nothing with them other than ESR_ELx_EC_WATCHPT_LOW. Moreover, even though handled well, the 'default' case could be confusing from a security point of view, suggesting that a guest's actions can potentially flood the syslog; in reality, that code is unreachable. Hence, trim down the function to handle only the ESR_ELx_EC_WATCHPT_LOW case with a simple 'if' check.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210823223940.1878930-1-rananta@google.com

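The trimmed handler, roughly as described (sketch; field and macro names follow the arm64 KVM sources):

    static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *run = vcpu->run;
            u64 esr = kvm_vcpu_get_esr(vcpu);

            run->exit_reason = KVM_EXIT_DEBUG;
            run->debug.arch.hsr = lower_32_bits(esr);

            /* Only watchpoints need the faulting data address reported. */
            if (ESR_ELx_EC(esr) == ESR_ELx_EC_WATCHPT_LOW)
                    run->debug.arch.far = vcpu->arch.fault.far_el2;

            return 0;
    }
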
- 18 Aug 2021, 1 commit

Committed by Will Deacon

When protected mode is enabled, the host is unable to access most parts of the EL2 hypervisor image, including 'hyp_physvirt_offset' and the contents of the hypervisor's '.rodata.str' section. Unfortunately, nvhe_hyp_panic_handler() tries to read from both of these locations when handling a BUG() triggered at EL2: the former for converting the ELR to a physical address, and the latter for displaying the name of the source file where the BUG() occurred.

Hack the EL2 panic asm to pass both physical and virtual ELR values to the host, and utilise the newly introduced CONFIG_NVHE_EL2_DEBUG so that we disable stage-2 protection for the host before returning to the EL1 panic handler. If the debug option is not enabled, display the address instead of the source file:line information.

Cc: Andrew Scull <ascull@google.com>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210813130336.8139-1-will@kernel.org

- 01 Apr 2021, 1 commit

Committed by Andrew Scull

To aid with debugging, add details of the source of a panic from nVHE hyp. This is done by having nVHE hyp exit to nvhe_hyp_panic_handler() rather than directly to panic(). The handler then adds the extra details for debugging before panicking the kernel.

If the panic was due to a BUG(), look up the metadata to log the file and line, if available; otherwise log an address that can be looked up in vmlinux. The hyp offset is also logged to allow other hyp VAs to be converted, similar to how the kernel offset is logged during a panic.

__hyp_panic_string is now inlined, since it no longer needs to be referenced as a symbol, and the message is free to diverge between VHE and nVHE.

The following is an example of the logs generated by a BUG in nVHE hyp.

[ 46.754840] kvm [307]: nVHE hyp BUG at: arch/arm64/kvm/hyp/nvhe/switch.c:242!
[ 46.755357] kvm [307]: Hyp Offset: 0xfffea6c58e1e0000
[ 46.755824] Kernel panic - not syncing: HYP panic:
[ 46.755824] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
[ 46.755824] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
[ 46.755824] VCPU:0000d93a880d0000
[ 46.756960] CPU: 3 PID: 307 Comm: kvm-vcpu-0 Not tainted 5.12.0-rc3-00005-gc572b99cf65b-dirty #133
[ 46.757459] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 46.758366] Call trace:
[ 46.758601] dump_backtrace+0x0/0x1b0
[ 46.758856] show_stack+0x18/0x70
[ 46.759057] dump_stack+0xd0/0x12c
[ 46.759236] panic+0x16c/0x334
[ 46.759426] arm64_kernel_unmapped_at_el0+0x0/0x30
[ 46.759661] kvm_arch_vcpu_ioctl_run+0x134/0x750
[ 46.759936] kvm_vcpu_ioctl+0x2f0/0x970
[ 46.760156] __arm64_sys_ioctl+0xa8/0xec
[ 46.760379] el0_svc_common.constprop.0+0x60/0x120
[ 46.760627] do_el0_svc+0x24/0x90
[ 46.760766] el0_svc+0x2c/0x54
[ 46.760915] el0_sync_handler+0x1a4/0x1b0
[ 46.761146] el0_sync+0x170/0x180
[ 46.761889] SMP: stopping secondary CPUs
[ 46.762786] Kernel Offset: 0x3e1cd2820000 from 0xffff800010000000
[ 46.763142] PHYS_OFFSET: 0xffffa9f680000000
[ 46.763359] CPU features: 0x00240022,61806008
[ 46.763651] Memory Limit: none
[ 46.813867] ---[ end Kernel panic - not syncing: HYP panic:
[ 46.813867] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
[ 46.813867] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
[ 46.813867] VCPU:0000d93a880d0000 ]---

Signed-off-by: Andrew Scull <ascull@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210318143311.839894-6-ascull@google.com

- 10 Nov 2020, 5 commits

Committed by Marc Zyngier

kvm_coproc.h used to serve as a compatibility layer for the files shared between the 32 and 64 bit ports. Another one bites the dust...

Signed-off-by: Marc Zyngier <maz@kernel.org>

Committed by Marc Zyngier

Instead of handling the "PC rollback on SError during HVC" at EL1 (which requires disclosing PC to a potentially untrusted kernel), let's move this fixup to ... fixup_guest_exit(), which is where we do all fixups. Isn't that neat?

Signed-off-by: Marc Zyngier <maz@kernel.org>

Committed by Marc Zyngier

In an effort to remove the vcpu PC manipulations from EL1 on nVHE systems, move kvm_skip_instr() to be HYP-specific. EL1's intent to increment PC post emulation is now signalled via a flag in the vcpu structure.

Signed-off-by: Marc Zyngier <maz@kernel.org>

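A sketch of the EL1 side of the scheme: rather than touching PC itself, the host records its intent and lets HYP perform the increment on the next entry (flag name as introduced by this series):

    /* EL1: ask HYP to skip the emulated instruction on next guest entry. */
    static inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
    {
            vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
    }
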
Committed by Marc Zyngier

There is no need to feed the result of kvm_vcpu_trap_il_is32bit() to kvm_skip_instr(), as only AArch32 has a variable-length ISA, and this helper can equally be called from kvm_skip_instr32(), reducing the complexity at all the call sites.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

Committed by Marc Zyngier

On SMC trap, the preferred return address is set to that of the SMC instruction itself. It is thus wrong to try and roll it back when an SError occurs while trapping on SMC. It is still necessary on HVC though, as HVC doesn't cause a trap, and sets ELR to return *after* the HVC.

It also became apparent that there is no 16-bit encoding for an AArch32 HVC instruction, meaning that the displacement is always 4 bytes, no matter what the ISA is. Take this opportunity to simplify it.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

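A sketch of the resulting EL2 fixup, under a hypothetical function name (the register accessors are the ones used in the hyp code):

    static void fixup_serror_elr(struct kvm_vcpu *vcpu)
    {
            u8 ec = kvm_vcpu_trap_get_class(vcpu);

            /*
             * Only HVC needs the rollback: its ELR points *after* the
             * instruction, and AArch32 HVC has no 16-bit encoding, so
             * the displacement is always 4 bytes. SMC traps leave ELR
             * at the SMC itself, so there is nothing to undo.
             */
            if (ec == ESR_ELx_EC_HVC32 || ec == ESR_ELx_EC_HVC64)
                    write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
    }
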
- 24 Aug 2020, 1 commit

Committed by Gustavo A. R. Silva

Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>

- 10 Jul 2020, 1 commit

Committed by Tianjia Zhang

In the current kvm version, 'kvm_run' is already included in the 'kvm_vcpu' structure. For historical reasons, many kvm-related function parameters retain both the 'kvm_run' and the 'kvm_vcpu' parameters at the same time. This patch does a unified cleanup of these remaining redundant parameters.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200623131418.31473-3-tianjia.zhang@linux.alibaba.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

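An illustration of the cleanup pattern (sketch):

    /* Before: int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
     *                                     struct kvm_run *run);
     * After: the run struct is recovered from the vcpu itself. */
    int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *run = vcpu->run;  /* the only lookup needed */

            return run->exit_reason;  /* placeholder body for the sketch */
    }
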
- 06 Jul 2020, 1 commit

Committed by Gavin Shan

kvm/arm32 hasn't been supported since commit 541ad015 ("arm: Remove 32bit KVM host support"), so the name HSR hasn't been meaningful since then. This renames HSR to ESR accordingly. It shouldn't cause any functional changes:

* Rename kvm_vcpu_get_hsr() to kvm_vcpu_get_esr() to make the function names self-explanatory.
* Rename variables from @hsr to @esr to make them self-explanatory.

Note that renaming in uapi and tracepoints would cause ABI changes, which we should avoid. Specifically, there are 4 related source files in this regard:

* arch/arm64/include/uapi/asm/kvm.h (struct kvm_debug_exit_arch::hsr)
* arch/arm64/kvm/handle_exit.c (struct kvm_debug_exit_arch::hsr)
* arch/arm64/kvm/trace_arm.h (tracepoints)
* arch/arm64/kvm/trace_handle_exit.h (tracepoints)

Signed-off-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Andrew Scull <ascull@google.com>
Link: https://lore.kernel.org/r/20200630015705.103366-1-gshan@redhat.com

- 09 Jun 2020, 2 commits

Committed by Marc Zyngier

The current way we deal with PtrAuth is a bit heavy handed:

- We forcefully save the host's keys on each vcpu_load()
- Handling the PtrAuth trap forces us to go all the way back to the exit handling code just to set the HCR bits

Overall, this is pretty cumbersome. A better approach would be to handle it the same way we deal with the FPSIMD registers:

- On vcpu_load(), disable PtrAuth for the guest
- On first use, save the host's keys and enable PtrAuth in the guest

Crucially, this can happen as a fixup, which is done very early on exit. We can then reenter the guest immediately without leaving the hypervisor role.

Another benefit is that it simplifies the rest of the host handling: exiting all the way to the host means that the only possible outcome for this trap is to inject an UNDEF.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

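A hedged sketch of that early fixup path; vcpu_has_ptrauth() and the HCR_API/HCR_APK trap bits are real, while the key-saving helper is hypothetical:

    /* Runs very early on guest exit, while still in hyp context. */
    static bool hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
    {
            if (!vcpu_has_ptrauth(vcpu))
                    return false;  /* not fixable: exit and inject UNDEF */

            save_host_ptrauth_keys(vcpu);  /* hypothetical helper */
            vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);  /* stop trapping */

            return true;  /* fixed up: reenter the guest immediately */
    }
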
Committed by Marc Zyngier

When using the PtrAuth feature in a guest, we need to save the host's keys before allowing the guest to program them. For that, we dump them in a per-CPU data structure (the so-called host context). But both call sites that do this are in preemptible context, which may end up in disaster should the vcpu thread get preempted before reentering the guest. Instead, save the keys eagerly on each vcpu_load(). This has an increased overhead, but is at least safe.

Cc: stable@vger.kernel.org
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

- 16 May 2020, 1 commit

Committed by Marc Zyngier

Now that the 32bit KVM/arm host is a distant memory, let's move the whole of the KVM/arm64 code into the arm64 tree. As they said in the song: Welcome Home (Sanitarium).

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200513104034.74741-1-maz@kernel.org

- 22 Oct 2019, 1 commit

Committed by Christoffer Dall

We currently intertwine the KVM PSCI implementation with the general dispatch of hypercall handling, which makes perfect sense because PSCI is the only category of hypercalls we support. However, as we are about to support additional hypercalls, factor this functionality out into a separate hypercall handler file.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[steven.price@arm.com: rebased]
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

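The factored-out dispatcher might look like this sketch, with PSCI as the default case and future hypercalls slotting in as siblings (smccc_get_function() follows the naming used by this series):

    int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
    {
            u32 func_id = smccc_get_function(vcpu);

            switch (func_id) {
            /* future non-PSCI hypercalls hook in here */
            default:
                    return kvm_psci_call(vcpu);
            }
    }
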
- 19 Jun 2019, 1 commit

Committed by Thomas Gleixner

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner, the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 24 Apr 2019, 1 commit

Committed by Mark Rutland

When pointer authentication is supported, a guest may wish to use it. This patch adds the necessary KVM infrastructure for this to work, with a semi-lazy context switch of the pointer auth state. The pointer authentication feature is only enabled when VHE is built into the kernel and present in the CPU implementation, so only VHE code paths are modified.

When we schedule a vcpu, we disable guest usage of pointer authentication instructions and accesses to the keys. While these are disabled, we avoid context-switching the keys. When we trap the guest trying to use pointer authentication functionality, we change to eagerly context-switching the keys and enable the feature. The next time the vcpu is scheduled out/in, we start again. However, the host key save is optimized and implemented inside the ptrauth instruction/register access trap.

Pointer authentication consists of address authentication and generic authentication, and CPUs in a system might have varied support for either. Where support for either feature is not uniform, it is hidden from guests via ID register emulation, as a result of the cpufeature framework in the host. Unfortunately, address authentication and generic authentication cannot be trapped separately, as the architecture provides a single EL2 trap covering both. If we wish to expose one without the other, we cannot prevent a (badly-written) guest from intermittently using a feature which is not uniformly supported (when scheduled on a physical CPU which supports the relevant feature). Hence, this patch expects both types of authentication to be present in a CPU.

The switch of keys is done from the guest enter/exit assembly as preparation for the upcoming in-kernel pointer authentication support. Hence, these key-switching routines are not implemented in C code, as they may cause pointer authentication key signing errors in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
[maz: various fixups]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

- 18 Dec 2018, 1 commit

Committed by Mark Rutland

When we emulate a guest instruction, we don't advance the hardware singlestep state machine, and thus the guest will receive a software step exception after a next instruction which is not emulated by the host.

We bodge around this in an ad-hoc fashion. Sometimes we explicitly check whether userspace requested a single step, and fake a debug exception from within the kernel. Other times, we advance the HW singlestep state and rely on the HW to generate the exception for us. Thus, the observed step behaviour differs for host and guest.

Let's make this simpler and consistent by always advancing the HW singlestep state machine when we skip an instruction. Thus we can rely on the hardware to generate the singlestep exception for us, and never need to explicitly check for an active-pending step, nor do we need to fake a debug exception from the guest.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

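A sketch of the consistent skip; clearing the SPSR SS bit is what advances the hardware single-step state machine:

    static void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
    {
            if (vcpu_mode_is_32bit(vcpu))
                    kvm_skip_instr32(vcpu, is_wide_instr);
            else
                    *vcpu_pc(vcpu) += 4;

            /* advance the HW singlestep state machine */
            *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
    }
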
- 14 Dec 2018, 1 commit

Committed by Mark Rutland

In subsequent patches we're going to expose ptrauth to the host kernel and userspace, but things are a bit trickier for guest kernels. For the time being, let's hide ptrauth from KVM guests.

Regardless of how well-behaved the guest kernel is, guest userspace could attempt to use ptrauth instructions, triggering a trap to EL2 and resulting in noise from kvm_handle_unknown_ec(). So let's write up a handler for the PAC trap, which silently injects an UNDEF into the guest, as if the feature were really missing.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Will Deacon <will.deacon@arm.com>

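The handler this adds is tiny; a sketch matching the description:

    /* The guest used a ptrauth instruction we deliberately don't expose:
     * behave as if the feature were really missing. */
    static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            kvm_inject_undefined(vcpu);
            return 1;
    }
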
- 19 Oct 2018, 1 commit

Committed by Christoffer Dall

This commit adds a paranoid check when entering the guest, to make sure we don't attempt to run guest code in an equally or more privileged mode than the hypervisor. We also catch other accidental programming of the SPSR_EL2 which would result in an illegal exception return, and report this safely back to the user.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

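A simplified sketch of the mode check at the heart of it (the PSR mode constants are the real arm64 definitions; the helper name is illustrative):

    /* Never eret into a mode at least as privileged as the hypervisor. */
    static bool guest_mode_is_illegal(unsigned long spsr_el2)
    {
            switch (spsr_el2 & PSR_MODE_MASK) {
            case PSR_MODE_EL2t:
            case PSR_MODE_EL2h:
                    return true;
            default:
                    return false;
            }
    }
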
- 07 Feb 2018, 4 commits

Committed by Marc Zyngier

The new SMC Calling Convention (v1.1) allows for a reduced overhead when calling into the firmware, and provides a new feature discovery mechanism. Make it visible to KVM guests.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Marc Zyngier

As we're about to update the PSCI support, and because I'm lazy, let's move the PSCI include file to include/kvm so that both ARM architectures can find it.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Marc Zyngier

When handling an SMC trap, the "preferred return address" is set to that of the SMC, and not the next PC (which is a departure from the behaviour of an SMC that isn't trapped). Increment PC in the handler, as the guest is otherwise forever stuck...

Cc: stable@vger.kernel.org
Fixes: acfb3b88 ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

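The fixed handler, essentially as described (sketch):

    static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            /* SMCCC: unimplemented calls return -1 in x0. */
            vcpu_set_reg(vcpu, 0, ~0UL);

            /* ELR was left pointing at the SMC itself; move past it. */
            kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));

            return 1;
    }
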
Committed by Marc Zyngier

KVM doesn't follow the SMCCC when it comes to unimplemented calls, and injects an UNDEF instead of returning an error. Since firmware calls are now used for security mitigation, they are becoming more common, and the UNDEF is counterproductive. Instead, let's follow the SMCCC, which states that -1 must be returned to the caller when getting an unknown function number.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

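A sketch of the SMCCC-conformant HVC path; at this point PSCI is the only hypercall backend:

    static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            int ret = kvm_psci_call(vcpu);

            if (ret < 0) {
                    /* Unknown function ID: SMCCC says return -1, not UNDEF. */
                    vcpu_set_reg(vcpu, 0, ~0UL);
                    return 1;
            }

            return ret;
    }
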