- 09 Feb 2022, 1 commit
-
-
By Marc Zyngier
Userspace can specify which events a guest is allowed to use with the KVM_ARM_VCPU_PMU_V3_FILTER attribute. The list of allowed events can be identified by a guest from reading the PMCEID{0,1}_EL0 registers.

Changing the PMU event filter after a VCPU has run can cause reads of the registers performed before the filter is changed to return different values than reads performed with the new event filter in place. The architecture defines the two registers as read-only, and this behaviour contradicts that.

Keep track of when the first VCPU has run and deny changes to the PMU event filter to prevent this from happening.

Signed-off-by: Marc Zyngier <maz@kernel.org>
[ Alexandru E: Added commit message, updated ioctl documentation ]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127161759.53553-2-alexandru.elisei@arm.com
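To make the resulting behaviour concrete, here is a minimal sketch in C; the vcpu_has_run_once flag and the handler shape are illustrative assumptions, not the exact kernel code:

    #include <errno.h>
    #include <stdbool.h>

    /* Hypothetical stand-ins for the relevant per-VM state. */
    struct vm_state {
        bool vcpu_has_run_once;         /* latched the first time any VCPU runs */
        unsigned long pmu_event_filter; /* stand-in for the filter bitmap */
    };

    /*
     * Reject filter updates once a VCPU has run, so that PMCEID{0,1}_EL0
     * reads stay consistent with their read-only definition.
     */
    static int set_pmu_event_filter(struct vm_state *vm, unsigned long filter)
    {
        if (vm->vcpu_has_run_once)
            return -EBUSY;

        vm->pmu_event_filter = filter;
        return 0;
    }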
-
- 03 Feb 2022, 3 commits
-
-
By James Morse
Cortex-A510's erratum #2077057 causes SPSR_EL2 to be corrupted when single-stepping authenticated ERET instructions. A single step is expected, but a pointer authentication trap is taken instead. The erratum causes SPSR_EL1 to be copied to SPSR_EL2, which could allow EL1 to cause a return to EL2 with a guest controlled ELR_EL2.

Because the conditions require an ERET into active-not-pending state, this is only a problem for EL2 when EL2 is stepping EL1. In this case the previous SPSR_EL2 value is preserved in struct kvm_vcpu, and can be restored.

Cc: stable@vger.kernel.org # 53960faf: arm64: Add Cortex-A510 CPU part definition
Cc: stable@vger.kernel.org
Signed-off-by: James Morse <james.morse@arm.com>
[maz: fixup cpucaps ordering]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127122052.1584324-5-james.morse@arm.com
-
By James Morse
Prior to commit defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP"), when an SError is synchronised due to another exception, KVM handles the SError first. If the guest survives, the instruction that triggered the original exception is re-executed to handle the first exception. HVC is treated as a special case as the instruction wouldn't normally be re-executed, as it's not a trap.

Commit defe21f4 didn't preserve the behaviour of the 'return 1' that skips the rest of handle_exit(). Since commit defe21f4, KVM will try to handle the SError and the original exception at the same time. When the exception was an HVC, fixup_guest_exit() has already rolled back ELR_EL2, meaning if the guest has virtual SError masked, it will execute and handle the HVC twice.

Restore the original behaviour.

Fixes: defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP")
Cc: stable@vger.kernel.org
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127122052.1584324-4-james.morse@arm.com
-
By James Morse
When any exception other than an IRQ occurs, the CPU updates the ESR_EL2 register with the exception syndrome. An SError may also become pending, and will be synchronised by KVM. KVM notes the exception type, and whether an SError was synchronised in exit_code.

When an exception other than an IRQ occurs, fixup_guest_exit() updates vcpu->arch.fault.esr_el2 from the hardware register. When an SError was synchronised, the vcpu esr value is used to determine if the exception was due to an HVC. If so, ELR_EL2 is moved back one instruction. This is so that KVM can process the SError first, and re-execute the HVC if the guest survives the SError.

But if an IRQ synchronises an SError, the vcpu's esr value is stale. If the previous non-IRQ exception was an HVC, KVM will corrupt ELR_EL2, causing an unrelated guest instruction to be executed twice.

Check ARM_EXCEPTION_CODE() before messing with ELR_EL2; IRQs don't update this register, so they don't need the check.

Fixes: defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP")
Cc: stable@vger.kernel.org
Reported-by: Steven Price <steven.price@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220127122052.1584324-3-james.morse@arm.com
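The shape of the fix, sketched below with hypothetical helper names (only ARM_EXCEPTION_CODE() and ARM_EXCEPTION_IRQ come from the commit text, and the encoding here is illustrative): the cached ESR is consulted, and the PC rolled back, only when the exit class says the cache is valid.

    #include <stdbool.h>
    #include <stdint.h>

    #define ARM_EXCEPTION_IRQ       0
    #define ARM_EXCEPTION_CODE(x)   ((x) & 0xff)    /* illustrative encoding */

    struct vcpu;
    bool vcpu_trap_is_hvc(const struct vcpu *vcpu); /* reads the cached ESR */
    bool vserror_pending(const struct vcpu *vcpu);  /* hypothetical */
    uint64_t read_elr_el2(void);
    void write_elr_el2(uint64_t val);

    /*
     * IRQ exits do not update the cached ESR, so it may describe an older,
     * unrelated exception: check the exit class before rolling back the PC.
     */
    static void maybe_rollback_hvc(const struct vcpu *vcpu, uint64_t exit_code)
    {
        if (ARM_EXCEPTION_CODE(exit_code) != ARM_EXCEPTION_IRQ &&
            vcpu_trap_is_hvc(vcpu) && vserror_pending(vcpu))
            write_elr_el2(read_elr_el2() - 4);  /* re-execute the HVC */
    }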
-
- 02 Feb 2022, 1 commit
-
-
By Mark Rutland
In kvm_arch_vcpu_ioctl_run() we enter an RCU extended quiescent state (EQS) by calling guest_enter_irqoff(), and unmask IRQs prior to exiting the EQS by calling guest_exit(). As the IRQ entry code will not wake RCU in this case, we may run the core IRQ code and IRQ handler without RCU watching, leading to various potential problems.

Additionally, we do not inform lockdep or tracing that interrupts will be enabled during guest execution, which can lead to misleading traces and warnings that interrupts have been enabled for overly-long periods.

This patch fixes these issues by using the new timing and context entry/exit helpers to ensure that interrupts are handled during guest vtime but with RCU watching, with a sequence:

    guest_timing_enter_irqoff();
    guest_state_enter_irqoff();
    < run the vcpu >
    guest_state_exit_irqoff();
    < take any pending IRQs >
    guest_timing_exit_irqoff();

Since instrumentation may make use of RCU, we must also ensure that no instrumented code is run during the EQS. I've split out the critical section into a new kvm_arm_enter_exit_vcpu() helper which is marked noinstr.

Fixes: 1b3d546d ("arm/arm64: KVM: Properly account for guest CPU time")
Reported-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Message-Id: <20220201132926.3301912-3-mark.rutland@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
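A hedged sketch of the resulting structure, assuming kernel context; the guest_timing_*/guest_state_* helpers are the generic ones this series introduces, while the helper body and run call are simplified:

    /* The RCU EQS is confined to the noinstr helper: nothing instrumented
     * runs inside it. */
    static int noinstr kvm_arm_enter_exit_vcpu(struct kvm_vcpu *vcpu)
    {
        int ret;

        guest_state_enter_irqoff();                     /* enter the EQS */
        ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);   /* run the guest */
        guest_state_exit_irqoff();      /* leave the EQS; RCU watching again */

        return ret;
    }

    /* Caller side, entered with IRQs disabled (sketch): */
    static int run_vcpu_sketch(struct kvm_vcpu *vcpu)
    {
        int ret;

        guest_timing_enter_irqoff();    /* start guest vtime accounting */
        ret = kvm_arm_enter_exit_vcpu(vcpu);
        local_irq_enable();             /* pending IRQs handled as guest time */
        local_irq_disable();
        guest_timing_exit_irqoff();     /* stop guest vtime accounting */
        local_irq_enable();

        return ret;
    }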
-
- 24 Jan 2022, 1 commit
-
-
By Marc Zyngier
Injecting an exception into a guest with non-VHE is risky business. Instead of writing in the shadow register for the switch code to restore it, we override the CPU register instead. Which gets overridden a few instructions later by said restore code.

The result is that although the guest correctly gets the exception, it will return to the original context in some random state, depending on what was there in the first place... Boo.

Fix the issue by writing to the shadow register. The original code is absolutely fine on VHE, as the state is already loaded, and writing to the shadow register in that case would actually be a bug.

Fixes: bb666c47 ("KVM: arm64: Inject AArch64 exceptions from HYP")
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Link: https://lore.kernel.org/r/20220121184207.423426-1-maz@kernel.org
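A simplified illustration of the distinction (the field and function names are stand-ins for KVM's vcpu sysreg accessors): on nVHE the guest's state lives in memory until the switch code restores it, so the injection must target that in-memory shadow copy; on VHE the state is already live in the CPU registers.

    #include <stdint.h>

    struct vcpu_ctx {
        uint64_t shadow_spsr_el1;   /* in-memory copy, loaded on guest entry */
    };

    void write_spsr_el1_sysreg(uint64_t val);   /* stand-in for write_sysreg() */

    /* nVHE: update the shadow copy, which the switch code will restore... */
    static void inject_set_spsr_nvhe(struct vcpu_ctx *v, uint64_t spsr)
    {
        v->shadow_spsr_el1 = spsr;
    }

    /* ...VHE: the register is live, so write it directly. Writing the
     * shadow copy here would be the opposite bug. */
    static void inject_set_spsr_vhe(uint64_t spsr)
    {
        write_spsr_el1_sysreg(spsr);
    }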
-
- 22 Jan 2022, 1 commit
-
-
By Marc Zyngier
Contrary to what df652bcf ("KVM: arm64: vgic-v3: Work around GICv3 locally generated SErrors") was asserting, there is at least one other system out there (Cavium ThunderX2) implementing SEIS, and not in an obviously broken way.

So instead of imposing the M1 workaround on an innocent bystander, let's limit it to the two known broken Apple implementations.

Fixes: df652bcf ("KVM: arm64: vgic-v3: Work around GICv3 locally generated SErrors")
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20220122103912.795026-1-maz@kernel.org
-
- 14 Jan 2022, 1 commit
-
-
By Marc Zyngier
CMOs issued from EL2 cannot directly use the kernel helpers, as EL2 doesn't have a mapping of the guest pages. Oops.

Instead, use the mm_ops indirection to use helpers that will perform a mapping at EL2 and allow the CMO to be effective.

Fixes: 25aa2869 ("KVM: arm64: Move guest CMOs to the fault handlers")
Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220114125038.1336965-1-maz@kernel.org
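A sketch of the indirection, with callback names modelled on the kvm_pgtable mm_ops interface (treat the exact signatures as assumptions): the page-table code never calls the kernel cache helpers directly, so the EL2 implementation can hand it an address obtained from its own mapping helper.

    #include <stddef.h>

    struct mm_ops_sketch {
        void (*dcache_clean_inval_poc)(void *addr, size_t size);
        void (*icache_inval_pou)(void *addr, size_t size);
    };

    /* 'va' must come from the caller's own mapping of the page: a kernel VA
     * for the host, a fixmap-style EL2 VA for the hypervisor. */
    static void clean_dcache_guest_page(const struct mm_ops_sketch *ops,
                                        void *va, size_t size)
    {
        ops->dcache_clean_inval_poc(va, size);
    }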
-
- 04 Jan 2022, 1 commit
-
-
By Zenghui Yu
kvm_arm_init_arch_resources() was renamed to kvm_arm_init_sve() in commit a3be836d ("KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE"). Fix the function name in the comment of kvm_vcpu_finalize_sve().

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211230141535.1389-1-yuzenghui@huawei.com
-
- 20 Dec 2021, 2 commits
-
-
By Fuad Tabba
The barrier is there for power_off rather than power_state; probably a typo in commit 358b28f0 ("arm/arm64: KVM: Allow a VCPU to fully reset itself").

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208193257.667613-3-tabba@google.com
-
By Fuad Tabba
The comment for kvm_reset_vcpu() refers to the sysreg table as being the table above, probably because of the code extracted at commit f4672752 ("arm64: KVM: virtual CPU reset"). Fix the comment to remove the potentially confusing reference.

Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208193257.667613-2-tabba@google.com
-
- 17 Dec 2021, 1 commit
-
-
By Marc Zyngier
Ganapatrao reported that the kvm_pgtable->mmu pointer is more or less hardcoded to the main S2 mmu structure, while the nested code needs it to point to other instances (as we have one instance per nested context).

Rework the initialisation of the kvm_pgtable structure so that this assumption doesn't hold true anymore. This requires some minor changes to the order in which things are initialised (the mmu->arch pointer being the critical one).

Reported-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211129200150.351436-5-maz@kernel.org
-
- 16 Dec 2021, 16 commits
-
-
By Quentin Perret
Make use of the newly introduced unshare hypercall during guest teardown to unmap guest-related data structures from the hyp stage-1.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-15-qperret@google.com
-
By Will Deacon
Introduce an unshare hypercall which can be used to unmap memory from the hypervisor stage-1 in nVHE protected mode. This will be useful to update the EL2 ownership state of pages during guest teardown, and avoids keeping dangling mappings to unreferenced portions of memory.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-14-qperret@google.com
-
By Will Deacon
Tearing down a previously shared memory region results in the borrower losing access to the underlying pages and returning them to the "owned" state in the owner. Implement a do_unshare() helper, along the same lines as do_share(), to provide this functionality for the host-to-hyp case.

Reviewed-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-13-qperret@google.com
-
By Will Deacon
__pkvm_host_share_hyp() shares memory between the host and the hypervisor, so implement it as an invocation of the new do_share() mechanism.

Note that double-sharing is no longer permitted (as this allows us to reduce the number of page-table walks significantly), but is thankfully no longer relied upon by the host.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-12-qperret@google.com
-
By Will Deacon
By default, protected KVM isolates memory pages so that they are accessible only to their owner: be it the host kernel, the hypervisor at EL2 or (in future) the guest. Establishing shared-memory regions between these components therefore involves a transition for each page so that the owner can share memory with a borrower under a certain set of permissions.

Introduce a do_share() helper for safely sharing a memory region between two components. Currently, only host-to-hyp sharing is implemented, but the code is easily extended to handle other combinations and the permission checks for each component are reusable.

Reviewed-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-11-qperret@google.com
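A sketch of the do_share() skeleton described above (the component identifiers and the check/complete split follow the description; the bodies are elided as hypothetical): the key property is that all permission checks happen before any page-table is touched, so the update phase cannot fail halfway.

    enum pkvm_component_id { PKVM_ID_HOST, PKVM_ID_HYP };

    struct mem_share_sketch {
        enum pkvm_component_id  initiator;  /* current owner */
        enum pkvm_component_id  completer;  /* prospective borrower */
        unsigned long           pfn;
        unsigned long           nr_pages;
    };

    int check_share(const struct mem_share_sketch *share);    /* hypothetical */
    int complete_share(const struct mem_share_sketch *share); /* hypothetical */

    static int do_share_sketch(const struct mem_share_sketch *share)
    {
        int ret;

        ret = check_share(share);   /* both components validate page states */
        if (ret)
            return ret;

        return complete_share(share);   /* then both page-tables are updated */
    }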
-
By Will Deacon
In preparation for adding additional locked sections for manipulating page-tables at EL2, introduce some simple wrappers around the host and hypervisor locks so that it's a bit easier to read and a bit more difficult to take the wrong lock (or even take them in the wrong order).

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-10-qperret@google.com
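A sketch of what such wrappers look like (hyp_spin_lock()/hyp_spin_unlock() are the real nVHE primitives; the lock objects here are stand-ins). With a fixed set of named wrappers, call sites that take the two locks in the wrong order become easy to spot in review.

    extern hyp_spinlock_t host_mmu_lock;    /* stand-in: protects host stage-2 */
    extern hyp_spinlock_t pkvm_pgd_lock;    /* stand-in: protects hyp stage-1 */

    static inline void host_lock_component(void)
    {
        hyp_spin_lock(&host_mmu_lock);
    }

    static inline void host_unlock_component(void)
    {
        hyp_spin_unlock(&host_mmu_lock);
    }

    static inline void hyp_lock_component(void)
    {
        hyp_spin_lock(&pkvm_pgd_lock);
    }

    static inline void hyp_unlock_component(void)
    {
        hyp_spin_unlock(&pkvm_pgd_lock);
    }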
-
By Will Deacon
Explicitly name the combination of SW0 | SW1 as reserved in the pte and introduce a new PKVM_NOPAGE meta-state which, although not directly stored in the software bits of the pte, can be used to represent an entry for which there is no underlying page. This is distinct from an invalid pte, as stage-2 identity mappings for the host are created lazily and so an invalid pte there is the same as a valid mapping for the purposes of ownership information.

This state will be used for permission checking during page transitions in later patches.

Reviewed-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-9-qperret@google.com
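An enum in the spirit of the description, assuming the two software bits map as shown (the real code derives these values from the pte's SW-bit field):

    enum pkvm_page_state {
        PKVM_PAGE_OWNED           = 0,                   /* no SW bits set */
        PKVM_PAGE_SHARED_OWNED    = 1 << 0,              /* SW0 */
        PKVM_PAGE_SHARED_BORROWED = 1 << 1,              /* SW1 */
        __PKVM_PAGE_RESERVED      = (1 << 0) | (1 << 1), /* SW0 | SW1 */

        /* Meta-state: no underlying page; never written to a pte. */
        PKVM_NOPAGE,
    };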
-
By Quentin Perret
In order to simplify the page tracking infrastructure at EL2 in nVHE protected mode, move the responsibility of refcounting pages that are shared multiple times to the host. In order to do so, let's create a red-black tree tracking all the PFNs that have been shared, along with a refcount.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-8-qperret@google.com
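A sketch of the host-side node such a tree would hold (the field names are plausible assumptions; the rb-tree API is the kernel's). Sharing then becomes "insert or increment" and unsharing "decrement, erase at zero", so that only the first share and the last unshare of a given PFN need to change the EL2 state.

    #include <linux/rbtree.h>
    #include <linux/types.h>

    /* One node per PFN currently shared with EL2, keyed by pfn. */
    struct hyp_shared_pfn {
        u64 pfn;             /* rb-tree key */
        int count;           /* outstanding shares of this pfn */
        struct rb_node node;
    };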
-
By Quentin Perret
The create_hyp_mappings() function can currently be called at any point in time. However, its behaviour in protected mode changes widely depending on when it is being called. Prior to KVM init, it is used to create the temporary page-table used to bring up the hypervisor, and later on it is transparently turned into a 'share' hypercall when the kernel has lost control over the hypervisor stage-1.

In order to prepare the ground for also unsharing pages with the hypervisor during guest teardown, introduce a kvm_share_hyp() function to make it clear in which places a share hypercall should be expected, as we will soon need a matching unshare hypercall in all those places.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-7-qperret@google.com
-
By Will Deacon
Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor stage-1 mappings at EL2.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-6-qperret@google.com
-
By Will Deacon
kvm_pgtable_hyp_unmap() relies on the ->page_count() function callback being provided by the memory-management operations for the page-table. Wire up this callback for the hypervisor stage-1 page-table.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-5-qperret@google.com
-
By Quentin Perret
In nVHE-protected mode, the hyp stage-1 page-table refcount is broken due to the lack of refcount support in the early allocator. Fix up the refcount in the finalize walker, once the 'hyp_vmemmap' is up and running.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-4-qperret@google.com
-
By Quentin Perret
To prepare the ground for allowing hyp stage-1 mappings to be removed at run-time, update the KVM page-table code to maintain a correct refcount using the ->{get,put}_page() function callbacks.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-3-qperret@google.com
-
By Quentin Perret
In nVHE protected mode, the EL2 code uses a temporary allocator during boot while re-creating its stage-1 page-table. Unfortunately, the hyp_vmemmap is not ready to use at this stage, so refcounting pages is not possible. That is not currently a problem because hyp stage-1 mappings are never removed, which implies refcounting of page-table pages is unnecessary.

In preparation for allowing hypervisor stage-1 mappings to be removed, provide stub implementations for {get,put}_page() in the early allocator.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211215161232.1480836-2-qperret@google.com
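The stubs can be this small; a sketch assuming the early allocator exposes an mm_ops-style structure (names modelled on the hyp code, not quoted from it):

    /* Nothing to refcount before hyp_vmemmap is up: these pages cannot go
     * away during early boot, so the callbacks are deliberately empty. */
    static void hyp_early_alloc_get_page(void *addr) { }
    static void hyp_early_alloc_put_page(void *addr) { }

    struct mm_ops_sketch {
        void *(*zalloc_page)(void *arg);
        void (*get_page)(void *addr);
        void (*put_page)(void *addr);
    };

    static struct mm_ops_sketch hyp_early_alloc_mm_ops = {
        /* .zalloc_page wired up elsewhere */
        .get_page = hyp_early_alloc_get_page,
        .put_page = hyp_early_alloc_put_page,
    };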
-
By Marc Zyngier
Running the KVM selftests results in these messages being dumped in the kernel console:

    [ 188.051073] kvm [469]: VGIC redist and dist frames overlap
    [ 188.056820] kvm [469]: VGIC redist and dist frames overlap
    [ 188.076199] kvm [469]: VGIC redist and dist frames overlap

Being able to trigger this from userspace is definitely not on, so demote these warnings to kvm_debug().

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211216104507.1482017-1-maz@kernel.org
-
By Marc Zyngier
When handling an error at the point where we try and register all the redistributors, we unregister all the previously registered frames by counting down from the failing index. However, the way the code is written relies on that index being a signed value, which won't be true once we switch to an xarray-based vcpu set.

Since this code is pretty awkward in the first place, and the failure mode is hard to spot, rewrite this loop to iterate over the vcpus upwards rather than downwards.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211216104526.1482124-1-maz@kernel.org
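The reworked loop, sketched with illustrative types and names: unwinding upwards over the frames that were successfully registered needs only an unsigned index.

    struct redist_frame;
    void vgic_unregister_redist_frame(struct redist_frame *frame); /* hypothetical */

    /* 'failed_idx' is the index at which registration failed; everything
     * strictly below it was registered and must be torn down. */
    static void unwind_redist_frames(struct redist_frame **frames,
                                     unsigned long failed_idx)
    {
        unsigned long i;

        for (i = 0; i < failed_idx; i++)
            vgic_unregister_redist_frame(frames[i]);
    }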
-
- 15 Dec 2021, 8 commits
-
-
By Quentin Perret
The kvm_host_owns_hyp_mappings() function should return true if and only if the host kernel is responsible for creating the hypervisor stage-1 mappings. That is only possible in standard non-VHE mode, or during boot in protected nVHE mode. But either way, none of this makes sense in VHE, so make sure to catch this case as well, hence making the function return sensible values in any context (VHE or not).

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-7-qperret@google.com
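The intended truth table, sketched with stand-in predicates (is_kernel_in_hyp_mode() and is_protected_kvm_enabled() are real helpers; the pKVM-initialized check stands in for the kernel's static branch):

    #include <stdbool.h>

    bool is_kernel_in_hyp_mode(void);     /* true on VHE */
    bool is_protected_kvm_enabled(void);  /* pKVM requested on the cmdline */
    bool pkvm_is_initialized(void);       /* stand-in for the static branch */

    static bool kvm_host_owns_hyp_mappings_sketch(void)
    {
        if (is_kernel_in_hyp_mode())
            return false;   /* VHE: there is no separate hyp stage-1 */

        if (is_protected_kvm_enabled() && pkvm_is_initialized())
            return false;   /* EL2 has taken ownership of its stage-1 */

        return true;        /* standard nVHE, or pKVM still booting */
    }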
-
By Quentin Perret
Now that GICv2 is disabled in nVHE protected mode there should be no other reason for the host to use create_hyp_io_mappings() or kvm_phys_addr_ioremap(). Add sanity checks to make sure that assumption remains true looking forward.

Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-6-qperret@google.com
-
By Quentin Perret
The __io_map_base variable is used at EL2 to track the end of the hypervisor's "private" VA range in nVHE protected mode. However it doesn't need to be used outside of mm.c, so let's make it static to keep all the hyp VA allocation logic in one place.

Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-5-qperret@google.com
-
By Quentin Perret
The hyp memory pool struct is sized to fit exactly the needs of the hypervisor stage-1 page-table allocator, so it is important it is not used for anything else. As it is currently used only from setup.c, reduce its visibility by marking it static.

Signed-off-by: Quentin Perret <qperret@google.com>
Reviewed-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-4-qperret@google.com
-
By Quentin Perret
GICv2 requires having device mappings in guests and the hypervisor, which is incompatible with the current pKVM EL2 page ownership model which only covers memory. While it would be desirable to support pKVM with GICv2, this will require a lot more work, so let's make the current assumption clear until then.

Co-developed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-3-qperret@google.com
-
By Quentin Perret
The EL2 page allocator in protected mode maintains a per-pool max order value to optimize allocations when the memory region it covers is small. However, the max order value is currently under-estimated whenever the number of pages in the region is a power of two. Fix the estimation.

Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211208152300.2478542-2-qperret@google.com
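One plausible reading of the off-by-one, assuming the stored max order acts as an exclusive bound on allocatable orders: a ceiling-log2 of the region size under-counts by one exactly when the page count is a power of two, since an order-k block of 2^k pages does fit in a region of exactly 2^k pages. A toy computation:

    #include <stdio.h>

    /* Ceiling log2 over a page count, similar in spirit to get_order(). */
    static unsigned int ceil_log2(unsigned long nr_pages)
    {
        unsigned int order = 0;

        while ((1UL << order) < nr_pages)
            order++;
        return order;
    }

    int main(void)
    {
        /* Prints "5 6": used as an exclusive bound, only the power-of-two
         * region (32 pages) is denied its largest fitting block. */
        printf("%u %u\n", ceil_log2(32), ceil_log2(33));
        return 0;
    }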
-
By Kefeng Wang
This patch enables KCSAN for arm64, with updates to build rules so that KCSAN is not used for several incompatible compilation units.

Recent GCC versions (at least GCC 10) made outline-atomics the default option (unlike Clang), which causes linker errors for kernel/kcsan/core.o. Disable the out-of-line atomics with -mno-outline-atomics to fix the linker errors.

Meanwhile, as Mark said [1], some latent issues need to be fixed which aren't just a KCSAN problem, so we make KCSAN depend on EXPERT for now.

Tested the selftests and kcsan_test (built with GCC 11 and Clang 13); all passed.

[1] https://lkml.kernel.org/r/YadiUPpJ0gADbiHQ@FVFF77S0Q05N

Acked-by: Marco Elver <elver@google.com> # kernel/kcsan
Tested-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20211211131734.126874-1-wangkefeng.wang@huawei.com
[catalin.marinas@arm.com: added comment to justify EXPERT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Brown
In preparation for adding SME support, update the bulk of the implementation for the vector length configuration prctl() calls to be independent of vector type.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211210184133.320748-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Dec 2021, 1 commit
-
-
By Joey Gouly
This is a new ID register, introduced in Armv8.7.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Reiji Watanabe <reijiw@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 Dec 2021, 1 commit
-
-
By David Woodhouse
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20211121125451.9489-8-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 08 Dec 2021, 2 commits
-
-
By Sean Christopherson
Add helpers to wake and query a blocking vCPU. In addition to providing nice names, the helpers reduce the probability of KVM neglecting to use kvm_arch_vcpu_get_wait().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20211009021236.4122790-20-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
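A sketch of the helpers' likely shape (kvm_arch_vcpu_get_wait() and the rcuwait API are real kernel interfaces; the exact bodies here are reconstructed, not quoted):

    #include <linux/kvm_host.h>

    /* Wake a vCPU blocked on its arch-provided rcuwait, if any. */
    static inline void kvm_vcpu_wake_up_sketch(struct kvm_vcpu *vcpu)
    {
        rcuwait_wake_up(kvm_arch_vcpu_get_wait(vcpu));
    }

    /* Query whether the vCPU is currently blocked on that rcuwait. */
    static inline bool kvm_vcpu_is_blocking_sketch(struct kvm_vcpu *vcpu)
    {
        return rcuwait_active(kvm_arch_vcpu_get_wait(vcpu));
    }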
-
By Sean Christopherson
Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting the actual "block" sequences into a separate helper (to be named kvm_vcpu_block()). x86 will use the standalone block-only path to handle non-halt cases where the vCPU is not runnable.

Rename block_ns to halt_ns to match the new function name.

No functional change intended.

Reviewed-by: David Matlack <dmatlack@google.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20211009021236.4122790-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-