- 07 January 2022 (19 commits)
-
-
Submitted by David Woodhouse

This can be used in two modes. There is an atomic mode where the cached mapping is accessed while holding the rwlock, and a mode where the physical address is used by a vCPU in guest mode. For the latter case, an invalidation will wake the vCPU with the new KVM_REQ_GPC_INVALIDATE, and the architecture will need to refresh any caches it still needs to access before entering guest mode again.

Only one vCPU can be targeted by the wake requests; it's simple enough to make it wake all vCPUs or even a mask, but I don't see a use case for that additional complexity right now.

Invalidation happens from the invalidate_range_start MMU notifier, which needs to be able to sleep in order to wake the vCPU and wait for it. This means that revalidation potentially needs to "wait" for the MMU operation to complete and the invalidate_range_end notifier to be invoked. Like the vCPU when it takes a page fault in that period, we just spin; fixing that in a future patch by implementing an actual *wait* may be another part of shaving this particularly hirsute yak.

As noted in the comments in the function itself, the only case where the invalidate_range_start notifier is expected to be called *without* being able to sleep is when the OOM reaper is killing the process. In that case, we expect the vCPU threads already to have exited, and thus there will be nothing to wake, and no reason to wait. So we clear the KVM_REQUEST_WAIT bit and send the request anyway, then complain loudly if there actually *was* anything to wake up.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20211210163625.2886-3-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
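A minimal sketch of the vCPU-side pattern described above, assuming a hypothetical per-architecture refresh helper (refresh_guest_pa_caches() is not an actual upstream function); the cache API itself is not shown here:

    /* Sketch only: how a vCPU run loop might react to KVM_REQ_GPC_INVALIDATE. */
    static int vcpu_enter_guest_sketch(struct kvm_vcpu *vcpu)
    {
            if (kvm_check_request(KVM_REQ_GPC_INVALIDATE, vcpu)) {
                    /*
                     * The cached mapping was invalidated while the vCPU was in
                     * (or about to enter) guest mode; re-resolve the gfn->pfn
                     * mapping before the guest touches the physical address again.
                     */
                    refresh_guest_pa_caches(vcpu);  /* hypothetical per-arch hook */
            }

            /* ... enter guest mode as usual ... */
            return 0;
    }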
-
Submitted by David Woodhouse

The various kvm_write_guest() and mark_page_dirty() functions must only ever be called in the context of an active vCPU, because if dirty ring tracking is enabled it may simply oops when kvm_get_running_vcpu() returns NULL for the vcpu and then kvm_dirty_ring_get() dereferences it.

This oops was reported by "butt3rflyh4ck" <butterflyhuangxx@gmail.com> in https://lore.kernel.org/kvm/CAFcO6XOmoS7EacN_n6v4Txk7xL7iqRa2gABg3F7E3Naf5uG94g@mail.gmail.com/

That actual bug will be fixed under separate cover, but this warning should help to prevent new ones from being added.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20211210163625.2886-2-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
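A hedged sketch of the kind of guard this describes (not the exact upstream diff), assuming the check lives in mark_page_dirty_in_slot():

    void mark_page_dirty_in_slot(struct kvm *kvm,
                                 const struct kvm_memory_slot *memslot, gfn_t gfn)
    {
            struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

            /*
             * With the dirty ring enabled, kvm_dirty_ring_get() dereferences the
             * running vCPU; calling this without one would oops, so warn and bail.
             */
            if (WARN_ON_ONCE(kvm->dirty_ring_size && !vcpu))
                    return;

            /* ... existing dirty bitmap / dirty ring handling ... */
    }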
-
Submitted by David Woodhouse

I made the actual CPU bringup go nice and fast... and then Linux spends half a minute printing stupid nonsense about clocks and steal time for each of 256 vCPUs. Don't do that. Nobody cares.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20211209150938.3518-12-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Eric Hankland

When KVM retires a guest branch instruction through emulation, increment any vPMCs that are configured to monitor "branch instructions retired," and update the sample period of those counters so that they will overflow at the right time.

Signed-off-by: Eric Hankland <ehankland@google.com>
[jmattson:
 - Split the code to increment "branch instructions retired" into a separate commit.
 - Moved/consolidated the calls to kvm_pmu_trigger_event() in the emulation of VMLAUNCH/VMRESUME to accommodate the evolution of that code.
]
Fixes: f5132b01 ("KVM: Expose a version 2 architectural PMU to a guests")
Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20211130074221.93635-7-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Eric Hankland

When KVM retires a guest instruction through emulation, increment any vPMCs that are configured to monitor "instructions retired," and update the sample period of those counters so that they will overflow at the right time.

Signed-off-by: Eric Hankland <ehankland@google.com>
[jmattson:
 - Split the code to increment "branch instructions retired" into a separate commit.
 - Added 'static' to kvm_pmu_incr_counter() definition.
 - Modified kvm_pmu_incr_counter() to check pmc->perf_event->state == PERF_EVENT_STATE_ACTIVE.
]
Fixes: f5132b01 ("KVM: Expose a version 2 architectural PMU to a guests")
Signed-off-by: Jim Mattson <jmattson@google.com>
[likexu:
 - Drop checks for pmc->perf_event or event state or event type
 - Increase a counter once its umask bits and the first 8 select bits are matched
 - Rewrite kvm_pmu_incr_counter() with a less invasive approach to the host perf
 - Rename kvm_pmu_record_event to kvm_pmu_trigger_event
 - Add counter enable and CPL check for kvm_pmu_trigger_event()
]
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20211130074221.93635-6-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
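As a rough illustration of the mechanism these two PMU commits describe (the function below and its call site are simplified assumptions, not the upstream diff; kvm_pmu_trigger_event() is the helper named above):

    /*
     * Sketch: after emulating a retired instruction, poke the vPMU so that any
     * counters programmed for "instructions retired" (and, for emulated branches,
     * "branch instructions retired") advance and keep their sample periods.
     */
    static void emulation_retired_sketch(struct kvm_vcpu *vcpu, bool was_branch)
    {
            kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
            if (was_branch)
                    kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
    }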
-
Submitted by Like Xu

Depending on whether intr should be triggered or not, KVM registers two different event overflow callbacks in the perf_event context. The code skeleton of these two functions is very similar, so pmc->intr can be stored into the pmc from pmc_reprogram_counter(), which yields a smaller instruction footprint for the micro-architectural branch predictor.

The __kvm_perf_overflow() can be called in non-NMI contexts, and a flag is needed to distinguish the caller context and thus avoid a check on kvm_is_in_guest(); otherwise we might get warnings from suspicious RCU or check_preemption_disabled().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20211130074221.93635-5-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
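A minimal sketch of the shape this describes, assuming the unified handler; the real function does more (the actual PMI delivery paths are omitted here):

    static void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
    {
            struct kvm_pmu *pmu = pmc_to_pmu(pmc);

            __set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
            kvm_make_request(KVM_REQ_PMU, pmc->vcpu);

            /* pmc->intr, cached by pmc_reprogram_counter(), replaces the two
             * separate overflow callbacks: only raise a PMI if the guest asked.
             */
            if (!pmc->intr)
                    return;

            /*
             * in_pmi distinguishes the caller context so an NMI-safe delivery
             * path can be chosen without consulting kvm_is_in_guest(); this
             * sketch only models the non-NMI case.
             */
            if (!in_pmi)
                    kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
    }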
-
Submitted by Like Xu

Since we set the same semantic event value for the fixed counter in pmc->eventsel, returning the perf_hw_id for the fixed counter via find_fixed_event() can be painlessly replaced by pmc_perf_hw_id() with the help of the pmc_is_fixed() check.

Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20211130074221.93635-4-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Like Xu

The find_arch_event() returns an "unsigned int" value, which is used by pmc_reprogram_counter() to program a PERF_TYPE_HARDWARE type perf_event. The returned value is actually the kernel-defined generic perf_hw_id, so let's rename it to pmc_perf_hw_id() with simpler incoming parameters for better self-explanation.

Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20211130074221.93635-3-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Like Xu

The current pmc->eventsel for fixed counters is underutilised. The pmc->eventsel can be set up for all known available fixed counters, since we have a mapping between the fixed pmc index and the intel_arch_events array. For either gp or fixed counters, this will simplify the later checks for consistency between eventsel and perf_hw_id.

Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20211130074221.93635-2-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini

Because IceLake has 4 fixed performance counters but KVM only supports 3, it is possible for reprogram_fixed_counters to pass to reprogram_fixed_counter an index that is out of bounds for the fixed_pmc_events array. Ultimately intel_find_fixed_event, which is the only place that uses fixed_pmc_events, handles this correctly because it checks against the size of fixed_pmc_events anyway. Every other place operates on the fixed_counters[] array, which is sized according to INTEL_PMC_MAX_FIXED. However, it is cleaner if the unsupported performance counters are culled early on in reprogram_fixed_counters.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
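One way to do the early culling this describes, as a sketch under the assumption that the cap is applied in reprogram_fixed_counters() (the exact upstream change may differ):

    static void reprogram_fixed_counters_sketch(struct kvm_pmu *pmu, u64 data)
    {
            /*
             * Never hand reprogram_fixed_counter() an index beyond what
             * fixed_pmc_events describes, even if the CPU exposes more
             * fixed counters than KVM supports.
             */
            int nr = min_t(int, pmu->nr_arch_fixed_counters,
                           ARRAY_SIZE(fixed_pmc_events));
            int i;

            for (i = 0; i < nr; i++) {
                    /* ... existing per-counter reprogramming for counter i,
                     * driven by the control bits in 'data' ... */
            }
    }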
-
Submitted by Lai Jiangshan

When !CR0_PG -> CR0_PG, vcpu->arch.cr3 becomes active, but GUEST_CR3 is still vmx->ept_identity_map_addr if EPT + !URG. So VCPU_EXREG_CR3 is considered to be dirty and GUEST_CR3 needs to be updated in this case.

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20211216021938.11752-4-jiangshanlai@gmail.com>
Fixes: c62c7bd4 ("KVM: VMX: Update vmcs.GUEST_CR3 only when the guest CR3 is dirty")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
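A sketch of the kind of fix described, assuming it lands in vmx_set_cr0() (condition and placement simplified; old_cr0 stands for the CR0 value before this write):

    if (!enable_unrestricted_guest &&
        (cr0 & X86_CR0_PG) && !(old_cr0 & X86_CR0_PG))
            /* GUEST_CR3 still points at the EPT identity map; force an update. */
            kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);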
-
Submitted by Lai Jiangshan

For shadow paging, the page table needs to be reconstructed before the coming VMENTER if the guest PDPTEs are changed. But not all paths that call load_pdptrs() will cause the page tables to be reconstructed. Normally, kvm_mmu_reset_context() and kvm_mmu_free_roots() are used to trigger the later reconstruction.

The commit d81135a5 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed") skips kvm_mmu_reset_context() after load_pdptrs() when changing CR0.CD and CR0.NW.

The commit 21823fbd ("KVM: x86: Invalidate all PGDs for the current PCID on MOV CR3 w/ flush") skips kvm_mmu_free_roots() after load_pdptrs() when rewriting the CR3 with the same value.

The commit a91a7c70 ("KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE") skips kvm_mmu_reset_context() after load_pdptrs() when changing CR4.PGE.

Guests like Linux would keep the PDPTEs unchanged for every instance of pagetable, so this missing reconstruction causes no problem for Linux guests.

Fixes: d81135a5 ("KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed")
Fixes: 21823fbd ("KVM: x86: Invalidate all PGDs for the current PCID on MOV CR3 w/ flush")
Fixes: a91a7c70 ("KVM: X86: Don't reset mmu context when toggling X86_CR4_PGE")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20211216021938.11752-3-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Lai Jiangshan

The host CR3 in the vcpu thread can only be changed when scheduling, so commit 15ad9762 ("KVM: VMX: Save HOST_CR3 in vmx_prepare_switch_to_guest()") changed vmx.c to only save it in vmx_prepare_switch_to_guest(). However, it also has to be synced in vmx_sync_vmcs_host_state() when switching VMCS. vmx_set_host_fs_gs() is called in both places, so rename it to vmx_set_vmcs_host_state() and make it update HOST_CR3.

Fixes: 15ad9762 ("KVM: VMX: Save HOST_CR3 in vmx_prepare_switch_to_guest()")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20211216021938.11752-2-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
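A rough sketch of the renamed helper's shape (argument list abbreviated relative to the real code):

    static void vmx_set_vmcs_host_state(struct vmcs_host_state *host,
                                        unsigned long cr3, u16 fs_sel, u16 gs_sel,
                                        unsigned long fs_base, unsigned long gs_base)
    {
            /*
             * Refresh HOST_CR3 here so that a VMCS switch via
             * vmx_sync_vmcs_host_state() can never run with a stale value.
             */
            if (unlikely(cr3 != host->cr3)) {
                    vmcs_writel(HOST_CR3, cr3);
                    host->cr3 = cr3;
            }

            /* ... existing FS/GS selector and base updates ... */
    }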
-
Submitted by Paolo Bonzini

This reverts commit 24cd19a2. Sean Christopherson reports: "Commit 24cd19a2 ('KVM: X86: Update mmu->pdptrs only when it is changed') breaks nested VMs with EPT in L0 and PAE shadow paging in L2. Reproducing is trivial, just disable EPT in L1 and run a VM. I haven't investigated how it breaks things."

Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Peter Gonda

Add tests to confirm mirror VMs can only run the correct subset of commands.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Marc Orr <marcorr@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
Message-Id: <20211208191642.3792819-4-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Peter Gonda

The TEST_ASSERT in the SEV ioctl wrapper was allowing errors because it checked that the return value was good OR that the FW error code was OK. This TEST_ASSERT should require both (i.e. AND) values to be OK.

Remove the LAUNCH_START call from the mirror VM, because that call correctly fails: mirror VMs cannot issue this command.

Currently, issues with the PSP driver functions mean the firmware error is not always reset to SEV_RET_SUCCESS when a call is successful. Mainly, sev_platform_init() doesn't correctly set the fw error if the platform has already been initialized.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Marc Orr <marcorr@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
Message-Id: <20211208191642.3792819-3-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
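A sketch of the strengthened assertion (the assert message and the surrounding variables such as cmd_id and vm_fd are illustrative):

    struct kvm_sev_cmd cmd = { .id = cmd_id, .data = (uint64_t)data };
    int ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

    /* Both the ioctl return value AND the firmware error must indicate success. */
    TEST_ASSERT(ret == 0 && cmd.error == SEV_RET_SUCCESS,
                "SEV ioctl %d failed: ret=%d, fw_error=%u",
                cmd_id, ret, cmd.error);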
-
Submitted by Peter Gonda

Mirrors should not be able to call LAUNCH_START. Remove the call on the mirror to correct the test before fixing sev_ioctl() to correctly assert on this failed ioctl.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Marc Orr <marcorr@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
Message-Id: <20211208191642.3792819-2-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini (merge from https://github.com/kvm-riscv/linux)

KVM/riscv changes for 5.17, take #1

- Use common KVM implementation of MMU memory caches
- SBI v0.2 support for Guest
- Initial KVM selftests support
- Fix to avoid spurious virtual interrupts after clearing hideleg CSR
- Update email address for Anup and Atish
-
Submitted by Paolo Bonzini (merge from git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm)

KVM/arm64 updates for Linux 5.16

- Simplification of the 'vcpu first run' by integrating it into KVM's 'pid change' flow
- Refactoring of the FP and SVE state tracking, also leading to a simpler state and less shared data between EL1 and EL2 in the nVHE case
- Tidy up the header file usage for the nvhe hyp object
- New HYP unsharing mechanism, finally allowing pages to be unmapped from the Stage-1 EL2 page-tables
- Various pKVM cleanups around refcounting and sharing
- A couple of vgic fixes for bugs that would trigger once the vcpu xarray rework is merged, but not sooner
- Add minimal support for ARMv8.7's PMU extension
- Rework kvm_pgtable initialisation ahead of the NV work
- New selftest for IRQ injection
- Teach selftests about the lack of default IPA space and page sizes
- Expand sysreg selftest to deal with Pointer Authentication
- The usual bunch of cleanups and doc updates
-
- 06 January 2022 (14 commits)
-
-
Submitted by Anup Patel

I no longer work at Western Digital, so update my email address to my personal one and add entries to .mailmap as well.

Signed-off-by: Anup Patel <anup@brainfault.org>
Acked-by: Atish Patra <atishp@rivosinc.com>
-
Submitted by Vincent Chen

When the last VM is terminated, the host kernel will invoke the function hardware_disable_nolock() on each CPU to disable the related virtualization functions. Here, RISC-V currently only clears the hideleg CSR and hedeleg CSR. This behavior will cause the host kernel to receive spurious interrupts if the hvip CSR has pending interrupts and the corresponding enable bits in the vsie CSR are asserted. To avoid it, the hvip CSR and vsie CSR must be cleared before clearing the hideleg CSR.

Fixes: 99cdc6c1 ("RISC-V: Add initial skeletal KVM support")
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Reviewed-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Anup Patel

We add initial support for RISC-V 64-bit in KVM selftests, with which we can cross-compile and run arch-independent tests such as:

- demand_paging_test
- dirty_log_test
- kvm_create_max_vcpus
- kvm_page_table_test
- set_memory_region_test
- kvm_binary_stats_test

All VM guest modes defined in kvm_util.h require at least a 48-bit guest virtual address, so to use the KVM RISC-V selftests the hardware needs to support at least Sv48 MMU for the guest (i.e. VS-mode).

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
-
Submitted by Anup Patel

We add EXTRA_CFLAGS to the common CFLAGS of the top-level Makefile, which allows users to pass additional compile-time flags such as "-static".

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
Reviewed-and-tested-by: Sean Christopherson <seanjc@google.com>
-
Submitted by Anup Patel

The number of GPA bits supported for a RISC-V Guest/VM is based on the MMU mode used by the G-stage translation. KVM RISC-V will detect and use the best possible MMU mode for the G-stage in kvm_arch_init(). We add a generic VM capability, KVM_CAP_VM_GPA_BITS, which can be used by KVM userspace to get the number of GPA (guest physical address) bits supported for a Guest/VM.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
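A short userspace usage sketch, assuming the capability is queried on the VM file descriptor:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: ask how many guest physical address bits this VM supports. */
    static int vm_gpa_bits(int vm_fd)
    {
            int bits = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_VM_GPA_BITS);

            if (bits > 0)
                    printf("guest physical address width: %d bits\n", bits);
            return bits;
    }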
-
Submitted by Anup Patel

The SBI experimental extension space is for temporary (or experimental) stuff, whereas the SBI vendor extension space is for hardware-vendor-specific stuff. Neither of these SBI extension spaces will be standardized by the SBI specification, so let's blindly forward such SBI calls to userspace.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
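As an illustration, the extension ID ranges involved (ranges per the SBI specification; the helper itself is an assumption, not the upstream code):

    /*
     * SBI calls whose extension ID falls in the experimental
     * (0x08000000-0x08FFFFFF) or vendor-specific (0x09000000-0x09FFFFFF)
     * spaces are not standardized, so they are handed to KVM userspace.
     */
    static bool sbi_ext_is_forwarded_sketch(unsigned long ext_id)
    {
            return (ext_id >= 0x08000000 && ext_id <= 0x08FFFFFF) ||
                   (ext_id >= 0x09000000 && ext_id <= 0x09FFFFFF);
    }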
-
Submitted by Jisheng Zhang

There are no users outside vcpu_fp.c, so make kvm_riscv_vcpu_fp_clean() static.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

I am no longer employed by Western Digital. Update my email address to my personal one and add entries to .mailmap as well.

Signed-off-by: Atish Patra <atishp@atishpatra.org>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

The SBI HSM extension allows the OS to start/stop harts at any time. It also allows ordered booting of harts instead of random booting. Implement the SBI HSM extension and designate vcpu 0 as the boot vcpu id. All other non-zero, non-booting vcpus should be brought up by the OS implementing the HSM extension. If the guest OS doesn't implement the HSM extension, only a single vcpu will be available to the OS.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

SBI v0.2 contains improved versions of the required v0.1 extensions such as remote fence, timer and IPI. This patch implements those extensions.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

The SBI v0.2 base extension is defined to allow backward compatibility and probing of future extensions. It is also the only mandatory SBI extension that must be implemented by SBI implementors.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

With SBI v0.2, there may be more SBI extensions in the future. It makes more sense to group related extensions in separate files. The guest kernel will choose the appropriate SBI version dynamically. Move the existing implementation to a separate file so that it can be removed in the future without much conflict.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Atish Patra

The existing SBI implementation follows the v0.1 specification. The latest specification allows more scalability and performance improvements. Rename the existing implementation as v0.1 and provide a way to allow future extensions.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
-
Submitted by Sean Christopherson

Use common KVM's implementation of the MMU memory caches, which for all intents and purposes is semantically identical to RISC-V's version, the only difference being that the common implementation will fall back to an atomic allocation if there's a KVM bug that triggers a cache underflow.

RISC-V appears to have based its MMU code on arm64 before the conversion to the common caches in commit c1a33aeb ("KVM: arm64: Use common KVM implementation of MMU memory caches"), despite having also copy-pasted the definition of KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE in kvm_types.h.

Opportunistically drop the superfluous wrapper kvm_riscv_stage2_flush_cache(), whose name is very, very confusing, as "cache flush" in the context of MMU code almost always refers to flushing hardware caches, not freeing unused software objects.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Anup Patel <anup.patel@wdc.com>
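For reference, a hedged sketch of how the common cache API is typically consumed by an architecture's MMU code (the call sites and the top-up count are illustrative, not the RISC-V diff):

    /* Top up the per-vCPU cache outside the MMU lock... */
    static int stage2_cache_topup_sketch(struct kvm_mmu_memory_cache *cache)
    {
            /* The minimum object count is illustrative; each arch picks its own. */
            return kvm_mmu_topup_memory_cache(cache, 4);
    }

    /* ...then take pre-allocated page-table objects from it while holding the lock. */
    static void *stage2_cache_alloc_sketch(struct kvm_mmu_memory_cache *cache)
    {
            return kvm_mmu_memory_cache_alloc(cache);
    }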
-
- 05 January 2022 (2 commits)
-
-
Submitted by Marc Zyngier

* kvm-arm64/misc-5.17:

Misc fixes and improvements:
- Add minimal support for ARMv8.7's PMU extension
- Constify kvm_io_gic_ops
- Drop kvm_is_transparent_hugepage() prototype
- Drop unused workaround_flags field
- Rework kvm_pgtable initialisation
- Documentation fixes
- Replace open-coded SCTLR_EL1.EE usage with its defined macro
- Sysreg list selftest update to handle PAuth
- Include cleanups

KVM: arm64: vgic: Replace kernel.h with the necessary inclusions
KVM: arm64: Fix comment typo in kvm_vcpu_finalize_sve()
KVM: arm64: selftests: get-reg-list: Add pauth configuration
KVM: arm64: Fix comment on barrier in kvm_psci_vcpu_on()
KVM: arm64: Fix comment for kvm_reset_vcpu()
KVM: arm64: Use defined value for SCTLR_ELx_EE
KVM: arm64: Rework kvm_pgtable initialisation
KVM: arm64: Drop unused workaround_flags vcpu field

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Andy Shevchenko

arm_vgic.h does not require all the stuff that kernel.h provides. Replace the kernel.h inclusion with the list of what is really being used.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220104151940.55399-1-andriy.shevchenko@linux.intel.com
-
- 04 January 2022 (4 commits)
-
-
Submitted by Marc Zyngier

* kvm-arm64/selftest/irq-injection:

New tests from Ricardo Koller: "This series adds a new test, aarch64/vgic-irq, that validates the injection of different types of IRQs from userspace using various methods and configurations."

KVM: selftests: aarch64: Add test for restoring active IRQs
KVM: selftests: aarch64: Add ISPENDR write tests in vgic_irq
KVM: selftests: aarch64: Add tests for IRQFD in vgic_irq
KVM: selftests: Add IRQ GSI routing library functions
KVM: selftests: aarch64: Add test_inject_fail to vgic_irq
KVM: selftests: aarch64: Add tests for LEVEL_INFO in vgic_irq
KVM: selftests: aarch64: Level-sensitive interrupts tests in vgic_irq
KVM: selftests: aarch64: Add preemption tests in vgic_irq
KVM: selftests: aarch64: Cmdline arg to set EOI mode in vgic_irq
KVM: selftests: aarch64: Cmdline arg to set number of IRQs in vgic_irq test
KVM: selftests: aarch64: Abstract the injection functions in vgic_irq
KVM: selftests: aarch64: Add vgic_irq to test userspace IRQ injection
KVM: selftests: aarch64: Add vGIC library functions to deal with vIRQ state
KVM: selftests: Add kvm_irq_line library function
KVM: selftests: aarch64: Add GICv3 register accessor library functions
KVM: selftests: aarch64: Add function for accessing GICv3 dist and redist registers
KVM: selftests: aarch64: Move gic_v3.h to shared headers

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

* kvm-arm64/selftest/ipa:

Expand the KVM/arm64 selftest infrastructure to discover supported page sizes at runtime, support 16kB pages, and find out about the original M1's stupidly small IPA space.

KVM: selftests: arm64: Add support for various modes with 16kB page size
KVM: selftests: arm64: Add support for VM_MODE_P36V48_{4K,64K}
KVM: selftests: arm64: Rework TCR_EL1 configuration
KVM: selftests: arm64: Check for supported page sizes
KVM: selftests: arm64: Introduce a variable default IPA size
KVM: selftests: arm64: Initialise default guest mode at test startup time

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Zenghui Yu

kvm_arm_init_arch_resources() was renamed to kvm_arm_init_sve() in commit a3be836d ("KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up SVE"). Fix the function name in the comment of kvm_vcpu_finalize_sve().

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211230141535.1389-1-yuzenghui@huawei.com
-
Submitted by Marc Zyngier

The get-reg-list test ignores the Pointer Authentication features, which is a shame now that we have relatively common HW with this feature. Define two new configurations (with and without PMU) that exercise the KVM capabilities.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20211228121414.1013250-1-maz@kernel.org
-
- 29 December 2021 (1 commit)
-
-
Submitted by Ricardo Koller

Add a test that restores multiple IRQs in active state; it does so by writing into ISACTIVER from the guest and by using KVM ioctls. This test tries to emulate what would happen during a live migration: restore active IRQs.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Acked-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211109023906.1091208-18-ricarkol@google.com
-