- March 17, 2020 (1 commit)
-
Submitted by Peter Xu
Remove includes of asm/kvm_host.h from files that already include linux/kvm_host.h to make it more obvious that there is no ordering issue between the two headers. linux/kvm_host.h includes asm/kvm_host.h to pick up architecture specific settings, and this will never change, i.e. including asm/kvm_host.h after linux/kvm_host.h may seem problematic, but in practice is simply redundant.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
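For illustration, a minimal sketch of the include pattern the commit calls redundant (the file is hypothetical):

  /* some_kvm_file.c -- hypothetical example of the pattern being cleaned up */
  #include <linux/kvm_host.h>  /* already pulls in <asm/kvm_host.h> for arch settings */
  #include <asm/kvm_host.h>    /* therefore redundant, not an ordering hazard; dropped */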
-
- January 28, 2020 (3 commits)
-
Submitted by Sean Christopherson
Remove kvm_arch_vcpu_init() and kvm_arch_vcpu_uninit() now that all arch specific implementations are nops.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Sean Christopherson
Add an arm specific hook to free the arm64-only sve_state. Doing so eliminates the last functional code from kvm_arch_vcpu_uninit() across all architectures and paves the way for removing kvm_arch_vcpu_init() and kvm_arch_vcpu_uninit() entirely.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
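A minimal sketch of what such a hook boils down to, based on the commit text (the hook name is illustrative; sve_state is the arm64-only field named above):

  /* Sketch only: free the arm64-only SVE register storage when the vCPU is
   * destroyed, so the generic kvm_arch_vcpu_uninit() no longer needs to. */
  void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
  {
  	kfree(vcpu->arch.sve_state);
  }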
-
Submitted by Sean Christopherson
Remove kvm_arch_vcpu_setup() now that all arch specific implementations are nops.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- January 23, 2020 (2 commits)
-
Submitted by Mark Brown
In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced to replace ENTRY and ENDPROC. There are separate annotations: SYM_FUNC_ for normal C functions and SYM_CODE_ for other code.

Currently __guest_enter and __guest_exit are annotated as standard functions, but this is not entirely correct, as the former doesn't do a normal return and the latter is not entered in a normal fashion. From the point of view of the hypervisor, the guest entry/exit may be viewed as a single function which happens to have an eret in the middle of it, so let's annotate it as such.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200120124706.8681-1-broonie@kernel.org
-
Submitted by Andrew Murray
On VHE systems arch.mdcr_el2 is written to mdcr_el2 at vcpu_load time to set options for self-hosted debug and the performance monitors extension. Unfortunately the value of arch.mdcr_el2 is not calculated until kvm_arm_setup_debug() in the run loop, after the vcpu has been loaded. This means that the initial brief iterations of the run loop use a zero value of mdcr_el2, until the vcpu is preempted. This also results in a delay between changes to vcpu->guest_debug taking effect.

Fix this by writing to mdcr_el2 in kvm_arm_setup_debug() on VHE systems when a change to arch.mdcr_el2 has been detected.

Fixes: d5a21bcc ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions")
Cc: <stable@vger.kernel.org> # 4.17.x-
Suggested-by: James Morse <james.morse@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
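The fix described above amounts to a couple of lines; a hedged sketch, with orig_mdcr_el2 assumed to be a local holding the value read on entry to kvm_arm_setup_debug():

  /* Sketch: on VHE, propagate a changed arch.mdcr_el2 to the hardware
   * register immediately, instead of waiting for the next vcpu_load(). */
  if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);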
-
- January 20, 2020 (2 commits)
-
Submitted by Mark Rutland
When KVM injects an exception into a guest, it generates the PSTATE value from scratch, configuring PSTATE.{M[4:0],DAIF}, and setting all other bits to zero. This isn't correct, as the architecture specifies that some PSTATE bits are (conditionally) cleared or set upon an exception, and others are unchanged from the original context.

This patch adds logic to match the architectural behaviour. To make this simple to follow/audit/extend, documentation references are provided, and bits are configured in order of their layout in SPSR_EL2. This layout can be seen in the diagram on ARM DDI 0487E.a page C5-429.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200108134324.46500-2-mark.rutland@arm.com
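A much-simplified sketch of the shape of that logic, assuming an exception taken to EL1h (the real patch additionally derives PAN, SSBS, DIT and other bits from SCTLR_EL1 per the referenced ARM ARM pages):

  /* Sketch: build the new PSTATE from the old context plus the bits the
   * architecture mandates on exception entry, rather than from zero. */
  static unsigned long sketch_except64_pstate(unsigned long old)
  {
  	unsigned long new = 0;

  	new |= old & (PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT); /* NZCV preserved */
  	new |= PSR_D_BIT;				/* debug exceptions masked */
  	new |= PSR_A_BIT | PSR_I_BIT | PSR_F_BIT;	/* SError/IRQ/FIQ masked */
  	new |= PSR_MODE_EL1h;				/* target mode and SP */
  	return new;
  }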
-
Submitted by Russell King
Booting 5.4 on LX2160A reveals that KVM is non-functional:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP intersecting with HYP VA, unable to continue
  kvm [1]: error initializing Hyp mode: -22

Debugging shows:

  kvm [1]: IDMAP page: 81a26000
  kvm [1]: HYP VA range: 0:22ffffffff

as RAM is located at:

  80000000-fbdfffff : System RAM
  2080000000-237fffffff : System RAM

Comparing this with the same kernel on Armada 8040 shows:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP page: 2a26000
  kvm [1]: HYP VA range: 4800000000:493fffffff
  ...
  kvm [1]: Hyp mode initialized successfully

which indicates that hyp_va_msb is set, and is always set to the opposite value of the idmap page to avoid the overlap. This does not happen with the LX2160A.

Further debugging shows vabits_actual = 39, kva_msb = 38 on LX2160A and kva_msb = 33 on Armada 8040. Looking at the bit layout of the HYP VA, there is still one bit available for hyp_va_msb. Set this bit appropriately. This allows KVM to be functional on the LX2160A, but without any HYP VA randomisation:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP page: 81a24000
  kvm [1]: HYP VA range: 4000000000:62ffffffff
  ...
  kvm [1]: Hyp mode initialized successfully

Fixes: ed57cac8 ("arm64: KVM: Introduce EL2 VA randomisation")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
[maz: small additional cleanups, preserved case where the tag is legitimately 0 and we can just use the mask, Fixes tag]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/E1ilAiY-0000MA-RG@rmk-PC.armlinux.org.uk
-
- January 17, 2020 (1 commit)
-
Submitted by Ard Biesheuvel
In preparation of reserving x18, stop treating it as caller save in the KVM guest entry/exit code. Currently, the code assumes there is no need to preserve it for the host, given that it would have been assumed clobbered anyway by the function call to __guest_enter(). Instead, preserve its value and restore it upon return.

Link: https://patchwork.kernel.org/patch/9836891/
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Sami: updated commit message, switched from x18 to x29 for the guest context]
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- January 16, 2020 (3 commits)
-
Submitted by Steven Price
Cortex-A55 erratum 1530923 allows TLB entries to be allocated as a result of a speculative AT instruction. This may happen in the middle of a guest world switch while the relevant VMSA configuration is in an inconsistent state, leading to erroneous content being allocated into TLBs.

The same workaround as is used for Cortex-A76 erratum 1165522 (WORKAROUND_SPECULATIVE_AT_VHE) can be used here. Note that this mandates the use of VHE on affected parts.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Steven Price
To match SPECULATIVE_AT_VHE, let's also have a generic name for the NVHE variant.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Steven Price
Cortex-A55 is affected by a similar erratum, so rename the existing workaround for erratum 1165522 so it can be used for both errata.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- January 15, 2020 (2 commits)
-
Submitted by Anshuman Khandual
This adds the basic building blocks required for the ID_ISAR6 CPU register, which identifies support for various instruction implementations on the AArch32 state.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
[will: Ensure SPECRES is treated the same as on A64]
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Suzuki K Poulose
We detect the absence of FP/SIMD after an incapable CPU is brought up, and by then we already have kernel threads running with TIF_FOREIGN_FPSTATE set, which could also be set for early userspace applications (e.g. modprobe triggered from initramfs) and init. This could cause the applications to loop forever in do_notify_resume(), as we never clear the TIF flag once we know that we don't support FP.

Fix this by making sure that we clear the TIF_FOREIGN_FPSTATE flag for tasks which may have it set, as we would have done in the normal case, but avoiding touching the hardware state (since we don't support any).

Also, to make sure we handle the cases seamlessly, we categorise the helper functions into two groups:

1) Helpers for common core code, which take appropriate actions without knowing the current FP/SIMD state of the CPU/task, e.g. fpsimd_restore_current_state(), fpsimd_flush_task_state(), fpsimd_save_and_flush_cpu_state(). We bail out early from these functions, taking any appropriate actions (e.g. clearing the TIF flag) where necessary to hide the handling from core code (see the sketch after this commit).

2) Helpers used only when the presence of FP/SIMD is apparent, i.e. those that save/restore the FP/SIMD register state or modify the CPU/task FP/SIMD state:
   - fpsimd_save(), task_fpsimd_load(): save/restore task FP/SIMD registers
   - fpsimd_bind_task_to_cpu(), fpsimd_bind_state_to_cpu(): update the "state" metadata for the CPU/task
   - fpsimd_update_current_state(): update the FP/SIMD state for the current task from memory
   These must not be called in the absence of FP/SIMD, so put in a WARNING to make sure they are not invoked when FP/SIMD is absent.

KVM also uses the TIF_FOREIGN_FPSTATE flag to manage the FP/SIMD state on the CPU. However, without FP/SIMD support we trap all accesses and inject an undefined instruction; thus we should never "load" guest state. Add a sanity check to make sure this holds.

Fixes: 82e0191a ("arm64: Support systems without FP/ASIMD")
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
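A hedged sketch of the group-1 early-bail pattern, using the helper and flag names from the commit text:

  /* Sketch: core-code-facing helpers bail out early on FP/SIMD-less systems,
   * clearing the stale TIF flag without touching non-existent hardware state. */
  void fpsimd_restore_current_state(void)
  {
  	if (!system_supports_fpsimd()) {
  		clear_thread_flag(TIF_FOREIGN_FPSTATE);
  		return;
  	}
  	/* ... normal restore path for FP/SIMD-capable systems ... */
  }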
-
- December 12, 2019 (1 commit)
-
Submitted by Will Deacon
Commit 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()") introduced 'find_reg_by_id()', which looks up a system register only if the 'id' index parameter identifies a valid system register. As part of the patch, existing callers of 'find_reg()' were ported over to the new interface, but this breaks 'index_to_sys_reg_desc()' in the case that the initial lookup in the vCPU target table fails because we will then call into 'find_reg()' for the system register table with an uninitialised 'params' as the key to the lookup.

GCC 10 is bright enough to spot this (amongst a tonne of false positives, but hey!):

  | arch/arm64/kvm/sys_regs.c: In function ‘index_to_sys_reg_desc.part.0.isra’:
  | arch/arm64/kvm/sys_regs.c:983:33: warning: ‘params.Op2’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  |   983 |   (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2);
  | [...]

Revert the hunk of 4b927b94 which breaks 'index_to_sys_reg_desc()' so that the old behaviour of checking the index upfront is restored.

Fixes: 4b927b94 ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191212094049.12437-1-will@kernel.org
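The restored upfront check looks roughly like this; a sketch based on the commit description, with identifiers following arch/arm64/kvm/sys_regs.c:

  /* Sketch: validate and decode the index before any table lookup, so
   * find_reg() is never handed an uninitialised key. */
  static const struct sys_reg_desc *
  sketch_index_to_sys_reg_desc(u64 id, const struct sys_reg_desc table[],
  			     unsigned int num)
  {
  	struct sys_reg_params params;

  	/* Reject indices that don't encode a system register at all. */
  	if ((id & KVM_REG_ARM_COPROC_MASK) != KVM_REG_ARM64_SYSREG)
  		return NULL;

  	/* Decode Op0/Op1/CRn/CRm/Op2 up front; bail on a bogus encoding. */
  	if (!index_to_params(id, &params))
  		return NULL;

  	return find_reg(&params, table, num);
  }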
-
- December 7, 2019 (1 commit)
-
Submitted by Mark Rutland
We don't intend to support IMPLEMENTATION DEFINED system registers, but have to trap them (and emulate them as UNDEFINED). These traps aren't interesting to the system administrator or to the KVM developers, so let's not bother logging when we do so.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20191205180652.18671-3-mark.rutland@arm.com
-
- December 6, 2019 (2 commits)
-
Submitted by Sebastian Andrzej Siewior

compute_layout() is invoked as part of an alternative fixup under stop_machine(). This function invokes get_random_long(), which acquires a sleeping lock on -RT that can not be acquired in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before stop_machine() applies the alternatives. Add a __init prefix to kvm_compute_layout() because the caller has it, too (and so the code can be discarded after boot).

Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland
Currently kvm_pr_unimpl() is ratelimited, so print_sys_reg_instr() won't spam the console. However, some of its callers try to print some contextual information with kvm_err(), which is not ratelimited. This means that in some cases the context may be printed without the sysreg encoding, which isn't all that useful.

Let's ensure that both are consistently printed together and ratelimited, by refactoring print_sys_reg_instr() so that some callers can provide it with an arbitrary format string.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20191205180652.18671-2-mark.rutland@arm.com
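One plausible shape for that refactoring, sketched from the commit text (the signature and buffer size are illustrative):

  /* Sketch: fold the caller's context into the same ratelimited line as the
   * sysreg encoding, so neither can appear without the other. */
  static void print_sys_reg_msg(const struct sys_reg_params *p,
  			      char *fmt, ...)
  {
  	va_list va;
  	char msg[128];

  	va_start(va, fmt);
  	vsnprintf(msg, sizeof(msg), fmt, va);
  	va_end(va);

  	/* One ratelimited line: caller context plus the full encoding. */
  	kvm_pr_unimpl("%s sys_reg: Op0:%u Op1:%u CRn:%u CRm:%u Op2:%u\n",
  		      msg, p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
  }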
-
- October 28, 2019 (1 commit)
-
Submitted by Christian Borntraeger
ARM/ARM64 has counters halt_successful_poll, halt_attempted_poll, halt_poll_invalid, and halt_wakeup but never exposed those in debugfs.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1572164390-5851-1-git-send-email-borntraeger@de.ibm.com
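Exposing them is a matter of adding entries to the architecture's debugfs stats table; a sketch assuming the usual VCPU_STAT() helper:

  /* Sketch: wire the existing halt-polling counters into the generic
   * debugfs table, as other architectures already do. */
  struct kvm_stats_debugfs_item debugfs_entries[] = {
  	VCPU_STAT(halt_successful_poll),
  	VCPU_STAT(halt_attempted_poll),
  	VCPU_STAT(halt_poll_invalid),
  	VCPU_STAT(halt_wakeup),
  	{ NULL }
  };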
-
- October 26, 2019 (3 commits)
-
Submitted by Marc Zyngier
When handling erratum 1319367, we must ensure that the page table walker cannot parse the S1 page tables while the guest is in an inconsistent state. This is done as follows:

On guest entry:
- TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- all system registers are restored, except for TCR_EL1 and SCTLR_EL1
- stage-2 is restored
- SCTLR_EL1 and TCR_EL1 are restored

On guest exit:
- SCTLR_EL1.M and TCR_EL1.EPD{0,1} are set, ensuring that no PTW can occur
- stage-2 is disabled
- all host system registers are restored

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
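A hedged sketch of the entry-side masking step (capability and mask names as they appeared around this series; the workaround was later renamed SPECULATIVE_AT_NVHE, per the January 16 entries above):

  /* Sketch: mask stage-1 walks via TCR_EL1.EPD{0,1} while the rest of the
   * guest sysreg/stage-2 state is still being restored. */
  if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
  	u64 val = ctxt->sys_regs[TCR_EL1] | TCR_EPD1_MASK | TCR_EPD0_MASK;

  	write_sysreg_el1(val, SYS_TCR);
  	isb();
  }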
-
Submitted by Marc Zyngier
When erratum 1319367 is being worked around, special care must be taken not to allow the page table walker to populate TLBs while we have the stage-2 translation enabled (which would otherwise result in a bizarre mix of the host S1 and the guest S2).

We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2 configuration, and clearing the same bits after having disabled S2.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier
In order to prepare for handling erratum 1319367, we need to make sure that all system registers (and most importantly the registers configuring the virtual memory) are set before we enable stage-2 translation. This results in a minor reorganisation of the load sequence, without any functional change.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- October 24, 2019 (1 commit)
-
Submitted by Steven Price
SCHEDSTATS requires DEBUG_KERNEL (and PROC_FS) and therefore isn't a good choice for enabling the scheduling statistics required for stolen time. Instead, match the x86 configuration and select TASK_DELAY_ACCT and TASKSTATS. This adds dependencies on NET && MULTIUSER for arm64 KVM.

Suggested-by: Marc Zyngier <maz@kernel.org>
Fixes: 8564d637 ("KVM: arm64: Support stolen time reporting via shared structure")
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- October 22, 2019 (5 commits)
-
Submitted by Steven Price
Allow user space to inform the KVM host where in the physical memory map the paravirtualized time structures should be located.

User space can set an attribute on the VCPU providing the IPA base address of the stolen time structure for that VCPU. This must be repeated for every VCPU in the VM.

The address is given in terms of the physical address visible to the guest and must be 64 byte aligned. The guest will discover the address via a hypercall.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
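From the VMM side, setting that attribute would look roughly like this (a sketch; the constant names follow the PV-time UAPI added by this series, and the IPA value is a hypothetical 64-byte-aligned guest address):

  /* Sketch: tell KVM where in guest IPA space this vCPU's stolen-time
   * structure lives, one KVM_SET_DEVICE_ATTR call per vCPU. */
  __u64 ipa = 0x90000000;	/* hypothetical, 64-byte aligned */
  struct kvm_device_attr attr = {
  	.group = KVM_ARM_VCPU_PVTIME_CTRL,
  	.attr  = KVM_ARM_VCPU_PVTIME_IPA,
  	.addr  = (__u64)&ipa,
  };

  if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
  	err(1, "KVM_SET_DEVICE_ATTR");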
-
Submitted by Steven Price
Implement the service call for configuring a shared structure between a VCPU and the hypervisor in which the hypervisor can write the time stolen from the VCPU's execution time by other tasks on the host.

User space allocates memory which is placed at an IPA also chosen by user space. The hypervisor then updates the shared structure using kvm_put_guest() to ensure single copy atomicity of the 64-bit value reporting the stolen time in nanoseconds. Whenever stolen time is enabled by the guest, the stolen time counter is reset.

The stolen time itself is retrieved from the sched_info structure maintained by the Linux scheduler code. We enable SCHEDSTATS when selecting KVM Kconfig to ensure this value is meaningful.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
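The update path, sketched under the same assumptions (struct and field names per the PV-time ABI header this series adds; the base IPA is the one configured by user space above):

  /* Sketch: publish the accumulated stolen time into the shared structure
   * with single copy atomicity, via kvm_put_guest(). */
  static void sketch_update_stolen_time(struct kvm_vcpu *vcpu, u64 steal_ns)
  {
  	u64 base = vcpu->arch.steal.base;
  	u64 offset = offsetof(struct pvclock_vcpu_stolen_time, stolen_time);

  	kvm_put_guest(vcpu->kvm, base + offset,
  		      cpu_to_le64(steal_ns), u64);
  }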
-
Submitted by Steven Price
This provides a mechanism for querying which paravirtualized time features are available in this hypervisor.

Also add the header file which defines the ABI for the paravirtualized time features we're about to add.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Christoffer Dall
We currently intertwine the KVM PSCI implementation with the general dispatch of hypercall handling, which makes perfect sense because PSCI is the only category of hypercalls we support. However, as we are about to support additional hypercalls, factor out this functionality into a separate hypercall handler file.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[steven.price@arm.com: rebased]
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Christoffer Dall
In some scenarios, such as a buggy guest or incorrect configuration of the VMM and firmware description data, userspace will detect a memory access to a portion of the IPA which is not mapped to any MMIO region. For this purpose, the appropriate action is to inject an external abort to the guest.

The kernel already has functionality to inject an external abort, but we need a way for user space to tell the kernel to do this. It turns out we already have the set-event functionality, which we can reuse perfectly for this.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
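On the user-space side, reusing the event interface looks roughly like this (a sketch; the pending-abort flag follows the arm64 kvm_vcpu_events ABI this commit extends):

  /* Sketch: ask KVM to inject an external data abort into the guest for an
   * IPA access that maps to no MMIO region. */
  struct kvm_vcpu_events events = { 0 };

  events.exception.ext_dabt_pending = 1;
  if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events))
  	err(1, "KVM_SET_VCPU_EVENTS");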
-
- October 20, 2019 (1 commit)
-
Submitted by Marc Zyngier
Of PMCR_EL0.LC, the ARMv8 ARM says: "In an AArch64 only implementation, this field is RES1." So be it.

Fixes: ab946834 ("arm64: KVM: Add access handler for PMCR register")
Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- October 15, 2019 (1 commit)
-
Submitted by Marc Zyngier
The GICv3 architecture specification is incredibly misleading when it comes to PMR and the requirement for a DSB. It turns out that this DSB is only required if the CPU interface sends an Upstream Control message to the redistributor in order to update the RD's view of PMR.

This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't the case in Linux. It can still be set from EL3, so some special care is required. But the upshot is that in the (hopefully large) majority of the cases, we can drop the DSB altogether.

This relies on a new static key being set if the boot CPU has PMHE set. The drawback is that this static key has to be exported to modules.

Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
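The static-key gating amounts to something like the following sketch (the key name gic_pmr_sync matches this series; the wrapper name is illustrative):

  /* Sketch: only pay for the dsb(sy) after a PMR write when the boot CPU
   * reported ICC_CTLR_EL1.PMHE as set. */
  DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);
  EXPORT_SYMBOL(gic_pmr_sync);	/* needed by modules, as noted above */

  static inline void sketch_pmr_sync(void)
  {
  	if (static_branch_unlikely(&gic_pmr_sync))
  		dsb(sy);
  }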
-
- October 8, 2019 (1 commit)
-
Submitted by Marc Zyngier
In order to work around the TX2-219 erratum, it is necessary to trap TTBRx_EL1 accesses to EL2. This is done by setting HCR_EL2.TVM on guest entry, which has the side effect of trapping all the other VM-related sysregs as well.

To minimize the overhead, a fast path is used so that we don't have to go all the way back to the main sysreg handling code, unless the rest of the hypervisor expects to see these accesses.

Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
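A trimmed sketch of such a fast path (shape per the commit text; only two of the trapped VM registers are shown, and the function name is illustrative):

  /* Sketch: emulate trapped writes to VM-related sysregs directly in the
   * hypervisor, falling back to the full exit path for anything else. */
  static bool sketch_handle_tx2_tvm(struct kvm_vcpu *vcpu)
  {
  	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
  	u64 val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));

  	switch (sysreg) {
  	case SYS_TTBR0_EL1:
  		write_sysreg_el1(val, SYS_TTBR0);
  		break;
  	case SYS_TTBR1_EL1:
  		write_sysreg_el1(val, SYS_TTBR1);
  		break;
  	/* ... remaining VM registers trapped by HCR_EL2.TVM ... */
  	default:
  		return false;	/* not ours: take the slow path */
  	}

  	__kvm_skip_instr(vcpu);	/* the write has been emulated */
  	return true;
  }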
-
- September 10, 2019 (2 commits)
-
Submitted by Marc Zyngier
Given that the TLB invalidation path is pretty rarely used, there was never any advantage to using hyp_alternate_select() here. has_vhe(), being a glorified static key, is the right tool for the job.

Off you go.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
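The replacement pattern, sketched (the helper names are illustrative of the VHE/nVHE TLB routines being selected between):

  /* Sketch: a plain static-key-backed branch replaces the KVM-specific
   * hyp_alternate_select() indirection. */
  if (has_vhe())
  	__tlb_switch_to_guest_vhe(kvm, &cxt);
  else
  	__tlb_switch_to_guest_nvhe(kvm, &cxt);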
-
Submitted by Marc Zyngier
There is no reason for using hyp_alternate_select() when checking for ARM64_WORKAROUND_834220, as each of the capabilities is also backed by a static key. Just replace the KVM-specific construct with cpus_have_const_cap(ARM64_WORKAROUND_834220).

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
-
- August 28, 2019 (1 commit)
-
Submitted by Will Deacon
Now that we have a definition for the 'F' field of PAR_EL1, use that instead of coding the immediate directly.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- August 19, 2019 (1 commit)
-
Submitted by Mark Rutland
For VPIPT I-caches, we need I-cache maintenance on VMID rollover to avoid an ABA problem. Consider a single vCPU VM, with a pinned stage-2, running with an idmap VA->IPA and idmap IPA->PA. If we don't do maintenance on rollover:

  // VMID A
  Writes insn X to PA 0xF
  Invalidates PA 0xF (for VMID A)
  I$ contains [{A,F}->X]

  [VMID ROLLOVER]

  // VMID B
  Writes insn Y to PA 0xF
  Invalidates PA 0xF (for VMID B)
  I$ contains [{A,F}->X, {B,F}->Y]

  [VMID ROLLOVER]

  // VMID A
  I$ contains [{A,F}->X, {B,F}->Y]
  Unexpectedly hits stale I$ line {A,F}->X

However, for PIPT and VIPT I-caches, the VMID doesn't affect lookup or constrain maintenance. Given the VMID doesn't affect PIPT and VIPT I-caches, and given VMID rollover is independent of changes to stage-2 mappings, I-cache maintenance cannot be necessary on VMID rollover for PIPT or VIPT I-caches.

This patch removes the maintenance on rollover for VIPT and PIPT I-caches. At the same time, the unnecessary colons are removed from the asm statement to make it more legible.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Marc Zyngier <maz@kernel.org>
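The resulting rollover-time maintenance, sketched (condition per the commit text; the actual hunk lives in the hyp VM-context flush path):

  /* Sketch: on VMID rollover, only VPIPT I-caches need invalidating, since
   * only there can stale VMID-tagged lines cause the ABA problem above. */
  if (icache_is_vpipt())
  	asm volatile("ic ialluis");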
-
- August 9, 2019 (2 commits)
-
Submitted by Steve Capper
In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable vabits_actual is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation, where there would normally be compile time constants.

In order to maintain performance in phys_to_virt, another variable physvirt_offset is introduced.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
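The idea behind physvirt_offset, sketched as a hedged approximation of the linear-map translation macros (the sign convention, offset = PHYS_OFFSET - PAGE_OFFSET, is assumed):

  /* Sketch: with boot-time-variable VA_BITS the linear-map translation can
   * no longer fold into compile-time constants, so a runtime offset is used. */
  extern s64 physvirt_offset;	/* PHYS_OFFSET - PAGE_OFFSET, set at boot */

  #define __phys_to_virt_sketch(x)  ((unsigned long)((x) - physvirt_offset))
  #define __virt_to_phys_sketch(x)  ((phys_addr_t)(x) + physvirt_offset)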
-
Submitted by Marc Zyngier
At the moment, the way we reset system registers is mildly insane: we write junk to them, call the reset functions, and then check that we have something else in them. The "fun" thing is that this can happen while the guest is running (PSCI, for example). If anything in KVM has to evaluate the state of a system register while junk is in there, bad things may happen.

Let's stop doing that. Instead, we track that we have called a reset function for that register, and assume that the reset function has done something. This requires fixing a couple of sysreg definitions in the trap table.

In the end, the very need of this reset check is pretty dubious, as it doesn't check everything (a lot of the sysregs live outside of the sys_regs[] array). It may well be axed in the near future.

Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- July 29, 2019 (1 commit)
-
Submitted by Anders Roxell
When fall-through warnings were enabled by default, the following warnings started to show up:

  ../arch/arm64/kvm/hyp/debug-sr.c: In function ‘__debug_save_state’:
  ../arch/arm64/kvm/hyp/debug-sr.c:20:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
    case 15: ptr[15] = read_debug(reg, 15); \
  ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
    save_debug(dbg->dbg_bcr, dbgbcr, brps);
    ^~~~~~~~~~
  ../arch/arm64/kvm/hyp/debug-sr.c:21:2: note: here
    case 14: ptr[14] = read_debug(reg, 14); \
    ^~~~
  ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
    save_debug(dbg->dbg_bcr, dbgbcr, brps);
    ^~~~~~~~~~
  ../arch/arm64/kvm/hyp/debug-sr.c:21:19: warning: this statement may fall through [-Wimplicit-fallthrough=]
    case 14: ptr[14] = read_debug(reg, 14); \
  ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
    save_debug(dbg->dbg_bcr, dbgbcr, brps);
    ^~~~~~~~~~
  ../arch/arm64/kvm/hyp/debug-sr.c:22:2: note: here
    case 13: ptr[13] = read_debug(reg, 13); \
    ^~~~
  ../arch/arm64/kvm/hyp/debug-sr.c:113:2: note: in expansion of macro ‘save_debug’
    save_debug(dbg->dbg_bcr, dbgbcr, brps);
    ^~~~~~~~~~

Rework to add a 'Fall through' comment where the compiler warned about fall-through, hence silencing the warning.

Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
[maz: fixed commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- July 26, 2019 (1 commit)
-
Submitted by Anders Roxell
When fall-through warnings were enabled by default in commit d93512ef0f0e ("Makefile: Globally enable fall-through warning"), the following warnings started to show up:

  In file included from ../arch/arm64/include/asm/kvm_emulate.h:19,
                   from ../arch/arm64/kvm/regmap.c:13:
  ../arch/arm64/kvm/regmap.c: In function ‘vcpu_write_spsr32’:
  ../arch/arm64/include/asm/kvm_hyp.h:31:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
    asm volatile(ALTERNATIVE(__msr_s(r##nvh, "%x0"), \
    ^~~
  ../arch/arm64/include/asm/kvm_hyp.h:46:31: note: in expansion of macro ‘write_sysreg_elx’
    #define write_sysreg_el1(v,r) write_sysreg_elx(v, r, _EL1, _EL12)
    ^~~~~~~~~~~~~~~~
  ../arch/arm64/kvm/regmap.c:180:3: note: in expansion of macro ‘write_sysreg_el1’
    write_sysreg_el1(v, SYS_SPSR);
    ^~~~~~~~~~~~~~~~
  ../arch/arm64/kvm/regmap.c:181:2: note: here
    case KVM_SPSR_ABT:
    ^~~~
  In file included from ../arch/arm64/include/asm/cputype.h:132,
                   from ../arch/arm64/include/asm/cache.h:8,
                   from ../include/linux/cache.h:6,
                   from ../include/linux/printk.h:9,
                   from ../include/linux/kernel.h:15,
                   from ../include/asm-generic/bug.h:18,
                   from ../arch/arm64/include/asm/bug.h:26,
                   from ../include/linux/bug.h:5,
                   from ../include/linux/mmdebug.h:5,
                   from ../include/linux/mm.h:9,
                   from ../arch/arm64/kvm/regmap.c:11:
  ../arch/arm64/include/asm/sysreg.h:837:2: warning: this statement may fall through [-Wimplicit-fallthrough=]
    asm volatile("msr " __stringify(r) ", %x0" \
    ^~~
  ../arch/arm64/kvm/regmap.c:182:3: note: in expansion of macro ‘write_sysreg’
    write_sysreg(v, spsr_abt);
    ^~~~~~~~~~~~
  ../arch/arm64/kvm/regmap.c:183:2: note: here
    case KVM_SPSR_UND:
    ^~~~

Rework to add a 'break;' in the switch-case since it didn't have one, leading to an interesting set of bugs.

Cc: stable@vger.kernel.org # v4.17+
Fixes: a8928195 ("KVM: arm64: Prepare to handle deferred save/restore of 32-bit registers")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
[maz: reworked commit message, fixed stable range]
Signed-off-by: Marc Zyngier <maz@kernel.org>
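The fixed switch, sketched from the warnings' context in vcpu_write_spsr32() (only the two cases named in the warnings are shown):

  /* Sketch: each case now terminates explicitly instead of falling into the
   * next SPSR bank's write. */
  switch (spsr_idx) {
  case KVM_SPSR_SVC:
  	write_sysreg_el1(v, SYS_SPSR);
  	break;			/* previously fell through into ABT */
  case KVM_SPSR_ABT:
  	write_sysreg(v, spsr_abt);
  	break;			/* previously fell through into UND */
  /* ... KVM_SPSR_UND, KVM_SPSR_IRQ, KVM_SPSR_FIQ ... */
  }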
-
- July 5, 2019 (1 commit)
-
Submitted by Dave Martin
Currently, the {read,write}_sysreg_el*() accessors for accessing particular ELs' sysregs in the presence of VHE rely on some local hacks and define their system register encodings in a way that is inconsistent with the core definitions in <asm/sysreg.h>.

As a result, it is necessary to add duplicate definitions for any system register that already needs a definition in sysreg.h for other reasons. This is a bit of a maintenance headache, and the reasons for the _el*() accessors working the way they do are a bit historical.

This patch gets rid of the shadow sysreg definitions in <asm/kvm_hyp.h>, converts the _el*() accessors to use the core __msr_s/__mrs_s interface, and converts all call sites to use the standard sysreg #define names (i.e., upper case, with the SYS_ prefix).

This patch will conflict heavily anyway, so the opportunity to clean up some bad whitespace in the context of the changes is taken. The change exposes a few system registers that have no sysreg.h definition, due to msr_s/mrs_s being used in place of msr/mrs: additions are made in order to fill in the gaps.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
[Rebased to v4.21-rc1]
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[Rebased to v5.2-rc5, changelog updates]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-