- 21 April 2016, 3 commits
By Suzuki K Poulose

Now that we can handle stage-2 page tables independently of the host page table levels, wire up the 16K page support.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
-
By Suzuki K Poulose

We share most of the bits of VTCR_EL2 across the different page sizes, except for the TG0 value and the entry-level value. This patch makes the definitions a bit cleaner to reflect this fact. Also cleans up the VTTBR_X calculation. No functional changes.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
-
By Suzuki K Poulose

TCR_EL1, TCR_EL2 and VTCR_EL2 all share some field positions (TG0, ORGN0, IRGN0 and SH0) and their corresponding value definitions. This patch makes the TCR_EL1 definitions reusable and uses them for the TCR_EL2 and VTCR_EL2 fields. It also fixes a bug where we assumed TG0 in {V}TCR_EL2 is a 1-bit field.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
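For illustration only, a sketch of how the shared granule field can be defined once and reused (macro names and values follow the ARMv8 layout but are not quoted from the patch); it also shows why a 1-bit assumption about TG0 is wrong:

```c
/* TG0 is a 2-bit field at bit 14, so a mask and per-granule values are needed;
 * a 1-bit definition would lose the 16K encoding (0b10). */
#define TCR_TG0_SHIFT		14
#define TCR_TG0_MASK		(0x3UL << TCR_TG0_SHIFT)
#define TCR_TG0_4K		(0x0UL << TCR_TG0_SHIFT)
#define TCR_TG0_64K		(0x1UL << TCR_TG0_SHIFT)
#define TCR_TG0_16K		(0x2UL << TCR_TG0_SHIFT)

/* TCR_EL2 and VTCR_EL2 can then reuse the TCR_EL1 encodings for the shared fields. */
#define VTCR_EL2_TG0_MASK	TCR_TG0_MASK
#define VTCR_EL2_TG0_16K	TCR_TG0_16K
```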
-
- 06 April 2016, 1 commit
By Marc Zyngier

We always thought that 40 bits of PA range would be the minimum people would actually build. Anything less is terrifyingly small.

Turns out that we were both right and wrong. Nobody has ever built such a system, but the ARM Foundation Model has a PARange set to 36 bits. Just because we can. Oh well.

Now, the KVM API explicitly says that we offer a 40-bit PA space to the VM, so we shouldn't run KVM on the Foundation Model at all. That being said, this patch offers a less aggressive alternative, and loudly warns about the configuration being unsupported. You'll still be able to run VMs (at your own risk, though).

This is just a workaround until we have a proper userspace API where we report the PARange to userspace.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
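A rough sketch of the kind of check described above (not the actual patch; read_sysreg(), pr_warn() and ARRAY_SIZE() are the usual kernel helpers, the rest is assumed for illustration):

```c
/* PARange lives in ID_AA64MMFR0_EL1[3:0]; translate it to a bit count and warn
 * loudly when the CPU cannot back the 40-bit IPA space KVM advertises. */
static const unsigned int pa_range_bits[] = { 32, 36, 40, 42, 44, 48 };

static void check_kvm_ipa_limit(void)
{
	unsigned int parange = read_sysreg(id_aa64mmfr0_el1) & 0xf;

	if (parange < ARRAY_SIZE(pa_range_bits) && pa_range_bits[parange] < 40)
		pr_warn("KVM: only %u bits of PA range; the 40-bit IPA space is unsupported\n",
			pa_range_bits[parange]);
}
```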
-
- 31 March 2016, 1 commit
By Suzuki K Poulose

When we detect support for 16-bit VMIDs in ID_AA64MMFR1, we set the VTCR_EL2_VS field to 1 to make use of 16-bit VMIDs. But with commit 3a3604bc ("arm64: KVM: Switch to C-based stage2 init") this is broken, and we corrupt VTCR_EL2.T0SZ instead of updating the VS field. VTCR_EL2_VS was actually defined as the field shift (19) and not the real value for VS. This patch fixes the issue.

Fixes: commit 3a3604bc ("arm64: KVM: Switch to C-based stage2 init")
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
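To make the shift-versus-value confusion concrete, a before/after sketch (macro names follow the commit message; the exact upstream definitions may differ slightly):

```c
/* Before: the macro held the bit position, so OR-ing it in wrote the value 19
 * (0b10011) into the low bits of VTCR_EL2, corrupting T0SZ[5:0]. */
#define VTCR_EL2_VS		19
/* vtcr |= VTCR_EL2_VS;        corrupts T0SZ instead of setting bit 19 */

/* After: keep the shift and the field values separate. */
#define VTCR_EL2_VS_SHIFT	19
#define VTCR_EL2_VS_8BIT	(0UL << VTCR_EL2_VS_SHIFT)
#define VTCR_EL2_VS_16BIT	(1UL << VTCR_EL2_VS_SHIFT)
/* vtcr |= VTCR_EL2_VS_16BIT;  sets only VTCR_EL2.VS */
```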
-
- 05 March 2016, 1 commit
By Adam Buchbinder

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 March 2016, 2 commits
By Marc Zyngier

Running the kernel in HYP mode requires the HCR_E2H bit to be set at all times, and the HCR_TGE bit to be set when running as a host (and cleared when running as a guest). At the same time, the vector must be set to the current role of the kernel (either host or hypervisor), and a couple of system registers differ between VHE and non-VHE.

We implement these by using another set of alternate functions that get dynamically patched.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Marc Zyngier

As non-VHE and VHE have different ways to express the trapping of FPSIMD registers to EL2, make __fpsimd_enabled a patchable predicate and provide a VHE implementation.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 11 February 2016, 1 commit
By Tirumalesh Chalamarla

Setting TCR_EL2.PS to 40 bits is wrong on systems with less than 40 bits of physical address, and breaks KVM on systems where the RAM is above 40 bits. This patch uses ID_AA64MMFR0_EL1.PARange to set TCR_EL2.PS dynamically, just like we already do for VTCR_EL2.PS.

[Marc: rewrote commit message, patch tidy up]

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
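A sketch of the dynamic setup (field positions follow the ARMv8 ARM; the function and its call site are assumptions for illustration, not the patch itself):

```c
/* Program TCR_EL2.PS (bits [18:16]) from ID_AA64MMFR0_EL1.PARange instead of
 * hard-coding a 40-bit physical address size. Runs at EL2 during hyp init. */
static void set_tcr_el2_ps(void)
{
	u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
	u64 tcr   = read_sysreg(tcr_el2);

	tcr &= ~(0x7UL << 16);          /* clear PS */
	tcr |= (mmfr0 & 0x7UL) << 16;   /* insert the implemented PA range */
	write_sysreg(tcr, tcr_el2);
}
```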
-
- 25 January 2016, 1 commit
By Dave Martin

Some bits in CPTR are defined as RES1 in the architecture. Setting these bits to zero may unintentionally enable future architecture extensions, allowing guests to use them without supervision by the host. This would be bad: for forwards compatibility, this patch makes sure the affected bits are always written with 1, not 0.

This patch only addresses CPTR_EL2. Initialisation of other system registers may still need review.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
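For illustration, the forwards-compatible pattern could look like the sketch below (the RES1 mask shown is an assumption for the example, not an authoritative list of reserved bits):

```c
/* Never write plain zeroes to CPTR_EL2: start every write from the RES1 mask
 * so reserved-as-one bits stay set as the architecture grows new fields. */
#define CPTR_EL2_RES1	(GENMASK(13, 12) | GENMASK(9, 0))	/* assumed RES1 bits */

static void init_cptr_el2(void)
{
	write_sysreg(CPTR_EL2_RES1, cptr_el2);	/* traps off, RES1 preserved */
}
```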
-
- 18 December 2015, 1 commit
By Vladimir Murzin

The ARMv8.1 architecture extension allows choosing between 8-bit and 16-bit VMIDs, so use this capability in KVM.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
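A sketch of the capability check (the helper name is hypothetical; upstream reads the same ID register field before deciding on the VMID width):

```c
/* Use 16-bit VMIDs only when ID_AA64MMFR1_EL1.VMIDBits (bits [7:4], value
 * 0b0010) advertises them; otherwise stay with the 8-bit default. */
static unsigned int kvm_vmid_width(void)
{
	u64 mmfr1 = read_sysreg(id_aa64mmfr1_el1);

	return (((mmfr1 >> 4) & 0xf) == 2) ? 16 : 8;
}
```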
-
- 23 October 2015, 1 commit
By Christoffer Dall

The ARM architecture only saves the exit class to the HSR (ESR_EL2 for arm64) on synchronous exceptions, not on asynchronous exceptions like an IRQ. However, we only report the exception class on kvm_exit, which is confusing because an IRQ looks like it exited at some PC with the same reason as the previous exit.

Add a lookup table for the exception index and prepend the kvm_exit tracepoint text with the exception type to clarify this situation. Also resolve the exception class (EC) to a human-friendly text version so the trace output becomes immediately usable for debugging this code.

Cc: Wei Huang <wei@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
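A partial sketch of such a lookup (only a handful of exception classes shown; the ESR_ELx_EC_* names are the generic ones from asm/esr.h, and the table itself is illustrative rather than the patch's):

```c
/* Map ESR_ELx.EC values to readable names so the kvm_exit tracepoint can print
 * e.g. "DABT_LOW" instead of a bare number. */
static const char *const kvm_ec_names[] = {
	[ESR_ELx_EC_WFx]	= "WFI/WFE",
	[ESR_ELx_EC_HVC64]	= "HVC64",
	[ESR_ELx_EC_SYS64]	= "SYS64",
	[ESR_ELx_EC_IABT_LOW]	= "IABT_LOW",
	[ESR_ELx_EC_DABT_LOW]	= "DABT_LOW",
};
```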
-
- 17 September 2015, 1 commit
By Will Deacon

Although the ThumbEE registers and traps were present in earlier versions of the v8 architecture, they were retrospectively removed, so we can do the same. Whilst this breaks migrating a guest started on a previous version of the kernel, it is much better to kill these (non-existent) registers as soon as possible.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[maz: added comment about migration]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 04 September 2015, 1 commit
By Mark Rutland

Currently we don't set the RES1 bits of TCR_EL2 and VTCR_EL2 when configuring them, which could lead to unexpected behaviour when an architectural meaning is defined for those bits. Set the RES1 bits to avoid issues.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 20 August 2015, 1 commit
By Mario Smarduch

This patch only saves and restores the FP/SIMD registers on guest access. To do this, the cptr_el2 FP/SIMD trap is set on guest entry and later checked on exit. lmbench and hackbench show significant improvements; for 30-50% of exits the FP/SIMD context is not saved/restored.

[chazy/maz: fixed save/restore logic for 32bit guests]

Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
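A very rough sketch of the lazy-switch idea (the helpers and the way guest state is reached are illustrative, not the upstream implementation; CPTR_EL2_TFP is the FP/SIMD trap bit):

```c
/* On guest entry: arm the FP/SIMD trap so we can tell whether the guest
 * touches the registers at all during this run. */
static void vcpu_entry_arm_fpsimd_trap(void)
{
	write_sysreg(read_sysreg(cptr_el2) | CPTR_EL2_TFP, cptr_el2);
}

/* On guest exit: the EL2 trap handler clears TFP when the guest first uses
 * FP/SIMD, so a cleared bit means the guest state is live and must be saved. */
static void vcpu_exit_fpsimd(struct kvm_vcpu *vcpu)
{
	if (!(read_sysreg(cptr_el2) & CPTR_EL2_TFP))
		save_guest_fpsimd_state(vcpu);	/* hypothetical helper */
}
```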
-
- 13 March 2015, 1 commit
By Marc Zyngier

Until now, KVM/arm didn't care much for page aging (who was swapping anyway?), and simply provided empty hooks to the core KVM code. With server-type systems now being available, things are quite different.

This patch implements very simple support for page aging, by clearing the Access flag in the Stage-2 page tables. On an access fault, the current fault handling will write the PTE or PMD again, putting the Access flag back on.

It should be possible to implement a much faster handling for Access faults, but that's left for a later patch. With this in place, performance in VMs is degraded much more gracefully.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
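A simplified sketch of the aging primitive (the real code walks the stage-2 tables and handles PMDs too; pte_young() and pte_mkold() are the standard kernel helpers):

```c
/* Clearing the Access flag makes the next guest access take an access fault;
 * the fault handler rewrites the entry with AF set, so finding AF set on the
 * next scan means the page was referenced in the meantime. */
static int stage2_age_pte(pte_t *ptep)
{
	if (!pte_young(*ptep))
		return 0;		/* not referenced since the last scan */

	*ptep = pte_mkold(*ptep);	/* clear AF; next access will fault */
	return 1;
}
```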
-
- 11 March 2015, 1 commit
By Marc Zyngier

Commit 87366d8c ("arm64: Add boot time configuration of Intermediate Physical Address size") removed the hardcoded setting of VTCR_EL2.PS in favour of using ID_AA64MMFR0_EL1.PARange, but didn't remove the (now rather misleading) comment. Fix the comments to match reality (at least for the next few minutes).

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 15 January 2015, 2 commits
By Mark Rutland

Now that all users have been moved over to the common ESR_ELx_* macros, remove the redundant ESR_EL2 macros. To maintain compatibility with the fault handling code shared with 32-bit, the FSC_{FAULT,PERM} macros are retained as aliases for the common ESR_ELx_FSC_{FAULT,PERM} definitions. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
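The retained aliases, written out as the commit describes them (a sketch of the shape, not quoted from the patch):

```c
/* Compatibility aliases so the fault-handling code shared with 32-bit ARM
 * can keep using the short FSC_* names. */
#define FSC_FAULT	ESR_ELx_FSC_FAULT
#define FSC_PERM	ESR_ELx_FSC_PERM
```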
-
By Wei Huang

arm64 uses its own copy of the exit handler (arm64/kvm/handle_exit.c). Currently this file doesn't hook up with any trace points. As a result, users might not see certain events (e.g. HVC and WFI) while using ftrace with arm64 KVM. This patch fixes the issue by adding a new trace file and defining two trace events (one of which is shared by wfi and wfe) for arm64. The new trace points are then linked with the related functions in handle_exit.c.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 07 November 2014, 1 commit
By Geoff Levand

Some of the macros defined in kvm_arm.h are useful in assembly files, but are not compatible with the assembler. Change any C language integer constant definitions using appended U, UL, or ULL to the UL() preprocessor macro. Also, add a preprocessor include of the asm/memory.h file which defines the UL() macro.

Fixes build errors like these when using kvm_arm.h in assembly source files:

Error: unexpected characters following instruction at operand 3 -- `and x0,x1,#((1U<<25)-1)'

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
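To illustrate the mechanism (the _AC()/UL() definitions below follow the standard kernel pattern and are shown for context, not quoted from this patch):

```c
/* The suffix is appended only when compiling C; the assembler sees a plain
 * constant. This is what lets one header serve both languages. */
#ifdef __ASSEMBLY__
#define _AC(X, Y)	X
#else
#define __AC(X, Y)	(X##Y)
#define _AC(X, Y)	__AC(X, Y)
#endif
#define UL(x)		_AC(x, UL)

/* Before: ((1U << 25) - 1) reaches the assembler as "1U" and fails to parse.
 * After:  ((UL(1) << 25) - 1) expands to "1" in assembly and "1UL" in C.   */
```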
-
- 26 September 2014, 1 commit
By Joel Schopp

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range.

[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40-bit IPA range instead of the maximum possible 48-bit PA range. - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
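A sketch of the corrected mask derivation, stated the way the note above describes it (the exact macro text in the patch may differ):

```c
/* VTTBR_EL2.BADDR covers bits [47:x], where x (VTTBR_X) depends on the page
 * size and the initial lookup level; derive the mask from the full PA range
 * (PHYS_MASK_SHIFT) rather than masking a hard-coded 39 bits. */
#define VTTBR_BADDR_SHIFT	(VTTBR_X - 1)
#define VTTBR_BADDR_MASK	(((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
```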
-
- 11 July 2014, 1 commit
By Marc Zyngier

GICv3 requires the IMO and FMO bits to be tightly coupled with some of the interrupt controller's register switching. In order to have similar code paths, move the manipulation of these bits to the GICv2 switch code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 13 March 2014, 1 commit
By Radha Mohan Chintakuntla

ARMv8 supports a range of physical address bit sizes. The PARange bits from the ID_AA64MMFR0_EL1 register are read during boot time, and the intermediate physical address size bits are written into the translation control registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits or levels of translation.

Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Reviewed-by: Will Deacon <Will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 March 2014, 1 commit
By Marc Zyngier

In order to be able to detect the point where the guest enables its MMU and caches, trap all the VM related system registers. Once we see the guest enabling both the MMU and the caches, we can go back to a saner mode of operation, which is to leave these registers in complete control of the guest.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
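A rough sketch of the flow (the helper and the per-vcpu field are hypothetical stand-ins; HCR_TVM is the trap bit for VM register accesses): keep the trap set until SCTLR_EL1 shows both the MMU and the data cache enabled, then drop it:

```c
/* Once the guest has turned on both the MMU (SCTLR_EL1.M) and the data cache
 * (SCTLR_EL1.C), stop trapping its VM register accesses. */
static void maybe_stop_trapping_vm_regs(struct kvm_vcpu *vcpu, u64 guest_sctlr)
{
	const u64 mmu_and_cache = (1UL << 0) | (1UL << 2);	/* M and C bits */

	if ((guest_sctlr & mmu_and_cache) == mmu_and_cache)
		vcpu->arch.hcr_el2 &= ~HCR_TVM;			/* hypothetical field */
}
```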
-
- 05 February 2014, 1 commit
By Mark Rutland

Somehow SERROR has acquired an additional 'R' in a couple of headers. This patch removes them before they spread further. As neither instance is in use yet, no other sites need to be fixed up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 October 2013, 1 commit
By Marc Zyngier

On an (even slightly) oversubscribed system, spinlocks quickly become a bottleneck, as some vcpus are spinning, waiting for a lock to be released, while the vcpu holding the lock may not be running at all. The solution is to trap blocking WFEs and tell KVM that we're now spinning. This ensures that other vcpus will get a scheduling boost, allowing the lock to be released more quickly.

Also, using CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves the performance when the VM is severely overcommitted.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
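A simplified sketch of the resulting trap handling (names approximate the upstream code of that era, and the ISS decoding is reduced to the single WFE/WFI bit; treat it as illustrative only):

```c
/* With HCR_EL2.TWE set, a guest WFE traps to the hypervisor. A spinning vcpu
 * then yields, giving the lock holder a chance to run; WFI keeps blocking. */
static int handle_wfx(struct kvm_vcpu *vcpu)
{
	if (kvm_vcpu_get_hsr(vcpu) & 1)		/* ISS bit 0: 1 = WFE, 0 = WFI */
		kvm_vcpu_on_spin(vcpu);		/* hint: we are busy-waiting */
	else
		kvm_vcpu_block(vcpu);		/* genuinely idle */

	return 1;
}
```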
-
- 07 June 2013, 1 commit
By Marc Zyngier

Define all the useful bitfields for the EL2 registers.

Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-