- 15 May 2018, 1 commit
-
-
Submitted by Colin Ian King

Trivial fix to a spelling mistake in the debugfs_entries text.

Fixes: 669e846e ("KVM/MIPS32: MIPS arch specific APIs for KVM")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kernel-janitors@vger.kernel.org
Cc: <stable@vger.kernel.org> # 3.10+
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 14 December 2017, 5 commits
-
-
Submitted by Paolo Bonzini

After the vcpu_load/vcpu_put pushdown, the handling of asynchronous VCPU ioctls is already much clearer in that it is obvious that they bypass vcpu_load and vcpu_put. However, it is still not perfect in that the different state of the VCPU mutex is still hidden in the caller. Separate those ioctls into a new function, kvm_arch_vcpu_async_ioctl, that returns -ENOIOCTLCMD for the more "traditional" synchronous ioctls.

Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Suggested-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
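The split is easy to picture: the common vcpu ioctl path tries the asynchronous handler first, without taking the vcpu mutex, and only falls back to the locked path when it gets -ENOIOCTLCMD back. A minimal sketch of that shape (the MIPS-side interrupt helper is named for illustration only, not necessarily the exact in-tree code):

    /* Common path (sketch): try the async handler before vcpu_load(). */
    long r = kvm_arch_vcpu_async_ioctl(filp, ioctl, arg);
    if (r != -ENOIOCTLCMD)
        return r;   /* handled without the vcpu mutex */
    /* ... otherwise take vcpu->mutex, vcpu_load(), and dispatch as before ... */

    /* Arch side (sketch): only ioctls that must not wait on the mutex live here. */
    long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl,
                                   unsigned long arg)
    {
        struct kvm_vcpu *vcpu = filp->private_data;

        if (ioctl == KVM_INTERRUPT)
            /* kvm_vcpu_ioctl_interrupt() is an illustrative helper name */
            return kvm_vcpu_ioctl_interrupt(vcpu, (void __user *)arg);

        return -ENOIOCTLCMD;   /* let the synchronous path handle everything else */
    }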
-
Submitted by Christoffer Dall

Move the calls to vcpu_load() and vcpu_put() into the architecture-specific implementations of kvm_arch_vcpu_ioctl(), which dispatches further architecture-specific ioctls on to other functions.

Some architectures support asynchronous vcpu ioctls which cannot call vcpu_load() or take the vcpu->mutex, because that would prevent concurrent execution with a running VCPU, which is the intended purpose of these ioctls, for example because they inject interrupts. We repeat the separate checks for these specifics in the architecture code for MIPS, S390 and PPC, and avoid taking the vcpu->mutex and calling vcpu_load() for these ioctls.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Christoffer Dall

Move vcpu_load() and vcpu_put() into the architecture-specific implementations of kvm_arch_vcpu_ioctl_set_regs().

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Christoffer Dall

Move vcpu_load() and vcpu_put() into the architecture-specific implementations of kvm_arch_vcpu_ioctl_get_regs().

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Christoffer Dall

Move vcpu_load() and vcpu_put() into the architecture-specific implementations of kvm_arch_vcpu_ioctl_run().

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> # s390 parts
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
[Rebased. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
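After the pushdown, each architecture's run handler brackets its own body; a minimal sketch of the resulting shape (the real MIPS body keeps all of its existing guest entry/exit handling between the two calls):

    int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
        int r = -EINTR;

        vcpu_load(vcpu);   /* previously done by the common ioctl code */

        /* ... existing architecture-specific guest entry/exit loop ... */

        vcpu_put(vcpu);    /* matching put on the way out */
        return r;
    }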
-
- 28 November 2017, 1 commit
-
-
Submitted by Jan H. Schönherr

The KVM API says, for the signal mask you set via KVM_SET_SIGNAL_MASK, that "any unblocked signal received [...] will cause KVM_RUN to return with -EINTR" and that "the signal will only be delivered if not blocked by the original signal mask". This, however, is only true when the calling task has a signal handler registered for the signal. If not, signal evaluation is short-circuited for SIG_IGN and SIG_DFL, and the signal is either ignored without KVM_RUN returning, or the whole process is terminated.

Make KVM_SET_SIGNAL_MASK behave as advertised by utilizing logic similar to that in do_sigtimedwait() to avoid short-circuiting of signals.

Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
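The do_sigtimedwait()-style trick is to park the caller's original mask in current->real_blocked while the temporary KVM_RUN mask is installed, so that an otherwise SIG_IGN/SIG_DFL signal is still treated as pending and forces KVM_RUN back to userspace instead of being silently discarded. A hedged sketch of that idea (helper names and placement are illustrative, not necessarily the exact patch):

    /* Install the KVM_RUN signal mask before entering the guest (sketch). */
    static void sketch_sigset_activate(struct kvm_vcpu *vcpu)
    {
        if (!vcpu->sigset_active)
            return;
        /* Keep the original mask in real_blocked so newly-unblocked
         * SIG_IGN/SIG_DFL signals are still considered pending. */
        sigprocmask(SIG_SETMASK, &vcpu->sigset, &current->real_blocked);
    }

    /* Restore the original mask once KVM_RUN heads back to userspace. */
    static void sketch_sigset_deactivate(struct kvm_vcpu *vcpu)
    {
        if (!vcpu->sigset_active)
            return;
        sigprocmask(SIG_SETMASK, &current->real_blocked, NULL);
        sigemptyset(&current->real_blocked);
    }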
-
- 02 November 2017, 1 commit
-
-
Submitted by Greg Kroah-Hartman

Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

How this work was done:

Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it,
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).

All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - when both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied.

   For non */uapi/* files that summary was:

   SPDX license identifier                              # files
   ---------------------------------------------------|--------
   GPL-2.0                                                11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that was:

   SPDX license identifier                              # files
   ---------------------------------------------------|--------
   GPL-2.0 WITH Linux-syscall-note                          930

   and resulted in the second patch in this series.

 - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

   SPDX license identifier                              # files
   ---------------------------------------------------|--------
   GPL-2.0 WITH Linux-syscall-note                          270
   GPL-2.0+ WITH Linux-syscall-note                         169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
   LGPL-2.1+ WITH Linux-syscall-note                         15
   GPL-1.0+ WITH Linux-syscall-note                          14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
   LGPL-2.0+ WITH Linux-syscall-note                          4
   LGPL-2.1 WITH Linux-syscall-note                           3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1

   and that resulted in the third patch in this series.

 - when the two scanners agreed on the detected license(s), that became the concluded license(s).
 - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified.

These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 15 September 2017, 1 commit
-
-
Submitted by Davidlohr Bueso

For example, the following could occur, making us miss a wakeup:

    CPU0                                        CPU1
    kvm_vcpu_block                              kvm_mips_comparecount_func
                                                  [L] swait_active(&vcpu->wq)
      [S] prepare_to_swait(&vcpu->wq)
      [L] if (!kvm_vcpu_has_pending_timer(vcpu))
            schedule()                            [S] queue_timer_int(vcpu)

Ensure that the swait_active() check is not hoisted over the interrupt.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
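One way this class of race is commonly closed in KVM is to stop open-coding the swait_active() check in the arch timer callback and instead queue the timer interrupt first, then let the generic wakeup helper do the check and the wake together. A hedged sketch of that shape (the function body is simplified, not the exact patch):

    /* Timer expiry (sketch): make the interrupt pending before any wait-queue
     * check, so a vcpu about to block in kvm_vcpu_block() cannot miss it. */
    static void sketch_comparecount_wakeup(struct kvm_vcpu *vcpu)
    {
        kvm_mips_callbacks->queue_timer_int(vcpu);  /* [S] publish the work */
        vcpu->arch.wait = 0;
        kvm_vcpu_wake_up(vcpu);                     /* generic check + wake */
    }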
-
- 08 August 2017, 1 commit
-
-
Submitted by Longpeng(Mike)

If a vcpu exits due to a request for a user-mode spinlock, then the spinlock holder may be preempted in user mode or kernel mode. (Note that not all architectures trap spin loops in user mode; only AMD x86 and ARM/ARM64 currently do.) But if a vcpu exits in kernel mode, then the holder must be preempted in kernel mode, so we should choose a vcpu in kernel mode as a more likely candidate for the lock holder.

This introduces kvm_arch_vcpu_in_kernel() to decide whether the vcpu is in kernel mode when it is preempted. kvm_vcpu_on_spin's new argument says the same of the spinning VCPU.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
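The new hook and the new argument fit together roughly as below; a sketch assuming the conservative MIPS trap-and-emulate answer of "false", with a made-up caller name for illustration:

    /* Arch hook (sketch): was this (preempted) vcpu running guest kernel code?
     * A conservative "false" is assumed here for MIPS trap & emulate. */
    bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
    {
        return false;
    }

    /* Exit path of the spinning vcpu: pass its own mode so directed yield
     * can prefer candidates that were preempted in kernel mode. */
    static void example_handle_spin_exit(struct kvm_vcpu *vcpu)
    {
        kvm_vcpu_on_spin(vcpu, kvm_arch_vcpu_in_kernel(vcpu));
    }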
-
- 20 June 2017, 1 commit
-
-
Submitted by James Cowgill

This commit fixes a "maybe-uninitialized" build failure in arch/mips/kvm/tlb.c when KVM, DYNAMIC_DEBUG and JUMP_LABEL are all enabled. The failure is:

    In file included from ./include/linux/printk.h:329:0,
                     from ./include/linux/kernel.h:13,
                     from ./include/asm-generic/bug.h:15,
                     from ./arch/mips/include/asm/bug.h:41,
                     from ./include/linux/bug.h:4,
                     from ./include/linux/thread_info.h:11,
                     from ./include/asm-generic/current.h:4,
                     from ./arch/mips/include/generated/asm/current.h:1,
                     from ./include/linux/sched.h:11,
                     from arch/mips/kvm/tlb.c:13:
    arch/mips/kvm/tlb.c: In function ‘kvm_mips_host_tlb_inv’:
    ./include/linux/dynamic_debug.h:126:3: error: ‘idx_kernel’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
       __dynamic_pr_debug(&descriptor, pr_fmt(fmt), \
       ^~~~~~~~~~~~~~~~~~
    arch/mips/kvm/tlb.c:169:16: note: ‘idx_kernel’ was declared here
     int idx_user, idx_kernel;
                   ^~~~~~~~~~

There is a similar error relating to "idx_user". Both errors were observed with GCC 6.

As far as I can tell, it is impossible for either idx_user or idx_kernel to be uninitialized when they are later read in the calls to kvm_debug(), but to satisfy the compiler, add zero initializers to both variables.

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: 57e3869c ("KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID")
Cc: <stable@vger.kernel.org> # 4.11+
Acked-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
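The fix itself amounts to zero-initializing the two indices named in the warning; a sketch of the declaration after the change:

    /* kvm_mips_host_tlb_inv(): zero-init to satisfy GCC 6's analysis */
    int idx_user = 0, idx_kernel = 0;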
-
- 04 June 2017, 1 commit
-
-
Submitted by Radim Krčmář

A first step in vcpu->requests encapsulation. Additionally, we now use READ_ONCE() when accessing vcpu->requests, which ensures we always load vcpu->requests when it's accessed. This is important as other threads can change it at any time. Also, READ_ONCE() documents that vcpu->requests is used with other threads, likely requiring memory barriers, which it does.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
[ Documented the new use of READ_ONCE() and converted another check in arch/mips/kvm/vz.c ]
Signed-off-by: Andrew Jones <drjones@redhat.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
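The encapsulating helper boils down to a READ_ONCE() wrapper; a sketch consistent with the description above, with callers then testing kvm_request_pending(vcpu) instead of peeking at vcpu->requests directly:

    /* True if any request bit is set; READ_ONCE() forces a fresh load,
     * since other threads may set bits at any time. */
    static inline bool kvm_request_pending(struct kvm_vcpu *vcpu)
    {
        return READ_ONCE(vcpu->requests);
    }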
-
- 27 April 2017, 1 commit
-
-
Submitted by Radim Krčmář

Users were expected to use kvm_check_request() for testing and clearing, but requests have expanded their use since then and some users want to only test, or to do a faster clear. Make sure that requests are not directly accessed with bit operations.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
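The point is that arch code goes through named helpers rather than raw test_bit()/clear_bit() on vcpu->requests. A sketch of the accessor family implied here, with kvm_check_request() remaining the test-and-clear combination:

    static inline bool kvm_test_request(int req, struct kvm_vcpu *vcpu)
    {
        return test_bit(req, &vcpu->requests);
    }

    static inline void kvm_clear_request(int req, struct kvm_vcpu *vcpu)
    {
        clear_bit(req, &vcpu->requests);
    }

    static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
    {
        if (!kvm_test_request(req, vcpu))
            return false;

        kvm_clear_request(req, vcpu);
        smp_mb__after_atomic();   /* order request handling after the clear */
        return true;
    }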
-
- 07 April 2017, 1 commit
-
-
Submitted by Paolo Bonzini

Remove code from architecture files that can be moved to virt/kvm, since there is already common code for coalesced MMIO.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[Removed a pointless 'break' after 'return'.]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 28 March 2017, 26 commits
-
-
Submitted by James Hogan

Properly implement emulation of the TLBR instruction for Trap & Emulate. This instruction reads the TLB entry pointed at by the CP0_Index register into the other TLB registers, which may have the side effect of changing the current ASID. Therefore abstract the CP0_EntryHi and ASID changing code into a common function in the process.

A comment indicated that Linux doesn't use TLBR, which is true during normal use; however, dumping of the TLB does use it (for example with the relatively recent 'x' magic sysrq key), as does a wired TLB entries test case in my KVM tests.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Octeon III implements a read-only guest CP0_PRid register, so add cases to the KVM register access API for Octeon to ensure the correct value is read and writes are ignored.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Octeon III doesn't implement the optional GuestCtl0.CG bit to allow guest mode to execute virtual-address-based CACHE instructions, so implement emulation of a few important ones specifically for Octeon III in response to a GPSI exception. Currently the main reason to perform these operations is icache synchronisation, so they are implemented as a simple icache flush with local_flush_icache_range().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Set up hardware virtualisation on Octeon III cores, configuring guest interrupt routing and carving out half of the root TLB for guest use, restoring it back again afterwards. We need to be careful to inhibit TLB shutdown machine check exceptions while invalidating guest TLB entries: since TLB invalidation is not available, guest entries must be invalidated by setting them to unique unmapped addresses, which could conflict with mappings set by the guest or root if recently repartitioned.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Octeon CPUs don't report the correct dcache line size in CP0_Config1.DL, so encode the correct value for the guest CP0_Config1.DL based on cpu_dcache_line_size().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
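Config1.DL encodes the line size as a power of two, so the guest value can be derived from the real line size the kernel reports. A minimal sketch of that conversion, assuming the usual encoding of line size = 2^(DL+1) bytes with 0 meaning "no dcache" (ilog2() is used here for illustration; the in-tree code may compute it differently):

    static unsigned int guest_config1_dl_sketch(void)
    {
        unsigned int line = cpu_dcache_line_size();  /* e.g. 128 on Octeon */

        /* 128-byte lines encode as DL = 6, since 2^(6+1) = 128 */
        return line ? ilog2(line) - 1 : 0;
    }
-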
Submitted by James Hogan

When TLB entries are invalidated in the presence of a virtually tagged icache, such as that found on Octeon CPUs, flush the icache so that we don't get a reserved instruction exception even though the TLB mapping has been removed.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Cache management is implemented separately for Cavium Octeon CPUs, so r4k_blast_[id]cache aren't available. Instead, for Octeon perform a local icache flush using local_flush_icache_range(), and for other platforms which don't use c-r4k.c use __flush_cache_all() / flush_icache_all().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Create a trace event for guest mode changes, and enable VZ's GuestCtl0.MC bit after the trace event is enabled, to trap all guest mode changes. The MC bit causes Guest Hardware Field Change (GHFC) exceptions whenever a guest mode change occurs (such as an exception entry or return from exception), so we need to handle this exception now. The MC bit is only enabled when restoring register state, so enabling the trace event won't take immediate effect.

Tracing guest mode changes can be particularly handy when trying to work out what a guest OS gets up to before something goes wrong, especially if the problem occurs as a result of some previous guest userland exception which would otherwise be invisible in the trace.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Transfer timer state to the VZ guest context (CP0_GTOffset & guest CP0_Count) when entering guest mode, enabling direct guest access to it, and transfer back to the soft timer when saving guest register state. This usually allows guest code to directly read CP0_Count (via MFC0 and RDHWR) and read/write CP0_Compare, without trapping to the hypervisor for it to emulate the guest timer.

Writing to CP0_Count or CP0_Cause.DC is much less common and still triggers a hypervisor GPSI exception, in which case the timer state is transferred back to an hrtimer before emulating the write.

We are careful to prevent small amounts of drift from building up due to non-deterministic time intervals between reading of the ktime and reading of CP0_Count. Some drift is expected however, since the system clocksource may use a different timer to the local CP0_Count timer used by VZ. This is permitted to prevent guest CP0_Count from appearing to go backwards.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
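The mechanism hinges on CP0_GTOffset, which the hardware adds to the root CP0_Count to produce the guest's view of CP0_Count. A hedged sketch of the restore direction only, assuming the standard mipsregs.h accessors and glossing over how the resume value is derived from ktime:

    /* Make the guest see 'guest_count' as its CP0_Count from now on. */
    static void sketch_restore_guest_timer(u32 guest_count)
    {
        /* guest Count = root Count + GTOffset */
        write_c0_gtoffset(guest_count - read_c0_count());
    }

Saving goes the other way: the current guest CP0_Count is read back and handed to the hrtimer-based soft timer, so emulation can continue while the guest context is not loaded.
-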
Submitted by James Hogan

Add emulation of Memory Accessibility Attribute Registers (MAARs) when necessary. We can't actually do anything with whatever the guest provides, but it may not be possible to clear Guest.Config5.MRP, so we have to emulate at least a pair of MAARs.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

When restoring guest state after another VCPU has run, be sure to clear CP0_LLAddr.LLB in order to break any interrupted atomic critical section. Without this, SMP guest atomics don't work when LLB is present, as one guest can complete the atomic section started by another guest.

A MIPS VZ guest read of CP0_LLAddr causes a Guest Privileged Sensitive Instruction (GPSI) exception due to the address being root physical. Handle this by reporting only the LLB bit, which contains the bit for whether a ll/sc atomic is in progress, without any reason for failure. Similarly, on P5600 a guest write to CP0_LLAddr also causes a GPSI exception. Handle this too by clearing the guest LLB bit from root mode.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Add support for the VZ guest CP0_PWBase, CP0_PWField, CP0_PWSize, and CP0_PWCtl registers for controlling the guest hardware page table walker (HTW) present on P5600 and P6600 cores. These guest registers need initialising on R6, context switching, and exposing via the KVM ioctl API when they are present.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

Add support for the VZ guest CP0_SegCtl0, CP0_SegCtl1, and CP0_SegCtl2 registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present.

They also require the GVA -> GPA translation code for handling a GVA root exception to be updated to interpret the segmentation registers and decode the faulting instruction enough to detect EVA memory access instructions.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

Add support for the VZ guest CP0_ContextConfig and CP0_XContextConfig (MIPS64 only) registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

Add support for the VZ guest CP0_BadInstr and CP0_BadInstrP registers, as found on most VZ-capable cores. These guest registers need context switching, and exposing via the KVM ioctl API when they are present.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

Add support for the MIPS Virtualization (VZ) ASE to the MIPS KVM build system. For now KVM can only be configured for T&E or VZ, not both, but the design of the user-facing APIs supports the possibility of having both available, so this could change in future.

Note that support for various optional guest features (some of which can't be turned off) is implemented in the immediately following commits, so although it should now be possible to build VZ support, it may not work yet on your hardware.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Add the main support for the MIPS Virtualization ASE (a.k.a. VZ) to MIPS KVM. The bulk of this work is in vz.c, with various new state and definitions elsewhere. Enough is implemented to be able to run on a minimal VZ core. Further patches will fill out support for guest features which are optional or can be disabled.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-doc@vger.kernel.org
-
Submitted by James Hogan

The general guest exit handler needs a few tweaks for VZ compared to trap & emulate, which for now are made directly depending on CONFIG_KVM_MIPS_VZ:

- There is no need to re-enable the hardware page table walker (HTW), as it can be left enabled during guest mode operation with VZ.
- There is no need to perform a privilege check, as any guest privilege violations should have already been detected by the hardware and triggered the appropriate guest exception.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Ifdef out the trap & emulate CACHE instruction emulation functions for VZ. We will provide separate CACHE instruction emulation in vz.c, and we need to avoid linker errors due to the use of T&E-specific MMU helpers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Update emulation of guest writes to CP0_Compare for VZ. There are two main differences compared to trap & emulate:

- Writing to CP0_Compare in the VZ hardware guest context acks any pending timer, clearing CP0_Cause.TI. If we don't want an ack to take place, we must carefully restore the TI bit if it was previously set.
- Even with guest timer access disabled in CP0_GuestCtl0.GT, if the guest CP0_Count reaches the guest CP0_Compare the timer interrupt will assert. To prevent this we must set CP0_GTOffset to move the guest CP0_Count out of the way of the new guest CP0_Compare, either before or after the write depending on whether it is a forwards or backwards change.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Add functions for MIPS VZ TLB management to tlb.c.

kvm_vz_host_tlb_inv() will be used for invalidating root TLB entries after GPA page tables have been modified due to a KVM page fault. It arranges for a root GPA mapping to be flushed from the TLB, using the gpa_mm ASID or the current GuestID to do the probe.

kvm_vz_local_flush_roottlb_all_guests() and kvm_vz_local_flush_guesttlb_all() flush all TLB entries in the corresponding TLB for guest mappings (GPA->RPA for the root TLB with GuestID, and all entries for the guest TLB). They will be used when starting a new GuestID cycle, when VZ hardware is enabled/disabled, and also when switching to a guest when the guest TLB contents may be stale or belong to a different VM.

kvm_vz_guest_tlb_lookup() converts a guest virtual address to a guest physical address using the guest TLB. This will be used to decode guest virtual addresses which are sometimes provided by VZ hardware in CP0_BadVAddr for certain exceptions when the guest physical address is unavailable.

kvm_vz_save_guesttlb() and kvm_vz_load_guesttlb() will be used to preserve wired guest VTLB entries while a guest isn't running.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Update the MIPS KVM entry code to support VZ:

- We need to set GuestCtl0.GM while in guest mode.
- For cores supporting GuestID, we need to set the root GuestID to match the main GuestID while in guest mode, so that the root TLB refill handler writes the correct GuestID into the TLB.
- For cores without GuestID, where the root ASID dealiases RVA/GPA mappings, we need to load that ASID from the gpa_mm rather than the per-VCPU guest_kernel_mm or guest_user_mm, since the root TLB maps guest physical addresses. We also need to restore the normal process ASID on exit.
- The normal Linux process pgd needs restoring on exit, as we can't leave the GPA mappings active for kernel code.
- GuestCtl0 needs saving on exit for the GExcCode field, as it may be clobbered if a preemption occurs.

We also need to move the TLB refill handler to the XTLB vector at offset 0x80 on 64-bit VZ kernels, as hardware will use Root.Status.KX to determine whether a TLB Refill or XTLB Refill exception is to be taken on a root TLB miss from guest mode, and KX needs to be set for kernel code to be able to access the 64-bit segments.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Abstract the MIPS KVM guest CP0 register access macros into inline functions which are generated by macros. This allows them to be generated differently for VZ, where they will usually need to access the hardware guest CP0 context rather than the saved values in RAM.

Accessors for each individual register are generated using these macros:

- __BUILD_KVM_*_SW() for registers which are not present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the saved value in RAM regardless of whether VZ is enabled.
- __BUILD_KVM_*_HW() for registers which are present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the hardware register when VZ is enabled.

These build the underlying accessors using further macros:

- __BUILD_KVM_*_SAVED() builds e.g. kvm_{read,write}_sw_gc0_##name() functions for accessing the saved versions of the registers in RAM. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with T&E, where registers are always stored in RAM, but is also available with VZ HW registers to allow them to be accessed while saved.
- __BUILD_KVM_*_VZ() builds e.g. kvm_{read,write}_vz_gc0_##name() functions for accessing the VZ hardware guest context registers directly. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with VZ.
- __BUILD_KVM_*_WRAP() builds wrappers with different names, which allows the common kvm_{read,write}_c0_guest_##name() functions to be implemented using the VZ accessors while still having the SAVED accessors available too.
- __BUILD_KVM_SAVE_VZ() builds functions for saving and restoring VZ hardware guest context register state to RAM, improving conciseness of VZ context saving and restoring.

Similar macros exist for generating modifiers (set, clear, change), either with a normal unlocked read/modify/write, or using atomic LL/SC sequences.

These changes change the types of 32-bit registers to u32 instead of unsigned long, which requires some changes to printk() functions in MIPS KVM.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
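A heavily simplified sketch of the two flavours of generated read accessor described above (argument lists condensed, and read_guest_c0() standing in as a placeholder for the real guest-CP0 hardware accessor):

    /* "SAVED" flavour: always reads the copy saved in RAM. */
    #define __BUILD_KVM_READ_SAVED_SKETCH(name, _reg, _sel)                  \
    static inline u32 kvm_read_sw_gc0_##name(struct mips_coproc *cop0)      \
    {                                                                        \
            return cop0->reg[_reg][_sel];                                    \
    }

    /* "VZ" flavour: reads the live hardware guest CP0 context instead. */
    #define __BUILD_KVM_READ_VZ_SKETCH(name, _reg, _sel)                     \
    static inline u32 kvm_read_vz_gc0_##name(void)                          \
    {                                                                        \
            return read_guest_c0(_reg, _sel); /* placeholder accessor */     \
    }

The common kvm_read_c0_guest_##name() wrapper is then generated to call whichever flavour matches the configuration, which is what lets the same arch code work for both T&E and VZ.
-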
Submitted by James Hogan

Add a callback for MIPS KVM implementations to handle the VZ guest exit exception. Currently the trap & emulate implementation contains a stub which reports an internal error, but the callback will be used properly by the VZ implementation.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Add an implementation callback for the kvm_arch_hardware_enable() and kvm_arch_hardware_disable() architecture functions, with simple stubs for trap & emulate. This is in preparation for VZ, which will make use of them.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan

Add an implementation callback for checking the presence of KVM extensions. This allows implementation-specific extensions to be provided without ifdefs in mips.c.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-