- 03 February 2017, 21 commits
-
-
Submitted by James Hogan
Currently, guest physical memory is mapped to host physical addresses using a single linear array (guest_pmap of length guest_pmap_npages). This was only really meant to be temporary, and isn't sparse, so it's wasteful of memory. A small amount of RAM at GPA 0 and a small boot exception vector at GPA 0x1fc00000 cannot be represented without a full 128KiB guest_pmap allocation (MIPS32 with 16KiB pages), which is one reason why QEMU currently runs its boot code at the top of RAM instead of the usual boot exception vector address. Instead, use the existing infrastructure for host virtual page table management to allocate a page table for guest physical memory too. This should be sufficient for now, assuming the size of physical memory doesn't exceed the size of virtual memory. It may need extending in future to handle XPA (eXtended Physical Addressing) in 32-bit guests, as supported by VZ guests on P5600. Some of this code is based loosely on Cavium's VZ KVM implementation.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
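For context, a rough sketch of the flat mapping being replaced (field names are taken from the commit text; the allocation size and surrounding variables are illustrative assumptions):

    /* Flat map: one entry per guest physical page, allocated up front.
     * With 16 KiB pages, covering GPA 0 up to the boot exception vector
     * at 0x1fc00000 needs ~32K entries even if almost none are used. */
    kvm->arch.guest_pmap_npages = npages;
    kvm->arch.guest_pmap = kcalloc(npages, sizeof(unsigned long), GFP_KERNEL);

    /* Lookup is a simple index: guest page frame -> host page frame */
    pfn = kvm->arch.guest_pmap[gfn];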
-
Submitted by James Hogan
When exiting from the guest, store the values of the CP0_BadInstr and CP0_BadInstrP registers if they exist, which contain the encodings of the instructions which caused the last synchronous exception. When the instruction is needed for emulation, kvm_get_badinstr() and kvm_get_badinstrp() are used instead of calling kvm_get_inst() directly, to decide whether to read the saved CP0_BadInstr/CP0_BadInstrP registers (if they exist), or read the instruction from memory (if not). The use of these registers should be more robust than using kvm_get_inst(), as it actually gives the instruction encoding seen by the hardware rather than relying on user accessors after the fact, which can be fooled by incoherent icache or a racing code modification. It will also work with VZ, where the guest virtual memory isn't directly accessible by the host with user accessors.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
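A minimal sketch of the fallback logic described above (simplified; the cpu_has_badinstr test is a real MIPS feature macro, but the exact field layout in vcpu->arch is an assumption here):

    static int kvm_get_badinstr(u32 *opc, struct kvm_vcpu *vcpu, u32 *out)
    {
            if (cpu_has_badinstr) {
                    /* Instruction word captured by hardware at exception time */
                    *out = vcpu->arch.host_cp0_badinstr;
                    return 0;
            }
            /* No CP0_BadInstr register: fall back to reading guest memory */
            return kvm_get_inst(opc, vcpu, out);
    }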
-
Submitted by James Hogan
Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a fault reading the guest instruction. This has the rather arbitrary magic value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)". Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to return the instruction via a u32 *out argument. We can then drop the KVM_INVALID_INST definition entirely.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
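A hedged sketch of the resulting calling pattern (the surrounding emulation context and the opc pointer are assumed):

    u32 inst;
    int err;

    /* 0 on success, -EFAULT on fault; the instruction word comes back
     * through the out parameter instead of an in-band magic value. */
    err = kvm_get_inst(opc, vcpu, &inst);
    if (err)
            return EMULATE_FAIL;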
-
Submitted by James Hogan
In order to make use of the CP0_BadInstr & CP0_BadInstrP registers we need to be a bit more careful not to treat code fetch faults as MMIO, lest we hit an UNPREDICTABLE register value when we try to emulate the MMIO load instruction but there was no valid instruction word available to the hardware. Add a kvm_is_ifetch_fault() helper to try to figure out whether a load fault was due to a code fetch, and prevent MMIO instruction emulation in that case.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
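A simplified sketch of such a helper (ignoring microMIPS halfword offsets; the host_cp0_* fields are the exception state already saved by the MIPS KVM exit path):

    static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
    {
            unsigned long badvaddr = vcpu->host_cp0_badvaddr;
            unsigned long epc = vcpu->pc;
            u32 cause = vcpu->host_cp0_cause;

            /* A code fetch faults on the instruction itself: EPC, or
             * EPC + 4 when the fault hit a branch delay slot. */
            if (cause & CAUSEF_BD)
                    epc += 4;

            return badvaddr == epc;
    }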
-
Submitted by James Hogan
MIPS KVM uses its own variation of get_new_mmu_context() which takes an extra vcpu pointer (unused) and does exactly the same thing. Switch to just using get_new_mmu_context() directly and drop KVM's version of it as it doesn't really serve any purpose. The nearby declarations of kvm_mips_alloc_new_mmu_context(), kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from kvm_host.h, as no definitions or users exist.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
When exceptions are injected into the MIPS KVM guest, the whole host TLB is flushed (except any entries in the guest KSeg0 range). This is certainly not mandated by the architecture when exceptions are taken (userland can't directly change TLB mappings anyway), and is a pretty heavyweight operation:
- There may be hundreds of TLB entries, especially when a 512 entry FTLB is present. These are walked and read and conditionally invalidated, so the TLBINV feature can't be used either.
- It'll indiscriminately wipe out entries belonging to other memory spaces. A simple ASID regeneration would be much faster to perform, although it'd wipe out the guest KSeg0 mappings too.
My suspicion is that this was simply to plaster over the fact that kvm_mips_host_tlb_inv() incorrectly only invalidated TLB entries in the ASID for guest usermode, and not the ASID for guest kernelmode. Now that the recent commit "KVM: MIPS/TLB: Flush host TLB entry in kernel ASID" fixes kvm_mips_host_tlb_inv() to flush TLB entries in the kernelmode ASID when the guest TLB changes, let's drop these calls and the otherwise unused kvm_mips_flush_host_tlb().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Now that KVM no longer uses wired entries we can safely use local_flush_tlb_all() when we need to flush the entire TLB (on the start of a new ASID cycle). This doesn't flush wired entries, which allows other code to use them without KVM clobbering them all the time. It is also more up to date, knowing about the tlbinv architectural feature, flushing of micro TLB on cores where that is necessary (Loongson I believe), and knows to stop the HTW while doing so.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Now that we have GVA page tables, use standard user accesses with page faults disabled to read & modify guest instructions. This should be more robust (than the rather dodgy method of accessing guest mapped segments by just directly addressing them) and will also work with Enhanced Virtual Addressing (EVA) host kernel configurations where dedicated instructions are needed for accessing user mode memory. For simplicity and speed we do this regardless of the guest segment the address resides in, rather than handling guest KSeg0 specially with kmap_atomic() as before.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
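A hedged sketch of the access pattern this implies (the opc pointer is the guest PC as a GVA, and the surrounding map-and-retry logic is elided):

    u32 inst;
    int err;

    /* Read the guest instruction through its GVA mapping with normal
     * user accessors; page faults are disabled so a missing mapping
     * returns an error instead of sleeping in the fault handler. */
    pagefault_disable();
    err = get_user(inst, (u32 __user *)opc);
    pagefault_enable();
    if (err)
            return -EFAULT; /* caller can fault the GVA in and retry */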
-
Submitted by James Hogan
Now that the commpage doesn't use wired TLB entries, the per-CPU vm_init() callback is the only work done by kvm_mips_init_vm_percpu(). The trap & emulate implementation doesn't actually need to do anything from vm_init(), and the future VZ implementation would be better served by a kvm_arch_hardware_enable callback anyway. Therefore drop the vm_init() callback entirely, allowing the kvm_mips_init_vm_percpu() function to also be dropped, along with the kvm_mips_instance atomic counter.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of commpage faults from the guest kernel to fill the GVA page table and invalidate the TLB entry, rather than filling the wired TLB entry directly. For simplicity we no longer use a wired entry for the commpage (refill should be much cheaper with the fast-path handler anyway). Since we don't need to manipulate the TLB directly any longer, move the function from tlb.c to mmu.c. This puts it closer to the similar functions handling KSeg0 and TLB mapped page faults from the guest.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of page faults in TLB mapped segment from the guest to fill a single GVA page table entry and invalidate the TLB entry, rather than filling a TLB entry pair directly. Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions in mmu.c and kvm_mips_host_tlb_write() in tlb.c.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Implement invalidation of specific pairs of GVA page table entries in one or both of the GVA page tables. This is used when existing mappings are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due to the sharing of page tables in the host kernel range, we should be careful not to allow host pages to be invalidated. Add a helper kvm_mips_walk_pgd() which can be used when walking of either GPA (future patches) or GVA page tables is needed, optionally with allocation of page tables along the way when they don't exist. GPA page table walking will need to be protected by the kvm->mmu_lock, so we also add a small MMU page cache in each KVM VCPU, like that found for other architectures but smaller. This allows enough pages to be pre-allocated to handle a single fault without holding the lock, allowing the helper to run with the lock held without having to handle allocation failures. Using the same mechanism for GVA allows the same code to be used, and allows it to use the same cache of allocated pages if the GPA walk didn't need to allocate any new tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
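A loose sketch of such a walk-with-optional-allocation helper, modelled on the description above (three-level walk only; the mmu_memory_cache_alloc() helper and the MIPS pgtable init calls are assumptions based on standard kernel idioms of that era, and error/huge-page handling is elided):

    static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
                                    unsigned long addr)
    {
            pud_t *pud;
            pmd_t *pmd;

            pgd += pgd_index(addr);
            pud = pud_offset(pgd, addr);
            if (pud_none(*pud)) {
                    pmd_t *new_pmd;

                    if (!cache)
                            return NULL;    /* caller forbade allocation */
                    new_pmd = mmu_memory_cache_alloc(cache);
                    pmd_init((unsigned long)new_pmd,
                             (unsigned long)invalid_pte_table);
                    pud_populate(NULL, pud, new_pmd);
            }
            pmd = pmd_offset(pud, addr);
            if (pmd_none(*pmd)) {
                    pte_t *new_pte;

                    if (!cache)
                            return NULL;
                    new_pte = mmu_memory_cache_alloc(cache);
                    clear_page(new_pte);
                    pmd_populate_kernel(NULL, pmd, new_pte);
            }
            return pte_offset_kernel(pmd, addr);
    }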
-
Submitted by James Hogan
Implement invalidation of large ranges of virtual addresses from GVA page tables in response to a guest ASID change (immediately for guest kernel page table, lazily for guest user page table). We iterate through a range of page tables invalidating entries and freeing fully invalidated tables. To minimise overhead the exact ranges invalidated depends on the flags argument to kvm_mips_flush_gva_pt(), which also allows it to be used in future KVM_CAP_SYNC_MMU patches in response to GPA changes, which unlike guest TLB mapping changes affects guest KSeg0 mappings.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any matching TLB entry in the kernel ASID rather than assuming only the TLB entries in the user ASID can change. Two new bool user/kernel arguments allow the caller to indicate whether the mapping should affect each of the ASIDs for guest user/kernel mode.
- kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can now invalidate any corresponding TLB entry in both the kernel ASID (guest kernel may have accessed any guest mapping), and the user ASID if the entry being replaced is in guest USeg (where guest user may also have accessed it).
- The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault handlers in later patches) can now invalidate the corresponding TLB entry in whichever ASID is currently active, since only a single page table will have been updated anyway.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
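A hedged example of how the TLBWI/TLBWR path might use the two new flags (KVM_GUEST_KSEG0 already marks the top of guest USeg in the MIPS KVM headers; the old_entryhi variable and the exact condition are illustrative):

    /* TLBWI/TLBWR emulation: the guest kernel may have used the old
     * mapping from any address, guest user only if it was in USeg. */
    bool user = old_entryhi < KVM_GUEST_KSEG0;

    kvm_mips_host_tlb_inv(vcpu, va, user, true);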
-
Submitted by James Hogan
Use functions from the general MIPS TLB exception vector generation code (tlbex.c) to construct a fast path TLB refill handler similar to the general one, but cut down and capable of preserving K0 and K1.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Wire up a vcpu uninit implementation callback. This will be used for the clean up of GVA->HPA page tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Set init_mm as the active_mm and update mm_cpumask(current->mm) to reflect that it isn't active when in guest context. This prevents cache management code from attempting cache flushes on host virtual addresses while in guest context, for example due to cache management IPIs or later when a write to dynamically translated code hits copy-on-write. We do this using helpers in static kernel code to avoid having to export init_mm to modules.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
Add implementation callbacks for entering the guest (vcpu_run()) and reentering the guest (vcpu_reenter()), allowing implementation specific operations to be performed before entering the guest or after returning to the host without cluttering kvm_arch_vcpu_ioctl_run(). This allows the T&E specific lazy user GVA flush to be moved into trap_emul.c, along with disabling of the HTW. We also move kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer state prior to delivering interrupts.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
The kvm_vcpu_arch structure contains both mm_structs for allocating MMU contexts (primarily the ASID), but it also copies the resulting ASIDs into guest_{user,kernel}_asid[] arrays which are referenced from uasm generated code. This duplication doesn't seem to serve any purpose, and it gets in the way of generalising the ASID handling across guest kernel/user modes, so let's just extract the ASID straight out of the mm_struct on demand, and in fact there are convenient cpu_context() and cpu_asid() macros for doing so. To reduce the verbosity of this code we also add kern_mm and user_mm local variables where the kernel and user mm_structs are used.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
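For illustration, a minimal sketch of reading the ASID straight from the mm_struct (the guest_kernel_mm/guest_user_mm fields and KVM_GUEST_KERNEL_MODE() already exist in the MIPS KVM headers; the usage shown here is a simplified assumption):

    struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
    struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
    struct mm_struct *mm;

    /* No separate guest_*_asid[] copy: take the ASID from the mm */
    mm = KVM_GUEST_KERNEL_MODE(vcpu) ? kern_mm : user_mm;
    write_c0_entryhi(cpu_asid(cpu, mm));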
-
Submitted by James Hogan
Convert the get_regs() and set_regs() callbacks to vcpu_load() and vcpu_put(), which provide a cpu argument and more closely match the kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by. This is in preparation for moving ASID management into the implementations.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Submitted by James Hogan
KVM T&E uses an ASID for guest kernel mode and an ASID for guest user mode. The current ASID is saved when the guest is scheduled out, and restored when scheduling back in, with checks for whether the ASID needs to be regenerated. This isn't really necessary as the ASID can be easily determined by the current guest mode, so let's simplify it to just read the required ASID from guest_kernel_asid or guest_user_asid even if the ASID hasn't been regenerated.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
- 26 October 2016, 1 commit
-
-
Submitted by James Hogan
The advancing of the PC when completing an MMIO load is done before re-entering the guest, i.e. before restoring the guest ASID. However if the load is in a branch delay slot it may need to access guest code to read the prior branch instruction. This isn't safe in TLB mapped code at the moment, nor in the future when we'll access unmapped guest segments using direct user accessors too, as it could read the branch from host user memory instead. Therefore calculate the resume PC in advance while we're still in the right context and save it in the new vcpu->arch.io_pc (replacing the no longer needed vcpu->arch.pending_load_cause), and restore it on MMIO completion.
Fixes: e685c689 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: <stable@vger.kernel.org> # 3.10.x-
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
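A hedged sketch of the two halves of the change (only the io_pc field is named in the commit text; the surrounding emulation and completion code is assumed and simplified):

    /* At emulation time, still in guest context with guest code readable:
     * branch/delay-slot handling has already computed the resume PC. */
    vcpu->arch.io_pc = vcpu->arch.pc;

    /* Later, on MMIO completion, no guest memory access is needed: */
    vcpu->arch.pc = vcpu->arch.io_pc;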
-
- 29 September 2016, 1 commit
-
-
Submitted by James Hogan
Invalidate host TLB mappings when the guest ASID is changed by regenerating ASIDs, rather than flushing the entire host TLB except entries in the guest KSeg0 range. For the guest kernel mode ASID we regenerate on the spot when the guest ASID is changed, as that will always take place while the guest is in kernel mode. However when the guest invalidates TLB entries the ASID will often be changed temporarily as part of writing EntryHi without the guest returning to user mode in between. We therefore regenerate the user mode ASID lazily before entering the guest in user mode, if and only if the guest ASID has actually changed since the last guest user mode entry.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
- 09 September 2016, 1 commit
-
-
Submitted by James Hogan
MIPS Enhanced Virtual Addressing (EVA) allows the user mode and kernel mode address spaces to overlap, breaking the assumption that PAGE_OFFSET is an appropriate KVM HVA error value, since PAGE_OFFSET may be as low as zero. Fix this in the same way that s390 does in commit bf640876 ("KVM: s390: Make KVM_HVA_ERR_BAD usable on s390"), by overriding KVM_HVA_ERR_[RO_]BAD and kvm_is_error_hva() in asm/kvm_host.h.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
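Following the s390 precedent, the override looks roughly like this (a sketch of the pattern, not necessarily the exact hunk):

    #define KVM_HVA_ERR_BAD         (-1UL)
    #define KVM_HVA_ERR_RO_BAD      (-2UL)

    static inline bool kvm_is_error_hva(unsigned long addr)
    {
            /* Error HVAs sit at the very top of the address space, which
             * can never be a valid user address, even with EVA. */
            return IS_ERR_VALUE(addr);
    }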
-
- 08 September 2016, 1 commit
-
-
Submitted by Suraj Jitindar Singh
VMs and VCPUs have statistics associated with them which can be viewed in debugfs. Currently it is assumed within the vcpu_stat_get() and vm_stat_get() functions that all of these statistics are represented as u32s, however the next patch adds some u64 vcpu statistics. Change all vcpu statistics to u64 and modify vcpu_stat_get() accordingly. Since vcpu statistics are per vcpu, they will only be updated by a single vcpu at a time so this shouldn't present a problem on 32-bit machines which can't atomically increment 64-bit numbers. However vm statistics could potentially be updated by multiple vcpus from that vm at a time. To avoid the overhead of atomics make all vm statistics ulong such that they are 64-bit on 64-bit systems where they can be atomically incremented and are 32-bit on 32-bit systems which may not be able to atomically increment 64-bit numbers. Modify vm_stat_get() to expect ulongs.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
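Illustrated with a couple of fields from the MIPS variant of these structs (the field selection here is an example, not the full set):

    struct kvm_vm_stat {
            ulong remote_tlb_flush; /* ulong: native word size, cheap to bump */
    };

    struct kvm_vcpu_stat {
            u64 wait_exits;         /* u64: only its own vcpu ever updates it */
            u64 signal_exits;
    };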
-
- 05 July 2016, 4 commits
-
-
Submitted by James Hogan
The atomic KVM register access macros in kvm_host.h (for the guest Cause register with KVM in trap & emulate mode) use ll/sc instructions; however, they still .set mips3, which causes pre-MIPSr6 instruction encodings to be emitted, even for a MIPSr6 build. Fix it to use MIPS_ISA_ARCH_LEVEL as other parts of arch/mips already do.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Use a relative branch to get from the individual exception vectors to the common guest exit handler, rather than loading the address of the exit handler and jumping to it. This is made easier due to the fact we are now generating the entry code dynamically. This will also allow the exception code to be further reduced in future patches.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Scratch cop0 registers are needed by KVM to be able to save/restore all the GPRs, including k0/k1, and for storing the VCPU pointer. However no registers are universally suitable for these purposes, so the decision should be made at runtime. Until now, we've used DDATA_LO to store the VCPU pointer, and ErrorEPC as a temporary. It could be argued that this is abuse of those registers, and DDATA_LO is known not to be usable on certain implementations (Cavium Octeon). If KScratch registers are present, use them instead. We save & restore the temporary register in addition to the VCPU pointer register when using a KScratch register for it, as it may be used for normal host TLB handling too.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Convert the whole of locore.S (assembly to enter guest and handle exception entry) to be generated dynamically with uasm. This is done with minimal changes to the resulting code. The main changes are:
- Some constants are generated by uasm using LUI+ADDIU instead of LUI+ORI.
- Loading of lo and hi are swapped around in vcpu_run but not when resuming the guest after an exit. Both bits of logic are now generated by the same code.
- Register MOVEs in uasm use different ADDU operand ordering to GNU as, putting zero register into rs instead of rt.
- The JALR.HB to call the C exit handler is switched to JALR, since the hazard barrier would appear to be unnecessary.
This will allow further optimisation in the future to dynamically handle the capabilities of the CPU.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 16 June 2016, 6 commits
-
-
Submitted by James Hogan
Convert MIPS KVM guest register state initialisation to use the standard <asm/mipsregs.h> register field definitions for Config registers, and drop the custom definitions in kvm_host.h which it was using before.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
The comm page which is mapped into the guest kernel address space at 0x0 has the unfortunate side effect of allowing guest kernel NULL pointer dereferences to succeed. The only constraint on this address is that it must be within 32KiB of 0x0, so that single lw/sw instructions (which have 16-bit signed offset fields) can be used to access it, using the zero register as a base. So let's move the comm page as high as possible within that constraint so that 0x0 can be left unmapped, at least for page sizes < 32KiB.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
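The constraint works out to the last page below the 32KiB boundary; a hedged sketch of how the address might be expressed (assuming the definition lives in kvm_host.h as before):

    /* lw/sw have a 16-bit signed offset, so with $zero as the base they can
     * reach -32768..32767. Put the comm page in the highest page that still
     * fits, leaving page 0 unmapped for page sizes smaller than 32 KiB. */
    #define KVM_GUEST_COMMPAGE_ADDR ((1 << 15) - PAGE_SIZE)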
-
Submitted by James Hogan
Allow up to 6 KVM guest KScratch registers to be enabled and accessed via the KVM guest register API and from the guest itself (the fallback reading and writing of commpage registers is sufficient for KScratch registers to work as expected). User mode can expose the registers by setting the appropriate bits of the guest Config4.KScrExist field. KScratch registers that aren't usable won't be writeable via the KVM Ioctl API.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
We need to use kvm_mips_guest_can_have_fpu() when deciding which registers to list with KVM_GET_REG_LIST, however it causes warnings with preemption since it uses cpu_has_fpu. KVM is only really supported on CPUs which have symmetric FPUs, so switch to raw_cpu_has_fpu to avoid the warning.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Make the implementation of KVM_GET_REG_LIST more dynamic so that only the subset of registers actually available can be exposed to user mode. This is important for VZ, where it may not be possible to prevent the guest from accessing some of the guest register state, therefore the user process may need to be aware of the state even if it doesn't understand what the state is for. This also allows different MIPS KVM implementations to expose different sets of registers, by way of new num_regs(vcpu) and copy_reg_indices(vcpu, indices) callback functions, currently just stubbed for trap & emulate.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Convert various MIPS KVM guest instruction emulation functions to decode instructions (and encode translations) using the union mips_instruction and related enumerations in asm/inst.h rather than #defines and hardcoded values.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
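A generic decoding sketch using the named bitfields from <asm/inst.h> (not the exact hunk from this commit; the instword, base, rt and offset locals are assumed for illustration):

    union mips_instruction inst;

    inst.word = instword;
    switch (inst.i_format.opcode) {
    case lw_op:
            /* Named fields replace hand-rolled masks and shift constants */
            base   = inst.i_format.rs;
            rt     = inst.i_format.rt;
            offset = inst.i_format.simmediate;
            break;
    }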
-
- 14 June 2016, 5 commits
-
-
Submitted by James Hogan
Clean up the MIPS kvm_exit trace event so that the exit reasons are specified in a trace friendly way (via __print_symbolic), and so that the exit reasons that derive straight from Cause.ExcCode values map directly, allowing a single trace_kvm_exit() call to replace a bunch of individual ones.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Rename fpu_inuse and the related definitions to aux_inuse so it can be used for lazy context management of other auxiliary processor state too, such as VZ guest timer, watchpoints and performance counters.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Convert KVM to use the MIPS_ENTRYLO_* definitions from <asm/mipsregs.h> rather than custom definitions in kvm_host.h.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by James Hogan
Simplify some of the TLB_ macros making use of the arrayification of tlb_lo. Basically we index the array by the bit of the virtual address which determines whether the even or odd entry is used, instead of having a conditional.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
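A hedged sketch of the indexed form (macro names follow the commit text; the valid-bit constant is written with the generic MIPS_ENTRYLO_V definition from <asm/mipsregs.h> for illustration):

    /* Bit PAGE_SHIFT of the VA picks the even or odd half of the pair,
     * so an array index replaces the old even/odd conditional. */
    #define TLB_LO_IDX(x, va)   (((va) >> PAGE_SHIFT) & 1)
    #define TLB_IS_VALID(x, va) \
            ((x).tlb_lo[TLB_LO_IDX(x, va)] & MIPS_ENTRYLO_V)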
-
Submitted by James Hogan
The values of the EntryLo0 and EntryLo1 registers for a TLB entry are stored in separate members of struct kvm_mips_tlb called tlb_lo0 and tlb_lo1 respectively. To allow future code which needs to manipulate arbitrary EntryLo data in the TLB entry to be simpler and less conditional, replace these members with an array of two elements.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
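The resulting structure, roughly (a sketch based on the description above):

    struct kvm_mips_tlb {
            long tlb_mask;
            long tlb_hi;
            long tlb_lo[2];     /* was: tlb_lo0, tlb_lo1 */
    };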
-