- 13 Oct 2007, 15 commits
-
-
Committed by Rusty Russell
This patch converts the vcpus array in "struct kvm" to a pointer array, and changes the "vcpu_create" and "vcpu_setup" hooks into one "vcpu_create" call which does the allocation and initialization of the vcpu (calling back into the kvm_vcpu_init core helper). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
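As a rough illustration of the shape of this change (names and sizes are illustrative, not the exact KVM declarations):

```c
#define SKETCH_MAX_VCPUS 16

struct sketch_vcpu {
    int vcpu_id;
    /* ... architecture-independent vcpu state ... */
};

/* before: vcpus embedded directly in the vm structure */
struct sketch_vm_old {
    struct sketch_vcpu vcpus[SKETCH_MAX_VCPUS];
};

/* after: an array of pointers, so the arch-specific vcpu_create() hook
 * can allocate each vcpu inside its own, larger, private structure and
 * then call back into the core init helper */
struct sketch_vm_new {
    struct sketch_vcpu *vcpus[SKETCH_MAX_VCPUS];
};
```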
-
Committed by Gregory Haskins
struct kvm_vcpu has vmx-specific members; move them to a private structure. Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
load_pdptrs can be handed an invalid cr3, and it should not oops. This can happen because we injected #gp in set_cr3() after we set vcpu->cr3 to the invalid value, or from kvm_vcpu_ioctl_set_sregs(), or when the memory configuration changes after the guest did set_cr3(). We should also copy the pdpte array once, before checking and assigning, otherwise an SMP guest can potentially alter the values between the check and the set. Finally one nitpick: ret = 1 should be done as late as possible: this allows GCC to check for unset "ret" should the function change in future. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
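A standalone sketch of the copy-then-check-then-commit pattern described here; the helper, the reserved-bit mask, and the structure layout are assumptions for illustration, not the actual KVM code:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define PDPTE_COUNT 4
#define PDPTE_RESERVED_MASK 0xfff0000000000000ULL /* hypothetical mask */

struct vcpu_state {
    uint64_t pdptrs[PDPTE_COUNT];
};

/* stand-in for reading guest physical memory; assumed, not a real API */
extern int read_guest_phys(uint64_t gpa, void *dst, size_t len);

static int load_pdptrs_sketch(struct vcpu_state *vcpu, uint64_t cr3)
{
    uint64_t pdpte[PDPTE_COUNT];
    int i, ret = 0;

    /* one copy: the check and the commit below see the same snapshot,
     * so another guest vcpu cannot change an entry in between */
    if (read_guest_phys(cr3 & ~0x1fULL, pdpte, sizeof(pdpte)) < 0)
        goto out;

    for (i = 0; i < PDPTE_COUNT; i++) {
        int present = pdpte[i] & 1;
        if (present && (pdpte[i] & PDPTE_RESERVED_MASK))
            goto out;   /* reserved bit set: fail cleanly, do not oops */
    }

    memcpy(vcpu->pdptrs, pdpte, sizeof(pdpte));
    ret = 1;            /* set as late as possible, per the note above */
out:
    return ret;
}
```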
-
Committed by Shaohua Li
gfn_to_page might sleep with swap support. Move it out of the kmap calls. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
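The reordering amounts to the pattern below, sketched with assumed helper names (the real gfn-to-page lookup and atomic map/unmap calls differ): anything that may sleep happens before the non-sleeping mapping section.

```c
#include <string.h>

struct page;   /* opaque here */

/* assumed helpers, standing in for the real gfn lookup and the
 * atomic (non-sleeping) map/unmap pair */
extern struct page *gfn_to_page_may_sleep(unsigned long gfn);
extern void *map_page_atomic(struct page *page);
extern void unmap_page_atomic(void *va);

static int write_guest_page_sketch(unsigned long gfn, unsigned long offset,
                                   const void *src, unsigned long len)
{
    /* may sleep (e.g. swap-in), so do it before the atomic section */
    struct page *page = gfn_to_page_may_sleep(gfn);
    void *va;

    if (!page)
        return -1;

    /* no sleeping allowed between map and unmap */
    va = map_page_atomic(page);
    memcpy((char *)va + offset, src, len);
    unmap_page_atomic(va);
    return 0;
}
```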
-
Committed by Rusty Russell
Don't fall through and turn on PAE in this case. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
The Intel manual (and the KVM definition) say the TPR is 4 bits wide. Also fix the CR8_RESEVED_BITS typo. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
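A tiny standalone illustration of the point: CR8 carries only the 4-bit TPR, so everything above bit 3 is reserved (the macro name here is chosen for illustration):

```c
#include <stdint.h>
#include <stdio.h>

#define CR8_RESERVED_BITS (~0x0fULL)   /* only bits 3:0 (the TPR) are defined */

int main(void)
{
    uint64_t cr8 = 0x1f;               /* bit 4 set: not a valid 4-bit TPR */

    printf("cr8=%#llx -> %s\n", (unsigned long long)cr8,
           (cr8 & CR8_RESERVED_BITS) ? "reserved bits set, inject #GP"
                                     : "ok");
    return 0;
}
```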
-
Committed by Jeff Dike
Signed-off-by: Jeff Dike <jdike@linux.intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
Creating one's own BITMAP macro seems suboptimal: if we use manual arithmetic in the one place exposed to userspace, we can use standard macros elsewhere. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
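For example, a userspace-visible header has to spell the array size out with plain arithmetic, since kernel-only bitmap helpers such as DECLARE_BITMAP are unavailable there (names and sizes below are illustrative):

```c
#include <stdint.h>

#define SKETCH_NR_IRQS 256

/* userspace-facing style: the size is written out by hand, one 64-bit
 * word per 64 entries, rounded up */
struct sketch_irq_state {
    uint64_t irq_pending[(SKETCH_NR_IRQS + 63) / 64];
};
```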
-
Committed by Rusty Russell
On this machine (Intel), writing to the CR4 bits 0x00000800 and 0x00001000 causes a GPF. The Intel manual is a little unclear, but AFAICT they're reserved, too. Also fix the spelling of CR4_RESEVED_BITS. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
The kernel now has asm/cpu-features.h: use those macros instead of inventing our own. Also spell out the definition of CR3_RESEVED_BITS, fix its spelling and tighten it for the non-PAE case. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
The kernel now has asm/cpu-features.h: use those macros instead of inventing our own. Also spell out the definition of CR0_RESEVED_BITS (no code change) and fix a typo. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
Don't pre-declare hardware_disable: shuffle the reboot hook down. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Eddie Dong
Add string pio write support to support some versions of Windows. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Qing He
This patch adds a `vcpu_id' field in `struct vcpu', so we can differentiate the BSP and APs without pointer comparison or arithmetic. Signed-off-by: Qing He <qing.he@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Nguyen Anh Quynh
*nopage() in kvm_main.c should only store the type of mmap() fault if the pointers are not NULL. This patch fixes the problem. Signed-off-by: Nguyen Anh Quynh <aquynh@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 20 Aug 2007, 1 commit
-
-
Committed by Avi Kivity
When taking a cpu down, we need to hardware_disable() it. Unfortunately, the CPU_DYING notifier is called with interrupts disabled, which means we can't use smp_call_function_single(). Fortunately, the CPU_DYING notifier is always called on the dying cpu, so we don't need to use the function at all and can simply call hardware_disable() directly. Tested-by: Paolo Ornati <ornati@fastwebnet.it> Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
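The resulting notifier shape, sketched with an assumed helper name: the callback already runs on the dying cpu with interrupts off, so the per-cpu disable routine is called directly rather than through a cross-cpu call:

```c
#include <stddef.h>

/* assumed per-cpu disable routine, callable with interrupts off */
extern void hardware_disable_local(void *junk);

static void on_cpu_dying_sketch(void)
{
    /* we are guaranteed to be on the dying cpu with IRQs disabled,
     * so smp_call_function_single() is neither needed nor allowed */
    hardware_disable_local(NULL);
}
```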
-
- 25 Jul 2007, 4 commits
-
-
Committed by Avi Kivity
Testing the wrong bit caused kvm not to disable nx on the guest when it is disabled on the host (an mmu optimization relies on the nx bits being the same in the guest and host). This allows Windows to boot when nx is disabled on the host (e.g. when host pae is disabled). Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This reverts commit a3c870bd. While it does save useless updates, it (probably) defeats the fork detector, causing a massive performance loss. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
We add the kvm to the vm_list before initializing the vcpu mutexes, which can be mutex_trylock()'ed by decache_vcpus_on_cpu(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Writes that are contiguous in virtual memory may not be contiguous in physical memory, so split writes that straddle a page boundary. Thanks to Aurelien for reporting the bug, patient testing, and a fix to this very patch. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Avi Kivity <avi@qumranet.com>
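A self-contained sketch of the split; the per-chunk emulation helper is an assumption standing in for the real per-page write path:

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096UL

/* assumed helper: emulate a write that stays within one page */
extern int emulate_write_chunk(uint64_t gva, const void *src, size_t len);

static int emulate_write_split(uint64_t gva, const void *src, size_t len)
{
    /* bytes left in the page that gva points into */
    size_t first = SKETCH_PAGE_SIZE - (gva & (SKETCH_PAGE_SIZE - 1));

    if (len <= first)
        return emulate_write_chunk(gva, src, len);

    /* straddles a boundary: finish the first page, then do the rest,
     * since the two pages may map to unrelated physical pages */
    if (emulate_write_chunk(gva, src, first) < 0)
        return -1;
    return emulate_write_chunk(gva + first, (const char *)src + first,
                               len - first);
}
```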
-
- 21 Jul 2007, 2 commits
-
-
Committed by Avi Kivity
Allow real-mode emulation of rdmsr and wrmsr. This allows smp Windows to boot, presumably for its sipi trampoline. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
The memory slot management functions were oriented against vcpu 0, whereas they should be kvm-wide. This causes hangs when starting X on guest smp. Fix by making the functions (and resultant tail in the mmu) non-vcpu-specific. Unfortunately this reduces the efficiency of the mmu object cache a bit. We may have to revisit this later. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 16 Jul 2007, 18 commits
-
-
Committed by Avi Kivity
Only at the CPU_DYING stage can we be sure that no user process will be scheduled onto the cpu and oops when trying to use virtualization extensions. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
The hotplug IPIs can be called from the cpu on which we are currently running, so use on_cpu(). Similarly, drop on_each_cpu() for the suspend/resume callbacks, as we're in atomic context here and only one cpu is up anyway. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
By keeping track of which cpus have virtualization enabled, we prevent double-enable or double-disable during hotplug, which would result in a fatal oops. Signed-off-by: Avi Kivity <avi@qumranet.com>
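A minimal sketch of that bookkeeping (array size and helper names assumed): each cpu records whether the extensions are on, so an enable or disable is never applied twice:

```c
#include <stdbool.h>

#define SKETCH_MAX_CPUS 64

static bool vt_enabled[SKETCH_MAX_CPUS];

extern void arch_enable_virt(void);    /* assumed low-level enable */
extern void arch_disable_virt(void);   /* assumed low-level disable */

static void hardware_enable_sketch(int cpu)
{
    if (vt_enabled[cpu])
        return;                 /* already on: enabling again would fault */
    arch_enable_virt();
    vt_enabled[cpu] = true;
}

static void hardware_disable_sketch(int cpu)
{
    if (!vt_enabled[cpu])
        return;                 /* already off: disabling again would fault */
    arch_disable_virt();
    vt_enabled[cpu] = false;
}
```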
-
Committed by Avi Kivity
Remove unnecessary ones, and rearrange the remaining ones in the standard order. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
kvm uses a pseudo filesystem, kvmfs, to generate inodes, a job that the new anonymous inodes source does much better. Cc: Davide Libenzi <davidel@xmailserver.org> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Luca Tettamanti
When writing to normal memory and the memory area is unchanged, the write can be safely skipped, avoiding the costly kvm_mmu_pte_write. Signed-Off-By: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
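The optimization reduces to the comparison below; the expensive-update helper is a stand-in for the real shadow page table update, not the KVM function itself:

```c
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* assumed stand-in for the costly shadow-page-table update */
extern void expensive_pte_write_update(void *dst, const void *src, size_t len);

static void emulated_write_sketch(void *dst, const void *src, size_t len)
{
    bool changed = memcmp(dst, src, len) != 0;

    memcpy(dst, src, len);          /* the write itself always happens */
    if (changed)
        expensive_pte_write_update(dst, src, len);  /* only when needed */
}
```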
-
Committed by Eddie Dong
Useful for the PIC and PIT. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Gregory Haskins
Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
When a vcpu causes a shadow tlb entry to have reduced permissions, it must also clear the tlb on remote vcpus. We do that by: (1) setting a bit on the vcpu that requests a tlb flush before the next entry; (2) if the vcpu is currently executing, sending an ipi to make sure it exits before we continue. Signed-off-by: Avi Kivity <avi@qumranet.com>
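A sketch of that protocol with assumed types and an assumed kick primitive: set the request flag first, then IPI the vcpu only if it is currently in guest mode:

```c
struct vcpu_sketch {
    volatile int tlb_flush_requested;   /* checked on every guest entry */
    volatile int in_guest_mode;
    int cpu;
};

/* assumed primitive that forces the target cpu out of guest mode */
extern void send_kick_ipi(int cpu);

static void request_remote_tlb_flush(struct vcpu_sketch *v)
{
    v->tlb_flush_requested = 1;
    __sync_synchronize();           /* publish the flag before the check */
    if (v->in_guest_mode)
        send_kick_ipi(v->cpu);      /* force an exit so the flag is seen now */
}
```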
-
Committed by Avi Kivity
That way, we don't need to loop over KVM_MAX_VCPUS for a single-vcpu vm. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Will soon have a third user. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
If we add the vm once per vcpu, we corrupt the list if the guest has multiple vcpus. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
A vcpu can pin up to four mmu shadow pages, which means the freeing loop will never terminate. Fix by first unpinning shadow pages on all vcpus, then freeing shadow pages. Signed-off-by: Avi Kivity <avi@qumranet.com>
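The two-phase teardown, sketched with assumed names: dropping every vcpu's pinned roots first is what lets the freeing loop reach zero:

```c
#define SKETCH_MAX_VCPUS 16

struct sketch_vcpu_mmu { int has_pinned_roots; };
struct sketch_vm { struct sketch_vcpu_mmu *vcpus[SKETCH_MAX_VCPUS]; };

/* assumed helpers */
extern void unpin_vcpu_roots(struct sketch_vcpu_mmu *v); /* drops the pins */
extern void free_all_shadow_pages(struct sketch_vm *vm); /* frees the rest */

static void destroy_vm_mmu_sketch(struct sketch_vm *vm)
{
    int i;

    /* phase 1: release every vcpu's pinned root pages first ... */
    for (i = 0; i < SKETCH_MAX_VCPUS; i++)
        if (vm->vcpus[i])
            unpin_vcpu_roots(vm->vcpus[i]);

    /* phase 2: ... so this loop can actually free everything */
    free_all_shadow_pages(vm);
}
```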
-
Committed by Nguyen Anh Quynh
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Use slab caches instead of a simple custom list. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Eddie Dong
The MSR_EFER.LME/LMA bits are automatically saved/restored by VMX hardware; KVM only needs to save the NX/SCE bits at the time of a heavyweight VM exit. But clearing the NX bit in the host environment may cause a system hang if the host page tables are using EXB bits, so we leave the NX bit as it is. If host NX=1 and guest NX=0, we can check the guest page table EXB bits before inserting a shadow pte (though no guest is expected to see this kind of gp fault). If host NX=0, we present no Execute-Disable feature to the guest, so the host NX=0, guest NX=1 combination cannot occur. This patch reduces raw vmexit time by ~27%. Me: fix compile warnings on i386. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Matthew Gregan
Attempting to boot the default 'bsd' kernel of OpenBSD 4.1 i386 in a guest fails early in the kernel init inside p3_get_bus_clock while trying to read the IA32_EBL_CR_POWERON MSR. KVM logs an 'unhandled MSR' message and the guest kernel faults. This patch is sufficient to allow OpenBSD to boot, after which it seems to run fine. I'm not sure if this is the correct solution for dealing with this particular MSR, but it works for me. Signed-off-by: Matthew Gregan <kinetik@flim.org> Signed-off-by: Avi Kivity <avi@qumranet.com>
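The approach amounts to handling the MSR read and returning zero instead of faulting; the handler shape below is an assumption, with the case value taken from Intel's index for IA32_EBL_CR_POWERON:

```c
#include <stdint.h>

#define MSR_IA32_EBL_CR_POWERON 0x2a

static int rdmsr_sketch(uint32_t msr, uint64_t *data)
{
    switch (msr) {
    case MSR_IA32_EBL_CR_POWERON:
        *data = 0;      /* enough for the guest's bus-clock probe to proceed */
        return 0;
    default:
        return -1;      /* unhandled: the caller injects a fault */
    }
}
```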
-
Committed by Avi Kivity
Instead of calling two functions and repeating expensive checks, call one function and provide it with before/after information. Signed-off-by: Avi Kivity <avi@qumranet.com>
-