- 21 July 2007, 5 commits
-
-
Committed by Avi Kivity
__free_page() wants a struct page, not a virtual address. Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
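The distinction this fix hinges on: `__free_page()` takes a `struct page *`, while `free_page()` takes the virtual address returned by `__get_free_page()`. A minimal sketch of the two correct pairings (function name illustrative):

```c
#include <linux/gfp.h>
#include <linux/mm.h>

static void page_free_demo(void)
{
	struct page *page = alloc_page(GFP_KERNEL);
	unsigned long addr = __get_free_page(GFP_KERNEL);

	if (page)
		__free_page(page);	/* takes a struct page * */
	if (addr)
		free_page(addr);	/* takes a virtual address */

	/* Passing a virtual address to __free_page() is the bug this
	 * commit fixes; virt_to_page(addr) converts when needed. */
}
```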
-
Committed by Avi Kivity
The kvm mmu uses page->private on shadow page tables; so does slub, and an oops results. Fix by allocating regular pages for shadows instead of using slub. Tested-by: S.Çağlar Onur <caglar@pardus.org.tr> Signed-off-by: Avi Kivity <avi@qumranet.com>
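The shape of the fix, sketched under the assumption that shadow pages come straight from the page allocator (helper name illustrative, not the exact KVM diff):

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Both the shadow-page code and slub store bookkeeping in
 * page->private, so slab-backed shadows corrupt slub's metadata.
 * Whole pages from the page allocator avoid the clash entirely.
 */
static void *alloc_shadow_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

	if (!page)
		return NULL;
	set_page_private(page, 0);	/* safe: kvm owns this field now */
	return page_address(page);
}
```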
-
Committed by Avi Kivity
Allow real-mode emulation of rdmsr and wrmsr. This allows smp Windows to boot, presumably for its sipi trampoline. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
The memory slot management functions were oriented against vcpu 0, where they should be kvm-wide. This causes hangs when starting X on guest smp. Fix by making the functions (and resultant tail in the mmu) non-vcpu-specific. Unfortunately this reduces the efficiency of the mmu object cache a bit. We may have to revisit this later. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
We need to distinguish between large page shadows which have the nx bit set and those which don't. The problem shows up when booting a newer smp Linux kernel, where the trampoline page (which is in real mode, which uses the same shadow pages as large pages) is using the same mapping as a kernel data page, which is mapped using nx, causing kvm to spin on that page. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 20 July 2007, 2 commits
-
-
Committed by Paul Mundt
Slab destructors were no longer supported after Christoph's c59def9f change. They've been BUGs for both slab and slub, and slob never supported them either. This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
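In practical terms every caller lost one trailing argument. A sketch of a typical callsite conversion (cache name and object type are illustrative):

```c
#include <linux/errno.h>
#include <linux/slab.h>

static struct kmem_cache *my_cache;

static int my_cache_init(void)
{
	/* Before this commit the call carried a dtor as its last
	 * argument, almost always NULL:
	 *   kmem_cache_create("my_cache", size, 0, flags, ctor, NULL);
	 * After it, the parameter is gone at every callsite: */
	my_cache = kmem_cache_create("my_cache", sizeof(unsigned long),
				     0, SLAB_HWCACHE_ALIGN, NULL);
	return my_cache ? 0 : -ENOMEM;
}
```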
-
Committed by Avi Kivity
Currently, CONFIG_X86_CMPXCHG64 both enables boot-time checking of the cmpxchg64b feature and enables compilation of the set_64bit() family. Since the option is dependent on PAE, and since KVM depends on set_64bit(), this effectively disables KVM on i386 nopae. Simplify by removing the config option altogether: the boot check is made dependent on CONFIG_X86_PAE directly, and the set_64bit() family is exposed without constraints. It is up to users to check for the feature flag (KVM does not, as virtualization extensions imply its existence). Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
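A sketch of where the boot check lands after the option is removed: keyed on CONFIG_X86_PAE directly. Function name is illustrative; X86_FEATURE_CX8 is the real cmpxchg8b feature flag, but the exact check site in the i386 setup code is assumed here, not quoted.

```c
#include <linux/kernel.h>
#include <asm/processor.h>

static void check_cmpxchg8b(struct cpuinfo_x86 *c)
{
#ifdef CONFIG_X86_PAE
	/* PAE pte updates go through set_64bit()/cmpxchg8b, so a PAE
	 * kernel genuinely cannot run without the instruction. */
	if (!cpu_has(c, X86_FEATURE_CX8))
		panic("Kernel compiled for PAE, but cpu has no cmpxchg8b");
#endif
}
```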
-
- 16 July 2007, 33 commits
-
-
Committed by Avi Kivity
Only at the CPU_DYING stage can we be sure that no user process will be scheduled onto the cpu and oops when trying to use virtualization extensions. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
The hotplug IPIs can be called from the cpu on which we are currently running, so use on_cpu(). Similarly, drop on_each_cpu() for the suspend/resume callbacks, as we're in atomic context here and only one cpu is up anyway. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
By keeping track of which cpus have virtualization enabled, we prevent double-enable or double-disable during hotplug, which causes a fatal oops. Signed-off-by: Avi Kivity <avi@qumranet.com>
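A sketch of bookkeeping like this, using the 2007-era cpumask API so that enable and disable become idempotent. The variable name matches kvm_main.c of the period; the rest is illustrative.

```c
#include <linux/cpumask.h>
#include <linux/smp.h>

static cpumask_t cpus_hardware_enabled;	/* which cpus have VMX/SVM on */

static void hardware_enable(void *junk)
{
	int cpu = raw_smp_processor_id();

	if (cpu_isset(cpu, cpus_hardware_enabled))
		return;			/* double-enable would oops */
	cpu_set(cpu, cpus_hardware_enabled);
	/* ...vendor-specific enable (vmxon / EFER.SVME) goes here... */
}

static void hardware_disable(void *junk)
{
	int cpu = raw_smp_processor_id();

	if (!cpu_isset(cpu, cpus_hardware_enabled))
		return;			/* double-disable would oops too */
	cpu_clear(cpu, cpus_hardware_enabled);
	/* ...vendor-specific disable goes here... */
}
```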
-
Committed by Avi Kivity
Remove unnecessary ones, and rearrange the remaining in the standard order. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
kvm uses a pseudo filesystem, kvmfs, to generate inodes, a job that the new anonymous inodes source does much better. Cc: Davide Libenzi <davidel@xmailserver.org> Signed-off-by: Avi Kivity <avi@qumranet.com>
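A hedged sketch of what the anon-inodes interface buys: no filesystem registration or mount, just a ready-made fd. `anon_inode_getfd()` is shown in its later four-argument form, which is an assumption about this era; the fops and wrapper are illustrative.

```c
#include <linux/anon_inodes.h>
#include <linux/fs.h>

static const struct file_operations kvm_vm_fops;	/* .release etc. */

static int create_vm_fd(void *kvm)
{
	/* One call replaces the whole private "kvmfs" machinery. */
	return anon_inode_getfd("kvm-vm", &kvm_vm_fops, kvm, O_RDWR);
}
```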
-
Committed by Joerg Roedel
This patch adds an implementation to the svm is_disabled function to detect reliably if the BIOS disabled the SVM feature in the CPU. This fixes the issues with kernel panics when loading the kvm-amd module on machines where SVM is available but disabled. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
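The check reads the VM_CR MSR and tests the SVMDIS bit the BIOS sets when it locks SVM off. A sketch close to the shape of the svm.c code; the MSR number and bit position are as in the kernel headers, but treat the constants as assumptions.

```c
#include <linux/types.h>
#include <asm/msr.h>

#define MSR_VM_CR		0xc0010114
#define SVM_VM_CR_SVM_DISABLE	4	/* SVMDIS: BIOS locked SVM off */

static int is_disabled(void)
{
	u64 vm_cr;

	rdmsrl(MSR_VM_CR, vm_cr);
	if (vm_cr & (1 << SVM_VM_CR_SVM_DISABLE))
		return 1;	/* setting EFER.SVME now would #GP */
	return 0;
}
```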
-
Committed by Avi Kivity
A vmexit implicitly flushes the tlb; the code is bogus. Noted by Shaohua Li. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Shaohua Li
Need to flush the tlb after updating a pte, not before. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
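The ordering bug in one picture; the helper names below are illustrative stand-ins, not kvm symbols.

```c
extern void flush_guest_tlb(void);	/* illustrative stand-in */

static void set_pte_and_flush(unsigned long long *ptep,
			      unsigned long long new_pte)
{
	/* Wrong order -- flush first, then write -- leaves a window
	 * in which the stale pte can be pulled back into the tlb:
	 *   flush_guest_tlb(); *ptep = new_pte;
	 */
	*ptep = new_pte;	/* make the new mapping visible first */
	flush_guest_tlb();	/* then no stale entry can be re-cached */
}
```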
-
Committed by Avi Kivity
Protected mode code may have corrupted the real-mode tss, so re-initialize it when switching to real mode. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Luca Tettamanti
When writing to normal memory and the memory area is unchanged the write can be safely skipped, avoiding the costly kvm_mmu_pte_write. Signed-off-by: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
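The short-circuit, sketched with illustrative names (the real entry point and its signature differ):

```c
#include <linux/string.h>

static void write_guest_page(void *dst, const void *new,
			     unsigned int bytes)
{
	if (!memcmp(dst, new, bytes))
		return;		/* unchanged: skip the shadow bookkeeping */

	memcpy(dst, new, bytes);
	/* ...kvm_mmu_pte_write()-style shadow update happens here... */
}
```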
-
Committed by Luca Tettamanti
When the old value and new one are the same the emulator skips the write; this is undesirable when the destination is a MMIO area and the write shall be performed regardless of the previous value. This optimization breaks e.g. a Linux guest APIC compiled without X86_GOOD_APIC. Remove the check and perform the writeback stage in the emulation unless it's explicitly disabled (currently push and some two-byte instructions may disable the writeback). Signed-off-by: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Eddie Dong
Useful for the PIC and PIT. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Gregory Haskins
With kernel-injected interrupts, we need to check for interrupts on lightweight exits too. Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Gregory Haskins
Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Nitin A Kamble
Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Nitin A Kamble
For use in real mode. Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
If the time stamp counter goes backwards, a guest delay loop can become infinite. This can happen if a vcpu is migrated to another cpu, where the counter has a lower value than the first cpu. Since we're doing an IPI to the first cpu anyway, we can use that to pick up the old tsc, and use that to calculate the adjustment we need to make to the tsc offset. Signed-off-by: Avi Kivity <avi@qumranet.com>
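The arithmetic at migration time, sketched after the vmx vcpu-load path: TSC_OFFSET is the real VMCS field, but the helpers are vmx.c internals and rdtscll is the tsc-read macro of that era, so take the exact surroundings as assumptions.

```c
#include <linux/types.h>
#include <asm/msr.h>

/* host_tsc was read on the old cpu by the IPI we already send
 * when a vcpu changes cpus. */
static void adjust_tsc_offset(u64 host_tsc)
{
	u64 tsc_this, delta;

	rdtscll(tsc_this);		/* tsc on the new cpu */
	delta = host_tsc - tsc_this;	/* unsigned wrap-around makes
					 * this a signed adjustment */
	vmcs_write64(TSC_OFFSET, vmcs_read64(TSC_OFFSET) + delta);
}
```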
-
Committed by Avi Kivity
Needs to be set on vcpu 0 only. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Shani Moideen
Signed-off-by: Shani Moideen <shani.moideen@wipro.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Shani Moideen
Signed-off-by: Shani Moideen <shani.moideen@wipro.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
When a vcpu causes a shadow tlb entry to have reduced permissions, it must also clear the tlb on remote vcpus. We do that by:
- setting a bit on the vcpu that requests a tlb flush before the next entry
- if the vcpu is currently executing, sending an ipi to make sure it exits before we continue
Signed-off-by: Avi Kivity <avi@qumranet.com>
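The request-bit-plus-IPI pattern, sketched with simplified stand-in types; the modern four-argument `smp_call_function_single()` is assumed, and the bit name is illustrative.

```c
#include <linux/bitops.h>
#include <linux/smp.h>

#define REQ_TLB_FLUSH	0		/* request bit (illustrative) */

struct vcpu {				/* simplified stand-in */
	unsigned long requests;
	int cpu;			/* cpu running the guest, or -1 */
};

static void ack_flush(void *info)
{
	/* empty: the IPI itself forces a vmexit on the target cpu */
}

static void flush_remote_tlb(struct vcpu *vcpu)
{
	/* 1. ask for a flush before the vcpu's next guest entry */
	set_bit(REQ_TLB_FLUSH, &vcpu->requests);

	/* 2. if it is in guest mode right now, kick it out so the
	 *    request is honored before we rely on the flush */
	if (vcpu->cpu >= 0)
		smp_call_function_single(vcpu->cpu, ack_flush, NULL, 1);
}
```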
-
Committed by Avi Kivity
That way, we don't need to loop over KVM_MAX_VCPUS for a single-vcpu vm. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This has two use cases: when the bios can't boot from disk, and for guest smp bootstrap. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Will soon have a third user. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
As we don't support guest tlb shootdown yet, this is only reliable for real-mode guests. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
If we add the vm once per vcpu, we corrupt the list if the guest has multiple vcpus. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
A vcpu can pin up to four mmu shadow pages, which means the freeing loop will never terminate. Fix by first unpinning shadow pages on all vcpus, then freeing shadow pages. Signed-off-by: Avi Kivity <avi@qumranet.com>
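The two-phase teardown, sketched; all structures and helper names below are illustrative stand-ins for the real kvm types.

```c
#define MAX_VCPUS	4	/* a vcpu pins at most four root pages */

struct vcpu;					/* stand-ins for the   */
struct vm { struct vcpu *vcpus[MAX_VCPUS]; };	/* real kvm structures */
void mmu_free_roots(struct vcpu *vcpu);		/* drops pinned roots  */
void free_shadow_pages(struct vm *vm);		/* frees the page list */

static void destroy_mmu(struct vm *vm)
{
	int i;

	/* phase 1: unpin every vcpu's roots, so no page's pin count
	 * can keep the free loop spinning */
	for (i = 0; i < MAX_VCPUS; ++i)
		if (vm->vcpus[i])
			mmu_free_roots(vm->vcpus[i]);

	free_shadow_pages(vm);		/* phase 2: now terminates */
}
```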
-
Committed by Nguyen Anh Quynh
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Robert P. J. Day
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Switching the guest paging context may require us to allocate memory, which might fail. Instead of wiring up error paths everywhere, make context switching lazy and actually do the switch before the next guest entry, where we can return an error if allocation fails. Signed-off-by: Avi Kivity <avi@qumranet.com>
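The lazy pattern in miniature: invalidate cheaply at switch time, do the fallible rebuild on the guest-entry path. Types and helper names are simplified stand-ins.

```c
#include <linux/errno.h>

struct vcpu { int mmu_valid; };		/* simplified stand-in */
int mmu_alloc_roots(struct vcpu *vcpu);	/* may fail with -ENOMEM */

/* Context switch itself: cheap and infallible, just mark stale. */
static void mmu_new_context(struct vcpu *vcpu)
{
	vcpu->mmu_valid = 0;
}

/* Guest-entry path: the real switch, where an error can simply be
 * returned instead of threading error handling through every
 * context-switch site. */
static int mmu_reload(struct vcpu *vcpu)
{
	int r;

	if (vcpu->mmu_valid)
		return 0;
	r = mmu_alloc_roots(vcpu);
	if (r < 0)
		return r;
	vcpu->mmu_valid = 1;
	return 0;
}
```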
-
Committed by Avi Kivity
This has not been used for some time, as the same information is available in the page header. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This was once used to avoid accessing the guest pte when upgrading the shadow pte from read-only to read-write. But usually we need to set the guest pte dirty or accessed bits anyway, so this wasn't really exploited. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Always set the accessed and dirty bit (since having them cleared causes a read-modify-write cycle), always set the present bit, and copy the nx bit from the guest. Signed-off-by: Avi Kivity <avi@qumranet.com>
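The bit policy as a sketch; the mask values are the architectural x86 pte bits, while the helper name is illustrative.

```c
#define PT_PRESENT_MASK		(1ULL << 0)
#define PT_ACCESSED_MASK	(1ULL << 5)
#define PT_DIRTY_MASK		(1ULL << 6)
#define PT64_NX_MASK		(1ULL << 63)

static unsigned long long make_shadow_pte(unsigned long long guest_pte)
{
	/* accessed/dirty pre-set so the cpu never needs a
	 * read-modify-write cycle on the shadow pte to set them;
	 * present always set; nx copied straight from the guest. */
	return PT_PRESENT_MASK | PT_ACCESSED_MASK | PT_DIRTY_MASK |
	       (guest_pte & PT64_NX_MASK);
}
```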
-