- 30 January 2008, 40 commits
-
Submitted by Izik Eidus
Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
If the guest requests just a tlb flush, don't take the vm lock and drop the mmu context pointlessly. Signed-off-by: Avi Kivity <avi@qumranet.com>
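A minimal sketch of the fast path this describes, with hypothetical structures and helper names standing in for the KVM machinery: when the guest only asks for a TLB flush, the flush is handled early, before the vm lock would be taken and the mmu context dropped.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the KVM structures and helpers. */
struct vm {
	pthread_mutex_t lock;
};

static void flush_local_tlb(struct vm *vm)  { (void)vm; /* flush this vcpu's TLB */ }
static void drop_mmu_context(struct vm *vm) { (void)vm; /* tear down shadow page tables */ }

static void handle_address_space_change(struct vm *vm, bool flush_only)
{
	if (flush_only) {
		/* Fast path: just flush; no vm lock, and the mmu
		 * context (shadow page tables) is kept intact. */
		flush_local_tlb(vm);
		return;
	}

	/* Slow path: a real address-space change. */
	pthread_mutex_lock(&vm->lock);
	drop_mmu_context(vm);
	pthread_mutex_unlock(&vm->lock);
}

int main(void)
{
	struct vm vm = { .lock = PTHREAD_MUTEX_INITIALIZER };
	handle_address_space_change(&vm, true);   /* fast path */
	handle_address_space_change(&vm, false);  /* slow path */
	return 0;
}
```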
-
Submitted by Avi Kivity
If all we're doing is increasing permissions on a pte (typical for demand paging), then there's no need to flush remote tlbs. Worst case, they'll get a spurious page fault. Signed-off-by: Avi Kivity <avi@qumranet.com>
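A self-contained sketch of the idea. The bit positions are real x86 page-table flags, but need_remote_flush() here is illustrative, not the KVM function: a remote flush is needed only when a permission is taken away; pure permission widening is safe to leave to a spurious fault.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* x86 pte flag bits. */
#define PTE_PRESENT (1ull << 0)
#define PTE_WRITE   (1ull << 1)
#define PTE_USER    (1ull << 2)
#define PTE_NX      (1ull << 63)

/* A remote TLB flush is only required when the new pte revokes a
 * permission. If we only grant permissions, a stale cached entry
 * causes at worst a spurious page fault, which then resolves itself. */
static bool need_remote_flush(uint64_t old, uint64_t new)
{
	if (!(old & PTE_PRESENT))
		return false;          /* nothing could be cached */
	if ((old & PTE_WRITE) && !(new & PTE_WRITE))
		return true;           /* write permission revoked */
	if ((old & PTE_USER) && !(new & PTE_USER))
		return true;           /* user access revoked */
	if (!(old & PTE_NX) && (new & PTE_NX))
		return true;           /* execute permission revoked */
	return false;
}

int main(void)
{
	uint64_t old = PTE_PRESENT | PTE_USER;              /* read-only */
	uint64_t new = PTE_PRESENT | PTE_USER | PTE_WRITE;  /* now writable */
	printf("flush needed: %d\n", need_remote_flush(old, new)); /* 0 */
	return 0;
}
```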
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
I spent an hour worrying why I see so many guest page faults on FC6 i386. Turns out bypass wasn't implemented for nonpae. Implement it so it doesn't happen again. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
Split kvm_arch_vcpu_create() into kvm_arch_vcpu_create() and kvm_arch_vcpu_setup(), enabling preemption notification between the two. This means that we can now do vcpu_load() within kvm_arch_vcpu_setup(). Signed-off-by: Avi Kivity <avi@qumranet.com>
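A sketch of the two-phase shape described here, with hypothetical types and stubs standing in for the KVM machinery: creation does pure allocation, while setup runs after preemption notifiers are registered and can therefore load the vcpu.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the KVM vcpu machinery. */
struct vcpu { int id; int loaded; };

static void vcpu_load(struct vcpu *v) { v->loaded = 1; /* bind to this cpu */ }
static void vcpu_put(struct vcpu *v)  { v->loaded = 0; /* release the cpu */ }

/* Phase 1: allocation and basic init only; preempt notifiers are
 * not registered yet, so the vcpu must not be loaded here. */
static struct vcpu *arch_vcpu_create(int id)
{
	struct vcpu *v = calloc(1, sizeof(*v));
	if (v)
		v->id = id;
	return v;
}

/* Phase 2: runs after the caller has registered preempt notifiers,
 * so vcpu_load() is now safe. */
static int arch_vcpu_setup(struct vcpu *v)
{
	vcpu_load(v);
	/* ... state initialisation that requires a loaded vcpu ... */
	vcpu_put(v);
	return 0;
}

int main(void)
{
	struct vcpu *v = arch_vcpu_create(0);
	if (!v)
		return 1;
	/* preempt notifier registration would happen here */
	arch_vcpu_setup(v);
	free(v);
	return 0;
}
```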
-
Submitted by Zhang Xiantao
Move the !user_alloc case to kvm_arch to avoid unnecessary code logic on non-x86 platforms. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Instead of incrementally changing the mmu cache size for every memory slot operation, recalculate it from scratch. This is simpler and safer. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
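A sketch of the from-scratch recalculation. The memslot structure and the sizing ratio are illustrative assumptions, not the exact KVM constants: sum the pages of every memory slot and derive the shadow-page budget from the total.

```c
/* Illustrative memslot and sizing ratio; the real code sizes the
 * shadow-mmu cache from the guest's total memory in a similar way. */
struct memslot { unsigned long npages; };

#define MMU_PERMILLE 20  /* assume ~2% of guest pages for shadow pages */

unsigned long recalc_mmu_pages(const struct memslot *slots, int nslots)
{
	unsigned long total = 0;

	/* Recompute over every slot; no incremental adjustments
	 * that could drift or underflow over time. */
	for (int i = 0; i < nslots; i++)
		total += slots[i].npages;

	return total * MMU_PERMILLE / 1000;
}
```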
-
Submitted by Avi Kivity
Instead of fetching one byte at a time, prefetch 15 bytes (or until the next page boundary) to avoid guest page table walks. Signed-off-by: Avi Kivity <avi@qumranet.com>
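A sketch of the window computation; the helper name is illustrative, and 15 bytes is the x86 maximum instruction length: prefetch up to 15 bytes, clamped so the read never crosses into the next guest page.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE    4096u
#define MAX_INSN_LEN 15u  /* longest possible x86 instruction */

/* How many opcode bytes to fetch in one go at guest address rip:
 * up to 15, but stop at the page boundary, since the next page
 * would need its own guest page-table walk. */
size_t insn_prefetch_len(uint64_t rip)
{
	size_t to_boundary = PAGE_SIZE - (rip & (PAGE_SIZE - 1));
	return to_boundary < MAX_INSN_LEN ? to_boundary : MAX_INSN_LEN;
}
```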
-
Submitted by Avi Kivity
Theoretically used to access memory known to be ordinary RAM, it was never implemented. It is questionable whether it is possible to implement it correctly. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Izik Eidus
Improve dirty bit setting for pages that kvm releases. Until now, every page we released was marked dirty; from now on, we mark dirty only pages that had the potential to become dirty. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
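A self-contained illustration of the release-time policy; the types and helpers are stand-ins for KVM's dirty/clean page-release pair: only a page that was mapped writable can have become dirty, so only that case marks it.

```c
#include <stdbool.h>
#include <stdio.h>

struct page { bool dirty; };

/* Stand-ins: a "dirty" release marks the page before dropping the
 * reference; a "clean" release just drops the reference. */
static void release_dirty(struct page *p) { p->dirty = true; }
static void release_clean(struct page *p) { (void)p; }

/* Only pages the guest could actually have written (mapped
 * writable) are marked dirty on release. */
static void release_page(struct page *p, bool was_writable)
{
	if (was_writable)
		release_dirty(p);
	else
		release_clean(p);
}

int main(void)
{
	struct page ro = { false }, rw = { false };
	release_page(&ro, false);
	release_page(&rw, true);
	printf("ro dirty=%d, rw dirty=%d\n", ro.dirty, rw.dirty); /* 0, 1 */
	return 0;
}
```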
-
Submitted by Izik Eidus
When we map a page, we check whether some other vcpu mapped it for us and if so, bail out. But we should decrease the refcount on the page as we do so. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
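A toy model of the leak being fixed; integer refcounts and a flat pte stand in for KVM's real structures: the lookup that preceded the mapping attempt took a reference, and the vcpu that loses the race must drop it.

```c
#include <stdio.h>

struct page { int refcount; };
struct pte  { struct page *mapped; };

static void put_page(struct page *p) { p->refcount--; }

/* Each mapping attempt arrives holding one reference from the page
 * lookup. If another vcpu already installed the mapping, that
 * reference must be dropped before bailing out. */
static void map_page(struct pte *pte, struct page *page)
{
	if (pte->mapped) {
		put_page(page);  /* the fix: balance the lookup reference */
		return;
	}
	pte->mapped = page;      /* the mapping keeps the reference */
}

int main(void)
{
	struct page pg  = { .refcount = 2 }; /* one from the existing
	                                        mapping, one from our lookup */
	struct pte  pte = { .mapped = &pg }; /* another vcpu won the race */
	map_page(&pte, &pg);
	printf("refcount=%d\n", pg.refcount); /* 1: only the mapping's ref */
	return 0;
}
```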
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves the structures kvm_cpuid_entry and kvm_cpuid from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
Move the structures kvm_sregs, kvm_msr_entry, kvm_msrs, and kvm_msr_list from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves the structures kvm_segment and kvm_dtable from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves the structure lapic_state from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves the structure kvm_regs to include/asm-x86/kvm.h. Each architecture will need to create its own version of this structure. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves the structures kvm_pic_state and kvm_ioapic_state to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jerone Young
This patch moves struct kvm_memory_alias from include/linux/kvm.h to include/asm-x86/kvm.h. Also make include/linux/kvm.h include include/asm/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Izik Eidus
Use kvm_write_guest_page() with empty_zero_page, instead of doing kmap() and memset(). Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
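A self-contained sketch of the pattern; write_guest_page() here is a stand-in for KVM's kvm_write_guest_page(), and guest memory is just a buffer: clearing guest memory becomes a copy from a static zero page rather than a kmap()/memset() of the target.

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* A static page of zeroes, in the spirit of the kernel's
 * empty_zero_page; copying from it replaces an explicit
 * kmap()+memset() of the guest page. */
static const unsigned char zero_page[PAGE_SIZE];

/* Stand-in for kvm_write_guest_page(): copy len bytes into guest
 * memory at the given offset. Here guest memory is just a buffer. */
static int write_guest_page(unsigned char *guest, int offset,
			    const void *data, int len)
{
	memcpy(guest + offset, data, len);
	return 0;
}

static int clear_guest_page(unsigned char *guest, int offset, int len)
{
	return write_guest_page(guest, offset, zero_page, len);
}

int main(void)
{
	unsigned char guest[PAGE_SIZE];
	memset(guest, 0xff, sizeof(guest));
	clear_guest_page(guest, 128, 256);
	printf("guest[128]=%d guest[127]=%d\n", guest[128], guest[127]); /* 0 255 */
	return 0;
}
```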
-
Submitted by Izik Eidus
Things are simpler and more regular this way. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Jan Kiszka
Ensure that segment.base == segment.selector << 4 when entering real mode on Intel so that the CPU will not bark at us. This fixes some old protected-mode demo from http://www.x86.org/articles/pmbasics/tspec_a1_doc.htm. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
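A sketch of the invariant being enforced; the struct and helper are illustrative: in real mode the hardware expects base = selector * 16, and VMX refuses to enter the guest otherwise, so the segment is rewritten to satisfy it.

```c
#include <stdint.h>
#include <stdio.h>

struct segment { uint16_t selector; uint32_t base; };

/* Derive the selector from the base, then recompute the base, so
 * that base == selector << 4 holds when switching to real mode. */
static void fix_rmode_seg(struct segment *s)
{
	s->selector = (uint16_t)(s->base >> 4);
	s->base = (uint32_t)s->selector << 4;
}

int main(void)
{
	/* A protected-mode style selector with a real-mode base. */
	struct segment cs = { .selector = 0x0008, .base = 0xffff0 };
	fix_rmode_seg(&cs);
	printf("sel=%04x base=%05x\n", cs.selector, cs.base); /* ffff ffff0 */
	return 0;
}
```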
-
Submitted by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Move the kvm_mmu init and exit functionality out of kvm_main.c. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Zhang Xiantao
Meanwhile, keep the interface common, and leave as much logic in common code as possible. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Amit Shah
Instead of having each architecture do it individually, we do this in the arch-independent code (just x86 as of now). [avi: add svm to the mix, which was added to mainline during the 2.6.24-rc process] Signed-off-by: Amit Shah <amit.shah@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
This is in addition to the current virtual cpu statistics. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
-
Submitted by Avi Kivity
Measure the number of times we switch the fpu state. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Submitted by Avi Kivity
This is a little more accurate (since it counts actual reloads, not potential reloads), and reverses the sense of the statistic to measure a bad event, like most of the other stats (e.g. we want to minimize all counters). Signed-off-by: Avi Kivity <avi@qumranet.com>
-