- 30 January 2008, 40 commits
-
Committed by Avi Kivity
Instead of fetching one byte at a time, prefetch 15 bytes (or until the next page boundary) to avoid guest page table walks. Signed-off-by: Avi Kivity <avi@qumranet.com>
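A minimal user-space sketch of the prefetch idea described above, assuming a 4 KiB page size; read_guest() and the flat fake_guest_mem array are stand-ins for the real guest-memory accessors, not the emulator's actual code:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    /* Stand-in guest memory so the sketch is self-contained. */
    static uint8_t fake_guest_mem[64 * 1024];

    static int read_guest(uint64_t addr, void *dst, size_t len)
    {
        if (addr + len > sizeof(fake_guest_mem))
            return -1;
        memcpy(dst, &fake_guest_mem[addr], len);
        return 0;
    }

    struct fetch_cache {
        uint8_t  data[15];   /* up to 15 prefetched instruction bytes */
        uint64_t start;      /* guest address of data[0]              */
        uint64_t end;        /* one past the last valid cached byte   */
    };

    static int fetch_byte(struct fetch_cache *fc, uint64_t addr, uint8_t *out)
    {
        if (addr < fc->start || addr >= fc->end) {
            /* Refill in one go, but never cross a page boundary: the bytes
             * beyond it would need another guest page-table walk. */
            size_t left = PAGE_SIZE - (size_t)(addr & (PAGE_SIZE - 1));
            size_t len  = left < sizeof(fc->data) ? left : sizeof(fc->data);
            if (read_guest(addr, fc->data, len))
                return -1;
            fc->start = addr;
            fc->end   = addr + len;
        }
        *out = fc->data[addr - fc->start];
        return 0;
    }

Repeated fetches of consecutive addresses then hit the cache instead of re-walking the guest page tables.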
-
Committed by Avi Kivity
Theoretically used to access memory known to be ordinary RAM, it was never implemented. It is questionable whether it is possible to implement it correctly. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
Improve dirty bit setting for pages that kvm releases: until now, every page we released was marked dirty; from now on, only pages that have the potential to be dirty are marked dirty. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
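A minimal sketch of the split this describes, with illustrative names and simplified reference counting rather than the kernel's struct page machinery:

    #include <stdbool.h>

    struct page_ref {
        int  refcount;
        bool dirty;
    };

    /* Release a page that cannot have been modified through this mapping. */
    static void release_page_clean(struct page_ref *p)
    {
        p->refcount--;
    }

    /* Release a page that may have been written to: mark it dirty first. */
    static void release_page_dirty(struct page_ref *p)
    {
        p->dirty = true;
        p->refcount--;
    }

    /* Callers pick the variant based on how the mapping was actually used. */
    static void drop_mapping(struct page_ref *p, bool was_writable)
    {
        if (was_writable)
            release_page_dirty(p);
        else
            release_page_clean(p);
    }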
-
Committed by Izik Eidus
When we map a page, we check whether some other vcpu mapped it for us and, if so, bail out. But we should decrease the refcount on the page as we do so. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
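A sketch of the fix in simplified terms; the pte and refcount helpers here are stand-ins for the real shadow-MMU code:

    #include <stdbool.h>
    #include <stdint.h>

    struct page_ref { int refcount; };

    static bool pte_present(uint64_t pte)        { return pte & 1ull; }
    static void put_page_ref(struct page_ref *p) { p->refcount--; }

    /* Returns 1 if we installed the mapping, 0 if another vcpu beat us to it. */
    static int install_mapping(uint64_t *sptep, uint64_t new_spte,
                               struct page_ref *page)
    {
        if (pte_present(*sptep)) {
            /* Someone else already mapped it; drop the reference we took
             * when looking the page up, otherwise it leaks. */
            put_page_ref(page);
            return 0;
        }
        *sptep = new_spte;
        return 1;
    }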
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
Use kvm_write_guest_page() with empty_zero_page, instead of doing kmap and memset. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
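A user-space analogue of the change, assuming a flat guest_ram buffer; write_guest_page() stands in for kvm_write_guest_page(), and empty_zero_page is simply an all-zero source page:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    /* A page that is always zero, used as the copy source. */
    static const uint8_t empty_zero_page[PAGE_SIZE];

    /* Stand-in for kvm_write_guest_page(): copy len bytes into guest frame gfn. */
    static int write_guest_page(uint8_t *guest_ram, uint64_t gfn,
                                const void *src, size_t len)
    {
        memcpy(guest_ram + gfn * PAGE_SIZE, src, len);
        return 0;
    }

    /* One guest-write call replaces the old kmap + memset + kunmap sequence. */
    static int clear_guest_page(uint8_t *guest_ram, uint64_t gfn)
    {
        return write_guest_page(guest_ram, gfn, empty_zero_page, PAGE_SIZE);
    }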
-
Committed by Izik Eidus
Things are simpler and more regular this way. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Jan Kiszka
Ensure that segment.base == segment.selector << 4 when entering real mode on Intel, so that the CPU will not bark at us. This fixes some old protected mode demo from http://www.x86.org/articles/pmbasics/tspec_a1_doc.htm. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
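A small sketch of the invariant being enforced, with a simplified segment structure rather than the VMX segment-cache fields:

    #include <stdint.h>

    struct segment {
        uint16_t selector;
        uint64_t base;
        uint32_t limit;
    };

    /* Real-mode addressing is linear = (selector << 4) + offset, so the
     * cached base must equal selector << 4 or VMX will refuse the state. */
    static void fix_rmode_seg(struct segment *s)
    {
        s->base  = (uint64_t)s->selector << 4;
        s->limit = 0xffff;   /* 64 KiB real-mode segment */
    }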
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Move the kvm_mmu init and exit functionality out of kvm_main.c. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Meanwhile, keep the interface in common code, and leave as much logic in common code as possible. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Amit Shah
Instead of having each architecture do it individually, we do this in the arch-independent code (just x86 as of now). [avi: add svm to the mix, which was added to mainline during the 2.6.24-rc process] Signed-off-by: Amit Shah <amit.shah@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This is in addition to the current virtual cpu statistics. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
-
Committed by Avi Kivity
Measure the number of times we switch the fpu state. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This is a little more accurate (since it counts actual reloads, not potential reloads), and reverses the sense of the statistic to measure a bad event like most of the other stats (e.g. we want to minimize all counters). Signed-off-by: Avi Kivity <avi@qumranet.com>
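A tiny sketch of the refined statistic; the field and flag names are illustrative, not the exact vcpu layout:

    struct vcpu_stat { unsigned long fpu_reload; };

    struct vcpu {
        struct vcpu_stat stat;
        int guest_fpu_loaded;
    };

    static void load_guest_fpu(struct vcpu *v)
    {
        if (v->guest_fpu_loaded)
            return;               /* nothing reloaded, nothing counted */
        v->guest_fpu_loaded = 1;
        /* ...restore guest FPU registers here... */
        v->stat.fpu_reload++;     /* count only reloads that really happen */
    }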
-
Committed by Zhang Xiantao
Add two arch hooks to handle kvm_create_vm and kvm_destroy_vm. For now, just put io_bus init and destroy in common code. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
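A sketch of the resulting split under simplified types; the kvm_arch_* hook names follow the naming pattern used elsewhere in this series but are stubbed here so the example stands alone:

    struct kvm_io_bus { int dev_count; };
    struct kvm { struct kvm_io_bus mmio_bus; };

    /* Arch hooks: each architecture supplies its own real versions. */
    static struct kvm the_vm;
    static struct kvm *kvm_arch_create_vm(void)         { return &the_vm; }
    static void kvm_arch_destroy_vm(struct kvm *kvm)    { (void)kvm; }

    static void io_bus_init(struct kvm_io_bus *bus)     { bus->dev_count = 0; }
    static void io_bus_destroy(struct kvm_io_bus *bus)  { bus->dev_count = 0; }

    /* Common code keeps the io_bus handling and delegates the rest. */
    static struct kvm *create_vm(void)
    {
        struct kvm *kvm = kvm_arch_create_vm();
        if (kvm)
            io_bus_init(&kvm->mmio_bus);
        return kvm;
    }

    static void destroy_vm(struct kvm *kvm)
    {
        io_bus_destroy(&kvm->mmio_bus);
        kvm_arch_destroy_vm(kvm);
    }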
-
Committed by Zhang Xiantao
Since their callers are not declared with __init. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Joe Perches
Fix sparse warnings: "Using plain integer as NULL pointer". Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Move kvm_vcpu_ioctl_translate to arch code, since the mmu will be put under arch. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
We pass vcpu, vmx->fail, and vmx->launched to assembly code, but all three are fields within vmx. Consolidate by passing in only vmx and offsets for the rest. Signed-off-by: Avi Kivity <avi@qumranet.com>
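A sketch of the consolidation using offsetof(); the struct layout is a stand-in for the real vcpu_vmx, and the real code feeds such offsets to inline assembly as immediates:

    #include <stddef.h>

    struct kvm_vcpu { int dummy; };

    struct vcpu_vmx {
        struct kvm_vcpu vcpu;
        int             launched;
        int             fail;
    };

    /* One base pointer plus compile-time offsets reaches every field the
     * assembly stub needs, instead of passing three separate values. */
    static int read_fail_flag(struct vcpu_vmx *vmx)
    {
        char *base = (char *)vmx;
        return *(int *)(base + offsetof(struct vcpu_vmx, fail));
    }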
-
Committed by Zhang Xiantao
Turn the KVM_CHECK_EXTENSION code into a function, so that each arch can define its capabilities independently. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
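A sketch of the resulting per-arch function; the capability constants and their values here are illustrative placeholders, not the real uapi numbers:

    /* Illustrative capability numbers for the sketch only. */
    #define KVM_CAP_EXAMPLE_IRQCHIP      0
    #define KVM_CAP_EXAMPLE_USER_MEMORY  1

    /* Each architecture defines its own version of this function and
     * returns nonzero for the extensions it supports. */
    static int check_extension(long ext)
    {
        switch (ext) {
        case KVM_CAP_EXAMPLE_IRQCHIP:
        case KVM_CAP_EXAMPLE_USER_MEMORY:
            return 1;
        default:
            return 0;
        }
    }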
-
Committed by Sheng Yang
The current 'lods' and 'stos' handling depends on the incoming CR2 value rather than decoding the memory address from the registers. Signed-off-by: Sheng Yang <sheng.yang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
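A sketch of the corrected address computation, with a simplified register file; the segment bases and the address-size mask are the essential ingredients:

    #include <stdint.h>

    struct string_op_regs {
        uint64_t rsi, rdi;          /* source / destination index registers */
        uint64_t ds_base, es_base;
    };

    /* Mask the index register by the current address size (2, 4 or 8 bytes). */
    static uint64_t addr_mask(int ad_bytes)
    {
        return ad_bytes == 8 ? ~0ull : (1ull << (ad_bytes * 8)) - 1;
    }

    /* 'lods' reads from DS:(E/R)SI ... */
    static uint64_t lods_src_addr(const struct string_op_regs *r, int ad_bytes)
    {
        return r->ds_base + (r->rsi & addr_mask(ad_bytes));
    }

    /* ... and 'stos' writes to ES:(E/R)DI, neither of which involves CR2. */
    static uint64_t stos_dst_addr(const struct string_op_regs *r, int ad_bytes)
    {
        return r->es_base + (r->rdi & addr_mask(ad_bytes));
    }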
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Will be called once the arch module registers itself. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Add the following hooks:
    void decache_vcpus_on_cpu(int cpu);
    int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu);
    void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu);
    void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu);
    void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
    void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu);
    struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id);
    void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu);
    int kvm_arch_vcpu_reset(struct kvm_vcpu *vcpu);
    void kvm_arch_hardware_enable(void *garbage);
    void kvm_arch_hardware_disable(void *garbage);
    int kvm_arch_hardware_setup(void);
    void kvm_arch_hardware_unsetup(void);
    void kvm_arch_check_processor_compat(void *rtn);
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Move some includes to x86.c from kvm_main.c, since the related functions have been moved to x86.c. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
This changes kvm_write_guest_page()/kvm_read_guest_page() to use copy_to_user()/copy_from_user(); as a result we get better speed and better dirty bit tracking. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
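A user-space analogue of the new write path, with a simplified memory-slot structure; the memcpy stands in for copy_to_user() on the slot's userspace mapping, and the one-byte-per-page dirty bitmap is a simplification:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    struct mem_slot {
        uint64_t base_gfn;
        uint64_t npages;
        uint8_t *userspace_addr;   /* host mapping of the slot's memory */
        uint8_t *dirty_bitmap;     /* one byte per page, for the sketch */
    };

    static int write_guest_page(struct mem_slot *s, uint64_t gfn,
                                uint64_t offset, const void *data, size_t len)
    {
        if (gfn < s->base_gfn || gfn >= s->base_gfn + s->npages)
            return -1;
        if (offset + len > PAGE_SIZE)
            return -1;
        /* In the kernel this copy is copy_to_user(); no kmap is needed. */
        memcpy(s->userspace_addr + (gfn - s->base_gfn) * PAGE_SIZE + offset,
               data, len);
        s->dirty_bitmap[gfn - s->base_gfn] = 1;   /* dirty tracking for free */
        return 0;
    }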
-
Committed by Izik Eidus
Convert a guest frame number to the corresponding host virtual address. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
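A sketch of the translation over a single simplified memslot; BAD_HVA is a stand-in sentinel for "no mapping" (see the error-hva entry below):

    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define BAD_HVA   (~0ull)   /* stand-in for an address outside user space */

    struct mem_slot {
        uint64_t base_gfn;
        uint64_t npages;
        uint64_t userspace_addr;
    };

    /* Host virtual address = slot's userspace mapping + page offset of gfn. */
    static uint64_t gfn_to_hva(const struct mem_slot *slot, uint64_t gfn)
    {
        if (gfn < slot->base_gfn || gfn >= slot->base_gfn + slot->npages)
            return BAD_HVA;
        return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
    }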
-
Committed by Izik Eidus
Check for the "error hva", an address outside the user address space that signals a bad gfn. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
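The companion check, continuing the sketch above; the sentinel value is illustrative, while in the kernel the "error hva" is simply an address user space can never hold:

    #include <stdint.h>

    #define BAD_HVA (~0ull)   /* same stand-in sentinel as in the sketch above */

    static int is_error_hva(uint64_t hva)
    {
        return hva == BAD_HVA;
    }

Signalling failure through a sentinel address rather than a separate error code lets callers keep passing a single value around.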
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-