- 30 Jan 2008, 36 commits
-
-
By Carsten Otte
This patch moves the implementation of the following functions from kvm_main.c to x86.c: free_pio_guest_pages, vcpu_find_pio_dev, pio_copy_data, complete_pio, kernel_pio, pio_string_write, kvm_emulate_pio, kvm_emulate_pio_string. The function inject_gp, which was duplicated by yesterday's patch series, is removed from kvm_main.c because it is no longer needed there.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch moves the following functions from kvm_main.c to x86.c: emulator_read/write_std, vcpu_find_pervcpu_dev, vcpu_find_mmio_dev, emulator_read/write_emulated, emulator_write_phys, emulator_write_emulated_onepage, emulator_cmpxchg_emulated, get_segment_base, emulate_invlpg, emulate_clts, emulator_get/set_dr, kvm_report_emulation_failure, emulate_instruction. The following data type is moved to x86.c as well: struct x86_emulate_ops emulate_ops.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch moves the implementation of the functions kvm_get/set_msr, kvm_get/set_msr_common, and set_efer from kvm_main.c to x86.c. The definition of EFER_RESERVED_BITS is moved too.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Anthony Liguori
KVM's nopage handler calls gfn_to_page(), which acquires the mmap_sem when calling out to get_user_pages(). However, nopage handlers are already invoked with the mmap_sem held. Introduce a __gfn_to_page() for use by the nopage handler, which requires the lock to already be held. This was noticed by tglx.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
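A minimal sketch of the locked/unlocked split this introduces, with illustrative names and a pthread mutex standing in for mmap_sem (this is not the actual KVM code):

```c
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t map_sem = PTHREAD_MUTEX_INITIALIZER;

/* Double-underscore variant: caller must already hold map_sem
 * (e.g. a fault handler that is invoked with the lock held). */
static void *__lookup_page(unsigned long gfn)
{
    (void)gfn;          /* ...translate gfn to a host page here... */
    return NULL;
}

/* Plain variant: takes the lock itself, for callers that do not hold it. */
static void *lookup_page(unsigned long gfn)
{
    void *page;

    pthread_mutex_lock(&map_sem);
    page = __lookup_page(gfn);
    pthread_mutex_unlock(&map_sem);
    return page;
}
```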
-
By Sheng Yang
This patch builds on the CR8/TPR patch and enables the TPR shadow (FlexPriority) for 32-bit Windows. Since the TPR is accessed very frequently by 32-bit Windows, especially on SMP guests, we saw a significant performance gain with FlexPriority enabled.
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch moves the definitions of CR0_RESERVED_BITS, CR4_RESERVED_BITS, and CR8_RESERVED_BITS along with the following functions from kvm_main.c to x86.c: set_cr0(), set_cr3(), set_cr4(), set_cr8(), get_cr8(), lmsw(), load_pdptrs(). The static wrapper function inject_gp is duplicated in kvm_main.c and x86.c for now; the version in kvm_main.c should disappear once its last user is gone. The function load_pdptrs is no longer static and is declared in x86.h for the time being, until its last user is gone from kvm_main.c.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch moves the implementation of get_apic_base and set_apic_base from kvm_main.c to x86.c.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch moves the definition of segment_descriptor_64 for AMD64 and EM64T from kvm_main.c to segment_descriptor.h. It also adds a proper #ifndef...#define...#endif include guard around that header file. The implementation of segment_base is moved from kvm_main.c to x86.c.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch splits kvm_vm_ioctl into architecture-independent parts and x86-specific parts, which go to kvm_arch_vm_ioctl in x86.c. The patch is unchanged since the last submission. Common ioctls for all architectures are: KVM_CREATE_VCPU, KVM_GET_DIRTY_LOG, KVM_SET_USER_MEMORY_REGION. x86-specific ioctls are: KVM_SET_MEMORY_REGION, KVM_GET/SET_NR_MMU_PAGES, KVM_SET_MEMORY_ALIAS, KVM_CREATE_IRQCHIP, KVM_IRQ_LINE, KVM_GET/SET_IRQCHIP, KVM_SET_TSS_ADDR.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
Currently kvm has a wart in that it requires three extra pages for use as a TSS when emulating real mode on Intel. This patch moves the allocation internally, only requiring userspace to tell us where in the guest physical address space we can place the TSS.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
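For context, a minimal userspace sketch of how the resulting KVM_SET_TSS_ADDR interface is used; the address is an arbitrary example below 4 GB that the guest does not otherwise use, and error checking is omitted:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm_fd = open("/dev/kvm", O_RDWR);
    int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, 0);

    /* Tell KVM where in guest physical memory it may place the three
     * pages it uses as a TSS for real-mode emulation on Intel. */
    ioctl(vm_fd, KVM_SET_TSS_ADDR, 0xfffbd000UL);
    return 0;
}
```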
-
By Izik Eidus
Reserve a few memory slots for kernel-internal use. This is useful when you have to register a memory region and want to be sure it was not registered from userspace, and when you want to register a memory region that won't be visible to userspace.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
Remove the kvm memory slot allocation mechanism from the ioctl handler and move it into an exported function.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
kvm_vm_ioctl_set_memory_region() is able to remove memory in addition to adding it. Therefore, when using kernel swapping support for old userspaces, we need to munmap the memory if the user requests its removal.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
This will help trap accesses to guest memory in atomic context.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Split the guest reset code out of vmx_vcpu_setup(). Besides being cleaner, this moves the realmode TSS setup (which can sleep) outside vmx_vcpu_setup() (which is executed with preemption disabled).
[izik: remove unused variable]
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
First step toward splitting kvm_vcpu. Currently, we just use a macro to define the common kvm_vcpu fields for all archs, and each arch needs to define its own kvm_vcpu struct.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
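A rough sketch of the macro pattern described here; the macro and field names are illustrative, not the exact ones used in the kernel:

```c
/* Common fields every architecture's vcpu needs, declared exactly once. */
#define KVM_VCPU_COMMON_FIELDS   \
    int vcpu_id;                 \
    unsigned long requests;      \
    int guest_mode;

/* Each arch embeds the common block at the top of its own struct
 * and appends its arch-specific state after it. */
struct kvm_vcpu {
    KVM_VCPU_COMMON_FIELDS
    unsigned long cr0;           /* x86-specific example field */
    unsigned long cr3;
};
```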
-
By Anthony Liguori
Allocate a userspace buffer for older userspaces. Also eliminate the phys_mem buffer. The memset() in kvmctl really hurts initial memory usage, but swapping works even with old userspaces. A side effect is that the maximum guest size is reduced for older userspaces on i386.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
This allows guest memory to be swapped. Pages which are currently mapped via shadow page tables are pinned into memory, but all other pages can be freely swapped. The patch makes gfn_to_page() elevate the page's reference count and introduces kvm_release_page(), which pairs with it.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
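A kernel-style sketch of how a caller is expected to pair the two after this change; the helper is illustrative and not self-contained kernel code:

```c
/* Illustrative caller only: gfn_to_page() now takes a reference on the
 * returned struct page, so every successful call must be balanced by
 * kvm_release_page(). */
static void copy_from_guest_frame(struct kvm *kvm, gfn_t gfn,
                                  void *dst, int len)
{
    struct page *page = gfn_to_page(kvm, gfn);   /* pins the page */
    void *va = kmap(page);

    memcpy(dst, va, len);
    kunmap(page);
    kvm_release_page(page);                      /* drops the reference */
}
```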
-
By Izik Eidus
In case the page is not present in the guest memory map, return a dummy page the guest can scribble on. This simplifies error checking in its users.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
We now have a new namespace, KVM_REQ_*, for bits in vcpu->requests.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Besides the obvious benefit of making the code more common, this prevents a livelock with the next patch, which moves interrupt injection out of the critical section.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Carsten Otte
This patch splits kvm_vcpu_ioctl into architecture-independent parts and x86-specific parts, which go to kvm_arch_vcpu_ioctl in x86.c. Common ioctls for all architectures are: KVM_RUN, KVM_GET/SET_(S-)REGS, KVM_TRANSLATE, KVM_INTERRUPT, KVM_DEBUG_GUEST, KVM_SET_SIGNAL_MASK, KVM_GET/SET_FPU. Note that some PPC chips don't have an FPU, so we might need an #ifdef around KVM_GET/SET_FPU one day. x86-specific ioctls are: KVM_GET/SET_LAPIC, KVM_SET_CPUID, KVM_GET/SET_MSRS. An interesting aspect is vcpu_load/vcpu_put: we now have a common vcpu_load/put which does the preemption handling, and an architecture-specific kvm_arch_vcpu_load/put. In the x86 case, the latter calls the vmx/svm function defined in kvm_x86_ops.
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
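A simplified sketch of the layering described for vcpu_load; the real functions handle more state, and the bodies below are illustrative:

```c
/* Common code: one vcpu_load() for all architectures, doing the mutex
 * and preemption bookkeeping, then deferring to the arch hook. */
void vcpu_load(struct kvm_vcpu *vcpu)
{
    int cpu;

    mutex_lock(&vcpu->mutex);
    cpu = get_cpu();                   /* disable preemption, pick a cpu */
    kvm_arch_vcpu_load(vcpu, cpu);     /* arch-specific part */
    put_cpu();
}

/* x86's arch hook simply forwards to the vmx/svm callback. */
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
    kvm_x86_ops->vcpu_load(vcpu, cpu);
}
```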
-
By Carsten Otte
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
Instead of having the kernel allocate memory for the guest, let userspace allocate it and pass the address to the kernel. This is required for s390 support, but also enables features like memory sharing and hugetlbfs-backed memory.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
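A minimal userspace sketch of the resulting flow with KVM_SET_USER_MEMORY_REGION; the size, slot number, and guest physical address are illustrative, and error checking is omitted:

```c
#include <stddef.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm_fd = open("/dev/kvm", O_RDWR);
    int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, 0);

    /* Userspace allocates the guest memory itself... */
    size_t size = 16 << 20;                    /* 16 MB, example size */
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* ...and only passes its address to the kernel. */
    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0,
        .memory_size     = size,
        .userspace_addr  = (unsigned long)mem,
    };
    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    return 0;
}
```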
-
By Mike Day
Signed-off-by: Mike D. Day <ncmike@ncultra.org>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Rusty Russell
Move kvm_create_lapic() into kvm_vcpu_init() rather than having svm and vmx do it, and make it return the actual error rather than a fairly random -ENOMEM. This also solves the problem that neither svm.c nor vmx.c actually handles the error path properly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Rusty Russell
Instead of the asymmetry of kvm_free_apic, implement kvm_free_lapic(). And guess what? I found a minor bug: we don't need to hrtimer_cancel() from kvm_main.c, because we do that in kvm_free_apic(). Also:
1) kvm_vcpu_uninit should be the reverse order of kvm_vcpu_init.
2) Don't set apic->regs_page to zero before freeing apic.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
The user is now able to set how many mmu pages will be allocated to the guest.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
When kvm uses user-allocated pages for the guest in the future, we won't be able to use page->private for rmap, since page->private is reserved for the filesystem. So we move the rmap base pointers to the memory slot. A side effect of this is that we need to store the gfn of each gpte in the shadow pages, since the memory slot is addressed by gfn instead of hfn like struct page.
Signed-off-by: Izik Eidus <izik@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
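A sketch of the data-structure change; the type and field names are illustrative, simplified from the real memory-slot struct:

```c
/* Before: the reverse-mapping anchor lived in struct page (page->private).
 * After: each memory slot carries one rmap entry per guest frame, indexed
 * by the frame's offset into the slot, so user-allocated pages need no
 * extra fields of their own. */
struct kvm_memory_slot {
    unsigned long base_gfn;      /* first guest frame number of the slot */
    unsigned long npages;
    unsigned long *rmap;         /* npages entries, one per guest frame  */
};

static unsigned long *gfn_to_rmap(struct kvm_memory_slot *slot,
                                  unsigned long gfn)
{
    return &slot->rmap[gfn - slot->base_gfn];
}
```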
-
By Laurent Vivier
The only valid case is a protected page access; the other cases are errors.
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Ryan Harper
This patch removes the fault injected when the guest attempts to set reserved bits in cr3. X86 hardware doesn't generate a fault when setting reserved bits. The result of this patch is that vmware-server, running within a kvm guest, boots and runs memtest from an ISO.
Signed-off-by: Ryan Harper <ryanh@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
There are two classes of page faults trapped by kvm:
- host page faults, where the fault is needed to allow kvm to install the shadow pte or update the guest accessed and dirty bits
- guest page faults, where the guest has faulted and kvm simply injects the fault back into the guest to handle
The second class, guest page faults, is pure overhead. We can eliminate some of it on vmx using the following evil trick:
- when we set up a shadow page table entry, if the corresponding guest pte is not present, set up the shadow pte as not present
- if the guest pte _is_ present, mark the shadow pte as present but also set one of the reserved bits in the shadow pte
- tell the vmx hardware not to trap faults which have the present bit clear
With this, normal page-not-present faults go directly to the guest, bypassing kvm entirely. Unfortunately, this trick only works on Intel hardware, as AMD lacks a way to discriminate among page faults based on error code. It is also a little risky since it uses reserved bits which might become unreserved in the future, so a module parameter is provided to disable it.
Signed-off-by: Avi Kivity <avi@qumranet.com>
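A simplified sketch of the per-pte decision described above; the reserved-bit choice and helper name are illustrative, not the kernel's:

```c
#include <stdint.h>

#define PT_PRESENT            (1ULL << 0)
/* A bit that is reserved in a hardware pte; setting it lets a fault on a
 * present-but-not-yet-shadowed translation still trap to kvm. */
#define PT_SHADOW_RESERVED    (1ULL << 62)   /* illustrative choice */

/* Encode a shadow pte for a translation that has no shadow mapping yet. */
static uint64_t encode_missing_shadow_pte(uint64_t guest_pte)
{
    if (!(guest_pte & PT_PRESENT))
        return 0;   /* not present: the fault goes straight to the guest */
    /* Present in the guest but not yet shadowed: present + reserved bit,
     * so the resulting fault has P=1 and is still delivered to kvm. */
    return PT_PRESENT | PT_SHADOW_RESERVED;
}
```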
-
By Laurent Vivier
Move emulate_ctxt into kvm_vcpu to keep the emulation context when we exit from the kvm module. Call x86_decode_insn() only when needed. Modify x86_emulate_insn() to not modify the context if it must be re-entered.
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Laurent Vivier
emulate_instruction() now calls x86_decode_insn() and x86_emulate_insn(). x86_emulate_insn() is x86_emulate_memop() without the decoding part.
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Anthony Liguori
This patch refactors the current hypercall infrastructure to better support live migration and SMP. It eliminates the hypercall page by trapping the UD exception that occurs when the wrong hypercall instruction for the underlying architecture is used, and lazily replacing it with the right one. A fallout of this patch is that unhandled hypercalls no longer trap to userspace. There is very little reason, though, to use a hypercall to communicate with userspace, as PIO or MMIO can be used instead. There is no code in tree that uses userspace hypercalls.
[avi: fix #ud injection on vmx]
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
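A sketch of the lazy patching idea; this is simplified (the real code runs from the #UD path and writes the bytes through the guest-memory accessors), and the helper is illustrative:

```c
#include <stdint.h>
#include <string.h>

/* 3-byte hypercall encodings: VMCALL is the Intel instruction,
 * VMMCALL the AMD one. */
static const uint8_t vmcall_insn[3]  = { 0x0f, 0x01, 0xc1 };
static const uint8_t vmmcall_insn[3] = { 0x0f, 0x01, 0xd9 };

/* Called when the guest's hypercall instruction raised #UD because it was
 * built for the other vendor: overwrite it with the instruction the host
 * CPU actually supports, then let the guest retry. */
static void patch_hypercall(uint8_t *guest_insn, int host_is_intel)
{
    memcpy(guest_insn, host_is_intel ? vmcall_insn : vmmcall_insn, 3);
}
```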
-
- 25 Jan 2008, 1 commit
-
-
By Kay Sievers
All kobjects now require a dynamically allocated name. We no longer need to keep track of whether the name was statically assigned; we can just unconditionally free all kobject names on cleanup.
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 27 Nov 2007, 1 commit
-
-
By Amit Shah
The clts code didn't use set_cr0 properly, so our lazy FPU processing wasn't being done by the clts instruction at all. (This code isn't called on Intel, as the hardware does the decode for us.)
Signed-off-by: Amit Shah <amit.shah@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 22 Oct 2007, 2 commits
-
-
By Laurent Vivier
In kvm_flush_remote_tlbs(), replace a loop using smp_call_function_single() with a single call to smp_call_function_mask() (which is new for x86_64).
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Laurent Vivier
We need to make sure that the timer interrupt happens before we clear PF_VCPU, so the accounting code actually sees guest mode.
http://lkml.org/lkml/2007/10/15/114
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-