- 24 October 2010, 9 commits
-
-
Submitted by Avi Kivity
vmx_complete_interrupts() does too much; split it up:
- vmx_vcpu_run() gets the "cache important vmcs fields" part
- a new vmx_complete_atomic_exit() gets the parts that must be done atomically
- a new vmx_recover_nmi_blocking() does what its name says
- vmx_complete_interrupts() retains the event injection recovery code
This helps reduce the work done in atomic context; a sketch of the resulting structure follows.
Signed-off-by: Avi Kivity <avi@redhat.com>
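A minimal sketch of the split, assuming the helper names above; which fields get cached and where the calls land is illustrative, not the exact upstream code:

	/* Illustrative sketch of the restructured exit path. */
	static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* ... guest entry/exit ... */

		/* cache important vmcs fields while still in atomic context */
		vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
		vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);

		vmx_complete_atomic_exit(vmx);	/* work that must stay atomic */
		vmx_recover_nmi_blocking(vmx);	/* re-block NMIs if needed */
		vmx_complete_interrupts(vmx);	/* event injection recovery */
	}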
-
Submitted by Avi Kivity
Instead of blindly attempting to inject an event before each guest entry, first check vcpu->requests for a pending event. Sites that can trigger event injection are modified to set KVM_REQ_EVENT:
- interrupt, nmi window opening
- ppr updates
- i8259 output changes
- local apic irr changes
- rflags updates
- gif flag set
- event set on exit
This improves non-injecting entry performance and sets the stage for non-atomic injection; see the sketch after this list.
Signed-off-by: Avi Kivity <avi@redhat.com>
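A minimal sketch of the entry-path check, assuming the kvm_check_request()/kvm_make_request() helpers and the inject_pending_event() name used in x86.c; the surrounding vcpu_enter_guest() logic is elided:

	/* Sketch: only attempt injection when someone raised KVM_REQ_EVENT.
	 * Each injection site does: kvm_make_request(KVM_REQ_EVENT, vcpu); */
	static int vcpu_enter_guest_sketch(struct kvm_vcpu *vcpu)
	{
		if (kvm_check_request(KVM_REQ_EVENT, vcpu))
			inject_pending_event(vcpu);

		/* ... enter guest ... */
		return 0;
	}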
-
Submitted by Joerg Roedel
This function needs to be able to load the pdptrs from any mmu context currently in use, so change it to take a kvm_mmu parameter. As a side effect, this patch also moves the cached pdptrs from vcpu_arch into the kvm_mmu struct.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
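The shape of the change, sketched; the exact prototypes are assumed from the description:

	/* before: always operated on the vcpu's current walk context */
	int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);

	/* after: the caller names the mmu context explicitly, and the
	 * cached pdptrs live in struct kvm_mmu rather than vcpu->arch */
	int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
			unsigned long cr3);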
-
Submitted by Joerg Roedel
This patch introduces a special set_tdp_cr3 function pointer in kvm_x86_ops which is only used for tdp-enabled mmu contexts. This allows some hacks to be removed from the svm code.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Zachary Amsden
Move the TSC control logic from the vendor backends into x86.c by adding adjust_tsc_offset to x86 ops. Now all TSC decisions can be made in one place.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
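A sketch of the new op, assuming the signature implied by the description (a signed delta applied to the backend's offset); the svm body shows one backend implementation:

	/* Sketch: vendor-neutral hook; each backend applies the delta to its
	 * hardware-specific TSC-offset field (VMCS field or VMCB field). */
	struct kvm_x86_ops {
		/* ... */
		void (*adjust_tsc_offset)(struct kvm_vcpu *vcpu, s64 adjustment);
	};

	static void svm_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		svm->vmcb->control.tsc_offset += adjustment;
	}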
-
Submitted by Zachary Amsden
Also, ensure that the storing of the offset and the reading of the TSC are never preempted, by taking a spinlock. While the lock is overkill now, it is useful later in this patch series.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
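A minimal sketch of the locked write path; the lock name and the write_tsc_offset hook are assumed from the description:

	/* Sketch: keep the TSC read and the offset store atomic
	 * with respect to each other. */
	void kvm_write_tsc(struct kvm_vcpu *vcpu, u64 data)
	{
		struct kvm *kvm = vcpu->kvm;
		unsigned long flags;
		u64 offset;

		spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
		offset = data - native_read_tsc();	/* must not be preempted here */
		kvm_x86_ops->write_tsc_offset(vcpu, offset);
		spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
	}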
-
Submitted by Zachary Amsden
Change svm / vmx to be the same internally and write the TSC offset instead of the bare TSC in helper functions. Isolated as a single patch to contain the code movement.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Zachary Amsden
This is used only by the VMX code, and is not done properly; if the TSC is indeed backwards, it is out of sync, and will need proper handling in the logic at each and every CPU change. For now, drop this test during init as misguided.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Avi Kivity
Now that we have the host gdt conveniently stored in a variable, use it instead of querying the cpu.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- 20 October 2010, 1 commit
-
-
Submitted by Avi Kivity
kvm reloads the host's fs and gs blindly, but the underlying segment descriptors may be invalid because the user modified the ldt after they were loaded. Fix by using the safe accessors (loadsegment() and load_gs_index()) instead of home-grown unsafe versions. This is CVE-2010-3698.
KVM-Stable-Tag.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
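A sketch of the fix, assuming the host selectors and reload flags are stashed in vmx->host_state; loadsegment() and load_gs_index() are the safe accessors named above:

	/* Unsafe (faults in kernel context if the ldt changed under us):
	 *	asm("mov %0, %%gs" : : "rm"(gs_sel));
	 * Safe: the accessors tolerate a descriptor gone stale. */
	static void vmx_reload_host_segments_sketch(struct vcpu_vmx *vmx)
	{
		if (vmx->host_state.fs_reload_needed)
			loadsegment(fs, vmx->host_state.fs_sel);
	#ifdef CONFIG_X86_64
		if (vmx->host_state.gs_ldt_reload_needed)
			load_gs_index(vmx->host_state.gs_sel);
	#endif
	}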
-
- 02 August 2010, 2 commits
-
-
Submitted by Avi Kivity
vmx does not restore GDT.LIMIT to the host value; instead it sets it to 64KB. This means host userspace can learn a few bits of host memory. Fix by reloading GDTR when we load the other host state.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
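A sketch, assuming a per-cpu host_gdt descriptor saved when hardware virtualization is enabled:

	/* Sketch: hardware truncates GDT.LIMIT on vmexit, so restore the
	 * saved host descriptor alongside the rest of the host state. */
	static DEFINE_PER_CPU(struct desc_ptr, host_gdt);

	static void __vmx_load_host_state_sketch(struct vcpu_vmx *vmx)
	{
		/* ... reload segments and MSRs ... */
		load_gdt(&__get_cpu_var(host_gdt));	/* restores GDT.LIMIT too */
	}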
-
Submitted by Xiao Guangrong
Commit 341d9b535b6c simplified the reload logic on guest entry; it avoids an unnecessary sync-root when KVM_REQ_MMU_RELOAD and KVM_REQ_MMU_SYNC are both set. But it causes an issue: when we handle KVM_REQ_TLB_FLUSH, the root may be invalid. This was triggered during my test:
Kernel BUG at ffffffffa00212b8 [verbose debug info unavailable]
......
Fixed by returning directly if the root is not ready.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- 01 August 2010, 22 commits
-
-
Submitted by Sheng Yang
Some guest device drivers may leverage "Non-Snoop" I/O and explicitly issue WBINVD or CLFLUSH to a RAM region. Since migration may occur before the WBINVD or CLFLUSH completes, we need to maintain data consistency by either:
1. flushing the cache (wbinvd) when the guest is scheduled out, if there is no wbinvd exit, or
2. executing wbinvd on all dirty physical CPUs when the guest's wbinvd exits (sketched below).
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
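A sketch of option 2, assuming a per-vcpu wbinvd_dirty_mask cpumask of CPUs the guest may have dirtied; the IPI helper is illustrative:

	/* Sketch: broadcast wbinvd to exactly the CPUs whose caches the
	 * guest may have dirtied, then flush the local CPU. */
	static void wbinvd_ipi(void *garbage)
	{
		wbinvd();
	}

	static int kvm_emulate_wbinvd_sketch(struct kvm_vcpu *vcpu)
	{
		if (kvm_x86_ops->has_wbinvd_exit()) {
			smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
					       wbinvd_ipi, NULL, 1);
			cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
		}
		wbinvd();
		return X86EMUL_CONTINUE;
	}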
-
Submitted by Avi Kivity
Makes it a little more readable and hackable.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we do exactly one or the other.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
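A minimal sketch of the combined decision point; the helper name is hypothetical, the two calls are the KVM functions the message refers to:

	/* Sketch: exactly one of "inject #GP" or "advance rip" happens. */
	static void complete_insn_gp_sketch(struct kvm_vcpu *vcpu, int err)
	{
		if (err)
			kvm_inject_gp(vcpu, 0);		/* rip stays on the faulting insn */
		else
			skip_emulated_instruction(vcpu);	/* rip moves past it */
	}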
-
Submitted by Avi Kivity
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we do exactly one or the other.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Avi Kivity
On Intel, we call skip_emulated_instruction() even if we injected a #GP, resulting in the #GP pointing at the wrong address. Fix by injecting the exception and skipping the instruction at the same place, so we do exactly one or the other.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Dexuan Cui
This patch enables the guest to use the XSAVE/XRSTOR instructions. We assume host_xcr0 has all the bits the OS supports set. And we load xcr0 the same way we handle the fpu - as late as we can.
Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
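A sketch of the lazy xcr0 switch, assuming a guest_xcr0_loaded flag on the vcpu; the exact placement in the entry path is an assumption:

	/* Sketch: swap in the guest's xcr0 only when the guest can actually
	 * use XSAVE, and only once per loaded interval, mirroring lazy fpu. */
	static void kvm_load_guest_xcr0_sketch(struct kvm_vcpu *vcpu)
	{
		if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
		    !vcpu->guest_xcr0_loaded) {
			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
			vcpu->guest_xcr0_loaded = 1;
		}
	}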
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Xiao Guangrong
fix:

[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
include/linux/kvm_host.h:258 invoked rcu_dereference_check() without protection!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 1
1 lock held by qemu-system-x86/3796:
 #0:  (&vcpu->mutex){+.+.+.}, at: [<ffffffffa0217fd8>] vcpu_load+0x1a/0x66 [kvm]

stack backtrace:
Pid: 3796, comm: qemu-system-x86 Not tainted 2.6.34 #25
Call Trace:
 [<ffffffff81070ed1>] lockdep_rcu_dereference+0x9d/0xa5
 [<ffffffffa0214fdf>] gfn_to_memslot_unaliased+0x65/0xa0 [kvm]
 [<ffffffffa0216139>] gfn_to_hva+0x22/0x4c [kvm]
 [<ffffffffa0216217>] kvm_write_guest_page+0x2a/0x7f [kvm]
 [<ffffffffa0216286>] kvm_clear_guest_page+0x1a/0x1c [kvm]
 [<ffffffffa0278239>] init_rmode+0x3b/0x180 [kvm_intel]
 [<ffffffffa02786ce>] vmx_set_cr0+0x350/0x4d3 [kvm_intel]
 [<ffffffffa02274ff>] kvm_arch_vcpu_ioctl_set_sregs+0x122/0x31a [kvm]
 [<ffffffffa021859c>] kvm_vcpu_ioctl+0x578/0xa3d [kvm]
 [<ffffffff8106624c>] ? cpu_clock+0x2d/0x40
 [<ffffffff810f7d86>] ? fget_light+0x244/0x28e
 [<ffffffff810709b9>] ? trace_hardirqs_off_caller+0x1f/0x10e
 [<ffffffff8110501b>] vfs_ioctl+0x32/0xa6
 [<ffffffff81105597>] do_vfs_ioctl+0x47f/0x4b8
 [<ffffffff813ae654>] ? sub_preempt_count+0xa3/0xb7
 [<ffffffff810f7da8>] ? fget_light+0x266/0x28e
 [<ffffffff810f7c53>] ? fget_light+0x111/0x28e
 [<ffffffff81105617>] sys_ioctl+0x47/0x6a
 [<ffffffff81002c1b>] system_call_fastpath+0x16/0x1b
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
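The fix pattern, sketched: hold the kvm->srcu read lock around the path that dereferences memslots; the wrapper name is hypothetical, the call site matches the trace (init_rmode() reached from vmx_set_cr0()):

	/* Sketch: memslots are RCU-protected; hold srcu across init_rmode(),
	 * which writes guest pages and therefore looks up memslots. */
	static int init_rmode_locked_sketch(struct kvm *kvm)
	{
		int idx, ret;

		idx = srcu_read_lock(&kvm->srcu);
		ret = init_rmode(kvm);
		srcu_read_unlock(&kvm->srcu, idx);
		return ret;
	}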
-
Submitted by Gui Jianfeng
The name "vpid_sync_vcpu_all" isn't appropriate since it affects just a single vpid, so rename it to vpid_sync_vcpu_single().
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Gui Jianfeng
Add all-context INVVPID type support.
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Gui Jianfeng
According to the SDM, we need to check whether the single-context INVVPID type is supported before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Reviewed-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
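A sketch of the capability-gated dispatch across the two INVVPID types from the last three patches; the dispatcher name is hypothetical, the helpers follow the vpid_sync_* convention introduced above:

	/* Sketch: prefer single-context invalidation when the CPU reports
	 * support for it, otherwise fall back to the all-context variant. */
	static inline void vpid_sync_context_sketch(struct vcpu_vmx *vmx)
	{
		if (cpu_has_vmx_invvpid_single())
			vpid_sync_vcpu_single(vmx);	/* flush only vmx->vpid */
		else
			vpid_sync_vcpu_global();	/* flush all vpids */
	}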
-
Submitted by Sheng Yang
We only support 4-level EPT page tables for now.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Mohammed Gamal
The vmexit handler returns KVM_EXIT_UNKNOWN since there is no handler for vmentry failures. Intercept vmentry failures and return KVM_EXIT_FAIL_ENTRY to userspace instead.
Signed-off-by: Mohammed Gamal <m.gamal005@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
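A sketch of the intercept, assuming the vmx->fail flag and the standard kvm_run fail_entry layout:

	/* Sketch: surface a hardware vmentry failure to userspace instead
	 * of falling through to KVM_EXIT_UNKNOWN. */
	static int vmx_handle_exit_sketch(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		if (unlikely(vmx->fail)) {
			vcpu->run->exit_reason = KVM_EXIT_FAIL_ENTRY;
			vcpu->run->fail_entry.hardware_entry_failure_reason
				= vmcs_read32(VM_INSTRUCTION_ERROR);
			return 0;	/* exit to userspace */
		}
		/* ... normal exit-reason dispatch ... */
		return 1;
	}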
-
Submitted by Jan Kiszka
Memory allocation may fail. Propagate such errors.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Dongxiao Xu
The SDM suggests VMXON should be called before VMPTRLD, and VMXOFF should be called after VMCLEAR. Therefore, in the vmm-coexistence case, we should first call VMXON before any VMCS operation, and then call VMXOFF after the operation is done.
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Dongxiao Xu
Originally, VMCLEAR/VMPTRLD were called on vcpu migration. To support hosted VMM coexistence, execute VMCLEAR on vcpu schedule-out and VMPTRLD on vcpu schedule-in. This also eliminates the IPI when doing VMCLEAR.
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Dongxiao Xu
Do some preparations for vmm coexistence support.
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Dongxiao Xu
Define vmcs_load() and kvm_cpu_vmxon() to avoid direct calls into asm code. Also move the VMXE bit operation out of kvm_cpu_vmxoff().
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Gleb Natapov
Do not kill the VM when instruction emulation fails. Inject a #UD and report the failure to userspace instead. Userspace may choose to re-enter the guest if the vcpu is in userspace (cpl == 3), in which case the guest OS will kill the offending process and continue running.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
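A sketch of the failure path, assuming the kvm_run internal-error fields shown; the helper name is hypothetical:

	/* Sketch: queue #UD for the guest and tell userspace what happened,
	 * rather than killing the whole VM. */
	static int handle_emulation_failure_sketch(struct kvm_vcpu *vcpu)
	{
		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
		vcpu->run->internal.ndata = 0;
		kvm_queue_exception(vcpu, UD_VECTOR);
		return EMULATE_FAIL;
	}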
-
Submitted by Avi Kivity
cr0.ts may change between entries, so we copy cr0 to HOST_CR0 before each entry. That is slow, so instead set HOST_CR0 to have TS set unconditionally (which is a safe value), and issue a clts() just before exiting vcpu context if the task indeed owns the fpu. Saves ~50 cycles/exit.
Signed-off-by: Avi Kivity <avi@redhat.com>
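Sketched, with the VMCS write done once at setup rather than per entry; the fpu-ownership test assumes the lazy-fpu convention of the time (TS_USEDFPU in thread_info):

	/* Sketch: HOST_CR0 always carries TS (a safe value), written once. */
	static void vmx_setup_host_cr0_sketch(void)
	{
		vmcs_writel(HOST_CR0, read_cr0() | X86_CR0_TS);
	}

	/* On leaving vcpu context, clear TS only if this task owns the fpu. */
	static void vmx_put_sketch(void)
	{
		if (current_thread_info()->status & TS_USEDFPU)
			clts();
	}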
-
Submitted by Avi Kivity
!! is not needed due to the cast to bool.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 22 July 2010, 1 commit
-
-
Submitted by Brian Gerst
MSR_K6_EFER is unused, and MSR_K6_STAR is redundant with MSR_STAR.
Signed-off-by: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1279371808-24804-1-git-send-email-brgerst@gmail.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 06 July 2010, 1 commit
-
-
Submitted by Avi Kivity
enter_lmode() and exit_lmode() modify the guest's EFER.LMA before calling vmx_set_efer(). However, the latter function depends on the value of EFER.LMA to determine whether MSR_KERNEL_GS_BASE needs reloading, via vmx_load_host_state(). With EFER.LMA changing under its feet, it made the wrong choice and corrupted userspace's %gs. This causes 32-on-64 host userspace to fault. Fix by not touching EFER.LMA directly; instead ask vmx_set_efer() to change it.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 19 May 2010, 4 commits
-
-
Submitted by Shane Wang
Per the documentation for the feature control MSR: Bit 1 enables VMXON in SMX operation; if the bit is clear, executing VMXON in SMX operation causes a general-protection exception. Bit 2 enables VMXON outside SMX operation; if the bit is clear, executing VMXON outside SMX operation causes a general-protection exception. This patch enables this SMX-aware check for VMXON in KVM.
Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
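A sketch of the check, assuming the feature-control bit names shown and that tboot_enabled() distinguishes SMX operation; the function name mirrors KVM's disabled-by-bios convention but is illustrative:

	/* Sketch: when the MSR is locked, the bit matching our SMX state
	 * must be set; an unlocked MSR can still be enabled and locked. */
	static int vmx_disabled_by_bios_sketch(void)
	{
		u64 msr;

		rdmsrl(MSR_IA32_FEATURE_CONTROL, msr);
		if (msr & FEATURE_CONTROL_LOCKED) {
			if (tboot_enabled())	/* in SMX operation: bit 1 */
				return !(msr & FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX);
			/* outside SMX operation: bit 2 */
			return !(msr & FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX);
		}
		return 0;	/* unlocked: we may set and lock it ourselves */
	}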
-
Submitted by Avi Kivity
When EPT is enabled, we cannot emulate EFER.NX=0 through the shadow page tables. This causes accesses through ptes with bit 63 set to succeed instead of failing a reserved-bit check.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
Some guest msr values cannot be used on the host (for example, EFER.NX=0), so we need to switch them atomically during guest entry or exit. Add a facility to program the vmx msr autoload registers accordingly.
Signed-off-by: Avi Kivity <avi@redhat.com>
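A sketch of the facility, assuming a small fixed-size autoload array mirrored for guest and host values; the entry/exit MSR-load counts tell the CPU how many slots to process:

	/* Sketch: program the VMX entry/exit MSR-load lists so the hardware
	 * itself swaps the MSR atomically at guest entry and exit. */
	static void add_atomic_switch_msr_sketch(struct vcpu_vmx *vmx,
						 unsigned msr,
						 u64 guest_val, u64 host_val)
	{
		struct msr_autoload *m = &vmx->msr_autoload;
		unsigned i;

		for (i = 0; i < m->nr; ++i)	/* reuse an existing slot */
			if (m->guest[i].index == msr)
				break;

		if (i == m->nr) {
			++m->nr;
			vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
			vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
		}
		m->guest[i].index = msr;
		m->guest[i].value = guest_val;
		m->host[i].index = msr;
		m->host[i].value = host_val;
	}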
-
Submitted by Avi Kivity
vmx and svm vcpus have different contents and therefore may have different alignment requirements. Let each specify its required alignment.
Signed-off-by: Avi Kivity <avi@redhat.com>
-