- 08 November 2014, 6 commits
-
-
Committed by Owen Hofmann

kvm updates the version number for the guest paravirt clock structure by incrementing the version of its private copy. It does not read the guest version, so it will write version = 2 in the first update for every new VM, including after restoring a saved state. If the guest state is saved while the guest is reading the clock, the guest could read and accept struct fields and guest TSC from two different updates. This change increments the guest version and writes it back.

Signed-off-by: Owen Hofmann <osh@google.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
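A minimal sketch of the version protocol this fix relies on; the abbreviated structure and helper name are illustrative, not the actual KVM code:

```c
#include <stdint.h>

/* Abbreviated pvclock layout: version is odd while an update is in
 * progress and even when the fields are consistent. */
struct pvclock_sketch {
    volatile uint32_t version;
    volatile uint64_t tsc_timestamp;
    volatile uint64_t system_time;
};

/* Bump the version the guest currently sees instead of a private counter
 * that restarts at 0 for every new or restored VM. */
static void pvclock_update(struct pvclock_sketch *guest,
                           uint64_t tsc, uint64_t ns)
{
    uint32_t v = guest->version;

    guest->version = v + 1;        /* odd: update in progress */
    /* a write barrier belongs here in real code */
    guest->tsc_timestamp = tsc;
    guest->system_time = ns;
    /* write barrier */
    guest->version = v + 2;        /* even again: fields consistent */
}
```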
-
Committed by Nadav Amit

Commit 3b32004a ("KVM: x86: movnti minimum op size of 32-bit is not kept") did not fully fix the minimum operand size of MOVNTI emulation. MOVNTI may still be mistakenly performed using a 16-bit operand size. This patch adds a No16 flag to mark instructions that do not support a 16-bit operand size.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Marcelo Tosatti
When the guest writes to the TSC, the masterclock TSC copy must be updated as well along with the TSC_OFFSET update, otherwise a negative tsc_timestamp is calculated at kvm_guest_time_update. Once "if (!vcpus_matched && ka->use_master_clock)" is simplified to "if (ka->use_master_clock)", the corresponding "if (!ka->use_master_clock)" becomes redundant, so remove the do_request boolean and collapse everything into a single condition.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

Now that KVM injects #UD on an "unhandlable" error, it makes better sense to return such an error on sysenter instead of directly injecting #UD into the guest. This makes it easier to track the unhandlable cases that the emulator does not support.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

APIC base relocation is unsupported by KVM. If anyone uses it, the least we can do is report a warning in the hypervisor. Note that KVM-unit-tests uses this feature for some reason, so running the tests triggers the warning.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

Commit 7fe864dc ("KVM: x86: Mark VEX-prefix instructions emulation as unimplemented", 2014-06-02) marked VEX instructions as such in protected mode. VEX-prefix instructions are not supported in real mode and VM86 either, but they should cause #UD instead of being decoded as LES/LDS. Fix this behaviour to be consistent with real hardware.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
[Check for mod == 3, rather than 2 or 3. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 07 November 2014, 18 commits
-
-
Committed by Nadav Amit

Task-switch emulation checks the privilege level prior to performing the task switch. This check is incorrect in the case of task gates, in which the tss.dpl is ignored, and can cause superfluous exceptions. Moreover, this check is unnecessary, since the CPU checks the privilege levels prior to exiting. Intel SDM 25.4.2 says "If CALL or JMP accesses a TSS descriptor directly outside IA-32e mode, privilege levels are checked on the TSS descriptor" prior to exiting. AMD 15.14.1 says "The intercept is checked before the task switch takes place but after the incoming TSS and task gate (if one was involved) have been checked for correctness." This patch removes the CPL checks for CALL and JMP.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
When emulating LTR/LDTR/LGDT/LIDT, #GP should be injected if the base is non-canonical. Otherwise, VM-entry will fail.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
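A sketch of the canonical-address test such a check would rely on, assuming a 48-bit virtual address space; the helper name and the hard-coded width are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* A 48-bit canonical address has bits 63:47 all equal to bit 47. */
static bool is_canonical_48(uint64_t addr)
{
    uint64_t top = addr >> 47;           /* bits 63:47 */

    return top == 0 || top == 0x1ffff;   /* all zeros or all ones */
}
```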
-
Committed by Nadav Amit
LGDT and LIDT emulation logic is almost identical. Merge the logic into a single point to avoid redundancy. This will be used by the next patch that will ensure the bases of the loaded GDTR and IDTR are canonical.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

If the emulation ends in a fault, eflags should not be updated. However, several instruction emulations (actually all the fastops) currently update eflags even if the fault was detected afterwards (e.g., #PF during writeback).

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

Although the Intel SDM mentions that bit 63 is reserved, MOV to CR3 can have bit 63 set. As the Intel SDM states in section 4.10.4 "Invalidation of TLBs and Paging-Structure Caches": "MOV to CR3. ... If CR4.PCIDE = 1 and bit 63 of the instruction’s source operand is 0 ..." In other words, bit 63 is not reserved. The KVM emulator currently considers bit 63 reserved. Fix it.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
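A hedged sketch of the reserved-bit logic the fix implies: with CR4.PCIDE=1, bit 63 of the MOV-to-CR3 source operand is a "do not flush" hint rather than a reserved bit. The names and the explicit physical-address-width parameter are illustrative, not KVM's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

#define CR3_NOFLUSH_BIT (1ULL << 63)

/* Return true if the value written to CR3 sets bits that must be zero. */
static bool cr3_reserved_bits_set(uint64_t val, bool cr4_pcide, int maxphyaddr)
{
    uint64_t rsvd = ~((1ULL << maxphyaddr) - 1);   /* above physical width */

    if (cr4_pcide)
        rsvd &= ~CR3_NOFLUSH_BIT;   /* bit 63 is legal when PCIDs are enabled */
    return (val & rsvd) != 0;
}
```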
-
Committed by Nadav Amit

According to the Intel SDM, a push of a segment selector is done in the following manner: "if the operand size is 32-bits, either a zero-extended value is pushed on the stack or the segment selector is written on the stack using a 16-bit move. For the last case, all recent Core and Atom processors perform a 16-bit move, leaving the upper portion of the stack location unmodified." This patch modifies the behavior to match that of recent Core processors.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
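A small sketch of the described behavior, with a byte array standing in for guest stack memory; the helper name and representation are illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

/* With a 32-bit operand size, PUSH of a segment register decrements the
 * stack pointer by 4 but writes only 16 bits, leaving the upper half of
 * the stack slot unmodified. */
static void push_sreg_32bit_opsize(uint8_t *stack, uint32_t *sp, uint16_t sel)
{
    *sp -= 4;                                   /* full operand-size decrement */
    memcpy(stack + *sp, &sel, sizeof(sel));     /* 16-bit store only */
}
```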
-
Committed by Nadav Amit

CMPS and SCAS instructions are evaluated in the wrong order. For reference (on CMPS), see http://www.fermimn.gov.it/linux/quarta/x86/cmps.htm : "Note that the direction of subtraction for CMPS is [SI] - [DI] or [ESI] - [EDI]. The left operand (SI or ESI) is the source and the right operand (DI or EDI) is the destination. This is the reverse of the usual Intel convention in which the left operand is the destination and the right operand is the source." This patch introduces em_cmp_r for this purpose; it performs the comparison in reverse order using the fastop infrastructure, to avoid a wrapper function.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
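A tiny sketch of the operand order in question (flags computation omitted); the helper is purely illustrative:

```c
#include <stdint.h>

/* CMPS subtracts in the order src - dst, i.e. [SI] - [DI], which is the
 * reverse of the usual dst - src convention, hence the "reversed" compare. */
static int cmps_compare(uint8_t at_si, uint8_t at_di)
{
    return (int)at_si - (int)at_di;   /* [SI] - [DI], not [DI] - [SI] */
}
```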
-
Committed by Nadav Amit

SYSCALL emulation currently clears eflags in 64-bit mode according to MSR_SYSCALL_MASK. However, on bare metal, eflags[1], which is fixed to one, cannot be cleared even if MSR_SYSCALL_MASK masks the bit. This wrong behavior may result in a failed VM-entry, as VT disallows entry with eflags[1] cleared. This patch sets the bit after masking eflags on SYSCALL.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
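A minimal sketch of the masking order the fix describes; variable and macro names are illustrative:

```c
#include <stdint.h>

#define X86_EFLAGS_FIXED (1ULL << 1)   /* bit 1 of RFLAGS always reads as 1 */

/* Apply the SYSCALL flags mask, then force the always-one bit back on,
 * since VMX refuses to enter a guest whose RFLAGS bit 1 is clear. */
static uint64_t syscall_apply_fmask(uint64_t rflags, uint64_t syscall_mask)
{
    rflags &= ~syscall_mask;
    rflags |= X86_EFLAGS_FIXED;
    return rflags;
}
```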
-
Committed by Nadav Amit

On x86, MOV from a segment register to memory can only be done with a 16-bit or 64-bit operand size. In contrast, KVM may write to 32-bit memory on MOV-sreg. This patch fixes KVM's behavior and sets the destination operand size to two bytes if the destination is memory. When the destination is a register and the operand size is 32 bits, modern CPUs fill the high 16 bits with zero; this case is already handled correctly.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
x86 debug registers hold a linear address. Therefore, breakpoint detection should consider CS.base, and check whether the instruction linear address equals (CS.base + RIP). This patch introduces a function to evaluate the RIP linear address and uses it for breakpoint detection.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
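A sketch of the comparison the patch describes; helper names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Debug registers hold linear addresses, so an instruction breakpoint must
 * match CS.base + RIP, not RIP alone (in 64-bit mode CS.base is 0). */
static uint64_t linear_rip(uint64_t cs_base, uint64_t rip)
{
    return cs_base + rip;
}

static bool instruction_breakpoint_hit(uint64_t dr_addr,
                                       uint64_t cs_base, uint64_t rip)
{
    return dr_addr == linear_rip(cs_base, rip);
}
```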
-
Committed by Nadav Amit

DR6[0:3] (previous breakpoint indications) are cleared when #DB is injected during handle_exception, just as real hardware does. Similarly, handle_dr should clear DR6[0:3].

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
It should clear B0-B3 and set BD.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
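A sketch of the DR6 update described here, clearing the breakpoint-condition bits B0-B3 and setting BD (per the SDM, BD indicates that a debug-register access was detected); the helper name is illustrative:

```c
#include <stdint.h>

#define DR6_B0_B3 0xfULL          /* breakpoint condition bits */
#define DR6_BD    (1ULL << 13)    /* debug-register access detected */

static uint64_t dr6_for_debug_register_access(uint64_t dr6)
{
    dr6 &= ~DR6_B0_B3;
    dr6 |= DR6_BD;
    return dr6;
}
```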
-
Committed by Nadav Amit

Real-mode exceptions do not deliver an error code. As can be seen in Intel SDM volume 2, real-mode exceptions are listed without parentheses, which would indicate an error code. To avoid significant changes to the code, the error code is "removed" during exception queueing.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

In one place, decode_modrm uses the rm field after it is extended with REX.B to determine the addressing mode. Doing so causes it not to read the offset for RIP-relative addressing with REX.B=1. This patch moves the fetch to the point where REX.B is already masked away instead.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Wei Wang
A bug was reported as follows: when running Windows 7 32-bit guests on qemu-kvm, sometimes the guests run into a blue screen during reboot. The problem was that a guest's RVI was not cleared when it rebooted. This patch has fixed the problem.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Tested-by: Rongrong Liu <rongrongx.liu@intel.com>, Da Chun <ngugc@qq.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Return a negative error code instead, and WARN() when we should be covering the entire 2-bit space of vmcs_field_type's return value. For increased robustness, add a BUILD_BUG_ON checking the range of vmcs_field_to_offset.

Suggested-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Tiejun Chen

It makes more sense to do anything specific to VMX hardware setup in vmx_x86_ops->hardware_setup() rather than in vmx_init().

Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Tiejun Chen

Just move this pair of functions down so that we can later add something that depends on other functions.

Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 November 2014, 16 commits
-
-
Committed by Radim Krčmář
We mirror a subset of these registers in separate variables. Using them directly should be faster.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Radim Krčmář
APIC-write VM exits are "trap-like": they save CS:RIP values for the instruction after the write, and more importantly, the handler will already see the new value in the virtual-APIC page. This means that apic_reg_write cannot use kvm_apic_get_reg to omit timer cancelation when the mode changes. timer_mode_mask shouldn't be changing as it depends on cpuid.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Radim Krčmář
APIC-write VM exits are "trap-like": they save CS:RIP values for the instruction after the write, and more importantly, the handler will already see the new value in the virtual-APIC page. This caused a bug if you used KVM_SET_IRQCHIP to set the SW-enabled bit in the SPIV register. The chain of events is as follows:

* When the irqchip is added to the destination VM, the apic_sw_disabled static key is incremented (1)
* When the KVM_SET_IRQCHIP ioctl is invoked, it is decremented (0)
* When the guest disables the bit in the SPIV register, e.g. as part of shutdown, apic_set_spiv does not notice the change and the static key is _not_ incremented.
* When the guest is destroyed, the static key is decremented (-1), resulting in this trace:

WARNING: at kernel/jump_label.c:81 __static_key_slow_dec+0xa6/0xb0()
jump label: negative count!
[<ffffffff816bf898>] dump_stack+0x19/0x1b
[<ffffffff8107c6f1>] warn_slowpath_common+0x61/0x80
[<ffffffff8107c76c>] warn_slowpath_fmt+0x5c/0x80
[<ffffffff811931e6>] __static_key_slow_dec+0xa6/0xb0
[<ffffffff81193226>] static_key_slow_dec_deferred+0x16/0x20
[<ffffffffa0637698>] kvm_free_lapic+0x88/0xa0 [kvm]
[<ffffffffa061c63e>] kvm_arch_vcpu_uninit+0x2e/0xe0 [kvm]
[<ffffffffa05ff301>] kvm_vcpu_uninit+0x21/0x40 [kvm]
[<ffffffffa067cec7>] vmx_free_vcpu+0x47/0x70 [kvm_intel]
[<ffffffffa061bc50>] kvm_arch_vcpu_free+0x50/0x60 [kvm]
[<ffffffffa061ca22>] kvm_arch_destroy_vm+0x102/0x260 [kvm]
[<ffffffff810b68fd>] ? synchronize_srcu+0x1d/0x20
[<ffffffffa06030d1>] kvm_put_kvm+0xe1/0x1c0 [kvm]
[<ffffffffa06036f8>] kvm_vcpu_release+0x18/0x20 [kvm]
[<ffffffff81215c62>] __fput+0x102/0x310
[<ffffffff81215f4e>] ____fput+0xe/0x10
[<ffffffff810ab664>] task_work_run+0xb4/0xe0
[<ffffffff81083944>] do_exit+0x304/0xc60
[<ffffffff816c8dfc>] ? _raw_spin_unlock_irq+0x2c/0x50
[<ffffffff810fd22d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[<ffffffff8108432c>] do_group_exit+0x4c/0xc0
[<ffffffff810843b4>] SyS_exit_group+0x14/0x20
[<ffffffff816d33a9>] system_call_fastpath+0x16/0x1b

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Chao Peng
Expose Intel AVX-512 feature bits to guest. Also add checks for xcr0 AVX512 related bits according to spec: http://download-software.intel.com/sites/default/files/managed/71/2e/319433-017.pdf

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
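A sketch of the XCR0 consistency rules such checks enforce (bit positions per the architecture: SSE=1, AVX=2, opmask=5, ZMM_Hi256=6, Hi16_ZMM=7); the helper name and structure are illustrative, not KVM's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

#define XCR0_SSE        (1ULL << 1)
#define XCR0_AVX        (1ULL << 2)
#define XCR0_OPMASK     (1ULL << 5)
#define XCR0_ZMM_HI256  (1ULL << 6)
#define XCR0_HI16_ZMM   (1ULL << 7)
#define XCR0_AVX512     (XCR0_OPMASK | XCR0_ZMM_HI256 | XCR0_HI16_ZMM)

static bool xcr0_avx512_bits_valid(uint64_t xcr0)
{
    if (xcr0 & XCR0_AVX512) {
        if ((xcr0 & XCR0_AVX512) != XCR0_AVX512)
            return false;               /* the three AVX-512 bits go together */
        if (!(xcr0 & XCR0_AVX))
            return false;               /* AVX-512 state requires AVX */
    }
    if ((xcr0 & XCR0_AVX) && !(xcr0 & XCR0_SSE))
        return false;                   /* AVX requires SSE */
    return true;
}
```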
-
Committed by Radim Krčmář

The check in kvm_set_lapic_tscdeadline_msr() was trying to prevent a situation where we lose a pending deadline timer in an MSR write. Losing it is fine, because it effectively occurs before the timer fired, so we should be able to cancel or postpone it. Another problem comes from interaction with QEMU, or other userspace that can set the deadline MSR without a good reason while a timer is already pending: one guest deadline request then results in more than one interrupt, because one is injected immediately on the MSR write from userspace and one through the hrtimer later. The solution is to remove the injection when replacing a pending timer and, to improve the usual QEMU path, to inject without an hrtimer when the deadline has already passed.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reported-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Radim Krčmář
Make the code reusable. If the timer was already pending, we shouldn't be waiting in a queue, so wake_up can be skipped, simplifying the path. There is no 'reinject' case => the comment is removed. The current race behaves correctly.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

If DR4/5 is accessed when it is unavailable (since CR4.DE is set), then #UD should be generated even if CPL>0. This is according to Intel SDM Table 6-2: "Priority Among Simultaneous Exceptions and Interrupts". Note that this may happen on the first DR access, even if the host does not set debug breakpoints; obviously, it occurs when the host debugs the guest.

This patch moves the DR4/5 checks from __kvm_set_dr/_kvm_get_dr to handle_dr. The emulator already checks DR4/5 availability in check_dr_read. Nested-virtualization-related calls to kvm_set_dr/kvm_get_dr should not inject exceptions into the guest.

As for SVM, the patch follows the previous logic as much as possible. Anyhow, it appears the DR interception code might be buggy: even if the DR access may cause an exception, the instruction is skipped.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
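A sketch of the priority rule the SDM table describes: with CR4.DE=1, an access to DR4/DR5 raises #UD before the CPL check can raise #GP. The enum and helper are illustrative, not KVM's actual code:

```c
#include <stdbool.h>

enum dr_fault { DR_OK, DR_FAULT_UD, DR_FAULT_GP };

static enum dr_fault check_dr_access(int dr, int cpl, bool cr4_de)
{
    if ((dr == 4 || dr == 5) && cr4_de)
        return DR_FAULT_UD;   /* #UD has priority over the CPL check */
    if (cpl > 0)
        return DR_FAULT_GP;
    return DR_OK;
}
```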
-
Committed by Nadav Amit
When a read access is performed using a readable code segment, the "conforming" and "non-conforming" checks should not be done. As a result, a read using a non-conforming readable code segment fails. This is according to Intel SDM 5.6.1 ("Accessing Data in Code Segments"). The fix is not to perform the "non-conforming" checks if the access is not a fetch; the relevant checks are already done when loading the segment.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
DR7.LE should be cleared during task-switch. This feature is poorly documented. For reference, see:
http://pdos.csail.mit.edu/6.828/2005/readings/i386/s12_02.htm
SDM [17.2.4]: This feature is not supported in the P6 family processors, later IA-32 processors, and Intel 64 processors.
AMD [2:13.1.1.4]: This bit is ignored by implementations of the AMD64 architecture.
Intel's formulation could mean that it isn't even zeroed, but current hardware indeed does not behave like that.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
In long-mode, when the address size is 4 bytes, the linear address is not truncated as the emulator mistakenly does. Instead, the offset within the segment (the ea field) should be truncated according to the address size. As the Intel SDM says: "In 64-bit mode, the effective address components are added and the effective address is truncated ... before adding the full 64-bit segment base."

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
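A minimal sketch of the ordering described: truncate the effective address to the address size first, then add the untruncated 64-bit segment base. The helper name is illustrative:

```c
#include <stdint.h>

static uint64_t linearize_64bit_mode(uint64_t seg_base, uint64_t ea,
                                     int addr_bytes)
{
    uint64_t ea_mask = (addr_bytes == 8) ? ~0ULL : 0xffffffffULL;

    return seg_base + (ea & ea_mask);   /* the sum itself is not truncated */
}
```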
-
Committed by Nadav Amit

Intel SDM 17.2.4 (Debug Control Register (DR7)) says: "The processor clears the GD flag upon entering to the debug exception handler." This sentence may be misunderstood as if it happens only on a #DB due to debug-register protection, but it happens regardless of the cause of the #DB. Fix the behavior to match both real hardware and Bochs.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

KVM does not deliver x2APIC broadcast messages in physical mode. Intel SDM (10.12.9 ICR Operation in x2APIC Mode) states: "A destination ID value of FFFF_FFFFH is used for broadcast of interrupts in both logical destination and physical destination modes."

In addition, the local APIC enables cluster-mode broadcast. As Intel SDM 10.6.2.2 says: "Broadcast to all local APICs is achieved by setting all destination bits to one." This patch enables cluster-mode broadcast.

The fix tries to combine broadcast in the different modes through unified code. One rare case occurs when the source of an IPI has its APIC disabled. In such a case, the source can still issue IPIs, but since the source is not obliged to have the same LAPIC mode as the enabled ones, we cannot rely on it. Since it is a rare case, it is unoptimized and done on the slow path.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Wanpeng Li <wanpeng.li@linux.intel.com>
[As per Radim's review, use unsigned int for X2APIC_BROADCAST, return bool from kvm_apic_broadcast. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
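A hedged sketch of the broadcast-destination check the commit implies (0xFF in xAPIC mode, FFFF_FFFFH in x2APIC mode); constant and helper names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define XAPIC_BROADCAST   0xffu
#define X2APIC_BROADCAST  0xffffffffu

/* A destination matches the broadcast ID of the mode the APIC is in,
 * for both physical and logical destination modes. */
static bool dest_is_broadcast(uint32_t dest_id, bool x2apic_mode)
{
    return dest_id == (x2apic_mode ? X2APIC_BROADCAST : XAPIC_BROADCAST);
}
```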
-
Committed by Andy Lutomirski
CR4.TSD is guest-owned; don't trap writes to it in VMX guests. This avoids a VM exit on context switches into or out of a PR_TSC_SIGSEGV task. I think that this fixes an unintentional side-effect of:
4c38609a KVM: VMX: Make guest cr4 mask more conservative

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit
If the operand size is not 64-bit, then the sysexit instruction should assign ECX to RSP and EDX to RIP. The current code assigns the full 64 bits. Fix it by masking.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
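A minimal sketch of the masking described; parameter and helper names are illustrative:

```c
#include <stdint.h>

/* Unless SYSEXIT runs with a 64-bit operand size, only ECX and EDX
 * (the low 32 bits of RCX/RDX) are loaded into RSP and RIP. */
static void sysexit_load_regs(uint64_t rcx, uint64_t rdx, int op_bytes,
                              uint64_t *rsp, uint64_t *rip)
{
    uint64_t mask = (op_bytes == 8) ? ~0ULL : 0xffffffffULL;

    *rsp = rcx & mask;
    *rip = rdx & mask;
}
```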
-
Committed by Nadav Amit

In 64-bit mode, stack operations default to 64 bits but can be overridden (to 16 bits) using the operand-size override prefix. In contrast, near branches are always 64-bit. This patch distinguishes between the different behaviors.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Nadav Amit

Break grp45 into the relevant functions to speed up the emulation and simplify the code. In addition, this is necessary for the next patch, which will distinguish between far and near branches according to the flags.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-