- 16 April 2013, 6 commits
-
-
Submitted by Yang Zhang
Userspace may deliver the RTC interrupt without querying the status, so we want to track the RTC EOI for this case.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Yang Zhang
We need the EOI to track interrupt delivery status, so force a vmexit on EOI for the RTC interrupt when virtual interrupt delivery is enabled.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Yang Zhang
Restore rtc_status after migration or save/restore.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Yang Zhang
Add a new parameter to record which vcpus received the interrupt.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Yang Zhang
rtc_status is used to track RTC interrupt delivery status. pending_eoi is incremented for each vcpu that receives the RTC interrupt and decremented when that vcpu issues an EOI for it. We also use dest_map to record the destination vcpus, so that a vcpu which did not receive the RTC interrupt but issues an EOI with the same vector as the RTC cannot decrement pending_eoi by mistake.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
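A minimal sketch of the bookkeeping described above, assuming a kernel context; pending_eoi and dest_map follow the commit's naming, while the struct layout and helper names are illustrative rather than the exact ioapic code.

```c
/* Illustrative sketch of the RTC delivery tracking described above. */
struct rtc_status {
	int pending_eoi;                         /* EOIs still outstanding for the RTC vector */
	DECLARE_BITMAP(dest_map, KVM_MAX_VCPUS); /* vcpus that actually received the interrupt */
};

static void rtc_irq_mark_delivered(struct rtc_status *rtc, int vcpu_id)
{
	if (!__test_and_set_bit(vcpu_id, rtc->dest_map))
		rtc->pending_eoi++;
}

static void rtc_irq_ack(struct rtc_status *rtc, int vcpu_id)
{
	/* Only vcpus recorded in dest_map may decrement pending_eoi, so an
	 * EOI for the same vector from a non-destination vcpu is ignored. */
	if (__test_and_clear_bit(vcpu_id, rtc->dest_map))
		rtc->pending_eoi--;
}
```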
-
Submitted by Yang Zhang
Add vcpu info to ioapic_update_eoi() so we know which vcpu issued this EOI.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- 14 April 2013, 8 commits
-
-
Submitted by Jan Kiszka
We only need to update vm_exit_intr_error_code if the exit interruption information is valid and it carries a valid error code.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
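As a hedged sketch, the check amounts to something like the following; the VMX field and mask names exist in the kernel's VMX code, but the placement inside prepare_vmcs12 is condensed here.

```c
/* Copy the error code only when the exit interruption info is valid
 * and indicates that an error code was actually delivered. */
u32 intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

if ((intr_info & INTR_INFO_VALID_MASK) &&
    (intr_info & INTR_INFO_DELIVER_CODE_MASK))
	vmcs12->vm_exit_intr_error_code =
		vmcs_read32(VM_EXIT_INTR_ERROR_CODE);
```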
-
Submitted by Jan Kiszka
If we are entering guest mode, we do not want L0 to interrupt this vmentry with all its side effects on the vmcs. Therefore, injection shall be disallowed during L1->L2 transitions, as in the previous version. However, this check is conceptually independent of nested_exit_on_intr, so decouple it. If L1 traps external interrupts, we can kick the guest from L2 to L1, just as the previous code did. But we no longer need to consider L1's idt_vectoring_info_field; it will always be empty at this point. Instead, if L2 has pending events, those are now found in the architectural queues and will thus prevent vmx_interrupt_allowed from being called at all.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Jan Kiszka
The basic idea is to always transfer the pending event injection on vmexit into the architectural state of the VCPU and then drop it from there if it turns out that we left L2 to enter L1, i.e. if we enter prepare_vmcs12. vmcs12_save_pending_events takes care of transferring pending L0 events into the queue of L1. That is mandatory, as L1 may decide to switch the guest state completely, invalidating or preserving the pending events for later injection (including on a different node, once we support migration). This concept is based on the rule that a pending vmlaunch/vmresume is not canceled; otherwise, we would risk losing injected events or leaking them into the wrong queues. Encode this rule via a WARN_ON_ONCE at the entry of nested_vmx_vmexit.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
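The rule in the last sentence can be encoded roughly as sketched below; nested_run_pending is the VMX flag for a pending vmlaunch/vmresume, and the rest of the function body is elided.

```c
static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/* A pending vmlaunch/vmresume must never be canceled by a vmexit. */
	WARN_ON_ONCE(vmx->nested.nested_run_pending);

	/* ... save pending events into vmcs12 and switch back to L1 ... */
}
```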
-
Submitted by Jan Kiszka
Check whether the interrupt or NMI window exit is for L1 by testing if it has the corresponding controls enabled. This is required once we allow direct injection from L0 into L2.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Gleb Natapov
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Gleb Natapov
Emulation of an undefined opcode should inject #UD instead of causing an emulation failure. Do that by moving the Undefined flag check to the emulation stage and injecting #UD there.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
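A sketch of the check at the emulation stage; ctxt->d, Undefined and emulate_ud() are the emulator's existing names, with the surrounding dispatch loop omitted.

```c
/* In x86_emulate_insn(): an opcode marked Undefined becomes a guest #UD
 * instead of an emulation failure reported to userspace. */
if (ctxt->d & Undefined)
	return emulate_ud(ctxt);
```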
-
Submitted by Gleb Natapov
During invalid guest state emulation the vcpu cannot enter guest mode to reexecute an instruction that the emulator failed to emulate, so emulation would happen again and again. Prevent that by telling the emulator that instruction reexecution should not be attempted.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
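A hedged sketch of the call site on the invalid-guest-state path; EMULTYPE_NO_REEXECUTE is the emulation-type flag this change adds, and the exact argument list shown should be read as illustrative.

```c
/* Invalid guest state emulation loop: tell the emulator not to retry
 * by reentering guest mode, since the vcpu cannot enter guest mode. */
ret = x86_emulate_instruction(vcpu, 0 /* cr2 */,
			      EMULTYPE_NO_REEXECUTE, NULL, 0);
```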
-
Submitted by Gleb Natapov
Unimplemented instruction detection is broken for group instructions, since it relies on the opcode's "flags" field being zero, but all instructions in a group inherit the flags from the group encoding. Fix that by having a separate flag for unimplemented instructions.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
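Sketched below is the shape of such a flag; the name NotImpl mirrors what the emulator uses, but the bit position and table excerpt are illustrative.

```c
/* A dedicated flag marks unimplemented entries, so group members (which
 * inherit the group's non-zero flags) are no longer misdetected. */
#define NotImpl	((u64)1 << 47)	/* exact bit position illustrative */
#define N	D(NotImpl)	/* opcode-table entry: not implemented */

/* Detection then tests the flag instead of relying on flags == 0: */
if (ctxt->d == 0 || (ctxt->d & NotImpl))
	return EMULATION_FAILED;
```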
-
- 11 April 2013, 1 commit
-
-
Submitted by Kevin Wolf
This fixes a regression introduced in commit 03ebebeb ("KVM: x86 emulator: Leave segment limit and attributs alone in real mode"). The mentioned commit changed the segment descriptors for both real mode and VM86 to only update the segment base instead of creating a completely new descriptor with limit 0xffff, so that unreal mode keeps working across a segment register reload. This leads to an invalid segment descriptor in the eyes of VMX, which seems to be okay for real mode because KVM will fix it up before the next VM entry or emulate the state, but it doesn't do this if the guest is in VM86, so we end up with: KVM: entry failed, hardware error 0x80000021. Fix this by effectively reverting commit 03ebebeb for VM86 and leaving it in place only for real mode, which is where it's really needed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
- 08 April 2013, 6 commits
-
-
Submitted by Geoff Levand
The variable kvm_rebooting is a common kvm variable, so move its declaration from arch/x86/include/asm/kvm_host.h to include/linux/kvm_host.h. Fixes this sparse warning when building on arm64: virt/kvm/kvm_main.c: warning: symbol 'kvm_rebooting' was not declared. Should it be static?
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Geoff Levand
The routine kvm_spurious_fault() is an x86-specific routine, so move it from virt/kvm/kvm_main.c to arch/x86/kvm/x86.c. Fixes this sparse warning when building on arm64: virt/kvm/kvm_main.c: warning: symbol 'kvm_spurious_fault' was not declared. Should it be static?
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Geoff Levand
The routines get_user_page_nowait(), kvm_io_bus_sort_cmp(), kvm_io_bus_insert_dev() and kvm_io_bus_get_first_dev() are only referenced within kvm_main.c, so give them static linkage. Fixes sparse warnings like these: virt/kvm/kvm_main.c: warning: symbol 'get_user_page_nowait' was not declared. Should it be static?
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Geoff Levand
The variables vm_list and kvm_lock are common to all architectures, so move their declarations from arch/x86/include/asm/kvm_host.h to include/linux/kvm_host.h. Fixes sparse warnings like these when building for arm64: virt/kvm/kvm_main.c: warning: symbol 'kvm_lock' was not declared. Should it be static? virt/kvm/kvm_main.c: warning: symbol 'vm_list' was not declared. Should it be static?
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Jan Kiszka
The code was already properly aligned; now also add the braces to avoid err being checked even if alloc_apic_access_page() did not run and change it. Found via Coccinelle by Fengguang Wu.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
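For illustration, this is the bug class being fixed; the condition and label names here are hypothetical, only the braces are the point.

```c
/* Before: the indentation suggests a block, but only the first
 * statement is guarded, so err is checked even when stale. */
if (need_apic_access_page)		/* hypothetical condition */
	err = alloc_apic_access_page(kvm);
	if (err)
		goto free_vmcs;

/* After: braces make the guard cover both statements. */
if (need_apic_access_page) {
	err = alloc_apic_access_page(kvm);
	if (err)
		goto free_vmcs;
}
```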
-
Submitted by Yang Zhang
Free vmx_msr_bitmap_longmode_x2apic and vmx_msr_bitmap_longmode if kvm_init() fails.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
- 07 April 2013, 5 commits
-
-
Submitted by Michael S. Tsirkin
PIO and MMIO are separate address spaces, but the ioeventfd registration code mistakenly detected two eventfds as duplicates if they used the same address, even if one was PIO and the other MMIO.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
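A hedged sketch of the corrected duplicate check: the registration record also remembers which bus (PIO or MMIO) it belongs to, and a collision is only declared within the same bus. Field names are condensed from the ioeventfd code and should be read as illustrative.

```c
/* An eventfd at the same address on a different bus (PIO vs MMIO)
 * is no longer considered a duplicate. */
static bool ioeventfd_collides(struct kvm *kvm, struct _ioeventfd *p)
{
	struct _ioeventfd *q;

	list_for_each_entry(q, &kvm->ioeventfds, list)
		if (q->bus_idx == p->bus_idx &&   /* same address space */
		    q->addr == p->addr &&
		    q->length == p->length)
			return true;

	return false;
}
```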
-
Submitted by Jan Kiszka
Obviously a copy&paste mistake: prepare_vmcs12 has to check L1's exit controls for VM_EXIT_SAVE_IA32_PAT.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Yang Zhang
For a given vcpu, kvm_apic_match_dest() will quickly tell you whether the vcpu is in the destination list. Drop kvm_calculate_eoi_exitmap() and use kvm_apic_match_dest() instead.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
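Roughly, the replacement iterates over the vcpus and asks kvm_apic_match_dest() directly, as sketched here; the loop is an illustrative caller, not the exact patched function.

```c
/* Test each vcpu against the IRQ's destination instead of building
 * an EOI exit bitmap via kvm_calculate_eoi_exitmap(). */
struct kvm_vcpu *vcpu;
int i;

kvm_for_each_vcpu(i, vcpu, ioapic->kvm) {
	if (kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
				irq->dest_id, irq->dest_mode))
		__set_bit(vcpu->vcpu_id, dest_map);
}
```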
-
Submitted by Cornelia Huck
ccw_io_helper neglected to reset vcdev->err after a new channel program had been successfully started, resulting in stale errors being delivered after one I/O failed. Reset the error after a new channel program has been successfully started with no old I/O pending.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Takuya Yoshikawa
With commit 7ddca7e4 ("KVM: MMU: Move kvm_mmu_free_some_pages() into kvm_mmu_alloc_page()"), shadow pages can be zapped at random during a shadow page table walk. This patch reverts it and fixes __direct_map() and FNAME(fetch)().
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
- 02 April 2013, 12 commits
-
-
Submitted by Paolo Bonzini
In order to migrate the PMU state correctly, we need to restore the values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written). We also need to write the full 40-bit value of the performance counter, which would only be possible with a v3 architectural PMU's full-width counter MSRs. To distinguish host-initiated writes from the guest's, pass the full struct msr_data to kvm_pmu_set_msr.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
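A sketch of the interface change; kvm_pmu_set_msr() and struct msr_data (with its host_initiated flag) are the names from the commit, while the body shown is condensed.

```c
/* The PMU handler now sees the whole msr_data, so it can distinguish a
 * host-initiated restore (e.g. during migration) from a guest write. */
int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
	u32 index = msr_info->index;
	u64 data  = msr_info->data;

	if (msr_info->host_initiated) {
		/* e.g. accept a verbatim MSR_CORE_PERF_GLOBAL_STATUS value */
	}

	/* ... regular guest-write handling for 'index'/'data' ... */
	return 0;
}
```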
-
Submitted by Nick Wang
Return KVM_USER_MEM_SLOTS in kvm_dev_ioctl_check_extension().
Signed-off-by: Nick Wang <jfwang@us.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Nick Wang
To model the standby memory with memory_region_add_subregion and friends, the guest would have one or more regions of ram. Remove the check allowing only one memory slot and the check requiring the real address of the memory slot to start at zero.
Signed-off-by: Nick Wang <jfwang@us.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Nick Wang
The current location for mapping virtio devices does not take the standby memory into consideration. This causes mapping of standby memory to fail, since the location for the mapping is already taken by the virtio devices. To fix the problem, we move the location to beyond the end of standby memory.
Signed-off-by: Nick Wang <jfwang@us.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Heiko Carstens
arch/s390/kvm/priv.c should include both linux/compat.h and asm/compat.h. Fixes this one: In file included from arch/s390/kvm/priv.c:23:0: arch/s390/include/asm/compat.h: In function ‘arch_compat_alloc_user_space’: arch/s390/include/asm/compat.h:258:2: error: implicit declaration of function ‘is_compat_task’
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Heiko Carstens
In case of an exception the guest psw condition code should be left alone.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Heiko Carstens
kvm_s390_inject_program_int() and friends may fail if no memory is available. This must be reported to the calling functions so that it gets passed up to user space, which should fix the situation; otherwise we end up with guest state corruption. So fix this and enforce return value checking by adding a __must_check annotation to all of these function prototypes.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
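The annotation itself looks like the following sketch; the two prototypes are abbreviated from the s390 KVM headers and should be treated as illustrative.

```c
/* Callers can no longer silently ignore an allocation failure. */
int __must_check kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
int __must_check kvm_s390_inject_vm(struct kvm *kvm,
				    struct kvm_s390_interrupt *s390int);
```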
-
Submitted by Heiko Carstens
Being unable to parse the 5- and 8-line if statements, I had to split them to be able to make any sense of them and verify that they match the architecture. So change the code, since other people will probably also have a hard time parsing such long conditional statements with line breaks. Introduce a common is_valid_psw() function which does all the checks needed. In the case of lpsw (64 bit psw -> 128 bit psw conversion) it will do some additional, unneeded checks, since a couple of bits can't be set anyway, but that doesn't hurt.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
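A hedged sketch of a combined validity check along these lines; the PSW mask macros come from the s390 headers, but the exact set of conditions in the real is_valid_psw() may differ.

```c
/* One helper that both the lpsw and lpswe handlers can call. */
static int is_valid_psw(psw_t *psw)
{
	if (psw->mask & PSW_MASK_UNASSIGNED)
		return 0;
	if ((psw->mask & PSW_MASK_ADDR_MODE) == PSW_MASK_BA &&
	    (psw->addr & ~PSW_ADDR_24))
		return 0;	/* 24-bit mode: address must fit in 24 bits */
	if (!(psw->mask & PSW_MASK_ADDR_MODE) && (psw->addr & ~PSW_ADDR_31))
		return 0;	/* 31-bit mode: address must fit in 31 bits */
	if ((psw->mask & PSW_MASK_ADDR_MODE) == PSW_MASK_EA)
		return 0;	/* EA set without BA is invalid */
	return 1;
}
```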
-
Submitted by Heiko Carstens
kvm_s390_inject_program_int() may return with a non-zero return value in case of an error (out of memory). Report that to the calling functions instead of ignoring the error case.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Heiko Carstens
When converting a 64 bit psw to a 128 bit psw, the addressing mode bit of the "addr" part of the 64 bit psw must be moved to the basic addressing mode bit of the "mask" part of the 128 bit psw. In addition, the addressing mode bit must be cleared when moved to the "addr" part of the 128 bit psw. Otherwise an invalid psw would be generated if the original psw was in the 31 bit addressing mode.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
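As a sketch, the conversion described above looks roughly like this; PSW32_MASK_BASE and PSW32_ADDR_AMODE are the 32-bit psw macros from the s390 compat headers, and the surrounding lpsw handler is elided.

```c
psw_compat_t new_psw;	/* 64 bit psw, read from guest memory (elided) */
psw_t *gpsw = &vcpu->arch.sie_block->gpsw;

gpsw->mask  = (new_psw.mask & ~PSW32_MASK_BASE) << 32;
gpsw->mask |= new_psw.addr & PSW32_ADDR_AMODE;	/* becomes the BA bit */
gpsw->addr  = new_psw.addr & ~PSW32_ADDR_AMODE;	/* clear AMODE in addr */
```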
-
Submitted by Heiko Carstens
When checking for validity, the lpsw/lpswe handlers check that only the lower 20 bits instead of the lower 24 bits have a non-zero value, thereby treating valid psws as invalid ones. Fix the 24 bit psw mask.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
Submitted by Christian Borntraeger
Some memslot updates don't affect the gmap implementation, e.g. setting/unsetting dirty tracking. Since a gmap update will cause tlb flushes and segment table invalidations, we want to avoid it in those cases.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
- 24 March 2013, 1 commit
-
-
- 22 March 2013, 1 commit
-
-
Submitted by Paul Mackerras
Currently kvmppc_core_dequeue_external() takes a struct kvm_interrupt * argument and does nothing with it in any of its implementations. Remove it in order to make things easier for forthcoming in-kernel interrupt controller emulation code.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
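The resulting signature change, as a sketch; the prototype lives in the powerpc KVM headers and the unused parameter simply goes away.

```c
/* Before: the kvm_interrupt argument was never used by any implementation. */
void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
				  struct kvm_interrupt *irq);

/* After: drop the unused parameter. */
void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu);
```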
-