- 02 Aug 2010, 22 commits
-
-
Submitted by Gleb Natapov
Devices register mask notifiers using a gsi, but the irqchip knows about irqchip/pin, so the conversion from irqchip/pin to gsi must be done before looking up the mask notifier to call. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
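A minimal sketch of the idea, assuming this era's KVM routing structures (chip[irqchip][pin] holds the gsi, and notifiers registered by gsi live on mask_notifier_list); names are illustrative rather than the exact patch:

void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
                             bool mask)
{
        struct kvm_irq_mask_notifier *kimn;
        struct hlist_node *n;
        int gsi;

        rcu_read_lock();
        /* Convert irqchip/pin back to the gsi the notifiers were keyed on. */
        gsi = rcu_dereference(kvm->irq_routing)->chip[irqchip][pin];
        if (gsi != -1)
                hlist_for_each_entry_rcu(kimn, n, &kvm->mask_notifier_list, link)
                        if (kimn->irq == gsi)
                                kimn->func(kimn, mask);
        rcu_read_unlock();
}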
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Avi Kivity
Userspace needs to reset and save/restore these MSRs. The MCE banks are not exposed since their number varies from vcpu to vcpu. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Xiao Guangrong
Fix:

general protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
......
Call Trace:
 [<ffffffffa0159bd1>] ? kvm_set_irq+0xdd/0x24b [kvm]
 [<ffffffff8106ea8b>] ? trace_hardirqs_off_caller+0x1f/0x10e
 [<ffffffff813ad17f>] ? sub_preempt_count+0xe/0xb6
 [<ffffffff8106d273>] ? put_lock_stats+0xe/0x27
...
RIP [<ffffffffa0159c72>] kvm_set_irq+0x17e/0x24b [kvm]

This bug is triggered when the guest is shut down; it happens because we freed irq_routing before the PIT thread stopped. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Gleb Natapov
When shadow pages are in use, KVM sometimes tries to emulate an instruction when it accesses a shadowed page. If emulation fails, KVM un-shadows the page and re-enters the guest to allow the vcpu to execute the instruction. If the page is not in the shadow page hash, KVM assumes this was an attempt at MMIO and reports an emulation failure to userspace, since there is no way to fix the situation. This logic has a race, though. If two vcpus try to write to the same shadowed page simultaneously, both will enter the emulator, but only one of them will find the page in the shadow page hash, since the one that finds it also removes it from there; the other vcpu will report failure to userspace and abort the guest. Fix this by checking (in addition to checking the shadow page hash) that the page that caused the emulation belongs to a valid memory slot. If it does, re-enter the guest to allow the vcpu to re-execute the instruction. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Gleb Natapov
Currently, if the guest accesses an address that belongs to a memory slot but is not backed by a page, or the page is read-only, KVM treats it like an MMIO access. Remove that capability. It was never part of the interface and should not be relied upon. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Gleb Natapov
They are not used outside of the file. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Jiri Slaby
Stanse found an omitted unlock in one failure path of kvm_create_pit. Add the proper unlock there. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Cc: Avi Kivity <avi@redhat.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Cc: Gleb Natapov <gleb@redhat.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Gregory Haskins <ghaskins@novell.com> Cc: kvm@vger.kernel.org Signed-off-by: Avi Kivity <avi@redhat.com>
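The shape of the fix, as a sketch (the surrounding kvm_create_pit body is abbreviated): any failure exit taken while pit_state.lock is held must drop the lock first.

        mutex_lock(&pit->pit_state.lock);

        pit->wq = create_singlethread_workqueue("kvm-pit-wq");
        if (!pit->wq) {
                mutex_unlock(&pit->pit_state.lock);     /* the omitted unlock */
                kfree(pit);
                return NULL;
        }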
-
Submitted by Avi Kivity
Real hardware disregards permission errors when computing page fault error code bit 0 (page present). Do the same. Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
Bit 4 of the page fault error code is set only if EFER.NX is set. Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
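A sketch of what this implies for the page-table walker's error-code construction, assuming the usual x86 #PF layout where PFERR_FETCH_MASK is bit 4 (illustrative, not the exact hunk):

        /* Without EFER.NX the hardware never reports bit 4, so only
         * indicate a fetch fault when NX is actually enabled. */
        if (fetch_fault && is_nx(vcpu))
                walker->error_code |= PFERR_FETCH_MASK;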
-
Submitted by Wei Yongjun
This patch changes the decoding of 'mov AL,moffs' to use DstAcc and introduces SrcAcc for decoding 'mov moffs,AL'. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Wei Yongjun
If the IOPL check fails, cli/sti emulation injects a #GP, and we should then skip the writeback, since the default write OP is OP_REG. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Wei Yongjun
The source operand of 'mov rm,sreg' is a segment register, not a general-purpose register, so remove SrcReg from the decoding. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Wei Yongjun
'and AL,imm8' should be marked as ByteOp; otherwise the destination operand length will not be correct and we may fill all of EAX on writeback. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Wei Yongjun
Fix the comment of the 'out' instruction to use the same style as the other instructions. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Wei Yongjun
Memory reads for 'mov sreg,rm16' should be 16 bits only. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
__set_spte() will happily replace an spte that has the accessed bit set with one that has the accessed bit clear. Add a helper, update_spte(), which checks for this condition and updates the page flag if needed. Signed-off-by: Avi Kivity <avi@redhat.com>
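A non-atomic sketch of the update_spte() idea (helper names follow this era's KVM mmu; treat the body as illustrative):

static void update_spte(u64 *sptep, u64 new_spte)
{
        u64 old_spte = *sptep;

        /* If the accessed bit would be lost, fold it into the struct
         * page before installing the new spte. */
        if ((old_spte & shadow_accessed_mask) &&
            !(new_spte & shadow_accessed_mask))
                mark_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));

        __set_spte(sptep, new_spte);
}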
-
Submitted by Avi Kivity
Currently, in the window between the check for the accessed bit and actually dropping the spte, a vcpu can access the page through the spte and set the bit, which will be ignored by the mmu. Fix by using an exchange operation to atomically fetch the spte and drop it. Signed-off-by: Avi Kivity <avi@redhat.com>
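A sketch of the atomic fetch-and-drop, assuming x86's xchg and the mmu helpers above; an accessed bit set by another vcpu during the window is now observed in the returned value instead of being lost:

        u64 old_spte = xchg(sptep, shadow_trap_nonpresent_pte);

        if (old_spte & shadow_accessed_mask)
                mark_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));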
-
Submitted by Avi Kivity
Since we need to make the check atomic, move it to the place that will set the new spte. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Avi Kivity
When we call rmap_remove(), we (almost) always immediately follow it with an __set_spte() to a nonpresent pte. Since we need to perform the two operations atomically, to avoid losing the dirty and accessed bits, introduce a helper drop_spte() and convert all call sites. The operation is still nonatomic at this point. Signed-off-by: Avi Kivity <avi@redhat.com>
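A sketch of the helper as described (still nonatomic here; a later patch makes the pair atomic):

static void drop_spte(struct kvm *kvm, u64 *sptep, u64 new_spte)
{
        rmap_remove(kvm, sptep);
        __set_spte(sptep, new_spte);    /* usually a nonpresent pte */
}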
-
Submitted by Xiao Guangrong
Commit 341d9b535b6c simplified the reload logic on guest entry; it avoids an unnecessary sync-root when KVM_REQ_MMU_RELOAD and KVM_REQ_MMU_SYNC are both set. But it causes an issue: when we handle KVM_REQ_TLB_FLUSH, the root may be invalid. This was triggered during my testing: Kernel BUG at ffffffffa00212b8 [verbose debug info unavailable] ...... Fixed by returning directly if the root is not ready. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Joerg Roedel
For 32-bit machines where the physical address width is larger than the virtual address width, the frame number types in KVM may overflow. Fix this by changing them to u64. [sfr: fix build on 32-bit ppc] Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- 01 Aug 2010, 18 commits
-
-
Submitted by Joerg Roedel
This patch converts unnecessary divide and modulo operations in the KVM large-page code into logical operations. This allows converting gfn_t to u64 without breaking 32-bit builds. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
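The shape of the conversion, as a sketch (KVM_HPAGE_GFN_SHIFT as the per-level gfn shift and slot as the memslot being indexed are assumptions of this example):

        /* Before: a 64-bit divide, which 32-bit builds cannot do inline
         * once gfn_t becomes u64. */
        idx = (gfn / KVM_PAGES_PER_HPAGE(level)) -
              (slot->base_gfn / KVM_PAGES_PER_HPAGE(level));

        /* After: large-page sizes are powers of two, so shift instead. */
        idx = (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
              (slot->base_gfn >> KVM_HPAGE_GFN_SHIFT(level));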
-
Submitted by Sheng Yang
This patch fixes the following warning.

===================================================
[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
include/linux/kvm_host.h:259 invoked rcu_dereference_check() without protection!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
no locks held by qemu-system-x86/29679.

stack backtrace:
Pid: 29679, comm: qemu-system-x86 Not tainted 2.6.35-rc3+ #200
Call Trace:
 [<ffffffff810a224e>] lockdep_rcu_dereference+0xa8/0xb1
 [<ffffffffa018a06f>] kvm_iommu_unmap_memslots+0xc9/0xde [kvm]
 [<ffffffffa018a0c4>] kvm_iommu_unmap_guest+0x40/0x4e [kvm]
 [<ffffffffa018f772>] kvm_arch_destroy_vm+0x1a/0x186 [kvm]
 [<ffffffffa01800d0>] kvm_put_kvm+0x110/0x167 [kvm]
 [<ffffffffa0180ecc>] kvm_vcpu_release+0x18/0x1c [kvm]
 [<ffffffff81156f5d>] fput+0x22a/0x3a0
 [<ffffffff81152288>] filp_close+0xb4/0xcd
 [<ffffffff8106599f>] put_files_struct+0x1b7/0x36b
 [<ffffffff81065830>] ? put_files_struct+0x48/0x36b
 [<ffffffff8131ee59>] ? do_raw_spin_unlock+0x118/0x160
 [<ffffffff81065bc0>] exit_files+0x6d/0x75
 [<ffffffff81068348>] do_exit+0x47d/0xc60
 [<ffffffff8177e7b5>] ? _raw_spin_unlock_irq+0x30/0x36
 [<ffffffff81068bfa>] do_group_exit+0xcf/0x134
 [<ffffffff81080790>] get_signal_to_deliver+0x732/0x81d
 [<ffffffff81095996>] ? cpu_clock+0x4e/0x60
 [<ffffffff81002082>] do_notify_resume+0x117/0xc43
 [<ffffffff810a2fa3>] ? trace_hardirqs_on+0xd/0xf
 [<ffffffff81080d79>] ? sys_rt_sigtimedwait+0x2b5/0x3bf
 [<ffffffff8177d9f2>] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [<ffffffff81003221>] ? sysret_signal+0x5/0x3d
 [<ffffffff8100343b>] int_signal+0x12/0x17

Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Alexander Graf
We just introduced generic functions to handle shadow pages on PPC. This patch makes the respective backends make use of them, getting rid of a lot of duplicate code along the way. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Alexander Graf
Currently the shadow paging code keeps an array of entries it knows about. Whenever the guest invalidates an entry, we loop through that array, trying to invalidate matching parts. While this is a really simple implementation, it is probably the most inefficient one possible. So instead, let's keep an array of lists around that are indexed by a hash. This way each PTE can be added with 4 list_add calls and removed with 4 list_del invocations, and a search only needs to loop through entries that share the same hash. This patch implements said lookup and exports generic functions that both the 32-bit and 64-bit backends can use. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
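A hedged sketch of such a structure (the bucket count, names, and number of indexes are illustrative, not the exact PPC patch): each shadow PTE hangs off one hlist per lookup key, so invalidating by effective address only walks one bucket.

#include <linux/list.h>
#include <linux/hash.h>

#define HPTE_HASH_BITS  13
#define HPTE_HASH_NUM   (1 << HPTE_HASH_BITS)

struct hpte_cache {
        struct hlist_node list_pte;     /* bucket keyed by guest EA */
        /* ...three more hlist_node members for the other keys... */
        u64 ea;
};

static struct hlist_head hpte_hash_pte[HPTE_HASH_NUM];

static inline int hash_by_ea(u64 ea)
{
        return hash_64(ea >> PAGE_SHIFT, HPTE_HASH_BITS);
}

static void hpte_cache_add(struct hpte_cache *pte)
{
        hlist_add_head(&pte->list_pte, &hpte_hash_pte[hash_by_ea(pte->ea)]);
        /* ...plus one hlist_add_head() per remaining index... */
}

static void hpte_invalidate_ea(u64 guest_ea)
{
        struct hpte_cache *pte;
        struct hlist_node *node, *tmp;

        /* Only entries sharing this hash are visited. */
        hlist_for_each_entry_safe(pte, node, tmp,
                                  &hpte_hash_pte[hash_by_ea(guest_ea)],
                                  list_pte)
                if (pte->ea == guest_ea)
                        hlist_del(&pte->list_pte);
}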
-
Submitted by Xiao Guangrong
Clean up this function; we already have the direct sp's access at this point. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Xiao Guangrong
If the mapping is writable but the dirty flag is not set, we will find the read-only direct sp and set up the mapping; then, if a write #PF occurs, we will mark this mapping writable in the read-only direct sp, and from then on other genuinely read-only mappings will happily write through it without a #PF. This can break the guest's COW. Fixed by re-installing the mapping when the write #PF occurs. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Xiao Guangrong
In non-direct mapping, we mark an sp as 'direct' when we map the guest's large page, but its access is encoded entirely from the upper page-structure entries and does not include the last mapping, which can cause an access conflict. For example, take this mapping:

        [W]
      / PDE1 -> |---|
P[W]            |LPA|
      \ PDE2 -> |---|
        [R]

P has two children, PDE1 and PDE2, and both PDE1 and PDE2 map the same large page (LPA). P's access is W, PDE1's access is W, and PDE2's access is RO (considering only read-write permissions here). When the guest accesses through PDE1, we create a direct sp for LPA; the sp's access comes from P, which is W, so we mark the ptes W in this sp. Then, when the guest accesses through PDE2, we find LPA's shadow page, the same one as PDE1's, and mark the ptes RO. So if the guest then accesses through PDE1, an incorrect #PF occurs. Fixed by encoding the last mapping's access into the direct shadow page. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Xiao Guangrong
When we sync many unsync sps at one time (in mmu_sync_children()), we may map sptes writable; this is dangerous if one unsync sp's mapped gfn is another unsync page's gfn. For example:

SP1.pte[0] = P
SP2.gfn's pfn = P
[SP1.pte[0] = SP2.gfn's pfn]

First, we write-protect SP1 and SP2, but SP1 and SP2 are still unsync sps. Then we sync SP1 first; it detects that SP1.pte[0].gfn has only one unsync sp, namely SP2, so it maps it writable. But we plan to sync SP2 soon, and at that point SP2->unsync is no longer reliable, since we later sync SP2 while SP2->gfn is already writable. So the final result is: SP2 is a synced page but SP2.gfn is writable. This bug will corrupt the guest's page table; fixed by marking the mapping read-only if the mapped gfn has shadow pages. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Sheng Yang
Some guest device drivers may leverage "Non-Snoop" I/O and explicitly issue WBINVD or CLFLUSH on a RAM region. Since migration may occur before the WBINVD or CLFLUSH, we need to maintain data consistency either by:

1. flushing the cache (wbinvd) when the guest is scheduled out, if there is no wbinvd exit, or
2. executing wbinvd on all dirty physical CPUs when the guest's wbinvd exits (sketched below).

Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
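A hedged sketch of alternative 2, assuming a per-vcpu wbinvd_dirty_mask cpumask is maintained as the vcpu is scheduled across physical CPUs (names follow this era's KVM, but the body is illustrative):

static void wbinvd_ipi(void *garbage)
{
        wbinvd();
}

static int handle_wbinvd_exit(struct kvm_vcpu *vcpu)
{
        /* Broadcast the flush to every physical CPU this vcpu has run
         * on since the last flush, then flush the current CPU too. */
        smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
                               wbinvd_ipi, NULL, 1);
        cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
        wbinvd();
        return 1;       /* re-enter the guest */
}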
-
Submitted by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Avi Kivity
No need to reload the mmu in between two different vcpu->requests checks. kvm_mmu_reload() may trigger KVM_REQ_TRIPLE_FAULT, but that will be caught during atomic guest entry later. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Submitted by Chris Lalancette
Older versions of 32-bit Linux have a "Checking 'hlt' instruction" test where they repeatedly call the 'hlt' instruction, and then expect a timer interrupt to kick the CPU out of halt. This happens before any LAPIC or IOAPIC setup happens, which means that all of the APICs are in virtual wire mode at this point. Unfortunately, the current implementation of virtual wire mode is hardcoded to only kick the BSP, so if a crash+kexec occurs on a different vcpu, it will never get kicked. This patch makes pic_unlock() do the equivalent of kvm_irq_delivery_to_apic() for the IOAPIC code. That is, it runs through all of the vcpus looking for one that is in virtual wire mode. In the normal case where LAPICs and IOAPICs are configured, this won't be used at all. In the bootstrap phase of a modern OS, before the LAPICs and IOAPICs are configured, this will have exactly the same behavior as today; VCPU0 is always looked at first, so it will always get out of the loop after the first iteration. This will only go through the loop more than once during a kexec/kdump, in which case it will only do it a few times until the kexec'ed kernel programs the LAPIC and IOAPIC. Signed-off-by: Chris Lalancette <clalance@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
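A hedged sketch of the pic_unlock() loop as described (field and helper names are assumptions drawn from this era's i8259 emulation, not the exact patch):

static void pic_unlock(struct kvm_pic *s)
{
        bool wakeup = s->wakeup_needed;
        struct kvm_vcpu *vcpu;
        int i;

        s->wakeup_needed = false;
        raw_spin_unlock(&s->lock);

        if (wakeup) {
                /* Kick the first vcpu whose LAPIC will accept PIC
                 * interrupts, i.e. one still in virtual wire mode. */
                kvm_for_each_vcpu(i, vcpu, s->kvm) {
                        if (kvm_apic_accept_pic_intr(vcpu)) {
                                kvm_vcpu_kick(vcpu);
                                return;
                        }
                }
        }
}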
-
Submitted by Takuya Yoshikawa
kvm_ia64_sync_dirty_log() is a helper function for kvm_vm_ioctl_get_dirty_log() which copies ia64's arch-specific dirty bitmap to the generic one in the memslot. Doing sanity checks in this function is therefore unnatural. We move these checks out of it and change the prototype appropriately. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Takuya Yoshikawa
kvm_get_dirty_log() calls copy_to_user(), so we need to narrow the dirty_log_lock spin-lock section so that it does not include this call. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
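The shape of the fix, as a sketch (dirty_bitmap_copy and the exact lock name are illustrative): copy_to_user() may sleep, so snapshot and clear the bitmap under the spinlock and do the user copy outside it.

        spin_lock(&kvm->arch.dirty_log_lock);
        memcpy(dirty_bitmap_copy, dirty_bitmap, n);
        memset(dirty_bitmap, 0, n);
        spin_unlock(&kvm->arch.dirty_log_lock);

        if (copy_to_user(log->dirty_bitmap, dirty_bitmap_copy, n))
                return -EFAULT;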
-
Submitted by Alexander Graf
When a guest sets one of its SR entries to invalid, we may still find a corresponding entry in a BAT. So we need to make sure we're not faulting on invalid SR entries, but instead just claim them to be BAT-resolved. This resolves breakage experienced when using libogc-based guests. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Alexander Graf
The Linux kernel already provides a hash function. Let's reuse that instead of reinventing the wheel! Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Alexander Graf
Initially we had to search for pte entries to invalidate them. Since the logic has improved since then, we can just get rid of the search function. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Huang Ying
is_hwpoison_address accesses the page table, so the caller must hold current->mm->mmap_sem in read mode. Fix its usage in kvm's hva_to_pfn accordingly, and add a comment to is_hwpoison_address to remind other users. Reported-by: Avi Kivity <avi@redhat.com> Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
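A sketch of the corrected hva_to_pfn() usage; hwpoison_page as the sentinel page returned for poisoned addresses is an assumption of this example, and the surrounding bookkeeping is abbreviated:

        /* is_hwpoison_address() walks the page table, so mmap_sem must
         * be held for read around the call. */
        down_read(&current->mm->mmap_sem);
        if (is_hwpoison_address(addr)) {
                up_read(&current->mm->mmap_sem);
                get_page(hwpoison_page);
                return page_to_pfn(hwpoison_page);
        }
        up_read(&current->mm->mmap_sem);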
-