- 20 July 2008, 2 commits
-
-
By Anthony Liguori
This patch allows VMAs that contain no backing page to be used for guest memory. This is useful for assigning mmio regions to a guest.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Christian Borntraeger
kvm_dev_ioctl casts the arg value to void __user *, just to recast it again to long. This seems unnecessary. According to objdump, the binary code on x86 is unchanged by this patch.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 26 June 2008, 2 commits
-
-
By Jens Axboe
It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
By Jens Axboe
It's never used, and the comments refer to nonatomic and retry interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 18 May 2008, 1 commit
-
-
By Marcelo Tosatti
There's still a race in kvm_vcpu_block(): a wake_up_interruptible() call can happen before the task state is set to TASK_INTERRUPTIBLE:

    CPU0                                    CPU1
    kvm_vcpu_block
     add_wait_queue
     kvm_cpu_has_interrupt = 0
                                            set interrupt
                                            if (waitqueue_active())
                                                    wake_up_interruptible()
     kvm_cpu_has_pending_timer
     kvm_arch_vcpu_runnable
     signal_pending
     set_current_state(TASK_INTERRUPTIBLE)
     schedule()

This can be fixed by using prepare_to_wait(), which sets the task state before testing for the wait condition.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
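A minimal sketch of the pattern the fix moves to, assuming the helper names quoted in the message above; prepare_to_wait() publishes the task state before each re-test of the wakeup conditions, so a concurrent wakeup can no longer be lost:

    DEFINE_WAIT(wait);

    for (;;) {
            prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
            /* re-check every wakeup condition after the state is set */
            if (kvm_cpu_has_interrupt(vcpu) ||
                kvm_cpu_has_pending_timer(vcpu) ||
                kvm_arch_vcpu_runnable(vcpu) ||
                signal_pending(current))
                    break;
            schedule();
    }
    finish_wait(&vcpu->wq, &wait);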
-
- 04 May 2008, 1 commit
-
-
By Sheng Yang
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 02 May 2008, 1 commit
-
-
By Al Viro
a) None of the callers even looks at the inode or file returned by anon_inode_getfd().
b) Any caller that would try to look at those would be racy, since by the time it returns we might have raced with close() from another thread, and that file would be pining for the fjords.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
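With the inode and file out parameters gone, a caller gets back only the descriptor. A hedged sketch of the resulting call shape, using an illustrative KVM-style call site:

    /* hand out an fd backed by anonymous-inode file ops; on success the
     * file is already installed in the fd table, so another thread's
     * close() may reap it at any moment - never dereference it here */
    int fd = anon_inode_getfd("kvm-vcpu", &kvm_vcpu_fops, vcpu);
    return fd;      /* negative value is an errno */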
-
- 27 April 2008, 14 commits
-
-
By Al Viro
Use kvm's own refcounting instead of playing with ->filp->f_count. That will allow us to get rid of a lot of crap in anon_inode_getfd() and kill a race in kvm_dev_ioctl_create_vm() (the file might have been closed immediately by another thread, so ->filp might point to an already freed struct file when we get around to setting it).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Avi Kivity <avi@qumranet.com>
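A minimal sketch of the refcounting shape this moves to - the VM object carries its own counter instead of borrowing the struct file's f_count; the field and helper names here are illustrative, not necessarily the exact ones in the tree:

    static void kvm_get_kvm(struct kvm *kvm)
    {
            atomic_inc(&kvm->users_count);
    }

    static void kvm_put_kvm(struct kvm *kvm)
    {
            /* last user tears the VM down, independent of any fd */
            if (atomic_dec_and_test(&kvm->users_count))
                    kvm_destroy_vm(kvm);
    }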
-
By Hollis Blanchard
It's a globally exported symbol now.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Marcelo Tosatti
So userspace can save/restore the mpstate during migration.
[avi: export the #define constants describing the value]
[christian: add s390 stubs]
[avi: ditto for ia64]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
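A hedged userspace sketch of the save/restore round trip this enables; KVM_GET_MP_STATE and KVM_SET_MP_STATE act on a per-vcpu fd and exchange a struct kvm_mp_state (error handling trimmed):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    struct kvm_mp_state mp;

    /* source host: capture the vcpu's multiprocessing state */
    ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp);

    /* ...migrate mp.mp_state along with the rest of the vcpu image... */

    /* destination host: restore it before the guest runs again */
    ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp);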
-
By Marcelo Tosatti
Timers that fire between guest hlt and vcpu_block's add_wait_queue() are ignored, possibly resulting in hangs. Also make sure that the atomic_inc and waitqueue_active tests happen in the specified order; otherwise the following race is open:

    CPU0                                    CPU1
                                            if (waitqueue_active(wq))
    add_wait_queue()
    if (!atomic_read(pit_timer->pending))
    schedule()
                                            atomic_inc(pit_timer->pending)

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
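A minimal sketch of the ordering the second half of the fix asks for, on the timer-expiry side; the barrier name is the era-appropriate one, and the surrounding code is assumed rather than quoted from the patch:

    /* publish the pending tick before looking for sleepers, so the
     * vcpu either sees pending != 0 or is already on the waitqueue */
    atomic_inc(&pit_timer->pending);
    smp_mb__after_atomic_inc();
    if (waitqueue_active(&vcpu->wq))
            wake_up_interruptible(&vcpu->wq);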
-
By Feng (Eric) Liu
This interface allows a userspace application to read the trace of kvm-related events through relayfs.
Signed-off-by: Feng (Eric) Liu <eric.e.liu@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Anthony Liguori
This patch introduces a gfn_to_pfn() function and corresponding functions like kvm_release_pfn_dirty(). Using these new functions, we can modify the x86 MMU to no longer assume that it can always get a struct page for any given gfn. We don't want to eliminate gfn_to_page() entirely, because a number of places assume they can do gfn_to_page() and then kmap() the result. When we support IO memory, gfn_to_page() will fail for IO pages, although gfn_to_pfn() will succeed. This does not implement support for avoiding reference counting for reserved RAM or for IO memory. However, it should make those things pretty straightforward. Since we're only introducing new common symbols, I don't think it will break the non-x86 architectures, but I haven't tested those. I've tested Intel, AMD, NPT, and hugetlbfs with Windows and Linux guests.
[avi: fix overflow when shifting left pfns by adding casts]
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
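A hedged sketch of the usage pattern the new helpers allow: a caller resolves a guest frame number straight to a host pfn, uses it, and drops the reference, never touching a struct page:

    /* resolve without assuming a struct page exists behind the gfn -
     * this is what lets mmio-backed memory work where gfn_to_page()
     * plus kmap() would fail */
    pfn_t pfn = gfn_to_pfn(kvm, gfn);

    /* ...install pfn into a shadow pte, possibly writing the page... */

    kvm_release_pfn_dirty(pfn);     /* drop the reference, mark dirty */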
-
By Izik Eidus
The main purpose of adding these functions is the ability to release the spinlock that protects the kvm list while still being able to operate on a specific kvm safely.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Xiantao Zhang
Since kvm_regs is too big to allocate from the kernel stack on ia64, use kzalloc to allocate it.
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
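A minimal sketch of the heap-allocation pattern the commit describes, with the surrounding ioctl plumbing assumed rather than quoted:

    struct kvm_regs *kvm_regs;
    int r = -ENOMEM;

    /* too large for the ia64 kernel stack, so take it from the heap */
    kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL);
    if (!kvm_regs)
            goto out;

    r = kvm_arch_vcpu_ioctl_get_regs(vcpu, kvm_regs);
    /* ...copy to userspace on success... */
    kfree(kvm_regs);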
-
By Marcelo Tosatti
Create large page mappings if the guest PTEs are marked as such and the underlying memory is hugetlbfs-backed. If the large page contains write-protected pages, a large pte is not used. Gives a consistent 2% improvement for data copies on a ram-mounted filesystem, without NPT/EPT. Anthony measures a 4% improvement on 4-way kernbench, with NPT.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Marcelo Tosatti
Mark zapped root pagetables as invalid and ignore such pages during lookup. This is a problem with the cr3-target feature, where a zapped root table fools the faulting code into creating a read-only mapping. The result is a lockup if the instruction can't be emulated.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Andrea Arcangeli
With CONFIG_PREEMPT=n, this is needed to keep the fault-in code from sleeping.
Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
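The message leaves the call site implicit; a hedged sketch of the general kernel pattern it points at - with CONFIG_PREEMPT=n, preempt_disable() does not raise the preempt count, so an explicit pagefault_disable() is what makes a user access fail fast instead of sleeping to fault the page in:

    pagefault_disable();
    /* the fault handler now returns failure rather than faulting
     * the page in (which could sleep) */
    r = __copy_from_user_inatomic(dst, (void __user *)src, len);
    pagefault_enable();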
-
By Avi Kivity
The second page is only needed on archs that support pio.
Noted by Carsten Otte.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Jan Engelhardt
Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 04 March 2008, 1 commit
-
-
By Izik Eidus
This patch replaces the mmap_sem lock for the memory slots with a new kvm private lock. It is needed because until now there were cases where kvm accessed user memory while holding the mmap semaphore.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 09 February 2008, 1 commit
-
-
By Christoph Hellwig
Sometimes simple attributes might need to return an error, e.g. for acquiring a mutex interruptibly. In fact we have that situation in spufs already, which is the original user of the simple attributes. This patch merges the temporarily forked attributes in spufs back into the main ones and allows them to return errors.
[akpm@linux-foundation.org: build fix]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <stefano.brivio@polimi.it>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg KH <greg@kroah.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
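A hedged sketch of what the change permits: a DEFINE_SIMPLE_ATTRIBUTE getter can now report failure, for example when the mutex acquisition is interrupted (the foo names are illustrative, not from spufs):

    static int foo_get(void *data, u64 *val)
    {
            struct foo *foo = data;
            int ret;

            ret = mutex_lock_interruptible(&foo->lock);
            if (ret)
                    return ret;     /* propagated to the read() caller */
            *val = foo->value;
            mutex_unlock(&foo->lock);
            return 0;
    }

    DEFINE_SIMPLE_ATTRIBUTE(foo_fops, foo_get, NULL, "%llu\n");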
-
- 31 January 2008, 5 commits
-
-
By Marcelo Tosatti
Convert the synchronization of the shadow handling to a separate mmu_lock spinlock. Also guard fetch() with mmap_sem in read mode to protect against alias and memslot changes.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Marcelo Tosatti
In preparation for an mmu spinlock, add kvm_read_guest_atomic() and use it in fetch() and prefetch_page().
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
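A hedged sketch of why the atomic variant is needed under a spinlock: the plain guest-read path can fault user memory in and sleep, which is forbidden there, so the atomic form fails instead and lets the caller retry outside the lock. Variable names here are illustrative:

    spin_lock(&kvm->mmu_lock);
    /* may not sleep here: use the non-faulting reader */
    r = kvm_read_guest_atomic(kvm, pte_gpa, &gpte, sizeof(gpte));
    spin_unlock(&kvm->mmu_lock);
    if (r)
            /* page wasn't resident: fall back to the sleeping path */
            r = kvm_read_guest(kvm, pte_gpa, &gpte, sizeof(gpte));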
-
By Marcelo Tosatti
Do not hold the kvm->lock mutex across the entire pagefault code; only acquire it in places where it is necessary, such as mmu hash list, active list, rmap and parent pte handling. Allow concurrent guest walkers by switching walk_addr() to use mmap_sem in read mode. And get rid of the lockless __gfn_to_page.
[avi: move kvm_mmu_pte_write() locking inside the function]
[avi: add locking for real mode]
[avi: fix cmpxchg locking]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
This paves the way for multiple-architecture support. Note that while ioapic.c could potentially be shared with ia64, it is also moved.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 30 January 2008, 12 commits
-
-
By Zhang Xiantao
Move all the architecture-specific fields in kvm_vcpu into a new struct kvm_vcpu_arch.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By npiggin@suse.de
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: kvm-devel@lists.sourceforge.net
Cc: avi@qumranet.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Hollis Blanchard
This abstracts the detail of x86 hlt and INIT modes into a function.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
Other archs don't need it.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
Non-x86 archs don't need this mechanism. Move it to arch code, keeping its interface in common code.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
We don't want the meaning of guest userspace changing under our feet.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Split kvm_arch_vcpu_create() into kvm_arch_vcpu_create() and kvm_arch_vcpu_setup(), enabling preemption notification between the two. This means that we can now do vcpu_load() within kvm_arch_vcpu_setup().
Signed-off-by: Avi Kivity <avi@qumranet.com>
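A hedged sketch of the resulting two-phase call sequence in the common vcpu-creation path, error handling trimmed; the preempt-notifier registration slots in between the two calls, which is what makes vcpu_load() legal in the setup phase:

    vcpu = kvm_arch_vcpu_create(kvm, id);
    if (IS_ERR(vcpu))
            return PTR_ERR(vcpu);

    /* notifiers must exist before anything calls vcpu_load() */
    preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);

    r = kvm_arch_vcpu_setup(vcpu);  /* may now vcpu_load() internally */
    if (r)
            goto vcpu_destroy;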
-
By Zhang Xiantao
Move the !user_alloc case to kvm_arch to avoid unnecessary code logic on non-x86 platforms.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-