- 15 October 2008, 5 commits

Committed by Ben-Ami Yassour

Based on a patch by: Kay, Allen M <allen.m.kay@intel.com>

This patch enables PCI device assignment based on VT-d support. When a device is assigned to the guest, the guest memory is pinned and the mapping is updated in the VT-d IOMMU.

[Amit: Expose KVM_CAP_IOMMU so we can check if an IOMMU is present and also control enable/disable from userspace]

Signed-off-by: Kay, Allen M <allen.m.kay@intel.com>
Signed-off-by: Weidong Han <weidong.han@intel.com>
Signed-off-by: Ben-Ami Yassour <benami@il.ibm.com>
Signed-off-by: Amit Shah <amit.shah@qumranet.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

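From userspace, the new capability can then be probed like any other KVM extension; a minimal sketch:

        /* returns > 0 when VT-d based device assignment is available */
        int have_iommu = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_IOMMU);
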
Committed by Marcelo Tosatti

Offline or uninitialized vcpus can be executed if requested to perform userspace work. Follow Avi's suggestion to handle halted vcpus in the main loop, simplifying kvm_emulate_halt(). Introduce a new vcpu->requests bit to indicate events that promote state from halted to running. Also standardize vcpu wake sites.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

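A minimal sketch of the resulting main-loop shape; the request-bit and mp_state names follow KVM's conventions but are assumptions here, not a quote of the patch:

        /* illustrative only: let the main loop own the halted state */
        if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED) {
                kvm_vcpu_block(vcpu);   /* sleep until something wakes us */
                if (test_and_clear_bit(KVM_REQ_UNHALT, &vcpu->requests))
                        vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
        }
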
Committed by Avi Kivity

This is esoteric and only needed to break COW on MAP_SHARED mappings. Since KVM no longer does these sorts of mappings, breaking COW on them is no longer necessary.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Dave Hansen

Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Ben-Ami Yassour

Userspace may specify memory slots that are backed by mmio pages rather than normal RAM. In some cases it is not enough to identify these mmio pages by pfn_valid(); this patch adds a PageReserved check as well.

Signed-off-by: Ben-Ami Yassour <benami@il.ibm.com>
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

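A sketch of the combined test, assuming a helper of this shape (the helper name is illustrative):

        /* treat a pfn as mmio unless it is valid RAM and not reserved */
        static bool pfn_is_mmio(unsigned long pfn)
        {
                if (pfn_valid(pfn))
                        return PageReserved(pfn_to_page(pfn));
                return true;    /* !pfn_valid() pages are mmio by definition */
        }
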
- 29 July 2008, 2 commits

Committed by Andrea Arcangeli

Synchronize changes to host virtual addresses which are part of a KVM memory slot to the KVM shadow mmu. This allows pte operations like swapping, page migration, and madvise() to transparently work with KVM.

Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

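The mechanism is the mmu-notifier API; a hedged sketch of the hook-up (the handler names are illustrative, the ops fields match the notifier API of that era):

        static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
                .invalidate_page        = kvm_mmu_notifier_invalidate_page,
                .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,
                .invalidate_range_end   = kvm_mmu_notifier_invalidate_range_end,
                .clear_flush_young      = kvm_mmu_notifier_clear_flush_young,
                .release                = kvm_mmu_notifier_release,
        };

        /* registered once per VM against the owning mm */
        kvm->mmu_notifier.ops = &kvm_mmu_notifier_ops;
        mmu_notifier_register(&kvm->mmu_notifier, current->mm);

Each callback would take mmu_lock, zap or age the affected shadow ptes, and flush remote TLBs, so the shadow mmu never outlives the host pte.
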
Committed by Andrea Arcangeli

This allows reading memslots with only the mmu_lock held, for mmu notifiers that run in atomic context and with mmu_lock held.

Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

- 25 July 2008, 1 commit

Committed by Ulrich Drepper

This patch just extends the anon_inode_getfd interface to take an additional parameter with a flag value. The flag value is passed on to get_unused_fd_flags in anticipation of use with the O_CLOEXEC flag. No actual semantic changes here; the changed callers all pass 0 for now.

[akpm@linux-foundation.org: KVM fix]
Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Acked-by: Davide Libenzi <davidel@xmailserver.org>
Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

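After the change, a caller such as KVM's vcpu-fd creation looks roughly like this (a sketch; the exact call site may differ):

        /* the new trailing argument feeds get_unused_fd_flags();
         * everyone passes 0 until O_CLOEXEC support lands */
        fd = anon_inode_getfd("kvm-vcpu", &kvm_vcpu_fops, vcpu, 0);
        if (fd < 0)
                return fd;
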
- 20 July 2008, 9 commits

Committed by Avi Kivity

smp_call_function_mask() now complains when called in a preemptible context; adjust its callers accordingly.

Signed-off-by: Avi Kivity <avi@qumranet.com>

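A typical adjustment, sketched (not the literal diff; the callback name is a placeholder): pin the caller to a cpu so the cross-call runs with preemption disabled:

        int me = get_cpu();     /* disables preemption around the cross-call */
        smp_call_function_mask(cpus, flush_fn, NULL, 1);
        put_cpu();
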
Committed by Marcelo Tosatti

Flush the shadow mmu before removing regions to avoid stale entries.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Carsten Otte

This patch #ifdefs the bitmap array for dirty tracking. We don't have dirty tracking on s390 today, and we'd love to use our storage keys to store the dirty information for migration. Therefore we won't need this array at all, and due to our limited amount of vmalloc space, keeping it would limit the number of guests we can run.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Laurent Vivier

This patch adds all the structures needed to coalesce MMIOs. Until an architecture uses it, it is not compiled. Coalesced MMIO introduces two ioctl()s to define the MMIO zones that can be coalesced:

- KVM_REGISTER_COALESCED_MMIO registers a coalesced MMIO zone. It takes one parameter (struct kvm_coalesced_mmio_zone) which defines a memory area where MMIOs can be coalesced until the next switch to user space. The maximum number of MMIO zones is KVM_COALESCED_MMIO_ZONE_MAX.
- KVM_UNREGISTER_COALESCED_MMIO cancels all registered zones inside the given bounds (the bounds are also given by struct kvm_coalesced_mmio_zone).

The userspace client can check kernel coalesced MMIO availability by asking ioctl(KVM_CHECK_EXTENSION) for the KVM_CAP_COALESCED_MMIO capability. The ioctl() returns 0 if the feature is not supported, or else the page offset where the ring buffer will be stored. The page offset depends on the architecture. After an ioctl(KVM_RUN), the first page of the mapped KVM memory holds a kvm_run structure. The offset given by KVM_CAP_COALESCED_MMIO is an offset to the coalesced MMIO ring, expressed in PAGE_SIZE units relative to the start of the kvm_run structure. The MMIO ring buffer is defined by the structure kvm_coalesced_mmio_ring.

[akio: fix oops during guest shutdown]
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Akio Takebe <takebe_akio@jp.fujitsu.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

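A hedged userspace sketch of discovering the capability, registering a zone, and locating the ring (error handling trimmed; the zone values and the already-mapped kvm_run pointer `run` are assumptions following the description above):

        struct kvm_coalesced_mmio_zone zone = {
                .addr = 0xf0000000,     /* example guest-physical MMIO base */
                .size = 0x1000,
        };
        long psz = sysconf(_SC_PAGESIZE);
        struct kvm_coalesced_mmio_ring *ring;

        /* 0 means unsupported; otherwise the ring's page offset */
        long off = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_COALESCED_MMIO);
        if (off > 0) {
                ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);
                /* the ring sits `off` pages past the kvm_run mapping */
                ring = (void *)((char *)run + off * psz);
        }
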
Committed by Laurent Vivier

Modify the in_range() member of struct kvm_io_device to pass the length and the type of the I/O (write or read). This modification allows kvm_io_device to be used with coalesced MMIO.

Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>

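The resulting callback shape, sketched from the description (the exact argument order is an assumption):

        /* before: int (*in_range)(struct kvm_io_device *dev, gpa_t addr);
         * after, so a device can accept only coalescable accesses: */
        int (*in_range)(struct kvm_io_device *dev, gpa_t addr,
                        int len, int is_write);
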
Committed by Avi Kivity

Obsoleted by the vmx-specific per-cpu list.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Avi Kivity

KVM turns off hardware virtualization extensions during reboot, in order to disassociate the memory used by the virtualization extensions from the processor, and in order to have the system in a consistent state. Unfortunately virtual machines may still be running while this goes on, and once virtualization extensions are turned off, any virtualization instruction will #UD on execution.

Fix by adding an exception handler to virtualization instructions; if we get an exception during reboot, we simply spin waiting for the reset to complete. If it's a true exception, BUG() so we can have our stack trace.

Signed-off-by: Avi Kivity <avi@qumranet.com>

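A heavily simplified sketch of the fixup target shared by the wrapped instructions (function and flag names are assumptions; the real patch wires this up per instruction via the exception table):

        /* a #UD taken during reboot lands here and is absorbed */
        static void handle_fault_on_reboot(void)
        {
                if (kvm_rebooting)
                        for (;;)
                                cpu_relax();    /* wait for the reset */
                BUG();                          /* genuine fault: want the trace */
        }
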
Committed by Anthony Liguori

This patch allows VMAs that contain no backing page to be used for guest memory. This is useful for assigning mmio regions to a guest.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

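A hedged sketch of how a pfn can be recovered from such a VMA (this mirrors the usual VM_PFNMAP arithmetic; whether the patch does exactly this is an assumption):

        struct vm_area_struct *vma;
        unsigned long pfn;

        vma = find_vma(current->mm, addr);
        if (vma && (vma->vm_flags & VM_PFNMAP))
                /* no struct page behind this mapping; derive the pfn
                 * from the mapping offset instead */
                pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
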
Committed by Christian Borntraeger

kvm_dev_ioctl casts the arg value to void __user *, just to recast it again to long. This seems unnecessary. According to objdump, the binary code on x86 is unchanged by this patch.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

- 26 June 2008, 2 commits

Committed by Jens Axboe

It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Committed by Jens Axboe

It's never used, and the comments refer to nonatomic and retry interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

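Taken together, the two cleanups shrink the cross-call signatures to roughly this end state (a sketch; parameter names as the commits describe them):

        /* before */
        int smp_call_function(void (*func)(void *), void *info,
                              int nonatomic, int wait);
        int on_each_cpu(void (*func)(void *), void *info, int retry, int wait);

        /* after */
        int smp_call_function(void (*func)(void *), void *info, int wait);
        int on_each_cpu(void (*func)(void *), void *info, int wait);
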
- 18 May 2008, 1 commit

Committed by Marcelo Tosatti

There's still a race in kvm_vcpu_block(): a wake_up_interruptible() call can happen before the task state is set to TASK_INTERRUPTIBLE:

        CPU0                                    CPU1
        kvm_vcpu_block
          add_wait_queue
          kvm_cpu_has_interrupt = 0
                                                set interrupt
                                                if (waitqueue_active())
                                                        wake_up_interruptible()
          kvm_cpu_has_pending_timer
          kvm_arch_vcpu_runnable
          signal_pending
          set_current_state(TASK_INTERRUPTIBLE)
          schedule()

This can be fixed by using prepare_to_wait(), which sets the task state before testing for the wait condition.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

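A sketch of the fixed wait loop (slightly abbreviated; the condition helpers are the ones named in the trace above):

        DEFINE_WAIT(wait);

        for (;;) {
                /* sets TASK_INTERRUPTIBLE *before* the condition checks,
                 * so a concurrent wake_up cannot be lost */
                prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
                if (kvm_cpu_has_interrupt(vcpu) ||
                    kvm_cpu_has_pending_timer(vcpu) ||
                    kvm_arch_vcpu_runnable(vcpu) ||
                    signal_pending(current))
                        break;
                schedule();
        }
        finish_wait(&vcpu->wq, &wait);
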
- 4 May 2008, 1 commit

Committed by Sheng Yang

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

- 2 May 2008, 1 commit

Committed by Al Viro

a) none of the callers even looks at the inode or file returned by anon_inode_getfd()
b) any caller that would try to look at those would be racy, since by the time it returns we might have raced with close() from another thread, and that file would be pining for fjords.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 27 April 2008, 14 commits

Committed by Al Viro

Use kvm's own refcounting instead of playing with ->filp->f_count. That will allow us to get rid of a lot of crap in anon_inode_getfd() and kill a race in kvm_dev_ioctl_create_vm() (the file might have been closed immediately by another thread, so ->filp might point to an already freed struct file when we get around to setting it).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Hollis Blanchard

It's a globally exported symbol now.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Marcelo Tosatti

So userspace can save/restore the mpstate during migration.

[avi: export the #define constants describing the value]
[christian: add s390 stubs]
[avi: ditto for ia64]

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

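From the userspace side, save/restore then looks roughly like this (a sketch assuming the vcpu ioctl pair this commit describes):

        struct kvm_mp_state mp_state;

        /* on the source: capture run/halt state with the rest of the vcpu */
        ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp_state);

        /* on the destination: restore it before resuming the guest */
        ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp_state);
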
Committed by Marcelo Tosatti

Timers that fire between guest hlt and vcpu_block's add_wait_queue() are ignored, possibly resulting in hangs. Also make sure that the atomic_inc and waitqueue_active tests happen in the specified order; otherwise the following race is open:

        CPU0                                    CPU1
                                                if (waitqueue_active(wq))
        add_wait_queue()
        if (!atomic_read(pit_timer->pending))
                schedule()
                                                atomic_inc(pit_timer->pending)

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

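A sketch of the required ordering on the timer side (the barrier placement is the point; the names follow the trace above, and the exact barrier used in the patch is an assumption):

        atomic_inc(&pit_timer->pending);
        smp_mb();       /* make the increment visible before inspecting the
                         * queue; pairs with the barrier implied by
                         * prepare_to_wait()/set_current_state() on the
                         * sleeping side */
        if (waitqueue_active(&vcpu->wq))
                wake_up_interruptible(&vcpu->wq);
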
Committed by Feng (Eric) Liu

This interface allows a user space application to read the trace of kvm-related events through relayfs.

Signed-off-by: Feng (Eric) Liu <eric.e.liu@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Anthony Liguori

This patch introduces a gfn_to_pfn() function and corresponding functions like kvm_release_pfn_dirty(). Using these new functions, we can modify the x86 MMU to no longer assume that it can always get a struct page for any given gfn.

We don't want to eliminate gfn_to_page() entirely, because a number of places assume they can do gfn_to_page() and then kmap() the results. When we support IO memory, gfn_to_page() will fail for IO pages although gfn_to_pfn() will succeed.

This does not implement support for avoiding reference counting for reserved RAM or for IO memory. However, it should make those things pretty straightforward.

Since we're only introducing new common symbols, I don't think it will break the non-x86 architectures, but I haven't tested those. I've tested Intel, AMD, NPT, and hugetlbfs with Windows and Linux guests.

[avi: fix overflow when shifting left pfns by adding casts]
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

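Typical use of the new pair, sketched (the clean-release variant is an assumption consistent with the dirty one named above):

        pfn_t pfn = gfn_to_pfn(kvm, gfn);  /* works even without a struct page */
        /* ... access the page ... */
        if (written)
                kvm_release_pfn_dirty(pfn);     /* mark dirty + drop reference */
        else
                kvm_release_pfn_clean(pfn);
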
Committed by Izik Eidus

The main purpose of adding these functions is the ability to release the spinlock that protects the kvm list while still being able to operate on a specific kvm in a safe way.

Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

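The pattern this enables, sketched with the kvm_get_kvm()/kvm_put_kvm() names that such refcounting naturally suggests (treat the names as assumptions here):

        spin_lock(&kvm_lock);
        kvm_get_kvm(kvm);       /* pin the vm while we drop the list lock */
        spin_unlock(&kvm_lock);

        /* ... operate on kvm without holding kvm_lock ... */

        kvm_put_kvm(kvm);       /* may free the vm if this was the last ref */
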
Committed by Xiantao Zhang

Since the size of kvm_regs is too big to allocate from the kernel stack on ia64, use kzalloc to allocate it.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

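The shape of the change, sketched:

        struct kvm_regs *kvm_regs;

        kvm_regs = kzalloc(sizeof(struct kvm_regs), GFP_KERNEL);
        if (!kvm_regs)
                return -ENOMEM;         /* instead of a huge on-stack struct */
        /* ... fill in / copy out ... */
        kfree(kvm_regs);
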
Committed by Marcelo Tosatti

Create large page mappings if the guest PTEs are marked as such and the underlying memory is hugetlbfs backed. If the largepage contains write-protected pages, a large pte is not used.

Gives a consistent 2% improvement for data copies on a ram-mounted filesystem, without NPT/EPT. Anthony measures a 4% improvement on 4-way kernbench, with NPT.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

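A sketch of the host-side eligibility check (the helper name is hypothetical; the hugetlbfs test is the mechanism the message describes):

        static int host_backed_by_largepage(struct kvm *kvm, gfn_t gfn)
        {
                struct vm_area_struct *vma;

                vma = find_vma(current->mm, gfn_to_hva(kvm, gfn));
                return vma && is_vm_hugetlb_page(vma);
        }
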
Committed by Marcelo Tosatti

Mark zapped root pagetables as invalid and ignore such pages during lookup. This is a problem with the cr3-target feature, where a zapped root table fools the faulting code into creating a read-only mapping. The result is a lockup if the instruction can't be emulated.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Andrea Arcangeli

With CONFIG_PREEMPT=n, this is needed to keep the fault-in code from sleeping.

Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Avi Kivity

The second page is only needed on archs that support pio. Noted by Carsten Otte.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Avi Kivity

Signed-off-by: Avi Kivity <avi@qumranet.com>

Committed by Jan Engelhardt

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>

- 4 March 2008, 1 commit

Committed by Izik Eidus

This patch replaces the mmap_sem lock for the memory slots with a new kvm private lock. It is needed because until now there were cases where kvm accessed user memory while holding the mmap semaphore.

Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

- 9 February 2008, 1 commit

Committed by Christoph Hellwig

Sometimes simple attributes might need to return an error, e.g. for acquiring a mutex interruptibly. In fact we have that situation in spufs already, which is the original user of the simple attributes. This patch merges the temporarily forked attributes in spufs back into the main ones and allows them to return errors.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <stefano.brivio@polimi.it>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg KH <greg@kroah.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

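After the change, a getter can fail cleanly; a sketch of the resulting shape (the attribute and lock names are illustrative):

        static int foo_attr_get(void *data, u64 *val)
        {
                struct foo *foo = data;

                /* interruptible lock: now we can propagate the error */
                if (mutex_lock_interruptible(&foo->lock))
                        return -ERESTARTSYS;
                *val = foo->value;
                mutex_unlock(&foo->lock);
                return 0;
        }
        DEFINE_SIMPLE_ATTRIBUTE(foo_attr_fops, foo_attr_get, NULL, "%llu\n");
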
- 31 January 2008, 2 commits

Committed by Marcelo Tosatti

Convert the synchronization of the shadow handling to a separate mmu_lock spinlock. Also guard fetch() with mmap_sem held in read mode, to protect against alias and memslot changes.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

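The locking shape this introduces, sketched:

        down_read(&current->mm->mmap_sem);      /* aliases/memslots stay stable */
        spin_lock(&vcpu->kvm->mmu_lock);        /* protects shadow page tables */
        /* ... fetch(): walk and instantiate shadow ptes ... */
        spin_unlock(&vcpu->kvm->mmu_lock);
        up_read(&current->mm->mmap_sem);
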
Committed by Marcelo Tosatti

In preparation for an mmu spinlock, add kvm_read_guest_atomic() and use it in fetch() and prefetch_page().

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>

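A sketch of why the atomic variant matters once the spinlock is held (the signature is assumed to mirror kvm_read_guest()):

        int r;

        spin_lock(&vcpu->kvm->mmu_lock);
        /* may not fault or sleep under the spinlock, so use the
         * atomic variant, which fails instead of blocking */
        r = kvm_read_guest_atomic(vcpu->kvm, gpa, &gpte, sizeof(gpte));
        spin_unlock(&vcpu->kvm->mmu_lock);
        if (r)
                return r;       /* caller retries via the sleeping path */
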