- 18 Oct 2010, 2 commits
Committed by Konrad Rzeszutek Wilk
We used to depend on CONFIG_SWIOTLB, but that is disabled by default, so when compiling we get this compile error:

    arch/x86/xen/pci-swiotlb-xen.c: In function 'pci_xen_swiotlb_detect':
    arch/x86/xen/pci-swiotlb-xen.c:48: error: lvalue required as left operand of assignment

Fix it by actually activating the SWIOTLB library.

Reported-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
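A minimal sketch of why the build breaks, assuming (as the error message suggests) that with CONFIG_SWIOTLB disabled the architecture headers stub the flag out as a constant macro; the exact header contents here are an assumption for illustration:

    /* Hypothetical illustration of the failure mode: when CONFIG_SWIOTLB
     * is off, the flag may be a constant rather than a variable, so
     * assigning to it yields "lvalue required as left operand of
     * assignment". */
    #ifdef CONFIG_SWIOTLB
    extern int swiotlb;     /* a real variable: `swiotlb = 1;` compiles */
    #else
    #define swiotlb 0       /* a constant: `swiotlb = 1;` is a compile error */
    #endif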
Committed by Konrad Rzeszutek Wilk
It used to be done in the Xen startup code, but that is not really appropriate.

[v2: Update Kconfig with PCI requirement]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
- 30 Jul 2010, 1 commit
Committed by Stefano Stabellini
This patch introduces a CONFIG_XEN_PVHVM compile-time option to enable/disable Xen PV-on-HVM support.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
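As a hedged sketch of how such a compile-time option is typically consumed in C code (the guard pattern is standard; the function name is borrowed for illustration, not taken from the patch):

    /* Illustrative guard pattern: the PV-on-HVM setup path compiles
     * away entirely when the option is disabled. */
    #ifdef CONFIG_XEN_PVHVM
    void xen_hvm_guest_init(void);
    #else
    static inline void xen_hvm_guest_init(void) { }
    #endif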
- 27 Jul 2010, 1 commit
Committed by Konrad Rzeszutek Wilk
This patchset: PV guests under Xen run in a non-contiguous memory architecture. When PCI pass-through is utilized, this necessitates an IOMMU for translating bus (DMA) addresses to virtual addresses and vice versa, and also a mechanism for providing contiguous pages for device driver operations (such as DMA).

Specifically, under Xen the Linux idea of pages is an illusion. Linux assumes that pages start at zero and go up to the available memory. To help with that, the Linux Xen MMU provides a lookup mechanism to translate page frame numbers (PFNs) to machine frame numbers (MFNs) and vice versa. The MFNs are the "real" frame numbers. Furthermore, memory is not contiguous: the Xen hypervisor stitches memory for guests together from different pools, which means there is no guarantee that PFN==MFN and PFN+1==MFN+1. Lastly, with Xen 4.0, pages (in debug mode) are allocated in descending order (high to low), meaning the guest might never get any MFNs under the 4GB mark.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Albert Herranz <albert_herranz@yahoo.es>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
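A minimal sketch of the lookup the message describes, assuming a flat translation table; the real Xen p2m machinery is multi-level and handles missing entries, so the names and layout here are illustrative only:

    /* Illustrative P2M lookup: guest page frame number (PFN) to real
     * machine frame number (MFN). Because PFN==MFN and PFN+1==MFN+1 are
     * not guaranteed, DMA setup must consult a table like this. */
    static unsigned long *p2m_table;   /* assumed populated at boot */

    static unsigned long pfn_to_mfn(unsigned long pfn)
    {
            return p2m_table[pfn];
    }

    /* A buffer is DMA-safe only if its machine frames are contiguous;
     * otherwise it needs bouncing/exchanging (the job of swiotlb-xen). */
    static int mfns_are_contiguous(unsigned long pfn, unsigned long n)
    {
            unsigned long i;

            for (i = 1; i < n; i++)
                    if (p2m_table[pfn + i] != p2m_table[pfn] + i)
                            return 0;
            return 1;
    }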
- 23 Jul 2010, 1 commit
Committed by Stefano Stabellini
Add the Xen PCI platform device driver that is responsible for initializing the grant table and xenbus in PV-on-HVM mode. A few changes to xenbus and the grant table are necessary to allow the delayed initialization in HVM mode; the grant table needs a few additional modifications to work in HVM mode.

The Xen PCI platform device raises an irq every time an event has been delivered to us. However, these interrupts are only delivered to vcpu 0. The Xen PCI platform interrupt handler calls xen_hvm_evtchn_do_upcall, which is a little wrapper around __xen_evtchn_do_upcall, the traditional Xen upcall handler, the very same used with traditional PV guests.

When running on HVM the event channel upcall is never called while in progress, because it is a normal Linux irq handler (and we cannot switch the irq chip wholesale to the Xen PV ones, as we are running QEMU and might have passed-in PCI devices); therefore we cannot be sure that evtchn_upcall_pending is 0 when returning. For this reason, if evtchn_upcall_pending is set by Xen, we need to loop again on the event channels set pending, otherwise we might lose some event channel deliveries.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
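The "loop again if evtchn_upcall_pending is set" logic reads roughly like the sketch below; process_pending_event_channels() is a placeholder, not an actual kernel symbol:

    /* Sketch of the HVM upcall loop described above: keep draining event
     * channels until Xen stops re-asserting evtchn_upcall_pending,
     * otherwise deliveries arriving mid-handler would be lost. */
    static void xen_hvm_upcall_sketch(struct vcpu_info *vcpu)
    {
            do {
                    vcpu->evtchn_upcall_pending = 0;
                    process_pending_event_channels();  /* scan + dispatch */
                    barrier();  /* re-read the flag Xen may have re-set */
            } while (vcpu->evtchn_upcall_pending);
    }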
- 07 Mar 2010, 1 commit
Committed by Randy Dunlap
Currently the Xen support drivers are displayed in the main Device Drivers menu of the config tools instead of in their own sub-menu, so move them to their own sub-menu, like the rest of the driver world uses. This keeps the main Device Drivers menu from becoming messy.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 31 Mar 2009, 2 commits
Committed by Jeremy Fitzhardinge
Adds support for Xen info under /sys/hypervisor. Taken from the Novell 2.6.27 backport tree.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
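A hedged sketch of publishing one read-only attribute via the standard kobject/sysfs API; the attribute name, value, and kobject name are invented for this demo, not taken from the patch:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    /* Illustrative value; the real files report hypervisor properties. */
    static ssize_t type_show(struct kobject *kobj,
                             struct kobj_attribute *attr, char *buf)
    {
            return sprintf(buf, "xen\n");
    }
    static struct kobj_attribute type_attr = __ATTR_RO(type);

    static int __init hyper_sysfs_sketch_init(void)
    {
            /* creates /sys/hypervisor-sketch/type (name assumed) */
            struct kobject *kobj =
                    kobject_create_and_add("hypervisor-sketch", NULL);

            if (!kobj)
                    return -ENOMEM;
            return sysfs_create_file(kobj, &type_attr.attr);
    }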
Committed by Ian Campbell
This driver is used by applications which wish to receive notifications from the hypervisor or other guests via Xen's event channel mechanism. In particular, it is used by the xenstore daemon in domain 0.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
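From userspace the device is consumed roughly as below; this is a minimal sketch that assumes a port has already been bound through the driver's ioctl interface (the bind ioctls are omitted here):

    /* Minimal consumer sketch: each successful read() yields the numbers
     * of ports on which events fired. Binding via ioctl is omitted. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            uint32_t port;
            int fd = open("/dev/xen/evtchn", O_RDWR);

            if (fd < 0) {
                    perror("open /dev/xen/evtchn");
                    return 1;
            }
            while (read(fd, &port, sizeof(port)) == (ssize_t)sizeof(port))
                    printf("event on port %u\n", port);
            close(fd);
            return 0;
    }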
- 30 Mar 2009, 1 commit
Committed by Matt LaPlante
Signed-off-by: Matt LaPlante <kernel1@cyberdogtech.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
- 09 Jan 2009, 1 commit
Committed by Alex Zeffertt
The xenfs filesystem exports various interfaces to usermode. Initially this exports a file to allow usermode to interact with xenbus/xenstore, which traditionally appeared in /proc/xen. Rather than extending procfs, this patch adds a backward-compat mountpoint on /proc/xen and provides a xenfs filesystem which can be mounted there.

Signed-off-by: Alex Zeffertt <alex.zeffertt@eu.citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 25 Apr 2008, 1 commit
Committed by Jeremy Fitzhardinge
The balloon driver allows memory to be dynamically added to or removed from the domain, in order to allow host memory to be balanced between multiple domains. This patch introduces the Xen balloon driver, though it currently only allows a domain to be shrunk from its initial size (and re-grown back to that size). A later patch will add the ability to grow a domain beyond its initial size.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
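Conceptually, shrinking works by allocating pages inside the guest and handing their frames back to the hypervisor. A hedged sketch using the XENMEM_decrease_reservation memory op; error handling, batching, and the driver's bookkeeping for later re-growth are omitted, and the GFP flags are assumed:

    /* Sketch: give one page back to Xen. The real driver batches frames
     * and records ballooned pages so the domain can grow back later. */
    #include <linux/gfp.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    static int balloon_out_one_page_sketch(void)
    {
            unsigned long frame;
            struct page *page = alloc_page(GFP_KERNEL);  /* flags assumed */
            struct xen_memory_reservation reservation = {
                    .nr_extents   = 1,
                    .extent_order = 0,
                    .domid        = DOMID_SELF,
            };

            if (!page)
                    return -ENOMEM;
            frame = page_to_pfn(page);  /* PV would translate PFN->MFN here */
            set_xen_guest_handle(reservation.extent_start, &frame);
            return HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                        &reservation);
    }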