- 14 March 2019, 1 commit
Authored by Jim Fehlig
All Xen domains have a xenbus device. Implicitly add one if not already explicitly specified in the domain config.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
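As a rough illustration, the implicitly added device surfaces in the domain XML as a xenbus controller; a minimal sketch, assuming libvirt's documented controller syntax (the maxGrantFrames value is a hypothetical tuning example, not part of the implicit default):

```xml
<!-- Implicit xenbus controller added by the libxl driver when the
     domain config does not specify one. maxGrantFrames is optional;
     the value shown here is illustrative only. -->
<controller type='xenbus' maxGrantFrames='64'/>
```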
- 12 April 2018, 1 commit
Authored by Jim Fehlig
All Xen PV domains, and HVM domains with PV driver support, have a memory balloon device that cannot be disabled through the toolstack. Model the device in the libxl driver, similar to the recently removed xend-based driver.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
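In domain XML terms this means the balloon device is always present in such guests; a minimal sketch of the element the driver would model, assuming libvirt's memballoon syntax:

```xml
<!-- Memory balloon as modeled for Xen guests; model='xen' is the
     value libvirt uses for the Xen PV balloon device. -->
<devices>
  <memballoon model='xen'/>
</devices>
```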
- 15 June 2016, 1 commit
Authored by Chunyan Liu
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
- 27 January 2016, 1 commit
Authored by Pavel Hrdina
The current code was a little odd. At first we removed all possible implicit input devices from the domain definition, only to add them back later if any graphics device was defined while parsing the XML description. That's not all: while formatting the domain definition to an XML description, we first ignore any input device whose bus is neither USB nor VIRTIO, and a few lines later we add the implicit input devices back into the XML. That is a lot of code for nothing.

This patch may look more complicated than the original approach, but this is the preferred way: modify/add driver-specific behavior only in those drivers, rather than dealing with it in the common parsing/formatting functions.

The update is to add the implicit input devices into the config XML so that it follows the real hardware configuration visible to the guest OS. There was also an inconsistency between our behavior and QEMU's: in QEMU there is no way to disable the implicit input devices on the x86 architecture, so they are always available, even without a graphics device. The same applies to the Xen hypervisor. The VZ driver already does its part by putting the correct implicit devices into the live XML.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
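Concretely, the implicit devices in question are the PS/2 mouse and keyboard that x86 guests always see; a sketch of what the config XML would carry after this change, assuming the standard input element syntax:

```xml
<!-- Implicit input devices now written to the config XML for x86
     guests, matching the hardware the guest OS actually sees,
     regardless of whether a graphics device is defined. -->
<devices>
  <input type='mouse' bus='ps2'/>
  <input type='keyboard' bus='ps2'/>
</devices>
```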
- 19 December 2015, 1 commit
Authored by Jim Fehlig
From: Ian Campbell <ian.campbell@citrix.com>

xend prior to 4.0 understands vcpus as maxvcpus and vcpu_avail as a bit map of which cpus are online (default is all). xend from 4.0 onwards understands maxvcpus as maxvcpus and vcpus as the number which are online (from 0..N-1).

The upstream commit (68a94cf528e6 "xm: Add maxvcpus support") claims that if maxvcpus is omitted then the old behaviour (i.e. obeying vcpu_avail) is retained, but AFAICT it was not; in this case vcpus == maxvcpus == online cpus. This is good for us, because handling anything else would be fiddly.

This patch changes parsing of the virDomainDef maxvcpus and vcpus entries to use the corresponding 'maxvcpus' and 'vcpus' settings from xm and xl config. It also drops use of the old Xen 3.x 'vcpu_avail' setting.

The change also removes the maxvcpus limit of MAX_VIRT_CPUS (since maxvcpus is simply a count, not a bit mask), which is particularly crucial on ARM where MAX_VIRT_CPUS == 1 (since all guests are expected to support vcpu placement, and therefore only the boot vcpu's info lives in the shared info page).

Existing tests are adjusted accordingly, and new tests added for the 'maxvcpus' setting.
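A sketch of how the two settings map onto the domain definition, assuming current xl config syntax and libvirt's vcpu element (the specific counts are illustrative):

```xml
<!-- xl config (for reference):
       maxvcpus = 4   # total vcpus the guest can ever have
       vcpus    = 2   # vcpus online at boot
     resulting virDomainDef, expressed as domain XML: -->
<vcpu current='2'>4</vcpu>
```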