- 10 July 2015, 12 commits
-
-
Committed by Jiri Denemark
Libvirt's error messages do not end with a LF. However, when reading the error from the QEMU log, we would read the LF from the log and keep it in the message. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
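A minimal sketch of the idea, stripping a trailing newline from a message read out of the QEMU log before it is reported (the function name is illustrative, not libvirt's actual helper):

    #include <string.h>

    /* Drop any trailing newline (and carriage return) from a message that was
     * read verbatim from the QEMU log file. */
    static void
    trim_trailing_newline(char *msg)
    {
        size_t len = strlen(msg);

        while (len > 0 && (msg[len - 1] == '\n' || msg[len - 1] == '\r'))
            msg[--len] = '\0';
    }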
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Since we already support the MIGRATION event, we just need to make sure the domain condition is signalled whenever a p2p connection drops or the domain is paused due to an I/O error, and we can avoid waking up every 50 ms to check whether something happened. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
We don't need to call query-migrate every 50 ms when we get the current migration state via the MIGRATION event. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
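Conceptually, the change swaps a fixed 50 ms poll of query-migrate for a wait on a condition that the event handler signals. A simplified pthread sketch of that pattern (names are illustrative, not the qemu driver's code):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool migration_state_changed;

    /* Called from the MIGRATION event handler. */
    static void
    on_migration_event(void)
    {
        pthread_mutex_lock(&lock);
        migration_state_changed = true;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }

    /* Migration loop: sleep until an event arrives instead of issuing
     * query-migrate every 50 ms. */
    static void
    wait_for_migration_change(void)
    {
        pthread_mutex_lock(&lock);
        while (!migration_state_changed)
            pthread_cond_wait(&cond, &lock);
        migration_state_changed = false;
        pthread_mutex_unlock(&lock);
    }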
-
Committed by Jiri Denemark
When QEMU supports migration events, the qemuDomainJobInfo structure will no longer be updated with migration statistics. We have to enter a job and explicitly ask QEMU every time virDomainGetJob{Info,Stats} is called. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Even if QEMU supports migration events, it doesn't send them by default. We have to enable them by calling migrate-set-capabilities. Let's enable migration events every time we can and clear QEMU_CAPS_MIGRATION_EVENT in case migrate-set-capabilities does not support events. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Thanks to Juan's work, QEMU finally emits an event whenever the migration state changes. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Guido Günther
This fixes:

      CC       qemu/libvirt_driver_qemu_impl_la-qemu_conf.lo
    qemu/qemu_conf.c: In function 'qemuRemoveSharedDevice':
    qemu/qemu_conf.c:1384:9: error: 'ret' may be used uninitialized in this function [-Werror=maybe-uninitialized]
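The usual fix for this class of warning is to initialize ret at declaration so every early 'goto cleanup' path returns a defined value; a generic sketch of the pattern, not the actual qemu_conf.c code:

    static int
    remove_shared_device_example(void *dev)
    {
        int ret = -1;      /* defined on every path, including early exits */

        if (!dev)
            goto cleanup;  /* without the initializer, ret would be read uninitialized */

        /* ... actual work ... */
        ret = 0;

     cleanup:
        return ret;
    }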
-
Committed by Pavel Hrdina
Some guests lock the tray, and the QEMU eject command will simply fail to eject the media. But the guest OS can handle this attempt to eject the media and can unlock the tray and open it. In this case, we should try again to actually eject the media. If the first attempt fails and we don't detect a tray_open event, we fail with the error from the monitor. If we do receive that event, we know that the guest properly reacted to the eject request, unlocked the tray, and opened it. In this case, we need to run the command again to actually eject the media from the device. The reason to call it again is that QEMU doesn't wait for the guest to react and reports an error that the tray is locked. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1147471 Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
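A rough sketch of that retry logic, with placeholder stubs standing in for the monitor eject command and the tray-change event flag (none of these names are the real driver functions):

    #include <stdbool.h>

    /* Placeholder stubs: in the driver these would be the monitor eject call
     * and a flag set by the tray-change event handler. */
    static int monitor_eject_media(const char *dev) { (void)dev; return -2; }
    static bool tray_opened_event_received(const char *dev) { (void)dev; return true; }

    static int
    eject_with_retry(const char *dev)
    {
        int rc = monitor_eject_media(dev);

        /* QEMU does not wait for the guest: if the guest unlocked and opened
         * the tray in response to the first request, that first call still
         * fails with "is locked", so issue the eject once more. */
        if (rc == -2 && tray_opened_event_received(dev))
            rc = monitor_eject_media(dev);

        return rc;
    }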
-
Committed by Pavel Hrdina
Modify the eject monitor functions to parse the return code and detect whether the error contains "is locked", in order to report this type of failure to upper layers. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Pavel Hrdina
There are multiple consumers for the domain condition and we should always wake them all. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Pavel Hrdina
We should distinguish between success and timeout, so the caller can handle those two outcomes differently. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
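A generic sketch of a timed wait that reports success, timeout and error as distinct values, using plain pthreads rather than libvirt's condition helpers (the caller is assumed to hold 'lock'):

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    /* Returns 0 when 'done' became true, 1 on timeout, -1 on error. */
    static int
    wait_with_timeout(pthread_mutex_t *lock, pthread_cond_t *cond,
                      const int *done, unsigned long long timeout_ms)
    {
        struct timespec ts;

        if (clock_gettime(CLOCK_REALTIME, &ts) < 0)
            return -1;
        ts.tv_sec += timeout_ms / 1000;
        ts.tv_nsec += (timeout_ms % 1000) * 1000000;
        if (ts.tv_nsec >= 1000000000) {
            ts.tv_sec++;
            ts.tv_nsec -= 1000000000;
        }

        while (!*done) {
            int rc = pthread_cond_timedwait(cond, lock, &ts);
            if (rc == ETIMEDOUT)
                return 1;
            if (rc != 0)
                return -1;
        }
        return 0;
    }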
-
- 09 July 2015, 12 commits
-
-
Committed by Luyao Huang
Before:

    # virsh blockjob r7 vdc
    error: An error occurred, but the cause is unknown

After:

    # virsh blockjob r7 vdc
    error: Disk 'vdc' not found in the domain

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1241355
Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1142631 Commit id 'e0e29055' added a check to determine if the same bus had the same target value. It seems that's not quite good enough, as the check should compare the target name value regardless of bus type. Also added a DO_TEST_DIFFERENT to exhibit the issue.
-
Committed by Erik Skultety
This patch reverts commit 4749d82a, which tried to tweak the logic in volume creation. We did realloc and update our object list before we executed volume building within a specific storage backend. If that failed, we had to update (again) our object list to the original state as it was before the build and delete the volume from the pool (even though it didn't exist - this truly depends on the backend). I had misunderstood the base idea, which was to be able to poll the status of the volume creation using vol-info. After commit 4749d82a this wasn't possible anymore, although no BZ has been reported yet. Commit 4749d82a also claimed to fix https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the same series as 4749d82ad (which was more of a refactor than a fix) fixes the same issue, so the revert should be pretty straightforward. Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be fixed with this revert.
-
Committed by John Ferlan
Setting of 'val' is a boolean expression, so handle it that way and adjust the check/return logic to be clearer. Signed-off-by: John Ferlan <jferlan@redhat.com>
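The change is essentially this pattern, shown here with made-up names rather than the real code:

    #include <stdbool.h>
    #include <string.h>

    static bool
    parse_flag_example(const char *value)
    {
        /* Rather than an if/else whose only job is to assign true or false,
         * let the comparison itself be the boolean result. */
        return strcmp(value, "unfiltered") == 0;
    }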
-
Committed by John Ferlan
Set ret = -1 and prove otherwise, as usual. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Since a future patch will need the device path generated when adding a shared host device, remove the qemuAddSharedHostdev and inline the two calls into qemuAddSharedHostdev and qemuRemoveSharedHostdev. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Split out the current function in order to share the code with hostdev in a future patch. Failure to match the expected sgio value against what is stored will cause an error, which the caller would need to handle since only the caller has the disk (or eventually hostdev) specific data in order to uniquely identify the disk in an error message. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by Jim Fehlig
Set the state of virDomainObj in the functions that actually change the domain state, instead of the generic libxlDomainCleanup function. This approach gives functions calling libxlDomainCleanup more flexibility wrt when and how they change virDomainObj state via virDomainObjSetState. The prior approach of calling virDomainObjSetState in libxlDomainCleanup resulted in the following incorrect coding pattern in the various functions that change domain state:

    libxlDomain<DoStateTransition>
        call libxl function to do state transition
        emit lifecycle event
        libxlDomainCleanup
            virDomainObjSetState

One simple manifestation of this bug is seeing a domain running in virt-manager after selecting the shutdown button, even after the domain has long since shut down.
-
Committed by Jim Fehlig
In Xen, dom0 is really just another domain that supports ballooning, adding/removing devices, changing vcpu configuration, etc. This patch adds support to the libxl driver for managing dom0. Note that the legacy xend driver has long supported managing dom0. Operations that are not supported on dom0 are filtered in libvirt where a sensible error is reported. Errors from libxl are not always helpful. E.g., attempting a save on dom0 results in

    2015-06-23 15:25:05 MDT libxl: debug: libxl_dom.c:1570:libxl__toolstack_save: domain=0 toolstack data size=8
    2015-06-23 15:25:05 MDT libxl: debug: libxl.c:979:do_libxl_domain_suspend: ao 0x7f7e68000b70: inprogress: poller=0x7f7e68000930, flags=i
    2015-06-23 15:25:05 MDT libxl-save-helper: debug: starting save: Success
    2015-06-23 15:25:05 MDT xc: detail: xc_domain_save_suse: starting save of domid 0
    2015-06-23 15:25:05 MDT xc: error: Couldn't map live_shinfo (3 = No such process): Internal error
    2015-06-23 15:25:05 MDT xc: detail: Save exit of domid 0 with errno=3
    2015-06-23 15:25:05 MDT libxl-save-helper: debug: complete r=1: No such process
    2015-06-23 15:25:05 MDT libxl: error: libxl_dom.c:1876:libxl__xc_domain_save_done: saving domain: domain did not respond to suspend request: No such process
    2015-06-23 15:25:05 MDT libxl: error: libxl_dom.c:2033:remus_teardown_done: Remus: failed to teardown device for guest with domid 0, rc -8

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
-
Committed by John Ferlan
Introduce a convenience function to handle formulating the hostdev path. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Add a single boolean function to handle whether the hostdev is shared or not. Use the new function for the qemu{Add|Remove}SharedHostdev calls as well as qemuSetUnprivSGIO. NB: This third usage fixes a possible bug where, if this feature is enabled at some time in the future and the shareable flag wasn't set, the sgio would have been erroneously set. Signed-off-by: John Ferlan <jferlan@redhat.com>
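A sketch of the kind of predicate described, using a simplified stand-in structure rather than libvirt's real virDomainHostdevDef:

    #include <stdbool.h>

    /* Simplified stand-in for the hostdev definition. */
    struct example_hostdev {
        bool is_scsi;     /* SCSI-subsystem host device */
        bool shareable;   /* sharing was requested in the XML */
    };

    /* One predicate shared by the add/remove shared-device paths and the
     * unpriv_sgio setup, so the "is this actually shared?" decision lives in
     * a single place. */
    static bool
    example_hostdev_is_shared(const struct example_hostdev *hostdev)
    {
        return hostdev->is_scsi && hostdev->shareable;
    }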
-
Committed by John Ferlan
Update the descriptions for disk and hostdev sgio in order to indicate that not all hypervisors and OSes support this feature. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 08 July 2015, 16 commits
-
-
Committed by Luyao Huang
If a user passes an invalid address for a shared memory device to qemu, neither libvirt nor qemu will report an error, but qemu will auto-assign a PCI address to the shared memory device. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Luyao Huang
As the backend of the shmem server is a unix-type chr device, save it in virDomainChrSourceDef, so we can reuse the existing code for chr devices. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Luyao Huang
Rename qemuBuildShmemDevCmd to qemuBuildShmemDevStr and change the return type so that it can be reused in the device hotplug code later. Also split the chardev creation part into a new function, qemuBuildShmemBackendStr, for reuse in the device hotplug code later. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Luyao Huang
Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Luyao Huang
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165029
Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Maxim Nestratov
It is better not to assume that a newly created network should be connected to a bridge with the same name, but to specify it explicitly via the PRL_USE_VNET_NAME_FOR_BRIDGE_NAME flag. Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
-
Committed by Ján Tomko
Since QEMU commit ea96bc6 [1] ("i386: drop FDC in pc-q35-2.4+ if neither it nor floppy drives are wanted"), the floppy controller is no longer implicit. Specify it explicitly on the command line if the machine type version is 2.4 or later. Note that libvirt's floppy drives do not result in QEMU implying the controller, because libvirt uses if=none instead of if=floppy.

https://bugzilla.redhat.com/show_bug.cgi?id=1227880

[1] http://git.qemu.org/?p=qemu.git;a=commitdiff;h=ea96bc6
-
Committed by Ján Tomko
For the implicit controller, we set them via -global. Separating them will allow reuse for an explicit fdc controller as well. No functional impact apart from one extra allocation.
-
Committed by Pavel Fedin
This allows libvirt to be built under MinGW. Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
-
Committed by Pavel Fedin
The explicit 'enum' keyword does not work with portablexdr-rpcgen, causing its parser to fail. The fix method is borrowed from virnetprotocol.x. Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
-
Committed by Serge Hallyn
Commit 03d7462d added it for channels, but it is also needed for serials. Add it for serials, parallels, and consoles as well. This solves https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1015154
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
-
Committed by Erik Skultety
Commit 2a31c5f0 introduced support for storage pool state XMLs, however it also introduced a regression:

    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }

The idea behind this was that since we've got state XMLs and the pool wasn't marked as active by the autostart routine (if the autostart flag had been set earlier), the pool is inactive and we can leave it be and continue with other pools. However, filesystem type pools like fs, dir, and possibly netfs are supposed to be active if the filesystem is mounted on the host. And this is exactly where the regression occurs, e.g. a pool of type 'dir' which has been previously destroyed and marked as !autostart gets filtered out by the condition above. The resolution should be simply to remove the condition completely; all pools will get their 'active' flag updated by the check callback, and if they do not support such a callback, the logic doesn't change and such pools will be inactive by default (e.g. RBD, even if a state XML exists). Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
-
Committed by Peter Krempa
Optimize the virBitmap to array-of-char bitmap conversion by skipping trailing zero bytes. This also fixes a regression when requesting iothread information from a live VM, since after commit 825df8c3 the bitmap returned from virProcessGetAffinity is too big to be formatted properly via RPC. A user would get the following error:

    error: Unable to get domain IOThreads information
    error: Unable to encode message payload

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238589
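The optimization amounts to not serializing trailing all-zero bytes; a generic sketch of that step, not the virBitmap code itself:

    #include <stddef.h>

    /* Given a bitmap already flattened into 'bytes' of length 'len', return
     * the number of bytes worth transmitting: trailing zero bytes are dropped
     * so a huge but mostly-empty CPU map stays small on the wire. */
    static size_t
    trimmed_bitmap_length(const unsigned char *bytes, size_t len)
    {
        while (len > 0 && bytes[len - 1] == 0)
            len--;
        return len;
    }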
-
Committed by Peter Krempa
-
Committed by Peter Krempa
'maxcpu' and 'vcpus' are set but not used after that.
-
Committed by Luyao Huang
When we use the setvcpus command with the --guest option on an offline VM, we get this error:

    # virsh setvcpus test3 1 --guest
    error: Guest agent is not responding: QEMU guest agent is not connected

However, the guest is not running, so the agent cannot possibly be connected. In this case, reporting that the domain is not running is better than reporting that the agent is not connected. Move the guest status check earlier so the error points out that the guest state is the problem. Logically, a running VM is a basic requirement for using the agent; we cannot use the agent if the VM is not running. Signed-off-by: Luyao Huang <lhuang@redhat.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
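The reordering boils down to checking the domain state before the agent state, roughly like this (placeholder stubs, not the actual qemu driver code):

    #include <stddef.h>
    #include <stdbool.h>

    /* Placeholder stubs for the real checks. */
    static bool domain_is_running(void) { return false; }
    static bool agent_is_connected(void) { return false; }

    static const char *
    pick_setvcpus_guest_error(void)
    {
        /* A running VM is a prerequisite for the guest agent, so report the
         * domain state first; only then complain about the agent. */
        if (!domain_is_running())
            return "domain is not running";
        if (!agent_is_connected())
            return "QEMU guest agent is not connected";
        return NULL;  /* no error */
    }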
-