- 14 September 2015, 1 commit
By Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 09 September 2015, 1 commit
By Martin Kletzander
Commit f1f68ca3 did not report an error if virFileMakePath() returned -1. Who would have guessed that a function whose name starts with 'vir' sets errno instead of reporting an error the libvirt way. Anyway, let's fix it, so the output changes from:

  $ virsh start arm
  error: Failed to start domain arm
  error: An error occurred, but the cause is unknown

to:

  $ virsh start arm
  error: Failed to start domain arm
  error: Cannot create directory '/var/lib/libvirt/qemu/domain-arm': Not a directory

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
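The fix is roughly of the shape sketched below; a minimal sketch assuming libvirt's internal virFileMakePath() and virReportSystemError() helpers and the _() localization macro, with a made-up function name:

    /* Sketch only: needs libvirt's internal headers (virfile.h, virerror.h)
     * and <errno.h>.  virFileMakePath() only sets errno on failure, so turn
     * that into a proper libvirt error the caller can see. */
    static int
    exampleMakeDomainDir(const char *path)
    {
        if (virFileMakePath(path) < 0) {
            virReportSystemError(errno,
                                 _("Cannot create directory '%s'"), path);
            return -1;
        }
        return 0;
    }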
-
- 26 August 2015, 1 commit
By Martin Kletzander
Commit f1f68ca3 overused mdir_name() even though it was not needed in the latest version, hence labelling the directory one level up in the tree instead of the one it should have. If anyone with SELinux managed to run a domain with a guest agent set up, it is highly possible that they will need to run 'restorecon -F /var/lib/libvirt/qemu/channel/target' to fix what was done.

Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 24 August 2015, 1 commit
By Martin Kletzander
We are automatically generating some socket paths for domains, but all those paths end up in a directory that is shared by multiple domains. The problem is that each domain can run with different seclabels (users, SELinux contexts, etc.). The idea here is to create a per-domain directory labelled in a way that lets each domain access its own UNIX sockets.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
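For illustration, a standalone sketch of deriving such a per-domain directory; the /var/lib/libvirt/qemu/domain-<name> layout matches the 'domain-arm' path quoted in the 09 September entry above, while the helper itself is hypothetical:

    /* Standalone sketch: compute the per-domain socket directory.  Real
     * libvirt code uses its internal helpers and additionally creates the
     * directory and applies the domain's seclabel to it. */
    #include <stdio.h>

    static int
    example_domain_socket_dir(char *buf, size_t buflen, const char *name)
    {
        /* e.g. /var/lib/libvirt/qemu/domain-arm */
        int n = snprintf(buf, buflen, "/var/lib/libvirt/qemu/domain-%s", name);
        return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
    }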
-
- 03 August 2015, 1 commit
By Martin Kletzander
The virDomainObjListRemove() function unlocks the domain it is given, due to legacy code. And because of that code, which should be refactored, that last virObjectUnlock() cannot simply be removed. So instead, lock it right back for qemu for now. All calls to qemuDomainRemoveInactive() are followed by code that unlocks the domain again, plus the domain should be locked during qemuDomainObjEndJob(), so the right place to lock it is right after virDomainObjListRemove(). The only place where this would cause a problem is the autodestroy callback, so we need to get another reference there and unref+unlock it afterwards. Luckily, returning NULL from that function does not mean an error; it only means that the domain does not need to be unlocked anymore.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
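A rough sketch of the locking pattern described above, assuming virDomainObjListRemove() and virObjectLock() behave as the message states; the surrounding function body is reduced to the two relevant calls:

    /* Pattern only: virDomainObjListRemove() returns with the domain
     * unlocked, so qemu locks it right back and the existing unlock
     * calls in the callers stay valid. */
    static void
    exampleRemoveInactive(virQEMUDriverPtr driver, virDomainObjPtr vm)
    {
        /* ... remove saved state, config, logs ... */
        virDomainObjListRemove(driver->domains, vm);   /* unlocks vm */
        virObjectLock(vm);                             /* lock it right back */
    }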
-
- 31 July 2015, 2 commits
By Jiri Denemark
When stopping a domain on the destination host after a failed migration, we need to avoid resetting security labels since the domain is still running on the source host. While we were correctly doing so in some cases, there were still some paths which got this wrong.

https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
In addition to checking the current asynchronous job, qemuMigrationJobIsActive reports an error if the current job does not match the one we asked for. Let's just check the job directly, since we are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 14 July 2015, 2 commits
By Peter Krempa
In commit 641a145d I added code that resets the balloon memory value to full size prior to resuming the vCPUs, since the size certainly was not reduced at that point. But qemuProcessStart is also used in code paths with already booted guests (migration, save/restore), so the assumption is not entirely true: the guest might already have been running before. This patch adds a function that queries the monitor instead of assuming the full size, since a balloon event would not be reissued when we are recovering a saved migration state. Additionally, the new function is also used when reconnecting to a VM after a libvirtd restart, since we might have missed a few balloon events while libvirtd was not running.
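A minimal sketch of such a query, assuming qemuMonitorGetBalloonInfo() and the cur_balloon field of the domain definition of that era; the wrapper name is made up and job handling is omitted:

    /* Sketch: ask the monitor for the actual balloon size instead of
     * assuming the configured maximum (monitor job handling omitted). */
    static int
    exampleRefreshBalloon(qemuMonitorPtr mon, virDomainDefPtr def)
    {
        unsigned long long balloon;

        if (qemuMonitorGetBalloonInfo(mon, &balloon) < 0)
            return -1;

        def->mem.cur_balloon = balloon;   /* record what the guest really has */
        return 0;
    }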
-
By John Ferlan
Add the sysfs_prefix argument to the call to allow tests to set the path to something other than SYSFS_SYSTEM_PATH.
-
- 13 July 2015, 1 commit
By Michal Privoznik
After Jirka's migration patches, libvirt listens for migration events from qemu instead of actively polling the monitor. There is, however, a small regression (introduced in 6d2edb6a). The problem is that the current status of the migration job is updated in qemuProcessHandleMigrationStatus if and only if a migration job was started. But eventually every asynchronous job may result in migration. Therefore, since such a job is not strictly a migration job, the internal state was not updated and later checks failed:

  virsh # save fedora22 /tmp/fedora22_ble.save
  error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
  error: operation failed: domain save job: is not active

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 10 July 2015, 7 commits
By Jiri Denemark
If QEMU fails during incoming migration, the domain disappears, including a possibly useful error message read from the QEMU log file. Let's remember the error in virQEMUDriver so that Finish can report more than just "no such domain".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Since we already support the MIGRATION event, we just need to make sure the domain condition is signalled whenever a p2p connection drops or the domain is paused due to an I/O error, and we can avoid waking up every 50 ms to check whether something happened.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
We don't need to call query-migrate every 50 ms when we get the current migration state via the MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
Even if QEMU supports migration events, it doesn't send them by default; we have to enable them by calling migrate-set-capabilities. Let's enable migration events every time we can and clear QEMU_CAPS_MIGRATION_EVENT in case migrate-set-capabilities does not support events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Pavel Hrdina
Some guests lock the tray, so the QEMU eject command will simply fail to eject the media. However, the guest OS can handle the eject attempt by unlocking the tray and opening it, in which case we should try again to actually eject the media. If the first attempt fails and we do not detect a tray_open event, we fail with the error from the monitor. If we do receive that event, we know the guest properly reacted to the eject request, unlocked the tray and opened it; in that case we need to run the command again to actually eject the media from the device. The reason to call it again is that QEMU doesn't wait for the guest to react and reports an error that the tray is locked.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1147471
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
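The retry flow reads roughly like the sketch below; qemuMonitorEjectMedia() is the monitor call named in qemu driver code, while the tray-open wait is a made-up placeholder for libvirt's real event handling (assumes <stdbool.h> and the qemu monitor types):

    /* Placeholder for the real tray-change event handling. */
    static bool
    exampleWaitForTrayOpen(const char *dev)
    {
        (void)dev;
        return true;   /* pretend the guest unlocked and opened the tray */
    }

    /* If the first eject fails but the guest reacted by opening the
     * tray, issue the eject again to actually remove the media. */
    static int
    exampleEjectWithRetry(qemuMonitorPtr mon, const char *dev, bool force)
    {
        if (qemuMonitorEjectMedia(mon, dev, force) == 0)
            return 0;

        if (!exampleWaitForTrayOpen(dev))
            return -1;   /* propagate the monitor error */

        return qemuMonitorEjectMedia(mon, dev, force);
    }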
-
By Pavel Hrdina
There are multiple consumers of the domain condition and we should always wake them all.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
- 19 June 2015, 3 commits
By Jiri Denemark
The QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports the SPICE_MIGRATE_COMPLETED event. Thus we can just drop all code which polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
When libvirtd is restarted during migration, we properly cancel the ongoing migration (unless it managed to almost finish before the restart). But if we were also migrating storage using NBD, we would completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Jiri Denemark
By switching block jobs to use domain conditions, we can drop some pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 05 June 2015, 1 commit
By Ján Tomko
-
- 04 June 2015, 1 commit
By Peter Krempa
After libvirt issues the balloon resize command, the current balloon size needs to be changed to the maximum memory size, since the vCPUs were not started and thus the balloon driver could not have returned the memory. Because GetXMLDesc and other APIs return the balloon size without updating it when they are unable to obtain the job, and the memory balloon does not support the asynchronous event, the reported size might otherwise be incorrect.
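In terms of the domain definition of that time, the adjustment amounts to something like the sketch below; the field names are an assumption here:

    /* Sketch: with the vCPUs not yet running, the balloon driver cannot
     * have returned any memory, so track the configured maximum. */
    static void
    exampleSyncBalloonAfterResize(virDomainDefPtr def)
    {
        def->mem.cur_balloon = def->mem.max_balloon;
    }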
-
- 03 June 2015, 3 commits
By Peter Krempa
Since the monitor code now supports unsigned long longs when setting the balloon size, drop the legacy code with overflow checking. Additionally, the comment mentioning that the job is treated as a sync job does not make sense any more since the monitor is entered asynchronously.
-
By Peter Krempa
Store the emulator pinning CPU mask as a pure virBitmap rather than a virDomainPinDef, since it stores only the bitmap, and refactor qemuDomainPinEmulator to do the same operations in a much saner way. As a side effect, virDomainEmulatorPinAdd and virDomainEmulatorPinDel can be removed since they don't add any value.
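A small sketch of the resulting representation, assuming libvirt's virBitmap helpers (virBitmapNew(), virBitmapSetBit(), virBitmapFree()); the wrapper function is illustrative only:

    /* One bit per host CPU; a set bit means the emulator thread may run
     * on that CPU. */
    static virBitmapPtr
    exampleEmulatorPin(size_t hostcpus)
    {
        virBitmapPtr cpumask = virBitmapNew(hostcpus);

        if (!cpumask)
            return NULL;

        if (virBitmapSetBit(cpumask, 0) < 0) {   /* pin to host CPU 0 only */
            virBitmapFree(cpumask);
            return NULL;
        }
        return cpumask;
    }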
-
By Peter Krempa
When <vcpu ... cpuset=""> is not specified, the vcpupin array is not guaranteed to be allocated with def->vcpus entries. This would cause a crash for TCG since it does not report thread IDs for vCPUs.
-
- 21 May 2015, 2 commits
By Jiri Denemark
Most virDomainDiskIndexByName callers do not care about the index; what they really want is a disk def pointer.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Erik Skultety
When starting a domain, if the domain specifies security drivers we do not have loaded, we fail. However, we don't check for this during reconnect, so any operation relying on security driver functionality would fail. If someone e.g. starts a domain with the selinux driver loaded, then changes the security driver to 'none' in the config, restarts the daemon and calls dump/save/..., QEMU will return an error. As we shouldn't kill the domain, we should at least log an error to let the user know that the domain reconnect wasn't completely clean.

https://bugzilla.redhat.com/show_bug.cgi?id=1183893
-
- 20 May 2015, 1 commit
By Michal Privoznik
So far, we are not reporting whether numatune was even defined. A value of zero is blindly returned (which maps onto VIR_DOMAIN_NUMATUNE_MEM_STRICT). Unfortunately, we are making decisions based on this value. Instead, we should not only return the correct value, but also report to the caller whether the value is valid at all. For better viewing of this patch use '-w'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
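A standalone sketch of the calling convention described above: report whether the value is defined at all and hand the mode back via an out parameter, instead of blindly returning 0. Only the enum value name mirrors libvirt; the helper and its struct are hypothetical:

    /* Sketch: 0 aliases VIR_DOMAIN_NUMATUNE_MEM_STRICT, so "not defined"
     * must be signalled separately from the mode itself. */
    typedef struct { int has_numatune; int mem_mode; } example_numa_def;

    static int
    example_get_numatune_mode(const example_numa_def *numa, int *mode)
    {
        if (!numa->has_numatune)
            return -1;        /* caller now knows no mode was configured */
        *mode = numa->mem_mode;
        return 0;
    }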
-
- 15 May 2015, 1 commit
By Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 29 April 2015, 1 commit
By Michael Chapman
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
- 28 April 2015, 5 commits
By John Ferlan
Replace with just VIR_FREE.
-
By John Ferlan
If we received zero iothreads from the monitor but were perhaps expecting to receive something, the code was skipping the check that ensures what's in the monitor matches our expectations. So invert the checks: first check that what we get back matches expectations, and then check whether zero iothreads were returned.
-
By John Ferlan
Rather than have a separate routine that parses the alias of an iothread returned from qemu in order to get the iothread_id value, parse the alias when returning and just return the iothread_id in qemuMonitorIOThreadInfoPtr. This set of patches removes the function, changes the "char *name" to an "unsigned int", and handles all the fallout.
-
By John Ferlan
Remove the iothreadspin array from cputune and replace it with a cpumask stored in the iothreadids list. Adjust the test output because our printing now goes in the order of the iothreadids list.
-
By John Ferlan
Add 'thread_id' to virDomainIOThreadIDDef as a means to store the 'thread_id' as returned from the live qemu monitor data. Remove the iothreadpids list from _qemuDomainObjPrivate and replace it with the new iothreadids 'thread_id' element.

Rather than use the default numbering scheme of 1..number of iothreads defined for the domain, use the iothreadids list for the iothread_id. Since the iothreadids list keeps track of the iothread_id's, these are now used in place of the many places where a for loop would "know" that the ID was "+ 1" from the array element.

The new tests ensure usage of the <iothreadid> values for an exact number of iothreads and the usage of a smaller number of <iothreadid> values than iothreads that exist (and usage of the default numbering scheme).
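A simplified sketch of the per-iothread record described above; the struct layout is illustrative rather than the exact libvirt definition:

    /* The configured iothread_id comes from the XML, while thread_id
     * stores the live host thread ID reported by the qemu monitor. */
    typedef struct {
        unsigned int iothread_id;   /* ID used in <iothreadids>/<iothreadid> */
        int thread_id;              /* host thread ID of the running iothread */
    } example_iothread_id_def;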
-
- 27 April 2015, 1 commit
By zhang bo
rather then -> rather than
Signed-off-by: YueWenyuan <yuewenyuan@huawei.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
- 26 April 2015, 2 commits
By Peter Krempa
If a user hot-attaches the guest agent channel, libvirt would ignore it until a restart of libvirtd or a shutdown/destroy and start of the VM itself. This patch adds code that opens or closes the guest agent connection according to the state of the guest agent channel, based on connect/disconnect events. To allow opening the channel from the event handler, qemuConnectAgent needed to be exported.
-
By Peter Krempa
When the guest agent channel gets hotplugged to a VM, libvirt would still report that the "QEMU guest agent is not configured" rather than stating that the connection was not established yet. Currently the code won't be able to connect to the agent after hotplug, but that will change in a later patch. As the qemuFindAgentConfig() helper is quite useful in this case, move it to a more usable place and export it.
-
- 24 April 2015, 2 commits
By Cole Robinson
Similar to what was done for the channel socket in the previous commit.
-
By Michal Privoznik
This is basically turning qemuDomObjEndAPI into a more general function. Other drivers which get a reference to domain objects may benefit from this function too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-