- 11 Dec 2014, 1 commit
-
Committed by Francesco Romani

A logic bug in qemuConnectGetAllDomainStats makes the code mark the monitor as available when qemuDomainObjBeginJob fails, instead of when it succeeds, as the correct flow requires. This patch fixes the check and updates the code documentation accordingly. Broken by commit 57023c0a.

Signed-off-by: Francesco Romani <fromani@redhat.com>
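A minimal sketch of the shape of this fix, paraphrased from the description above (qemuDomainObjBeginJob and QEMU_DOMAIN_STATS_HAVE_JOB are the internals named by the commit; the surrounding code is an assumption, not the actual diff):

  /* Before (buggy): the flag was set when the job could NOT be acquired. */
  if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
      domflags |= QEMU_DOMAIN_STATS_HAVE_JOB;

  /* After: mark the monitor as usable only when the job was acquired. */
  if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) == 0)
      domflags |= QEMU_DOMAIN_STATS_HAVE_JOB;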
-
- 10 Dec 2014, 2 commits
-
Committed by Wang Rui

Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
-
Committed by Martin Kletzander

When a user doesn't have read access on one of the domains he requested, the for loop could exit abruptly, or continue and overwrite a pointer that still pointed to a locked object. This patch fixes two issues at once. The first is that domflags might have had QEMU_DOMAIN_STATS_HAVE_JOB set even when no job was started; this is fixed by doing domflags |= QEMU_DOMAIN_STATS_HAVE_JOB only when the job was acquired, and by clearing domflags at the start of every loop iteration. The second is that the domain was kept locked when virConnectGetAllDomainStatsCheckACL() failed and the loop continued; adding a simple virObjectUnlock() and clearing the pointer ought to do.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
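A paraphrased sketch of the corrected loop body (the identifiers are those named in the commit message; the loop scaffolding is assumed for illustration):

  for (i = 0; i < nvms; i++) {
      virDomainObjPtr vm = vms[i];   /* assumed: locked by the lookup */
      domflags = 0;                  /* cleared on every iteration */

      if (!virConnectGetAllDomainStatsCheckACL(conn, vm->def)) {
          virObjectUnlock(vm);       /* the lock used to be leaked here */
          continue;
      }

      if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) == 0)
          domflags |= QEMU_DOMAIN_STATS_HAVE_JOB;  /* only when acquired */

      /* ... collect stats, end the job if held, unlock vm ... */
  }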
-
- 09 Dec 2014, 4 commits
-
Committed by Peter Krempa
Avoid leaving the domain locked on a failed ACL check in qemuDomainMigratePerform() and qemuDomainMigrateFinish2(). Introduced in commit abf75aea (Add ACL checks into the QEMU driver).
-
Committed by Eric Blake

I'm about to make block stats optionally more complex to cover backing chains, where block.count will no longer equal the number of <disks> for a domain. For these reasons, it is nicer if the statistics output includes the source path (for local files). This patch doesn't add anything for network disks, although we may decide to add that later. With this patch, I now see the following for the same domain as in the previous patch (one qcow2 file, and an empty cdrom drive):

$ virsh domstats --block foo
Domain: 'foo'
  block.count=2
  block.0.name=hda
  block.0.path=/var/lib/libvirt/images/foo.qcow2
  block.1.name=hdc

* src/libvirt-domain.c (virConnectGetAllDomainStats): Document new field.
* tools/virsh.pod (domstats): Document new field.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Return the new stat for local files/block devices.
(QEMU_ADD_NAME_PARAM): Add parameter.
(qemuDomainGetStatsInterface): Update caller.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake

I noticed that for an offline domain, 'virsh domstats --block $dom' was producing just the domain name, with no stats. But the older 'virsh domblkinfo' works just fine on offline domains. This patch starts to get us closer, by at least reporting the disk names for an offline domain. With this patch, I now see the following for an offline domain with one qcow2 disk and an empty cdrom drive:

$ virsh domstats --block foo
Domain: 'foo'
  block.count=2
  block.0.name=hda
  block.1.name=hdc

* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Don't short-circuit output of block name.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake

qemuDomainGetStatsBlock() could leak a stats hash table if it encountered OOM while populating the virTypedParameters. Oddly, the fix doesn't even touch qemuDomainGetStatsBlock :)

* src/qemu/qemu_driver.c (QEMU_ADD_COUNT_PARAM) (QEMU_ADD_NAME_PARAM): Don't return early.
(qemuDomainGetStatsInterface): Adjust caller.

Signed-off-by: Eric Blake <eblake@redhat.com>
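The gist of "don't return early", sketched as a hypothetical macro body (the real QEMU_ADD_COUNT_PARAM in src/qemu/qemu_driver.c builds the field name differently; virTypedParamsAddUInt is the public typed-parameter helper and maxparams here stands for the usual int pointer):

  #define QEMU_ADD_COUNT_PARAM(record, maxparams, name, count)            \
      do {                                                                \
          if (virTypedParamsAddUInt(&(record)->params,                    \
                                    &(record)->nparams, (maxparams),      \
                                    (name), (count)) < 0)                 \
              goto cleanup;  /* was: return -1, leaking the stats hash */ \
      } while (0)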
-
- 05 Dec 2014, 1 commit
-
Committed by Shanzhi Yu

When attempting to create an internal system checkpoint with a passthrough device, qemu reports the following error:

error: operation failed: Error -22 while writing VM

This patch calls the function that checks whether migration is possible with the given VM and thus improves the error to:

error: Requested operation is not valid: domain has assigned non-USB host devices

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=874418#c19

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
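A hedged sketch of the idea: internal checkpoints save VM state much like migration does, so the existing migration compatibility check can be reused before starting. The argument list of qemuMigrationIsAllowed below is an assumption for illustration:

  if (!qemuMigrationIsAllowed(driver, vm, vm->def, false, false))
      goto cleanup;  /* reports: domain has assigned non-USB host devices */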
-
- 04 Dec 2014, 2 commits
-
Committed by Erik Skultety

If someone removes the blockcopy storage file while the job is still in the mirroring phase and then requests a blockjob abort using pivot, the virsh command freezes. This is not an issue with older qemu versions which did not support asynchronous jobs (which we prefer by default). As we have reached the mirroring phase successfully, polling the monitor for blockjob info always returns 1 and the loop never ends. This fix introduces a check on the qemuDomainBlockPivot return code, possibly skipping the asynchronous waiting completely if an error occurred and asynchronous waiting was the preferred method.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1139567
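Paraphrased shape of the fix (qemuDomainBlockPivot is named by the commit; its arguments and the labels are assumptions):

  ret = qemuDomainBlockPivot(driver, vm, device, disk);
  if (ret < 0 && async)
      goto endjob;  /* pivot failed: the wait loop would never terminate */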
-
Committed by Peter Krempa

Reconnecting to the VM is a possibly long-running job spawned in a separate thread. We should reload the snapshot definitions and managedsave state prior to spawning the thread, to avoid blocking daemon startup, which would serialize on the VM lock. Also, the reloading code would violate the domain job held while reconnecting, as the loader functions don't create jobs.
-
- 03 Dec 2014, 2 commits
-
Committed by John Ferlan
Since virDomainSnapshotFree will call virObjectUnref anyway, let's just use that directly so as to avoid the possibility that we inadvertently clear out a pending error message when using the public API.
-
Committed by John Ferlan
Since virDomainFree will call virObjectUnref anyway, let's just use that directly so as to avoid the possibility that we inadvertently clear out a pending error message when using the public API.
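Both commits above make the same substitution; a minimal sketch of the pattern (variable names hypothetical):

  /* In driver-internal cleanup paths, drop the reference directly: */
  virObjectUnref(snap);

  /* rather than going through the public API, whose entry point may
   * reset the thread-local error we still want reported to the caller:
   * virDomainSnapshotFree(snap);   (or virDomainFree(dom))  <-- avoided */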
-
- 02 Dec 2014, 1 commit
-
Committed by Eduardo Costa

There is a race condition between the fopen and fscanf calls in qemuGetProcessInfo. If fopen succeeds, there is a small possibility that the file no longer exists before reading from it. Now, if either the fopen or the fscanf call fails, the function behaves just as if only fopen had failed.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1169055

Signed-off-by: Eric Blake <eblake@redhat.com>
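A simplified, self-contained stand-in for the pattern (the real qemuGetProcessInfo reads a /proc stat file; the function name and fields here are illustrative only):

  #include <stdio.h>

  static int
  getProcessTimes(const char *path, unsigned long long *usertime,
                  unsigned long long *systime)
  {
      FILE *f = fopen(path, "r");

      *usertime = *systime = 0;
      if (!f)
          return 0;               /* file gone: report zeroed values */

      /* The process may exit between fopen and fscanf; treat a short
       * read exactly like a failed fopen instead of using garbage. */
      if (fscanf(f, "%llu %llu", usertime, systime) != 2)
          *usertime = *systime = 0;

      fclose(f);
      return 0;
  }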
-
- 01 Dec 2014, 2 commits
-
Committed by Martin Kletzander

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1169183

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1169183

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 28 Nov 2014, 1 commit
-
Committed by Michal Privoznik

https://bugzilla.redhat.com/show_bug.cgi?id=1160084

As of b6d4dad1 (1.2.5) we are trying to keep track of the FSFreeze status in the guest. Even though I've tried to fix a couple of corner cases (6ea54769), it occurred to me just recently that the approach is broken by design. Firstly, there are many other ways to talk to qemu-ga (even through libvirt, e.g. qemu-agent-command) by which filesystems can be thawed without libvirt noticing. Moreover, there are plenty of ways to thaw filesystems without even qemu-ga noticing (yes, qemu-ga keeps its own internal track of the FSFreeze status). So, instead of keeping track ourselves, or asking qemu-ga for possibly stale state, it's best to let qemu-ga deal with it (and possibly let the guest kernel propagate an error). Moreover, there was one more bug with the previous approach: if the fsfreeze command failed, we executed fsthaw subsequently, so issuing domfsfreeze in virsh gave the following result:

virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 24 Nov 2014, 2 commits
-
Committed by Tomoki Sekiyama

Get the mounted filesystems list, which contains hardware info for the disks and their controllers, from QEMU guest agent 2.2+. Then, convert the hardware info to the corresponding device aliases for the disks.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
-
Committed by Peter Krempa

Add code to emit the event on a change of the channel state and on reconnect to the qemu process.
-
- 21 Nov 2014, 2 commits
-
Committed by Peter Krempa
New qemu added a new event that is emitted when a virtio serial channel is opened in the guest OS. This allows us to update the state of the port in the output-only XML element. This patch implements the monitor callbacks and necessary handlers to update the state in the definition.
-
Committed by Peter Krempa

When creating a disk image snapshot, the libvirt code would blindly copy the parent's label to the newly created image. This runs into problems when you start a VM from an image hosted on NFS (or another storage system that doesn't support selinux labels) and the snapshot destination is on a storage system that does support selinux labels. Libvirt's code in that case generates a different security label for the image hosted on NFS. This label is valid only for NFS images and doesn't allow access in the case of a locally stored image. To fix this issue, libvirt needs to refrain from copying security information in cases where the default domain seclabel is a better choice. This patch repurposes the now unused @force argument of virStorageSourceInitChainElement to denote whether a copy of the security labelling information should be attempted or not. This allows fine-grained control of the copy operation for cases where we need to keep the label of the old disk vs. the cases where we need to keep the label unset to use the default domain imagelabel.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1151718
-
- 19 Nov 2014, 1 commit
-
Committed by Eric Blake

I noticed this while working on qemuDomainGetBlockInfo. Assigning a bool value to an int variable compiles fine, but raises red flags on the maintenance front, as it becomes too easy to assign -1 or 2 or any other non-bool value to the same variable.

* cfg.mk (sc_prohibit_int_assign_bool): New rule.
* src/conf/snapshot_conf.c (virDomainSnapshotRedefinePrep): Fix offenders.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo) (qemuDomainSnapshotCreateXML): Likewise.
* src/test/test_driver.c (testDomainSnapshotAlignDisks): Likewise.
* src/util/vircgroup.c (virCgroupSupportsCpuBW): Likewise.
* src/util/virpci.c (virPCIDeviceBindToStub): Likewise.
* src/util/virutil.c (virIsCapableVport): Likewise.
* tools/virsh-domain-monitor.c (cmdDomMemStat): Likewise.
* tools/virsh-domain.c (cmdBlockResize, cmdScreenshot) (cmdInjectNMI, cmdSendKey, cmdSendProcessSignal) (cmdDetachInterface): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
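A self-contained illustration of what the new rule flags (hypothetical variable names):

  #include <stdbool.h>

  int active = true;    /* flagged: compiles, but nothing prevents
                           active = -1 or active = 2 later on */
  bool enabled = true;  /* preferred: the type matches the value */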
-
- 15 Nov 2014, 3 commits
-
Committed by John Ferlan

For some reason, commit id '72b4151f' triggered a Coverity uninitialized 'reply' variable check when referenced within the for loop. It seems Coverity doesn't know that flags will have to be either AFFECT_LIVE or AFFECT_CONFIG after the virDomainLiveConfigHelperMethod call. By adding an "sa_assert()" to confirm that fact, Coverity is happy again.
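In libvirt, sa_assert() is a no-op except under static analysis, so the added hint presumably looks something like this sketch:

  /* after a successful virDomainLiveConfigHelperMethod(..., &flags, ...) */
  sa_assert(flags & (VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG));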
-
Committed by Luyao Huang

https://bugzilla.redhat.com/show_bug.cgi?id=1164080

After a disk is hot-unplugged, a subsequent call to qemuDomainGetBlockIoTune to get the --config settings of that disk will fail, because the disk is no longer found by qemuDiskPathToAlias, causing an unexpected failure. Since only the --live flag needs the disk device pointer, move the fetch inside the (flags & VIR_DOMAIN_AFFECT_LIVE) condition. This will also affect the results if no flags are provided or the --current flag is provided.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
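Paraphrased shape of the change (qemuDiskPathToAlias is named by the commit; its arguments and the error label are assumptions):

  if (flags & VIR_DOMAIN_AFFECT_LIVE) {
      /* Only the live path needs to resolve the disk alias; a
       * hot-unplugged disk must not break the --config query. */
      if (!(device = qemuDiskPathToAlias(vm, path, NULL)))
          goto endjob;
  }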
-
Committed by Martin Kletzander

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 13 Nov 2014, 1 commit
-
Committed by Pavel Hrdina

Commit 6e5c79a1 tried to fix the deadlock between nwfilter{Define,Undefine} and guest startup, but the same deadlock exists for updating/attaching a network device to a domain. The deadlock was introduced by the removal of the global QEMU driver lock, because nwfilter was counting on that lock to ensure that all driver locks are held inside nwfilter{Define,Undefine}. This patch extends the usage of virNWFilterReadLockFilterUpdates to prevent the deadlock for all possible paths in the QEMU driver. The LXC and UML drivers still have a global lock.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
- 12 Nov 2014, 1 commit
-
Committed by Matthias Gatto

Reported here: http://www.redhat.com/archives/libvir-list/2014-November/msg00327.html

I could have just removed the bool supportMaxOptions variable, but had I done so, we could no longer check whether the nparams variable exceeds QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX.

v2: changed following this proposal: http://www.redhat.com/archives/libvir-list/2014-November/msg00379.html
-
- 11 Nov 2014, 1 commit
-
Committed by Matthias Gatto

Add support for bps_max and friends in the driver part. In the code that checks whether a qemu is running, check whether the running binary supports bps_max: if not, print an error message; if yes, add it to the "info" variable.

Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 06 Nov 2014, 3 commits
-
Committed by Michal Privoznik

https://bugzilla.redhat.com/show_bug.cgi?id=1160084

As of b6d4dad1 (1.2.5) libvirt keeps track of whether domain disks have been frozen. However, this falls into the set of information that doesn't survive a domain restart. Therefore, we need to clear the flag upon some state transitions. Moreover, once we clear the flag we must update the status file too.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Erik Skultety

Since there was a valid note on patch 43b67f2e about the best spot to check for a bandwidth-set call while the libvirt daemon runs in session mode, this patch reverts the previous changes dealing with bandwidth in qemu_driver.c and qemu_command.c (it also reverts the addition of the @cfg variable in qemuDomainGetNumaParameters, which has no use at the moment other than getting and unreferencing the driver's config). There will be another patch in the series which introduces the fix itself.
-
Committed by Michal Privoznik

https://bugzilla.redhat.com/show_bug.cgi?id=1159219

Users might want to update startupPolicy via the virDomainUpdateDeviceFlags API too. This patch implements the feature on the config layer.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 03 Nov 2014, 2 commits
-
Committed by Martin Kletzander

Before:
$ virsh blkiotune dummy --device-read-bytes-sec /dev/sda,-1
error: Unable to change blkio parameters
error: invalid argument: unable to parse blkio device 'device_read_bytes_sec' '/dev/sda,-1'

After:
$ virsh blkiotune dummy --device-read-bytes-sec /dev/sda,-1
error: Unable to change blkio parameters
error: invalid argument: invalid value '-1' for parameter 'device_read_bytes_sec' of device '/dev/sda'

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1131306

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander

Particularly in qemuBuildNumaArgStr(), there was a need for the numad advice due to memory backing, which needs to know the nodeset it will be pinned to. With newer qemu this caused the following error when starting a domain:

error: internal error: Advice from numad is needed in case of automatic numa placement

even when starting a perfectly valid domain, e.g.:

...
<vcpu placement='auto'>4</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='524288'/>
    <cell id='1' cpus='1' memory='524288'/>
  </numa>
</cpu>
...

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 29 Oct 2014, 3 commits
-
Committed by Eric Blake

Now that all offenders have been cleaned up, turn on a syntax-check rule to prevent future offenders.

* cfg.mk (sc_prohibit_static_zero_init): New rule.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Avoid false positive.

Signed-off-by: Eric Blake <eblake@redhat.com>
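A self-contained illustration of the pattern the rule forbids (hypothetical names):

  static int hits = 0;   /* flagged: the '= 0' is redundant */
  static int misses;     /* preferred: C zero-initializes statics */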
-
Committed by John Ferlan

https://bugzilla.redhat.com/show_bug.cgi?id=1141621

As part of attach processing, assign the device aliases by calling qemuAssignDeviceAliases during qemuDomainQemuAttach, once all the devices are found after the qemuParseCommandLinePid processing. This will alleviate a symptom that caused a libvirtd crash during an attempted device detach.
-
Committed by Tony Krowiak

This patch adds functionality to processNicRxFilterChangedEvent(). The old and new multicast lists are compared, and the filters in the macvtap are programmed to match the guest's filters.

Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
-
- 28 Oct 2014, 1 commit
-
Committed by Eric Blake

https://bugzilla.redhat.com/show_bug.cgi?id=956506 documents that, given a domain where an internal snapshot parent has an external snapshot child, we lacked a safety check when trying to use the --children-only option of snapshot-delete:

$ virsh start dom
$ virsh snapshot-create-as dom internal
$ virsh snapshot-create-as dom external --disk-only
$ virsh snapshot-delete dom external
error: Failed to delete snapshot external
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children-only
Domain snapshot internal children deleted

While I'd still like to see patches that actually do proper external snapshot deletion, we should at least fix the inconsistency in the meantime. With this patch:

$ virsh snapshot-delete dom internal --children-only
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet

* src/qemu/qemu_driver.c (qemuDomainSnapshotDelete): Fix condition.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
- 23 Oct 2014, 2 commits
-
Committed by Daniel P. Berrange
To prepare for introducing a single global driver, rename the virDriver struct to virHypervisorDriver and the registration API to virRegisterHypervisorDriver()
-
Committed by Erik Skultety

Tuning NUMA or network interface parameters requires root privileges to manage cgroups. Thus an attempt to set some of these parameters in session mode on a running domain should be rejected with an error. An example might be memory tuning, which already raises an error in such a case. The following behavior in session mode will be present after applying this patch:

Tuning    | SET           | GET    |
----------|---------------|--------|
NUMA      | shut off only | always |
Memory    | never         | never  |
Interface | never         | always |

Resolves https://bugzilla.redhat.com/show_bug.cgi?id=1126762
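A hypothetical sketch of the kind of guard this implies in a Set*Parameters entry point (the privileged flag and the error wording are assumptions; virReportError and virDomainObjIsActive are existing libvirt internals):

  if (!privileged && virDomainObjIsActive(vm)) {
      virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                     _("cgroup-based tuning of a running domain "
                       "is not possible in session mode"));
      goto cleanup;
  }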
-
- 22 Oct 2014, 1 commit
-
Committed by Peter Krempa

The documentation for the restore hook states that returning an empty XML is equivalent to copying the input. The code checking the returned string had a bug: it checked the string pointer instead of its contents. Use the new helper to check whether the string is empty.
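The bug pattern versus the fix, sketched (virStringIsEmpty is assumed to be the new helper the commit refers to, treating NULL and blank strings as empty; the variable name is hypothetical):

  /* Before: an empty string "" is still non-NULL, so the hook's
   * output replaced the XML even when it was empty. */
  if (hookxml)
      xml = hookxml;

  /* After: */
  if (!virStringIsEmpty(hookxml))
      xml = hookxml;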
-
- 21 Oct 2014, 1 commit
-
Committed by Lubomir Rintel

virt-manager on Fedora sets up i686 hosts with the "/usr/bin/qemu-kvm" emulator, which in turn unconditionally execs qemu-system-x86_64; querying capabilities then fails:

Error launching details: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 748, in _show_vm_helper
    details = self._get_details_dialog(uri, vm.get_connkey())
  File "/usr/share/virt-manager/virtManager/engine.py", line 726, in _get_details_dialog
    obj = vmmDetails(conn.get_vm(connkey))
  File "/usr/share/virt-manager/virtManager/details.py", line 399, in __init__
    self.init_details()
  File "/usr/share/virt-manager/virtManager/details.py", line 784, in init_details
    domcaps = self.vm.get_domain_capabilities()
  File "/usr/share/virt-manager/virtManager/domain.py", line 518, in get_domain_capabilities
    self.get_xmlobj().os.machine, self.get_xmlobj().type)
  File "/usr/lib/python2.7/site-packages/libvirt.py", line 3492, in getDomainCapabilities
    if ret is None: raise libvirtError ('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'

Journal:
Oct 16 21:08:26 goatlord.localdomain libvirtd[1530]: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
-