- 14 April 2015, 5 commits
-
-
Submitted by Peter Krempa
My intention is to split qemuMonitorJSONBlockJob() into simpler separate functions for every block job type. Since the error handling code is the same for all block jobs, this patch extracts the code into a separate function that will later be reused in more places. With the new helper qemuMonitorJSONErrorIsClass we can save a few function calls as we can extract the error object once.
-
Submitted by Peter Krempa
Split out the function that checks the actual error class string into a separate helper, as it will be useful later, and refactor qemuMonitorJSONHasError to return a bool type and remove a few useless checks. The virJSONValueObjectHasKey check is basically useless here since the next call to virJSONValueObjectGet checks the return value again (and can't fail at that point). By removing the first check we save a function call.
-
Submitted by Peter Krempa
Previously we checked that the vcpu we are trying to set is within the range of the number of threads presented by qemu. The problem is that if the VM is offline the count is 0, and since the condition subtracted 1 from the unsigned count, the value would wrap around and the check would never trigger. Replace the condition with more sensible checks that have specific error messages. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
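A self-contained illustration (not the libvirt code itself; the variable names are invented for the demo) of the unsigned wraparound this commit fixes:

    #include <stdio.h>

    int main(void)
    {
        unsigned int ncpupids = 0;   /* offline VM: qemu reports no vcpu threads */
        unsigned int vcpu = 3;

        /* old-style check: 0 - 1 wraps to UINT_MAX, so this never triggers */
        if (vcpu > ncpupids - 1)
            printf("out of range (never printed)\n");

        /* safer form with an explicit check for the offline case */
        if (ncpupids == 0 || vcpu >= ncpupids)
            printf("vcpu %u is out of range of %u present threads\n", vcpu, ncpupids);
        return 0;
    }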
-
Submitted by Peter Krempa
Operating systems use the identifier to name the disks. As the name suggests the ID should be unique. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208009
-
Submitted by John Ferlan
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 13 April 2015, 4 commits
-
-
Submitted by Erik Skultety
This patch adds checks for empty bitmaps right after the calls to virBitmapParse. These only cover spots where set APIs are called and where a domain's XML is parsed. It also partially reverts commit 983f5a, which added a check for the invalid nodeset "0,^0" into the virBitmapParse function itself. That change broke the logic, as an empty bitmap by itself should not cause an error. https://bugzilla.redhat.com/show_bug.cgi?id=1210545
-
Submitted by Ján Tomko
On arm, we probe for virtio-*-pci devices but use their virtio-*-device variants. Set the capabilities based on the -device variants as well, so they also work with QEMU builds that have the PCI devices compiled out.
-
Submitted by Michal Privoznik
When pre-creating storage for domains, we need to find the corresponding disk in the XML on the destination (the domain XML may differ there, e.g. the disk is accessible under a different path). For better debugging, I'm printing all the info I received for a disk. But there was a typo when printing the disk capacity: "%lluu" instead of "%llu".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
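A tiny stand-alone reproduction of the format-string typo (not the libvirt code; the variable name is invented):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long capacity = 8589934592ULL;

        printf("capacity=%lluu\n", capacity);  /* typo: prints "8589934592u" */
        printf("capacity=%llu\n", capacity);   /* fixed: prints "8589934592" */
        return 0;
    }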
-
Submitted by Xing Lin
The problem with the previous implementation is that even when qemuMigrationUpdateJobStatus() detects that a migration job has completed, it still sleeps for 50 ms, which is unnecessary and only adds to the VM pause time.
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
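A minimal sketch of the idea, with a stand-in status helper instead of the real qemuMigrationUpdateJobStatus(): the status is checked first and the 50 ms sleep only happens while the job is still running, so a finished migration is reported immediately:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* stand-in status query: pretend the job finishes after three polls */
    static bool migration_job_completed(void)
    {
        static int polls;
        return ++polls >= 3;
    }

    static void wait_for_migration(void)
    {
        /* check first, sleep only if still running */
        while (!migration_job_completed())
            usleep(50 * 1000);   /* 50 ms between polls */
    }

    int main(void)
    {
        wait_for_migration();
        printf("migration finished\n");
        return 0;
    }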
-
- 11 April 2015, 1 commit
-
-
Submitted by John Ferlan
Future IOThread setting patches would copy the code anyway, so create and generalize the adding of pindef for the vcpu and the pinning of the thread into their own APIs.
-
- 10 April 2015, 4 commits
-
-
Submitted by Dmitry Guryanov
We support VNC for containers to have the same interface as with VMs. At the moment it just renders the Linux text console. Of course we don't pass through any physical devices and don't emulate virtual devices. Our VNC server renders text from the terminal master and sends input events from the VNC client to the terminal. So add a special video type, VIR_DOMAIN_VIDEO_TYPE_PARALLELS, for these pseudo-devices.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
-
Submitted by John Ferlan
Future IOThread setting patches would copy the code anyway, so create and generalize the deletion of the cgroup and pindef for the vcpu into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Submitted by John Ferlan
Future IOThread setting patches would copy the code anyway, so create and generalize adding the vcpu to a cgroup into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Submitted by John Ferlan
Replace the virCgroupNew{Vcpu|Emulator|IOThread} calls with the common virCgroupNewThread API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
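A rough sketch of the consolidation (hypothetical names; this is not the real virCgroupNewThread signature): one constructor keyed by a thread-type enum replaces the three per-type wrappers:

    #include <stdio.h>

    enum thread_kind { THREAD_VCPU, THREAD_EMULATOR, THREAD_IOTHREAD };

    /* build the per-thread cgroup name from the kind and index */
    static int cgroup_thread_name(enum thread_kind kind, int id, char *buf, size_t len)
    {
        static const char *prefix[] = { "vcpu", "emulator", "iothread" };

        if (kind == THREAD_EMULATOR)   /* only one emulator thread group */
            return snprintf(buf, len, "%s", prefix[kind]) < 0 ? -1 : 0;
        return snprintf(buf, len, "%s%d", prefix[kind], id) < 0 ? -1 : 0;
    }

    int main(void)
    {
        char name[64];

        cgroup_thread_name(THREAD_VCPU, 2, name, sizeof(name));
        printf("%s\n", name);   /* vcpu2 */
        cgroup_thread_name(THREAD_IOTHREAD, 1, name, sizeof(name));
        printf("%s\n", name);   /* iothread1 */
        return 0;
    }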
-
- 09 April 2015, 6 commits
-
-
Submitted by Peter Krempa
Old compilers whine that 'sync' is being shadowed in the function introduced in 1eccac1d.
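A minimal reproduction of that class of warning (not the libvirt function itself): with -Wshadow, older compilers complain that a parameter named 'sync' shadows the global sync() declared in <unistd.h>:

    #include <stdbool.h>
    #include <unistd.h>

    /* old gcc -Wshadow: declaration of 'sync' shadows a global declaration */
    static int do_pivot(bool sync)
    {
        return sync ? 1 : 0;
    }

    int main(void)
    {
        return do_pivot(true) == 1 ? 0 : 1;
    }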
-
Submitted by Peter Krempa
Support for drive-reopen was never present in the upstream code so we don't need to pause the VM when doing the block pivot. Kill all the code related to this semi-upstream artifact.
-
Submitted by Peter Krempa
There are two leftover unused variables. Remove them and clean up the fallout of the change.
-
Submitted by Peter Krempa
Refactor the function to use the new helpers.
-
Submitted by Peter Krempa
We need to check that qemu supports block jobs in multiple places. Add a helper to do the check.
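A minimal sketch of the helper pattern (hypothetical names, not the real qemuCaps API): the repeated capability test and its error report are hoisted into one function that every block-job entry point can call:

    #include <stdbool.h>
    #include <stdio.h>

    static bool caps_has_blockjob;   /* would come from QEMU capability probing */

    static int check_block_jobs_supported(void)
    {
        if (!caps_has_blockjob) {
            fprintf(stderr, "block jobs not supported with this QEMU binary\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return check_block_jobs_supported() < 0 ? 1 : 0;
    }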
-
Submitted by Peter Krempa
In some cases where the function does not need to access the private data this helper may be used to retrieve the monitor object.
-
- 08 April 2015, 9 commits
-
-
Submitted by Luyao Huang
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
    at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x52A026F: virReallocN (viralloc.c:245)
    by 0x52BFCB5: saferead_lim (virfile.c:1268)
    by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
    by 0x52C019A: virFileReadAll (virfile.c:1351)
    by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
    by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
    by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
    by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
    by 0x52FD171: virThreadHelper (virthread.c:206)
    by 0x82B8DF4: start_thread (pthread_create.c:308)
    by 0x85C31AC: clone (clone.S:113)

Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Submitted by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1198645

Once upon a time, there was a little domain. And the domain was pinned onto a NUMA node and hadn't fully allocated its memory:

    <memory unit='KiB'>2355200</memory>
    <currentMemory unit='KiB'>1048576</currentMemory>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>

Oh little me, said the domain, what will I do with so little memory. If I only had a few megabytes more. But the old admin noticed the whimpering, barely audible to the untrained human ear. And good admin he was, he gave the domain yet more memory. But the old NUMA topology witch forbade allocating more memory on node zero. So he decided to allocate it on a different node:

    virsh # numatune little_domain --nodeset 0-1
    virsh # setmem little_domain 2355200

The little domain was happy. For a while. Until a bad, sharp-toothed creature came. Every process in the system was afraid of him. The OOM Killer they called him. Oh no, he's after the little domain. There's no escape.

Do you kids know why? Because when the little domain was born, her father, Libvirt, called numa_set_membind(). So even if the admin allowed her to allocate memory from other nodes in the cgroups, the membind() forbade it.

So what's the lesson? Libvirt should rely on cgroups whenever possible and use numa_set_membind() only as a last-ditch effort.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Michael Chapman
Currently we check qemuCaps before starting the block job. But qemuCaps isn't available on a stopped domain, which means we get a misleading error message in this case:

    # virsh domstate example
    shut off

    # virsh blockjob example vda
    error: unsupported configuration: block jobs not supported with this QEMU binary

Move the qemuCaps check into the block job so that we are guaranteed the domain is running.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
Submitted by Michael Chapman
qemuMigrationCookieAddNBD is usually called from within an async MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job. (The one exception is during the Begin phase when change protection isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries the disk mirroring block job info during a migration. If one of these queries occurs just as the Perform migration cookie is baked, libvirt crashes. Relevant logs are as follows:

        6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
    [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
    [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
    [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
    [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
        6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits on mon->notify. At [2] the request is written out to the monitor socket. At [3] qemuMonitorBlockJobInfo sends its request, and also waits on mon->notify. The reply from the first request is received at [4]. However, qemuMonitorJSONIOProcessLine is not expecting this reply since the second request hadn't completed sending. The reply is dropped and an error is returned.

qemuMonitorIO signals mon->notify twice during its error handling, waking up both of the threads waiting on it. One of them clears mon->msg as it exits qemuMonitorSend; the other crashes:

    qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
    975         while (!mon->msg->finished) {
    (gdb) print mon->msg
    $1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
Submitted by Michael Chapman
If a VM migration is aborted, a disk mirror may be failed by QEMU before libvirt has a chance to cancel it. The disk->mirrorState remains at _ABORT in this case, and this breaks subsequent mirrorings of that disk. We should instead check the mirrorState directly and transition to _NONE if it is already aborted. Do the check *after* aborting the block job in QEMU to avoid a race.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
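A minimal sketch of the state handling (hypothetical enum and field names, not the real virDomainDiskDef): after asking QEMU to abort the job, a mirror that QEMU had already failed is moved from _ABORT back to _NONE so later mirroring attempts are not blocked:

    #include <stdio.h>

    enum mirror_state { MIRROR_NONE, MIRROR_READY, MIRROR_ABORT };

    struct disk { enum mirror_state mirrorState; };

    static void block_job_abort(struct disk *d)
    {
        /* ... ask QEMU to cancel the job here ... */

        /* check *after* the abort to avoid racing with QEMU failing the job */
        if (d->mirrorState == MIRROR_ABORT)
            d->mirrorState = MIRROR_NONE;
    }

    int main(void)
    {
        struct disk d = { .mirrorState = MIRROR_ABORT };  /* QEMU already failed the mirror */

        block_job_abort(&d);
        printf("mirrorState=%d (0 == NONE)\n", d.mirrorState);
        return 0;
    }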
-
Submitted by Michael Chapman
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
Submitted by Michael Chapman
The destination libvirt daemon in a migration may segfault if the client disconnects immediately after the migration has begun:

    # virsh -c qemu+tls://remote/system list --all
     Id    Name                           State
    ----------------------------------------------------
    ...

    # timeout --signal KILL 1 \
        virsh migrate example qemu+tls://remote/system \
          --verbose --compressed --live --auto-converge \
          --abort-on-error --unsafe --persistent \
          --undefinesource --copy-storage-all --xml example.xml
    Killed

    # virsh -c qemu+tls://remote/system list --all
    error: failed to connect to the hypervisor
    error: unable to connect to server at 'remote:16514': Connection refused

The crash is in:

    1531 void
    1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
    1533 {
    1534     qemuDomainObjPrivatePtr priv = obj->privateData;
    1535     qemuDomainJob job = priv->job.active;
    1536
    1537     priv->jobs_queued--;

Backtrace:

    #0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
    #1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
    #2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
    #3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
    #4 in qemuConnectClose at qemu/qemu_driver.c:1154
    ...

qemuDomainRemoveInactive calls virDomainObjListRemove, which in this case is holding the last remaining reference to the domain. qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain object has been freed and poisoned by then.

This patch bumps the domain's refcount until qemuDomainRemoveInactive has completed. We also ensure qemuProcessAutoDestroy does not return the domain to virCloseCallbacksRun to be unlocked in this case. There is similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy (which call virDomainObjListRemove directly).

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
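A generic sketch of the fix's idea (invented object type and helpers, not the real virObject/virDomainObj API): take an extra reference before an operation that may drop the last reference, finish the remaining work, then release the extra reference:

    #include <stdio.h>
    #include <stdlib.h>

    struct dom { int refs; int jobs_queued; };

    static void dom_ref(struct dom *d)   { d->refs++; }
    static void dom_unref(struct dom *d) { if (--d->refs == 0) free(d); }

    /* stand-in for removing the domain from the list: drops the list's
     * reference, which may be the last one */
    static void list_remove(struct dom *d) { dom_unref(d); }

    static void remove_inactive(struct dom *d)
    {
        dom_ref(d);          /* keep the object alive across the removal */
        list_remove(d);
        d->jobs_queued--;    /* safe: we still hold a reference */
        dom_unref(d);        /* the object may be freed here */
    }

    int main(void)
    {
        struct dom *d = calloc(1, sizeof(*d));

        d->refs = 1;         /* the list's reference */
        remove_inactive(d);
        printf("no use-after-free\n");
        return 0;
    }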
-
Submitted by Michal Privoznik
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Michal Privoznik
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52AD74B: virAlloc (viralloc.c:144)
==19015==    by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015==    by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015==    by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015==    by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015==    by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015==    by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015==    by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015==    by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 02 April 2015, 11 commits
-
-
Submitted by Ján Tomko
In virDomainVirtioSerialAddrNext, add another controller if we've exhausted all ports of the existing controllers. https://bugzilla.redhat.com/show_bug.cgi?id=1076708
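A toy model of the allocation policy (not the real virDomainVirtioSerialAddrNext; the limits are made tiny on purpose): when every port on every existing controller is taken, a new controller is appended instead of failing:

    #include <stdio.h>

    #define MAX_CTRLS      8
    #define PORTS_PER_CTRL 2    /* tiny, to force controller growth quickly */

    static int used[MAX_CTRLS][PORTS_PER_CTRL];
    static int nctrls = 1;

    static int next_addr(int *ctrl, int *port)
    {
        for (int c = 0; c < nctrls; c++)
            for (int p = 0; p < PORTS_PER_CTRL; p++)
                if (!used[c][p]) {
                    used[c][p] = 1;
                    *ctrl = c;
                    *port = p;
                    return 0;
                }
        if (nctrls == MAX_CTRLS)
            return -1;              /* really out of space */
        *ctrl = nctrls++;           /* all ports exhausted: add a controller */
        *port = 0;
        used[*ctrl][*port] = 1;
        return 0;
    }

    int main(void)
    {
        int c, p;

        for (int i = 0; i < 5; i++)
            if (next_addr(&c, &p) == 0)
                printf("device %d -> controller %d, port %d\n", i, c, p);
        return 0;
    }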
-
Submitted by Ján Tomko
-
Submitted by Ján Tomko
-
Submitted by Ján Tomko
Instead of always using controller 0 and incrementing the port number, respect the maximum port numbers of the controllers and use all of them. Ports for virtio consoles are quietly reserved, but not formatted (neither in the XML nor on the QEMU command line). Also reject duplicate virtio-serial addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708

Test changes:

* virtio-auto.args
  Filling out the port when just the controller is specified.
  Switched from using maxport + 1 to the first free port on the controller.

* virtio-autoassign.args
  Filling out the address when no <address> is specified.
  Started using all the controllers instead of just controller 0; the bus value is also discarded.

* xml -> xml output of virtio-auto
  The port assignment is no longer done as part of XML parsing, so the unspecified values stay 0.
-
Submitted by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1206479
As described in the virDomainBlockCopy() parameter documentation, the VIR_DOMAIN_BLOCK_COPY_GRANULARITY parameter may require the value to have some specific attributes (e.g. be a power of two or fall within a certain range), and in qemu a power of two is required. However, our code does not check that and lets the qemu operation fail. Moreover, the virsh man page is not as exact as it could be in this respect.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
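A stand-alone sketch of the missing validation (not the actual libvirt check): a granularity value is accepted only if it is a power of two:

    #include <stdbool.h>
    #include <stdio.h>

    static bool is_power_of_two(unsigned long long v)
    {
        return v != 0 && (v & (v - 1)) == 0;
    }

    int main(void)
    {
        unsigned long long granularity = 65536;   /* e.g. a user-supplied value */

        if (!is_power_of_two(granularity)) {
            fprintf(stderr, "granularity must be a power of 2\n");
            return 1;
        }
        printf("granularity %llu accepted\n", granularity);
        return 0;
    }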
-
Submitted by zhang bo
When we shut down or reboot a guest in agent mode, if the guest itself blocks indefinitely, libvirt would block in qemuAgentShutdown() forever. Thus, we set a timeout for shutdown/reboot; from our experience, 60 seconds is fine.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
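A rough sketch of bounding the wait (hypothetical helper names, not the real qemuAgent code): the reply is polled against a deadline of 60 seconds, the value chosen in the commit, instead of waiting forever; the stand-in helper below simulates a guest that never answers:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define AGENT_SHUTDOWN_TIMEOUT 60   /* seconds */

    static bool agent_reply_received(void)
    {
        return false;   /* simulated unresponsive guest */
    }

    static int agent_shutdown_wait(void)
    {
        time_t deadline = time(NULL) + AGENT_SHUTDOWN_TIMEOUT;

        while (!agent_reply_received()) {
            if (time(NULL) >= deadline) {
                fprintf(stderr, "guest agent did not reply within %d seconds\n",
                        AGENT_SHUTDOWN_TIMEOUT);
                return -1;
            }
            usleep(100 * 1000);   /* poll every 100 ms */
        }
        return 0;
    }

    int main(void)
    {
        return agent_shutdown_wait() == 0 ? 0 : 1;
    }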
-
Submitted by Shanzhi Yu
virDomainHasDiskMirror() currently detects only jobs that add the mirror elements. Since some operations like migration are interlocked by existing block jobs on the given domain, the check needs to cover regular jobs too. This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob and adds an argument that allows selecting whether it returns true only for block copy jobs, as those interlock making the domain persistent. The other two uses trigger on any block job type.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Peter Krempa
If any disk of a VM was involved in a (copy) block job, we refused to take a snapshot. Since not only copy jobs interlock snapshots, and the interlocking applies to individual disks only, we can make the check per disk and interlock all block job types supported by libvirt. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
-
Submitted by Ján Tomko
In the order of appearance:

* MAX_LISTEN - never used, added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b5
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN - unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e7
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7b
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5b
* NSIG - workaround around old headers added by commit 60ed1d2a, unused since virExec was moved by commit 02e86910
* DO_TEST_PARSE - never used, added by 9afa0060
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb66
-
Submitted by Peter Krempa
Use virBitmapNewCopy instead of a combination of virBitmapNew and virBitmapCopy.
-
Submitted by Peter Krempa
The function doesn't make sense. There's a simpler way to achieve the same.
-