- 10 April 2015, 8 commits
-
-
By Dmitry Guryanov
The return value of the functions prlsdkStart/Kill/Stop etc. is PRL_RESULT in parallels_sdk.c but int in parallels_sdk.h. PRL_RESULT is an int, so the compiler didn't report any errors. Let's fix the discrepancy. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
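A minimal standalone illustration of why such a mismatch goes unnoticed; the function name below is hypothetical, not the actual Parallels SDK wrapper:

```c
/* PRL_RESULT is just a typedef for int, so a header declaring "int" and an
 * implementation returning "PRL_RESULT" are identical types to the compiler,
 * and no diagnostic is emitted even though the prototypes look different. */
typedef int PRL_RESULT;

/* as it would appear in the header */
int startVmExample(void);

/* as it would appear in the .c file */
PRL_RESULT
startVmExample(void)
{
    return 0;
}
```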
-
By John Ferlan
Future IOThread setting patches would copy the code anyway, so extract and generalize the code that deletes the cgroup and pindef for a vcpu into its own API. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Future IOThread setting patches would copy the code anyway, so extract and generalize the code that adds a vcpu to a cgroup into its own API. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Replace the virCgroupNew{Vcpu|Emulator|IOThread} calls with the common virCgroupNewThread API. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Create a new common API to replace the virCgroupNew{Vcpu|Emulator|IOThread} APIs, using an enum to generate the cgroup name. Signed-off-by: John Ferlan <jferlan@redhat.com>
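A hedged sketch of what such a consolidated API could look like; virCgroupPtr is libvirt's internal cgroup handle, and the enum values and prototype are inferred from the commit description rather than copied from the tree:

```c
#include <stdbool.h>

/* One thread-cgroup constructor instead of three near-identical ones;
 * the enum selects the cgroup name prefix ("vcpu", "emulator", "iothread"). */
typedef enum {
    VIR_CGROUP_THREAD_VCPU = 0,
    VIR_CGROUP_THREAD_EMULATOR,
    VIR_CGROUP_THREAD_IOTHREAD,
} virCgroupThreadName;

int virCgroupNewThread(virCgroupPtr domain,      /* parent (domain) cgroup */
                       virCgroupThreadName nameval,
                       int id,                   /* vcpu/iothread index    */
                       bool create,
                       virCgroupPtr *group);     /* resulting child cgroup */

/* A call such as virCgroupNewVcpu(cg, i, true, &sub) then becomes
 * virCgroupNewThread(cg, VIR_CGROUP_THREAD_VCPU, i, true, &sub). */
```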
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1206521

If the backend driver updates the pool available and/or allocation values, then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs should not change the values; otherwise, it will appear as if the values were "doubled" for each change. Additionally, since unsigned arithmetic is used, depending on the size and operation either or both values could appear to be much larger than they should be (in the EiB range). Currently only the disk pool updates the values, but other pools could. Assume a "fresh" disk pool of 500 MiB using /dev/sde:

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 0.00 B
    Available:  509.84 MiB

    $ virsh vol-create-as disk-pool sde1 --capacity 300M
    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 600.47 MiB
    Available:  16.00 EiB

The following assumes the disk backend has been updated to refresh the disk pool at deletion of a primary partition as well as an extended partition:

    $ virsh vol-delete --pool disk-pool sde1
    Vol sde1 deleted

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 9.73 EiB
    Available:  6.27 EiB

This patch checks whether the backend updated the pool values and honors that update.
-
By John Ferlan
Commit id '471e1c4e' only considered updating the pool if the extended partition was removed. As it turns out, removing a primary partition also needs to update the freeExtent list; otherwise the following sequence would fail (assuming a "fresh" disk pool for /dev/sde of 500 MiB):

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 0.00 B
    Available:  509.84 MiB

    $ virsh vol-create-as disk-pool sde1 --capacity 300M
    $ virsh vol-delete --pool disk-pool sde1
    $ virsh vol-create-as disk-pool sde1 --capacity 300M
    error: Failed to create vol sde1
    error: internal error: no large enough free extent
    $

This patch will refresh the pool, rereading the partitions, and return
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1073305

When creating a volume in a pool, the creation allows the 'capacity' value to be larger than the available space in the pool. As long as the 'allocation' value fits in the available space, the volume will be created. However, the volume-resize checks compared the new absolute capacity value against the existing capacity plus the available space, without regard for whether the new absolute capacity was actually allocating space or not. For example, in a pool with 75G of available space, creating a volume with a capacity of 100G and an allocation of 10G succeeds; however, if the volume had instead been created with a capacity of 10G and then resized to a capacity of 100G, the code would refuse to let the backend even attempt the resize.

Furthermore, when updating the pool "available" and "allocation" values, the resize code would "blindly" adjust them regardless of whether space was actually allocated or only the capacity was being adjusted. This left a scenario whereby a resize to 100G would fail, yet a resize to 50G followed by one to 100G would both succeed; again, neither adjusted the allocation value, only the capacity value.

This patch adds more logic to the resize code to understand whether the new capacity value is actually allocating space as well, and whether it is shrinking or expanding the volume. Since unsigned arithmetic is involved, adjusting the pool size values incorrectly is a real possibility. This patch also ensures that updates to the pool values only occur if we actually performed the allocation.

NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom each update the pool allocation/availability values only by the target volume's allocation value.
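A rough sketch of the accounting rule described above, using stand-in structures rather than libvirt's real virStoragePoolDef/virStorageVolDef: the pool counters are only touched when space is actually allocated, and the unsigned subtraction is guarded.

```c
#include <stdbool.h>

struct vol  { unsigned long long capacity, allocation; };
struct pool { unsigned long long available, allocation; };

/* Returns false when an actual allocation would not fit in the pool. */
static bool
resizeAccounting(struct pool *p, struct vol *v,
                 unsigned long long newcap, bool allocate)
{
    if (allocate && newcap > v->allocation) {
        unsigned long long delta = newcap - v->allocation;

        if (delta > p->available)
            return false;            /* would overcommit the pool        */

        p->available  -= delta;      /* only now touch the pool counters */
        p->allocation += delta;
        v->allocation  = newcap;
    }

    v->capacity = newcap;            /* capacity-only change: pool untouched */
    return true;
}
```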
-
- 09 April 2015, 12 commits
-
-
By Peter Krempa
Old compilers whine that 'sync' is being shadowed in the function introduced in 1eccac1d.
-
By Peter Krempa
Support for drive-reopen was never present in the upstream code so we don't need to pause the VM when doing the block pivot. Kill all the code related to this semi-upstream artifact.
-
By Peter Krempa
There are two leftover unused variables. Remove them and clean up the fallout of the change.
-
By Peter Krempa
Refactor the function to use the new helpers.
-
By Peter Krempa
We need to check that qemu supports block jobs in multiple places. Add a helper to do the check.
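A hedged sketch of what such a helper might look like; the capability flags, error message, and prototype follow the qemu driver's conventions loosely and are assumptions, not the exact upstream code:

```c
/* Returns 0 if block jobs are usable, -1 (with an error reported) if not;
 * "modern" is set when the asynchronous, event-based flavour is available. */
static int
domainSupportsBlockJobsExample(virDomainObjPtr vm, bool *modern)
{
    qemuDomainObjPrivatePtr priv = vm->privateData;
    bool hasAsync = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC);
    bool hasSync = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_SYNC);

    if (!hasAsync && !hasSync) {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                       _("block jobs not supported with this QEMU binary"));
        return -1;
    }

    if (modern)
        *modern = hasAsync;
    return 0;
}
```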
-
By Peter Krempa
In cases where the function does not need to access the private data, this helper may be used to retrieve the monitor object.
-
By Erik Skultety
We documented this almost everywhere, but missed it in several places. https://bugzilla.redhat.com/show_bug.cgi?id=1208763
-
By Cédric Bosdonnat
lxc-enter-namespace stopped working on recent kernels (at least 3.19+) due to /proc/PID/ns/* file descriptors being opened RW. From outside the namespace these can only be opened RO.
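A standalone sketch of the corrected pattern, with error handling trimmed: the namespace descriptor is opened O_RDONLY, which setns() accepts, instead of O_RDWR, which fails from outside the namespace on 3.19+ kernels.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static int
openNamespaceFd(pid_t pid, const char *ns)   /* ns: "mnt", "net", "pid", ... */
{
    char path[64];

    snprintf(path, sizeof(path), "/proc/%d/ns/%s", (int)pid, ns);
    return open(path, O_RDONLY);             /* opening RW is what broke here */
}

int
main(void)
{
    int fd = openNamespaceFd(1, "net");

    if (fd < 0 || setns(fd, CLONE_NEWNET) < 0)
        perror("enter namespace");
    if (fd >= 0)
        close(fd);
    return 0;
}
```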
-
By Cédric Bosdonnat
SLES 11 has the legacy qemu-kvm package; /usr/bin/qemu-kvm and /usr/share/qemu-kvm need to be accessible to domains.
-
By Lubomir Rintel
/var/run may reside on a tmpfs, and we fail to create the PID file if /var/run/lxc does not exist. Since commit 0a8addc1, the lxc driver's state directory isn't automatically created before starting a domain. Now the lxc driver makes sure the state directory exists when it initializes. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
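A minimal sketch of the idea under an assumed path; libvirt itself would use its own helpers (such as virFileMakePath) rather than a bare mkdir:

```c
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Called from driver initialization, not from domain start, so the
 * directory exists even when /var/run is a freshly mounted tmpfs. */
static int
ensureStateDir(const char *dir)              /* e.g. "/var/run/libvirt/lxc" */
{
    if (mkdir(dir, 0755) < 0 && errno != EEXIST)
        return -1;
    return 0;
}
```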
-
By Peter Krempa
RFC 3986 states that the separator in a URI path is a single slash. Multiple slashes may potentially lead to different resources, and thus we should not remove them.
-
By Peter Krempa
Add test infrastructure for virFileSanitizePath so that it can be sensibly refactored later.
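Illustrative test-style data for the two commits above; the expected outputs are assumptions based on the commit descriptions (local paths may be collapsed, URI paths must keep their repeated slashes), not the exact cases from the test suite:

```c
struct sanitizeCase {
    const char *input;
    const char *expected;
};

static const struct sanitizeCase cases[] = {
    /* plain filesystem path: redundant separators can be collapsed */
    { "/var//lib///libvirt/images/a.img",  "/var/lib/libvirt/images/a.img" },
    /* URI: per RFC 3986 each slash separates a segment, so keep them */
    { "gluster://example.com//vol//a.img", "gluster://example.com//vol//a.img" },
};
```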
-
- 08 April 2015, 16 commits
-
-
By Michal Privoznik
Like we do in the qemu driver (ea576ee5), let's call virNumaSetupMemoryPolicy() only if really needed. The problem is that if we numa_set_membind() the child, there's no way to change it from the daemon afterwards, so any later attempt to change the pinning will fail. But it fails in a very weird way: the CGroups will be set, yet due to the membind the child will not allocate memory from any other node. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Luyao Huang
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
    at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x52A026F: virReallocN (viralloc.c:245)
    by 0x52BFCB5: saferead_lim (virfile.c:1268)
    by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
    by 0x52C019A: virFileReadAll (virfile.c:1351)
    by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
    by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
    by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
    by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
    by 0x52FD171: virThreadHelper (virthread.c:206)
    by 0x82B8DF4: start_thread (pthread_create.c:308)
    by 0x85C31AC: clone (clone.S:113)

Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
By Dawid Zamirski
Since holdtime is not supported by the VBOX SDK, it is simulated by sleeping before sending the key-up codes. The key-up codes are auto-generated based on XT codeset rules (adding 0x80 to the key-down code), which results in the same behavior as the QEMU implementation.
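A standalone sketch of the simulated holdtime; putScancodes() is a hypothetical stand-in for the IKeyboard PutScancodes call, not the real VBOX API binding:

```c
#include <stdio.h>
#include <unistd.h>

/* hypothetical stand-in for IKeyboard::PutScancodes */
static void
putScancodes(const unsigned int *codes, int ncodes)
{
    for (int i = 0; i < ncodes; i++)
        printf("scancode 0x%02x\n", codes[i]);
}

static void
sendKeyWithHoldtime(unsigned int *codes, int ncodes, unsigned int holdtimeMs)
{
    putScancodes(codes, ncodes);      /* key-down codes                       */
    usleep(holdtimeMs * 1000);        /* simulate the unsupported holdtime    */

    for (int i = 0; i < ncodes; i++)
        codes[i] += 0x80;             /* XT codeset: key-up = key-down + 0x80 */
    putScancodes(codes, ncodes);      /* key-up codes                         */
}
```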
-
By Dawid Zamirski
The IKeyboard COM object is needed to implement virDomainSendKey and is available in all supported VBOX versions.
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1198645

Once upon a time, there was a little domain. And the domain was pinned onto a NUMA node and hasn't fully allocated its memory:

    <memory unit='KiB'>2355200</memory>
    <currentMemory unit='KiB'>1048576</currentMemory>

    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>

Oh little me, said the domain, what will I do with so little memory. If I only had a few megabytes more. But the old admin noticed the whimpering, barely audible to untrained human ear. And good admin he was, he gave the domain yet more memory. But the old NUMA topology witch forbade to allocate more memory on the node zero. So he decided to allocate it on a different node:

    virsh # numatune little_domain --nodeset 0-1
    virsh # setmem little_domain 2355200

The little domain was happy. For a while. Until bad, sharp teeth shaped creature came. Every process in the system was afraid of him. The OOM Killer they called him. Oh no, he's after the little domain. There's no escape.

Do you kids know why? Because when the little domain was born, her father, Libvirt, called numa_set_membind(). So even if the admin allowed her to allocate memory from other nodes in the cgroups, the membind() forbid it.

So what's the lesson? Libvirt should rely on cgroups whenever possible and use numa_set_membind() as the last ditch effort.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
This new internal API checks whether a given CGroup controller is available. It is going to be needed later when we need to decide whether to pin domain memory onto NUMA nodes using the cpuset CGroup controller or using numa_set_membind(). Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
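A hedged fragment showing the decision this check enables; priv->cgroup and the controller constant are libvirt internals assumed here, not quoted from the tree:

```c
/* Prefer the cpuset controller, which the daemon can re-tune at runtime;
 * numa_set_membind() in the child is the last-ditch fallback because it
 * cannot be undone from the daemon afterwards. */
if (virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET)) {
    /* write the nodeset to cpuset.mems via cgroups */
} else {
    /* fall back to numa_set_membind() before exec'ing the guest process */
}
```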
-
By Michael Chapman
Currently we check qemuCaps before starting the block job. But qemuCaps isn't available on a stopped domain, which means we get a misleading error message in this case:

    # virsh domstate example
    shut off

    # virsh blockjob example vda
    error: unsupported configuration: block jobs not supported with this QEMU binary

Move the qemuCaps check into the block job so that we are guaranteed the domain is running.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
By Michael Chapman
qemuMigrationCookieAddNBD is usually called from within an async MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job. (The one exception is during the Begin phase when change protection isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries the disk mirroring block job info during a migration. If one of these queries occurs just as the Perform migration cookie is baked, libvirt crashes. Relevant logs are as follows:

        6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
    [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
    [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
    [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
    [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
        6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits on mon->notify. At [2] the request is written out to the monitor socket. At [3] qemuMonitorBlockJobInfo sends its request, and also waits on mon->notify. The reply from the first request is received at [4]. However, qemuMonitorJSONIOProcessLine is not expecting this reply since the second request hadn't completed sending. The reply is dropped and an error is returned.

qemuMonitorIO signals mon->notify twice during its error handling, waking up both of the threads waiting on it. One of them clears mon->msg as it exits qemuMonitorSend; the other crashes:

    qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
    975         while (!mon->msg->finished) {
    (gdb) print mon->msg
    $1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
By Maxim Nestratov
In order to change an existing domain we delete all existing devices and add new ones from scratch. In the case of network devices we should also delete the corresponding virtual networks (if any) before removing the actual devices from the XML. In this patch, we do so by extending prlsdkDoApplyConfig with a new parameter that stands for the old XML, and calling prlsdkDelNet every time the old XML is specified. Signed-off-by: Maxim Nestratov <mnestratov@parallels.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michael Chapman
The close callbacks hash is keyed by a UUID-string, but virCloseCallbacksRun was attempting to remove entries by raw UUID. This patch ensures the callback entries are removed by UUID-string as well.

This bug caused problems when guest migrations were abnormally aborted:

    # timeout --signal KILL 1 \
        virsh migrate example qemu+tls://remote/system \
          --verbose --compressed --live --auto-converge \
          --abort-on-error --unsafe --persistent \
          --undefinesource --copy-storage-all --xml example.xml
    Killed

    # virsh migrate example qemu+tls://remote/system \
        --verbose --compressed --live --auto-converge \
        --abort-on-error --unsafe --persistent \
        --undefinesource --copy-storage-all --xml example.xml
    error: Requested operation is not valid: domain 'example' is not being migrated

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
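A hedged fragment of the fix: format the raw UUID into its string form so the removal key matches the registration key. The hash field name is an assumption for illustration; only virUUIDFormat and virHashRemoveEntry are real libvirt helpers.

```c
char uuidstr[VIR_UUID_STRING_BUFLEN];

virUUIDFormat(vm->def->uuid, uuidstr);
/* remove by the same UUID-string key the entry was registered under,
 * not by the raw 16-byte UUID */
virHashRemoveEntry(closeCallbacks->list, uuidstr);
```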
-
By Michael Chapman
If a VM migration is aborted, a disk mirror may be failed by QEMU before libvirt has a chance to cancel it. The disk->mirrorState remains at _ABORT in this case, and this breaks subsequent mirrorings of that disk. We should instead check the mirrorState directly and transition to _NONE if it is already aborted. Do the check *after* aborting the block job in QEMU to avoid a race. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
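A hedged fragment of the state transition described above; the enum names follow libvirt's virDomainDiskMirrorState and the check is placed after the abort has been issued to QEMU:

```c
/* QEMU may have already failed the mirror on its own; clear the stale
 * _ABORT state so the disk can be mirrored again later. */
if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
    disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;
```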
-
By Michael Chapman
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to indicate an error occurred. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
-
By Michael Chapman
The destination libvirt daemon in a migration may segfault if the client disconnects immediately after the migration has begun:

    # virsh -c qemu+tls://remote/system list --all
     Id    Name                           State
    ----------------------------------------------------
    ...

    # timeout --signal KILL 1 \
        virsh migrate example qemu+tls://remote/system \
          --verbose --compressed --live --auto-converge \
          --abort-on-error --unsafe --persistent \
          --undefinesource --copy-storage-all --xml example.xml
    Killed

    # virsh -c qemu+tls://remote/system list --all
    error: failed to connect to the hypervisor
    error: unable to connect to server at 'remote:16514': Connection refused

The crash is in:

    1531 void
    1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
    1533 {
    1534     qemuDomainObjPrivatePtr priv = obj->privateData;
    1535     qemuDomainJob job = priv->job.active;
    1536
    1537     priv->jobs_queued--;

Backtrace:

    #0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
    #1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
    #2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
    #3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
    #4 in qemuConnectClose at qemu/qemu_driver.c:1154
    ...

qemuDomainRemoveInactive calls virDomainObjListRemove, which in this case is holding the last remaining reference to the domain. qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain object has been freed and poisoned by then.

This patch bumps the domain's refcount until qemuDomainRemoveInactive has completed. We also ensure qemuProcessAutoDestroy does not return the domain to virCloseCallbacksRun to be unlocked in this case.

There is similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy (which call virDomainObjListRemove directly).

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
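A hedged fragment of the refcount pattern described above (qemu driver internals assumed): hold an extra reference so the object outlives the list removal that may drop the last one.

```c
virObjectRef(vm);                        /* keep vm alive across removal */
virDomainObjListRemove(driver->domains, vm);
qemuDomainObjEndJob(driver, vm);         /* safe: vm has not been freed  */
virObjectUnref(vm);                      /* may now release the last ref */
```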
-
By Michal Privoznik
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
==19015== 8 bytes in 1 blocks are definitely lost in loss record 34 of 1,049
==19015==    at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x4C2C32F: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52AD888: virReallocN (viralloc.c:245)
==19015==    by 0x52AD97E: virExpandN (viralloc.c:294)
==19015==    by 0x52ADC51: virInsertElementsN (viralloc.c:436)
==19015==    by 0x5335864: virDomainVirtioSerialAddrSetAddController (domain_addr.c:816)
==19015==    by 0x53358E0: virDomainVirtioSerialAddrSetAddControllers (domain_addr.c:839)
==19015==    by 0x1DD5513B: qemuDomainAssignVirtioSerialAddresses (qemu_command.c:1422)
==19015==    by 0x1DD55A6E: qemuDomainAssignAddresses (qemu_command.c:1711)
==19015==    by 0x1DDA5818: qemuProcessStart (qemu_process.c:4616)
==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52AD74B: virAlloc (viralloc.c:144)
==19015==    by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015==    by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015==    by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015==    by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015==    by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015==    by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015==    by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015==    by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 07 April 2015, 4 commits
-
-
By Erik Skultety
The 'checkPool' callback was originally part of the storageDriverAutostart function, but the pools need to be checked earlier, during the initialization phase; otherwise we can't start a domain which mounts a volume after the libvirtd daemon has been restarted. This is because qemuProcessReconnect is called earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize. We also need a valid 'conn' reference to be able to execute 'refreshPool' during the initialization phase. Though it isn't available until storageDriverAutostart, all of our storage backends ignore the 'conn' pointer, except for RBD; but RBD doesn't support the 'checkPool' callback, so it's safe to pass conn = NULL in this case. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
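A rough sketch of the initialization-time loop described above; the callback signatures and field names are approximated from the commit text, with conn passed as NULL because the backends that implement checkPool ignore it:

```c
size_t i;

for (i = 0; i < driver->pools.count; i++) {
    virStoragePoolObjPtr pool = driver->pools.objs[i];
    virStorageBackendPtr backend = virStorageBackendForType(pool->def->type);
    bool active = false;

    /* ask the backend whether the pool is already usable (e.g. mounted) */
    if (backend->checkPool &&
        backend->checkPool(pool, &active) < 0)
        continue;

    /* refresh volumes now so reconnecting domains can find them */
    if (active && backend->refreshPool(NULL, pool) < 0)
        active = false;

    pool->active = active;
}
```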
-
By Erik Skultety
These functions operate exactly the same as their network equivalents virNetworkLoadAllState, virNetworkLoadState.
-
By Erik Skultety
This patch introduces a new virStorageDriverState element, stateDir. It also adds the necessary changes to storageStateInitialize so that directory initialization becomes more generic.
-
By Shivaprasad G Bhat
nodedev-detach can report the name of the domain using the device, just the way nodedev-reattach does. Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
-