- 19 April 2012, 1 commit
-
-
By Eric Blake
The sequence: long long val; if ((long long) val != val) is dead code. * src/util/util.c (virStrToLong_ll, virStrToLong_ull): Remove useless cast.
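For context, a minimal sketch (hypothetical helpers, not the libvirt functions themselves) of why the check is dead in the long long variants but meaningful in the narrowing variants:

```c
/* Illustration only: the removed check compares a long long against an
 * identical long long, so it can never fail; the same round-trip check is
 * useful when narrowing to a smaller type. */
static int
check_ll(long long val)
{
    /* Dead code: val already is a long long, the cast changes nothing. */
    if ((long long) val != val)
        return -1;
    return 0;
}

static int
check_int(long long wide, int *result)
{
    /* Meaningful variant: the round trip detects values that do not fit. */
    int narrow = (int) wide;
    if ((long long) narrow != wide)
        return -1;
    *result = narrow;
    return 0;
}
```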
-
- 18 April 2012, 4 commits
-
-
By D. Herrendoerfer
Currently, upon migration a callback is created when an 802.1qbg link is set to PREASSOCIATE. This should not happen, because PREASSOCIATE is a no-op on most switches and does not lead to an ASSOCIATE state. This patch only creates callbacks when CREATE or RESTORE is requested. Migration and libvirtd restart scenarios are already handled elsewhere. Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>
-
By Stefan Berger
The below patch fixes the following memory leak:
==20624== 24 bytes in 2 blocks are definitely lost in loss record 532 of 1,867
==20624==    at 0x4A05E46: malloc (vg_replace_malloc.c:195)
==20624==    by 0x38EC27FC01: strdup (strdup.c:43)
==20624==    by 0x4EB6BA3: virDomainChrSourceDefCopy (domain_conf.c:1122)
==20624==    by 0x495D76: qemuProcessFindCharDevicePTYs (qemu_process.c:1497)
==20624==    by 0x498321: qemuProcessWaitForMonitor (qemu_process.c:1258)
==20624==    by 0x49B5F9: qemuProcessStart (qemu_process.c:3652)
==20624==    by 0x468B5C: qemuDomainObjStart (qemu_driver.c:4753)
==20624==    by 0x469171: qemuDomainStartWithFlags (qemu_driver.c:4810)
==20624==    by 0x4F21735: virDomainCreate (libvirt.c:8153)
==20624==    by 0x4302BF: remoteDispatchDomainCreateHelper (remote_dispatch.h:852)
==20624==    by 0x4F72C14: virNetServerProgramDispatch (virnetserverprogram.c:416)
==20624==    by 0x4F6D690: virNetServerHandleJob (virnetserver.c:164)
==20624==    by 0x4E8F43D: virThreadPoolWorker (threadpool.c:144)
==20624==    by 0x4E8EAB5: virThreadHelper (threads-pthread.c:161)
==20624==    by 0x38EC606CCA: start_thread (pthread_create.c:301)
==20624==    by 0x38EC2E0C2C: clone (clone.S:115)
-
By Eric Blake
Most of our errors complaining about an inability to support a particular action due to qemu limitations used CONFIG_UNSUPPORTED, but we had a few outliers. Reported by Jiri Denemark. * src/qemu/qemu_command.c (qemuBuildDriveDevStr): Prefer CONFIG_UNSUPPORTED. * src/qemu/qemu_driver.c (qemuDomainReboot) (qemuDomainBlockJobImpl): Likewise. * src/qemu/qemu_hotplug.c (qemuDomainAttachPciControllerDevice): Likewise. * src/qemu/qemu_monitor.c (qemuMonitorTransaction) (qemuMonitorBlockJob, qemuMonitorSystemWakeup): Likewise.
-
By Jean-Baptiste Rouault
Passing a NULL pointer to the IMachine::delete VirtualBox API causes VBoxSVC to raise an assertion. This patch passes an empty array instead.
-
- 17 April 2012, 7 commits
-
-
By Osier Yang
Allow a domain XML that has "cpuset" specified but no "placement" to be parsed; in that case the "placement" mode is set to "static".
-
By Osier Yang
The objects (domain, pool, network, etc.) used for testing are defined/started each time a connection to the test driver is opened, so the UUIDs for the objects are regenerated each time, with different values. e.g.
% for i in {1..3}; do ./tools/virsh --connect test:///default dumpxml test | grep uuid; done
<uuid>a1b6ee1f-97de-f0ee-617a-0cdb74947df5</uuid>
<uuid>ee68d7d2-3eb9-593e-2769-797ce1f4c4aa</uuid>
<uuid>fecb1d3a-918a-8412-e534-76192cf32b18</uuid>
This is a latent bug that can cause operations like the following to fail:
$ virsh -c test:///default dumpxml test > test.xml
[ some modifications; even though editing is not officially supported, it should work ]
$ virsh -c test:///default define test.xml
This patch uses a fixed UUID for the objects which support one (domain, pool, network).
-
By Osier Yang
An "ide-drive" device can be either a hard disk or a CD-ROM: if ",media=cdrom" is specified for the backend it is a CD-ROM, otherwise it is a hard disk. Upstream qemu split "ide-drive" into "ide-hd" and "ide-cd" in commit 1f56e32, and ",media=cdrom" is no longer required for ide-cd. "ide-drive" is still supported for backwards compatibility, but we should clearly move forward.
-
By Osier Yang
A "scsi-disk" device can be either a hard disk or a CD-ROM: if ",media=cdrom" is specified for the backend it is a CD-ROM, otherwise it is a hard disk. Upstream qemu split "scsi-disk" into "scsi-hd" and "scsi-cd" in commit b443ae, and ",media=cdrom" is no longer required for scsi-cd. "scsi-disk" is still supported for backwards compatibility, but we should clearly move forward.
-
By Jan Kiszka
If console[0] is an alias for serial[0], do not force the former to have a PTY source type. Doing so breaks serial consoles on stdio and makes no sense. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
By Stefan Bader
When using the xm/xend stack to manage instances, there is a bug that makes the emulated interfaces unusable when the vif config contains type=ioemu. The current code already has a special quirk to omit this keyword if no specific model is given for the emulated NIC (defaulting to rtl8139). Essentially this works because, regardless of the type argument, the Xen stack always creates both emulated and paravirt interfaces and lets the guest decide which one to use. So neither the xl nor the xm stack actually requires the type keyword for emulated NICs. Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
By Osier Yang
It should be 0.9.12 instead of 0.9.11.
-
- 16 April 2012, 6 commits
-
-
By Michal Privoznik
Currently, we have 3 boolean arguments we have to pass to qemuProcessStart(). As libvirt grows it is harder and harder to remember them and their position. Therefore we should switch to flags instead.
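A sketch of the general refactoring pattern the commit describes; the enum values and function names below are illustrative stand-ins, not libvirt's actual identifiers:

```c
#include <stdbool.h>

/* Illustration only: replace positional booleans with a flags bitmask so
 * call sites stay readable as more options are added. */
typedef enum {
    PROCESS_START_COLD        = 1 << 0,
    PROCESS_START_PAUSED      = 1 << 1,
    PROCESS_START_AUTODESTROY = 1 << 2,
} processStartFlags;

/* Before: easy to transpose the booleans at the call site. */
int processStartOld(void *vm, bool cold, bool paused, bool autodestroy);

/* After: the call site documents itself, e.g.
 *   processStartNew(vm, PROCESS_START_PAUSED | PROCESS_START_AUTODESTROY); */
int processStartNew(void *vm, unsigned int flags);
```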
-
By Cole Robinson
lvcreate wants the parent pool's name, not the pool path; lvchange and lvremove want the LV specified as $vgname/$lvname. This largely worked before because these commands strip off a leading /dev, but https://bugzilla.redhat.com/show_bug.cgi?id=714986 is from a user with a 'nested VG' who was having problems. I couldn't find any info on nested LVM and the reporter never responded, but I reproduced with XML that specified a valid source name and set the target path to a symlink.
-
By Osier Yang
-
By Osier Yang
As explained in the previous patch, numad balances the affinity dynamically, so reflecting the cpuset from numad back the first time doesn't make much sense and may just cause confusion.
-
By Osier Yang
numad returns a NUMA node list rather than a CPU list, so this patch converts the node list into a cpumap before setting the affinity. Otherwise the domain processes would be pinned only to CPU[$numa_cell_num], which causes significant performance losses. Also, because numad balances the affinity dynamically, reflecting the cpuset from numad back into the XML doesn't make much sense and may just confuse users, so the better way is not to reflect it back; in that case it is also better to ignore the cpuset when parsing the XML. The code that updated the cpuset is removed in this patch incidentally, and a follow-up patch will ignore the manually specified "cpuset" if "placement" is "auto" and update the documentation.
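A rough sketch of the conversion step being described; the per-node CPU lookup is abstracted behind a hypothetical callback, so this is not the code libvirt actually uses:

```c
#include <stdlib.h>
#include <string.h>

/* Expand a list of NUMA node IDs into a CPU bitmap covering every CPU in
 * those nodes, so affinity is applied to CPUs rather than to the node
 * numbers themselves. */
typedef int (*node_cpus_fn)(int node, unsigned char *cpumap, int maplen);

static int
nodes_to_cpumap(const int *nodes, size_t nnodes,
                unsigned char *cpumap, int maplen,
                node_cpus_fn node_cpus)
{
    unsigned char *tmp = calloc(maplen, 1);
    if (!tmp)
        return -1;

    memset(cpumap, 0, maplen);
    for (size_t i = 0; i < nnodes; i++) {
        if (node_cpus(nodes[i], tmp, maplen) < 0) {
            free(tmp);
            return -1;
        }
        for (int b = 0; b < maplen; b++)
            cpumap[b] |= tmp[b];   /* union of the CPUs of all listed nodes */
    }

    free(tmp);
    return 0;
}
```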
-
By Guido Günther
Specifically nodeGetCPUStats, nodeGetMemoryStats, nodeGetCellsFreeMemory and nodeGetFreeMemory.
-
- 14 April 2012, 1 commit
-
-
By Philipp Hahn
The linux-2.6.32 kernel headers do not yet define IFLA_VF_MAX and friends, which breaks compiling a new libvirt on old systems like Debian Squeeze. (I also had to add --without-macvtap --disable-werror --without-virtualport to ./configure to get it to compile.) Signed-off-by: Philipp Hahn <hahn@univention.de>
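A sketch of the usual guard for this kind of header-version problem; the function below is purely illustrative and the actual fix may have used a configure-time check instead:

```c
#include <linux/if_link.h>

#ifdef IFLA_VF_MAX
static int
parse_vf_attributes(void)
{
    /* Headers are new enough: parse the per-VF IFLA_VF_* netlink
     * attributes here. */
    return 0;
}
#else
static int
parse_vf_attributes(void)
{
    /* Headers predate IFLA_VF_MAX (e.g. linux-2.6.32): report the
     * feature as unsupported instead of failing the build. */
    return -1;
}
#endif
```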
-
- 13 April 2012, 4 commits
-
-
By Michal Privoznik
Although it should be harmless to do: disk = disk = def->disks[i] some not-so-wise compilers may complain about it. Besides, such a double assignment is useless here.
-
By Michal Privoznik
If the placement mode is AUTO, the char *cpumap or char *nodeset buffers are leaked on some return paths.
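The usual shape of the fix for this class of leak, shown as a sketch (the helper functions are hypothetical and the real patch may differ): every exit path runs through a single cleanup label that frees both buffers.

```c
#include <stdlib.h>

char *query_advisory_nodeset(void);            /* hypothetical helper */
char *nodeset_to_cpumap(const char *nodeset);  /* hypothetical helper */

static int
pin_with_auto_placement(void)
{
    int ret = -1;
    char *cpumap = NULL;
    char *nodeset = NULL;

    if (!(nodeset = query_advisory_nodeset()))
        goto cleanup;

    if (!(cpumap = nodeset_to_cpumap(nodeset)))
        goto cleanup;            /* this path previously leaked nodeset */

    /* ... apply the affinity ... */
    ret = 0;

 cleanup:
    free(cpumap);
    free(nodeset);
    return ret;
}
```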
-
By Stefan Bader
On newer xend (v3.x and later) no state and domid are reported for inactive domains. When initially creating connections this is handled in various places by assigning domain->id = -1. But once an instance has been running, the id is set to the current domain id, and it does not change when the instance is shut down. So when querying the domain info, the hypervisor driver, which gets asked first, will indicate it cannot find information; then the xend driver is asked and will set the status to NOSTATE because it checks for the -1 domain id. Checking domain/status for 0 seems to be more reliable for that. One note: I am not sure whether domain->id should also be set back to -1 whenever any sub-driver thinks the instance is no longer running. BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=746007 BugLink: http://bugs.launchpad.net/bugs/929626 Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
By D. Herrendoerfer
This patch adds a netlink callback when migrating a VEPA-enabled virtual machine. It fixes a bug where a VM would not request a port association after it had been cleared by lldpad. This patch requires the latest git version of lldpad to work. Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>
-
- 12 April 2012, 6 commits
-
-
By Michal Privoznik
If dynamic_ownership is off and we are creating a file on NFS, we force a chown. This will fail, as chown/chmod are not supported on NFS. However, with dynamic_ownership disabled we are not required to do any chown in the first place.
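A minimal sketch of the behaviour described, with illustrative names (the real security-driver code is considerably more involved): skip the ownership change entirely when dynamic_ownership is off, so the unsupported chown on NFS is never attempted.

```c
#include <stdbool.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>

static int
maybe_chown(const char *path, uid_t uid, gid_t gid, bool dynamic_ownership)
{
    if (!dynamic_ownership)
        return 0;                 /* nothing to change, so don't try */

    if (chown(path, uid, gid) < 0)
        return -errno;            /* e.g. EPERM on an NFS share */
    return 0;
}
```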
-
By Eric Blake
In my testing, I was able to provoke an odd block pull failure:
$ virsh blockpull dom vda --bandwidth 10000
error: Requested operation is not valid: No active operation on device: drive-virtio-disk0
merely by using gdb to artificially delay the block job set-speed until after the pull had already finished. But in reality that should be a success, since the pull finished before we had a chance to set the speed. Furthermore, using a double job lock is not only annoying, but a bug in itself - if you do parallel virDomainBlockRebase calls and hit the race window just right, the first call grabs the VM job to start a fast block job, then the second call grabs the VM job to start a long-running job with unspecified speed, then the first call finally regrabs the VM job and sets the speed, which ends up running the second job under the speed from the first call. By consolidating things into a single job, we avoid opening that race, and also reduce the time between starting the job and changing the speed, making it less likely that the speed change happens after block job completion in the first place. * src/qemu/qemu_monitor.h (BLOCK_JOB_CMD): Add new mode. * src/qemu/qemu_driver.c (qemuDomainBlockRebase): Move secondary job call... (qemuDomainBlockJobImpl): ...here, for fewer locks. * src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJob): Change return value on new internal mode.
-
By Eric Blake
Without the VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC flag, libvirt will internally poll using qemu's "query-block-jobs" API and will not return until the operation has been completed. API users are advised that this operation is unbounded and that further interaction with the domain during this period may block. Future patches may refactor things to allow other queries in parallel with this polling. For older qemu, we synthesize the cancellation event, since qemu won't generate it. The choice of polling duration copies the code in qemu_migration.c. Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: Stefan Hajnoczi <stefanha@gmail.com> Signed-off-by: Eric Blake <eblake@redhat.com>
-
By Eric Blake
Probably in the noise, but this will let us scale more efficiently as we learn to recognize even more qemu events. * src/qemu/qemu_monitor_json.c (eventHandlers): Sort. (qemuMonitorEventCompare): New helper function. (qemuMonitorJSONIOProcessEvent): Optimize event lookup.
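A sketch of the lookup scheme the commit describes (structure and names simplified): keep the handler table sorted by event name and replace the linear scan with bsearch().

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    const char *name;               /* event name as sent by qemu */
    void (*handler)(void *data);    /* callback to run for that event */
} eventHandler;

static int
eventCompare(const void *key, const void *elem)
{
    return strcmp(key, ((const eventHandler *) elem)->name);
}

static const eventHandler *
findHandler(const eventHandler *table, size_t ntable, const char *name)
{
    /* The table must be kept sorted by name for bsearch() to be correct. */
    return bsearch(name, table, ntable, sizeof(*table), eventCompare);
}
```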
-
By Adam Litke
Block job cancellation can take a while. Now that upstream qemu 1.1 has asynchronous block job cancellation, we want to expose that to the user. Therefore, the following updates are made to the virDomainBlockJob API: A new block job event type VIR_DOMAIN_BLOCK_JOB_CANCELED is managed by libvirt. Regardless of the flags used with virDomainBlockJobAbort, this event will be raised: 1. when using synchronous block_job_cancel (the event will be synthesized by libvirt), and 2. whenever it is received from qemu (via asynchronous block-job-cancel). Note that the event may be detected by libvirt even before virDomainBlockJobAbort completes (always true when it is synthesized, but also possible if cancellation was fast). A new extension flag VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC is added to the virDomainBlockJobAbort API. When enabled, this function will allow (but not require) asynchronous operation (i.e. it returns as soon as possible, which might be before the job has actually been canceled). When the API is used in this mode, it is the responsibility of the caller to wait for a VIR_DOMAIN_BLOCK_JOB_CANCELED event or to poll via the virDomainGetBlockJobInfo API to check the cancellation status. This patch also exposes the new flag through virsh and makes virsh slightly easier to use (--async implies --abort, and lack of any options implies --info), although it leaves the qemu implementation for later patches. Signed-off-by: Adam Litke <agl@us.ibm.com> Cc: Stefan Hajnoczi <stefanha@gmail.com> Signed-off-by: Eric Blake <eblake@redhat.com>
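A caller-side sketch of the new flag using the public API; polling is shown here only because it is self-contained, and in practice waiting for the VIR_DOMAIN_BLOCK_JOB_CANCELED event is the nicer option.

```c
#include <unistd.h>
#include <libvirt/libvirt.h>

static int
cancel_block_job(virDomainPtr dom, const char *disk)
{
    virDomainBlockJobInfo info;

    /* Ask for an asynchronous abort; the call may return before the job
     * has actually been canceled. */
    if (virDomainBlockJobAbort(dom, disk, VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC) < 0)
        return -1;

    /* Poll until no job is reported for this disk any more. */
    for (;;) {
        int rc = virDomainGetBlockJobInfo(dom, disk, &info, 0);
        if (rc < 0)
            return -1;            /* query failed */
        if (rc == 0)
            break;                /* job is gone: cancellation finished */
        usleep(100 * 1000);
    }
    return 0;
}
```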
-
By Eric Blake
RHEL 6.2 was released with an early version of block jobs, which only worked on the qed file format, where the commands were spelled with underscores (contrary to QMP style), and where 'block_job_cancel' was synchronous and did not trigger an event. The upcoming qemu 1.1 release has fixed these shortcomings [1][2]: the commands now work on multiple file types, are spelled with dashes, and 'block-job-cancel' is asynchronous and emits an event upon conclusion. [1] qemu commit 370521a1d6f5537ea7271c119f3fbb7b0fa57063 [2] https://lists.gnu.org/archive/html/qemu-devel/2012-04/msg01248.html This patch recognizes the new spellings, and fixes virDomainBlockRebase to give a graceful error when talking to a too-old qemu on a partial rebase attempt. Fixes for the new semantics will come later. This patch also removes a bogus ATTRIBUTE_NONNULL mistakenly added in commit 10ec36e2. * src/qemu/qemu_capabilities.h (QEMU_CAPS_BLOCKJOB_SYNC) (QEMU_CAPS_BLOCKJOB_ASYNC): New bits. * src/qemu/qemu_capabilities.c (qemuCaps): Name them. * src/qemu/qemu_monitor_json.c (qemuMonitorJSONCheckCommands): Set them. (qemuMonitorJSONBlockJob): Manage both command names. (qemuMonitorJSONDiskSnapshot): Minor formatting fix. * src/qemu/qemu_monitor.h (qemuMonitorBlockJob): Alter signature. * src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJob): Likewise. * src/qemu/qemu_monitor.c (qemuMonitorBlockJob): Pass through capability bit. * src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Update callers.
-
- 11 April 2012, 4 commits
-
-
By Peter Krempa
The new safe console handling introduced a possibility of deadlocking the qemu driver when a new console connection forcibly disconnects a previous console stream that belongs to an already closed connection. virStreamFree subsequently calls virReleaseConnect, which tries to lock the driver while discarding the connection, but the driver was already locked in qemuDomainOpenConsole. Backtrace of the deadlocked thread:
0  0x00007f66e5aa7f14 in __lll_lock_wait () from /lib64/libpthread.so.0
1  0x00007f66e5aa3411 in _L_lock_500 () from /lib64/libpthread.so.0
2  0x00007f66e5aa322a in pthread_mutex_lock () from /lib64/libpthread.so.0
3  0x0000000000462bbd in qemudClose ()
4  0x00007f66e6e178eb in virReleaseConnect () from /usr/lib64/libvirt.so.0
5  0x00007f66e6e19c8c in virUnrefStream () from /usr/lib64/libvirt.so.0
6  0x00007f66e6e3d1de in virStreamFree () from /usr/lib64/libvirt.so.0
7  0x00007f66e6e09a5d in virConsoleHashEntryFree () from /usr/lib64/libvirt.so.0
8  0x00007f66e6db7282 in virHashRemoveEntry () from /usr/lib64/libvirt.so.0
9  0x00007f66e6e09c4e in virConsoleOpen () from /usr/lib64/libvirt.so.0
10 0x00000000004526e9 in qemuDomainOpenConsole ()
11 0x00007f66e6e421f1 in virDomainOpenConsole () from /usr/lib64/libvirt.so.0
12 0x00000000004361e4 in remoteDispatchDomainOpenConsoleHelper ()
13 0x00007f66e6e80375 in virNetServerProgramDispatch () from /usr/lib64/libvirt.so.0
14 0x00007f66e6e7ae11 in virNetServerHandleJob () from /usr/lib64/libvirt.so.0
15 0x00007f66e6da897d in virThreadPoolWorker () from /usr/lib64/libvirt.so.0
16 0x00007f66e6da7ff6 in virThreadHelper () from /usr/lib64/libvirt.so.0
17 0x00007f66e5aa0c5c in start_thread () from /lib64/libpthread.so.0
18 0x00007f66e57e7fcd in clone () from /lib64/libc.so.6
* src/qemu/qemu_driver.c: qemuDomainOpenConsole() -- unlock the qemu driver right after acquiring the domain object
-
By Jiri Denemark
qemuDomainObjEnterMonitor{,WithDriver} should not be called from async jobs; only the EnterMonitorAsync variant is allowed.
-
By Jiri Denemark
In case an API fails with "cannot acquire state change lock", searching for the API that possibly forgot to end its job is not always easy. Let's keep track of the job owner and print it out for easier identification.
-
By Jiri Denemark
As reported by Daniel Berrangé, we have a huge performance regression for virDomainGetInfo() due to the change which makes virDomainEndJob() save the XML status file every time it is called. Before that change, 2000 calls to virDomainGetInfo() took ~2.5 seconds. After it, 2000 calls to virDomainGetInfo() take 2 *minutes* 45 secs. We made the change to be able to recover from a libvirtd restart in the middle of a job. However, only destroy and async jobs are taken care of, so it makes more sense to only save the domain state XML when these jobs are started/stopped.
-
- 10 April 2012, 4 commits
-
-
By Eric Blake
I noticed these compiler warnings when building for the s390 architecture. * src/node_device/node_device_udev.c (udevDeviceMonitorStartup): Mark unused variable. * src/nodeinfo.c (linuxNodeInfoCPUPopulate): Avoid unused variable.
-
By Daniel P. Berrange
* src/qemu/qemu_command.c: Wire up -bios with <loader> * tests/qemuxml2argvdata/qemuxml2argv-bios.args, tests/qemuxml2argvdata/qemuxml2argv-bios.xml: Expand existing BIOS test case to cover <loader>
-
By Daniel P. Berrange
-
By MATSUDA, Daiki
I found a typo in the UML driver. MATSUDA Daiki
-
- 06 April 2012, 3 commits
-
-
By Eric Blake
Leak introduced in commit 0436d328. If we allocate an actions array, but fail early enough to never consume it with the qemu monitor transaction call, we leak memory. But our semantics of making the transaction command free the caller's memory is awkward; avoiding the memory leak would require making every intermediate function in the call chain check for errors. It is much easier to fix things so that the function that allocates also frees, while the call chain leaves the caller's data intact. To do that, I had to hack our JSON data structure to make it easy to protect a portion of an arbitrary JSON tree from being freed. * src/util/json.h (virJSONType): Name the enum. (_virJSONValue): New field. * src/util/json.c (virJSONValueFree): Use it to protect a portion of an array. * src/qemu/qemu_monitor_json.c (qemuMonitorJSONTransaction): Avoid freeing caller's data. * src/qemu/qemu_driver.c (qemuDomainSnapshotCreateDiskActive): Free actions array on failure.
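A much-simplified sketch of the "protect" idea (the real virJSONValue has more members and value types): a flag on a node tells the free routine to stop there, so a caller-owned subtree survives when the surrounding wrapper is freed.

```c
#include <stdlib.h>

typedef struct json_value json_value;
struct json_value {
    int protect;          /* non-zero: this subtree is still owned elsewhere */
    json_value **items;   /* array children; other value types omitted */
    size_t nitems;
};

static void
json_value_free(json_value *val)
{
    if (!val || val->protect)
        return;                       /* leave the caller's data intact */

    for (size_t i = 0; i < val->nitems; i++)
        json_value_free(val->items[i]);
    free(val->items);
    free(val);
}
```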
-
By Michal Privoznik
We can tell qemuDomainSnapshotFSThaw whether we want it to report errors or not. However, if we don't want it to and an error has already been set by a previous qemuReportError(), we must keep a copy of that error, not just a pointer to it. Otherwise it gets overwritten if FSThaw reports an error.
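A sketch of the save/restore pattern this kind of fix relies on, using libvirt's public error helpers; do_thaw() is a hypothetical stand-in for the thaw step, not the actual driver code.

```c
#include <libvirt/virterror.h>

void do_thaw(void);   /* hypothetical stand-in for the FSThaw call */

static void
thaw_preserving_error(void)
{
    /* Take a deep copy of the current error; a bare pointer would be
     * clobbered as soon as the thaw step reports its own error. */
    virErrorPtr orig_err = virSaveLastError();

    do_thaw();

    if (orig_err) {
        virSetError(orig_err);        /* restore the original error */
        virFreeError(orig_err);
    }
}
```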
-
By Stefan Bader
This causes an implicit vkbd device to be added, which takes 6 minutes to finally fail being initialized in the guest. http://lists.xen.org/archives/html/xen-devel/2012-04/msg00409.html Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-