- 16 March 2015, 5 commits
-
Committed by John Ferlan
Since both the Vcpu and IOThreads code use the same APIs, alter the naming of the APIs to remove the "Vcpu"-specific reference.
-
Committed by John Ferlan
As pointed out by jtomko in his review of the IOThreads pinning code (http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html), there are some comments sprinkled in indicating that IOThreads were using the same structure as the VcpuPin code... This is the first patch of a few that will change the virDomainVcpuPin* structures and code to just virDomainPin* - starting with the data structure naming...
-
Committed by Peter Krempa
As there are two possible approaches to defining a domain's memory size - one used with legacy, non-NUMA VMs, configured in the <memory> element, and a per-node approach on NUMA machines - the user needs to make sure that both are specified correctly in the NUMA case. To avoid this burden on the user I'd like to replace the NUMA case with automatic totaling of the memory size. To achieve this I need to replace direct access to the virDomainMemtune's 'max_balloon' field with two separate getters depending on the desired size. The two sizes are needed because: 1) the startup memory size doesn't include memory modules in some hypervisors; 2) after startup these count toward the usable memory size. Note that the comments for the functions are future-aware and document state that will be present after a few later patches.
-
Committed by Peter Krempa
Surprisingly, we did not grab a VM job when a block job finished, and we'd happily rewrite the backing chain data. This made it possible to crash libvirt when two backing-chain updates were queued in quick succession, among other badness. To fix it, add yet another handler to the helper thread that handles monitor events that require a job.
-
Committed by Peter Krempa
-
- 03 March 2015, 4 commits
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1197600

So, libvirt uses a pid file to track the pid of started qemus. Whenever a domain is started, its pid is put into the corresponding pid file. The pid file path is generated based on the domain name and stored in the domain object internals. However, it's not stored in the status XML and is therefore lost on daemon restarts. Hence, later, when the domain is being shut down, the daemon does not know which pid file to unlink, and the correct pid file is left behind. To avoid this, let's generate the pid file path again in qemuProcessReconnect().

Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
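A hedged sketch of the idea: the path is derived purely from the state directory and the domain name, so it can simply be derived again on reconnect rather than persisted. The real code uses libvirt's virPidFileBuildPath(); the helper below only mirrors its shape and is not the actual implementation.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

/* Rebuild the pid file path the same way it was built at startup. */
static char *
buildPidfilePath(const char *stateDir, const char *domName)
{
    char *path = NULL;

    if (asprintf(&path, "%s/%s.pid", stateDir, domName) < 0)
        return NULL;
    return path; /* caller frees */
}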
-
Committed by Pavel Hrdina
Instead of checking defaultMode for every channel that has no mode configured, test it only once, outside of the channel loop. This fixes a bug where, if for example all possible channels are set to insecure but defaultMode is set to secure, we wouldn't auto-generate a TLS port, which results in a failure while starting a guest.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143832
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
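A simplified sketch of the hoisted check, with illustrative types (not the real SPICE graphics definition): before, defaultMode was only consulted for channels lacking an explicit mode, so a secure defaultMode went unnoticed when every channel was explicitly configured.

#include <stdbool.h>
#include <stddef.h>

typedef enum { MODE_ANY, MODE_SECURE, MODE_INSECURE } ChannelMode;

static bool
needTLSPort(ChannelMode defaultMode, const ChannelMode *modes, size_t n)
{
    size_t i;

    if (defaultMode == MODE_SECURE) /* checked once, outside the loop */
        return true;

    for (i = 0; i < n; i++)
        if (modes[i] == MODE_SECURE)
            return true;

    return false;
}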
-
Committed by Pavel Hrdina
We have two different places that need to be updated when touching the code for allocating SPICE ports. Add a bool option to the 'qemuProcessSPICEAllocatePorts' function to switch between true and fake allocation, so we can also use this function in qemu_driver to generate the native domain definition.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Martin Kletzander
Since adding the support for scheduler policy settings in commit 8680ea97, there have been two enums with the same information. That was caused by rewriting the patch after the first draft. Found thanks to clang, but there was no impact whatsoever.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 21 February 2015, 1 commit
-
Committed by Peter Krempa
The structure will gradually become the only place for NUMA-related config, thus rename it appropriately.
-
- 20 February 2015, 1 commit
-
Committed by Michal Privoznik
It will come in handy in the near future when we filter some capabilities based on it.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 19 February 2015, 2 commits
-
Committed by Michal Privoznik
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has completed (in qemuMonitorJSONHandleBlockJobImpl()). To paint a better picture, the event looks something like this:

{"timestamp": {"seconds": 1423582694, "microseconds": 372666},
 "event": "BLOCK_JOB_COMPLETED",
 "data": {"device": "drive-virtio-disk0",
          "len": 8412790784,
          "offset": 409993216,
          "speed": 8796093022207,
          "type": "mirror",
          "error": "No space left on device"}}

If "len" does not equal "offset" it's considered an error, and we can clearly see the "error" field filled in. However, later in the event processing this case was handled no differently from the case of the job being aborted via a separate API. It's time we start differentiating these two, because of future work.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated every time. The callback code handling the events checks whether a blockjob was started via our public APIs prior to setting the mirrorState. However, some block jobs may be started internally (e.g. during storage migration), in which case we don't bother with setting disk->mirror (there's nothing we can set it to anyway) or the other fields. But it will come in handy if we update the mirrorState in these cases too. The event wasn't delivered just for fun - we've started the job after all. So, in this commit, the mirrorState is set to whatever job status we've obtained. Of course, there are some actions we want to perform on some statuses. But instead of an if {} else if {} else {} ... enumeration, let's move to switch().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
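A minimal sketch of the switch-based shape, with illustrative stand-in enums rather than libvirt's real block job status and mirror-state values:

typedef enum { JOB_COMPLETED, JOB_FAILED, JOB_CANCELED, JOB_READY } JobStatus;
typedef enum { MIRROR_NONE, MIRROR_READY, MIRROR_ABORT } MirrorState;

/* Update the mirror state for every delivered event, even for jobs
 * libvirt did not start itself; per-status actions hang off the switch. */
static MirrorState
updateMirrorState(JobStatus status)
{
    switch (status) {
    case JOB_READY:
        return MIRROR_READY;  /* job reached the synchronized phase */
    case JOB_FAILED:
    case JOB_CANCELED:
        return MIRROR_ABORT;  /* tear-down actions would go here */
    case JOB_COMPLETED:
    default:
        return MIRROR_NONE;   /* job is gone, clear the state */
    }
}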
-
- 13 February 2015, 1 commit
-
Committed by Erik Skultety
We do have a check for a valid per-domain security model; however, we still permit an invalid security model for a domain's devices (those which are specified with a <source> element). This patch introduces a new function, virSecurityManagerCheckAllLabel, which compares user-specified security models against the currently registered security drivers. That being said, it also permits 'none' being specified as a device security model.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165485
Signed-off-by: Ján Tomko <jtomko@redhat.com>
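A sketch of the model check at the heart of such a function, under the assumption that the manager exposes its registered driver names; all names here are illustrative, not libvirt's actual internals:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A device security label is acceptable if it is 'none' (or unset)
 * or matches one of the registered security driver model names. */
static bool
deviceSeclabelModelOK(const char *model,
                      const char *const *registered, size_t nregistered)
{
    size_t i;

    if (model == NULL || strcmp(model, "none") == 0)
        return true;

    for (i = 0; i < nregistered; i++)
        if (strcmp(model, registered[i]) == 0)
            return true;

    return false; /* unknown model: reject the configuration */
}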
-
- 12 February 2015, 2 commits
-
Committed by Daniel P. Berrange
In a previous commit I fixed the incorrect handling of vcpu pids for TCG mode QEMU:

commit b07f3d82
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Thu Dec 18 16:34:39 2014 +0000

    Don't setup fake CPU pids for old QEMU

    The code assumes that def->vcpus == nvcpupids, so when we setup fake
    CPU pids for old QEMU with nvcpupids == 1, we cause the later code to
    read off the end of the array. This has fun results like
    sched_setaffinity(0, ...) which changes libvirtd's own CPU affinity,
    or even better sched_setaffinity($RANDOM, ...) which changes the
    affinity of a random OS process.

The intent was that this would merely disable the ability to set per-vCPU affinity. It should still have been possible to set VM-level host CPU affinity. Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML parser will internally take this and initialize an entry in the def->cputune.vcpupin array for every vCPU. IOW this is implicitly being treated as:

<cputune>
  <vcpupin cpuset='0-1' vcpu='0'/>
  <vcpupin cpuset='0-1' vcpu='1'/>
  <vcpupin cpuset='0-1' vcpu='2'/>
  <vcpupin cpuset='0-1' vcpu='3'/>
</cputune>

Even more fun, the faked cputune elements are hidden from view when querying the live XML, because their cpuset mask is the same as the VM default cpumask. The upshot was that it was impossible to set VM-level CPU affinity. To fix this we must update qemuProcessSetVcpuAffinities so that it only reports a fatal error if the per-vCPU cpu mask is different from the VM-level cpu mask.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
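A compact sketch of the relaxed check, with an illustrative mask type standing in for libvirt's bitmaps: a parser-generated <vcpupin> that merely repeats the VM-level mask is tolerated, and only a genuinely different per-vCPU mask is fatal.

#include <stdbool.h>
#include <stddef.h>

typedef struct { unsigned long bits; } CpuMask;

static bool
maskEqual(const CpuMask *a, const CpuMask *b)
{
    return a->bits == b->bits;
}

/* Returns 0 if the pins are all implicit copies of the VM mask,
 * -1 if real per-vCPU affinity was requested but is unsupported. */
static int
checkVcpuPins(const CpuMask *vmMask, const CpuMask *pins, size_t npins)
{
    size_t i;

    for (i = 0; i < npins; i++) {
        if (maskEqual(vmMask, &pins[i]))
            continue; /* parser-generated entry, nothing to do */
        return -1;    /* genuine per-vCPU affinity: fatal here */
    }
    return 0;
}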
-
Committed by Martin Kletzander
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1178986
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 06 February 2015, 1 commit
-
Committed by Daniel P. Berrange
It is often helpful to know which version of libvirt and QEMU was present when a guest was first launched. Ensure this info is written into the QEMU log file for each guest.
-
- 27 January 2015, 2 commits
-
Committed by Daniel P. Berrange
Record the index of each TAP device created and report them to systemd, so they show up in machinectl status for the VM.
-
Committed by Daniel P. Berrange
The nwfilter driver can rely on its global state instead of the connect private data.
-
- 19 January 2015, 2 commits
-
Committed by Ján Tomko
Depending on the context, either error out if the domain has disappeared in the meantime, or just ignore the value, to allow marking the function as ATTRIBUTE_RETURN_CHECK.
-
Committed by Ján Tomko
https://bugzilla.redhat.com/show_bug.cgi?id=1161024

In the device type-specific functions, exit early if the domain has disappeared, because the cleanup should have been done by qemuProcessStop. Check the return value in processDeviceDeletedEvent and qemuProcessUpdateDevices. Skip the audit and skip removing the device from the live definition, because it has already been cleaned up.
-
- 15 January 2015, 1 commit
-
Committed by Ján Tomko
Make a local copy of the disk alias in qemuProcessInitPasswords, instead of referencing the one in the domain definition, which might get freed if the domain crashes while we're in the monitor. Also copy the memballoon period value.
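A generic illustration of the pattern (stand-in types, not the actual qemuProcessInitPasswords code): take a private copy of a string owned by a structure that may be freed while the domain lock is dropped for a monitor call, instead of holding a pointer into that structure.

#include <stdlib.h>
#include <string.h>

typedef struct { char *alias; } Disk;

static int
useAliasSafely(const Disk *disk, int (*slowMonitorCall)(const char *))
{
    char *alias = strdup(disk->alias); /* private copy */
    int ret;

    if (!alias)
        return -1;

    /* The domain lock would be dropped around this call; disk->alias
     * could be freed by another thread, but our copy stays valid. */
    ret = slowMonitorCall(alias);

    free(alias);
    return ret;
}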
-
- 14 January 2015, 1 commit
-
Committed by Pavel Hrdina
QEMU internally updates the size of video memory if the domain XML provided too low a memory size or there are some dependencies between a QXL device's 'vgamem' and 'ram' sizes. We need to know about the changes and store them in the status XML, so as not to break migration or managedsave across different libvirt versions. The values are loaded only if the "vgamem_mb" property exists for the device. The presence of "vgamem_mb" also tells us that "ram_size" and "vram_size" exist for QXL devices.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
- 21 December 2014, 1 commit
-
Committed by Martin Kletzander
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, which was waiting, fails to get the job, it unrefs the domain object, and because it was the last reference, this causes clearing of the whole domain object. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that). The root cause is that our APIs don't ref the objects they are using and all use the implicit reference that the object has when it is in the domain list. That reference can be removed when the API is waiting for a job. And because each API doesn't do its own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere. This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
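A sketch of the centralized end-of-API helper this describes - the one place where the lookup's extra reference is dropped. It mirrors the shape of qemuDomObjEndAPI as described above and assumes libvirt's internal headers; treat it as illustrative rather than the exact implementation:

void
qemuDomObjEndAPI(virDomainObjPtr *vm)
{
    if (!*vm)
        return;

    virObjectUnlock(*vm); /* release the lock first... */
    virObjectUnref(*vm);  /* ...then drop our reference */
    *vm = NULL;           /* caller must not touch it again */
}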
-
- 19 December 2014, 2 commits
-
Committed by Daniel P. Berrange
Although QMP returns info about vCPU threads in TCG mode, the data it returns is mostly lies. Only the first vCPU has a valid thread_id returned. The thread_id given for the other vCPUs is in fact the main emulator thread. All vCPUs actually run under the same thread in TCG mode. Our vCPU pinning code is not at all able to cope with this, so if you try to set CPU affinity per-vCPU you end up with weird errors:

error: Failed to start domain instance-00000007
error: cannot set CPU affinity on process 24365: Invalid argument

Since few people will care about the performance of TCG with strict CPU pinning, let's just disable that for now, so we get a clear error message:

error: Failed to start domain instance-00000007
error: Requested operation is not valid: cpu affinity is not supported
-
Committed by Daniel P. Berrange
The code assumes that def->vcpus == nvcpupids, so when we setup fake CPU pids for old QEMU with nvcpupids == 1, we cause the later code to read off the end of the array. This has fun results like sched_setaffinity(0, ...) which changes libvirtd's own CPU affinity, or even better sched_setaffinity($RANDOM, ...) which changes the affinity of a random OS process.
-
- 16 December 2014, 2 commits
-
Committed by Martin Kletzander
Thanks to that we don't need to drag the pointer everywhere and future code will get cleaner.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 14 December 2014, 2 commits
-
Committed by Laine Stump
We now have a qemuInterfaceStartDevices() which does the final activation needed for the host-side tap/macvtap devices that are used for qemu network connections. It will soon make sense to have the converse qemuInterfaceStopDevices(), which will undo whatever was done during qemuInterfaceStartDevices(). A function to "stop" a single device has also been added, and is called from the appropriate place in qemuDomainDetachNetDevice(), although this is currently unnecessary - the device is going to be immediately deleted anyway, so any extra "deactivation" will be for naught. The call is included for completeness, though, in anticipation that in the future there may be some required action that *isn't* nullified by deleting the device.

This patch is a part of a more complete fix for: https://bugzilla.redhat.com/show_bug.cgi?id=1081461
-
Committed by Laine Stump
The patch that added qemuInterfaceStartDevices() (upstream commit 82977058) had an extra conditional to prevent calling it if the reason for starting the CPUs was VIR_DOMAIN_RUNNING_UNPAUSED or VIR_DOMAIN_RUNNING_SAVE_CANCELED. This was put in by the author as the result of a reviewer asking whether it was necessary to ifup the interfaces on *all* occasions (because these were the two cases where the CPUs would have already been started (and stopped) once, so the interfaces would already have been ifup'ed). It turns out that, as long as there is no corresponding qemuInterfaceStopDevices() to ifdown the interfaces whenever the CPUs are stopped, neglecting to ifup when the reason is RUNNING_UNPAUSED or RUNNING_SAVE_CANCELED doesn't cause any problems (because it just happens that the interfaces will have already been ifup'ed by a prior call when the CPUs were previously started for some other reason). However, it also doesn't *help*, and there will soon be a qemuInterfaceStopDevices() function which *will* ifdown these interfaces when the guest CPUs are stopped, and once that is done, the interfaces will be left down in some cases when they should be up (for example, if a domain is paused and then unpaused). So, this patch removes the condition in favor of always calling qemuInterfaceStartDevices() when the guest CPUs are started.

This patch (and the aforementioned patch) resolve: https://bugzilla.redhat.com/show_bug.cgi?id=1081461
-
- 11 December 2014, 1 commit
-
Committed by Matthew Rosato
Currently, MAC registration occurs during device creation, which is early enough that, during live migration, you end up with duplicate MAC addresses on still-running source and target devices, even though the target device isn't actually being used yet. This patch proposes to defer MAC registration until right before the guest can actually use the device - in other words, right before starting the guest CPUs.

Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Laine Stump <laine@laine.org>
-
- 04 December 2014, 2 commits
-
Committed by Peter Krempa
Commit 3ecebf07 breaks the build, as it adds a way to jump to cleanup before the 'cfg' object is retrieved and 'priv' is initialized.
-
Committed by Peter Krempa
Move entering the job into the thread to simplify the program flow. Also, as the code holds a separate reference to the domain object, some conditions can be simplified. After this patch qemuDomainObjTransferJob is no longer needed, so this patch removes it.
-
- 01 December 2014, 2 commits
-
Committed by Luyao Huang
There are some small issues in qemuProcessAttach:

1. Fix virSecurityManagerGetProcessLabel always getting pid = 0; move 'vm->pid = pid' before the call to virSecurityManagerGetProcessLabel.
2. Use virSecurityManagerGenLabel to get the image label.
3. Fix always setting the SELinux label even when another security driver's label is in use.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Erik Skultety
When a block{commit,copy} job was aborted on a domain, the block job handler did not process it correctly, leaving a phantom job in the background. Any further call to any blockjob API then fails with a "block <jobtype> still active" error. This patch fixes the blockjob handler so that it checks not only for VIR_DOMAIN_BLOCK_JOB_FAILED status, but for VIR_DOMAIN_BLOCK_JOB_CANCELED status as well, followed by our existing cleanup routine.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135169
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
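In essence (a hedged fragment: the enum values are libvirt's public block job statuses, while cleanupBlockJob() is a hypothetical stand-in for the existing cleanup routine):

/* Before, only FAILED triggered the cleanup; a user-cancelled job
 * therefore left phantom state behind. CANCELED now takes the same path. */
if (status == VIR_DOMAIN_BLOCK_JOB_FAILED ||
    status == VIR_DOMAIN_BLOCK_JOB_CANCELED)
    cleanupBlockJob(disk);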
-
- 28 November 2014, 1 commit
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1160084

As of b6d4dad1 (1.2.5) we have been trying to keep track of the FSFreeze status of the guest. Even though I've tried to fix a couple of corner cases (6ea54769), it occurred to me just recently that the approach is broken by design. Firstly, there are many other ways to talk to qemu-ga (even through libvirt, e.g. via qemu-agent-command) by which filesystems can be thawed without libvirt noticing. Moreover, there are plenty of ways to thaw filesystems without even qemu-ga noticing (yes, qemu-ga keeps internal track of the FSFreeze status). So, instead of keeping track ourselves, or asking qemu-ga for stale state, it's best to let qemu-ga deal with it (and possibly let the guest kernel propagate an error). Moreover, there was one bug in the previous approach: if the fsfreeze command failed, we executed fsthaw subsequently. So issuing domfsfreeze in virsh gave the following result:

virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 24 November 2014, 3 commits
-
Committed by Peter Krempa
Add code to emit the event on a change of the channel state and when reconnecting to the qemu process.
-
Committed by Peter Krempa
Use data provided by "query-chardev" to refresh the guest frontend state of virtio channels.
-
Committed by Peter Krempa
Improve the monitor function to also retrieve the guest state of a character device (if provided) so that we can refresh the state of virtio-serial channels and perhaps react to changes in the state in future patches. This patch changes the data returned by qemuMonitorGetChardevInfo to a structure containing the pty path and the state for all the character devices. The change to the testsuite makes sure that the data is parsed correctly.
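A sketch of the richer per-chardev record implied here; the names and the exact state values are illustrative, not libvirt's actual definitions:

typedef enum {
    CHR_STATE_DEFAULT,      /* qemu did not report a frontend state */
    CHR_STATE_CONNECTED,
    CHR_STATE_DISCONNECTED,
} ChardevState;

/* Stored in the hash table keyed by chardev alias, replacing the
 * previous plain pty-path string. */
typedef struct {
    char *ptyPath;          /* backing pty, if the chardev has one */
    ChardevState state;     /* guest-side state of virtio channels */
} ChardevInfo;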
-
- 21 November 2014, 1 commit
-
Committed by Peter Krempa
Newer qemu added an event that is emitted when a virtio-serial channel is opened in the guest OS. This allows us to update the state of the port in the output-only XML element. This patch implements the monitor callbacks and the necessary handlers to update the state in the definition.
-