- 20 January 2020, 1 commit
-
-
Committed by Ján Tomko

This function grabs an agent job but ends a monitor job. End the agent job instead.

https://bugzilla.redhat.com/show_bug.cgi?id=1792723
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reported-by: Dan Zheng <dzheng@redhat.com>
Fixes: e005c95f
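A minimal C sketch of the pairing the fix restores; the job helpers are libvirt's internal qemu-driver functions, while the surrounding control flow is illustrative only:

    if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0)
        return -1;

    /* ... issue the guest agent command ... */

    /* buggy: qemuDomainObjEndJob(driver, vm);   ends a monitor job */
    qemuDomainObjEndAgentJob(vm);               /* correct: end the agent job */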
-
- 17 January 2020, 2 commits
-
-
Committed by Daniel P. Berrangé

gmtime_r/localtime_r are mostly used in combination with strftime to format timestamps in libvirt. This can all be replaced with GDateTime, resulting in simpler code that is also more portable. There is a boundary condition problem in parsing POSIX timezone offsets in GLib which tickles our test suite. The test suite is hacked to avoid the problem. The upstream GLib bug report is https://gitlab.gnome.org/GNOME/glib/issues/1999
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
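A minimal sketch (not the actual libvirt code) of the replacement pattern, formatting the current UTC time with GDateTime instead of gmtime_r()/strftime():

    #include <glib.h>

    int main(void)
    {
        g_autoptr(GDateTime) now = g_date_time_new_now_utc();
        g_autofree char *stamp = g_date_time_format(now, "%Y-%m-%d %H:%M:%S");

        g_print("%s\n", stamp);   /* no fixed-size buffer, no struct tm */
        return 0;
    }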
-
Committed by Daniel P. Berrangé

G_STATIC_ASSERT() is a drop-in functional equivalent of the GNULIB verify() macro.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
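For illustration, a compile-time check written with the GLib macro; the asserted condition is just an example, not taken from the patch:

    #include <glib.h>

    /* Fails the build if the condition is false, like verify() did. */
    G_STATIC_ASSERT(sizeof(long long) >= 8);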
-
- 16 January 2020, 8 commits
-
-
Committed by Jonathon Jongsma

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jonathon Jongsma

Switch from the old VIR_ allocation APIs to their glib equivalents.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
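A minimal before/after sketch of this kind of conversion; the struct type and the @info variable are hypothetical, only the allocation macros are real:

    /* before: libvirt-style allocation, reports OOM via the return value */
    if (VIR_ALLOC(info) < 0)
        return -1;

    /* after: glib equivalent, aborts on OOM so no error path is needed */
    info = g_new0(virSomeInfo, 1);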
-
Committed by Jonathon Jongsma

In order to avoid holding an agent job and a normal job at the same time, we want to avoid accessing the domain's definition while holding the agent job. To achieve this, qemuAgentGetFSInfo() only returns the raw information from the agent query to the caller. The caller can then release the agent job and proceed to look up the disk alias from the vm definition. This necessitates moving a few helper functions to qemu_driver.c and exposing the agent data structure (qemuAgentFSInfo) in the header. In addition, because the agent function no longer returns the looked-up disk alias, we can't test the alias within qemuagenttest. Instead we simply test that we parse and return the raw agent data correctly.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik

When resuming a domain from a save file, we read the domain XML from the file, add it onto our internal list of domains, start the qemu process, let it load the incoming migration stream and resume its vCPUs afterwards. If anything goes wrong, the domain object is removed from the list of domains and an error is returned to the caller. However, the qemu process might be left behind - if resuming vCPUs fails (e.g. because qemu is unable to acquire a write lock on a disk) then, due to a bug, the qemu process is not killed even though the domain object is removed from the list.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1718707
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Julio Faracco

We have to keep the default of querying the agent if no flag is set.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
- 10 January 2020, 1 commit
-
-
Committed by Michael Weiser

Internal snapshots of a non-running domain do not carry any memory state, and restoring such a snapshot will not replace existing saved memory state. This allows a scenario where a user first suspends a domain into managedsave, restores a non-running snapshot, and then resumes the domain from managedsave. After that, the guest system will run with its previous memory state atop a different disk state. The most obvious possible fallout from this is extensive file system corruption. Swap content and RAID bitmaps might also be off. This has been discussed[1] and fixed[2] from the end-user perspective for virt-manager. This patch marks the restore operation as risky at the libvirt level, requiring the user to remove the saved memory state first or force the operation.

[1] https://www.redhat.com/archives/virt-tools-list/2019-November/msg00011.html
[2] https://www.redhat.com/archives/virt-tools-list/2019-December/msg00049.html

Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
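From a client's point of view, the new behaviour surfaces through the existing force flag of the public snapshot-revert API; a minimal sketch, assuming @snap has already been looked up and errors are handled elsewhere:

    #include <libvirt/libvirt.h>

    /* A plain revert now fails for the risky combination of a managed save
     * image and a memory-less snapshot ... */
    if (virDomainRevertToSnapshot(snap, 0) < 0) {
        /* ... so the caller must either drop the managed save image first or,
         * after confirming with the user, force the revert. */
        if (virDomainRevertToSnapshot(snap, VIR_DOMAIN_SNAPSHOT_REVERT_FORCE) < 0)
            return -1;
    }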
-
- 07 January 2020, 2 commits
-
-
Committed by Daniel Henrique Barboza

Remove unneeded, easy-to-remove goto labels (cleanup|error|done|...).
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
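A sketch of the typical transformation; virSomethingFetch() and the variables are hypothetical, g_autofree is the real glib attribute that makes the label unnecessary:

    /* before: a cleanup label kept only to free the temporary string */
    char *tmp = NULL;
    int ret = -1;

    if (virSomethingFetch(&tmp) < 0)
        goto cleanup;
    ret = 0;
 cleanup:
    VIR_FREE(tmp);
    return ret;

    /* after: automatic cleanup, early returns are safe, no label needed */
    g_autofree char *tmp = NULL;

    if (virSomethingFetch(&tmp) < 0)
        return -1;
    return 0;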
-
Committed by Michal Privoznik

These functions are meant to replace the verbose check for the old style of specifying UEFI with a simple function call.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 06 January 2020, 3 commits
-
-
Committed by Wang Huaqiang

Introduce an option '--memory' for showing memory related information. The memory bandwidth information is listed as:

Domain: 'libvirt-vm'
  memory.bandwidth.monitor.count=4
  memory.bandwidth.monitor.0.name=vcpus_0-4
  memory.bandwidth.monitor.0.vcpus=0-4
  memory.bandwidth.monitor.0.node.count=2
  memory.bandwidth.monitor.0.node.0.id=0
  memory.bandwidth.monitor.0.node.0.bytes.total=10208067584
  memory.bandwidth.monitor.0.node.0.bytes.local=4807114752
  memory.bandwidth.monitor.0.node.1.id=1
  memory.bandwidth.monitor.0.node.1.bytes.total=8693735424
  memory.bandwidth.monitor.0.node.1.bytes.local=5850161152
  memory.bandwidth.monitor.1.name=vcpus_7
  memory.bandwidth.monitor.1.vcpus=7
  memory.bandwidth.monitor.1.node.count=2
  memory.bandwidth.monitor.1.node.0.id=0
  memory.bandwidth.monitor.1.node.0.bytes.total=853811200
  memory.bandwidth.monitor.1.node.0.bytes.local=290701312
  memory.bandwidth.monitor.1.node.1.id=1
  memory.bandwidth.monitor.1.node.1.bytes.total=406044672
  memory.bandwidth.monitor.1.node.1.bytes.local=229425152

Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
-
Committed by Wang Huaqiang

The underlying resctrl monitoring actually uses 64-bit counters, not 32-bit ones. Correct this by using a 64-bit data type when reading the hardware value. To keep the interface consistent, the amount of CPU last-level cache occupied by the vCPU processors of a specific resctrl monitor group is still reported as a truncated 32-bit value, because CPU cache sizes will not exceed 4 GB.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
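A tiny illustration of the width handling, assuming @buf holds the text contents of a resctrl mon_data file such as llc_occupancy (the variable names are hypothetical, not from the patch):

    unsigned long long raw = g_ascii_strtoull(buf, NULL, 10);  /* read as 64-bit */
    unsigned int occupancy = (unsigned int) raw;               /* reported as 32-bit */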
-
Committed by Peter Krempa

When cancelling the blockjobs as part of the recovery from a failed backup job startup, we didn't pass in the correct async job type. Luckily the block job handler and cancellation code paths use no block job at all currently, so those were correct.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 03 January 2020, 1 commit
-
-
Committed by Daniel P. Berrangé

Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 18 December 2019, 3 commits
-
-
Committed by Michal Privoznik

When freeing qemu driver struct members, we forgot to free the @hostcpu and @hostnuma members.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Michal Privoznik

This function is supposed to clean up the virQEMUDriver structure and free its individual members. However, it does that in a random order, which makes it hard to track which members are being freed and which are not. Do the freeing in the reverse order of the structure definition - assuming that the most important members (like the mutex) are declared first and freed last.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
-
Committed by Laine Stump

Prior to commit 55ce6564 (first in libvirt 4.6.0), the XML sent to virDomainAttachDeviceFlags() was parsed only once, and the results of that parse were inserted into both the live object of the running domain and into the persistent config. Thus, if the MAC address was omitted from the XML for a network device (<interface>), both the live and config objects would have the same MAC address. Commit 55ce6564 changed the code to parse the incoming XML twice - once for live and once for config. This does eliminate the problem of PCI (/scsi/sata) address conflicts caused by allocating an address based on existing devices in the live object and then inserting the result into the config (which may already have a device using that address), BUT it also means that when the MAC address of a network device hasn't been specified in the XML, each copy will get a different auto-generated MAC address. This results in the MAC address of the device changing the next time the domain is shut down and restarted, which creates havoc with the guest OS's network config. There have been several discussions about this over the last year or more, attempting to find the ideal solution to this problem that makes MAC addresses consistent and accounts for all sorts of corner cases with PCI/scsi/sata addresses. All of these discussions fizzled out because every proposal was either too difficult to implement or failed to fix some esoteric case someone thought up. So, in the interest of solving the MAC address problem while not making the "other address" situation any worse than before, this patch simply adds a qemuDomainAttachDeviceLiveAndConfigHomogenize() function that (for now) copies the MAC address from the config object to the live object (if the original XML had <mac address='blah'/> then this is an effective NOP, as the MACs already match). Any downstream libvirt containing upstream commit 55ce6564 should have this patch as well.

https://bugzilla.redhat.com/1783411
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
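A hedged sketch of what the new homogenize step does for network devices; the field and helper names follow libvirt's virDomainDeviceDef/virMacAddr conventions, but this is illustrative, not the actual implementation:

    /* Copy the MAC chosen while parsing the config copy into the live copy,
     * so both definitions end up with the same address. */
    if (devConf->type == VIR_DOMAIN_DEVICE_NET &&
        devLive->type == VIR_DOMAIN_DEVICE_NET)
        virMacAddrSet(&devLive->data.net->mac, &devConf->data.net->mac);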
-
- 17 December 2019, 5 commits
-
-
Committed by Michal Privoznik

If we use glib alloc functions, we can drop the 'cleanup' label and the @rv variable and also simplify the code a bit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

Some variables are not used outside of the for() loop. Move their declarations to clean up the code a bit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

When using the monolithic daemon, dom->conn has all driver tables filled in properly and thus it's safe to call an API other than virDomain*(). However, when using split daemons, dom->conn has only the hypervisor driver table set (dom->conn->driver) and the rest is NULL. Therefore, if we want to call a non-domain API (virNetworkLookupByName() in this case), we have to obtain the cached connection object accessible via virGetConnectNetwork().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
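A minimal sketch of the pattern; virGetConnectNetwork() and virNetworkLookupByName() are the libvirt calls named above, while @netname and the surrounding control flow are illustrative:

    virConnectPtr netconn = NULL;
    virNetworkPtr net = NULL;

    /* dom->conn only carries the hypervisor driver under split daemons,
     * so fetch the cached network-driver connection instead */
    if (!(netconn = virGetConnectNetwork()))
        return -1;

    net = virNetworkLookupByName(netconn, netname);

    /* ... use @net ... */

    virObjectUnref(net);
    virObjectUnref(netconn);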
-
Committed by Michal Privoznik

If we place qemuDomainInterfaceAddresses() a few lines below the two functions it uses, then we can drop the forward declarations of those functions.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Committed by Michal Privoznik

To simplify the implementation, some restrictions are added. For instance, an NVMe disk can't go on any bus but virtio, has to be of device type 'disk', and can't have startupPolicy set.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
- 13 December 2019, 4 commits
-
-
Committed by Ján Tomko

As of commit 2a00ef6e, which was released in v5.2.0, we require YAJL to build the QEMU driver. Remove the checks from code that requires the QEMU driver, or from checks that also check for WITH_QEMU.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Peter Krempa

Re-create any active persistent bitmap in the snapshot overlay image so that tracking for a checkpoint is persisted. While this basically duplicates data in the allocation map, it's currently the only possible way, as qemu can't mirror the allocation map into a dirty bitmap if we'd ever want to do a backup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Peter Krempa

qemuDomainSnapshotDiskPrepareOne is already called for each disk which is a member of the snapshot, so we don't need to iterate through the snapshot list again to generate the members of the 'transaction' command for each snapshot.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Committed by Peter Krempa

Check that the value is less than 0.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 12 December 2019, 6 commits
-
-
Committed by Jonathon Jongsma

This function will be removed in a future commit because it allows the caller to acquire both monitor and agent jobs at the same time. Holding both job types creates a vulnerability to denial of service from a malicious guest agent. qemuDomainSetVcpusFlags() always passes NONE for either the monitor job or the agent job (and thus is not vulnerable to the DoS), so we can simply replace this function with the functions for acquiring the appropriate type of job.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS. Split the function so that the portion issuing the agent command holds only an agent job and the portion issuing the monitor command holds only a monitor job.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS. So split the function up a bit to only hold the monitor job while querying qemu for whether the domain supports suspend, then acquire only an agent job while issuing the agent suspend command.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS. Split the function so that we only hold the appropriate type of job while rebooting.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS. So split the function into separate parts: one that does the agent shutdown and one that does the monitor shutdown. Each part holds only a job of the appropriate type.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
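Across this series the pattern is the same; a hedged sketch using libvirt's internal qemu-driver job helpers, with the surrounding control flow being illustrative only (the two job types are never held at the same time):

    /* monitor-only job for the qemu side of the operation */
    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        return -1;
    /* ... talk to the qemu monitor ... */
    qemuDomainObjEndJob(driver, vm);

    /* separate agent-only job for the guest agent command */
    if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_MODIFY) < 0)
        return -1;
    /* ... talk to the guest agent ... */
    qemuDomainObjEndAgentJob(vm);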
-
Committed by Ján Tomko

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
- 11 December 2019, 2 commits
-
-
Committed by Pavel Mores

With all the plumbing in place, we can now enable the new functionality.
Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
Committed by Pavel Mores

Propagate the delete flag from qemuDomainBlockCommit() (which was just ignoring it until now) to qemuBlockJobDiskNewCommit(), where it can be stored in the qemuBlockJobCommitData structure which holds the information necessary to finish the job asynchronously. In the actual call to qemuBlockJobDiskNewCommit() in this commit, we temporarily pass a literal 'false' to preserve the current behaviour until the whole implementation of the feature is in place.
Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
- 10 December 2019, 2 commits
-
-
Committed by Peter Krempa

Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Peter Krempa

We can use the output of 'query-jobs' to figure out some useful information about a backup job: the progress in the case of a push job and the scratch file use in the case of a pull job. Add a worker which totals up the data and call it from qemuDomainGetJobStatsInternal.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-