- 13 Dec 2019, 13 commits
-
Submitted by Peter Krempa

This function looks up a named bitmap for a virStorageSource in the data returned by query-named-block-nodes.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
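A rough idea of what such a lookup amounts to, sketched with an illustrative struct and helper name rather than the real libvirt symbols:

    /* Illustrative sketch: the struct and function names are made up,
     * not the actual libvirt symbols. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char *name;   /* bitmap name as reported by QEMU */
        /* other bitmap attributes elided */
    } NodeBitmap;

    /* Return the named bitmap registered on one block node, or NULL when
     * the node has no bitmap with that name. */
    static NodeBitmap *
    nodeDataGetBitmapByName(NodeBitmap *bitmaps, size_t nbitmaps,
                            const char *name)
    {
        for (size_t i = 0; i < nbitmaps; i++) {
            if (strcmp(bitmaps[i].name, name) == 0)
                return &bitmaps[i];
        }
        return NULL;
    }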
-
Submitted by Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

The function will require the bitmap topology for the full implementation. To facilitate testing, propagate the necessary data beforehand so that the test code can stay unchanged while the implementation evolves.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

Move the currently incomplete code that collects the bitmaps to be merged for an incremental backup into a separate function. This allows adding tests before the algorithm is improved to handle snapshots.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

The object itself adds no extra value and would make testing the code harder. Refactor the code to remove it and keep just the definition pointer.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

Re-create any active persistent bitmap in the snapshot overlay image so that tracking for a checkpoint is persisted. While this basically duplicates data from the allocation map, it is currently the only possible way, as qemu can't mirror the allocation map into a dirty bitmap should we ever want to do a backup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
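For reference, the piece of the QMP 'transaction' command that re-creates one such bitmap boils down to an action like the following, shown here as a C string constant; the node and bitmap names are placeholders, not values libvirt actually generates:

    /* Sketch only: placeholder node/bitmap names. */
    static const char bitmap_add_action[] =
        "{ \"type\": \"block-dirty-bitmap-add\","
        "  \"data\": { \"node\": \"overlay-format-node\","
        "              \"name\": \"checkpoint-1\","
        "              \"persistent\": true } }";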
-
Submitted by Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

qemuDomainSnapshotDiskPrepareOne is already called for each disk that is a member of the snapshot, so we don't need to iterate through the snapshot list again to generate the members of the 'transaction' command for each snapshot.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

Check that the value is less than 0.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

For testing purposes it will be beneficial to parse the data from JSON directly rather than trying to simulate the monitor. Extract the worker bits and export them.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Submitted by Peter Krempa

We will need to inspect the presence and attributes of dirty bitmaps. Extract them when processing the reply of query-named-block-nodes.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
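The fields of interest in each entry of a node's "dirty-bitmaps" array, gathered here into an illustrative struct (the field names follow QEMU's reply; the struct itself is not the real libvirt type):

    #include <stdbool.h>

    /* Illustrative only, not the libvirt structure. */
    struct nodeDirtyBitmap {
        char *name;
        unsigned long long granularity;
        bool recording;      /* bitmap is currently tracking writes      */
        bool busy;           /* bitmap is locked by an ongoing operation */
        bool persistent;     /* bitmap is stored in the qcow2 image      */
        bool inconsistent;   /* left over from an unclean shutdown       */
    };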
-
Submitted by Daniel P. Berrangé

The use of the parseOpaque parameter was mistakenly removed in

  commit 4a4132b4
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Tue Dec 3 10:49:49 2019 +0000

    conf: don't use passed in caps in post parse method

causing the method to re-fetch qemuCaps that had just been fetched and put into parseOpaque. This is inefficient when parsing incoming XML, but for live XML it is more serious, as it means we use the capabilities of the current QEMU binary on disk rather than those of the running QEMU.

That commit, however, did have a useful side effect of fixing a crasher bug in the qemu post-parse callback introduced by

  commit 5e939cea
  Author: Jiri Denemark <jdenemar@redhat.com>
  Date:   Thu Sep 26 18:42:02 2019 +0200

    qemu: Store default CPU in domain XML

The qemuDomainDefSetDefaultCPU() method in that patch did not allow for the possibility that qemuCaps would be NULL and thus resulted in a SEGV. This shows the risk in letting each check in the post-parse callback look for qemuCaps == NULL. The safer option is to check once up front and immediately stop (postpone) further validation.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
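A sketch of the "check once up front" shape argued for here; the callback name and the exact return convention are simplified, not lifted from the libvirt sources:

    /* Simplified sketch, not the real post-parse callback. */
    typedef struct _virDomainDef virDomainDef;
    typedef struct _virQEMUCaps virQEMUCaps;

    static int
    qemuPostParseSketch(virDomainDef *def, void *parseOpaque)
    {
        virQEMUCaps *qemuCaps = parseOpaque;

        if (!qemuCaps)
            return 1;   /* no capabilities yet: postpone the remaining checks */

        /* ... checks that are free to dereference qemuCaps ... */
        (void)def;
        return 0;
    }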
-
Submitted by Daniel P. Berrangé

Don't check os type / virt type / arch in the post-parse callback, because we can't assume qemuCaps is non-NULL at this point. The check also conceptually belongs in the validation callback.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 12 Dec 2019, 11 commits
-
Submitted by Jonathon Jongsma

This function will be removed in a future commit because it allows the caller to acquire both monitor and agent jobs at the same time. Holding both job types creates a vulnerability to denial of service from a malicious guest agent.

qemuDomainSetVcpusFlags() always passes NONE for either the monitor job or the agent job (and thus is not vulnerable to the DoS), so we can simply replace this function with the functions for acquiring the appropriate type of job.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS.

Split the function so that the portion issuing the agent command holds only an agent job and the portion issuing the monitor command holds only a monitor job.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS.

So split the function up a bit: hold only the monitor job while querying qemu about whether the domain supports suspend, then acquire only an agent job while issuing the agent suspend command.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
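The resulting shape of the split, with simplified stand-in helpers instead of the real qemu driver job functions:

    #include <stdbool.h>

    /* Stand-in declarations: simplified names, not the libvirt API. */
    typedef struct vm vm;
    int  beginMonitorJob(vm *v);
    void endMonitorJob(vm *v);
    int  beginAgentJob(vm *v);
    void endAgentJob(vm *v);
    bool monitorSupportsSuspend(vm *v, unsigned int target);
    int  agentSuspend(vm *v, unsigned int target);

    static int
    pmSuspendSketch(vm *v, unsigned int target)
    {
        bool supported;
        int ret;

        /* Phase 1: only a monitor job is held while asking QEMU. */
        if (beginMonitorJob(v) < 0)
            return -1;
        supported = monitorSupportsSuspend(v, target);
        endMonitorJob(v);

        if (!supported)
            return -1;

        /* Phase 2: only an agent job is held while the (potentially
         * unresponsive or malicious) guest agent is involved. */
        if (beginAgentJob(v) < 0)
            return -1;
        ret = agentSuspend(v, target);
        endAgentJob(v);
        return ret;
    }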
-
Submitted by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS.

Split the function so that we only hold the appropriate type of job while rebooting.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Jonathon Jongsma

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job while we're querying the agent, we open ourselves up to a DoS.

So split the function into separate parts: one that does the agent shutdown and one that does the monitor shutdown, each holding only a job of the appropriate type.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Ján Tomko

Replace all the uses passing a single parameter as the length.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Ján Tomko

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Ján Tomko

My hesitation to remove VIR_STRDUP without VIR_STRNDUP resulted in these uses being able to sneak in.

Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
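The conversion pattern for these stragglers, assuming the usual error-checked VIR_STRDUP idiom (a sketch, not a hunk from the commit):

    #include <glib.h>

    /* Old idiom:
     *
     *     if (VIR_STRDUP(copy, name) < 0)
     *         goto cleanup;
     *
     * g_strdup() aborts on OOM and maps NULL to NULL, so the call
     * collapses to a plain assignment. */
    int main(void)
    {
        const char *name = "vda";
        g_autofree char *copy = g_strdup(name);

        g_print("%s\n", copy);
        return 0;
    }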
-
Submitted by Daniel P. Berrangé

This reverts commit 7be5fe66.

This commit broke resctrl because it missed the fact that virResctrlInfoGetCache() has side effects, causing it to actually change the virResctrlInfo parameter rather than merely get data from it.

This code will need some refactoring before we can try separating it from virCapabilities again.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Submitted by Pavel Mores

This commit aims to fix https://bugzilla.redhat.com/show_bug.cgi?id=1610207

The cause was apparently incorrect handling of jobs in the snapshot revert code, which allowed a thread executing a snapshot delete to begin a job while a snapshot revert was still running on another thread. The snapshot delete thread then waited on a condition variable in qemuMonitorSend() while the revert thread finished, changing (and effectively corrupting) the qemuMonitor structure under the delete thread, which led to its crash.

The incorrect handling of jobs in the revert code was due to the fact that although qemuDomainRevertToSnapshot() correctly begins a job at the start, the job was implicitly ended when qemuProcessStop() was called, because the job lives in the QEMU driver's private data (qemuDomainObjPrivate) that was purged during qemuProcessStop().

This fix prevents qemuProcessStop() from clearing jobs, as the idea of qemuProcessStop() clearing jobs seems wrong in the first place. It was (inadvertently) introduced in commit 888aa4b6, which is effectively reverted by the second hunk of this commit. To preserve the desired effects of the faulty commit, the first hunk is included as suggested by Michal.

Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
Submitted by Daniel P. Berrangé

When the QEMU uid/gid is set to non-root this is pointless, as if we had just used a regular setuid/setgid call the process would have all its capabilities cleared anyway by the kernel.

When the QEMU uid/gid is set to root, this is almost never what people actually want. People make QEMU run as root in order to access some privileged resource that libvirt doesn't support yet, and this often requires capabilities. As a result they have to go find the qemu.conf parameter to turn this off. This is not viable for libguestfs - they want to control everything via the XML security label, requesting to run as root regardless of the qemu.conf settings for user/group.

Clearing capabilities was originally implemented because there was a proposal in Fedora to change permissions such that root with no capabilities would not be able to compromise the system, i.e. a locked-down root account. This never went anywhere, though, and as a result clearing capabilities when running as root does not really get us any security benefit AFAICT. The root user can easily do something like create a cronjob, which will then faithfully be run with full capabilities, trivially bypassing the restriction we place.

IOW, our clearing of capabilities is both useless from a security POV and breaks valid use cases when people need to run as root. This removes the clear_emulator_capabilities configuration option from qemu.conf and always runs QEMU with capabilities when root. The behaviour when non-root is unchanged.

Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
- 11 Dec 2019, 4 commits
-
Submitted by Pavel Mores

With all the plumbing in place, we can now enable the new functionality.

Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Pavel Mores

Since blockcommit is asynchronous, libvirtd can be restarted while the operation runs. To ensure the information necessary to finish up the job is not lost, serialisation to and deserialisation from the status XML is added. For unit testing, the new element was added only to the active-commit test; the non-active-commit test doesn't have the new element, so its absence is covered as well.

Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
Submitted by Pavel Mores

When blockcommit finishes successfully, either the qemuBlockJobProcessEventCompletedCommit() or the qemuBlockJobProcessEventCompletedActiveCommit() event handler is called. This is where the delete flag (stored in qemuBlockJobCommitData since the previous commit) can actually be used to delete the committed snapshot images if requested. We use virFileRemove() instead of a plain unlink() to cover the case where the image to be removed is on an NFS volume.

Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
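For context, a rough illustration of such a removal; the virFileRemove() prototype shown is an assumption based on libvirt's virfile.h, and the wrapper name and uid/gid parameters are placeholders:

    #include <sys/types.h>

    /* Assumed prototype; unlike a plain unlink(), virFileRemove() can
     * retry the removal under the given uid/gid, which is what makes it
     * work on root-squashed NFS mounts. */
    int virFileRemove(const char *path, uid_t uid, gid_t gid);

    static void
    removeCommittedImage(const char *path, uid_t qemu_uid, gid_t qemu_gid)
    {
        if (virFileRemove(path, qemu_uid, qemu_gid) < 0) {
            /* deliberately non-fatal: the commit itself already succeeded */
        }
    }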
-
Submitted by Pavel Mores

Propagate the delete flag from qemuDomainBlockCommit() (which until now simply ignored it) to qemuBlockJobDiskNewCommit(), where it can be stored in the qemuBlockJobCommitData structure holding the information necessary to finish the job asynchronously. In the actual call to qemuBlockJobDiskNewCommit() in this commit, we temporarily pass a literal 'false' to preserve the current behaviour until the whole implementation of the feature is in place.

Signed-off-by: Pavel Mores <pmores@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
-
- 10 Dec 2019, 12 commits
-
Submitted by Pavel Hrdina

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Submitted by Pavel Hrdina

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
-
Submitted by Peter Krempa

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

After the individual sub-blockjobs of a libvirt backup job finish, we must detect this and notify the parent job so that it can be properly terminated. Since we update the job information to determine the success of a blockjob, we can also directly report back the statistics of the blockjob.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

We can use the output of 'query-jobs' to figure out some useful information about a backup job: progress in the case of a push job and scratch file use in the case of a pull job. Add a worker which totals up the data and call it from qemuDomainGetJobStatsInternal.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
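The totalling itself is straightforward; a minimal sketch with a stand-in struct for the per-job numbers parsed out of the 'query-jobs' reply (the fields mirror QEMU's current-progress and total-progress):

    #include <stddef.h>

    /* Stand-in for the parsed per-job progress data. */
    struct jobProgress {
        unsigned long long current;
        unsigned long long total;
    };

    static void
    backupJobTotals(const struct jobProgress *jobs, size_t njobs,
                    unsigned long long *current, unsigned long long *total)
    {
        *current = 0;
        *total = 0;
        for (size_t i = 0; i < njobs; i++) {
            *current += jobs[i].current;
            *total += jobs[i].total;
        }
    }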
-
Submitted by Peter Krempa

This allows starting and managing the backup job.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
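From an application's point of view this is driven through the public virDomainBackupBegin() API; a hedged usage sketch follows, with a schematic pull-mode backup XML and placeholder connection/domain names (check the domainbackup schema for the authoritative format):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Sketch only: URI, domain name and the backup XML are illustrative. */
    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
        const char *backupXML =
            "<domainbackup mode='pull'>"
            "  <server transport='tcp' name='localhost' port='10809'/>"
            "  <disks>"
            "    <disk name='vda' backup='yes'/>"
            "  </disks>"
            "</domainbackup>";

        if (dom && virDomainBackupBegin(dom, backupXML, NULL, 0) < 0)
            fprintf(stderr, "starting the backup job failed\n");

        if (dom)
            virDomainFree(dom);
        if (conn)
            virConnectClose(conn);
        return 0;
    }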
-
Submitted by Peter Krempa

A backup blockjob needs to be able to notify the parent backup job, as well as track all the data needed to clean up the bitmap and blockdev used for the backup. Add the data structure, the job allocation function, and the status XML formatter and parser.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
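A rough picture of what such per-blockjob data has to carry; the field names are illustrative, not the real structure members:

    #include <stdbool.h>

    /* Illustrative only, not the real qemu blockjob private data. */
    struct backupBlockjobData {
        char *bitmapName;   /* temporary bitmap to delete when the job ends */
        char *storeNode;    /* target/scratch blockdev to tear down         */
        bool  started;      /* whether this sub-job made it to 'running'    */
    };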
-
Submitted by Peter Krempa

Store the data of a backup job, along with the index counter for new backup jobs, in the status XML. Currently we support only one backup job, so there is no need for arrays of jobs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

Implement the transaction actions generator for blockdev-backup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

A backup job may consist of many backup sub-blockjobs. Add the new blockjob type and all the type converter strings.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-
Submitted by Peter Krempa

We will want to use the async job infrastructure, along with all its APIs and events, for the backup job, so add the backup job as a new async job type.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
-