- 30 October 2012 (2 commits)
-
-
Committed by Vladislav Bogdanov
-
Committed by Michal Privoznik
Currently, we use the iohelper when saving/restoring a domain. However, if an error occurs (e.g. an I/O error), it is not propagated to libvirt. Since it is not qemu doing the actual write(), qemu never sees the error; the iohelper does. Therefore we should check for iohelper errors, which makes libvirt more user friendly.
-
- 29 October 2012 (2 commits)
-
-
Committed by Ján Tomko
In the XML warning, we print a virsh command line that can be used to edit that XML. This patch prints UUIDs if the entity name contains special characters (like shell metacharacters, or "--" that would break parsing of the XML comment). If the entity doesn't have a UUID, just print the virsh command that can be used to edit it.
-
Committed by Jiri Denemark
This reverts commit 8d75e47e. Libvirt was never released with support for migration cookies without hostuuid.
-
- 28 October 2012 (1 commit)
-
-
- 27 October 2012 (12 commits)
-
-
Committed by Eric Blake
Fixes a typo introduced in commit 0039a32f. * src/qemu/qemu_process.c (qemuPrepareCpumap): s/covert/convert/
-
Committed by Eric Blake
When using block copy to pivot over to a new chain, the backing files for the new chain might still need labeling (particularly if the user passes --reuse-ext with a relative backing file name). Relabeling a file that is already labeled won't hurt, so this just labels the entire chain at the point of the pivot. Doing the relabel of the chain uses the fact that we already safely probed the file type of an external file at the start of the block copy. * src/qemu/qemu_driver.c (qemuDomainBlockPivot): Relabel chain before asking qemu to pivot.
-
Committed by Eric Blake
Use the recent addition of qemuDomainPrepareDiskChainElement to obtain locking manager lease, permit a block device through cgroups, and set the SELinux label; then audit the fact that we hand a new file over to qemu. Alas, releasing the lease and label at the end of the mirroring is a trickier prospect (we would have to trace the backing chain of both source and destination, and be sure not to revoke rights to any part of the chain that is shared), so for now, virDomainBlockJobAbort still leaves things with additional access granted (as block-pull and block-commit have the same problem of not clamping access after completion, a future cleanup would cover all three commands). * src/qemu/qemu_driver.c (qemuDomainBlockCopy): Set up labeling.
-
Committed by Eric Blake
Support the REUSE_EXT flag, in part by copying sanity checks from snapshot code. This code introduces a case of probing an external file for its type; such an action would be a security risk if the existing file is supposed to be raw but the contents resemble some other format; however, since the virDomainBlockRebase API has a flag to force treating the file as raw rather than probe, we can assume that probing is safe in all other instances. Besides, if we don't probe or force raw, then qemu will. * src/qemu/qemu_driver.c (qemuDomainBlockRebase): Allow REUSE_EXT flag. (qemuDomainBlockCopy): Wire up flag, and add some sanity checks.
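As an illustration of the flag usage described here, a minimal sketch that starts a copy job onto a pre-created destination file; the disk target "vda" and the destination path are made up, and error handling is reduced to a message:

    /* Sketch: start a block copy onto an existing destination file,
     * forcing it to be treated as raw so libvirt/qemu never probe it. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int start_copy_reuse_ext(virDomainPtr dom)
    {
        unsigned int flags = VIR_DOMAIN_BLOCK_REBASE_COPY |
                             VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT |  /* destination already exists */
                             VIR_DOMAIN_BLOCK_REBASE_COPY_RAW;    /* don't probe its format */

        if (virDomainBlockRebase(dom, "vda",
                                 "/var/lib/libvirt/images/copy.raw",
                                 0 /* bandwidth: unlimited */, flags) < 0) {
            fprintf(stderr, "block copy failed to start\n");
            return -1;
        }
        return 0;
    }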
-
Committed by Eric Blake
Minimal patch to wire up all the pieces in the previous patches to actually enable a block copy job. By minimal, I mean that qemu creates the file (that is, no REUSE_EXT flag support yet), SELinux must be disabled, a lock manager is not informed, and the audit logs aren't updated. But those will be added as improvements in future patches. This patch is designed so that if we ever add a future API virDomainBlockCopy with more bells and whistles (such as letting the user specify a destination image format different than the source), where virDomainBlockRebase is a wrapper around the simpler portions of the new functionality, then the new API can just reuse the new qemuDomainBlockCopy function and already support _SHALLOW and _REUSE_EXT flags. Also note that libvirt.c already filtered the new flags if _COPY is not present, so that we are not impacting the case of BlockRebase being a wrapper around BlockPull. * src/qemu/qemu_driver.c (qemuDomainBlockCopy): New function. (qemuDomainBlockRebase): Call it when appropriate.
-
Committed by Eric Blake
Since libvirt drops locks between issuing a monitor command and getting a response, it is possible for libvirtd to be restarted before getting a response on a block-job-complete command; worse, it is also possible for the guest to shut itself down during the window while libvirtd is down, ending the qemu process. A management app needs to know if the pivot happened (and the destination file contains guest contents not in the source) or failed (and the source file contains guest contents not in the destination), but since the job is finished, 'query-block-jobs' no longer tracks the status of the job, and if the qemu process itself has disappeared, even 'query-block' cannot be checked to ask qemu its current state. At the time of this patch, the design for persistent bitmap has not been clarified, so a followup patch will be needed once qemu actually figures out how to expose it, and we figure out how to use it. In the meantime, we have a solution that avoids the worst of the problem. [This problem was first analyzed with the RHEL 6.3 __com.redhat_drive-reopen command; which partly explains why upstream qemu 1.3 ditched the drive-reopen idea and went with block-job-complete plus persistent bitmap instead.] If we surround 'drive-reopen' with a pause/resume pair, then we can guarantee that the guest cannot modify either source or destination files in the window of libvirtd uncertainty, and the management app is guaranteed that either libvirt knows the outcome and reported it correctly; or that on libvirtd restart, the guest will still be paused and that the qemu process cannot have disappeared due to guest shutdown; and use that as a clue that the management app must implement recovery protocol, with both source and destination files still being in sync and with 'query-block' still being an option as part of that recovery. My testing shows that the pause window will typically be only a fraction of a second. * src/qemu/qemu_driver.c (qemuDomainBlockPivot): Pause around drive-reopen. (qemuDomainBlockJobImpl): Update caller.
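A rough sketch of the pause/pivot/resume idea in terms of the public API; the actual code brackets the monitor command inside qemuDomainBlockPivot, so this is only meant to show the ordering, not the implementation:

    /* Sketch: bracket the pivot with a pause/resume so that neither side of
     * the mirror can change while the outcome of the pivot is uncertain. */
    #include <libvirt/libvirt.h>

    int pivot_with_pause(virDomainPtr dom, const char *disk)
    {
        int ret;

        if (virDomainSuspend(dom) < 0)          /* freeze guest I/O first */
            return -1;

        /* Ask libvirt to switch the domain onto the copy. */
        ret = virDomainBlockJobAbort(dom, disk, VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT);

        if (virDomainResume(dom) < 0)           /* always try to resume */
            ret = -1;
        return ret;
    }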
-
Committed by Eric Blake
This is the bare minimum to end a copy job (of course, until a later patch adds the ability to start a copy job, this patch doesn't do much in isolation; I've just split the patches to ease the review). This patch intentionally avoids SELinux, lock manager, and audit actions. Also, if libvirtd restarts at the exact moment that a 'block-job-complete' is in flight, the proposed proper way to detect the outcome of that would be with a persistent bitmap and some additional query commands when libvirtd restarts. This patch is enough to test the common case of success when used correctly, while saving the subtleties of proper cleanup for worst-case errors for later. When a mirror job is started, cancelling the job safely reverts back to the source disk, regardless of whether the destination is in phase 1 (streaming, in which case the destination is worthless) or phase 2 (mirroring, in which case the destination is synced up to the source at the time of the cancel). Our existing code does just fine in either phase, other than some bookkeeping cleanup; this implements live block copy. Ideas for future enhancements via new flags: Depending on when persistent bitmap support is added, it may be worth adding a VIR_DOMAIN_REBASE_COPY_ATOMIC flag that fails up front if we detect an older qemu with risky pivot operation. Interesting side note: while snapshot-create --disk-only creates a copy of the disk at a point in time by moving the domain on to a new file (the copy is the file now in the just-extended backing chain), blockjob --abort of a copy job creates a copy of the disk while keeping the domain on the original file. There may be potential improvements to the snapshot code to exploit block copy over multiple disks all at one point in time. And, if 'block-job-cancel' were made part of 'transaction', you could copy multiple disks at the same point in time without pausing the domain. This also implies we may want to add a --quiesce flag to virDomainBlockJobAbort, so that when breaking a mirror (whether by cancel or pivot), the side of the mirror that we are abandoning is at least in a stable state with regards to guest I/O. * src/qemu/qemu_driver.c (qemuDomainBlockJobAbort): Accept new flag. (qemuDomainBlockPivot): New helper function. (qemuDomainBlockJobImpl): Implement it.
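To make the two outcomes concrete, here is a hedged sketch of how a client might end the job once mirroring has begun; the readiness heuristic of comparing cur and end is an assumption of the sketch, not something mandated by the API:

    /* Sketch: once the copy job has reached the mirroring phase, either
     * pivot onto the copy or cancel to stay on the source. */
    #include <libvirt/libvirt.h>

    int finish_copy(virDomainPtr dom, const char *disk, int pivot)
    {
        virDomainBlockJobInfo info;

        if (virDomainGetBlockJobInfo(dom, disk, &info, 0) <= 0)
            return -1;                          /* no active job, or error */
        if (info.cur != info.end)
            return -1;                          /* still in phase 1 (streaming) */

        return virDomainBlockJobAbort(dom, disk,
                                      pivot ? VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT : 0);
    }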
-
Committed by Eric Blake
Handle the new type of block copy event and info. Of course, this patch does nothing until a later patch actually allows the creation/abort of a block copy job. * include/libvirt/libvirt.h.in (VIR_DOMAIN_BLOCK_JOB_READY): New block job status. * src/libvirt.c (virDomainBlockRebase): Document the event. * src/qemu/qemu_monitor_json.c (eventHandlers): New event. (qemuMonitorJSONHandleBlockJobReady): New function. (qemuMonitorJSONGetBlockJobInfoOne): Translate new job type. (qemuMonitorJSONHandleBlockJobImpl): Handle new event and job type. * src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Recognize the event to minimize snooping. * src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Snoop a successful info query to save effort on a pivot request.
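A small sketch of a client-side handler for the new status; the callback signature follows virConnectDomainEventBlockJobCallback, and registration is shown in the comment:

    /* Sketch: react to the READY status of a copy job.
     * Registered with:
     *   virConnectDomainEventRegisterAny(conn, NULL, VIR_DOMAIN_EVENT_ID_BLOCK_JOB,
     *                                    VIR_DOMAIN_EVENT_CALLBACK(block_job_cb),
     *                                    NULL, NULL);
     */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void block_job_cb(virConnectPtr conn, virDomainPtr dom,
                             const char *disk, int type, int status,
                             void *opaque)
    {
        (void)conn; (void)opaque;
        if (type == VIR_DOMAIN_BLOCK_JOB_TYPE_COPY &&
            status == VIR_DOMAIN_BLOCK_JOB_READY)
            printf("%s: disk %s is mirrored and ready to pivot\n",
                   virDomainGetName(dom), disk);
    }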
-
Committed by Eric Blake
For now, disk migration via block copy job is not implemented in libvirt. But when we do implement it, we have to deal with the fact that qemu does not yet provide an easy way to re-start a qemu process with mirroring still intact. Paolo has proposed an idea for a persistent dirty bitmap that might make this possible, but until that design is complete, it's hard to say what changes libvirt would need. Even something like 'virDomainSave' becomes hairy, if you realize the implications that 'virDomainRestore' would be stuck with recreating the same mirror layout. But if we step back and look at the bigger picture, we realize that the initial client of live storage migration via disk mirroring is oVirt, which always uses transient domains, and that if a transient domain is destroyed while a mirror exists, oVirt can easily restart the storage migration by creating a new domain that visits just the source storage, with no loss in data. We can make life a lot easier by being cowards for now, forbidding certain operations on a domain. This patch guarantees that we never get in a state where we would have to restart a domain with a mirroring block copy, by preventing saves, snapshots, migration, hot unplug of a disk in use, and conversion to a persistent domain (thankfully, it is still relatively easy to 'virsh undefine' a running domain to temporarily make it transient, run tests on 'virsh blockcopy', then 'virsh define' to restore the persistence). Later, if the qemu design is enhanced, we can relax our code. The change to qemudDomainDefine looks a bit odd for undoing an assignment, rather than probing up front to avoid the assignment, but this is because of how virDomainAssignDef combines both a lookup and assignment into a single function call. * src/conf/domain_conf.h (virDomainHasDiskMirror): New prototype. * src/conf/domain_conf.c (virDomainHasDiskMirror): New function. * src/libvirt_private.syms (domain_conf.h): Export it. * src/qemu/qemu_driver.c (qemuDomainSaveInternal) (qemuDomainSnapshotCreateXML, qemuDomainRevertToSnapshot) (qemuDomainBlockJobImpl, qemudDomainDefine): Prevent dangerous actions while block copy is already in action. * src/qemu/qemu_hotplug.c (qemuDomainDetachDiskDevice): Likewise. * src/qemu/qemu_migration.c (qemuMigrationIsAllowed): Likewise.
-
Committed by Eric Blake
Upstream qemu 1.3 is adding two new monitor commands, 'drive-mirror' and 'block-job-complete'[1], which can drive live block copy and storage migration. [Additionally, RHEL 6.3 had backported an earlier version of most of the same functionality, but under the names '__com.redhat_drive-mirror' and '__com.redhat_drive-reopen' and with slightly different JSON arguments, and has been using patches similar to these upstream patches for several months now.] The libvirt API virDomainBlockRebase as already committed for 0.9.12 is flexible enough to expose the basics of block copy, but some additional features in the 'drive-mirror' qemu command, such as setting error policy, setting granularity, or using a persistent bitmap, may later require a new libvirt API virDomainBlockCopy. I will wait to add that API until we know more about what qemu 1.3 will finally provide. This patch caters only to the upstream qemu 1.3 interface, although I have proven that the changes for RHEL 6.3 can be isolated to just qemu_monitor_json.c, and the rest of this series will gracefully handle either interface once the JSON differences are papered over in a downstream patch. For consistency with other block job commands, libvirt must handle the bandwidth argument as MiB/sec from the user, even though qemu exposes the speed argument as bytes/sec; then again, qemu rounds up to cluster size internally, so using MiB hides the worst effects of that rounding if you pass small numbers. [1]https://lists.gnu.org/archive/html/qemu-devel/2012-10/msg04123.html * src/qemu/qemu_capabilities.h (QEMU_CAPS_DRIVE_MIRROR) (QEMU_CAPS_DRIVE_REOPEN): New bits. * src/qemu/qemu_capabilities.c (qemuCaps): Name them. * src/qemu/qemu_monitor_json.c (qemuMonitorJSONCheckCommands): Set them. (qemuMonitorJSONDriveMirror, qemuMonitorDrivePivot): New functions. * src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror) (qemuMonitorDrivePivot): Declare them. * src/qemu/qemu_monitor.c (qemuMonitorDriveMirror) (qemuMonitorDrivePivot): New passthroughs. * src/qemu/qemu_monitor.h (qemuMonitorDriveMirror) (qemuMonitorDrivePivot): Declare them.
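A minimal sketch of the unit handling mentioned at the end: the public API speaks MiB/s while the monitor command's speed argument is bytes/s, so the value is scaled up before being sent (the function name here is made up):

    /* Sketch: convert the API bandwidth (MiB/s) to the monitor's bytes/s,
     * guarding against overflow before scaling by 1 MiB. */
    #include <limits.h>

    static int mib_to_bytes_per_sec(unsigned long bandwidth_mib,
                                    unsigned long long *speed_bytes)
    {
        if (bandwidth_mib > ULLONG_MAX / (1024ULL * 1024ULL))
            return -1;                          /* would overflow */
        *speed_bytes = bandwidth_mib * 1024ULL * 1024ULL;
        return 0;
    }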
-
Committed by Laine Stump
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=862515 which describes inconsistencies in dealing with duplicate mac addresses on network devices in a domain. (at any rate, it resolves *almost* everything, and prints out an informative error message for the one problem that isn't solved, but has a workaround.) A synopsis of the problems: 1) you can't do a persistent attach-interface of a device with a mac address that matches an existing device. 2) you *can* do a live attach-interface of such a device. 3) you *can* directly edit a domain and put in two devices with matching mac addresses. 4) When running virsh detach-device (live or config), only the MAC address is checked when matching the device to remove, so the first device with the desired mac address will be removed. This isn't always the one that's wanted. 5) when running virsh detach-interface (live or config), the only two items that can be specified to match against are mac address and model type (virtio, etc) - if multiple netdevs match both of those attributes, it again just finds the first one added and assumes that is the only match. Since it is completely valid to have multiple network devices with the same MAC address (although it can cause problems in many cases, there *are* valid use cases), what is needed is: 1) remove the restriction that prohibits doing a persistent add of a netdev with a duplicate mac address. 2) enhance the backend of virDomainDetachDeviceFlags to check for something that *is* guaranteed unique (but still work with just a mac address, as long as it yields only a single result). This patch does three things: 1) removes the check for duplicate mac addresses during a persistent netdev attach. 2) unifies the searching for both live and config detach of netdevices in the subordinate functions of qemuDomainModifyDeviceFlags() to use the new function virDomainNetFindIdx (which matches mac address and PCI address if available, checking for duplicates if only a mac address was specified). This function returns -2 if multiple matches are found, allowing the callers to print out an appropriate message. Steps 1 & 2 are enough to fully fix the problem when using virsh attach-device and detach-device (which require an XML description of the device rather than a bunch of commandline args). 3) modifies the virsh detach-interface command to check for multiple matches of a mac address and show an error message suggesting use of the detach-device command in cases where there are multiple matching mac addresses. Later we should decide how we want to input a PCI address on the virsh commandline, and enhance detach-interface to take a --address option, eliminating the need to use detach-device. * src/conf/domain_conf.c * src/conf/domain_conf.h * src/libvirt_private.syms * added new virDomainNetFindIdx function * removed now unused virDomainNetIndexByMac and virDomainNetRemoveByMac * src/qemu/qemu_driver.c * remove check for duplicate mac from qemuDomainAttachDeviceConfig * use virDomainNetFindIdx/virDomainNetRemove instead of virDomainNetRemoveByMac in qemuDomainDetachDeviceConfig * use virDomainNetFindIdx instead of virDomainIndexByMac in qemuDomainUpdateDeviceConfig * src/qemu/qemu_hotplug.c * use virDomainNetFindIdx instead of a homespun loop in qemuDomainDetachNetDevice. * tools/virsh-domain.c: modified the detach-interface command as described above
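An illustrative sketch of the matching semantics described for virDomainNetFindIdx; the types and fields below are simplified stand-ins, not libvirt's own structures:

    /* Sketch: return the index of the matching device, -1 if none matches,
     * and -2 if the MAC alone matches more than one device so the caller
     * must ask for extra disambiguation (e.g. a PCI address). */
    #include <stddef.h>
    #include <string.h>

    typedef struct { unsigned char mac[6]; int pci_slot; } netdev;

    static int find_net_idx(const netdev *devs, size_t ndevs,
                            const unsigned char mac[6],
                            int pci_slot /* -1 when not specified */)
    {
        int found = -1;
        size_t i;

        for (i = 0; i < ndevs; i++) {
            if (memcmp(devs[i].mac, mac, 6) != 0)
                continue;
            if (pci_slot >= 0) {
                if (devs[i].pci_slot == pci_slot)
                    return (int)i;              /* exact match on MAC + address */
                continue;
            }
            if (found >= 0)
                return -2;                      /* ambiguous: duplicate MAC */
            found = (int)i;
        }
        return found;
    }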
-
Committed by Eric Blake
It turns out that the cpuacct results properly account for offline cpus, and always returns results for every possible cpu, not just the online ones. So there is no need to check the map of online cpus in the first place, merely only a need to know the maximum possible cpu. Meanwhile, virNodeGetCPUBitmap had a subtle change from returning the maximum id to instead returning the width of the bitmap (one larger than the maximum id) in commit 2f4c5338, which made this code encounter some off-by-one logic leading to bad error messages when a cpu was offline: $ virsh cpu-stats dom error: Failed to virDomainGetCPUStats() error: An error occurred, but the cause is unknown Cleaning this up unraveled a chain of other unused variables. * src/qemu/qemu_driver.c (qemuDomainGetPercpuStats): Drop pointless check for cpumap changes, and use correct number of cpus. Simplify signature. (qemuDomainGetCPUStats): Adjust caller. * src/nodeinfo.h (nodeGetCPUCount): New prototype. (nodeGetCPUBitmap): Drop unused parameter. * src/nodeinfo.c (nodeGetCPUBitmap): Likewise. (nodeGetCPUMap): Adjust caller. (nodeGetCPUCount): New function. * src/libvirt_private.syms (nodeinfo.h): Export it.
-
- 26 October 2012 (2 commits)
-
-
Committed by Viktor Mihajlovski
Driver support added for: - test: pretending 8 host CPUs, 3 being online - qemu, lxc, openvz, uml: using nodeGetCPUMap Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
-
Committed by Eric Blake
Callers should not need to know what the name of the file to be read in the Linux-specific version of nodeGetCPUmap; furthermore, qemu cares about online cpus, not present cpus, when determining which cpus to skip. While at it, I fixed the fact that we were computing the maximum online cpu id by doing a slow iteration, when what we really want to know is the max available cpu. * src/nodeinfo.h (nodeGetCPUmap): Rename... (nodeGetCPUBitmap): ...and simplify signature. * src/nodeinfo.c (linuxParseCPUmax): New function. (linuxParseCPUmap): Simplify and alter signature. (nodeGetCPUBitmap): Change implementation. * src/libvirt_private.syms (nodeinfo.h): Reflect rename. * src/qemu/qemu_driver.c (qemuDomainGetPercpuStats): Update caller.
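A hedged sketch of the idea behind computing the maximum possible CPU rather than iterating the online map; the sysfs path and parsing details are assumptions of the sketch, not the actual libvirt helper:

    /* Sketch: derive the possible CPU count from the kernel's present-CPU
     * list, which looks like "0-7" or "0-3,8-11"; the last number is the
     * highest present CPU id. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int parse_cpu_count(void)
    {
        char buf[256];
        char *last;
        FILE *fp = fopen("/sys/devices/system/cpu/present", "r");

        if (!fp || !fgets(buf, sizeof(buf), fp)) {
            if (fp)
                fclose(fp);
            return -1;
        }
        fclose(fp);

        last = strrchr(buf, '-');
        if (!last)
            last = strrchr(buf, ',');
        return atoi(last ? last + 1 : buf) + 1;   /* count = max id + 1 */
    }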
-
- 24 October 2012 (5 commits)
-
-
Committed by Osier Yang
On one hand, numad will probably manage the affinity of the domain process dynamically in the future. On the other hand, even if numad doesn't manage it, the current behavior could still cause confusion. Let's keep things simple enough to avoid that trap for now.
-
Committed by Osier Yang
When the cpu placement model is "auto", it sets the affinity for the domain process with the advisory nodeset from numad; however, creating the cgroup for the domain process (called the emulator thread in some contexts) later overrides that by pinning it to all available pCPUs. How to reproduce: * Configure the domain with "auto" placement for <vcpu>, e.g. <vcpu placement='auto'>4</vcpu> * % virsh start dom * % cat /proc/$dompid/status Though the emulator cgroup causes the conflict, we can't simply prohibit creating it, as other tunables are still useful, such as "emulator_period", which is used by the API virDomainSetSchedulerParameters. So this patch doesn't prohibit creating the emulator cgroup, but instead inherits the nodeset from numad and resets the affinity for the domain process. * src/qemu/qemu_cgroup.h: Modify the definition of qemuSetupCgroupForEmulator to accept the passed nodeset * src/qemu/qemu_cgroup.c: Set the affinity with the passed nodeset
-
Committed by Osier Yang
Abstract the code that prepares the cpumap into a helper function, which can be used later. * src/qemu/qemu_process.h: Declare qemuPrepareCpumap * src/qemu/qemu_process.c: Implement qemuPrepareCpumap, and use it.
-
Committed by Kyle Mestery
Transport Open vSwitch per-port data during live migration by using the utility functions virNetDevOpenvswitchGetMigrateData() and virNetDevOpenvswitchSetMigrateData(). Signed-off-by: Kyle Mestery <kmestery@cisco.com>
-
Committed by Kyle Mestery
Add the ability for the Qemu V3 migration protocol to include transporting network configuration. A generic framework is proposed with this patch to allow for the transfer of opaque data. Signed-off-by: Kyle Mestery <kmestery@cisco.com> Signed-off-by: Laine Stump <laine@laine.org>
-
- 23 October 2012 (2 commits)
-
-
Committed by Eric Blake
When reusing an existing file, the snapshot code had hard-to-read logic, as well as a missing sanity check: REUSE_EXT should require the destination to already be present. * src/qemu/qemu_driver.c (qemuDomainSnapshotDiskPrepare): Require destination on REUSE_EXT, rename variable for legibility.
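The added sanity check boils down to something like the following sketch (names are made up):

    /* Sketch: when the caller promises to reuse an existing external file,
     * that file must actually be there before we hand it to qemu. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static int check_reuse_ext(const char *path, int reuse_ext)
    {
        struct stat st;

        if (reuse_ext && stat(path, &st) < 0) {
            fprintf(stderr, "missing existing file for reuse: %s (%s)\n",
                    path, strerror(errno));
            return -1;
        }
        return 0;
    }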
-
Committed by Cole Robinson
Since the option doesn't exist. Fixes booting with cpu mode='host-model' and qemu 1.2.0
-
- 22 October 2012 (6 commits)
-
-
Committed by Doug Goldstein
Currently it's assumed that qemu always supports VNC; however, it is definitely possible to compile qemu without VNC support, so we should at the very least check for it and handle that correctly.
-
Committed by Eric Blake
Yet another instance of where using plain open() mishandles files that live on root-squash NFS, and where improving the API can improve the chance of a successful probe. * src/util/storage_file.h (virStorageFileProbeFormat): Alter signature. * src/util/storage_file.c (virStorageFileProbeFormat): Use better method for opening file. * src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Update caller. * src/storage/storage_backend_fs.c (virStorageBackendProbeTarget): Likewise.
-
Committed by Ján Tomko
In the v2 migration protocol, XML is obtained by calling domainGetXMLDesc. This includes the default USB controller in the XML, which breaks migration to older libvirt (before 0.9.2). Commit 409b5f54 ("qemu: Emit compatible XML when migrating a domain") only fixed this for v3 migration. This patch uses the new VIR_DOMAIN_XML_MIGRATABLE flag (detected by VIR_DRV_FEATURE_XML_MIGRATABLE) to obtain XML without the default controller, enabling backward v2 migration.
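On the client side the idea reduces to something like this sketch, which prefers migratable XML and falls back when the flag is not understood (error handling simplified; the fallback strategy is an assumption of the sketch):

    /* Sketch: prefer migratable XML (without auto-added default devices)
     * and fall back to plain XML on an older libvirt. */
    #include <libvirt/libvirt.h>

    static char *get_migratable_xml(virDomainPtr dom)
    {
        char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_MIGRATABLE);

        if (!xml)                           /* flag unknown to this libvirt */
            xml = virDomainGetXMLDesc(dom, 0);
        return xml;                         /* caller frees with free() */
    }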
-
Committed by Michal Privoznik
As we switched to setting capabilities based on QMP communication, the qemu seamless-migration capability was not set. In the -help output this knob is called seamless-migration=[on|off]. The equivalent in the QMP world is the SPICE_MIGRATE_COMPLETED event (qemu upstream commit 2fdd16e2).
-
Committed by Osier Yang
-
Committed by Osier Yang
"nodeinfo" is not used in these two functions, and the goto in qemuProcessSetEmulatorAffinites is unnecessary.
-
- 20 October 2012 (8 commits)
-
-
Committed by Eric Blake
Gcc with optimization warns: ../../src/qemu/qemu_driver.c: In function 'qemuDomainBlockCommit': ../../src/qemu/qemu_driver.c:12813:46: error: 'disk' may be used uninitialized in this function [-Werror=maybe-uninitialized] ../../src/qemu/qemu_driver.c:12698:25: note: 'disk' was declared here cc1: all warnings being treated as errors so obviously I had only been testing with optimization off. * src/qemu/qemu_driver.c (qemuDomainBlockCommit): Guard cleanup.
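The shape of the fix is the classic init-then-guard pattern; a self-contained sketch rather than the actual libvirt diff:

    /* Sketch: initialize the pointer so the cleanup path is safe even when
     * we jump there before the lookup has assigned it. */
    #include <stdlib.h>
    #include <string.h>

    int do_commit(const char *path)
    {
        char *disk = NULL;            /* defined on every path to cleanup */
        int ret = -1;

        if (!path)
            goto cleanup;             /* early exit before 'disk' is assigned */
        if (!(disk = strdup(path)))
            goto cleanup;
        /* ... the actual work would go here ... */
        ret = 0;

    cleanup:
        free(disk);                   /* free(NULL) is a no-op, so this is safe */
        return ret;
    }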
-
Committed by Eric Blake
I finally have all the pieces in place to perform a block-commit with SELinux enforcing. There's still missing cleanup work when the commit completes, but doing that requires tracking both the backing chain and the base and top files within that chain in domain XML across libvirtd restarts. Furthermore, from a security standpoint, once you have granted access, you must assume any damage that can be done will be done; later revoking access is nice to minimize the window of damage, but less important as it does not affect the fact that damage can be done in the first place. Therefore, deferring the revoke efforts until we have better XML tracking of what chain operations are in effect, including across a libvirtd restart, is reasonable. * src/qemu/qemu_driver.c (qemuDomainBlockCommit): Label disks as needed. (qemuDomainPrepareDiskChainElement): Cast away const.
-
Committed by Eric Blake
Previously, snapshot code did its own permission granting (lock manager, cgroup device controller, and security manager labeling) inline. But now that we are adding block-commit and block-copy which also have to change permissions, it's better to reuse common code for the task. While snapshot should fall back to no access if read-write access failed, block-commit will want to fall back to read-only access. The common code doesn't know whether failure to grant read-write access should revert to no access (snapshot, block-copy) or read-only access (block-commit). This code can also be used to revoke access to unused files after block-pull. It might be nice to clean things up in a future patch by adding new functions to the lock manager, cgroup manager, and security manager that takes a single file name and applies context of a disk to that file, rather than the current semantics of applying context to the entire chain already associated to a disk. That way, we could avoid the games this patch plays of temporarily swapping out the disk->src and related fields of the disk. But that would involve more code changes, so this patch really is the smallest hack for doing the necessary work; besides, this patch is more or less code motion (the hack was already employed by the snapshot creation code, we are just making it reusable). * src/qemu/qemu_driver.c (qemuDomainSnapshotCreateSingleDiskActive) (qemuDomainSnapshotUndoSingleDiskActive): Refactor labeling hacks... (qemuDomainPrepareDiskChainElement): ...into new function.
-
Committed by Eric Blake
Now that we can crawl the chain of backing files, we can do argument validation and implement the 'shallow' flag. In testing this, I discovered that it can be handy to pass the shallow flag and an explicit base, as a means of validating that the base is indeed the file we expected. * src/qemu/qemu_driver.c (qemuDomainBlockCommit): Crawl through chain to implement shallow flag. * src/libvirt.c (virDomainBlockCommit): Relax API.
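A sketch of the usage pattern mentioned here: pass both the shallow flag and an explicit base so libvirt can confirm the base really is the immediate backing file of top. The disk target and paths are made up:

    /* Sketch: commit one snapshot layer into its immediate backing file,
     * naming the expected base explicitly as a validation step. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static int commit_one_layer(virDomainPtr dom)
    {
        if (virDomainBlockCommit(dom, "vda",
                                 "/var/lib/libvirt/images/base.qcow2",  /* expected base */
                                 "/var/lib/libvirt/images/snap1.qcow2", /* top to commit */
                                 0,                                     /* bandwidth: unlimited */
                                 VIR_DOMAIN_BLOCK_COMMIT_SHALLOW) < 0) {
            fprintf(stderr, "block commit failed\n");
            return -1;
        }
        return 0;
    }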
-
Committed by Eric Blake
This is the bare minimum to kick off a block commit. In particular, flags support is missing (shallow requires us to crawl the backing chain to determine the file name to pass to the qemu monitor command; delete requires us to track what needs to be deleted at the time the completion event fires). Also, we are relying on qemu to do error checking (such as validating 'top' and 'base' as being members of the backing chain), including the fact that the current qemu code does not support committing the active layer (although it is still planned to add that before qemu 1.3). Since the active layer won't change, we have it easy and do not have to alter the domain XML. Additionally, this will fail if SELinux is enforcing, because we fail to grant qemu proper read/write access to the files it will modify. * src/qemu/qemu_driver.c (qemuDomainBlockCommit): New function. (qemuDriver): Register it.
-
Committed by Eric Blake
qemu 1.3 will be adding a 'block-commit' monitor command, per qemu.git commit ed61fc1. It matches nicely to the libvirt API virDomainBlockCommit. * src/qemu/qemu_capabilities.h (QEMU_CAPS_BLOCK_COMMIT): New bit. * src/qemu/qemu_capabilities.c (qemuCapsProbeQMPCommands): Set it. * src/qemu/qemu_monitor.h (qemuMonitorBlockCommit): New prototype. * src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockCommit): Likewise. * src/qemu/qemu_monitor.c (qemuMonitorBlockCommit): Implement it. * src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockCommit): Likewise. (qemuMonitorJSONHandleBlockJobImpl) (qemuMonitorJSONGetBlockJobInfoOne): Handle new event type.
-
Committed by Eric Blake
Minor cleanup made possible by previous simplifications. * src/qemu/qemu_cgroup.h (qemuSetupDiskCgroup) (qemuTeardownDiskCgroup): Alter signature. * src/qemu/qemu_cgroup.c (qemuSetupDiskCgroup) (qemuTeardownDiskCgroup, qemuSetupCgroup): Update all uses. * src/qemu/qemu_hotplug.c (qemuDomainDetachPciDiskDevice) (qemuDomainDetachDiskDevice): Likewise. * src/qemu/qemu_driver.c (qemuDomainAttachDeviceDiskLive) (qemuDomainChangeDiskMediaLive) (qemuDomainSnapshotCreateSingleDiskActive) (qemuDomainSnapshotUndoSingleDiskActive): Likewise.
-
Committed by Eric Blake
We used to walk the backing file chain at least twice per disk, once to set up cgroup device whitelisting, and once to set up security labeling. Rather than walk the chain every iteration, which possibly includes calls to fork() in order to open root-squashed NFS files, we can exploit the cache of the previous patch. * src/conf/domain_conf.h (virDomainDiskDefForeachPath): Alter signature. * src/conf/domain_conf.c (virDomainDiskDefForeachPath): Require caller to supply backing chain via disk, if recursion is desired. * src/security/security_dac.c (virSecurityDACSetSecurityImageLabel): Adjust caller. * src/security/security_selinux.c (virSecuritySELinuxSetSecurityImageLabel): Likewise. * src/security/virt-aa-helper.c (get_files): Likewise. * src/qemu/qemu_cgroup.c (qemuSetupDiskCgroup) (qemuTeardownDiskCgroup): Likewise. (qemuSetupCgroup): Pre-populate chain.
-