- 13 November 2012, 3 commits
-
-
Committed by Peter Krempa
The AMD Bulldozer architecture uses so-called "clustered integer core modules" that count both as threads and as cores. This patch expects the CPU to be detected using the new fallback condition; otherwise twice the number of processors would be detected.
-
Committed by Peter Krempa
This test data was gathered on an AMD Magny-Cours machine that reports only one NUMA node although the hardware actually consists of 4. As duplicate core IDs were ignored, the reported topology was bogus. This should be fixed by the previous patch. Reported, with data provided, by George-Cristian Bîrzan.
-
Committed by Peter Krempa
Lately there have been a few reports of the output of the virsh nodeinfo command being inaccurate. This patch tries to avoid that by checking whether the detected topology actually makes sense. If it doesn't, a synthetic topology is reported instead, indicating to the user that the host capabilities should be checked for the actual topology.
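As a rough illustration of this kind of sanity check, here is a minimal C sketch; the exact condition and the fallback values are assumptions, not the actual libvirt code:

    #include <libvirt/libvirt.h>

    /* If the reported topology cannot account for the number of online CPUs,
     * replace it with a synthetic one so callers know to consult the host
     * capabilities XML for the real layout. */
    static void
    sanitizeNodeInfo(virNodeInfoPtr info)
    {
        unsigned int expected = info->nodes * info->sockets *
                                info->cores * info->threads;

        if (expected != info->cpus) {
            info->nodes = 1;
            info->sockets = 1;
            info->cores = info->cpus;   /* every online CPU reported as a core */
            info->threads = 1;
        }
    }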
-
- 12 November 2012, 2 commits
-
-
Committed by Michal Privoznik
This API was never synchronous and probably doesn't even need to be.
-
Committed by Michal Privoznik
Currently, if a user calls virDomainAbortJob we just issue 'migrate_cancel' and hope for the best. However, if the user calls the API in the wrong phase, before the migration has actually been started (the perform phase), the cancel request is simply ignored. With this patch the request is remembered, and as soon as the perform phase starts the migration is cancelled.
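A minimal sketch of the described mechanism (the structure and function names are illustrative, not the actual qemu driver code): the abort request is recorded in the job state and honoured as soon as the perform phase begins.

    #include <stdbool.h>

    struct migrationJob {
        bool abortRequested;   /* set by the abort-job API before the perform phase */
    };

    /* virDomainAbortJob handler: if migration has not reached the perform
     * phase yet, just remember that a cancel was requested. */
    static void
    requestAbort(struct migrationJob *job)
    {
        job->abortRequested = true;
    }

    /* Entry point of the perform phase: honour a cancel that arrived early
     * instead of silently ignoring it. */
    static int
    performMigration(struct migrationJob *job)
    {
        if (job->abortRequested)
            return -1;          /* cancel the migration right away */
        /* ... otherwise start the migration and issue migrate_cancel later
         * if an abort arrives while it is running ... */
        return 0;
    }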
-
- 10 November 2012, 1 commit
-
-
Committed by Viktor Mihajlovski
For S390, the default console target type cannot be 'serial'. It is necessary to at least interpret the 'arch' attribute value of the os/type element to produce the correct default type. Therefore we need to extend the signature of defaultConsoleTargetType to account for the architecture; as a consequence, all drivers supporting this capability function must be updated. Despite the number of changed files, the only change in behavior is that for S390 the default console target type will be 'virtio'.

N.B.: A more future-proof approach could be to use hypervisor-specific capabilities to determine the best possible console type. For instance, one could add an opaque private data pointer to the virCaps structure (in the QEMU case holding capsCache) which could then be passed to the defaultConsoleTargetType callback to determine the console target type. That seems a bit over-engineered for this use case, however.

Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
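A hedged sketch of what an arch-aware callback could look like; the enum and the exact signature below are illustrative assumptions, not the real libvirt definitions:

    #include <string.h>

    /* Illustrative stand-ins for libvirt's internal console target types. */
    enum consoleTargetType { CONSOLE_TARGET_SERIAL, CONSOLE_TARGET_VIRTIO };

    /* Per-driver callback deciding the default console target type; with this
     * change it also receives the guest architecture from the os/type element. */
    static int
    defaultConsoleTargetType(const char *ostype, const char *arch)
    {
        (void) ostype;

        /* S390 guests cannot use a 'serial' console, so default to virtio. */
        if (arch && (strcmp(arch, "s390") == 0 || strcmp(arch, "s390x") == 0))
            return CONSOLE_TARGET_VIRTIO;

        return CONSOLE_TARGET_SERIAL;
    }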
-
- 09 November 2012, 2 commits
-
-
Committed by Peter Krempa
When the libvirt daemon is restarted it tries to reconnect to running qemu domains. Since commit d38897a5 the re-connection code runs in separate threads. In the original implementation, the maximum of the domain IDs (used to initialize the counter from which newly created guests are numbered) was determined while libvirt was reconnecting to the guests. With the threaded implementation this opens a possibility for race conditions with the thread that is autostarting guests. Suppose a guest with ID 1 is running and the daemon is restarted: if the autostart code is reached first, it spawns the first guest to be autostarted with ID 1. This results in the following unwanted situation:

    # virsh list
     Id    Name                           State
    ----------------------------------------------------
     1     guest1                         running
     1     guest2                         running

This patch extracts the detection code so it runs before the re-connection threads are started, so that the maximum ID of the guests being reconnected to is known up front. The only semantic change this introduces is that if the guest with the greatest ID quits before we are able to reconnect to it, its ID is still used as the greatest one; without this patch, the greatest ID of a process we could successfully reconnect to would have been used.
-
Committed by Philipp Hahn
Commit 82507838 refactored the code to keep both the raw and the canonicalized form of the backingStore, which breaks badly when the storage pool contains a storage volume that is missing its backing store file:

    # ./daemon/libvirtd -l
    2012-11-07 12:43:33.279+0000: 22175: info : libvirt version: 1.0.0
    2012-11-07 12:43:33.279+0000: 22175: error : absolutePathFromBaseFile:542 : Can't canonicalize path '/var/lib/libvirt/images/base.qcow2': No such file or directory
    2012-11-07 12:43:33.280+0000: 22175: error : storageDriverAutostart:115 : Failed to autostart storage pool 'default': Can't canonicalize path '/var/lib/libvirt/images/base.qcow2': No such file or directory

This is because virStorageFileGetMetadataFromBuf() aborts with -1 if the filename of the backingStore cannot be canonicalized:

    #0 absolutePathFromBaseFile () at util/storage_file.c:541
    #1 virStorageFileGetMetadataFromBuf () at util/storage_file.c:728
    #2 virStorageFileGetMetadataFromFD () at util/storage_file.c:932
    #3 virStorageBackendProbeTarget () at storage/storage_backend_fs.c:94
    #4 virStorageBackendFileSystemRefresh () at storage/storage_backend_fs.c:849
    #5 storagePoolStart () at storage/storage_driver.c:700
    #6 virStoragePoolCreate () at libvirt.c:12471
    ...

Treat files whose backing file is missing as standalone files.

Signed-off-by: Philipp Hahn <hahn@univention.de>
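A hedged sketch of the resulting behaviour (not the actual virStorageFileGetMetadataFromBuf code): when the backing file cannot be resolved, warn and drop the backing store reference instead of failing the whole pool refresh.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Resolve the backing file of an image; if it is missing, return NULL so
     * the caller treats the volume as a standalone file and the storage pool
     * still autostarts. */
    static char *
    resolveBackingFile(const char *backingName)
    {
        char *canon = realpath(backingName, NULL);

        if (!canon) {
            fprintf(stderr,
                    "warning: backing file '%s' is inaccessible (errno=%d); "
                    "treating image as standalone\n", backingName, errno);
            return NULL;   /* caller continues with no backing store */
        }
        return canon;      /* caller owns and must free the canonical path */
    }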
-
- 08 November 2012, 5 commits
-
-
Committed by Peter Krempa
Headers of qemuDomainSnapshotLoad and qemuDomainNetsRestart were improperly formatted.
-
Committed by Peter Krempa
This patch adds support for external disk snapshots of inactive domains. The snapshot is created using qemu-img:

    qemu-img create -f format_of_snapshot -o backing_file=/path/to/src,backing_fmt=format_of_backing_image /path/to/snapshot

when the backing image format is known or probing is allowed, and otherwise:

    qemu-img create -f format_of_snapshot -o backing_file=/path/to/src /path/to/snapshot

on each of the disks selected for snapshotting. This patch also modifies the snapshot preparation function to support creating external snapshots and to sanitize arguments. For now the user isn't able to mix external and internal snapshots, but this restriction might be lifted in the future.
-
Committed by Guido Günther
We require a file and don't accept standard input: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=692322
-
Committed by Eric Blake
External checkpoints could be created with snapshot-create, but without libvirt supplying a default name for the memory file, it is essential to add a new argument to snapshot-create-as so the user can choose the memory file name. This adds the option --memspec [file=]name[,snapshot=type], where type can be none, internal, or external. For example,

    virsh snapshot-create-as $dom --memspec /path/to/file

is the shortest possible command line for creating an external checkpoint, named after the current timestamp.

* tools/virsh-snapshot.c (vshParseSnapshotMemspec): New function.
(cmdSnapshotCreateAs): Use it.
* tests/virsh-optparse (test_url): Test it.
* tools/virsh.pod (snapshot-create-as): Document it.
-
Committed by Eric Blake
So far, none of the existing callers of vshStringToArray expected the user to ever pass a literal comma; meanwhile, snapshot parsing had rolled its own array parser. Moving the comma escaping into the common function won't affect any existing callers and makes the function reusable for adding memory handling to snapshot parsing. As a bonus, the testsuite was already exercising snapshot parsing, so the fact that the tests still pass means that vshStringToArray now gets testsuite coverage.

* tools/virsh-snapshot.c (vshParseSnapshotDiskspec): Move ',,' parsing...
* tools/virsh.c (vshStringToArray): ...into common function. Also, vshStrdup can't fail.
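A self-contained sketch of the splitting rule described above, where ',,' escapes a literal comma; this is an illustration, not the actual vshStringToArray implementation:

    #include <stdlib.h>
    #include <string.h>

    /* Split 'str' on commas into a heap-allocated array of strings, treating
     * ",," as an escaped literal comma. Returns the number of tokens. */
    static size_t
    splitWithEscapedCommas(const char *str, char ***retArray)
    {
        char **arr = NULL;
        size_t n = 0, len = 0;
        char *buf = calloc(strlen(str) + 1, 1);   /* current token */

        for (const char *p = str; ; p++) {
            if (*p == ',' && p[1] == ',') {       /* ",," -> literal comma */
                buf[len++] = ',';
                p++;
                continue;
            }
            if (*p == ',' || *p == '\0') {        /* token boundary */
                buf[len] = '\0';
                arr = realloc(arr, (n + 1) * sizeof(*arr));
                arr[n++] = strdup(buf);
                len = 0;
                if (*p == '\0')
                    break;
                continue;
            }
            buf[len++] = *p;
        }
        free(buf);
        *retArray = arr;
        return n;   /* "vda,,1,vdb" yields the tokens "vda,1" and "vdb" */
    }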
-
- 07 November 2012, 2 commits
-
-
Committed by Michal Privoznik
Some operations and APIs need the domain to be paused before they can be performed, e.g. (managed) save of a domain. The CPUs should be resumed at the end. However, if 'cont' fails for some reason we log a message, but that is not sufficient: an event should be emitted as well, so a management application can decide what to do.
-
Committed by Michal Privoznik
This is intended to be reported every time we need to pause a domain for API execution (e.g. qemuDomainSaveInternal) but fail to resume it afterwards. In that case the domain remains paused, yet none of the existing reasons fits this scenario.
-
- 06 November 2012, 9 commits
-
-
Committed by Eric Blake
Make it clear that the alternate terms have no difference except for the length of time they have been supported.

* tools/virsh.pod (start, shutdown, reboot): More documentation.
-
Committed by Eric Blake
https://bugzilla.redhat.com/show_bug.cgi?id=873344 suggested that the grouping 'boot', 'shutdown', 'reboot', as well as the grouping 'start', 'stop', 'restart', might be easier to remember than the current mix of 'start', 'shutdown', 'reboot'. Also touch up the wording of 'reboot' to be more accurate.

* tools/virsh-domain.c (domManagementCmds): Add other command names.
* tools/virsh.pod (start, shutdown, reboot): Document the aliases.
-
Committed by Peter Krempa
The code that was split out into qemuDomainSaveMemory grows the buffer containing the XML description of the domain that it gets from higher layers. If the pointer changes, the old one becomes invalid, yet the upper-layer function still tries to free it, causing an abort. This patch changes the expansion of the original string to a new allocation and a copy of the contents.
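A generic sketch of the bug pattern and the fix (plain C, not the actual qemu driver code): a callee that grows a buffer it received can leave the caller with a stale pointer, while allocating a fresh copy keeps the original valid.

    #include <stdlib.h>
    #include <string.h>

    /* Buggy pattern: realloc may move the buffer, so the pointer the caller
     * still holds becomes invalid and freeing it later aborts the process. */
    static char *
    expandInPlace(char *xml, const char *extra)
    {
        xml = realloc(xml, strlen(xml) + strlen(extra) + 1);   /* may move */
        strcat(xml, extra);
        return xml;   /* the caller's original pointer may now be dangling */
    }

    /* Fixed pattern: leave the caller's string untouched and return a copy. */
    static char *
    expandToCopy(const char *xml, const char *extra)
    {
        char *copy = malloc(strlen(xml) + strlen(extra) + 1);

        if (!copy)
            return NULL;
        strcpy(copy, xml);
        strcat(copy, extra);
        return copy;   /* caller frees the copy; the original stays valid */
    }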
-
Committed by Martin Kletzander
The connection to ESX 5.1 has been broken since g1e7cd395; the fix in bab7752c helped a bit but still missed a spot, so the connection now succeeds but some APIs (for example defineXML) don't work. This patch adds the two missing cases to avoid that.
-
Committed by Michal Privoznik
-
Committed by Michal Privoznik
qemu is sensitive to the order of the arguments it is passed. Hence, if a device requires a controller, the controller command line string must precede the device command line string. The same applies among controllers themselves; for instance, a ccid controller requires a usb controller. Controllers thus form a partial ordering in which they should be added to the qemu command line.
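A simplified sketch of one way to express such an ordering (the ranking table is illustrative, not libvirt's actual code): give each controller type a rank and emit controllers in rank order, so a usb controller always precedes a ccid controller that depends on it.

    #include <stdlib.h>

    /* Hypothetical controller types, ranked so that a dependency always has a
     * lower rank than the controllers that need it. */
    enum controllerType { CTRL_PCI, CTRL_USB, CTRL_SCSI, CTRL_CCID };

    static const int controllerRank[] = {
        [CTRL_PCI]  = 0,
        [CTRL_USB]  = 1,
        [CTRL_SCSI] = 2,
        [CTRL_CCID] = 3,   /* needs a usb controller, so it comes later */
    };

    static int
    compareControllers(const void *a, const void *b)
    {
        const enum controllerType *ca = a, *cb = b;
        return controllerRank[*ca] - controllerRank[*cb];
    }

    /* Sort the controllers before generating their command line strings so
     * the generated arguments respect the dependency order. */
    static void
    orderControllers(enum controllerType *ctrls, size_t nctrls)
    {
        qsort(ctrls, nctrls, sizeof(*ctrls), compareControllers);
    }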
-
Committed by Michal Privoznik
This just re-indents the code and prepares it for the next patch.
-
Committed by Václav Pavlín
https://bugzilla.redhat.com/850186

I added %with_systemd_macros, so it should now work in F17 with the old scriptlets and in F18+/RHEL7+ with the systemd macros (see https://fedoraproject.org/wiki/Packaging:ScriptletSnippets#Systemd). I had missed libvirt-guests.service because there is no systemctl call for it, so I only added the systemd macro calls.
-
Committed by Eric Blake
In Fedora 16, we quit enabling cgconfig because systemd set up default cgroups that were good enough for our use. But in F17, when we switched to systemd, we reverted and started up cgconfig again. See also the tail of this thread: https://www.redhat.com/archives/libvir-list/2012-October/msg01657.html

* libvirt.spec.in (with_systemd): Rely on systemd for cgroups.
-
- 05 November 2012, 3 commits
-
-
Committed by Michal Privoznik
Some FDs may not implement the fdatasync() functionality, e.g. pipes. In that case EINVAL or EROFS is returned. We don't want to fail then, nor report any error.

Reported-by: Christophe Fergeau <cfergeau@redhat.com>
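A minimal sketch of the described behaviour (not the actual libvirt helper): swallow fdatasync() failures that merely mean the FD cannot be synced.

    #include <errno.h>
    #include <unistd.h>

    /* fdatasync() a file descriptor, ignoring FDs that cannot be synced at
     * all (e.g. pipes), which report EINVAL or EROFS. */
    static int
    fdatasyncIfPossible(int fd)
    {
        if (fdatasync(fd) < 0) {
            if (errno == EINVAL || errno == EROFS)
                return 0;   /* not a syncable FD; not an error */
            return -1;      /* real I/O error */
        }
        return 0;
    }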
-
Committed by liguang
Ignore cscope.in.out and cscope.po.out.

Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
-
Committed by Peter Krempa
Some of the pre-snapshot checks have restrictions wired in regarding configuration options that influence taking external checkpoints. This patch removes the restrictions that would inhibit taking such a snapshot.
-
- 04 November 2012, 1 commit
-
-
Committed by Peter Krempa
This patch adds support for taking external system checkpoints. The functionality is layered on top of the previous disk-only snapshot code. When the checkpoint is requested, the domain memory is saved to the memory image file using migration to file. (The user may request that the memory image be taken while the guest is live with the VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag.) The memory save image shares its format with the image created by the virDomainSave() API.
-
- 03 November 2012, 11 commits
-
-
Committed by Peter Krempa
Until now, libvirt supported only internal snapshots of active guests. This patch renames the function creating them to qemuDomainSnapshotCreateActiveInternal to prepare the ground for external active snapshots.
-
Committed by Peter Krempa
The new external system checkpoints will require an async job while the snapshot is taken. This patch adds QEMU_ASYNC_JOB_SNAPSHOT to track this job type.
-
Committed by Peter Krempa
The default behavior while creating external checkpoints is to pause the guest while the memory state is captured. We want to let users sacrifice the space savings and create the memory save image while the guest is live, to minimize downtime. This patch adds a flag that causes the guest not to be paused before taking the snapshot.

*include/libvirt/libvirt.h.in:
 - add new paused reason: VIR_DOMAIN_PAUSED_SNAPSHOT
 - add new flag for taking snapshots: VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
*tools/virsh-domain-monitor.c:
 - add string representation for VIR_DOMAIN_PAUSED_SNAPSHOT
*tools/virsh-snapshot.c:
 - add support for VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
*tools/virsh.pod:
 - add docs for the --live option used to request the VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag
-
Committed by Peter Krempa
The code that saves domain memory by migration to file can be reused while doing external checkpoints of a machine. This patch extracts the common code and places it in a separate function.
-
Committed by Peter Krempa
Two other places were left with the old code to look up snapshots. Change them to use the snapshot lookup helper.
-
Committed by Peter Krempa
-
Committed by Peter Krempa
This patch adds a few new processor feature flags. Namely: f16c, rdrand, lwp, tbm, topoext, perfctr_core, perfctr_nb, fsgsbase, bmi1, hle, avx2, bmi2, erms, invpcid, rtm, rdseed, adx, tce.
-
Committed by Peter Krempa
When pausing the guest while migration is running (to speed up convergence), the virDomainSuspend API checks whether the migration job is active before entering the job. This opens a possible race if virDomainSuspend is called while the job is active but the job ends before the Suspend API enters it (this would require the migration to be aborted). That would cause an incorrect event to be emitted.
-
Committed by Eric Blake
Both system checkpoint snapshots and disk snapshots were iterating over all disks, doing a final sanity check before doing any work. But since future patches will allow offline snapshots to be either external or internal, it makes sense to share the pass over all disks, and then relax restrictions in that pass as new modes are implemented. Future patches can then handle external disks when the domain is offline, then handle offline --disk-snapshot, and finally, combine with migration to file to gain a complete external system checkpoint snapshot of an active domain without using 'savevm'.

* src/qemu/qemu_driver.c (qemuDomainSnapshotDiskPrepare)
(qemuDomainSnapshotIsAllowed): Merge...
(qemuDomainSnapshotPrepare): ...into one function.
(qemuDomainSnapshotCreateXML): Update caller.
-
Committed by Eric Blake
Now that the XML supports listing internal snapshots, it is worth always populating the <memory> and <disks> elements to match.

* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Always parse disk info and set memory info.
-
Committed by Eric Blake
There were no previous callers with require_match set to true. I originally implemented this bool with the intent of supporting ESX snapshot semantics, where the choice of internal vs. external vs. non-checkpointable must be made at domain start; but as ESX has not been wired up to use it yet, we might as well fix it to work with our next qemu patch for now, and worry about any further improvements (changing the bool to a flags argument) if the ESX driver decides to use this function in the future.

* src/conf/snapshot_conf.c (virDomainSnapshotAlignDisks): Alter logic when require_match is true to deal with new XML.
-
- 02 November 2012, 1 commit
-
-
Committed by Eric Blake
Each <domainsnapshot> can now contain an optional <memory> element that describes how the VM state was handled, similar to disk snapshots. The new element will always appear in output; for back-compat, an input that lacks the element will assume 'no' or 'internal' according to the domain state. Along with this change, it is now possible to pass <disks> in the XML for an offline snapshot; this also needs to be wired up in a future patch, to make it possible to choose internal vs. external on a per-disk basis for each disk in an offline domain. At that point, using the --disk-only flag for an offline domain will be able to work.

For some examples below, remember that qemu supports the following snapshot actions:

    qemu-img:    offline external and internal disk
    savevm:      online internal VM and disk
    migrate:     online external VM
    transaction: online external disk

=====

    <domainsnapshot>
      <memory snapshot='no'/>
      ...
    </domainsnapshot>

implies that there is no VM state saved (mandatory for offline and disk-only snapshots, not possible otherwise); using qemu-img for offline domains and transaction for online.

=====

    <domainsnapshot>
      <memory snapshot='internal'/>
      ...
    </domainsnapshot>

state is saved inside one of the disks (as in qemu's 'savevm' system checkpoint implementation). If needed in the future, we can also add an attribute pointing out _which_ disk saved the internal state; maybe disk='vda'.

=====

    <domainsnapshot>
      <memory snapshot='external' file='/path/to/state'/>
      ...
    </domainsnapshot>

This is not wired up yet, but future patches will allow this to control a combination of 'virsh save /path/to/state' plus disk snapshots from the same point in time.

=====

So for 1.0.1 (and later, as needed), I plan to implement this table of combinations, with '*' designating new code and '+' designating existing code reached through new combinations of xml and/or the existing DISK_ONLY flag:

    domain   memory   disk    disk-only | result
    -------------------------------------------------------------------------
    offline  omit     omit    any       | memory=no disk=int, via qemu-img
    offline  no       omit    any       |+memory=no disk=int, via qemu-img
    offline  omit/no  no      any       | invalid combination (nothing to snapshot)
    offline  omit/no  int     any       |+memory=no disk=int, via qemu-img
    offline  omit/no  ext     any       |*memory=no disk=ext, via qemu-img
    offline  int/ext  any     any       | invalid combination (no memory to save)
    online   omit     omit    off       | memory=int disk=int, via savevm
    online   omit     omit    on        | memory=no disk=default, via transaction
    online   omit     no/ext  off       | unsupported for now
    online   omit     no      on        | invalid combination (nothing to snapshot)
    online   omit     ext     on        | memory=no disk=ext, via transaction
    online   omit     int     off       |+memory=int disk=int, via savevm
    online   omit     int     on        | unsupported for now
    online   no       omit    any       |+memory=no disk=default, via transaction
    online   no       no      any       | invalid combination (nothing to snapshot)
    online   no       int     any       | unsupported for now
    online   no       ext     any       |+memory=no disk=ext, via transaction
    online   int/ext  any     on        | invalid combination (disk-only vs. memory)
    online   int      omit    off       |+memory=int disk=int, via savevm
    online   int      no/ext  off       | unsupported for now
    online   int      int     off       |+memory=int disk=int, via savevm
    online   ext      omit    off       |*memory=ext disk=default, via migrate+trans
    online   ext      no      off       |+memory=ext disk=no, via migrate
    online   ext      int     off       | unsupported for now
    online   ext      ext     off       |*memory=ext disk=ext, via migrate+transaction

* docs/schemas/domainsnapshot.rng (memory): New RNG element.
* docs/formatsnapshot.html.in: Document it.
* src/conf/snapshot_conf.h (virDomainSnapshotDef): New fields.
* src/conf/domain_conf.c (virDomainSnapshotDefFree)
(virDomainSnapshotDefParseString, virDomainSnapshotDefFormat):
Manage new fields.
* tests/domainsnapshotxml2xmltest.c: New test.
* tests/domainsnapshotxml2xmlin/*.xml: Update existing tests.
* tests/domainsnapshotxml2xmlout/*.xml: Likewise.
-