- 21 February 2015, 12 commits
-
Committed by Peter Krempa
Do a content-aware check to decide whether formatting of the <numatune> element is necessary. Later on, the def->numa structure will always be present, so we cannot decide solely on the basis of whether it is allocated.
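A minimal sketch of what such a content-aware check could look like (the helper, its name, and the field names are illustrative assumptions, not the actual patch):

    /* Hypothetical helper: format <numatune> only when the definition
     * actually carries NUMA tuning content, not merely because
     * def->numa happens to be allocated. */
    static bool
    numatuneHasContent(virDomainNumaPtr numa)
    {
        return numa &&
               (numa->memory.specified ||   /* <memory/> child present */
                numa->nmem_nodes > 0);      /* at least one <memnode/> */
    }

    if (numatuneHasContent(def->numa))
        virDomainNumatuneFormatXML(&buf, def->numa);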
-
Committed by Peter Krempa
Shuffling the logic around allows the code to be simplified quite a bit. As an additional bonus, the changed logic now reports an error if automatic placement is selected while individual placement is configured at the same time.
-
Committed by Peter Krempa
Collapse a few of the conditions so that the program flow is clearer.
-
Committed by Peter Krempa
Currently the code could exit without reporting an error: virBitmapParse reports one only if it fails to parse the bitmap, yet the code jumped to the error label even when the map was parsed correctly but contained 0 CPUs.
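A hedged sketch of the resulting check (surrounding variables are illustrative; virBitmapParse, virBitmapCountBits and virReportError are the real helpers):

    virBitmapPtr map = NULL;

    /* virBitmapParse() reports an error itself on malformed input... */
    if (virBitmapParse(cpumask, 0, &map, VIR_DOMAIN_CPUMASK_LEN) < 0)
        goto error;

    /* ...but a map that parses cleanly to zero CPUs needs an explicit
     * error of our own before jumping to the label. */
    if (virBitmapCountBits(map) == 0) {
        virReportError(VIR_ERR_XML_ERROR, "%s",
                       _("no CPUs found in the map"));
        goto error;
    }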
-
Committed by Peter Krempa
It's easier to recalculate the number in the one place it's used than to keep a separate variable tracking it. This will also help with moving the NUMA code to a separate module.
-
Committed by Peter Krempa
Name it virNumaMemAccess and add it to conf/numa_conf.[ch]. Note that, to avoid a circular dependency, the type of the NUMA cell memAccess variable was changed to int; it will be changed back later, once the circular dependency no longer exists.
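A sketch of the shape this takes (the struct layout approximates the code of that period and is not verbatim):

    /* conf/numa_conf.h */
    typedef enum {
        VIR_NUMA_MEM_ACCESS_DEFAULT,
        VIR_NUMA_MEM_ACCESS_SHARED,
        VIR_NUMA_MEM_ACCESS_PRIVATE,

        VIR_NUMA_MEM_ACCESS_LAST
    } virNumaMemAccess;

    /* The NUMA cell keeps the field as a plain int for now, so its own
     * header need not include numa_conf.h (which would reintroduce the
     * dependency cycle the commit message mentions): */
    struct _virCellDef {
        virBitmapPtr cpumask;    /* CPUs that are part of this node */
        unsigned long long mem;  /* node memory in kB */
        int memAccess;           /* virNumaMemAccess */
    };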
-
Committed by Peter Krempa
The structure will gradually become the only place for NUMA-related config, so rename it accordingly.
-
Committed by Peter Krempa
Move the code that formats the /domain/cpu/numa element to numa_conf, as it belongs there.
-
Committed by Peter Krempa
The mask was stored both as a bitmap and as a string. The string is used for XML output only, so remove it: it can be reconstructed from the bitmap. The test change is necessary because the bitmap formatter doesn't "optimize" using the '^' operator.
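A minimal sketch of the formatter side (attribute layout is illustrative; virBitmapFormat is the real helper that rebuilds the string):

    /* Reconstruct the cpus string from the bitmap only at XML-format
     * time; virBitmapFormat() emits plain ranges such as "0-3,8" and
     * never the '^' exclusion syntax, hence the test-data change. */
    char *cpustr = virBitmapFormat(cell->cpumask);
    if (!cpustr)
        return -1;

    virBufferAsprintf(buf, "<cell id='%zu' cpus='%s' memory='%llu'/>\n",
                      i, cpustr, cell->mem);
    VIR_FREE(cpustr);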
-
Committed by Peter Krempa
Rewrite the function to save a few local variables and reorder the code so it makes more sense. Additionally, the ncells_max member of the virCPUDef structure is used only for tracking allocation while parsing the NUMA definition; this can be avoided by switching to VIR_ALLOC_N, as the array is not resized after the initial allocation.
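A sketch of the allocation pattern this moves to (the XPath expression and field names are assumptions):

    /* One up-front VIR_ALLOC_N() sized to the node-set count removes
     * the need for the grow-as-you-go bookkeeping that ncells_max
     * existed to support. */
    int n;
    xmlNodePtr *nodes = NULL;

    if ((n = virXPathNodeSet("./cpu/numa[1]/cell", ctxt, &nodes)) <= 0)
        goto cleanup;

    if (VIR_ALLOC_N(def->cells, n) < 0)
        goto cleanup;
    def->ncells = n;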
-
Committed by Peter Krempa
For weird historical reasons NUMA cells are added as a subelement of <cpu>, while the actual configuration is done in <numatune>. This patch splits the cell parser code out of the cpu config into the NUMA config. Note that the changes to the code are the minimum needed to make it work; the function will be refactored in the next patch.
-
Committed by Peter Krempa
For a while now there have been two places that gather information about NUMA-related guest configuration. While the XML can't be changed, we can at least store the data in one place in the definition. Rename the numatune_conf.[ch] files to numa_conf, as later patches will move the rest of the definitions from the cpu definition into this one.
-
- 20 February 2015, 4 commits
-
Committed by Pavel Hrdina
virDomainGetInfo returns only live info for a running domain and only config info for an offline domain; there was no way to get config info for a running domain. Use "vshCPUCountCollect" instead to obtain the correct CPU count that we need to pass to "virDomainGetVcpuPinInfo". Also clean up some unnecessary variables and checks that are already done by the drivers. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1160559 Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Michal Privoznik
Not all machine types support all devices, device properties, backends, etc. So until we create a matrix of [machineType, qemuCaps], let's just filter out some capabilities before we return them to the consumer (which is going to make decisions based on them straight away). Currently, as qemu is unable to tell which capabilities are (not) enabled for given machine types, it is up to us to hardcode the matrix. One day, maybe, the hardcoding will go away and we can create the matrix dynamically on the fly based on a few monitor calls. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
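A minimal sketch of the hardcoded filtering idea; the specific capability/machine-type pairing below is only an example of the pattern:

    /* Drop capabilities the given machine type cannot use before the
     * caps are handed out to consumers. */
    void
    virQEMUCapsFilterByMachineType(virQEMUCapsPtr qemuCaps,
                                   const char *machineType)
    {
        /* e.g. the legacy s390 virtio bus only makes sense on s390 */
        if (!STRPREFIX(machineType, "s390") &&
            virQEMUCapsGet(qemuCaps, QEMU_CAPS_VIRTIO_S390))
            virQEMUCapsClear(qemuCaps, QEMU_CAPS_VIRTIO_S390);
    }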
-
Committed by Michal Privoznik
It will come in handy in the near future, when we will filter some capabilities based on it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Martin Kletzander
When editing a domain with 'virsh edit' and failing validation, the usual message pops up: "Failed. Try again? [y,n,f,?]:". Turning off validation can be useful, mainly for testing (but for other purposes too), so this patch adds support for relaxing validation of the definition in virsh-edit and makes 'virsh edit <domain>' more usable. Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 19 February 2015, 9 commits
-
Committed by Mikhail Feoktistov
1. Delete all boot devices for the VM instance. 2. Find the first HDD in the XML and set it as bootable. For now we support only one boot device, and it should be an HDD. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Mikhail Feoktistov
-
Committed by Mikhail Feoktistov
-
Committed by Jiri Denemark
Not all files we want to find using virFileFindResource{,Full} are generated when libvirt is built; some of them (such as RNG schemas) are distributed with the sources. The current API was not able to find source files if libvirt was built in a VPATH. Both the RNG schemas and cpu_map.xml are distributed in the source tarball. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1179678 When migrating with storage, libvirt iterates over the domain's disks and instructs qemu to migrate the ones we are interested in (shared, read-only and source-less disks are skipped). The disks are migrated in series; no new disk transfer is started until the previous one has been quiesced. This is checked on the qemu monitor via the 'query-jobs' command. If the disk has been quiesced, it has practically gone from copying its content to the mirroring state, where all disk writes are mirrored to the other side of the migration too. Having said that, there's one inherent error in the design: the monitor command we use reports only active jobs, so if a job fails for whatever reason, we no longer see it in the command output. And this can happen fairly easily: just try to migrate a domain with storage. If the storage migration fails (e.g. due to ENOSPC on the destination), we resume the guest on the destination and let it run on a partly copied disk. The proper fix is what even the comment in the code says: listen for qemu events instead of polling. If the storage migration changes state, an event is emitted and we can act accordingly: either consider the disk copied and continue the process, or consider the disk mangled and abort the migration. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has completed (in qemuMonitorJSONHandleBlockJobImpl()). To give a better picture, the event looks something like this:

    {"timestamp": {"seconds": 1423582694, "microseconds": 372666},
     "event": "BLOCK_JOB_COMPLETED",
     "data": {"device": "drive-virtio-disk0", "len": 8412790784,
              "offset": 409993216, "speed": 8796093022207,
              "type": "mirror", "error": "No space left on device"}}

If "len" does not equal "offset" it's considered an error, and we can clearly see the "error" field filled in. However, later in the event processing this case was handled no differently from the case of the job being aborted via a separate API. It's time we started differentiating these two, because of future work. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
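A sketch of the distinction being introduced (the enum values are the public block job statuses; the surrounding variables come from the parsed event and are illustrative):

    /* A completed event carrying an error, or with offset != len, is a
     * failure; only a clean full copy counts as completed. */
    if (error || offset != len)
        status = VIR_DOMAIN_BLOCK_JOB_FAILED;
    else
        status = VIR_DOMAIN_BLOCK_JOB_COMPLETED;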
-
Committed by Michal Privoznik
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated each time. The callback code handling the events checks whether a blockjob was started via our public APIs prior to setting the mirrorState. However, some block jobs may be started internally (e.g. during storage migration), in which case we don't bother setting disk->mirror (there's nothing we could set it to anyway) or the other fields. But it will come in handy if we update the mirrorState in these cases too: the event wasn't delivered just for fun, we did start the job after all. So, in this commit, the mirrorState is set to whatever job status we've obtained. Of course, there are some actions we want to perform on some statuses, but instead of an if {} else if {} else {} ... enumeration, let's move to a switch(). Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
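A sketch of the switch() shape the commit describes (case bodies are elided; the values are the public virConnectDomainEventBlockJobStatus statuses):

    switch ((virConnectDomainEventBlockJobStatus) status) {
    case VIR_DOMAIN_BLOCK_JOB_READY:
        disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
        break;

    case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
    case VIR_DOMAIN_BLOCK_JOB_FAILED:
    case VIR_DOMAIN_BLOCK_JOB_CANCELED:
        /* pivot to the destination or tear the mirror down */
        break;

    case VIR_DOMAIN_BLOCK_JOB_LAST:
        break;
    }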
-
Committed by Peter Krempa
Commit e105dc98 moved some code but didn't adjust the jump labels accordingly, so the job would not be terminated.
-
Committed by Pavel Hrdina
Libvirt could crash with a segfault if a user issued "service reload" right after "service start". One possible way to crash libvirt is to run a reload during the initialization of the QEMU driver: the qemu driver initializes qemu_driver_lock but doesn't yet have time to set its "config" when the SIGHUP arrives. The reload handler tries to get qemu_drv->config during "virStorageAutostart" and dereferences it, which ends in a segfault. Let's ignore all reload requests until all drivers are initialized. In addition, set driversInitialized before we enter virStateCleanup, to ignore reload requests while we are shutting down. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179981 Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
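The guard itself is small; a sketch close to what such a SIGHUP handler looks like (the handler signature approximates the daemon code of that era):

    static int driversInitialized; /* set once virStateInitialize() is done */

    static void
    daemonReloadHandler(virNetServerPtr srv ATTRIBUTE_UNUSED,
                        siginfo_t *sig ATTRIBUTE_UNUSED,
                        void *opaque ATTRIBUTE_UNUSED)
    {
        if (!driversInitialized) {
            VIR_WARN("Drivers are not initialized, reload ignored");
            return;
        }

        VIR_INFO("Reloading configuration on SIGHUP");
        if (virStateReload() < 0)
            VIR_WARN("Error while reloading drivers");
    }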
-
- 18 February 2015, 1 commit
-
Committed by Ján Tomko
All the addresses from the range are used, not just those that are in use on the host. https://bugzilla.redhat.com/show_bug.cgi?id=1079917
-
- 17 February 2015, 9 commits
-
Committed by Daniel P. Berrange
Introduce some basic docs describing the virtlockd setup. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Daniel P. Berrange
In preparation for adding docs about virtlockd, split the sanlock setup docs out into a separate page. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
Committed by Pavel Hrdina
If 'virNumaGetHostNodeset()' fails, the error path will try to free the uninitialized pointer mem_mask. Introduced by commit af2a1f05. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
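The fix pattern in a nutshell (the surrounding context is illustrative):

    virBitmapPtr nodeset = NULL;
    char *mem_mask = NULL;  /* must start NULL: the error path frees it */

    if (!(nodeset = virNumaGetHostNodeset()))
        goto cleanup;       /* taken before mem_mask is ever assigned */

    /* ... code that eventually assigns mem_mask ... */

 cleanup:
    VIR_FREE(mem_mask);     /* safe now: VIR_FREE(NULL) is a no-op */
    virBitmapFree(nodeset);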
-
Committed by Prerna Saxena
PowerPC: Forbid a NULL CPU model with 'host-model' mode in the qemu command line. This ensures that an XML such as the following:

    ...
    <cpu mode='host-model'>
      <model fallback='allow'/>
    </cpu>
    ...

will not generate a '-cpu host,compat=(null)' command line with qemu-system-ppc64. Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
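A sketch of the guard (its exact placement and the variable names are illustrative):

    /* Refuse to format '-cpu host,compat=...' without a model name. */
    if (cpu->mode == VIR_CPU_MODE_HOST_MODEL && !cpu->model) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("'host-model' mode needs a named CPU model "
                         "on this architecture"));
        goto cleanup;
    }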
-
Committed by Prerna Saxena
PowerPC: Explicitly associate 'qemu-system-ppc64' as the default emulator for all 64-bit PowerPC guests (both Big and Little Endian). Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
-
Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1126762 Commit 43b67f introduced a deadlock when we use numatune to change the NUMA settings of a vm in session mode. Jump to endjob instead of jumping to cleanup. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Michal Privoznik
So, when building the '-numa' command line, the qemuBuildMemoryBackendStr() function does quite a lot of checks to choose the best backend, or to check whether one is in fact needed. However, it claimed that a backend is needed even for this little fella:

    <numatune>
      <memory mode="strict" nodeset="0,2"/>
    </numatune>

This can be guaranteed via CGroups entirely; there's no need to use memory-backend-ram to let qemu know where to get memory from. Well, as long as there's no <memnode/> element, which explicitly requires the backend. Long story short, we wouldn't have to care, as qemu works either way. However, the problem is migration (as always). Previously, libvirt would have started qemu with '-numa node,memory=X' in this case and restricted memory placement in CGroups. Today, libvirt creates a more complicated command line: '-object memory-backend-ram,id=ram-node0,size=X -numa node,memdev=ram-node0'. Again, one wouldn't find anything wrong with these two approaches; both work just fine. Unless you try to migrate from an older libvirt into a newer one: these two approaches are, unfortunately, not compatible. My suggestion is, in order to allow users to migrate, let's use the older approach for as long as the newer one is not needed. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
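The decision boils down to one predicate; a hedged sketch (virDomainNumatuneHasPerNodeBinding existed around this time, but treat the exact condition and the useMemdev flag as approximations):

    /* Generate memory-backend-ram only when it is genuinely needed:
     * a per-node <memnode/> binding, or a policy CGroups cannot
     * enforce on their own. */
    if (virDomainNumatuneHasPerNodeBinding(def->numatune) ||
        mode != VIR_DOMAIN_NUMATUNE_MEM_STRICT) {
        useMemdev = true;   /* -object memory-backend-ram,... */
    } else {
        useMemdev = false;  /* legacy -numa node,memory=X form, which
                             * stays migratable from older libvirt */
    }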
-
Committed by Michal Privoznik
Well, we can pretend that we've asked numad for its suggestion and let the qemu command line be built with respect to that. Again, this alone has no big value, but see the later commits that build on top of this. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
This function is going to be needed in the near future. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 16 February 2015, 1 commit
-
Committed by Luyao Huang
Just like the fix for domdisplay in commit 1ba815.
-
- 14 February 2015, 4 commits
-
Committed by John Ferlan
Periodically my Coverity scan will return a checked_return failure for libxlDomainShutdownThread's call to libxlDomainStart. Follow the libxlAutostartDomain example in order to check the status, emit a message, and continue on.
-
Committed by John Ferlan
Introduced by commit id 'c3d9d3bb': the return path from virSecurityManagerCheckModel wasn't VIR_FREE()'ing the memory allocated by virSecurityManagerGetNested.
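The leak and its fix in miniature (the surrounding function context is illustrative):

    int ret = -1;
    virSecurityManagerPtr *sec_managers = NULL;

    if (!(sec_managers = virSecurityManagerGetNested(mgr)))
        return -1;

    /* ... check the model against each nested manager ... */

    ret = 0;
 cleanup:
    VIR_FREE(sec_managers);  /* the array itself belongs to the caller */
    return ret;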
-
Committed by Luyao Huang
Jumping to the cleanup label prior to starting the container failed to properly clean up everything that is handled by virLXCProcessCleanup, which is called if virLXCProcessStop is invoked on failure after the container starts properly. Most importantly, prior to this patch, none of the stop/release hooks, host device reattachment, or network cleanup (the reverse of virLXCProcessSetupInterfaces) was performed. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by John Ferlan
Modify the VIR_DEBUG message in virLXCProcessCleanup to be clearer about the path. Also add some more VIR_DEBUG messages to virLXCProcessStart in order to help debug the error flow.
-