- 21 Feb 2015, 8 commits
-
Committed by Peter Krempa
It's easier to recalculate the number in the one place it's used than to have a separate variable tracking it. It will also help with moving the NUMA code to a separate module.
-
Committed by Peter Krempa
Name it virNumaMemAccess and add it to conf/numa_conf.[ch]. Note that to avoid a circular dependency, the type of the NUMA cell memAccess variable was changed to int. It will be turned back later, once the circular dependency no longer exists.
-
Committed by Peter Krempa
The structure will gradually become the only place for NUMA-related config, thus rename it appropriately.
-
Committed by Peter Krempa
Move the code that formats the /domain/cpu/numa element to numa_conf as it belongs there.
-
Committed by Peter Krempa
The mask was stored both as a bitmap and as a string. The string is used for XML output only. Remove the string, as it can be reconstructed from the bitmap. The test change is necessary as the bitmap formatter doesn't "optimize" using the '^' operator.
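A minimal sketch of the reconstruction this enables (virBitmapFormat and virBufferAsprintf are existing libvirt helpers; the surrounding variable names are illustrative):

    /* Render the cpus string from the bitmap only when the XML
     * is being formatted, instead of keeping a parallel string. */
    char *cpustr = virBitmapFormat(cell->cpumask);
    if (!cpustr)
        return -1;
    virBufferAsprintf(&buf, " cpus='%s'", cpustr);
    VIR_FREE(cpustr);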
-
Committed by Peter Krempa
Rewrite the function to save a few local variables and reorder the code to make more sense. Additionally, the ncells_max member of the virCPUDef structure is used only for tracking allocation when parsing the numa definition, which can be avoided by switching to VIR_ALLOC_N as the array is not resized after the initial allocation.
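A sketch of the allocation change described (VIR_ALLOC_N is libvirt's zero-initializing array allocator; the surrounding names are illustrative):

    /* One up-front allocation replaces the ncells_max growth
     * tracking, since the cell array is never resized after
     * parsing has counted the <cell> elements. */
    if (VIR_ALLOC_N(def->cells, n) < 0)
        goto cleanup;
    def->ncells = n;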
-
Committed by Peter Krempa
For weird historical reasons NUMA cells are added as a subelement of <cpu> while the actual configuration is done in <numatune>. This patch splits out the cell parser code from the cpu config into the NUMA config. Note that the changes to the code are minimal, just enough to make it work; the function will be refactored in the next patch.
-
Committed by Peter Krempa
For a while now there have been two places that gather information about NUMA-related guest configuration. While the XML can't be changed, we can at least store the data in one place in the definition. Rename the numatune_conf.[ch] files to numa_conf, as later patches will move the rest of the definitions from the cpu definition into this one.
-
- 20 Feb 2015, 2 commits
-
Committed by Michal Privoznik
Not all machine types support all devices, device properties, backends, etc. So until we create a matrix of [machineType, qemuCaps], let's just filter out some capabilities before we return them to the consumer (which is going to make decisions based on them straight away). Currently, as qemu is unable to tell which capabilities are (not) enabled for given machine types, it's us who have to hardcode the matrix. One day maybe the hardcoding will go away and we can create the matrix dynamically on the fly based on a few monitor calls. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
It will come in handy in the near future when we will filter some capabilities based on it. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 19 Feb 2015, 8 commits
-
Committed by Mikhail Feoktistov
1. Delete all boot devices for the VM instance.
2. Find the first HDD from the XML and set it as bootable.
Now we support only one boot device and it should be an HDD. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Mikhail Feoktistov
-
Committed by Mikhail Feoktistov
-
Committed by Jiri Denemark
Not all files we want to find using virFileFindResource{,Full} are generated when libvirt is built; some of them (such as the RNG schemas) are distributed with the sources. The current API was not able to find source files if libvirt was built in VPATH. Both the RNG schemas and cpu_map.xml are distributed in the source tarball. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1179678 When migrating with storage, libvirt iterates over domain disks and instructs qemu to migrate the ones we are interested in (shared, RO and source-less disks are skipped). The disks are migrated in series; no new disk is transferred until the previous one has been quiesced. This is checked on the qemu monitor via the 'query-block-jobs' command. If the disk has been quiesced, it practically went from copying its content to the mirroring state, where all disk writes are mirrored to the other side of the migration too. Having said that, there's one inherent error in the design. The monitor command we use reports only active jobs, so if the job fails for whatever reason, we will not see it anymore in the command output. And this can happen fairly simply: just try to migrate a domain with storage. If the storage migration fails (e.g. due to ENOSPC on the destination) we resume the guest on the destination and let it run on a partly copied disk. The proper fix is what even the comment in the code says: listen for qemu events instead of polling. If the storage migration changes state, an event is emitted and we can act accordingly: either consider the disk copied and continue the process, or consider the disk mangled and abort the migration. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Michal Privoznik
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has completed (in qemuMonitorJSONHandleBlockJobImpl()). To paint a better picture, the event looks something like this:

    {"timestamp": {"seconds": 1423582694, "microseconds": 372666},
     "event": "BLOCK_JOB_COMPLETED",
     "data": {"device": "drive-virtio-disk0", "len": 8412790784,
              "offset": 409993216, "speed": 8796093022207,
              "type": "mirror", "error": "No space left on device"}}

If "len" does not equal "offset" it's considered an error, and we can clearly see the "error" field filled in. However, later in the event processing this case was handled no differently from the case of the job being aborted via a separate API. It's time that we start differentiating these two because of the future work. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
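A self-contained sketch of the success test the event implies (the function name is hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    /* A mirror job succeeded only if it copied everything,
     * i.e. offset == len; otherwise the "error" field, when
     * present, explains the cause. */
    static bool
    blockJobSucceeded(long long len, long long offset,
                      const char *error)
    {
        return offset == len && error == NULL;
    }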
-
Committed by Michal Privoznik
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated each time. The callback code handling the events checks if a blockjob was started via our public APIs prior to setting the mirrorState. However, some block jobs may be started internally (e.g. during storage migration), in which case we don't bother with setting disk->mirror (there's nothing we can set it to anyway), or other fields. But it will come in handy if we update the mirrorState in these cases too. The event wasn't delivered just for fun - we've started the job after all. So, in this commit, the mirrorState is set to whatever job status we've obtained. Of course, there are some actions on some statuses that we want to perform. But instead of an if {} else if {} else {} ... enumeration, let's move to a switch(). Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
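A sketch of the switch() shape the message argues for (the status constants are libvirt's public block-job status codes; the actions are placeholders):

    switch (status) {
    case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
        /* everything copied; finish or pivot */
        break;
    case VIR_DOMAIN_BLOCK_JOB_FAILED:
        /* the job died on its own; differs from an abort */
        break;
    case VIR_DOMAIN_BLOCK_JOB_CANCELED:
        /* aborted via the API by the user */
        break;
    default:
        break;
    }
    /* and in every case remember the last status, even for
     * jobs libvirt did not start itself */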
-
Committed by Peter Krempa
Commit e105dc98 moved some code but didn't adjust the jump labels so that the job would be terminated.
-
- 17 Feb 2015, 6 commits
-
Committed by Pavel Hrdina
If virNumaGetHostNodeset() fails, then the error path will try to free the uninitialized pointer mem_mask. Introduced by commit af2a1f05. Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Prerna Saxena
PowerPC: Forbid a NULL CPU model with 'host-model' mode in the qemu command line. This ensures that XML such as the following:

    ...
    <cpu mode='host-model'>
      <model fallback='allow'/>
    </cpu>
    ...

will not generate a '-cpu host,compat=(null)' command line with qemu-system-ppc64. Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
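A hedged sketch of the guard this implies (structure member names follow libvirt's virCPUDef; the error code and message are assumptions):

    /* Refuse to format '-cpu host,compat=(null)': host-model
     * with <model fallback='allow'/> leaves the model NULL. */
    if (def->cpu->mode == VIR_CPU_MODE_HOST_MODEL &&
        !def->cpu->model) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("host-model mode needs a CPU model name "
                         "on this architecture"));
        return -1;
    }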
-
Committed by Prerna Saxena
PowerPC: Explicitly associate 'qemu-system-ppc64' as the default emulator for all 64-bit PowerPC guests (both Big and Little Endian). Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
-
Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1126762 Commit 43b67f introduced a deadlock issue when we use numatune to change the numa settings of a vm in session mode. Jump to endjob instead of cleanup. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by Michal Privoznik
So, when building the '-numa' command line, the qemuBuildMemoryBackendStr() function does quite a lot of checks to choose the best backend, or to check if one is in fact needed. However, it reported that a backend is needed even for this little fella:

    <numatune>
      <memory mode="strict" nodeset="0,2"/>
    </numatune>

This can be guaranteed via CGroups entirely; there's no need to use memory-backend-ram to let qemu know where to get memory from. Well, as long as there's no <memnode/> element, which explicitly requires the backend. Long story short, we wouldn't have to care, as qemu works either way. However, the problem is migration (as always). Previously, libvirt would have started qemu with:

    -numa node,memory=X

in this case and restricted memory placement in CGroups. Today, libvirt creates a more complicated command line:

    -object memory-backend-ram,id=ram-node0,size=X -numa node,memdev=ram-node0

Again, one wouldn't find anything wrong with these two approaches. Both work just fine. Unless you try to migrate from the older libvirt into the newer one. These two approaches are, unfortunately, not compatible. My suggestion is, in order to allow users to migrate, let's use the older approach for as long as the newer one is not needed. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
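A sketch of the compatibility rule being proposed (virDomainNumatuneHasPerNodeBinding exists in libvirt's numatune code; the surrounding variables are illustrative):

    /* Use the old '-numa node,memory=X' syntax unless something
     * actually requires a backend object, so guests started by
     * older libvirt can still migrate in. */
    bool needBackend = false;

    if (virDomainNumatuneHasPerNodeBinding(def->numatune))
        needBackend = true;   /* a <memnode/> element is present */

    if (!needBackend)
        virBufferAsprintf(&buf, "node,memory=%llu", mem);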
-
Committed by Michal Privoznik
This function is going to be needed in the near future. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 14 Feb 2015, 5 commits
-
Committed by John Ferlan
Periodically my Coverity scan will return a checked_return failure for the libxlDomainShutdownThread call to libxlDomainStart. Followed the libxlAutostartDomain example in order to check the status, emit a message, and continue on.
-
Committed by John Ferlan
Introduced by commit id 'c3d9d3bb' - the return path from virSecurityManagerCheckModel wasn't VIR_FREE()'ing the memory allocated by virSecurityManagerGetNested.
-
Committed by Luyao Huang
Jumping to the cleanup label prior to starting the container failed to properly clean up everything that is handled by virLXCProcessCleanup, which is called if virLXCProcessStop is called on failure after the container properly starts. Most importantly, prior to this patch none of the stop/release hooks, host device reattachment, or network cleanup (the reverse of virLXCProcessSetupInterfaces) was performed. Signed-off-by: Luyao Huang <lhuang@redhat.com>
-
Committed by John Ferlan
Modify the VIR_DEBUG message in virLXCProcessCleanup to make the path clearer. Also add some more VIR_DEBUG messages in virLXCProcessStart in order to help debug the error flow.
-
Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1176503 Move the two console checks - one for zero nconsoles present and the other for an invalid console type - earlier in the processing, rather than failing after performing some setup that has to be undone for what amounts to an invalid configuration. This resolves the above bug, since it's no longer possible to have changed the security labels when we cause the configuration check failure.
-
- 13 Feb 2015, 6 commits
-
Committed by Erik Skultety

    if (mgr == NULL || mgr->drv == NULL)
        return ret;

This check isn't really necessary; the security manager cannot be a NULL pointer, as it is either selinux (by default) or 'none' if no other driver is set in the config. Even with no config file, the driver name yields 'none'. The other hunk checks the domain's security model validity, but we should check the devices' security model as well; therefore this hunk is moved into a separate function, which is called by virSecurityManagerCheckAllLabel to check both the domain's security model and the devices' security model. https://bugzilla.redhat.com/show_bug.cgi?id=1165485 Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Erik Skultety
We do have a check for a valid per-domain security model; however, we still permit an invalid security model for a domain's devices (those which are specified with a <source> element). This patch introduces a new function, virSecurityManagerCheckAllLabel, which compares the user-specified security model against the currently registered security drivers. That being said, it also permits 'none' being specified as a device security model. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165485 Signed-off-by: Ján Tomko <jtomko@redhat.com>
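A sketch close in spirit to what these two commits describe, using libvirt helpers known to exist (exact code is an assumption):

    static int
    virSecurityManagerCheckModel(virSecurityManagerPtr mgr,
                                 const char *model)
    {
        int ret = -1;
        size_t i;
        virSecurityManagerPtr *sec_managers = NULL;

        /* 'none' is always an acceptable security model */
        if (STREQ_NULLABLE(model, "none"))
            return 0;

        if (!(sec_managers = virSecurityManagerGetNested(mgr)))
            return -1;

        for (i = 0; sec_managers[i]; i++) {
            if (STREQ(model,
                      virSecurityManagerGetModel(sec_managers[i]))) {
                ret = 0;
                break;
            }
        }

        VIR_FREE(sec_managers);  /* see the leak fix noted above */
        return ret;
    }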
-
Committed by Ján Tomko

    <interface ...>
      ...
      <model type='virtio'/>
      <driver ...>
        <host mrg_rxbuf='off'/>
      </driver>
    </interface>

will result in:

    -device virtio-net-pci,mrg_rxbuf=off,...

https://bugzilla.redhat.com/show_bug.cgi?id=1186886
-
Committed by Ján Tomko
Add an XML attribute to allow disabling merge of rx buffers on the host:

    <interface ...>
      ...
      <model type='virtio'/>
      <driver ...>
        <host mrg_rxbuf='off'/>
      </driver>
    </interface>

https://bugzilla.redhat.com/show_bug.cgi?id=1186886
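A sketch of how the attribute plausibly reaches the qemu command line (field spelling follows libvirt's virDomainNetDef; treat it as illustrative):

    /* Emit the property only when the XML set it explicitly;
     * virTristateSwitchTypeToString yields "on"/"off". */
    if (net->driver.virtio.host.mrg_rxbuf) {
        virBufferAsprintf(&buf, ",mrg_rxbuf=%s",
                          virTristateSwitchTypeToString(
                              net->driver.virtio.host.mrg_rxbuf));
    }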
-
Committed by Michal Privoznik
The enum converters are declared in domain_conf.h (so accessible widely across the code), but on the symbol layer only virDomainNetTypeToString was exposed. However, the FromString variant is going to be needed shortly. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Pavel Hrdina
Commit b6a2828e introduced new functions to set the process scheduler. There is a small typo in the ELSE path for systems where the scheduler is not available. Also, some of the definitions were introduced later in the kernel. For example, RHEL-5 runs on kernel 2.6.18, but SCHED_IDLE was introduced in 2.6.23 [1] and SCHED_BATCH in 2.6.16 [1]. We should not rely only on the existence of the sched_setscheduler() function; we must also check for the existence of the macros used, as they might not be defined. [1] see 'man 7 sched' Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
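A self-contained sketch of the macro checks being asked for (the function name is illustrative):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <string.h>

    /* Map a policy name to its constant, guarding each with its
     * own macro so this still builds on kernels/glibc that
     * predate it (e.g. RHEL-5's 2.6.18 lacks SCHED_IDLE). */
    static int
    schedPolicyFromName(const char *name)
    {
    #ifdef SCHED_BATCH
        if (strcmp(name, "batch") == 0)
            return SCHED_BATCH;
    #endif
    #ifdef SCHED_IDLE
        if (strcmp(name, "idle") == 0)
            return SCHED_IDLE;
    #endif
        return -1;  /* policy not available on this platform */
    }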
-
- 12 Feb 2015, 5 commits
-
Committed by Daniel P. Berrange
While the main storage driver code allows the flag VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend drivers support it. At the very least this can work for plain file based volumes, since we just ftruncate() them to the new size. It does not work with qcow2 volumes, but we can arguably delegate error reporting for that to qemu-img instead of second-guessing it ourselves:

    $ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
    error: Failed to change size of volume 'demo.qcow2' to 2G
    error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
    qemu-img: This image does not support resize

See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802
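For the plain-file case the message mentions, a self-contained sketch (function name illustrative; error handling trimmed):

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Shrinking a raw, file-backed volume is just a truncate
     * to the new capacity in bytes. */
    static int
    shrinkRawVolume(const char *path, off_t capacity)
    {
        int fd = open(path, O_WRONLY);
        int ret = -1;

        if (fd < 0)
            return -1;
        if (ftruncate(fd, capacity) == 0)
            ret = 0;
        close(fd);
        return ret;
    }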
-
Committed by Daniel P. Berrange
qemuDomainHelperGetVcpus attempted to report an error when the vcpupids info was NULL. Unfortunately, earlier code would clamp the value of 'maxinfo' to 0 when nvcpupids was 0, so the error reporting would end up being skipped. This led to 'virsh vcpuinfo <dom>' just returning an empty list instead of giving the user a clear error.
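A sketch of the ordering fix implied (the error code and message are assumptions):

    /* Report the missing vcpu pids *before* clamping maxinfo,
     * so callers get an error rather than an empty list. */
    if (priv->nvcpupids == 0) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("cpu affinity is not supported"));
        return -1;
    }
    if (maxinfo > priv->nvcpupids)
        maxinfo = priv->nvcpupids;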
-
Committed by Daniel P. Berrange
In a previous commit I fixed the incorrect handling of vcpu pids for TCG mode QEMU:

    commit b07f3d82
    Author: Daniel P. Berrange <berrange@redhat.com>
    Date: Thu Dec 18 16:34:39 2014 +0000

        Don't setup fake CPU pids for old QEMU

        The code assumes that def->vcpus == nvcpupids, so when we
        setup fake CPU pids for old QEMU with nvcpupids == 1, we
        cause the later code to read off the end of the array. This
        has fun results like sched_setaffinity(0, ...) which changes
        libvirtd's own CPU affinity, or even better
        sched_setaffinity($RANDOM, ...) which changes the affinity
        of a random OS process.

The intent was that this would merely disable the ability to set per-vCPU affinity. It should still have been possible to set VM level host CPU affinity. Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML parser will internally take this and initialize an entry in the def->cputune.vcpupin array for every vCPU. IOW this is implicitly being treated as:

    <cputune>
      <vcpupin cpuset='0-1' vcpu='0'/>
      <vcpupin cpuset='0-1' vcpu='1'/>
      <vcpupin cpuset='0-1' vcpu='2'/>
      <vcpupin cpuset='0-1' vcpu='3'/>
    </cputune>

Even more fun, the faked cputune elements are hidden from view when querying the live XML, because their cpuset mask is the same as the VM default cpumask. The upshot was that it was impossible to set VM level CPU affinity. To fix this we must update qemuProcessSetVcpuAffinities so that it only reports a fatal error if the per-vCPU cpu mask is different from the VM level cpu mask. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
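A sketch of the relaxed check in qemuProcessSetVcpuAffinities (virBitmapEqual is libvirt's bitmap comparator; surrounding names and the error message are illustrative):

    /* Masks identical to the VM default are the parser's implicit
     * fill-in; only a genuinely different per-vCPU mask is a fatal
     * error when per-vCPU affinity cannot be applied. */
    if (!virBitmapEqual(def->cpumask, vcpupin->cpumask)) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("cpu affinity is not supported"));
        return -1;
    }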
-
Committed by Marek Marczykowski-Górecki
When initializing a libxl_domain_build_info struct with libxl_domain_build_info_init(), VNC is enabled by default. As a result, VMs configured with no graphics still have VNC enabled. This behavior is a regression with respect to the legacy Xen driver. Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
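A hedged sketch of the corresponding fix (the libxl field path is quoted from memory; treat it as an assumption):

    /* Explicitly override libxl's VNC-on default when the domain
     * has no <graphics> element at all. */
    if (def->ngraphics == 0)
        libxl_defbool_set(&b_info->u.hvm.vnc.enable, false);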
-
Committed by Marek Marczykowski-Górecki
Do not silently ignore its value. libxl supports only one address, so refuse multiple IPs. Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
-