- 21 Dec 2014, 2 commits
-
-
Committed by Martin Kletzander
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is for example transient and is being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, the one that was waiting, then fails to get the job, it unrefs the domain object, and because that was the last reference, it causes clearing of the whole domain object. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).

The root cause is that our APIs don't ref the objects they are using and all use the implicit reference that the object has while it is in the domain list. That reference can be removed while the API is waiting for a job. And because the APIs don't do their own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
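For illustration, a minimal sketch of the lookup/teardown pair this commit describes; the signatures are simplified, libvirt-internal headers and the qemu driver's 'driver' global are assumed, and virObjectUnref() returns true while references remain:

    /* Sketch only: real libvirt signatures differ slightly. */

    /* Return the domain locked AND referenced, so the object cannot be
     * freed while this API later waits for a job with the lock dropped. */
    static virDomainObjPtr
    qemuDomObjFromDomain(virDomainPtr domain)
    {
        /* ...FindByUUIDRef locks the object and bumps its refcount */
        return virDomainObjListFindByUUIDRef(driver->domains, domain->uuid);
    }

    /* The one place allowed to check virObjectUnref()'s return value:
     * drop our reference, and unlock only if the object still exists. */
    void
    qemuDomObjEndAPI(virDomainObjPtr vm)
    {
        if (virObjectUnref(vm))     /* true: other references remain */
            virObjectUnlock(vm);
    }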
-
Committed by Martin Kletzander
Commit 1a80b97d, which added the virCgroupHasEmptyTasks() function, forgot that the parameter @cgroup may be NULL and did not check for that.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
- 19 Dec 2014, 3 commits
-
-
Committed by Daniel P. Berrange
Although QMP returns info about vCPU threads in TCG mode, the data it returns is mostly lies. Only the first vCPU has a valid thread_id returned; the thread_id given for the other vCPUs is in fact the main emulator thread. All vCPUs actually run under the same thread in TCG mode. Our vCPU pinning code is not at all able to cope with this, so if you try to set CPU affinity per-vCPU you end up with weird errors:

  error: Failed to start domain instance-00000007
  error: cannot set CPU affinity on process 24365: Invalid argument

Since few people will care about the performance of TCG with strict CPU pinning, let's just disable that for now, so we get a clear error message:

  error: Failed to start domain instance-00000007
  error: Requested operation is not valid: cpu affinity is not supported
-
Committed by Daniel P. Berrange
The code assumes that def->vcpus == nvcpupids, so when we set up fake CPU pids for old QEMU with nvcpupids == 1, we cause the later code to read off the end of the array. This has fun results like sched_setaffinity(0, ...), which changes libvirtd's own CPU affinity, or even better sched_setaffinity($RANDOM, ...), which changes the affinity of a random OS process.
-
Committed by Michal Privoznik
Libvirt BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1175397
QEMU BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1170093

In qemu there are two interesting arguments: 1) -numa to create a guest NUMA node, and 2) -object memory-backend-{ram,file} to tell qemu which memory region on which host NUMA node it should allocate the guest memory from. Combining these two together we can instruct qemu to create a guest NUMA node that is tied to a host NUMA node. And it works just fine. However, depending on the machine type used, there might be some issues during migration when OVMF is enabled (see the QEMU BZ). While this truly is a QEMU bug, we can help avoid it. The problem lies somewhere within the memory backend objects. Having said that, the fix on our side consists of putting those objects on the command line if and only if needed. For instance, while previously we would construct this (in all ways correct) command line:

  -object memory-backend-ram,size=256M,id=ram-node0 \
  -numa node,nodeid=0,cpus=0,memdev=ram-node0

now we create just:

  -numa node,nodeid=0,cpus=0,mem=256

because the backend object is obviously not tied to any specific host NUMA node.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 18 Dec 2014, 4 commits
-
-
Committed by Ján Tomko
Commit ca91ba78 moved qemuSetupDiskCgroup into the qemuDomainPrepareDisk helper, but failed to call it for USB disks. https://bugzilla.redhat.com/show_bug.cgi?id=1175668
-
Committed by Boris Fiuczynski
On a system with 160 CPUs the /proc/cpuinfo size grows beyond the currently set limit of 10KB, causing an internal error. This patch increases the buffer size to 1MB.

Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
-
Committed by Eric Blake
Coverity flagged commit 0282ca45 as introducing a memory leak; in all my refactoring to make capacity probing conditional on whether the image is non-raw, I missed deleting the unconditional probe.

* src/qemu/qemu_driver.c (qemuStorageLimitsRefresh): Drop redundant assignment.

Signed-off-by: Eric Blake <eblake@redhat.com>
- 17 Dec 2014, 22 commits
-
-
Committed by Ján Tomko
-
Committed by John Ferlan
A recent lvm change has resulted in a change to the "default" type of logical volume created when "--virtualsize" or "-V" is supplied on the command line (e.g. when the allocation and capacity values of a to-be-created volume differ). It seems that at the very least the following change adjusts the default type:

  https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=e0164f21

and the following may also have some impact:

  https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=87fc3b71

When using the virsh vol-create-as or vol-create xmlfile commands, the result is that libvirt will now create a "thin logical volume" and a "thin logical volume pool" rather than just a "thin snapshot logical volume". For example the following sequence:

  # lvcreate --name test -L 2M -V 5M lvm_test
    Rounding up size to full physical extent 4.00 MiB
    Rounding up size to full physical extent 8.00 MiB
    Logical volume "test" created.
  # lvs lvm_test
    LV    VG       Attr       LSize Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
    lvol1 lvm_test twi-a-tz-- 4.00m              0.00   0.98
    test  lvm_test Vwi-a-tz-- 8.00m lvol1        0.00

compared to the former code which had the following:

    LV   VG       Attr       LSize Pool Origin         Data%  Move Log Cpy%Sync Convert
    test LVM_Test swi-a-s--- 4.00m      [test_vorigin] 0.00

Since libvirt doesn't know how to parse the thin logical volume and pool, it will fail to find the newly created volume and pool even though they exist in the volume group. It cannot find them, since the command used to find/parse returns a thin volume 'test' with no associated device; for example the output is:

  lvol1##UgUwkp-fTFP-C0rc-ufue-xrYh-dkPr-FGPFPx#lvol1_tdata(0)#thin-pool#1#4194304#4194304#4194304#twi-a-tz--
  test##NcaIoH-4YWJ-QKu3-sJc3-EOcS-goff-cThLIL##thin#0#8388608#4194304#8388608#Vwi-a-tz--

as compared to the former which had the following:

  test#[test_vorigin]#Dt5Of3-4WE6-buvw-CWJ4-XOiz-ywOU-YULYw6#/dev/sda3(1300)#linear#1#4194304#4194304#4194304#swi-a-s---

While it's possible to generate code to handle the new thin lv and pool, this patch adds "--type snapshot" to the lvcreate command libvirt uses in order to, for now, be able to continue to utilize thin snapshots.
-
Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1174569

There's nothing we need to do for shared iSCSI devices in qemuAddSharedHostdev and qemuRemoveSharedHostdev. The iSCSI layer takes care of that for us.

Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Eric Blake
Wire up backing chain recursion. For the first time, it is now possible to get libvirt to expose that qemu tracks read statistics on backing files, as well as report the maximum extent written on a backing file during a block-commit operation.

For a running domain, where one of the two images has a backing file, I see the traditional output:

  $ virsh domstats --block testvm2
  Domain: 'testvm2'
    block.count=2
    block.0.name=vda
    block.0.path=/tmp/wrapper.qcow2
    block.0.rd.reqs=1
    block.0.rd.bytes=512
    block.0.rd.times=28858
    block.0.wr.reqs=0
    block.0.wr.bytes=0
    block.0.wr.times=0
    block.0.fl.reqs=0
    block.0.fl.times=0
    block.0.allocation=0
    block.0.capacity=1310720000
    block.0.physical=200704
    block.1.name=vdb
    block.1.path=/dev/sda7
    block.1.rd.reqs=0
    block.1.rd.bytes=0
    block.1.rd.times=0
    block.1.wr.reqs=0
    block.1.wr.bytes=0
    block.1.wr.times=0
    block.1.fl.reqs=0
    block.1.fl.times=0
    block.1.allocation=0
    block.1.capacity=1310720000

vs. the new output:

  $ virsh domstats --block --backing testvm2
  Domain: 'testvm2'
    block.count=3
    block.0.name=vda
    block.0.path=/tmp/wrapper.qcow2
    block.0.rd.reqs=1
    block.0.rd.bytes=512
    block.0.rd.times=28858
    block.0.wr.reqs=0
    block.0.wr.bytes=0
    block.0.wr.times=0
    block.0.fl.reqs=0
    block.0.fl.times=0
    block.0.allocation=0
    block.0.capacity=1310720000
    block.0.physical=200704
    block.1.name=vda
    block.1.path=/dev/sda6
    block.1.backingIndex=1
    block.1.rd.reqs=0
    block.1.rd.bytes=0
    block.1.rd.times=0
    block.1.wr.reqs=0
    block.1.wr.bytes=0
    block.1.wr.times=0
    block.1.fl.reqs=0
    block.1.fl.times=0
    block.1.allocation=327680
    block.1.capacity=786432000
    block.2.name=vdb
    block.2.path=/dev/sda7
    block.2.rd.reqs=0
    block.2.rd.bytes=0
    block.2.rd.times=0
    block.2.wr.reqs=0
    block.2.wr.bytes=0
    block.2.wr.times=0
    block.2.fl.reqs=0
    block.2.fl.times=0
    block.2.allocation=0
    block.2.capacity=1310720000

I may later do a patch that trims the output to avoid 0 stats, particularly for backing files (which are more likely to have 0 stats, at least for write statistics when no block-commit is performed). Also, I still plan to expose physical size information (qemu doesn't expose it yet, so it requires a stat, and for block devices, a further open/seek operation). But this patch is good enough without worrying about that yet.

* src/qemu/qemu_driver.c (QEMU_DOMAIN_STATS_BACKING): New internal enum bit.
(qemuConnectGetAllDomainStats): Recognize new user flag, and pass details to...
(qemuDomainGetStatsBlock): ...here, where we can do longer recursion.
(qemuDomainGetStatsOneBlock): Output new field.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
In order to report stats on backing chains, we need to separate the output of stats for one block from how we traverse blocks.

* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Split...
(qemuDomainGetStatsOneBlock): ...into new helper.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
This patch introduces access to allocation information about a backing chain of a live domain. While querying storage volumes for read-only disks could provide some of the details, we do NOT want to read() a file while qemu is writing it. Also, there is one case where we have to rely on qemu: when doing a block commit into a backing file, where that file is stored in qcow2 format on a host block device, we want to know the current highest write offset into that image, in order to know if the disk must be resized larger. qemu-img does not (currently) show this information, and none of the earlier block APIs were extensible enough to expose it. But virDomainListGetStats is perfect for the job!

We don't need a new group of statistics, as the existing block group is sufficient. On the other hand, as existing libvirt releases already report a 1:1 mapping of block.count to <disk> devices, changing the array size could confuse older clients; and even with newer clients, the time and memory taken to report additional statistics is not always necessary (backing files are generally read-only except for block-commit, so while read statistics may change, sizing statistics will not). So the choice here is to add a new flag that only newer callers will pass, when they are prepared for the additional information. This patch introduces the new API, but it will take more patches to get it implemented for qemu.

* include/libvirt/libvirt-domain.h (VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING): New flag.
* src/libvirt-domain.c (virConnectGetAllDomainStats): Document it, and add a new field when it is in use.
* tools/virsh-domain-monitor.c (cmdDomstats): Use new flag.
* tools/virsh.pod (domstats): Document it.

Signed-off-by: Eric Blake <eblake@redhat.com>
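For illustration (not part of the patch), a minimal client exercising the new flag could look like this; the connection URI and the error handling are simplified:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
        virDomainStatsRecordPtr *stats = NULL;
        int nstats, i, j;

        if (!conn)
            return 1;

        /* Request block stats only, but include backing-chain images too */
        nstats = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                             &stats,
                                             VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING);

        for (i = 0; i < nstats; i++)
            for (j = 0; j < stats[i]->nparams; j++)
                printf("%s\n", stats[i]->params[j].field); /* e.g. block.1.backingIndex */

        if (nstats >= 0)
            virDomainStatsRecordListFree(stats);
        virConnectClose(conn);
        return 0;
    }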
-
Committed by Eric Blake
A coming patch will make it optionally possible to list backing chain block stats; in this mode of operation, block.count is no longer the number of <disk> devices in the domain, but the number of blocks in the array being reported. We still want block.count listed first, but rather than iterate the tree twice (once to count, and once to list stats), it's easier to just touch things up after the fact.

* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Compute count after the fact.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
The prior refactoring can now be put to use. With the same domain as the earlier commit 7b499262 (one qcow2 disk and an empty cdrom drive):

  $ virsh domstats --block foo
  Domain: 'foo'
    block.count=2
    block.0.name=hda
    block.0.path=/var/lib/libvirt/images/foo.qcow2
    block.0.allocation=1309614080
    block.0.capacity=42949672960
    block.0.physical=1309671424
    block.1.name=hdc

* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Use qemuStorageLimitsRefresh to report offline statistics.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
Create a helper function that can be reused for gathering block info from virDomainListGetStats.

* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Split guts...
(qemuStorageLimitsRefresh): ...into new helper function.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
The documentation for virDomainBlockInfo was confusing: it stated that 'physical' was the size of the container, then gave an example of it being the amount of storage used by a sparse file (that is, for a sparse raw image on a regular file, the wording implied capacity==physical, while allocation was smaller; but the example instead claimed physical==allocation). Since we use 'physical' for the last offset of a block device, we should do likewise for regular files.

Furthermore, the example claimed that for a qcow2 regular file, allocation==physical. At the time the code was first written, this was true (qcow2 files were allocated sequentially, and were never sparse, so the last sector written happened to also match the disk space occupied); but modern qemu does much better and can punch holes for a qcow2 with allocation < physical.

Basically, after this patch, the three fields are now reliably mapped as:

  'capacity'   - how much storage the guest can see (equal to physical for raw images, determined by image metadata otherwise)
  'allocation' - how much storage the image occupies (similar to what 'du' would report)
  'physical'   - the last offset of the image (similar to what 'ls' would report)

'capacity' can be larger than 'physical' (such as for a qcow2 image that does not vary much from a backing file) or smaller (such as for a qcow2 file with lots of internal snapshots). Likewise, 'allocation' can be (slightly) larger than 'physical' (such as counting the tail of cluster allocations required to round a file size up to filesystem granularity) or smaller (for a sparse file). A block-resize operation changes capacity (which, for raw images, also changes physical); many non-raw images automatically grow physical and allocation as necessary when starting with an allocation smaller than capacity; and even when capacity and physical stay unchanged, allocation can change when converting sectors from holes to data or back.

Note that this does not change semantics for qcow2 images stored on block devices; there, we still rely on qemu to report the highest written extent for allocation. So using this API to track when to extend a block device because a qcow2 image is about to exceed a threshold will not see any changes.

Also, note that virStorageVolInfo is unfortunately limited to just 'capacity' and 'allocation' (we can't expand it to add 'physical', although we can expand the XML to add it there); historically, that struct's 'allocation' value has reported file size for qcow2 files (what this patch terms 'physical' for a domain block device), but disk usage for raw files (what this patch terms 'allocation'). So follow-up patches will be needed to make storage volumes report the same allocation values and get at physical values, where those differ.

* include/libvirt/libvirt-domain.h (_virDomainBlockInfo): Tweak documentation to match saner definition.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): For regular files, physical size is capacity, not allocation.

Signed-off-by: Eric Blake <eblake@redhat.com>
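A short sketch of reading the three fields through the public API; the disk target "vda" below is just a placeholder:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Print the three sizes for one disk of a domain, e.g.
     * print_block_info(dom, "vda"). */
    static void
    print_block_info(virDomainPtr dom, const char *disk)
    {
        virDomainBlockInfo info;

        if (virDomainGetBlockInfo(dom, disk, &info, 0) < 0)
            return;

        /* capacity: what the guest sees; allocation: host storage used
         * ('du'-like); physical: last offset of the image ('ls'-like) */
        printf("%s: capacity=%llu allocation=%llu physical=%llu\n",
               disk, info.capacity, info.allocation, info.physical);
    }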
-
Committed by Eric Blake
Ultimately, we want to avoid read()ing a file while qemu is running. We still have to open() block devices to determine their physical size, but that is safer. This patch rearranges code to group together all code that reads the image, to make it easier for later patches to skip the metadata collection when possible.

* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Check for empty disk up front. Place metadata reading next to use.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
When requested in a later patch, the QMP command results are now examined recursively. As qemu_driver will eventually have to read items out of the hash table as stored by this patch, the computation of the backing alias string is done in a shared location.

* src/qemu/qemu_domain.h (qemuDomainStorageAlias): New prototype.
* src/qemu/qemu_domain.c (qemuDomainStorageAlias): Implement it.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetOneBlockStatsInfo) (qemuMonitorJSONBlockStatsUpdateCapacityOne): Perform recursion.
(qemuMonitorJSONGetAllBlockStatsInfo) (qemuMonitorJSONBlockStatsUpdateCapacity): Update callers.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
A future patch will allow recursion into backing chains when collecting block stats. This patch should not change behavior, but merely moves out the common code that will be reused once recursion is enabled, and adds the parameter that will turn on recursion.

* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo) (qemuMonitorBlockStatsUpdateCapacity): Add recursion parameter, although it is ignored for now.
* src/qemu/qemu_monitor.c (qemuMonitorGetAllBlockStatsInfo) (qemuMonitorBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONGetAllBlockStatsInfo) (qemuMonitorJSONBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetAllBlockStatsInfo) (qemuMonitorJSONBlockStatsUpdateCapacity): Add parameter, and split...
(qemuMonitorJSONGetOneBlockStatsInfo) (qemuMonitorJSONBlockStatsUpdateCapacityOne): ...into helpers.
(qemuMonitorJSONGetBlockStatsInfo): Update caller.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Update caller.
* src/qemu/qemu_migration.c (qemuMigrationCookieAddNBD): Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
Right now, grabbing blockinfo always calls stat on the disk, then opens the image to determine the capacity, using a throw-away virStorageSourcePtr. This has a couple of drawbacks:

1. We are calling stat and opening a file on every invocation of the API. However, there are cases where the stats should NOT be changing between successive calls (if a domain is running, no one should be changing the physical size of a block device or raw image behind our backs; capacity of read-only files should not be changing; and we are the gateway to the block-resize command to know when the capacity of read-write files should be changing). True, we still have to use stat in some cases (a sparse raw file changes allocation if it is read-write and the amount of holes is changing, and a read-write qcow2 image stored in a file changes physical size if it was not fully pre-allocated). But for read-only images, even this should be something we can remember from the previous time, rather than repeating every call.

2. We want to enhance the power of virDomainListGetStats, by sharing code. But we already have a virStorageSourcePtr for each disk, and it would be easier to reuse the common structure than to have to worry about the one-off virDomainBlockInfoPtr.

While this patch does not optimize reuse of information in point 1, it does get us closer to being able to do so, by updating a structure that survives between consecutive calls.

* src/util/virstoragefile.h (_virStorageSource): Add physical, to mirror virDomainBlockInfo; rearrange fields to match public struct.
(virStorageSourceCopy): Copy the new field.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Store into storage source, then copy to block info.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
In order for a future patch to virDomainListGetStats to reuse some code for determining disk usage of offline domains, we need to make it easier to pull out part of the guts of grabbing blockinfo. The current implementation grabs a job fairly late in the game, while getstats will already own a job; reordering things so that the job is always grabbed up front in both functions will make it easier to pull out the common code. This patch results in grabbing a job in cases where one was not previously needed, but as it is a query job, it should not be noticeably slower.

This patch touches the same code as the fix for CVE-2014-6458 (commit b7992595); in that patch, we avoided hotplug changing a disk reference during the time of obtaining a monitor lock by copying all data we needed and no longer referencing disk; this patch goes the other way and ensures that by holding the job, the disk cannot be changed so we no longer need to worry about the disk being invalidated across the monitor lock.

* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Rearrange job control to be outside of disk information.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
* src/util/virfile.c (safezero_mmap): Fix missing semicolon.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Martin Kletzander
When any of the functions modified in commit 214c687b took the false branch, the function itself used none of its parameters, resulting in an "unused parameter" error. Rewriting these functions to the stubs we use elsewhere should fix the problem.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
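For illustration, the stub pattern in question looks roughly like this; WITH_FOO, virDoThing and really_do_thing are made-up names, while ATTRIBUTE_UNUSED and virReportUnsupportedError() follow libvirt's conventions of the era:

    /* Sketch only: WITH_FOO, virDoThing and really_do_thing are hypothetical. */
    #if WITH_FOO
    int
    virDoThing(int fd, const char *path)
    {
        return really_do_thing(fd, path);
    }
    #else
    /* Stub: mark every parameter unused so -Wunused-parameter stays quiet */
    int
    virDoThing(int fd ATTRIBUTE_UNUSED,
               const char *path ATTRIBUTE_UNUSED)
    {
        virReportUnsupportedError();
        return -1;
    }
    #endif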
-
Committed by Martin Kletzander
Commit e3435caf added cleanup code to qemuDomainSetVcpusFlags() that was not supposed to reset the error. The usual procedure was followed, saving the error to a temporary variable, but the saved copy was never freed, so it leaked.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
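The save/restore idiom in question looks roughly like this (a sketch; cleanup_func() is a placeholder for whatever cleanup may overwrite the thread's last error, while the vir* error calls are libvirt's real ones):

    static void
    cleanup_preserving_error(void)
    {
        virErrorPtr orig_err = virSaveLastError();  /* copy the current error */

        cleanup_func();  /* may report its own, less useful, error */

        if (orig_err) {
            virSetError(orig_err);   /* restore the original error... */
            virFreeError(orig_err);  /* ...and free the copy: the missing step */
        }
    }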
-
Committed by Martin Kletzander
Commit af2a1f05 tried clearly separating each condition in qemuRestoreCgroupState() for the sake of readability; however, somehow one condition body went missing. That meant the body of the next condition got executed only if both of them were true, which is impossible, thus resulting in dead code and a logic error.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
In commit d2632d60 we agreed that we want the parsed uid to properly overflow, but only to -1; however, the value was read into a long and then wrapped into uid_t. That meant it failed on 32-bit systems.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by John Ferlan
Currently the virStorageFileResize() function uses build conditionals to choose either posix_fallocate() or syscall(SYS_fallocate), with no fallback, in order to preallocate space in the newly resized file. Since the safezero code has a similar set of conditionals, modify the resize and safezero code in order to allow the resize logic to make use of safezero and unify the look and feel of the code paths.

Add a new boolean (resize) to safezero() to make the optional decision whether to try syscall(SYS_fallocate) if posix_fallocate() fails because HAVE_POSIX_FALLOCATE is not defined (e.g. returns -1 with errno == 0). Create a local safezero_sys_fallocate in order to handle the resize code paths that support that. If not present, then set errno = ENOSYS in order to allow the caller to handle the failure scenarios.

Signed-off-by: John Ferlan <jferlan@redhat.com>
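A condensed sketch of the fallback decision described here (Linux-specific; zero_range() and its exact shape are illustrative, not the libvirt code):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/syscall.h>

    /* Preallocate [off, off+len); fall back to the raw fallocate syscall
     * only on resize paths when posix_fallocate() was absent at build time. */
    static int
    zero_range(int fd, off_t off, off_t len, bool resize)
    {
    #ifdef HAVE_POSIX_FALLOCATE
        int rc = posix_fallocate(fd, off, len);
        if (rc == 0)
            return 0;
        errno = rc;                 /* posix_fallocate() returns the error code */
        (void)resize;               /* posix path present: no syscall fallback */
        return -1;
    #elif defined(SYS_fallocate)
        if (resize && syscall(SYS_fallocate, fd, 0, off, len) == 0)
            return 0;
        if (!resize)
            errno = 0;              /* "not attempted": caller may fall back
                                       to the slower mmap/write methods */
        return -1;
    #else
        (void)fd; (void)off; (void)len; (void)resize;
        errno = ENOSYS;             /* nothing available; caller handles it */
        return -1;
    #endif
    }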
-
Committed by John Ferlan
Currently, build conditionals decide which of two safezero() functions should be built: either the posix_fallocate() one, or mmap() with a fallback to a slower safewrite() algorithm, in order to preallocate space in a raw file. This patch refactors safezero() to utilize static functions for either posix_fallocate or mmap/safewrite. The build conditionals still exist, but now cover only shorter sections of code.

The posix_fallocate path will make use of the ret/errno setting to contain the logic for safezero to decide whether it needs to fall back to other algorithms. A return of -1 with errno unchanged will indicate the conditional is not present; otherwise, a return of -1 with errno changed indicates the call was made and it failed (no functional difference to the current algorithm).

The mmap/safewrite option changes only slightly to handle the ftruncate failure for mmap. That is, previously if the ftruncate failed, there was no fallback to the slow safewrite option.

Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 16 Dec 2014, 9 commits
-
-
Committed by Martin Kletzander
Currently, when there is an API that's blocking with a locked domain and a second API that's waiting in virDomainObjListFindByUUID() for the domain lock (with the domain list locked), no other API can be executed on any domain on the whole hypervisor, because all would wait for the domain list to be locked. This patch adds a new optional approach in which the domain is only ref'd (its reference counter is incremented) instead of being locked, and is locked *after* the list itself is unlocked. We might consider only ref'ing the domain in the future and leaving locking to particular APIs, but that's not tonight's fairy tale.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
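A self-contained toy model of the ref-then-lock ordering (every name here is illustrative; libvirt's real lists use hash tables and virObject reference counting):

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        int refs;
        /* ... domain payload ... */
    } Obj;

    typedef struct {
        pthread_mutex_t lock;
        Obj *entry;                 /* stand-in for the real hash table */
    } List;

    /* Return the object locked AND referenced; crucially, the list lock
     * is never held while waiting on the (possibly long-held) object lock. */
    static Obj *
    list_find_ref(List *list)
    {
        Obj *obj;

        pthread_mutex_lock(&list->lock);    /* held only briefly */
        obj = list->entry;                  /* real code: UUID hash lookup */
        if (obj)
            obj->refs++;                    /* ref while under the list lock */
        pthread_mutex_unlock(&list->lock);  /* other lookups proceed now */

        if (obj)
            pthread_mutex_lock(&obj->lock); /* may block, without stalling others */
        return obj;
    }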
-
Committed by Martin Kletzander
Volume and pool formatting functions took different approaches to unspecified uids/gids. When unknown, the value is always parsed as -1, but one of the functions formatted it as unsigned int (wrong) and one as int (better). Due to that, two of our XML files from tests cannot be parsed on 32-bit machines. The RNG schema needs to be modified as well, but because both storagepool.rng and storagevol.rng need the same schema for the permission element, save some space by moving it to storagecommon.rng.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
When hot-plugging a VCPU into the guest, kvm needs to allocate some data from the DMA zone, which might be in a memory node that's not allowed in cpuset.mems. This is basically the same problem as with starting the domain, due to which commit 7e72ac78 exists. This patch just extends it to hotplugging as well.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1161540

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
Instead of setting the value of cpuset.mems once when the domain starts and then re-calculating the value every time we need to change the child cgroup values, leave the cgroup alone and rather set the child data every time a new cgroup is created. We don't leave any task in the parent group anyway. This will ease both current and future code.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
Thanks to that, we don't need to drag the pointer everywhere, and future code will get cleaner.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
That function tries its best to create a bitmap of host NUMA nodes.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
That function helps check whether there's a task in that cgroup.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Daniel P. Berrange
In systemd >= 218, the udev_set_log_fn method has been marked deprecated and turned into a no-op. Nothing in the udev client library will print to stderr by default anymore, so we can just stop installing a logging hook for new enough udev.
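A sketch of the resulting conditional; UDEV_LOGGING_DEPRECATED is a made-up configure-time guard (the real check keys off the detected libudev version), while udev_set_log_fn()'s signature is libudev's real one:

    #include <stdarg.h>
    #include <libudev.h>

    static void
    udev_log_cb(struct udev *udev, int priority,
                const char *file, int line, const char *fn,
                const char *format, va_list args)
    {
        /* forward to the daemon's own logging machinery */
    }

    static void
    setup_udev_logging(struct udev *udev)
    {
    #ifndef UDEV_LOGGING_DEPRECATED  /* hypothetical guard for udev < 218 */
        /* old udev prints to stderr unless a hook is installed */
        udev_set_log_fn(udev, udev_log_cb);
    #else
        (void)udev;                  /* >= 218: the hook is a no-op; skip it */
    #endif
    }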
-