- 15 December 2016, 5 commits
-
Committed by Michal Privoznik
This part of the code, currently used by LXC, will be reused elsewhere, so move it into a generic function. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
Committed by Joao Martins
libvirt libxl picks its own default for the NIC to use. libxlMakeNic is responsible for this, and on boot it picks LIBXL_NIC_TYPE_VIF_IOEMU for HVM domains so that it accommodates both PV and emulated NICs. A well-behaved guest will then select the PV NIC at boot and unplug the emulated device. Now, on HVM, attaching an interface would pick the same default, LIBXL_NIC_TYPE_VIF_IOEMU, which as a result would fail the attach (see xen commit 32e9d0f, "libxl: nic type defaults to vif in hotplug for hvm guest"). Xen doesn't yet support hotplug of emulated devices, but we don't want to rule out that case either, since it might gain support in the future. Hence we simply reverse the defaults when attaching an interface, which lets libvirt prefer the PV NIC first without requiring "model='netfront'", following the same pattern as the commit above. Also, to avoid ruling out the emulated case, we set LIBXL_NIC_TYPE_VIF_IOEMU when a model type other than 'netfront' is set. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
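A minimal sketch of the default-selection logic described above (the hvm/attach condition variables are illustrative assumptions, not the actual libxlMakeNic() code; LIBXL_NIC_TYPE_VIF and LIBXL_NIC_TYPE_VIF_IOEMU are real libxl enum values):

    /* Illustrative sketch only. */
    if (hvm) {
        if (def->model && STRNEQ(def->model, "netfront"))
            nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU; /* emulated model requested */
        else if (attach)
            nic->nictype = LIBXL_NIC_TYPE_VIF;       /* hotplug: prefer PV */
        else
            nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU; /* boot: PV + emulated pair */
    } else {
        nic->nictype = LIBXL_NIC_TYPE_VIF;           /* PV guest */
    }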
-
Committed by Cédric Bosdonnat
If libxl supports the QED disk format, then expose that capability to the user.
-
Committed by Cédric Bosdonnat
Without a default: case in the switch statements in xenParseXLDisk(), the build would fail whenever a new disk backend or image format is added to libxl, as happened in this build error: http://logs.test-lab.xenproject.org/osstest/logs/103325/build-amd64-libvirt/5.ts-libvirt-build.log
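A minimal illustration of the pattern (not the actual xenParseXLDisk() code; LIBXL_DISK_FORMAT_RAW and LIBXL_DISK_FORMAT_QCOW2 are real libxl enum values): with enum-switch warnings treated as errors, a switch lacking a default: arm breaks the build as soon as libxl grows a new enum value.

    switch (format) {
    case LIBXL_DISK_FORMAT_RAW:
        /* handle raw */
        break;
    case LIBXL_DISK_FORMAT_QCOW2:
        /* handle qcow2 */
        break;
    default: /* keeps the build working when libxl adds new formats */
        break;
    }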
-
Committed by Daniel P. Berrange
The virDomainSendProcessSignal method says its flags values come from virDomainProcessSignalFlag, but this enum has never existed. No flags are needed for this method. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 14 December 2016, 10 commits
-
Committed by Jiri Denemark
Almost none of our virJSONValue*Get* functions accept const virJSONValue pointers, and it wouldn't even make sense, since we sometimes modify what we get. And because there is no reason to prevent callers of virJSONValueObjectForeachKeyValue from modifying the values they get in each iteration, we can just stop doing so. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
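A hedged sketch of the iterator shape this implies (the exact typedef in libvirt may differ):

    /* Assumed shape of the callback after this change: the value is passed
     * as a mutable pointer, so each iteration may modify it in place. */
    typedef int (*virJSONValueObjectIteratorFunc)(const char *key,
                                                  virJSONValuePtr value,
                                                  void *opaque);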
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Daniel P. Berrange
Using a variable named 'stat' clashes with the system function stat(), causing compiler warnings on some platforms:

cc1: warnings being treated as errors
../../src/qemu/qemu_monitor_text.c: In function 'parseMemoryStat':
../../src/qemu/qemu_monitor_text.c:604: error: declaration of 'stat' shadows a global declaration [-Wshadow]
/usr/include/sys/stat.h:455: error: shadowed declaration is here [-Wshadow]

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
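A minimal standalone reproduction of the warning and the usual fix (renaming the local variable), independent of the libvirt code:

    #include <sys/stat.h>

    /* With -Wshadow, the local below triggers "declaration of 'stat'
     * shadows a global declaration", since it hides stat(2). */
    static void parse_bad(void)
    {
        unsigned long long stat = 0; /* shadows the libc symbol */
        (void) stat;
    }

    /* The fix: pick a name that does not collide with libc. */
    static void parse_good(void)
    {
        unsigned long long statval = 0;
        (void) statval;
    }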
-
Committed by Jiri Denemark
This revealed bugs in the RNG schema for /network/dns/srv. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Peter Krempa
The value of 'log_outputs' would be read into the variable meant for log_filters.
-
Committed by Peter Krempa
The value of 'log_outputs' would be read into the variable meant for log_filters.
-
Committed by Viktor Mihajlovski
If the cpuset cgroup controller is disabled in /etc/libvirt/qemu.conf, QEMU virtual machines can in principle use all host CPUs, even ones that are hot plugged, as long as they have no explicit CPU affinity defined. However, there's libvirt code supposed to handle the situation where the libvirt daemon itself is not using all host CPUs: the code in qemuProcessInitCpuAffinity attempts to set an affinity mask including all defined host CPUs. Unfortunately, the resulting affinity mask for the process will not contain the offline CPUs; see also the sched_setaffinity(2) man page. That means that even if those host CPUs come online again, they won't be used by the QEMU process anymore, and the same is true for newly hot plugged CPUs. So we are effectively preventing QEMU from using all processors instead of enabling it to do so. It only makes sense to set the QEMU process affinity if we're able to actually grow the set of usable CPUs, i.e. if the process affinity is a subset of the online host CPUs. There's still the chance that the deliberately chosen libvirtd affinity matches the online host CPU mask by accident; in this case the behavior remains as before (CPUs offline while setting the affinity will not be used if they show up later on). Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com> Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
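A hedged sketch of the "can we actually grow the CPU set?" test, using plain glibc calls rather than libvirt's virBitmap helpers (the count comparison is an approximation; a faithful subset test would compare against the online-CPU bitmap from /sys/devices/system/cpu/online):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdbool.h>
    #include <unistd.h>

    /* Return true if widening the affinity would gain usable CPUs, i.e.
     * the current affinity covers fewer CPUs than are online. */
    static bool affinity_smaller_than_online(void)
    {
        cpu_set_t cur;
        long n_online = sysconf(_SC_NPROCESSORS_ONLN);

        if (sched_getaffinity(0, sizeof(cur), &cur) < 0)
            return false; /* on error, leave the affinity untouched */
        return CPU_COUNT(&cur) < n_online;
    }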
-
Committed by Viktor Mihajlovski
The functions to retrieve online and present host CPU information are only supported on Linux for the time being, which leads to runtime errors if these functions are used on other platforms. To avoid that, higher-level code using these functions must replicate the conditional compilation, which is error prone (and, plainly spoken, ugly). Add a function, virHostCPUHasBitmap, that can be used to check for host CPU bitmap support. NB: there are other functions, including the host CPU count, that are not supported on all platforms, but they are too essential to be bypassed. Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
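A hedged sketch of the shape such a capability check plausibly takes (the actual virHostCPUHasBitmap() in libvirt may differ):

    #include <stdbool.h>

    bool
    virHostCPUHasBitmap(void)
    {
    #ifdef __linux__
        return true;  /* online/present CPU bitmaps are Linux-only for now */
    #else
        return false; /* callers skip bitmap-based logic on other platforms */
    #endif
    }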
-
Committed by Jiri Denemark
virQEMUCapsFindTarget is supposed to find an alternative QEMU binary if qemu-system-$GUEST_ARCH doesn't exist. The alternative is to use the host architecture when it is compatible with $GUEST_ARCH. But special treatment has to be applied for ppc64le, since the QEMU binary is always called qemu-system-ppc64. Broken by me in v2.2.0-171-gf2e71550. https://bugzilla.redhat.com/show_bug.cgi?id=1403745 Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
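A hedged sketch of the arch canonicalization involved (VIR_ARCH_PPC64 and VIR_ARCH_PPC64LE are real libvirt arch constants; the surrounding shape is assumed):

    /* ppc64le guests are served by the qemu-system-ppc64 binary, so fold
     * the arch before building the binary name to probe for. */
    virArch target = guestarch;
    if (target == VIR_ARCH_PPC64LE)
        target = VIR_ARCH_PPC64;
    /* then look for qemu-system-<virArchToString(target)> */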
-
Committed by John Ferlan
It seems commit id '0257d06b' forgot to update formatstorage when changing the docs to describe allowing zfs as a pool type and, furthermore, to note that the pool's target path element will be generated rather than read. Similarly, commit id 'efab27af' neglected to indicate that the target path for a logical pool will now be generated by libvirt. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 13 December 2016, 17 commits
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Let's make sure all examples fit into their grey boxes. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Jiri Denemark
Almost all XML examples use <tag .../> rather than <tag ...></tag> when the element is empty. Let's remove the two instances of the latter. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Nitesh Konkar
This patch adds support and documentation for the branch_misses perf event. Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
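For illustration, enabling the new event in the domain XML presumably follows the existing perf-event pattern (a hedged example, not taken from this patch):

    <perf>
      <event name='branch_misses' enabled='yes'/>
    </perf>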
-
Committed by Nikolay Shirokovskiy
qemuAgentNotifyEvent accesses the monitor structure and is called on qemu reset/shutdown/suspend events under the domain lock. Other monitor functions, on the other hand, take the monitor lock and don't hold the domain lock. Thus it is possible to have risky simultaneous access to the structure from two threads. Let's take the monitor lock here to make the access exclusive.
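A self-contained sketch of the locking pattern being fixed (generic pthreads stand-ins, not the libvirt code):

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock; /* the per-agent "monitor" lock */
        int await_event;      /* illustrative stand-in for agent state */
    } Agent;

    /* Before the fix, callers held only the (different) domain lock while
     * every other agent function took agent->lock, so two threads could
     * touch the struct at once. The fix: take the agent's own lock. */
    static void agent_notify_event(Agent *a, int event)
    {
        pthread_mutex_lock(&a->lock);
        a->await_event = event;
        pthread_mutex_unlock(&a->lock);
    }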
-
Committed by Nikolay Shirokovskiy
The current call to qemuAgentGetFSInfo in qemuDomainGetFSInfo is unsafe: the domain lock is dropped while we keep using vm->def. Let's make a copy of the def to fix that.
-
Committed by Nikolay Shirokovskiy
In the case of 0 filesystems, *info is not set, while according to the virDomainGetFSInfo contract the user should call free on it even when 0 filesystems are returned. Thus we need to set it properly; NULL is enough, since free happily accepts NULL.
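A hedged sketch of the contract fix (names are illustrative):

    #include <stdlib.h>

    typedef struct { char *mountpoint; } FSInfo;

    /* Always set the out-parameter, so the caller can unconditionally
     * free() it; free(NULL) is a no-op. */
    static int get_fs_info(FSInfo ***info)
    {
        *info = NULL; /* previously left untouched when 0 were found */
        /* allocate and fill *info only when filesystems exist */
        return 0;     /* number of filesystems found */
    }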
-
Committed by John Ferlan
-
Committed by John Ferlan
The libvirt-domain.h documentation indicates that, for a qcow2 file in a filesystem being used as a backing store, the allocation should report the disk space occupied by the file; however, commit id '15fa84ac' altered the code to trust wr_highest_offset whenever wr_highest_offset_valid was set. As it turns out, this leads to indeterminate results. For an active domain, when qemu hasn't yet had a need to find the wr_highest_offset value, qemu will report 0 even though qemu-img reports the proper disk size. This causes the following XML:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/test-1g.qcow2'/>

to be reported as:

Capacity: 1073741824
Allocation: 0
Physical: 1074139136

with qemu-img indicating:

image: /path/to/test-1g.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G

Once the backing source file is opened by the guest, wr_highest_offset is updated, but only to the high-water mark and not to the size of the file. This patch adjusts the logic to check for a file-backed qcow2 image and enforce setting the allocation to the returned 'physical' value, which is the 'actual-size' value from a 'query-block' operation. NB: the other consumer of the wr_highest_offset output (GetAllDomainStats) has a contract that says 'allocation' is the offset of the highest written sector, so it needs no adjustment. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Instead of having duplicated code in qemuStorageLimitsRefresh and virStorageBackendUpdateVolTargetInfo to get capacity-specific data about the storage backing source or volume, create a common API to handle the details for both. As a side effect, virStorageFileProbeFormatFromBuf returns to being a local/static helper of virstoragefile.c. For the QEMU code: if the probe is done, the format is saved so as to avoid future such probes. For the storage backend code: there is no need to deal with the probe, since we cannot call the new API if target->format == NONE. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Instead of having duplicated code in qemuStorageLimitsRefresh and virStorageBackendUpdateVolTargetInfoFD to fill in the storage backing source or volume allocation, capacity, and physical values, create a common API that handles the details for both. The common API fills in "default" capacity values as well, although those will more than likely be overridden by subsequent code. Having just one place to determine what the values should be makes things more consistent. For the QEMU code, the data filled in will be for inactive domains in the GetBlockInfo and DomainGetStatsOneBlock APIs. For the storage backend code, the data will be filled in during volume updates. Signed-off-by: John Ferlan <jferlan@redhat.com>
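A hedged sketch of the kind of "default" fill-in such a helper performs (helper and field names are assumptions, not libvirt's actual code):

    #include <sys/stat.h>

    static void fill_target_info_from_stat(const struct stat *sb,
                                           unsigned long long *allocation,
                                           unsigned long long *capacity,
                                           unsigned long long *physical)
    {
        if (S_ISREG(sb->st_mode)) {
            *allocation = (unsigned long long)sb->st_blocks * 512;
            *capacity = sb->st_size; /* may be overridden by a probe later */
        } else {
            /* block devices: size comes from lseek(fd, 0, SEEK_END) instead */
            *allocation = 0;
            *capacity = 0;
        }
        *physical = *capacity;       /* default; often refined afterwards */
    }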
-
Committed by John Ferlan
Commit id '8dc27259' introduced virStorageSourceUpdateBlockPhysicalSize in order to retrieve the physical size of a block-backed source device for an active domain, since commit id '15fa84ac' changed the code to use the qemuMonitorGetAllBlockStatsInfo and qemuMonitorBlockStatsUpdateCapacity APIs to (essentially) retrieve the "actual-size" from a 'query-block' operation for the source device. However, the code was only made functional for a BLOCK backing type, and it neglected to use qemuOpenFile, using a plain open instead. After the open, an lseek on the block device would find the end of the device and set the physical value, close the fd, and return. Since the code returned 0 immediately if the source device wasn't BLOCK backed, the physical value would be displayed incorrectly, such as the following domblkinfo output for a file-backed source device:

Capacity: 1073741824
Allocation: 0
Physical: 0

This patch modifies the algorithm to get the physical size for other backing types, and makes use of the qemuDomainStorageOpenStat helper to open/stat the source file depending on its type. qemuDomainGetStatsOneBlock will no longer inhibit printing errors, but it will still ignore them, leaving the physical value set to 0. Signed-off-by: John Ferlan <jferlan@redhat.com>
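A hedged sketch of determining physical size for both file- and block-backed sources (plain POSIX calls; the function name and shape are illustrative, and libvirt would go through qemuOpenFile rather than open):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static long long physical_size(const char *path)
    {
        long long size = -1;
        struct stat sb;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fstat(fd, &sb) == 0)
            size = S_ISBLK(sb.st_mode)
                       ? (long long)lseek(fd, 0, SEEK_END) /* block device */
                       : (long long)sb.st_size;            /* regular file */
        close(fd);
        return size;
    }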
-
Committed by John Ferlan
Currently just a shim to call virStorageSourceUpdateBlockPhysicalSize. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Split out the opening of the file and the fetch of the stat buffer into a helper, qemuDomainStorageOpenStat, which handles opening either local or remote storage. Additionally, split out the corresponding cleanup into a separate helper, qemuDomainStorageCloseStat, which either closes the file or calls virStorageFileDeinit. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
Originally added by commit id '89646e69', prior to commit ids '15fa84ac' and '71d2c172', which ensured that qemuStorageLimitsRefresh was only called for inactive domains. Adjust the comment describing the need for the FIXME and move all the text into the function description. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 12 December 2016, 2 commits
- 11 December 2016, 2 commits
-
Committed by John Ferlan
Looks like <timestamps> and <encryption> were put in the wrong place: they are not <pool> elements, but rather <volume> elements.
-
Committed by John Ferlan
Lost during the merge of commit ids '8546adf8' and '585ad00b'.
-
- 09 December 2016, 4 commits
-
Committed by Peter Krempa
Commit 1ec22be5 added code that detects the maximum CPU count according to the domain capabilities. That code fell back to the old API only if the new API was not supported; if the new API failed for any other reason, the command would fail. There's no point in not trying the old API in such a case. https://bugzilla.redhat.com/show_bug.cgi?id=1402690
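A hedged sketch of the fallback logic using the public libvirt API (virsh's real code differs; virConnectGetDomainCapabilities() and virConnectGetMaxVcpus() are real entry points):

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    /* Try the domain-capabilities API first, and on *any* failure, not
     * just "not supported", fall back to the older call. */
    static int query_max_vcpus(virConnectPtr conn)
    {
        char *caps = virConnectGetDomainCapabilities(conn, NULL, NULL,
                                                     NULL, NULL, 0);
        if (caps) {
            /* parse <vcpu max='...'/> from the capabilities XML here */
            free(caps);
            return 0;
        }
        return virConnectGetMaxVcpus(conn, NULL); /* old, widely supported */
    }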
-
Committed by Pavel Glushchak
This flag is used implicitly in the Virtuozzo backend, so we need to support it and not fail when it's set. Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
-
Committed by Pavel Glushchak
This flag tells the backend not to create instance disks, making the behavior the same as in the qemu driver. Disk files have to be created on the target host beforehand, either manually or by an upper management layer, e.g. OpenStack Nova. Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
-
Committed by Nikolay Shirokovskiy
When saving/migrating a domain for which we autogenerated a websocket port, write out -1 for the websocket value when printing the inactive domain config; otherwise it's possible that a subsequent start will fail if the autogenerated websocket port conflicts with an existing running config that also used an autogenerated websocket. Examples:

A. Cannot restore a domain with an autoconfigured websocket. Domains 1 and 2 both have an autoconfigured websocket.
1. Domain 1 is started, then saved.
2. Domain 2 is started.
3. Restoring domain 1 fails:
error: internal error: qemu unexpectedly closed the monitor: 2016-11-21T10:23:11.356687Z qemu-kvm: -vnc 0.0.0.0:2,websocket=5700: Failed to start VNC server on `(null)': Failed to bind socket: Address already in use

B. Cannot migrate a domain with an autoconfigured websocket. Domain 1 is on host A, domain 2 on host B; both have an autoconfigured websocket.
1. Domain 1 is started, domain 2 is started.
2. Migrating domain 1 to host B fails with the above error.
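For illustration, the relevant <graphics> element presumably changes as follows in the inactive config (a hedged example; the port numbers are invented):

    <!-- running config: concrete autogenerated ports -->
    <graphics type='vnc' port='5902' autoport='yes' websocket='5700'/>

    <!-- inactive/saved config after the fix: ports are regenerated on start -->
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1'/>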
-