- 07 September 2017, 11 commits
-
-
By Nikolay Shirokovskiy
It looks simpler to drop this optimization: we are going to add fetching disk stats during migration by querying the qemu process, and the check for whether we have to acquire the job condition becomes more complicated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
Instead of checking stats.status, let's set the status to migrating as soon as the migrate command is sent (waiting for completion is a good place for this too).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
Setting the status to none has little value: getting the job status will then not return even the elapsed time. After this patch, getting job stats stays correct in the sense that it will not fetch migration stats, because it consults stats.status before doing the fetch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
Querying migration statistics on the destination may either fail or return just an elapsed time value, depending on the stats.status value, which is odd. Instead, let's always fail. Clients should be ready to handle this, as the period during which the query fails can already be considerable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
qemuMigrationFetchJobStatus is rather inconvenient. Some of its callers don't need the status to be updated, some don't need the elapsed time updated right away. So let's update the status or elapsed time in the callers instead. This patch drops updating the job status when a client fetches job stats; this way we will not report the status as 'completed' while it has not yet been updated by the migration routine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
qemuMonitorGetMigrationStats will do it for us anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
This way we fetch stats in only one place. The former code basically waits for the completed/postcopy status and doesn't need to mess with stats. The patch also drops raising an error when updating the stats fails; that did not make much sense anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
Let's introduce a QEMU_DOMAIN_JOB_STATUS_POSTCOPY state for job.current->status instead of checking job.current->stats.status; the latter can change whenever migration statistics are fetched. Moving the state-tracking function away from the stats variable and leaving it only as a store for statistics seems more manageable. This patch removes all uses of stats for state checking except in qemuDomainGetJobStatsInternal; that place will be handled separately.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By Nikolay Shirokovskiy
This patch simply switches the code from using VIR_DOMAIN_JOB_* to the newly introduced QEMU_DOMAIN_JOB_STATUS_*. This later gives us the freedom to introduce states for the postcopy and mirroring phases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
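As a rough sketch only, a qemu-private status enum of this kind might look as follows (the member list is an assumption, not copied from the patch; only QEMU_DOMAIN_JOB_STATUS_POSTCOPY is named in this series):

    /* Illustrative sketch of a driver-private job status enum that can
     * grow states the public VIR_DOMAIN_JOB_* enum does not have. */
    typedef enum {
        QEMU_DOMAIN_JOB_STATUS_NONE = 0,   /* no job running */
        QEMU_DOMAIN_JOB_STATUS_ACTIVE,     /* job is running */
        QEMU_DOMAIN_JOB_STATUS_MIGRATING,  /* migrate command was sent */
        QEMU_DOMAIN_JOB_STATUS_POSTCOPY,   /* added later in this series */
        QEMU_DOMAIN_JOB_STATUS_COMPLETED,
        QEMU_DOMAIN_JOB_STATUS_FAILED,
        QEMU_DOMAIN_JOB_STATUS_CANCELED,
    } qemuDomainJobStatus;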
-
By Nikolay Shirokovskiy
The qemu driver does not have VIR_DOMAIN_JOB_BOUNDED jobs, so timeRemaining is always 0.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 06 September 2017, 1 commit
-
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1439991
Whenever a device is updated via the virDomainUpdateDeviceFlags() API, we parse the device XML and ideally run some generic checks to validate the configuration (e.g. whether a device defines a per-device boot order while the domain already has an os/boot element). Well, that's the theory: due to a missing check we jumped out of that check function early.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
-
- 05 September 2017, 6 commits
-
-
By John Ferlan
Neither @cfg nor (now) @driver is used in the API, so remove them and mark @opaque as UNUSED. NB: commit id 'fa3c5585' dropped the unused @qemuCaps, which was the last consumer of @driver other than @cfg, but @cfg was never used even in the original implementation from commit id 'd987f63a'.
-
By Cole Robinson
arm/aarch64 -M virt on KVM does not work with standard VGA card emulation, and never will. The recommended method is to use type=virtio, so let's make it the default for video devices without an explicit type set by the user.
https://bugzilla.redhat.com/show_bug.cgi?id=1404112
Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
By Cole Robinson
And not in generic domain_conf code. We will need qemu private functions in a bit.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
By Cole Robinson
This allows drivers to set their own default. But if a driver neglects to fill one in, we still raise an error at parse time as we previously would.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
By Cole Robinson
This will be needed by future patches to pull the default video type setting out of the XML parsing routines.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
-
By Erik Skultety
There were a few places in our code where the following pattern occurred in an 'if' condition:

    if ((foo = bar() < 0))
        do something;

This patch adjusts the conditions to the expected format:

    if ((foo = bar()) < 0)
        do something;

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1488192
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
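As a standalone illustration of why the operator precedence matters here (bar() and its return value are invented for the demonstration), the two spellings behave differently:

    #include <stdio.h>

    static int bar(void)
    {
        return -1;   /* stand-in for a call that reports failure as < 0 */
    }

    int main(void)
    {
        int foo;

        /* Buggy: '<' binds tighter than '=', so foo is assigned the
         * boolean result of (bar() < 0), i.e. 1, not bar()'s value. */
        if ((foo = bar() < 0))
            printf("buggy: foo = %d\n", foo);   /* prints foo = 1 */

        /* Fixed: assign first, then compare the assigned value. */
        if ((foo = bar()) < 0)
            printf("fixed: foo = %d\n", foo);   /* prints foo = -1 */

        return 0;
    }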
-
- 04 September 2017, 4 commits
-
-
By Andrea Bolognani
Suggested-by: Martin Kletzander <mkletzan@redhat.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
-
By Daniel P. Berrange
Although not previously explicitly documented, the expectation for the libvirt event loop is that an implementation is registered early in application startup, before calling any libvirt APIs, and then run forever after. Replacing a previously registered event loop is not safe and is subject to races even if virConnectClose has been called on open handles, due to delayed deregistration of callbacks during connection close.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
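A minimal sketch of the documented expectation, using libvirt's default event loop implementation (error handling trimmed; the connection URI is an example):

    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn;

        /* Register the event loop implementation early, before any
         * other libvirt API is called. */
        if (virEventRegisterDefaultImpl() < 0)
            return 1;

        if (!(conn = virConnectOpen("qemu:///system")))
            return 1;

        /* ... register event callbacks, do work ... */

        /* Run the event loop for the life of the process; it is never
         * safe to replace it with another implementation later. */
        for (;;) {
            if (virEventRunDefaultImpl() < 0)
                break;
        }

        virConnectClose(conn);
        return 0;
    }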
-
By Michal Privoznik
Funny thing. When initializing the LXC driver's capabilities, virLXCDriverGetCapabilities() is called first. This creates new capabilities, stores them under driver->caps, ref()s them, and returns them. However, the return value is ignored. Then the function is called yet again, and since driver->caps is already set, the capabilities are ref()-ed again and returned. So in the end the driver's capabilities have a refcount of three when in fact they should have a refcount of one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Richard W.M. Jones
If you use the VDDK library to access virtual machines remotely, you really need to know the Managed Object Reference ("moref") of the VM. This must be passed each time you connect to the API. For example, nbdkit's VDDK plugin requires a moref to be passed in order to mount up a VM's disk remotely:

    nbdkit vddk user=root password=+/tmp/rootpw \
        server=esxi.example.com thumbprint=xx:xx:xx:... \
        vm=moref=2 \
        file="[datastore1] Fedora/Fedora.vmdk"

Getting the moref is a huge pain. To get some idea of what it is, why it is needed, and how much trouble it is to get, see:

    https://blogs.vmware.com/vsphere/2012/02/uniquely-identifying-virtual-machines-in-vsphere-and-vcloud-part-1-overview.html
    https://blogs.vmware.com/vsphere/2012/02/uniquely-identifying-virtual-machines-in-vsphere-and-vcloud-part-2-technical.html

However, the moref is conveniently available in the internals of the libvirt VMX driver. This patch exposes it as a custom XML element, using the same "vmware:" namespace that was previously used for the datacenterpath (see libvirt commit 636a9905). It appears in the XML like this:

    <domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
      <name>Fedora</name>
      ...
      <vmware:datacenterpath>ha-datacenter</vmware:datacenterpath>
      <vmware:moref>2</vmware:moref>
    </domain>

Note that the moref can appear either as a simple ID (for esx:// connections) or as "vm-<ID>" (for vpx:// connections). It should be treated by users as an opaque string.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
-
- 01 September 2017, 3 commits
-
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1487322
In ace45e67 I tried to fix a problem where we get the reply to a D-Bus call while we are sleeping. In that case the callback was never set, so I changed the code to call the callback directly in this case. However, I hadn't realized that since the callback is called out of order, it locks the virNetDaemon: exactly the same virNetDaemon object that we are dealing with right now and that we have already locked (in virNetDaemonAddShutdownInhibition()).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Nikolay Shirokovskiy
We call qemuDomainGetMachineName on domain start. On the first start (after daemon start) the pid is 0 and virSystemdGetMachineNameByPID does not get called. But after the domain shuts down the pid becomes -1, so on the next start virSystemdGetMachineNameByPID is called and returns an error. The error is ignored, so it is not critical, but at least on my system (systemd-219 with extra patches) systemd-machined crashes on this request. This behaviour is triggered by eaf2c9f8.
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1484230
When updating a virtio-enabled vNIC and trying to change either rx_queue_size or tx_queue_size, success is reported although no operation is actually performed. Moreover, there is no way to change these on the fly. This is due to the way we check for changes: explicitly for each struct member, so it is easy to miss one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
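The pitfall generalizes: when change detection compares a struct member by member, newly added members are silently ignored until someone extends the comparison. A self-contained illustration (the struct and values are invented for the example; this is not libvirt's actual code):

    #include <stdbool.h>
    #include <stdio.h>

    struct virtioOpts {
        unsigned int queues;
        unsigned int rx_queue_size;   /* newer member */
        unsigned int tx_queue_size;   /* newer member */
    };

    /* Buggy change detection: nobody added the newer members here, so
     * changing them looks like "nothing to do" and succeeds silently. */
    static bool optsChanged(const struct virtioOpts *oldopts,
                            const struct virtioOpts *newopts)
    {
        return oldopts->queues != newopts->queues;
    }

    int main(void)
    {
        struct virtioOpts oldopts = { 4, 256, 256 };
        struct virtioOpts newopts = { 4, 1024, 256 };  /* rx_queue_size changed */

        printf("changed: %d\n", optsChanged(&oldopts, &newopts));  /* prints 0 */
        return 0;
    }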
-
- 31 August 2017, 2 commits
-
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1437797
Rather than using refreshVol, which essentially only updates the allocation, capacity, and permissions of the volume but not the format (which does get updated in a pool refresh), let's use the same helper that pool refresh uses in order to update the volume target.
-
By John Ferlan
Create a separate function to handle the volume target update via probe processing.
-
- 30 August 2017, 3 commits
-
-
By Pavel Hrdina
Currently, while parsing domain XML, we clear the UNIX path if it matches one of the paths auto-generated by libvirt. After that, when the guest is started, a new path is generated, but the mode is also changed to "bind". In a real-world use case the mode should not change; this only happens if a user provides mode='connect' together with a path that matches one of the auto-generated paths, or provides no path at all. Before the *reconnect* feature was introduced there was no issue, but with the new feature we need to make sure that it is used only with "connect" mode, so we need to move the mode change into parsing in order to have a proper error reported by the validation code.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
By Pavel Hrdina
Missed by 9aa72a6d.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
By Daniel P. Berrange
Inspired by the recent GIT / Mercurial security flaws (http://blog.recurity-labs.com/2017-08-10/scm-vulns), consider someone/something managing to feed libvirt a bogus URI such as:

    virsh -c qemu+ssh://-oProxyCommand=gnome-calculator/system

In this case, the hostname "-oProxyCommand=gnome-calculator" will be interpreted as an argument to ssh, not a hostname. Fortunately, due to the set of args we pass after the hostname, SSH will then interpret our bit of shell script that runs 'nc' on the remote host as a cipher name, which is clearly invalid. This makes ssh exit during argv parsing, so it never tries to run gnome-calculator. We were lucky this time, but let's be more paranoid and use '--' to explicitly tell SSH when it has finished seeing command line options. This forces it to interpret "-oProxyCommand=gnome-calculator" as a hostname, and thus fail the hostname lookup.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
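A standalone C sketch of the argv ordering involved (the option list is a rough approximation of the command libvirt builds, not the exact one):

    #include <unistd.h>

    int main(void)
    {
        char *const argv[] = {
            "ssh", "-T", "-e", "none",
            "--",                               /* end of ssh options */
            "-oProxyCommand=gnome-calculator",  /* now merely a bogus hostname */
            "sh", "-c", "nc -U /var/run/libvirt/libvirt-sock",
            NULL,
        };

        /* With "--" in place, ssh fails the hostname lookup instead of
         * treating the bogus hostname as a ProxyCommand option. */
        execvp(argv[0], argv);
        return 1;   /* reached only if exec itself fails */
    }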
-
- 29 August 2017, 10 commits
-
-
By Martin Kletzander
When recreating folders with namespaces, the directory file type was not being handled at all. It is not special; we probably just did not know that it can be used as a volume path as well. The code failed gracefully, but we want to allow it so that we can use <disk type='dir'> in domains again.
Partially-resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1443434
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
By Martin Kletzander
Our backing-chain probing code handles directory file types properly in virStorageFileGetMetadataRecurse(); by that I mean it leaves them alone. However, its caller, virStorageFileGetMetadata(), resets the type to raw before probing without even checking the type. We need to special-case TYPE_DIR in order to achieve the desired results. Also, in order to properly test this, we need to stop resetting the format of TYPE_DIR volumes in the tests (probably the reason why we did not catch this and why the test data did not need to be modified).
Partially-resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1443434
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
By Kothapally Madhu Pavan
This commit adds the qemu driver implementation to edit the XML configuration in a domain's managed save state file.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
-
By Kothapally Madhu Pavan
This commit adds the qemu driver implementation to get the XML description of a domain's managed save state.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
-
By Kothapally Madhu Pavan
Similar to domainSaveImageDefineXML, this commit adds the domainManagedSaveDefineXML API, which allows editing the XML configuration of a domain's managed save state.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
-
By Kothapally Madhu Pavan
Similar to domainSaveImageGetXMLDesc, this commit adds the domainManagedSaveGetXMLDesc API, which allows getting the XML of a domain's managed save state.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
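Taken together, these managed-save commits allow a fetch/edit/redefine round trip. A minimal usage sketch against the new public API (the domain name is a placeholder; a real caller would edit the XML between the two calls):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = NULL;
        char *xml = NULL;

        if (!conn || !(dom = virDomainLookupByName(conn, "mydomain")))
            return 1;

        /* Fetch the domain XML stored in the managed save image ... */
        if (!(xml = virDomainManagedSaveGetXMLDesc(dom, 0)))
            return 1;
        printf("%s\n", xml);

        /* ... and (after editing it) write it back into the image. */
        if (virDomainManagedSaveDefineXML(dom, xml, 0) < 0)
            fprintf(stderr, "redefine failed\n");

        free(xml);
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }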
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1476866
For some reason we completely ignore the <on_reboot/> setting for domains. The implementation is simply not there; it never was.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
This API definitely modifies the state of @vm, therefore it should grab a job.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
-
By Michal Privoznik
In some places we either already hold a synchronous job or have just released one. Also, some APIs might want to use this code without having to release their job. So the job-acquiring code is moved out into qemuDomainRemoveInactiveJob, leaving qemuDomainRemoveInactive to do just what it promises.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
-
By Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-