- 06 Oct 2016, 4 commits
-
-
By John Ferlan
It was missing... Also, since I'm using the soft link from qemuxml2xmloutdata to the qemuxml2argvdata file, modify the output file to have the necessary <address> elements plus the mouse and keyboard. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
The upstream qemu commit 'dce13204' changed the wording just slightly, essentially adding 'in bursts'. Just following that model here. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Change Marco to Macro. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By Laine Stump
When I added support for the pcie-expander-bus controller in commit bc07251f, I incorrectly thought that it only had a single slot available. Actually it has 32 slots, just like the root complex aka pcie-root (the part that I *did* get correct is that unlike pcie-root a pcie-expander-bus doesn't allow any integrated endpoint devices - only pcie-root-ports and dmi-to-pci-controllers are allowed).
-
- 05 Oct 2016, 19 commits
-
-
By Jiri Denemark
GCC complained that:

    vsh.c: In function 'vshReadlineOptionsGenerator':
    vsh.c:2622:29: warning: unused variable 'opt' [-Wunused-variable]
         const vshCmdOptDef *opt = &cmd->opts[list_index];
                             ^
    vsh.c: In function 'vshReadlineParse':
    vsh.c:2830:44: warning: 'opt' may be used uninitialized in this function [-Wmaybe-uninitialized]
         completed_list = opt->completer(autoCompleteOpaque,

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
By John Ferlan
This will fetch "this device" from the recently returned 'dev' and perform common error checking for the paths that call it.
-
By John Ferlan
This will grab the 'dev' from devices and do the common validation checks.
-
By John Ferlan
Reduce some cut-n-paste code by creating a common helper. Make use of the recently added virJSONValueObjectStealArray to grab the devices list as part of the common code (so we can free the reply) and return the devices for each of the callers to continue parsing. NB: This also adds error checking to qemuMonitorJSONDiskNameLookup. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Provide the Steal API for any code paths that want to grab the object array and free it afterwards, rather than relying on freeing the whole chain from the reply.
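As a rough sketch of the ownership pattern this describes (the function name and the "return" key are assumptions for the example; the helpers are libvirt's internal virjson.h API):

    #include "virjson.h"

    /* Hypothetical example: detach the "return" array from a monitor
     * reply, free the reply early, and hand the array to the caller,
     * which now owns it. */
    static virJSONValuePtr
    exampleStealDeviceList(virJSONValuePtr reply)
    {
        virJSONValuePtr devices;

        /* Unlike the Get variant, Steal removes the array from 'reply',
         * so freeing the reply below does not take 'devices' with it. */
        if (!(devices = virJSONValueObjectStealArray(reply, "return")))
            return NULL;

        virJSONValueFree(reply);
        return devices;
    }

Each caller would then walk the returned array (e.g. with virJSONValueArraySize()/virJSONValueArrayGet()) and free it with virJSONValueFree() once done.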
-
By John Ferlan
No sense opening a connection only to fail because we don't support the type of build being attempted. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Rather than using stack-allocated state context pointers, let's allocate and free the state context pointer. In doing so, we'll shrink the code a bit since many routines perform the same initialization sequence. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
Since none of the callers check the status, let's just alter it to a static void. While we're at it, scrap the local runtime variable and just do the math in the VIR_DEBUG directly. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By Peter Krempa
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can choose to make vcpus added by the API removable.
-
By Peter Krempa
For compatibility reasons virDomainSetVcpus needs to add vcpus as non-hotpluggable, which means that users will not be able to unplug them after the VM has started. Add a flag that allows telling the API that unpluggable vcpus are okay.
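From the public API side, a minimal sketch of how a caller opts in to this (the connection URI, domain name and vcpu count are made-up values for the example):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int
    main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
        int ret = 1;

        if (!dom)
            goto cleanup;

        /* Request 4 vcpus on the live domain; the extra flag marks the
         * newly added vcpus as hotpluggable so they can be removed later. */
        if (virDomainSetVcpusFlags(dom, 4,
                                   VIR_DOMAIN_AFFECT_LIVE |
                                   VIR_DOMAIN_VCPU_HOTPLUGGABLE) < 0) {
            fprintf(stderr, "failed to set vcpu count\n");
            goto cleanup;
        }
        ret = 0;

     cleanup:
        if (dom)
            virDomainFree(dom);
        if (conn)
            virConnectClose(conn);
        return ret;
    }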
-
By Peter Krempa
If attaching to a qemu process fails after opening the monitor socket, libvirt does not clean up the monitor. As the monitor also holds a reference to the domain object, the qemu attach API basically leaks it. QEMU also does not interact on a second monitor connection, and thus a further attempt to attach to it would lock up. Prevent libvirt from leaking the monitor by explicitly closing it. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1378401
-
By Peter Krempa
Attaching to an existing qemu process can get us into a situation where qemu is new enough to have the JSON monitor and new vCPU hotplug, but the JSON monitor is not used. The vCPU detection code would require it though. This broke attaching to qemu processes. Make the condition less strict and just skip the vCPU hotplug detection if the JSON monitor is not available. Resolves one of the symptoms in: https://bugzilla.redhat.com/show_bug.cgi?id=1378401
-
By Nehal J Wani
Libvirt, on its own, shouldn't decide whether an expired lease should stay in the custom leases database or not. It should rather rely on the 'DEL' event from dnsmasq. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
By Nehal J Wani
The NSS module shouldn't rely on the custom leases database not having entries for leases which have expired. Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
-
By John Ferlan
We are about to add 6 new values to fetch. This will put us over the current limit of 16 (we're at 13 now). Once there are more than 16 parameters, this will affect existing clients that attempt to fetch blockiotune config values for the domain from a remote host, since the server side has no mechanism to determine whether the capability for the emulator exists and thus would attempt to return all known values from the persistentDef. If attempting to fetch the blockiotune values from a running domain, the code checks the emulator capabilities and sets maxparams (in qemuDomainGetBlockIoTune) appropriately. On the client side of the remote connection, this constant is used in the xdr_remote_domain_get_block_io_tune_ret and virTypedParamsDeserialize calls, so if a remote server returns more than 16 parameters, the client will fail with "Unable to decode message payload". Signed-off-by: John Ferlan <jferlan@redhat.com>
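For context, a sketch of the client-side query pattern that this RPC limit has to accommodate (the URI, domain name and disk target are assumptions; error reporting is trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int
    main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
        virTypedParameterPtr params;
        int nparams = 0;

        if (!dom)
            return 1;

        /* First call with params=NULL just reports how many parameters
         * the server will send back for this disk. */
        if (virDomainGetBlockIoTune(dom, "vda", NULL, &nparams, 0) < 0 ||
            nparams == 0)
            return 1;

        /* Whatever nparams the server advertises still has to fit the
         * RPC-side REMOTE_* limit, which is what the commit raises. */
        params = calloc(nparams, sizeof(*params));
        if (!params ||
            virDomainGetBlockIoTune(dom, "vda", params, &nparams, 0) < 0)
            return 1;

        printf("got %d blockiotune parameters\n", nparams);
        free(params);
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }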
-
By John Ferlan
The REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX constant was erroneously used in the remoteDomainBlockStatsFlags and remoteDomainGetBlockIoTune calls. Change the constant to the right one. Fortunately, all 3 are defined as 16. Signed-off-by: John Ferlan <jferlan@redhat.com>
-
By John Ferlan
-
By Daniel Veillard
* docs/news.html.in: updated for release
* po/*.po*: regenerated
-
By Michal Privoznik
This breaks vCPU hotplug, because when starting a domain we create a copy of the domain definition (which becomes the live XML), and during the post-parse callbacks we might adjust some tunings so that vCPU hotplug is possible. This reverts commit 581b7756.
-
- 04 Oct 2016, 1 commit
-
-
By Michal Privoznik
This breaks vCPU hotplug, because when starting a domain we create a copy of the domain definition (which becomes the live XML), and during the post-parse callbacks we might adjust some tunings so that vCPU hotplug is possible. This reverts commit c0f90799.
-
- 30 Sep 2016, 6 commits
-
-
By Peter Krempa
Certain operations may make the vcpu order information invalid. Since the order is primarily used to ensure migration compatibility and has basically no other user benefits, clear the order prior to certain operations and document that it may be cleared. All the operations that would clear the order can still be properly executed by defining a new domain configuration rather than using the helper APIs. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370357
-
By Peter Krempa
virDomainDefSetVcpus was not designed to handle coldplug of vcpus now that we can set state of vcpus individually. Introduce qemuDomainSetVcpusConfig that properly handles state changes of vcpus when coldplugging so that invalid configurations are not created. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375939
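A simplified sketch of the kind of state handling this describes (not the actual qemuDomainSetVcpusConfig body; the function name here is invented and the fields follow libvirt's internal virDomainVcpuDef):

    #include "domain_conf.h"

    /* Illustrative only: mark the first 'nvcpus' vcpus online and
     * non-hotpluggable (coldplugged), and the rest offline, instead of
     * letting a plain count update produce an inconsistent mix. */
    static void
    exampleSetVcpusConfig(virDomainDefPtr def, unsigned int nvcpus)
    {
        size_t maxvcpus = virDomainDefGetVcpusMax(def);
        size_t i;

        for (i = 0; i < maxvcpus; i++) {
            virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(def, i);

            if (i < nvcpus) {
                vcpu->online = true;
                vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
            } else {
                vcpu->online = false;
            }
        }
    }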
-
By Peter Krempa
The current code that validates duplicate vcpu order would not work properly if the order exceeded def->maxvcpus. Limit the order to the described interval.
-
By Peter Krempa
Allocate a bitmap one bit larger rather than shifting the indexes back to zero.
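A sketch of what that looks like with libvirt's bitmap helpers (the surrounding function is hypothetical; only the allocation size and the direct indexing are the point):

    #include "virbitmap.h"

    /* Hypothetical duplicate-order check: order values run 1..maxvcpus,
     * so allocate maxvcpus + 1 bits and index by the order value itself
     * instead of shifting every index down by one. */
    static int
    exampleCheckVcpuOrder(const unsigned int *order, size_t norder,
                          unsigned int maxvcpus)
    {
        virBitmapPtr seen = virBitmapNew(maxvcpus + 1);
        size_t i;
        int ret = -1;

        if (!seen)
            return -1;

        for (i = 0; i < norder; i++) {
            if (order[i] == 0 || order[i] > maxvcpus ||
                virBitmapIsBitSet(seen, order[i]) ||
                virBitmapSetBit(seen, order[i]) < 0)
                goto cleanup;
        }

        ret = 0;

     cleanup:
        virBitmapFree(seen);
        return ret;
    }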
-
By Peter Krempa
The bitmap indexes for the order duplicate check are shifted to 0, since vcpu order 0 is not allowed. The error message doesn't need such treatment though. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370360
-
By Laine Stump
When support was added for the kvm hidden='on' attribute in commit d07116, the version requirement was listed as "2.1.0 (QEMU only)". However, this was added when libvirt was at version 1.2.8 - it is *QEMU* that must be at version 2.1.0 or later.

This went unnoticed for a very long time (over 2 years). Then a week or two ago a new Windows convert in the #virt channel on OFTC was told he needed to use this feature (to prevent nvidia drivers in a guest from refusing to work due to being run in a virtual machine). There was some problem with it being recognized, and "someone" (it may have been me, or may have been someone else, I don't remember) pointed out that the documentation at http://www.libvirt.org/formatdomain.html says that it requires libvirt 2.1.0.

The next several days were filled with agony as a new convert to Linux first tried to upgrade a Linux Mint host running their "LTS" version to something newer, then tried to install a libvirt build built for Ubuntu onto this, and later went back to the old LTS Linux Mint. After this he tried building his own libvirt from source (with all the expected problems), and finally switched to Fedora. In the end it was hours and hours of everybody's lives that they will never get back. To now learn that he didn't need to do this (his original libvirt version was 1.3.3, so whatever his problem was, it was elsewhere) makes the pain all that much worse.

To prevent this from happening again, this simple patch changes the version requirement for the kvm hidden attribute from "2.1.0 (QEMU only)" to "1.2.8 (QEMU 2.1.0)".
-
- 29 Sep 2016, 6 commits
-
-
By Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1292984

Hold on to your hats, because this is gonna be wild. In bd3e16a3 I tried to expose the sanlock io_timeout. What I had not realized (because there is like no documentation for sanlock at all) was the very unusual way their APIs work. Basically, what we do currently is:

    sanlock_add_lockspace_timeout(&ls, io_timeout);

which adds a lockspace to the sanlock daemon. One would expect that io_timeout sets the io_timeout for it. Nah! That's where you are completely off the tracks. It sets the timeout for the next lockspace you will probably add later. Therefore:

    sanlock_add_lockspace_timeout(&ls, io_timeout = 10); /* adds new lockspace with default io_timeout */
    sanlock_add_lockspace_timeout(&ls, io_timeout = 20); /* adds new lockspace with io_timeout = 10 */
    sanlock_add_lockspace_timeout(&ls, io_timeout = 40); /* adds new lockspace with io_timeout = 20 */

And so on. You get the picture. Fortunately, we don't allow setting io_timeout per domain or per domain disk, so we just need to set the default used in the very first step and hope for the best (as all the io_timeouts used later will have the same value).

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
Currently we are checking for sanlock_add_lockspace_timeout, which is good for now. But in a subsequent patch we are going to use sanlock_write_lockspace (which sets an initial value for the io timeout for sanlock). Now, there is no reason to check for both functions in the sanlock library, as sanlock_write_lockspace was introduced in the 2.7 release and the one we are currently checking for in the 2.5 release. Therefore it is safe to assume the presence of sanlock_add_lockspace_timeout when sanlock_write_lockspace is detected. Moreover, the macro for conditional compilation is renamed to HAVE_SANLOCK_IO_TIMEOUT (as it now encapsulates two functions). Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Michal Privoznik
Global variables are bad; we should avoid using them. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
By Martin Kletzander
If this reminds you of a commit message from around a year ago, it's 41c2aa72, and yes, we're dealing with "the same thing" again. Or f309db1f, and it's similar. There is logic in place that if there is no real need for memory-backend-file, qemuBuildMemoryBackendStr() returns 0. However, that wasn't the case with hugepage backing. The reason for that was that we abused the 'pagesize' variable for storing that information, but we should rather have a separate one that specifies whether we really need the new object for hugepage backing. And that variable should be set only if this particular NUMA cell needs special treatment WRT hugepages. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1372153 Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
By John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1379895 Introduced by commit id '834c5720'. During the code motion and creation of vsh.c, the function 'vshDeinit()' in the (new) vsh.c was altered from whence it came in virsh.c such that calling 'vshReadlineDeinit(ctl)' became conditional on "ctl->imode". This causes a problem for interactive sessions that end with the "quit" or "exit" commands: 'cmdQuit' clears ctl->imode, so when the interactive loop in main() of virsh.c exits because ctl->imode is clear and virshDeinit is called (which calls vshDeinit), the history file is not written. Conversely, if one had exited the interactive loop by pressing <ctrl>D, the file would be created, because loop control is broken on EOF and ctl->imode is not set to false. This patch removes the conditional call to vshReadlineDeinit and restores the former behaviour.
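The shape of the fix, roughly (a trimmed sketch rather than the verbatim vsh.c function; unrelated cleanup is elided):

    /* in tools/vsh.c (sketch) */
    void
    vshDeinit(vshControl *ctl)
    {
        /* Unconditional again: 'quit'/'exit' clear ctl->imode before we
         * get here, so guarding this call on ctl->imode meant the
         * readline history file was never written for those commands. */
        vshReadlineDeinit(ctl);

        /* ... rest of the existing cleanup unchanged ... */
    }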
-
By Roman Bogorodskiy
In commit 7f127ded cpuCompareXML was renamed to virCPUCompareXML, so change the bhyve driver to use the new function and thus fix the build.
-
- 28 Sep 2016, 4 commits
-
-
By Jim Fehlig
Commit 6c504d6a added a note to the virsh man page about the deprecation of 'cap' and 'weight' settings for the credit scheduler. To this day, the default scheduler in Xen is credit, and it supports setting 'cap' and 'weight'. Remove the deprecation notice from the note on the Xen credit scheduler. Reported-by: Volo M. <vm@vovs.net>
-
By Jim Fehlig
Due to a copy and paste error, the scheduler 'cap' parameter was overwriting the 'weight' parameter when preparing the return parameters in libxlDomainGetSchedulerParametersFlags. As a result, the scheduler weight was never shown when getting schedinfo, and setting the weight failed as well:

    virsh schedinfo testvm
    Scheduler      : credit
    cap            : 0

    virsh schedinfo testvm --cap 50 --weight 500
    Scheduler      : credit
    error: invalid scheduler option: weight

The obvious fix is to assign the 'cap' parameter to the correct item in the parameter list. Reported-by: Volo M. <vm@vovs.net>
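An illustrative fragment of the fix described (simplified, with an invented helper name; the typed-parameter constants are libvirt's public ones):

    #include "virtypedparam.h"

    /* Hypothetical helper filling the return parameters for the credit
     * scheduler; 'weight' and 'cap' stand for the values read from
     * libxl_domain_sched_params. */
    static int
    exampleFillCreditParams(virTypedParameterPtr params, int *nparams,
                            unsigned int weight, unsigned int cap)
    {
        if (*nparams < 2)
            return -1;

        if (virTypedParameterAssign(&params[0], VIR_DOMAIN_SCHEDULER_WEIGHT,
                                    VIR_TYPED_PARAM_UINT, weight) < 0)
            return -1;

        /* The copy-and-paste bug wrote this into &params[0] as well,
         * silently clobbering the weight; pointing it at params[1] is
         * the whole fix. */
        if (virTypedParameterAssign(&params[1], VIR_DOMAIN_SCHEDULER_CAP,
                                    VIR_TYPED_PARAM_UINT, cap) < 0)
            return -1;

        *nparams = 2;
        return 0;
    }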
-
By Joao Martins
Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Jim Fehlig <jfehlig@suse.com> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
-
By Joao Martins
Add support for formatting/parsing libxl channels. The syntax in xen libxl goes as follows:

    channel=["connection=pty|socket,path=/path/to/socket,name=XXX",...]

Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Jim Fehlig <jfehlig@suse.com> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
-