- 09 Apr 2015, 2 commits

Committed by Peter Krempa
We need to check in multiple places that qemu supports block jobs. Add a helper to do the check.
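
A helper like this centralizes the check; a minimal sketch, assuming the capability bits and error-reporting helpers the qemu driver already uses (libvirt-internal headers such as qemu_capabilities.h and virerror.h):

    static int
    qemuDomainSupportsBlockJobs(virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;

        /* both the old and the modern block-job command sets count */
        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC) &&
            !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_SYNC)) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                           _("block jobs not supported with this QEMU binary"));
            return -1;
        }
        return 0;
    }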

Committed by Peter Krempa
In some cases the function does not need to access the private data; there, this helper may be used to retrieve the monitor object.
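
A plausible shape for the getter (a sketch; the real helper is named qemuDomainGetMonitor):

    /* Return the monitor without exposing the rest of the
     * domain's private data to the caller. */
    qemuMonitorPtr
    qemuDomainGetMonitor(virDomainObjPtr vm)
    {
        return ((qemuDomainObjPrivatePtr) vm->privateData)->mon;
    }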

- 02 Apr 2015, 2 commits

Committed by Ján Tomko
Instead of always using controller 0 and incrementing the port number, respect the maximum port numbers of the controllers and use all of them. Ports for virtio consoles are quietly reserved, but not formatted (neither in the XML nor on the QEMU command line). Also reject duplicate virtio-serial addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args: filling out the port when just the controller is specified switched from using maxport + 1 to the first free port on the controller.
* virtio-autoassign.args: filling out the address when no <address> is specified started using all the controllers instead of 0; the bus value is also discarded.
* xml -> xml output of virtio-auto: the port assignment is no longer done as a part of XML parsing, so the unspecified values stay 0.

Committed by Peter Krempa
The automatic cpuset can be stored along with the automatic nodeset, so it does not have to be recreated when used.

- 25 Mar 2015, 2 commits

Committed by Jiri Denemark
Whenever we fail to acquire a job, we can report how long ago it was locked by another API.
https://bugzilla.redhat.com/show_bug.cgi?id=853839
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>

Committed by Jiri Denemark
This is very helpful when we want to log and report why we could not acquire a state change lock. Reporting what job keeps it locked helps with understanding the issue. Moreover, after calling virDomainGetControlInfo, it's possible to tell whether libvirt is just stuck somewhere within the API (or whether it just forgot to clean up the job) or whether libvirt is waiting for QEMU to reply. The error message will look like the following:

    # virsh resume cd
    error: Failed to resume domain cd
    error: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainSuspend)

https://bugzilla.redhat.com/show_bug.cgi?id=853839
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
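
One way to implement this is to record the API name when the job is acquired and include it in the timeout error. A rough sketch follows; the struct fields and the helper name are illustrative, not the exact libvirt code:

    struct qemuDomainJobObj {
        virThread owner;             /* thread that owns the current job */
        const char *ownerAPI;        /* name of the API that started it */
        unsigned long long started;  /* when the job was acquired */
    };

    static void
    qemuDomainObjReportJobHeld(struct qemuDomainJobObj *job)
    {
        virReportError(VIR_ERR_OPERATION_TIMEOUT,
                       _("cannot acquire state change lock (held by %s)"),
                       job->ownerAPI);
    }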

- 23 Mar 2015, 2 commits

Committed by Peter Krempa
Add support for starting a qemu instance with the 'pc-dimm' device. Thanks to the refactors, we are able to reuse the existing function to determine the parameters.

Committed by Peter Krempa
When using 'dimm' memory devices with qemu, some of the information, like the slot number and base address, needs to be reloaded from qemu after process start so that it reflects the actual state. The state then makes it possible to use memory devices across migrations.

- 16 Mar 2015, 3 commits

Committed by Peter Krempa
The memory sizes in qemu are aligned up to 1 MiB boundaries. This was done in two places: once for the total size and then for the individual NUMA cell sizes. Add a function that aligns the sizes in one place, so that it's clear where the sizes are aligned.
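
The alignment itself is simple arithmetic; a sketch of such a helper, assuming sizes are kept in KiB as elsewhere in libvirt (the function name is illustrative):

    /* Round a memory size in KiB up to the next 1 MiB boundary. */
    static unsigned long long
    qemuDomainMemorySizeAlign(unsigned long long kib)
    {
        return (kib + 1023) / 1024 * 1024;
    }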

Committed by Peter Krempa
While qemu may be prepared to do this, libvirt is not. Forbid the block ops until we fix our code.

Committed by Peter Krempa
Surprisingly, we did not grab a VM job when a block job finished, and we'd happily rewrite the backing chain data. This made it possible to crash libvirt by queueing two backing chain updates in tight succession, among other badness. To fix it, add yet another handler to the helper thread that handles monitor events that require a job.
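
The handler's outline, as a sketch: the job-acquisition helpers are real libvirt functions, while the handler name and its exact arguments are assumptions:

    static void
    processBlockJobEvent(virQEMUDriverPtr driver, virDomainObjPtr vm)
    {
        /* runs in the event helper thread, not in the main event loop */
        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
            return;

        if (!virDomainObjIsActive(vm))
            goto endjob;

        /* only now is it safe to rewrite the backing chain data */

     endjob:
        qemuDomainObjEndJob(driver, vm);
    }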

- 02 Mar 2015, 1 commit

Committed by Ján Tomko
Not just the DomainObj's private data.

- 19 Jan 2015, 1 commit

Committed by Ján Tomko
Depending on the context, either error out if the domain has disappeared in the meantime, or just ignore the value to allow marking the function with ATTRIBUTE_RETURN_CHECK.

- 15 Jan 2015, 1 commit

Committed by Ján Tomko
The domain might disappear during the time spent in the monitor, while the virDomainObjPtr is unlocked, so the caller needs to check if it's still alive. Since most of the callers are going to need it, put the check inside qemuDomainObjExitMonitor and return -1 if the domain died in the meantime.
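
The resulting caller pattern looks roughly like this (a sketch; qemuMonitorSystemPowerdown stands in for any monitor call, and the wrapper function is illustrative):

    static int
    qemuDomainPowerdown(virQEMUDriverPtr driver, virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        int ret;

        qemuDomainObjEnterMonitor(driver, vm);
        ret = qemuMonitorSystemPowerdown(priv->mon);
        if (qemuDomainObjExitMonitor(driver, vm) < 0)
            ret = -1;   /* the domain died while the object was unlocked */

        return ret;
    }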

- 21 Dec 2014, 1 commit

Committed by Martin Kletzander
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, which was waiting, fails to get the job, it unrefs the domain object, and because that was the last reference, it causes clearing of the whole domain object. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).

The root cause is that our APIs don't ref the objects they are using and all use the implicit reference that the object has when it is in the domain list. That reference can be removed when the API is waiting for a job. And because the APIs don't do their own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
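
The end-of-API helper can then be the single place that pairs the unref with the unlock; a sketch close to the described behavior:

    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        /* unlock only if our unref was not the last reference */
        if (virObjectUnref(*vm))
            virObjectUnlock(*vm);
        *vm = NULL;
    }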

- 17 Dec 2014, 1 commit

Committed by Eric Blake
When requested in a later patch, the QMP command results are now examined recursively. As qemu_driver will eventually have to read items out of the hash table as stored by this patch, the computation of the backing alias string is done in a shared location.

* src/qemu/qemu_domain.h (qemuDomainStorageAlias): New prototype.
* src/qemu/qemu_domain.c (qemuDomainStorageAlias): Implement it.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): Perform recursion.
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Update callers.

Signed-off-by: Eric Blake <eblake@redhat.com>
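
A plausible sketch of the shared alias computation: the backing image at depth N of device "drive-virtio-disk0" would be named "drive-virtio-disk0[N]" (treat the exact format as an assumption):

    char *
    qemuDomainStorageAlias(const char *device, int depth)
    {
        char *alias = NULL;

        if (depth == 0)
            ignore_value(VIR_STRDUP(alias, device));
        else
            ignore_value(virAsprintf(&alias, "%s[%d]", device, depth));

        return alias;  /* NULL on allocation failure */
    }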

- 16 Dec 2014, 1 commit

Committed by Martin Kletzander
Thanks to that, we don't need to drag the pointer everywhere and future code will be cleaner.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

- 04 Dec 2014, 1 commit

Committed by Peter Krempa
Move entering the job into the thread to simplify the program flow. Also, as the code holds a separate reference to the domain object, some conditions can be simplified. After this patch qemuDomainObjTransferJob is no longer needed, so this patch removes it.

- 28 Nov 2014, 1 commit

Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad1 (1.2.5) we have been trying to keep track of the FSFreeze status of the guest. Even though I've tried to fix a couple of corner cases (6ea54769), it occurred to me just recently that the approach is broken by design. Firstly, there are many other ways to talk to qemu-ga (even through libvirt, e.g. qemu-agent-command) by which filesystems can be thawed without libvirt noticing. Moreover, there are plenty of ways to thaw filesystems without even qemu-ga noticing (yes, qemu-ga keeps internal track of the FSFreeze status). So, instead of keeping track ourselves, or asking qemu-ga for possibly stale state, it's best to let qemu-ga deal with it (and possibly let the guest kernel propagate an error). Moreover, there was one bug in the previous approach: if the fsfreeze command failed, we executed fsthaw subsequently. So issuing domfsfreeze in virsh gave the following result:

    virsh # domfsfreeze gentoo
    Froze 1 filesystem(s)
    virsh # domfsfreeze gentoo
    error: Unable to freeze filesystems
    error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
    virsh # domfsfreeze gentoo
    Froze 1 filesystem(s)
    virsh # domfsfreeze gentoo
    error: Unable to freeze filesystems
    error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

- 21 Nov 2014, 1 commit

Committed by Peter Krempa
Newer qemu added an event that is emitted when a virtio serial channel is opened in the guest OS. This allows us to update the state of the port in the output-only XML element. This patch implements the monitor callbacks and the necessary handlers to update the state in the definition.

- 07 Oct 2014, 1 commit

Committed by Laine Stump
NIC_RX_FILTER_CHANGED is sent by qemu any time a NIC driver in the guest modifies the NIC's RX filter (for example, if the MAC address of the NIC is changed by the guest). This patch doesn't do anything useful with that event; it just sets up all the plumbing to get news of the event into a worker thread with all proper locking/reference counting, and provides an easy place to add in the desired functionality. See src/qemu/EVENTHANDLERS.txt for information/instructions on adding a libvirt-internal handler for a qemu event (using NIC_RX_FILTER_CHANGED as an example).

- 24 Sep 2014, 2 commits

Committed by Peter Krempa
Request error reporting from the backing chain traveller and drop qemu's internal backing chain integrity tester. The backing chain traveller reports errors by itself, with possibly more detail than qemuDiskChainCheckBroken ever could. We also need to make sure that we reconnect to existing qemu instances even at the cost of losing the backing chain info (this really should be stored in the XML rather than reloaded from disk, but that needs some work).

Committed by Peter Krempa
Reuse virStorageSourceIsEmpty and rename "force" argument to "force_probe".

- 19 Sep 2014, 1 commit

Committed by John Ferlan
Mimic the "Disk" processing for 'rawio', but for a scsi_host hostdev lun device.

- 16 Sep 2014, 1 commit

Committed by John Ferlan
Add new 'niothreadpids' and 'iothreadpids' to mimic the 'ncpupids' and 'vcpupids' that already exist.

- 10 Sep 2014, 3 commits

Committed by Jiri Denemark
The total time of a migration and the total downtime transferred from the source to the destination host do not account for the transfer time to the destination host or for the time elapsed before guest CPUs are resumed. Thus, the source libvirtd remembers when the migration started and when guest CPUs were paused. Both timestamps are transferred to the destination libvirtd, which uses them to compute the total migration time and total downtime. Obviously, this requires the time to be synchronized between the two hosts. The reported times are useless otherwise, but they would be equally useless if we didn't do this recomputation, so we don't lose anything by doing it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>

Committed by Jiri Denemark
virDomainGetJobStats gains a new VIR_DOMAIN_JOB_STATS_COMPLETED flag that can be used to fetch statistics of a completed job rather than a currently running job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
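
A minimal usage sketch of the new flag through the public API (the wrapper function is illustrative):

    #include <libvirt/libvirt.h>

    /* Fetch statistics of the most recently completed job (e.g. a
     * finished migration) instead of the currently running one. */
    static int
    printCompletedJobStats(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        int type;

        if (virDomainGetJobStats(dom, &type, &params, &nparams,
                                 VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
            return -1;

        /* params now holds VIR_DOMAIN_JOB_* typed parameters, such as
         * elapsed time and downtime */
        virTypedParamsFree(params, nparams);
        return 0;
    }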

Committed by Jiri Denemark
Job statistics data were tracked in several structures and variables. Let's make a new qemuDomainJobInfo structure which can be used as a single source of statistics data, as a preparation for storing data about a completed job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
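
The rough shape of the consolidated structure; the fields are assumptions based on the description, not the exact libvirt definition:

    typedef struct _qemuDomainJobInfo qemuDomainJobInfo;
    struct _qemuDomainJobInfo {
        virDomainJobType type;        /* kind of job the data belongs to */
        unsigned long long started;   /* when the job started */
        unsigned long long stopped;   /* when the job finished */
        virDomainJobInfo stats;       /* data reported to the public API */
    };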

- 08 Sep 2014, 1 commit

Committed by Peter Krempa
Be consistent with the naming of private defines. Also line up code correctly in a few places where the macro is used.

- 14 Aug 2014, 1 commit

Committed by Sam Bobroff
During a QEMU live migration several warning messages about job handling could be written to syslog on the destination host:

    "entering monitor without asking for a nested job is dangerous"

The messages are written because the job handling during migration uses hard-coded asyncJob values in several places that are incorrect. This patch passes the required asyncJob value around and prevents the warnings, as well as any issues that the warnings may be referring to.
https://bugzilla.redhat.com/show_bug.cgi?id=1130089
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
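
After the fix, callers pass the async job they run under instead of a hard-coded value; a sketch of the pattern, where the Enter/Exit helpers are real libvirt functions and the wrapper is illustrative:

    static int
    doMigrationMonitorStep(virQEMUDriverPtr driver, virDomainObjPtr vm,
                           qemuDomainAsyncJob asyncJob)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        int ret;

        /* registers a nested job matching the caller's async job */
        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
            return -1;

        ret = qemuMonitorSetCapabilities(priv->mon);
        qemuDomainObjExitMonitor(driver, vm);
        return ret;
    }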

- 08 Jul 2014, 1 commit

Committed by Peter Krempa
Add a wrapper that determines the correct uid and gid for a certain storage file and domain.

- 25 Jun 2014, 1 commit

Committed by Julio Faracco
As we have been doing with the enum structures, a cleanup of the "src/qemu/" directory was done now. All the enums that were defined in the header files of this directory were converted to typedefs. This patch includes all the adjustments needed to remove conflicts when you do this kind of change. The enum-to-typedef conversions were made in "src/qemu/qemu_{capabilities, domain, migration, hotplug}.h".
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
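
The conversion pattern, illustrated on one of the qemu job enums (simplified; not an exact hunk from the patch):

    /* before: a plain enum that had to be used as 'enum qemuDomainJob'
     *
     *     enum qemuDomainJob {
     *         QEMU_JOB_NONE = 0,
     *         QEMU_JOB_QUERY,
     *         QEMU_JOB_MODIFY,
     *     };
     *
     * after: a typedef usable simply as 'qemuDomainJob' */
    typedef enum {
        QEMU_JOB_NONE = 0,
        QEMU_JOB_QUERY,
        QEMU_JOB_MODIFY,
    } qemuDomainJob;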

- 21 Jun 2014, 1 commit

Committed by Ján Tomko
Just code movement and rename.

- 03 Jun 2014, 1 commit

Committed by Julio Faracco
In "src/conf/domain_conf.h" there are many enum declarations. The cleanup in this header filer was started, but it wasn't enough and there are many other files that has enum variables declared. So, the commit was starting to be big. This commit finish the cleanup in this header file and in other files that has enum variables, parameters, or functions declared. Signed-off-by: NJulio Faracco <jcfaracco@gmail.com> Signed-off-by: NEric Blake <eblake@redhat.com>

- 02 Jun 2014, 1 commit

Committed by Jiri Denemark
Currently, we do not acquire any job when removing a device after a DEVICE_DELETED event is received from QEMU. This means that if there is another API running at the time DEVICE_DELETED is delivered and that API acquired a job, we may happily change the definition of the domain the API is working with whenever it unlocks the domain object (e.g., to talk with its monitor). That said, we have to acquire a job before finishing device removal to make things safe. However, doing so in the main event loop would cause a deadlock, so we need to move most of the event handler into a separate thread.

Another good reason for both acquiring a job and handling the event in a separate thread is that we currently remove a device backend immediately after removing its frontend, while we should only remove the backend once we have already received the DEVICE_DELETED event. That is, we will have to talk to the QEMU monitor from the event handler.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>

- 14 May 2014, 2 commits

Committed by Jiri Denemark
It's only used within qemu_domain.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>

Committed by Roman Bogorodskiy
Introduce new files (domain_addr.[ch]) to provide an API for domain device handling that can be shared across the drivers. A number of data types were extracted and moved there:

    qemuDomainPCIAddressBus    -> virDomainPCIAddressBus
    qemuDomainPCIAddressBusPtr -> virDomainPCIAddressBusPtr
    _qemuDomainPCIAddressSet   -> virDomainPCIAddressSet
    qemuDomainPCIAddressSetPtr -> virDomainPCIAddressSetPtr
    qemuDomainPCIConnectFlags  -> virDomainPCIConnectFlags

Also, move the related definitions and macros.

- 07 May 2014, 1 commit

Committed by Tomoki Sekiyama
Adds a 'quiesced' status into qemuDomainObjPrivate that tracks whether FSFreeze is requested in the domain. It modifies the error code from qemuDomainSnapshotFSFreeze and qemuDomainSnapshotFSThaw, so that a caller can know whether the command was actually sent to the guest agent. If the error occurs before sending a freeze command, the counterpart thaw command shouldn't be sent either, so as not to confuse fsfreeze status tracking.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

- 24 Apr 2014, 1 commit

Committed by Peter Krempa
The function isn't used in any other source file. Move it so that it doesn't need a declaration.

- 18 Mar 2014, 1 commit

Committed by Martin Kletzander
Eliminate all the duplicated code that checks for priv->agentError or priv->agent.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
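
The consolidated check then looks roughly like this; a sketch close to, but not necessarily identical to, the helper the patch adds:

    static bool
    qemuDomainAgentAvailable(qemuDomainObjPrivatePtr priv, bool reportError)
    {
        if (priv->agentError) {
            if (reportError)
                virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
                               _("QEMU guest agent is not available due to an error"));
            return false;
        }
        if (!priv->agent) {
            if (reportError)
                virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                               _("QEMU guest agent is not configured"));
            return false;
        }
        return true;
    }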