- 08 Feb 2016, 2 commits
By Peter Krempa
In b3d2a42e I refactored the code and moved the 'cleanup' label. Unfortunately, the code that was originally under the 'endjob' label and jumped to 'cleanup' now lives in the 'cleanup' label itself. Remove the jump and let the function run to completion.
-
By Peter Krempa
When thawing the filesystems after a failure, the code would overwrite the error that caused the snapshot to fail with the error from thawing the filesystems. Since the thawing function allows controlling its error reporting behavior, we can use that feature to preserve the original error.
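A minimal sketch of the error-preservation pattern this relies on, using libvirt's public virterror helpers. thaw_filesystems() is a hypothetical stand-in for the internal thaw helper; its 'report' flag mirrors the error-reporting control mentioned above.

#include <stdbool.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* Hypothetical stand-in for the internal thaw helper; when 'report' is
 * false it must not raise a libvirt error of its own. */
static int
thaw_filesystems(bool report)
{
    (void)report;
    return 0;
}

/* Preserve the error that made the snapshot fail while running cleanup
 * that could otherwise overwrite it. */
static void
cleanup_after_failed_snapshot(void)
{
    virErrorPtr orig_err = virSaveLastError();  /* remember why the snapshot failed */

    thaw_filesystems(false);                    /* thaw quietly, don't clobber the error */

    if (orig_err) {
        virSetError(orig_err);                  /* restore the original error ... */
        virFreeError(orig_err);                 /* ... and drop the saved copy */
    }
}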
-
- 05 Feb 2016, 2 commits
By Joao Martins
virDomainSnapshotDefFormat calls into virDomainDefFormat, so it should be providing a non-NULL virCapsPtr instance. In the qemu driver, change qemuDomainSnapshotWriteMetadata to also take caps, since it calls virDomainSnapshotDefFormat. Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
-
By Daniel P. Berrange
The virDomainObjFormat and virDomainSaveStatus methods both call into virDomainDefFormat, so they should be providing a non-NULL virCapsPtr instance. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
-
- 04 Feb 2016, 1 commit
By Joao Martins
virDomainSaveConfig calls virDomainDefFormat, which was passing NULL for the caps and thus kept the old behaviour (i.e. not looking at the netprefix). This patch adds a virCapsPtr argument to the function, allowing the configuration to be saved while skipping interface names that were registered with virCapabilitiesSetNetPrefix(). Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
-
- 03 Feb 2016, 4 commits
By Nikolay Shirokovskiy
A pretty nasty deadlock occurs while trying to rename a VM in parallel with virDomainObjListNumOfDomains. The short description of the problem is as follows:

Thread #1: qemuDomainRename:
  acquires the domain lock via qemuDomObjFromDomain, then waits for the domain list lock in any of these functions:
    - virDomainObjListFindByName
    - virDomainObjListRenameAddNew
    - virDomainObjListRenameRemove

Thread #2: virDomainObjListNumOfDomains:
  acquires the domain list lock, then waits for the domain lock in virDomainObjListCount.

Introduce a generic virDomainObjListRename function for renaming domains. It acquires the list lock in the right order to avoid the deadlock. A callback is used to make driver-specific domain updates. Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
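A simplified illustration of the consistent lock-ordering rule the fix enforces, written with plain pthread mutexes rather than libvirt's object locks; the structures and names below are stand-ins, not the actual libvirt code.

#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

struct dom {
    pthread_mutex_t lock;
    char *name;
};

typedef int (*rename_cb)(struct dom *d, const char *newname);

/* Analogue of virDomainObjListRename: always take the list lock before
 * any per-domain lock, so this can never deadlock against a thread that
 * walks the list (e.g. NumOfDomains) and then locks individual domains. */
static int
list_rename(struct dom *d, const char *newname, rename_cb driver_cb)
{
    int ret;

    pthread_mutex_lock(&list_lock);  /* 1: list lock */
    pthread_mutex_lock(&d->lock);    /* 2: domain lock */

    ret = driver_cb(d, newname);     /* driver-specific updates via callback */

    pthread_mutex_unlock(&d->lock);
    pthread_mutex_unlock(&list_lock);
    return ret;
}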
-
By Peter Krempa
Avoid using virDomainDefSetVcpus when we can set it directly in the structure.
-
By Dmitry Andreev
When a guest panics and the crashed domain is preserved, its CPUs are stopped. Tools like WinDbg cannot be used to investigate the problem until we start the CPUs again.
-
By Nikolay Shirokovskiy
The error paths after sending the 'domain started' event are written as if ret were still -1, the value set at the beginning of the function. It's a common idiom to keep 'ret' equal to -1 until the end of the function, where it is set to 0. Here, however, 'ret' is also used to hold the result of the restore operation, which breaks the idiom and confuses its users :) Let's use a different variable to hold the restore result. Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
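A minimal sketch of the idiom the message describes, with hypothetical helper names: 'ret' stays -1 until every step has succeeded, while per-step results live in their own variable so cleanup never overwrites the overall outcome.

/* Hypothetical steps standing in for the restore and the event emission. */
static int restore_something(void) { return 0; }
static int send_started_event(void) { return 0; }

static int
do_operation(void)
{
    int ret = -1;   /* pessimistic default, flipped only at the very end */
    int rc;         /* result of an individual step, e.g. the restore */

    rc = restore_something();
    if (rc < 0)
        goto cleanup;

    if (send_started_event() < 0)
        goto cleanup;

    ret = 0;        /* every step succeeded */

 cleanup:
    /* error-path unwinding goes here and must not touch 'ret' */
    return ret;
}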
-
- 01 Feb 2016, 1 commit
By Boris Fiuczynski
Having on_crash set to either coredump-destroy or coredump-restart creates core dumps with the memory-only option in the directory specified by auto_dump_path. When a watchdog with the dump action is triggered, the core dump is also placed into the directory specified by auto_dump_path, but is created without the memory-only option. This patch sets the memory-only option for core dumps created by the watchdog event as well. Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com> Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com> Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
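For comparison, a memory-only dump requested through the public API looks roughly like this. A sketch: the domain pointer and target path are assumptions, while virDomainCoreDumpWithFormat and VIR_DUMP_MEMORY_ONLY are the public call and flag.

#include <libvirt/libvirt.h>

/* Ask libvirt for a memory-only core dump of a running domain. */
static int
dump_memory_only(virDomainPtr dom, const char *path)
{
    return virDomainCoreDumpWithFormat(dom, path,
                                       VIR_DOMAIN_CORE_DUMP_FORMAT_RAW,
                                       VIR_DUMP_MEMORY_ONLY);
}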
-
- 28 Jan 2016, 1 commit
By Peter Krempa
Iterate over all cpus, skipping inactive ones.
-
- 26 Jan 2016, 2 commits
By Daniel P. Berrange
The VIR_DOMAIN_STATS_VCPU flag to virDomainListGetStats enables reporting of stats about vCPUs. Currently we only report the cumulative CPU running time and the execution state. This adds reporting of the wait time - the time the vCPU wants to run, but the host scheduler has something else running ahead of it. The data is reported per-vCPU, e.g.

$ virsh domstats --vcpu demo
Domain: 'demo'
  vcpu.current=4
  vcpu.maximum=4
  vcpu.0.state=1
  vcpu.0.time=1420000000
  vcpu.0.wait=18403928
  vcpu.1.state=1
  vcpu.1.time=130000000
  vcpu.1.wait=10612111
  vcpu.2.state=1
  vcpu.2.time=110000000
  vcpu.2.wait=12759501
  vcpu.3.state=1
  vcpu.3.time=90000000
  vcpu.3.wait=21825087

While implementing this I noticed our reporting of CPU execute time has very poor granularity, since we are getting it from /proc/$PID/stat. As a future enhancement we should prefer to get CPU execute time from /proc/$PID/schedstat or /proc/$PID/sched (if either exists on the running kernel). Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
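A small consumer sketch of the new field through the public stats API; the connection URI is an assumption and error handling is trimmed.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainStatsRecordPtr *records = NULL;
    int nrecords;

    if (!conn)
        return 1;

    nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_VCPU,
                                           &records, 0);

    for (int i = 0; i < nrecords; i++) {
        printf("Domain: '%s'\n", virDomainGetName(records[i]->dom));
        for (int j = 0; j < records[i]->nparams; j++) {
            virTypedParameterPtr p = &records[i]->params[j];
            /* vcpu.N.time and the new vcpu.N.wait arrive as ULLONG params */
            if (p->type == VIR_TYPED_PARAM_ULLONG)
                printf("  %s=%llu\n", p->field, p->value.ul);
            else if (p->type == VIR_TYPED_PARAM_INT)
                printf("  %s=%d\n", p->field, p->value.i);
            else if (p->type == VIR_TYPED_PARAM_UINT)
                printf("  %s=%u\n", p->field, p->value.ui);
        }
    }

    if (records)
        virDomainStatsRecordListFree(records);
    virConnectClose(conn);
    return 0;
}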
-
By Peter Krempa
Remove unnecessary condition and variable.
-
- 21 Jan 2016, 1 commit
By Dmitry Andreev
When ACPI is used to reboot/shutdown a qemu domain, qemu emits the SHUTDOWN event. Libvirt uses the fakeReboot variable to tell a reboot apart from a shutdown; its value is reset to false after the domain restart/reset. When mode=agent is used to reboot a qemu domain, qemu doesn't emit the SHUTDOWN event and libvirt doesn't reset fakeReboot back to false. In that case the next 'shutdown -h now' performs a reboot instead. That's why we don't need to set fakeReboot=true for mode=agent. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
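For context, the two reboot paths the message contrasts are selected by the caller through public flags. A sketch (the domain pointer is an assumption):

#include <libvirt/libvirt.h>

/* ACPI-based reboot: qemu emits SHUTDOWN, so the driver tracks fakeReboot. */
static int
reboot_via_acpi(virDomainPtr dom)
{
    return virDomainReboot(dom, VIR_DOMAIN_REBOOT_ACPI_POWER_BTN);
}

/* Agent-based reboot: handled inside the guest; per the message above no
 * SHUTDOWN event is seen, so no fakeReboot bookkeeping is needed. */
static int
reboot_via_agent(virDomainPtr dom)
{
    return virDomainReboot(dom, VIR_DOMAIN_REBOOT_GUEST_AGENT);
}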
-
- 12 Jan 2016, 1 commit
By Michal Privoznik
In the qemu driver we listen to virtio channel events, such as an agent connecting to or disconnecting from the guest end of the socket. There is one small exception, though: when we find out that the socket in question belongs to the guest agent, we connect or disconnect the guest agent, and this is done before the new state is set in the internal structure. Due to a bug in our code it could happen that we got the event but failed to record it in the internal structure representing the channel. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 09 Jan 2016, 3 commits
By Cole Robinson
Rather than open coding the calls. I can't see any reason not to.
-
By Cole Robinson
Rather than open coding the calls. I can't see any reason not to.
-
By Jiri Denemark
The structure actually contains migration statistics rather than just the status its name suggests. Renaming it to qemuMonitorMigrationStats removes the confusion. Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
- 07 Jan 2016, 1 commit
By Peter Krempa
When doing a memory-only snapshot, libvirt would still issue the 'transaction' command without any disks. Skip it when it isn't necessary.
-
- 05 Jan 2016, 1 commit
By Michal Privoznik
While reviewing 1b43885d I noticed a virReportError() followed by a 'goto endjob;' without setting the correct return value. The problem is that if the block job is so fast that its bandwidth does not fit into an ulong, an error is reported. However, by that time @ret has already been set to 1, which means success. Since the scenario can hardly be considered successful, we should return a value meaning error. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 04 Jan 2016, 1 commit
By Michael Chapman
Valgrind complained:

==23975== Conditional jump or move depends on uninitialised value(s)
==23975==    at 0x22255FA6: qemuDomainGetBlockJobInfo (qemu_driver.c:16538)
==23975==    by 0x538E97C: virDomainGetBlockJobInfo (libvirt-domain.c:9685)
==23975==    by 0x12F740: remoteDispatchDomainGetBlockJobInfoHelper (remote.c:2834)
==23975==    by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==23975==    by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==23975==    by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==23975==    by 0x52F515B: virThreadPoolWorker (virthreadpool.c:145)
==23975==    by 0x52F4668: virThreadHelper (virthread.c:206)
==23975==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==23975==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)
==23975==
==23975== Conditional jump or move depends on uninitialised value(s)
==23975==    at 0x22255FB4: qemuDomainGetBlockJobInfo (qemu_driver.c:16542)
==23975==    by 0x538E97C: virDomainGetBlockJobInfo (libvirt-domain.c:9685)
==23975==    by 0x12F740: remoteDispatchDomainGetBlockJobInfoHelper (remote.c:2834)
==23975==    by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==23975==    by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==23975==    by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==23975==    by 0x52F515B: virThreadPoolWorker (virthreadpool.c:145)
==23975==    by 0x52F4668: virThreadHelper (virthread.c:206)
==23975==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==23975==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)

If no matching block job is found, qemuMonitorGetBlockJobInfo returns 0 and we should not write anything to the caller-supplied virDomainBlockJobInfo pointer. Signed-off-by: Michael Chapman <mike@very.puzzling.org>
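Caller-side sketch of the contract the fix relies on: virDomainGetBlockJobInfo() returns -1 on error, 0 when no job exists (the info struct must not be read), and 1 when it has been filled in. The connection URI, domain name and disk target are assumptions.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
    virDomainBlockJobInfo info;
    int rc = 1;

    if (!dom)
        goto cleanup;

    switch (virDomainGetBlockJobInfo(dom, "vda", &info, 0)) {
    case 1:   /* a job exists and 'info' was filled in */
        printf("cur=%llu end=%llu bandwidth=%lu\n",
               info.cur, info.end, info.bandwidth);
        rc = 0;
        break;
    case 0:   /* no job: 'info' was left untouched, don't read it */
        printf("no block job on vda\n");
        rc = 0;
        break;
    default:  /* -1: error already reported by libvirt */
        fprintf(stderr, "query failed\n");
    }

 cleanup:
    if (dom)
        virDomainFree(dom);
    if (conn)
        virConnectClose(conn);
    return rc;
}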
-
- 21 Dec 2015, 1 commit
By Andrea Bolognani
This replaces the virPCIKnownStubs string array that was used internally for stub driver validation. Advantages:
* possible values are well-defined
* typos in driver names will be detected at compile time
* avoids having several copies of the same string around
* no error checking required when setting / getting the value

The names used mirror those in the virDomainHostdevSubsysPCIBackendType enumeration.
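A generic sketch of the pattern (a typed enum plus a compile-time sized string table); the identifiers and driver names below are illustrative stand-ins, not the actual libvirt ones.

#include <string.h>

typedef enum {
    PCI_STUB_DRIVER_NONE = 0,
    PCI_STUB_DRIVER_KVM,
    PCI_STUB_DRIVER_VFIO,
    PCI_STUB_DRIVER_XEN,

    PCI_STUB_DRIVER_LAST   /* always last: sizes the table below */
} pciStubDriver;

/* Callers in C code use the enum constants, so a typo is a compile error. */
static const char *pciStubDriverNames[PCI_STUB_DRIVER_LAST] = {
    [PCI_STUB_DRIVER_NONE] = "none",
    [PCI_STUB_DRIVER_KVM]  = "pci-stub",
    [PCI_STUB_DRIVER_VFIO] = "vfio-pci",
    [PCI_STUB_DRIVER_XEN]  = "pciback",
};

/* Enum -> string: no error checking needed beyond the range guard. */
static const char *
pciStubDriverToString(pciStubDriver d)
{
    return (d >= 0 && d < PCI_STUB_DRIVER_LAST) ? pciStubDriverNames[d] : NULL;
}

/* String -> enum, for values coming from configuration. */
static int
pciStubDriverFromString(const char *name)
{
    for (int i = 0; i < PCI_STUB_DRIVER_LAST; i++)
        if (pciStubDriverNames[i] && strcmp(pciStubDriverNames[i], name) == 0)
            return i;
    return -1;
}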
-
- 17 Dec 2015, 2 commits
By Ján Tomko
This function calls qemuDomainAttachControllerDevice for SCSI controllers and reports an error for all other controllers. Move the error inside qemuDomainAttachControllerDevice and delete this wrapper.
-
By John Ferlan
A closer review of the code shows that for the transition from paused to running, which was supposed to emit VIR_DOMAIN_EVENT_RESUMED, no event would be generated; instead, the event was generated when going from running to running. Following the 'was_running' boolean shows it is set when the domain obj is active and its state is VIR_DOMAIN_RUNNING. So rather than using was_running to generate the RESUMED event, use !was_running.
-
- 09 Dec 2015, 16 commits
By Peter Krempa
Change some of the control structures and switch to using the new vcpu structure.
-
By Peter Krempa
Instead of directly accessing the array, add a helper to do this.
-
By Peter Krempa
Add qemuDomainHasVCpuPids to do the checking and replace the in-place checks with it. We no longer need to check whether the thread contains fake data (vcpupids[0] == vm->pid), since that was removed in b07f3d82 and 65686e5a.
-
By Peter Krempa
Checking for vCPU threads makes sense in the counterparts that set the vCPU bandwidth/quota, not in the emulator one; the emulator tunables are set all the time anyway. Drop the extra check and remove the now-unneeded vm argument.
-
By Peter Krempa
Refactor the code flow so that 'exit_monitor:' can be removed. This patch moves the auditing functions into places where it's certain that hotunplug was or was not successful and reports errors from qemuMonitorGetCPUInfo properly.
-
By Peter Krempa
Refactor the code flow so that 'exit_monitor:' can be removed. This patch also moves the auditing and setting of the new vCPU count right to the place where the hotplug happens, since it's possible that the hotplug succeeds in adding a cpu while other steps fail. Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than ignored; the function returns 0 if it "successfully" detected 0 threads.
-
By Peter Krempa
qemuDomainHotplugVcpus/qemuDomainHotunplugVcpus are complex enough even when adding just one CPU. Additionally, it will be desirable to reuse those functions later for hotplug of specific vCPUs. Move the loops for adding vCPUs into qemuDomainSetVcpusFlags so that the helpers can be made simpler and more straightforward.
-
By Peter Krempa
Let the function report errors internally and change it to return standard return codes.
-
By Peter Krempa
The cpu hotplug helper functions used negative error handling in parts of the code, but some code that was added later didn't properly set the error codes in all cases. This would cause improper error messages in cases where we couldn't modify the numa cpu mask, and in a few other cases. Fix the logic by converting it to the regularly used pattern.
-
By Peter Krempa
There's very little common code between the two operations. Split the functions so that the internals are easier to understand and refactor later.
-
By Peter Krempa
With very unfortunate timing, the agent might vanish before we make the second call, since the locks were down in the meantime. Re-check that the agent is available before attempting the call again.
-
By Peter Krempa
Separate the code so that qemuDomainSetVcpusFlags contains only code relevant to hardware hotplug/unplug.
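The split mirrors the two caller-visible paths accepted by the public virDomainSetVcpusFlags entry point: hardware hotplug versus guest-agent onlining. A sketch using the public flags (the domain pointer and count are assumptions):

#include <libvirt/libvirt.h>

/* Hardware hotplug: qemu actually adds/removes vCPU threads. */
static int
set_vcpus_hotplug(virDomainPtr dom, unsigned int n)
{
    return virDomainSetVcpusFlags(dom, n, VIR_DOMAIN_AFFECT_LIVE);
}

/* Guest-agent path: vCPUs are merely onlined/offlined inside the guest. */
static int
set_vcpus_via_agent(virDomainPtr dom, unsigned int n)
{
    return virDomainSetVcpusFlags(dom, n, VIR_DOMAIN_VCPU_GUEST);
}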
-
By Peter Krempa
-
By Peter Krempa
-
By Peter Krempa
Finalize the refactor by adding the 'virDomainDefGetVCpusMax' getter and reusing it across libvirt.
-
By Peter Krempa
The code can be unified into the new accessor rather than being scattered across the drivers.
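An illustrative sketch of the accessor pattern the series converges on; the struct and field names are stand-ins for the internal definition, not the real libvirt types.

#include <stddef.h>

struct domainDef {
    size_t maxvcpus;   /* maximum number of vCPUs the domain may have */
    size_t nvcpus;     /* currently enabled vCPUs */
};

/* Callers use the getter instead of reaching into the struct, so the
 * storage can change in one place without touching every driver. */
static unsigned int
domainDefGetVcpusMax(const struct domainDef *def)
{
    return def->maxvcpus;
}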
-