- 15 April 2015, 18 commits
-
-
Committed by Peter Krempa
-
Committed by Peter Krempa
Fix line spacing between functions, ensure that the function return type is on a separate line, and reflow the arguments of VIR_DEBUG statements.
-
Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1200149
Even though we have a mutex mechanism so that two clients don't spawn two daemons, it's not strong enough. It can happen that while one client is spawning the daemon, the other one fails to connect. Basically two possible errors can happen:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': Connection refused
or:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': No such file or directory
The problem in both cases is that the daemon is still starting up while we are trying to connect (and fail). We should postpone the connecting phase until the daemon is started (by the other thread that is spawning it). In order to do that, create a file lock 'libvirt-lock' in the directory where the session daemon would create its socket. So even when called from multiple processes, spawning a daemon will serialize on the file lock, and only the first one to come will spawn the daemon.
Tested-by: Richard W. M. Jones <rjones@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
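For readers unfamiliar with the trick, the serialization described above amounts to taking an advisory lock on a file in the socket directory before deciding whether to spawn the daemon. Below is a minimal, hypothetical C sketch of that pattern using POSIX flock(); the lock path and the spawning step are placeholders, not libvirt's actual implementation.

```c
/* Hypothetical sketch: serialize session-daemon spawning on a file lock. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Take an exclusive advisory lock on <rundir>/libvirt-lock, then spawn the
 * daemon only if its socket is still missing. Concurrent callers block on
 * flock(), so exactly one of them ends up doing the spawning. */
static int spawn_daemon_serialized(const char *rundir)
{
    char lockpath[4096];
    int fd;

    snprintf(lockpath, sizeof(lockpath), "%s/libvirt-lock", rundir);

    if ((fd = open(lockpath, O_RDWR | O_CREAT, 0600)) < 0)
        return -1;

    if (flock(fd, LOCK_EX) < 0) {   /* blocks until the other spawner is done */
        close(fd);
        return -1;
    }

    /* ... if the daemon socket still does not exist, fork/exec it here ... */

    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}

int main(void)
{
    return spawn_daemon_serialized("/tmp") < 0 ? 1 : 0;
}
```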
-
Committed by Martin Kletzander
Two non-static functions in virjson.c were missing their export info in libvirt_private.syms, so they couldn't be used anywhere in the code (and that's about to change).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
It already had a virMutex inside, so this is just a cleanup.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Martin Kletzander
Luckily we are allocating these structs as zeroed memory and PTHREAD_MUTEX_INITIALIZER is "{ 0 }", so nothing bad happened, but the struct should still be created as a lockable object.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
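As an aside, the hazard described above is easy to reproduce outside libvirt. The snippet below is an illustrative sketch (not libvirt code): zero-filled memory only works as a mutex where PTHREAD_MUTEX_INITIALIZER happens to be all zeros, so the portable fix is to initialize the lock explicitly, or, in libvirt terms, create the struct as a lockable object.

```c
#include <pthread.h>
#include <stdlib.h>

struct counter {
    pthread_mutex_t lock;
    unsigned long value;
};

int main(void)
{
    /* calloc() gives all-zero bytes; on platforms where
     * PTHREAD_MUTEX_INITIALIZER happens to be all zeros this "works",
     * but POSIX does not guarantee it. */
    struct counter *c = calloc(1, sizeof(*c));
    if (!c)
        return 1;

    /* The portable, correct approach: initialize the mutex explicitly. */
    if (pthread_mutex_init(&c->lock, NULL) != 0)
        return 1;

    pthread_mutex_lock(&c->lock);
    c->value++;
    pthread_mutex_unlock(&c->lock);

    pthread_mutex_destroy(&c->lock);
    free(c);
    return 0;
}
```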
-
Committed by John Ferlan
Check proposed pool definitions to ensure they aren't trying to use the same devices as already-defined pools; disallow such duplicates.
-
Committed by John Ferlan
Check the proposed pool source host XML definition against existing gluster pools to ensure the incoming definition doesn't use the same source dir and source host XML definition as an existing pool.
-
Committed by John Ferlan
Check the proposed pool source host XML definition against existing sheepdog pools to ensure the incoming definition doesn't use the same source host XML definition as an existing pool.
-
Committed by John Ferlan
So that we can cover all the cases.
-
Committed by John Ferlan
Rather than having duplicate code doing the same check, have the netfs matching code use the new virStoragePoolSourceMatchSingleHost.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by John Ferlan
In virStoragePoolSourceMatchSingleHost, compare the port numbers before checking the 'name' field.
-
Committed by John Ferlan
Split out the nhost == 1 and hosts[0].name logic into a separate routine.
Signed-off-by: John Ferlan <jferlan@redhat.com>
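To make the shape of this small series concrete, here is a hedged sketch of a single-host source matcher: exactly one host on each side, port compared first (it's cheap), then the host name. The struct and function names are invented for illustration and do not reflect libvirt's actual virStoragePoolSource definitions.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-ins for a pool source with a list of hosts. */
struct example_host { const char *name; int port; };
struct example_source { size_t nhost; struct example_host hosts[4]; };

/* True only when both sources have exactly one host and that host matches
 * on port and name. */
static bool
example_source_match_single_host(const struct example_source *a,
                                 const struct example_source *b)
{
    if (a->nhost != 1 || b->nhost != 1)
        return false;
    if (a->hosts[0].port != b->hosts[0].port)
        return false;
    return strcmp(a->hosts[0].name, b->hosts[0].name) == 0;
}

int main(void)
{
    struct example_source defined  = { 1, { { "gluster.example.com", 24007 } } };
    struct example_source incoming = { 1, { { "gluster.example.com", 24007 } } };

    printf("duplicate source: %s\n",
           example_source_match_single_host(&defined, &incoming) ? "yes" : "no");
    return 0;
}
```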
-
Committed by John Ferlan
Create a separate iSCSI source matching subroutine. This makes the calling code a bit cleaner and sets up for future patches which need to do better source hosts[0].name processing/checking. As part of the effort the logic will be inverted from a multi-level if statement to a series of single-level checks for better readability and further separation.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by Jiri Denemark
When acquiring a resource via sanlock fails, we would report it as VIR_ERR_INTERNAL_ERROR, which is not very friendly to applications using libvirt. Moreover, the lockd driver would report the same failure as VIR_ERR_RESOURCE_BUSY, which looks better. Unfortunately, in the sanlock driver we don't really know whether acquiring the resource failed because it was already locked or for some other reason. But the end result is the same, and I think using the VIR_ERR_RESOURCE_BUSY reason for all acquire failures is still better than what we have now.
https://bugzilla.redhat.com/show_bug.cgi?id=1165119
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
-
Committed by Eric Blake
Commit 49ed6cff is broken on mingw and other non-Linux platforms:
  CCLD     libvirt.la
  Cannot export virNetDevSysfsFile: symbol not defined
  collect2: error: ld returned 1 exit status
* src/util/virnetdev.c: Provide virNetDevSysfsFile fallback.
Signed-off-by: Eric Blake <eblake@redhat.com>
-
Committed by Eric Blake
Found by ./autobuild.sh during a mingw cross-compile: commit 8a96e87e was not innocuous - glibc happens to leak the definition of time() through other headers, so that even without <sys/select.h>, virrandom.c compiled just fine. But on mingw we were not so lucky; <sys/select.h> was important for its side effect of dragging in <time.h>, and we now have nothing providing the declaration of time():
  ../../src/util/virrandom.c: In function 'virRandomOnceInit':
  ../../src/util/virrandom.c:65:5: error: implicit declaration of function 'time' [-Werror=implicit-function-declaration]
       unsigned int seed = time(NULL) ^ getpid();
       ^
  ../../src/util/virrandom.c:65:5: error: nested extern declaration of 'time' [-Werror=nested-externs]
Signed-off-by: Eric Blake <eblake@redhat.com>
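The underlying rule is plain header hygiene: include what you use instead of relying on another header to drag in the declaration. A minimal sketch of the safe pattern, mirroring the seed computation quoted above (POSIX-flavoured, for illustration only):

```c
#include <time.h>     /* declares time(); do not rely on transitive includes */
#include <unistd.h>   /* declares getpid() on POSIX systems */
#include <stdio.h>

int main(void)
{
    /* Same idea as the seed in virrandom.c: mix the current time and PID. */
    unsigned int seed = (unsigned int)time(NULL) ^ (unsigned int)getpid();
    printf("seed = %u\n", seed);
    return 0;
}
```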
-
Committed by Michal Privoznik
This is yet another test checking the basic functionality of our NIC state handling code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 14 April 2015, 22 commits
-
-
Committed by John Ferlan
A forthcoming syntax-check rule will disallow usage of 'int index', so change it for nwfilter.
-
Committed by John Ferlan
A forthcoming syntax-check rule will disallow usage of 'int index', so change it for snapshot.
-
Committed by John Ferlan
The impending syntax-check rule will disallow 'int index', so change it here.
-
Committed by John Ferlan
Change the prototype to not use "int *index", since we'll soon be disallowing 'index' as a name. Curiously, the original commit (a4504ac1) used 'int idx' inside the function, so prototype and implementation didn't match. Now they do.
-
Committed by Ján Tomko
Reported by John Ferlan.
-
Committed by Pavel Hrdina
../../src/xen/block_stats.c:82: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
-
Committed by Martin Kletzander
It is there even with -nodefaults and -no-user-config, so take that into account so that we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Committed by Huanle Han
The variable 'last_processed_hostdev_vf' indicates the index of the last successfully configured VF. When resetting the VF net config because of a failure, hostdevs[last_processed_hostdev_vf] should also be reset.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
-
Committed by Huanle Han
1. 'last_good_net' indicates the index of the last successfully configured net, so def->nets[last_good_net] should also be cleaned up if an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback' (the second 'goto err_exit' in the loop), we should also do the 'virNetDevVPortProfileDisassociate' cleanup matching the earlier 'virNetDevVPortProfileAssociate' (the first code block in the loop). So the net should be considered successfully configured once the first code block in the loop finishes.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
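The two fixes above follow a common partial-rollback idiom: record the index of the last item whose first, committing step succeeded, and on failure undo everything up to and including that index. A small self-contained sketch of the pattern (illustrative names, not the libvirt code):

```c
#include <stdbool.h>
#include <stdio.h>

#define NITEMS 4

static bool associate(int i)    { printf("associate %d\n", i); return i != 2; /* fail on #2 */ }
static bool register_cb(int i)  { printf("register callback %d\n", i); return true; }
static void disassociate(int i) { printf("disassociate %d\n", i); }

int main(void)
{
    int last_good = -1;   /* index of the last item whose associate() succeeded */
    int i;

    for (i = 0; i < NITEMS; i++) {
        if (!associate(i))
            goto err_exit;

        /* Once associate() succeeded, item i needs cleanup on any later
         * failure, so record it *before* the next fallible step. */
        last_good = i;

        if (!register_cb(i))
            goto err_exit;
    }
    return 0;

 err_exit:
    /* Note "<=": the item at last_good was configured and must be undone too. */
    for (i = 0; i <= last_good; i++)
        disassociate(i);
    return 1;
}
```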
-
Committed by Serge Hallyn
The original bug report was at https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1393842
Also skip abstract UNIX sockets.
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
Committed by Shanzhi Yu
After setting memory parameters for a running domain, the change needs to be saved to the live XML, otherwise it will disappear after libvirtd is restarted.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211548
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
Committed by John Ferlan
Apparently for Xen-devel, 'index' is a global and causes a build failure, so just use the shortened 'idx' instead to avoid the conflict.
Signed-off-by: John Ferlan <jferlan@redhat.com>
-
Committed by Peter Krempa
Change a few variable names and refactor the code flow. As an additional bonus, the function now fails if the event state is not as expected.
-
Committed by Peter Krempa
QEMU does not abandon the mirror. The job carries on in the synchronised phase and it might be either pivoted again or cancelled. The commit hints that the described behavior was happening in a downstream version. If the command returns false there are two possible options:
1) qemu did not reach the point where it would ask the block job to pivot
2) pivoting failed in the actual qemu coroutine
If either of those happens we return failure and reset the condition that waits for the block job to complete. This makes the API fail, but in the case where qemu would actually abandon the mirror, the fact is notified via the event and handled asynchronously.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1202704
-
Committed by Peter Krempa
Since it now handles only block pull code paths we can refactor it and remove tons of cruft.
-
Committed by Peter Krempa
Sacrifice a few lines of code in favor of the code being more readable.
-
Committed by Peter Krempa
qemuDomainBlockJobImpl became an unmaintainable mess over the years of adding new stuff to it. This patch starts splitting individual functions out of it until it can be killed entirely. In bulk this will add lines of code rather than delete them, but that is traded for maintainability.
-
Committed by Peter Krempa
My intention is to split qemuMonitorJSONBlockJob() into simpler separate functions for every block job type. Since the error handling code is the same for all block jobs, this patch extracts the code into a separate function that will later be reused in more places. With the new helper qemuMonitorJSONErrorIsClass we can save a few function calls as we can extract the error object once.
-
Committed by Peter Krempa
Split the check of the actual error class string out into a separate helper, as it will be useful later, and refactor qemuMonitorJSONHasError to return a bool and drop a few useless checks. Basically, the virJSONValueObjectHasKey calls are useless here since the following virJSONValueObjectGet call checks the return value again (and can't fail at that point). By removing the first check we save a function call.
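For readers who don't have the monitor code in front of them, the shape of the refactor is roughly: pull the error object out of the reply once, then ask a bool helper whether its class string matches. The sketch below uses invented toy types rather than libvirt's virJSONValue API, purely to illustrate the structure.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct toy_qmp_error {
    const char *klass;   /* e.g. "DeviceNotActive", "GenericError" */
    const char *desc;
};

struct toy_qmp_reply {
    const struct toy_qmp_error *error;   /* NULL when the command succeeded */
};

/* Extract the error object once... */
static const struct toy_qmp_error *
reply_get_error(const struct toy_qmp_reply *reply)
{
    return reply ? reply->error : NULL;
}

/* ...then a bool helper that only checks the class string. */
static bool
error_is_class(const struct toy_qmp_error *err, const char *klass)
{
    return err && err->klass && strcmp(err->klass, klass) == 0;
}

int main(void)
{
    struct toy_qmp_error e = { "DeviceNotActive", "No active block job" };
    struct toy_qmp_reply reply = { &e };
    const struct toy_qmp_error *err = reply_get_error(&reply);

    if (error_is_class(err, "DeviceNotActive"))
        printf("block job already gone\n");
    else if (err)
        printf("unexpected error: %s\n", err->desc);
    return 0;
}
```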
-
Committed by Peter Krempa
Previously we checked that the vCPU we are trying to set is in the range of the number of threads presented by qemu. The problem is that if the VM is offline the count is 0. Since the condition subtracted 1 from the count, the number would overflow and the check would never trigger. Change the condition to more sensible checks with specific error messages.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
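The arithmetic trap is easy to demonstrate in isolation. The following standalone example (illustrative only, not the actual libvirt condition) shows how an unsigned `count - 1` wraps around when the count is 0, letting any vCPU index pass the broken range check, and what a safer condition looks like.

```c
#include <stdio.h>

int main(void)
{
    unsigned int nthreads = 0;   /* offline VM: qemu reports no vCPU threads */
    unsigned int vcpu = 1;       /* vCPU index the caller asked to operate on */

    /* Broken check: with nthreads == 0, "nthreads - 1" wraps to UINT_MAX,
     * so the comparison is never true and the bogus index slips through. */
    if (vcpu > nthreads - 1)
        printf("broken check: rejected\n");
    else
        printf("broken check: accepted (bug)\n");

    /* Safer check: no subtraction, explicit handling of the zero case. */
    if (nthreads == 0 || vcpu >= nthreads)
        printf("fixed check: rejected\n");
    else
        printf("fixed check: accepted\n");
    return 0;
}
```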
-
Committed by Peter Krempa
Refactor the code to parse vcpupin in a way similar to how the iothreadpin code is now structured. This allows getting rid of some very strange conditions and error messages. Additionally, since an existing bug (https://bugzilla.redhat.com/show_bug.cgi?id=1208434) allows adding vcpupin definitions for vCPUs that don't exist, this patch makes the parser ignore all vcpupins that don't have a matching vCPU in the definition rather than just offlined ones.
-
Committed by Peter Krempa
Defining a domain with the following config:
  <domain ...>
    ...
    <iothreads>1</iothreads>
    <cputune>
      <iothreadpin cpuset='1'/>
will result in the following config being formatted back:
  <domain type='kvm'>
    ...
    <iothreads>1</iothreads>
    <cputune>
      <iothreadpin iothread='0' cpuset='1'/>
After a restart the VM would vanish. Since our schema requires the @iothread field to be present in <iothreadpin>, make it required by the code too.
-