- 11 May 2017, 15 commits
-
-
Committed by Jiri Denemark
/etc/libvirt/nwfilter/*.xml files are installed with no UUID, which means libvirtd will automatically alter all of them once it starts. Thus RPM verification will always fail on them. Let's use a trick similar to the default network XML and store nwfilter XMLs in /usr/share. They will be copied into /etc in %post. Additionally the /etc files are marked as %ghost so that they are uninstalled if the RPM package is removed. Note that the %post script overwrites existing files with new ones on upgrade, which is what has always been happening. https://bugzilla.redhat.com/show_bug.cgi?id=1431581 https://bugzilla.redhat.com/show_bug.cgi?id=1378774 Signed-off-by: Jiri Denemark <jdenemar@redhat.com> (cherry picked from commit 1d3963db)
-
Committed by Pavel Hrdina
The port is stored in graphics configuration and it will also get released in qemuProcessStop in case of error. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1397440 Signed-off-by: Pavel Hrdina <phrdina@redhat.com> (cherry picked from commit c23b7b81)
-
Committed by Nikolay Shirokovskiy
Rather than waiting until we've free'd up all the resources, cause the 'workerPool' thread pool to flush as soon as possible during stateCleanup. Otherwise, it's possible something waiting to run will SEGV, such as is the case during race conditions of a simultaneously exiting libvirtd and qemu process. Resolves the following crash [1].

[1] crash backtrace (shortened a bit):
0 0x00007ffff7282f2b in virClassIsDerivedFrom (klass=0xdeadbeef, parent=0x55555581d650) at util/virobject.c:169
1 0x00007ffff72835fd in virObjectIsClass (anyobj=0x7fffd024f580, klass=0x55555581d650) at util/virobject.c:365
2 0x00007ffff7283498 in virObjectLock (anyobj=0x7fffd024f580) at util/virobject.c:317
3 0x00007ffff722f0a3 in virCloseCallbacksUnset (closeCallbacks=0x7fffd024f580, vm=0x7fffd0194db0, cb=0x7fffdf1af765 <qemuProcessAutoDestroy>) at util/virclosecallbacks.c:164
4 0x00007fffdf1afa7b in qemuProcessAutoDestroyRemove (driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_process.c:6365
5 0x00007fffdf1adff1 in qemuProcessStop (driver=0x7fffd00f3a60, vm=0x7fffd0194db0, reason=VIR_DOMAIN_SHUTOFF_CRASHED, asyncJob=QEMU_ASYNC_JOB_NONE, flags=0) at qemu/qemu_process.c:5877
6 0x00007fffdf1f711c in processMonitorEOFEvent (driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_driver.c:4545
7 0x00007fffdf1f7313 in qemuProcessEventHandler (data=0x555555832710, opaque=0x7fffd00f3a60) at qemu/qemu_driver.c:4589
8 0x00007ffff72a84c4 in virThreadPoolWorker (opaque=0x555555805da0) at util/virthreadpool.c:167

Thread 1 (Thread 0x7ffff7fb1880 (LWP 494472)):
1 0x00007ffff72a7898 in virCondWait (c=0x7fffd01c21f8, m=0x7fffd01c21a0) at util/virthread.c:154
2 0x00007ffff72a8a22 in virThreadPoolFree (pool=0x7fffd01c2160) at util/virthreadpool.c:290
3 0x00007fffdf1edd44 in qemuStateCleanup () at qemu/qemu_driver.c:1102
4 0x00007ffff736570a in virStateCleanup () at libvirt.c:807
5 0x000055555556f991 in main (argc=1, argv=0x7fffffffe458) at libvirtd.c:1660

(cherry picked from commit 97338eaa)
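The ordering rule behind this fix is general enough to show in isolation. The sketch below uses plain pthreads and invented names rather than libvirt's virThreadPool API: stop/flush the worker first, and only then free the data it dereferences. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct shared_state {
        char *payload;          /* stands in for the driver state workers touch */
    };

    static void *worker(void *opaque)
    {
        struct shared_state *st = opaque;
        printf("worker still sees: %s\n", st->payload);
        return NULL;
    }

    int main(void)
    {
        struct shared_state *st = malloc(sizeof(*st));
        st->payload = strdup("driver state");

        pthread_t tid;
        pthread_create(&tid, NULL, worker, st);

        /* Flush/stop the worker first (virThreadPoolFree plays this role in
         * qemuStateCleanup)... */
        pthread_join(tid, NULL);

        /* ...and only then free what the worker was using. Freeing this
         * before the join is the shape of the race that crashed libvirtd. */
        free(st->payload);
        free(st);
        return 0;
    }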
-
Committed by Nikolay Shirokovskiy
Do not dereference the 'dmn' until after the virStateCleanup is completed. During initialization, virStateInitialize requires/uses the "dmn" as the argument to/for the daemonInhibitCallback functions. Thus, cleanup cannot dereference 'dmn' until after calling virStateCleanup, which calls the daemonInhibitCallback using 'dmn'; otherwise, the following crash occurs:

backtrace (shortened a bit)
1 0x00007fd3a791b2e6 in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
2 0x00007fd3a791bcb0 in virThreadPoolFree (pool=0x7fd38024ee00) at util/virthreadpool.c:266
3 0x00007fd38edaa00e in qemuStateCleanup () at qemu/qemu_driver.c:1116
4 0x00007fd3a79abfeb in virStateCleanup () at libvirt.c:808
5 0x00007fd3a85f2c9e in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1660

Thread 1 (Thread 0x7fd38722d700 (LWP 32256)):
0 0x00007fd3a7900910 in virClassIsDerivedFrom (klass=0xdfd36058d4853, parent=0x7fd3a8f394d0) at util/virobject.c:169
1 0x00007fd3a7900c4e in virObjectIsClass (anyobj=anyobj@entry=0x7fd3a8f2f850, klass=<optimized out>) at util/virobject.c:365
2 0x00007fd3a7900c74 in virObjectLock (anyobj=0x7fd3a8f2f850) at util/virobject.c:317
3 0x00007fd3a7a24d5d in virNetDaemonRemoveShutdownInhibition (dmn=0x7fd3a8f2f850) at rpc/virnetdaemon.c:547
4 0x00007fd38ed722cf in qemuProcessStop (driver=driver@entry=0x7fd380103810, vm=vm@entry=0x7fd38025b6d0, reason=reason@entry=VIR_DOMAIN_SHUTOFF_SHUTDOWN, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE, flags=flags@entry=0) at qemu/qemu_process.c:5786
5 0x00007fd38edd9428 in processMonitorEOFEvent (vm=0x7fd38025b6d0, driver=0x7fd380103810) at qemu/qemu_driver.c:4588
6 qemuProcessEventHandler (data=<optimized out>, opaque=0x7fd380103810) at qemu/qemu_driver.c:4632
7 0x00007fd3a791bb55 in virThreadPoolWorker (opaque=opaque@entry=0x7fd3a8f1e4c0) at util/virthreadpool.c:145

(cherry picked from commit 85c3a182)
-
Committed by Ján Tomko
For domains with no USB address cache, we should not attempt to generate a USB address. https://bugzilla.redhat.com/show_bug.cgi?id=1387665 (cherry picked from commit 00c5386c)
-
Committed by Michal Privoznik
When trying to migrate a huge page enabled guest, I've noticed the following crash. Apparently, if no specific hugepages are requested:

<memoryBacking>
  <hugepages/>
</memoryBacking>

and there are no hugepages configured on the destination, we try to dereference a NULL pointer.

Program received signal SIGSEGV, Segmentation fault.
0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
1447        if (virAsprintf(&ret, "%s/libvirt/qemu", hugepage->mnt_dir) < 0)
(gdb) bt
#0  0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
#1  0x00007fcc907fb2f5 in qemuGetDefaultHugepath (hugetlbfs=0x0, nhugetlbfs=0) at qemu/qemu_conf.c:1466
#2  0x00007fcc907b4afa in qemuBuildMemoryBackendStr (size=4194304, pagesize=0, guestNode=0, userNodeset=0x0, autoNodeset=0x0, def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, backendType=0x7fcc95087228, backendProps=0x7fcc95087218, force=false) at qemu/qemu_command.c:3297
#3  0x00007fcc907b4f91 in qemuBuildMemoryCellBackendStr (def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, cell=0, auto_nodeset=0x0, backendStr=0x7fcc70020360) at qemu/qemu_command.c:3413
#4  0x00007fcc907c0406 in qemuBuildNumaArgStr (cfg=0x7fcc5c011800, def=0x7fcc70019070, cmd=0x7fcc700040c0, qemuCaps=0x7fcc70004000, auto_nodeset=0x0) at qemu/qemu_command.c:7470
#5  0x00007fcc907c5fdf in qemuBuildCommandLine (driver=0x7fcc5c07b8a0, logManager=0x7fcc70003c00, def=0x7fcc70019070, monitor_chr=0x7fcc70004bb0, monitor_json=true, qemuCaps=0x7fcc70004000, migrateURI=0x7fcc700199c0 "defer", snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, standalone=false, enableFips=false, nodeset=0x0, nnicindexes=0x7fcc95087498, nicindexes=0x7fcc950874a0, domainLibDir=0x7fcc700047c0 "/var/lib/libvirt/qemu/domain-1-fedora") at qemu/qemu_command.c:9547

Signed-off-by: Michal Privoznik <mprivozn@redhat.com> (cherry picked from commit 647db05e)
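The defensive pattern is simple enough to show standalone. This is a hypothetical sketch, not the real qemuGetHugepagePath: check for the missing hugepage mount before formatting a path from it.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    struct hugetlbfs_mount {
        const char *mnt_dir;            /* e.g. "/dev/hugepages" */
    };

    /* Returns a freshly allocated path or NULL on error. */
    static char *hugepage_path(const struct hugetlbfs_mount *mount)
    {
        char *ret = NULL;

        /* This is the guard the crash was missing: 'mount' is NULL when the
         * destination host has no hugetlbfs mount configured at all. */
        if (!mount) {
            fprintf(stderr, "hugetlbfs filesystem is not mounted or configured\n");
            return NULL;
        }

        if (asprintf(&ret, "%s/libvirt/qemu", mount->mnt_dir) < 0)
            return NULL;
        return ret;
    }

    int main(void)
    {
        char *missing = hugepage_path(NULL);           /* now fails cleanly */
        struct hugetlbfs_mount m = { "/dev/hugepages" };
        char *ok = hugepage_path(&m);

        printf("missing mount -> %s\n", missing ? missing : "(error)");
        printf("configured    -> %s\n", ok ? ok : "(error)");
        free(missing);
        free(ok);
        return 0;
    }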
-
Committed by Maxim Nestratov
There is a possibility that the qemu driver frees its closeCallbacks object by unreferencing it (as it holds the only reference), while in fact not all users of CloseCallbacks have called their virCloseCallbacksUnset. Backtrace is the following:

Thread #1:
0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
1 in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
2 in virThreadPoolFree (pool=0x7f0810110b50) at util/virthreadpool.c:266
3 in qemuStateCleanup () at qemu/qemu_driver.c:1116
4 in virStateCleanup () at libvirt.c:808
5 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1660

Thread #2:
0 in virClassIsDerivedFrom (klass=0xdeadbeef, parent=0x7f0837c694d0) at util/virobject.c:169
1 in virObjectIsClass (anyobj=anyobj@entry=0x7f08101d4760, klass=<optimized out>) at util/virobject.c:365
2 in virObjectLock (anyobj=0x7f08101d4760) at util/virobject.c:317
3 in virCloseCallbacksUnset (closeCallbacks=0x7f08101d4760, vm=vm@entry=0x7f08101d47b0, cb=cb@entry=0x7f081d078fc0 <qemuProcessAutoDestroy>) at util/virclosecallbacks.c:163
4 in qemuProcessAutoDestroyRemove (driver=driver@entry=0x7f081018be50, vm=vm@entry=0x7f08101d47b0) at qemu/qemu_process.c:6368
5 in qemuProcessStop (driver=driver@entry=0x7f081018be50, vm=vm@entry=0x7f08101d47b0, reason=reason@entry=VIR_DOMAIN_SHUTOFF_SHUTDOWN, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE, flags=flags@entry=0) at qemu/qemu_process.c:5854
6 in processMonitorEOFEvent (vm=0x7f08101d47b0, driver=0x7f081018be50) at qemu/qemu_driver.c:4585
7 qemuProcessEventHandler (data=<optimized out>, opaque=0x7f081018be50) at qemu/qemu_driver.c:4629
8 in virThreadPoolWorker (opaque=opaque@entry=0x7f0837c4f820) at util/virthreadpool.c:145
9 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
10 in start_thread () from /lib64/libpthread.so.0

Let's reference the CloseCallbacks object in virCloseCallbacksSet and unreference it in virCloseCallbacksUnset. Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com> (cherry picked from commit f47b9114)
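The pairing described in the last sentence is a standard refcount discipline. A minimal self-contained sketch with invented names (not the virCloseCallbacks API) shows why taking a reference in Set and dropping it in Unset keeps a late Unset from touching freed memory:

    #include <stdio.h>
    #include <stdlib.h>

    struct callbacks {
        int refs;
        int nentries;
    };

    static void callbacks_ref(struct callbacks *cb)
    {
        cb->refs++;
    }

    static void callbacks_unref(struct callbacks *cb)
    {
        if (--cb->refs == 0) {
            printf("freeing callbacks object\n");
            free(cb);
        }
    }

    static void callbacks_set(struct callbacks *cb)
    {
        callbacks_ref(cb);          /* the registration itself holds a reference */
        cb->nentries++;
    }

    static void callbacks_unset(struct callbacks *cb)
    {
        cb->nentries--;
        callbacks_unref(cb);        /* matching drop */
    }

    int main(void)
    {
        struct callbacks *cb = calloc(1, sizeof(*cb));
        cb->refs = 1;               /* the driver's own reference */

        callbacks_set(cb);          /* e.g. an autodestroy registration */
        callbacks_unref(cb);        /* driver shuts down, drops its reference */

        /* Without the reference taken in callbacks_set() this call would
         * touch freed memory; with it, the object is still alive here. */
        callbacks_unset(cb);
        return 0;
    }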
-
Committed by Peter Krempa
If a transient storage pool is deemed inactive after libvirtd restart it would not be deleted from the list. Reuse virStoragePoolUpdateInactive along with a refactor necessary to properly update the state. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242801 (cherry picked from commit f3a8e80c)
-
Committed by Peter Krempa
After a pool is made inactive the definition objects need to be updated (if a new definition is prepared) and transient pools need to be completely removed. Split out the code doing these steps into a separate function for later reuse. (cherry picked from commit aced6b23)
-
Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1405269 If a secret was not provided for what was determined to be a LUKS encrypted disk (during virStorageFileGetMetadata processing when called from qemuDomainDetermineDiskChain as a result of hotplug attach qemuDomainAttachDeviceDiskLive), then do not attempt to look it up (avoiding a libvirtd crash) and do not alter the format to "luks" when adding the disk; otherwise, the device_add would fail with a message such as: "unable to execute QEMU command 'device_add': Property 'scsi-hd.drive' can't find value 'drive-scsi0-0-0-0'" because of assumptions that when the format=luks that libvirt would have provided the secret to decrypt the volume. Access to unlock the volume will thus be left to the application. (cherry picked from commit 7f7d9904)
-
Committed by Ján Tomko
Since commit fcbbb289 we steal the pointer to the storage pool source name if there was no pool name specified. Properly duplicate the string to avoid freeing it twice. https://bugzilla.redhat.com/show_bug.cgi?id=1436400 (cherry picked from commit e9f96909)
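The ownership rule behind the fix is easy to demonstrate in isolation. A hypothetical sketch (names invented, not the actual storage-pool structs): duplicate the borrowed string so that each owner can free its own copy.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct pool_source { char *name; };
    struct pool_def    { char *name; struct pool_source source; };

    static int fill_pool_name(struct pool_def *def)
    {
        if (def->name)
            return 0;
        /* Buggy version: def->name = def->source.name;  -- both fields then
         * own the same allocation and pool_def_free() frees it twice. */
        def->name = strdup(def->source.name);
        return def->name ? 0 : -1;
    }

    static void pool_def_free(struct pool_def *def)
    {
        free(def->name);
        free(def->source.name);
    }

    int main(void)
    {
        struct pool_def def = { NULL, { strdup("iscsi-target-1") } };
        fill_pool_name(&def);
        printf("pool name: %s\n", def.name);
        pool_def_free(&def);        /* no double free with the strdup above */
        return 0;
    }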
-
Committed by Ján Tomko
Pool types that have the VIR_STORAGE_POOL_SOURCE_NAME flag set allow omitting the <name> element and instead fill out the pool name from the <source><name> element. Relax the schema to make <name> optional for these pools. Expressing that at least one of these is required is out of scope of the schema. (cherry picked from commit 8ef12b96)
-
Committed by Andrea Bolognani
Commit 839a0608 tied the lifecycle of virtlogd more closely to that of libvirtd. Unfortunately, while starting virtlogd when libvirtd is started is definitely a good idea, restarting virtlogd or shutting it down at any time outside of system poweroff is not. Revert part of that commit by removing the PartOf= lines, meaning that only startup requests will be propagated from libvirtd to virtlogd. Resolves: https://bugzilla.redhat.com/1372576 (cherry picked from commit f496ce1d)
-
Committed by Andrea Bolognani
We already guarantee that virtlogd.socket is enabled/disabled along with libvirtd.service, but if libvirtd.service has just been installed and is started before rebooting, then virtlogd.socket will not be running and guest startup will fail. Add Requires=virtlogd.socket to libvirtd.service to make sure virtlogd.socket is always started along with libvirtd.service, and add Before=libvirtd.service to both virtlogd.socket and virtlogd.service so that virtlogd never disappears before libvirtd has exited. Also add PartOf=libvirtd.service to both virtlogd.socket and virtlogd.service, so that virtlogd can be shut down when not needed. Resolves: https://bugzilla.redhat.com/1372576 (cherry picked from commit 839a0608)
-
Committed by Cole Robinson
New maint releases use version numbers in just the A.B.C format, not the old A.B.C.D format. Adjust the check that dynamically changes the Source URL for maint releases. Signed-off-by: Cole Robinson <crobinso@redhat.com> Acked-by: Andrea Bolognani <abologna@redhat.com> (cherry picked from commit 1d07a5bf)
-
- 28 November 2016, 1 commit
-
-
Committed by Peter Krempa
Thanks to the complex capability caching code, virQEMUCapsProbeQMP was never called when we were starting a new qemu VM. On the other hand, when we are reconnecting to the qemu process we reload the capability list from the status XML file. This means that the flag preventing the function from being called was not set and thus we partially reprobed some of the capabilities. The recent addition of CPU hotplug clears QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS if the machine does not support it. The partial re-probe on reconnect results in attempting to call the unsupported command and then killing the VM. Remove the partial reprobe and depend on the stored capabilities. If it becomes necessary to reprobe the capabilities in the future, we should do a full reprobe rather than this partial one. (cherry picked from commit b87a1134)
-
- 11 November 2016, 1 commit
-
-
Committed by Laine Stump
commit 9065cfaa added the ability to disable DNS services for a libvirt virtual network. If neither DNS nor DHCP is needed for a network, then we don't need to start dnsmasq, so code was added to check for this. Unfortunately, it was written with a great lack of attention to detail (I can say that, because I was the author), and the loop that checked if DHCP is needed for the network would never end if the network had multiple IP addresses and the first <ip> had no <dhcp> subelement (which would have contained a <range> or <host> subelement, thus requiring DHCP services). This patch rewrites the check to be more compact and (more importantly) finite. This bug was present in releases 2.2.0 and 2.3.0, so it will need to be backported to any relevant maintenance branches. Reported here: https://www.redhat.com/archives/libvirt-users/2016-October/msg00032.html https://www.redhat.com/archives/libvirt-users/2016-October/msg00045.html (cherry picked from commit bbb333e4)
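One compact way to write such a check, sketched here with invented types rather than libvirt's virNetworkDef structures: a plain bounded for-loop over the IP definitions cannot run forever, whatever the first <ip> looks like.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct ip_def {
        size_t nranges;     /* <range> entries under <dhcp> */
        size_t nhosts;      /* <host> entries under <dhcp> */
    };

    static bool network_needs_dhcp(const struct ip_def *ips, size_t nips)
    {
        for (size_t i = 0; i < nips; i++) {
            if (ips[i].nranges > 0 || ips[i].nhosts > 0)
                return true;
        }
        return false;
    }

    int main(void)
    {
        /* First <ip> has no <dhcp> children; the second one does. The
         * pre-fix loop could spin forever on exactly this kind of layout. */
        struct ip_def ips[] = { { 0, 0 }, { 1, 0 } };
        printf("needs dnsmasq for DHCP: %s\n",
               network_needs_dhcp(ips, 2) ? "yes" : "no");
        return 0;
    }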
-
- 06 October 2016, 1 commit
-
-
Committed by Laine Stump
When I added support for the pcie-expander-bus controller in commit bc07251f, I incorrectly thought that it only had a single slot available. Actually it has 32 slots, just like the root complex aka pcie-root (the part that I *did* get correct is that unlike pcie-root a pcie-expander-bus doesn't allow any integrated endpoint devices - only pcie-root-ports and dmi-to-pci-controllers are allowed). (cherry picked from commit 22afd441)
-
- 04 October 2016, 1 commit
-
-
Committed by Martin Kletzander
If this reminds you of a commit message from around a year ago, it's 41c2aa72 and yes, we're dealing with "the same thing" again. Or f309db1f and it's similar. There is logic in place such that if there is no real need for memory-backend-file, qemuBuildMemoryBackendStr() returns 0. However, that wasn't the case with hugepage backing. The reason for that was that we abused the 'pagesize' variable for storing that information, but we should rather have a separate one that specifies whether we really need the new object for hugepage backing. And that variable should be set only if this particular NUMA cell needs special treatment WRT hugepages. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1372153 Signed-off-by: Martin Kletzander <mkletzan@redhat.com> (cherry picked from commit 4372a7845acbc6974f6027ef68e7dd3eeb47f425)
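The underlying design point -- don't overload one variable to carry both "which page size" and "does this cell need a hugepage-backed object" -- can be sketched with invented names (this is not the real qemuBuildMemoryBackendStr signature):

    #include <stdbool.h>
    #include <stdio.h>

    struct cell_memory {
        unsigned long long pagesize;    /* explicit page size, 0 = unspecified */
        bool need_hugepage_backend;     /* set only when THIS cell needs hugepage backing */
    };

    /* Returns 1 if a memory-backend-file object has to be emitted, 0 if plain
     * RAM is fine (mirroring the "return 0 when not needed" idea above). */
    static int build_memory_backend(const struct cell_memory *cell)
    {
        if (cell->pagesize == 0 && !cell->need_hugepage_backend) {
            printf("no backend object needed\n");
            return 0;
        }
        printf("emit memory-backend-file (pagesize=%llu)\n", cell->pagesize);
        return 1;
    }

    int main(void)
    {
        struct cell_memory plain = { 0, false };
        struct cell_memory huge  = { 2048, true };
        build_memory_backend(&plain);   /* 0: command line stays unchanged */
        build_memory_backend(&huge);    /* 1: hugepage backing really required */
        return 0;
    }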
-
- 02 September 2016, 2 commits
-
-
Committed by Daniel Veillard
* docs/news.html.in: update for release * po/*po*: regenerate
-
Committed by Kothapally Madhu Pavan
--postcopy-after-precopy is just an additional flag for postcopy migration. Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
-
- 31 August 2016, 1 commit
-
-
Committed by Michal Privoznik
When we wanted to break the huge and unmaintainable virsh into smaller files, the first thing we did was to just move funcs into virsh-*.c files and then #include them from virsh. Having done it this way, we also needed to have them listed under EXTRA_DIST. However, things have changed since then and now all the virsh-*.c files are proper source files. Therefore they are listed under virsh_SOURCES too. But for some reason we forgot to remove them from EXTRA_DIST. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 30 August 2016, 1 commit
-
-
Committed by Jim Fehlig
The libxl driver has long supported migration V3 but has never indicated so in the connectSupportsFeature API. As a result, apps such as virt-manager that use the more generic virDomainMigrate API fail with libvirtError: this function is not supported by the connection driver: virDomainMigrate Add VIR_DRV_FEATURE_MIGRATION_V3 to the list of features marked as supported in the connectSupportsFeature API.
-
- 29 August 2016, 2 commits
-
-
Committed by Roman Bogorodskiy
Test 12 from objecteventtest (createXML add event) segfaults on FreeBSD with a bus error. At some point it calls testNodeDeviceDestroy() from the test driver. And it fails when it tries to unlock the device in the "out:" label of this function. Unlocking fails because the previous step was a call to virNodeDeviceObjRemove from conf/node_device_conf.c. This function removes the given device from the device list and cleans up the object, including destroying its mutex. However, it does not nullify the pointer that was given to it. As a result, we end up in testNodeDeviceDestroy() here:

 out:
    if (obj)
        virNodeDeviceObjUnlock(obj);

And instead of skipping this, we try to do the Unlock and fail because of the already-destroyed mutex. Change virNodeDeviceObjRemove to use a double pointer and set the caller's pointer to NULL.
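The double-pointer idiom the fix uses is a common C pattern and easy to show standalone (generic names, not the virNodeDeviceObj API): the remove helper clears the caller's pointer so a later "if (obj)" cleanup becomes a no-op instead of a use-after-free.

    #include <stdio.h>
    #include <stdlib.h>

    struct device { int dummy; };

    static void device_obj_remove(struct device **devptr)
    {
        if (!*devptr)
            return;
        /* ...remove from the list, destroy the mutex, etc... */
        free(*devptr);
        *devptr = NULL;         /* the caller's pointer no longer dangles */
    }

    int main(void)
    {
        struct device *obj = calloc(1, sizeof(*obj));

        device_obj_remove(&obj);

        /* Cleanup path shaped like the one in testNodeDeviceDestroy(): with
         * the NULL-ing above this branch is simply skipped. */
        if (obj)
            printf("would unlock %p here\n", (void *)obj);
        else
            printf("object already gone, nothing to unlock\n");
        return 0;
    }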
-
Committed by Roman Bogorodskiy
As bhyve currently doesn't use controller addressing and simply uses 1 implicit controller for 1 disk device, the scheme looks like the following:

pci address -> (implicit controller) -> disk device

So in fact we identify disk devices by the PCI address of the implicit controller and just pass it this way to bhyve in the form:

-s pci_addr,ahci-(cd|hd),/path/to/disk

Therefore, we cannot use virDeviceInfoPCIAddressWanted() because it does not expect that disk devices might need PCI address assignment. As a result, if a disk was specified without an address, it will not be generated and the domain will fail to start. Until proper controller addressing is implemented in the bhyve driver, force each disk to have a PCI address generated if it was not specified by the user.
-
- 27 August 2016, 2 commits
-
-
Committed by Kothapally Madhu Pavan
Unlike postcopy migration there is no --live flag check for postcopy-after-precopy. Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
-
Committed by Christophe Fergeau
The iothread example for virtio-scsi should be <driver iothread='4'/> rather than <driver iothread='4'> for the XML to be valid.
-
- 26 August 2016, 13 commits
-
-
Committed by Peter Krempa
../../src/conf/domain_conf.c:4425:21: error: potential null pointer dereference [-Werror=null-dereference]
     switch (vcpu->hotpluggable) {
     ~~~~^~~~~~~~~~~~~~
-
Committed by Peter Krempa
Setting the vcpu count when a cpu topology is specified may result in an invalid configuration. Since the topology can't be modified, reject the setting if it doesn't match the requested topology. This will allow fixing the topology in case it was broken. Partially fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1370066
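The check itself is just arithmetic over the topology. A hedged sketch with invented names (the exact rule libvirt enforces lives in its own validation code; this only shows the shape of such a check):

    #include <stdio.h>

    struct cpu_topology { unsigned sockets, cores, threads; };

    static int set_vcpu_count(const struct cpu_topology *topo, unsigned requested)
    {
        unsigned max = topo->sockets * topo->cores * topo->threads;

        /* Only enforce the limit when a topology was actually specified. */
        if (topo->sockets && requested > max) {
            fprintf(stderr,
                    "requested %u vcpus exceeds topology maximum %u "
                    "(%u sockets * %u cores * %u threads)\n",
                    requested, max, topo->sockets, topo->cores, topo->threads);
            return -1;
        }
        printf("vcpu count set to %u\n", requested);
        return 0;
    }

    int main(void)
    {
        struct cpu_topology topo = { 2, 4, 2 };     /* 16 vcpus maximum */
        set_vcpu_count(&topo, 8);                   /* accepted */
        set_vcpu_count(&topo, 24);                  /* rejected */
        return 0;
    }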
-
Committed by Peter Krempa
Validating the vcpu count is more intricate and doing it in the XML parser will make previously valid configs (with older qemus) vanish. Now that we have a very similar check in the qemu domain validation callback we can do it in a more appropriate place. This basically reverts commit b54de083. Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370066
-
Committed by Peter Krempa
Make it clear that vcpu order is valid for online vcpus only and state that it has to be specified for all vcpus or not provided at all. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370043
-
Committed by Peter Krempa
ce43cca0 refactored the helper to prepare it for sparse topologies but forgot to fix the iterator used to fill the structures. This would result in a weirdly sparsely populated array and possible out-of-bounds access and crash once sparse vcpu topologies were allowed. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1369988
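The iterator bug is the classic "reused the source index as the destination index" mistake. A self-contained sketch with generic names shows the fixed shape: keep a separate output cursor when copying only some entries of a sparse list.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct vcpu { bool online; int id; };

    static size_t collect_online(const struct vcpu *src, size_t n, int *out)
    {
        size_t filled = 0;
        for (size_t i = 0; i < n; i++) {
            if (!src[i].online)
                continue;
            /* A buggy variant writes to out[i], leaving holes and eventually
             * writing past the (smaller) output array. */
            out[filled++] = src[i].id;
        }
        return filled;
    }

    int main(void)
    {
        struct vcpu vcpus[] = {
            { true, 0 }, { false, 1 }, { true, 2 }, { false, 3 }, { true, 4 },
        };
        int online[3];
        size_t n = collect_online(vcpus, 5, online);
        for (size_t i = 0; i < n; i++)
            printf("online vcpu id: %d\n", online[i]);
        return 0;
    }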
-
Committed by Peter Krempa
virVcpuInfo contains the vcpu number that the data refers to. Report what's returned by the daemon rather than the sequence number, as with sparse vcpu topologies they won't match.
-
Committed by Mikhail Feoktistov
We should query the bus type for containers too, like we do for VMs. In OpenStack we add the volume disk as SCSI, so we can't hardcode the SATA bus.
-
Committed by Nikolay Shirokovskiy
-
Committed by Olga Krishtal
While detaching/attaching a device in OpenStack, nova calls vzDomainDetachDevice twice, because the update of the internal configuration of the ct comes a bit later than the update event. As a result, we suffer from the second call trying to detach the same device. Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
-
Committed by Pavel Glushchak
libvirt-python passes the parameter bandwidth = 0 by default. This means that bandwidth is unlimited. The VZ driver doesn't support bandwidth rate limiting, but we still need to handle it and fail if bandwidth > 0. Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
-
Committed by Pavel Glushchak
* Added VIR_MIGRATE_LIVE, VIR_MIGRATE_UNDEFINE_SOURCE and VIR_MIGRATE_PERSIST_DEST to supported migration flags Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
-
Committed by Laine Stump
When support for auto-creating tap devices was added to <interface type='ethernet'> in commit 9c17d6, the code assumed that virNetDevTapCreate() would honor the VIR_NETDEV_TAP_CREATE_IFUP flag that is supported by virNetDevTapCreateInBridgePort(). That isn't the case - the latter function performs several operations, and one of them is setting the tap device online. But virNetDevTapCreate() *only* creates the tap device, and relies on the caller to do everything else, so qemuInterfaceEthernetConnect() needs to call virNetDevSetOnline() after the device is successfully created.
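For readers unfamiliar with the split, creating a tap device and bringing it up really are two separate kernel operations. The standalone sketch below uses plain Linux ioctls (not libvirt's virNetDev helpers), needs root, and uses a made-up interface name; it only exists to make the second step explicit.

    #include <fcntl.h>
    #include <net/if.h>
    #include <linux/if_tun.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct ifreq ifr;
        int tapfd = open("/dev/net/tun", O_RDWR);
        if (tapfd < 0) { perror("open /dev/net/tun"); return 1; }

        /* Step 1: create the tap device - this is all that plain creation
         * gives you; the interface exists but is still down. */
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
        strncpy(ifr.ifr_name, "tapdemo0", IFNAMSIZ - 1);
        if (ioctl(tapfd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

        /* Step 2: explicitly set the interface online - the separate step
         * the commit adds (via virNetDevSetOnline) after creation. */
        int ctl = socket(AF_INET, SOCK_DGRAM, 0);
        if (ctl < 0) { perror("socket"); return 1; }
        if (ioctl(ctl, SIOCGIFFLAGS, &ifr) < 0) { perror("SIOCGIFFLAGS"); return 1; }
        ifr.ifr_flags |= IFF_UP;
        if (ioctl(ctl, SIOCSIFFLAGS, &ifr) < 0) { perror("SIOCSIFFLAGS"); return 1; }

        printf("%s created and set online\n", ifr.ifr_name);
        close(ctl);
        close(tapfd);   /* closing the fd removes a non-persistent tap again */
        return 0;
    }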
-
Committed by Laine Stump
The linkstate setting of an <interface> is only meant to change the online status reported to the guest system by the emulated network device driver in qemu, but when support for auto-creating tap devices for <interface type='ethernet'> was added in commit 9717d6, a chunk of code was also added to qemuDomainChangeNetLinkState() that sets the online status of the tap device (i.e. the *host* side of the interface) for type='ethernet'. This was never done for tap devices used in type='bridge' or type='network' interfaces, nor was it done in the past for tap devices created by external scripts for type='ethernet', so we shouldn't be doing it now. This patch removes the bit of code in qemuDomainChangeNetLinkState() that modifies online status of the tap device.
-