1. 24 December 2015, 1 commit
    • cgroup: Drop resource partition from virSystemdMakeScopeName · 17a87135
      Committed by Peter Krempa
      The scope name, even according to our docs, is
      "machine-$DRIVER\x2d$VMNAME.scope". virSystemdMakeScopeName would use the
      resource partition name instead of "machine-" if one was specified, thus
      creating invalid scope paths.

      This makes libvirt drop the cgroups for a VM that uses a custom resource
      partition upon reconnecting, since the detected scope name would not
      match the expected name generated by virSystemdMakeScopeName.
      
      The error is exposed by the following log entry:
      
      debug : virCgroupValidateMachineGroup:302 : Name 'machine-qemu\x2dtestvm.scope' for controller 'cpu' does not match 'testvm', 'testvm.libvirt-qemu' or 'machine-test-qemu\x2dtestvm.scope'
      
      for a "/machine/test" resource partition and a VM named "testvm".
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238570
      (cherry picked from commit 88f6c007)
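
      A minimal, standalone sketch of the naming rule described above (illustrative code, not libvirt's virSystemdMakeScopeName()): the scope name is always built from the fixed "machine-" prefix plus the driver and the escaped VM name, never from the resource partition. The escaping shown handles only '-' (systemd's \x2d), which is all the example in the log message needs; make_scope_name() is a hypothetical helper.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Build "machine-$DRIVER\x2d$VMNAME.scope", escaping '-' in the VM name. */
        static char *make_scope_name(const char *driver, const char *vmname)
        {
            /* worst case: every character of vmname expands to the 4-char "\x2d" */
            size_t len = strlen("machine-") + strlen(driver) + 4 +
                         4 * strlen(vmname) + strlen(".scope") + 1;
            char *out = malloc(len);
            char *p;

            if (!out)
                return NULL;
            p = out + sprintf(out, "machine-%s\\x2d", driver);
            for (const char *c = vmname; *c; c++) {
                if (*c == '-')
                    p += sprintf(p, "\\x2d");   /* systemd-style escape for '-' */
                else
                    *p++ = *c;
            }
            strcpy(p, ".scope");
            return out;
        }

        int main(void)
        {
            char *name = make_scope_name("qemu", "testvm");
            printf("%s\n", name);               /* machine-qemu\x2dtestvm.scope */
            free(name);
            return 0;
        }
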
  2. 29 August 2015, 1 commit
  3. 28 April 2015, 37 commits
    • qemu: Don't fail to reboot domains with unresponsive agent · f708713e
      Committed by zhang bo
      Just as commit b8e25c35 did, we
      fall back to the ACPI method when the guest agent is unresponsive
      in qemuDomainReboot().
      Signed-off-by: YueWenyuan <yuewenyuan@huawei.com>
      Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit eadf41fe)
    • qemu: set macvtap physdevs online when macvtap is set online · 24e0cecb
      Committed by Laine Stump
      A further fix for:
      
        https://bugzilla.redhat.com/show_bug.cgi?id=1113474
      
      Since there is no possibility that any type of macvtap will work if
      the parent physdev it's attached to is offline, we should bring the
      physdev online at the same time as the macvtap. When taking the
      macvtap offline, it's also necessary to take the physdev offline for
      macvtap passthrough mode (because the physdev has the same MAC address
      as the macvtap device, and so could potentially cause problems with
      misdirected packets during migration, as outlined in commits 829770
      and 879c13). We can't set the physdev offline for the other macvtap modes:
      1) in those modes there may be other macvtap devices attached to the same
      physdev (and/or the host itself may be using the device), whereas
      passthrough mode is exclusive to one macvtap at a time, and 2) there's
      no practical reason to do so anyway.
      
      (cherry picked from commit 38172ed8)
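
      A standalone sketch of the link-state policy described above, with hypothetical helpers standing in for the real netlink/ioctl work: the parent physdev is brought up together with the macvtap, and on teardown it is brought down only in passthrough mode, where the macvtap owns the physdev (and its MAC address) exclusively.

        #include <stdbool.h>
        #include <stdio.h>

        enum macvtap_mode { MODE_BRIDGE, MODE_VEPA, MODE_PRIVATE, MODE_PASSTHRU };

        /* stand-in for the actual interface link-state change */
        static void set_link(const char *ifname, bool up)
        {
            printf("%s -> %s\n", ifname, up ? "up" : "down");
        }

        static void macvtap_set_online(const char *tap, const char *physdev,
                                       bool online, enum macvtap_mode mode)
        {
            if (online)
                set_link(physdev, true);        /* macvtap is useless on a downed parent */
            set_link(tap, online);
            if (!online && mode == MODE_PASSTHRU)
                set_link(physdev, false);       /* exclusive to this macvtap; avoid
                                                 * misdirected packets during migration */
        }

        int main(void)
        {
            macvtap_set_online("macvtap0", "eth2", true, MODE_PASSTHRU);
            macvtap_set_online("macvtap0", "eth2", false, MODE_PASSTHRU);
            return 0;
        }
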
    • qemuDomainShutdownFlags: Set fakeReboot more frequently · 0636291d
      Committed by zhang bo
      When a qemu domain is to be rebooted, from the outside (at the libvirt
      level) it looks like a regular shutdown. To really restart the
      domain, libvirt needs to issue a reset command on the monitor once
      the SHUTDOWN event appears. So, in order to differentiate a bare
      shutdown from a reboot, libvirt uses a variable within the domain
      private data. It's called fakeReboot. When the reboot API is called,
      the variable is set, but when the shutdown API is called it must be
      cleared out. But it was not for every possible case. So if the user
      called virDomainReboot(), and there was no ACPI daemon running
      inside the guest (so the guest didn't initiate the shutdown sequence),
      and then virDomainShutdown(mode=agent) was called, a bad thing
      happened: we remembered the fakeReboot and instead of shutting
      the domain down, we just rebooted it.
      Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
      Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 8be502fd)
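
      A standalone sketch of the fakeReboot handling described above (hypothetical types, not libvirt's private data structures). The fix is the explicit clearing of the flag on the shutdown path, so a stale flag left over from an earlier, ignored reboot request cannot turn a later shutdown into a reset.

        #include <stdbool.h>
        #include <stdio.h>

        struct domain_priv {
            bool fake_reboot;           /* set by reboot(), consumed on SHUTDOWN event */
        };

        static void domain_reboot(struct domain_priv *priv)
        {
            priv->fake_reboot = true;   /* SHUTDOWN event should be turned into a reset */
            /* ... send shutdown request to the guest ... */
        }

        static void domain_shutdown(struct domain_priv *priv)
        {
            priv->fake_reboot = false;  /* the fix: a real shutdown must not be
                                         * "upgraded" to a reboot by a stale flag */
            /* ... send shutdown request to the guest ... */
        }

        static void on_shutdown_event(struct domain_priv *priv)
        {
            if (priv->fake_reboot)
                printf("issuing system_reset on the monitor\n");
            else
                printf("powering the domain off\n");
        }

        int main(void)
        {
            struct domain_priv priv = { false };
            domain_reboot(&priv);       /* guest ignores it (no ACPI daemon) */
            domain_shutdown(&priv);     /* later, an agent-mode shutdown */
            on_shutdown_event(&priv);   /* prints "powering the domain off" */
            return 0;
        }
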
    • qemuMigrationPrecreateStorage: Fix debug message · 9d0c2053
      Committed by Michal Privoznik
      When pre-creating storage for domains, we need to find the corresponding
      disk in the XML on the destination (the domain XML may differ there, e.g.
      a disk may be accessible under a different path). For better debugging, I'm
      printing all the info I received on a disk. But there was a typo when
      printing the disk capacity: "%lluu" instead of "%llu".
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 65a88572)
    • qemu_migration.c: sleep first before checking for migration status. · 8ab12f1a
      Committed by Xing Lin
      The problem with the previous implementation is that
      even when qemuMigrationUpdateJobStatus() detects that a migration job
      has completed, it still sleeps for 50 ms, which is unnecessary
      and only adds to the VM pause time.
      Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 522e81cb)
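
      A standalone sketch of the loop ordering described above (hypothetical names, not libvirt's migration code): sleeping before the status check means the iteration that observes completion returns immediately, rather than adding one more 50 ms to the guest's pause time.

        #include <stdio.h>
        #include <unistd.h>

        enum mig_status { MIG_ACTIVE, MIG_COMPLETED, MIG_FAILED };

        /* stand-in for the real status update: pretend the job finishes on poll 3 */
        static enum mig_status update_job_status(void)
        {
            static int polls;
            return ++polls < 3 ? MIG_ACTIVE : MIG_COMPLETED;
        }

        static int wait_for_completion(void)
        {
            for (;;) {
                usleep(50 * 1000);            /* sleep *before* the status check ...  */
                switch (update_job_status()) {
                case MIG_COMPLETED:
                    return 0;                 /* ... so completion returns immediately,
                                               * without one more 50 ms of pause time */
                case MIG_FAILED:
                    return -1;
                case MIG_ACTIVE:
                    break;                    /* still running; poll again */
                }
            }
        }

        int main(void)
        {
            printf("migration %s\n", wait_for_completion() == 0 ? "completed" : "failed");
            return 0;
        }
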
    • qemu_driver: check caps after starting block job · d17b2d9d
      Committed by Michael Chapman
      Currently we check qemuCaps before starting the block job. But qemuCaps
      isn't available on a stopped domain, which means we get a misleading
      error message in this case:
      
        # virsh domstate example
        shut off
      
        # virsh blockjob example vda
        error: unsupported configuration: block jobs not supported with this QEMU binary
      
      Move the qemuCaps check into the block job so that we are guaranteed the
      domain is running.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit cfcdf5ff)
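
      A standalone sketch of the ordering described above (hypothetical types and checks, not the real qemu driver code): the capability data is only meaningful for a running domain, so the check is done after the job has been acquired and the domain has been confirmed active, which turns the misleading "not supported by this QEMU binary" message into an accurate "domain is not running".

        #include <stdio.h>

        struct dom { int active; int caps_has_blockjob; };

        static int block_job(struct dom *d)
        {
            /* begin job ... */
            if (!d->active) {
                fprintf(stderr, "error: domain is not running\n");
                return -1;
            }
            if (!d->caps_has_blockjob) {      /* checked only for a live domain */
                fprintf(stderr, "error: block jobs not supported with this QEMU binary\n");
                return -1;
            }
            /* ... issue the block job ... */
            return 0;
        }

        int main(void)
        {
            struct dom shut_off = { 0, 0 };
            return block_job(&shut_off) == -1 ? 0 : 1;   /* reports "not running" */
        }
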
    • qemu_migrate: use nested job when adding NBD to cookie · 0ff86a47
      Committed by Michael Chapman
      qemuMigrationCookieAddNBD is usually called from within an async
      MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
      
      (The one exception is during the Begin phase when change protection
      isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
      as qemuDomainObjEnterMonitor in this case.)
      
      This bug was encountered with a libvirt client that repeatedly queries
      the disk mirroring block job info during a migration. If one of these
      queries occurs just as the Perform migration cookie is baked, libvirt
      crashes.
      
      Relevant logs are as follows:
      
          6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
      [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
      [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
      [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
      [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
          6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
      
      At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
      on mon->notify. At [2] the request is written out to the monitor socket.
      At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
      mon->notify. The reply from the first request is received at [4].
      However, qemuMonitorJSONIOProcessLine is not expecting this reply since
      the second request hadn't completed sending. The reply is dropped and an
      error is returned.
      
      qemuMonitorIO signals mon->notify twice during its error handling,
      waking up both of the threads waiting on it. One of them clears mon->msg
      as it exits qemuMonitorSend; the other crashes:
      
        qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
        975         while (!mon->msg->finished) {
        (gdb) print mon->msg
        $1 = (qemuMonitorMessagePtr) 0x0
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit 72df8314)
    • qemu: fix race between disk mirror fail and cancel · 9a0e0d3f
      Committed by Michael Chapman
      If a VM migration is aborted, a disk mirror may be failed by QEMU before
      libvirt has a chance to cancel it. The disk->mirrorState remains at
      _ABORT in this case, and this breaks subsequent mirrorings of that disk.
      
      We should instead check the mirrorState directly and transition to _NONE
      if it is already aborted. Do the check *after* aborting the block job in
      QEMU to avoid a race.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit e5d729ba)
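
      A standalone sketch of the state handling described above (hypothetical names): the mirror state is inspected after asking QEMU to abort the job, so a mirror that QEMU has already failed is reset to NONE instead of being left stuck in ABORT, which would break later mirrorings of the same disk.

        #include <stdio.h>

        enum mirror_state { MIRROR_NONE, MIRROR_READY, MIRROR_ABORT };

        struct disk { enum mirror_state mirror_state; };

        static void abort_block_job(struct disk *d) { (void)d; /* monitor call */ }

        static void cancel_mirror(struct disk *d)
        {
            abort_block_job(d);                    /* talk to QEMU first ... */
            if (d->mirror_state == MIRROR_ABORT)   /* ... then resolve an abort that
                                                    * QEMU may have raced us to */
                d->mirror_state = MIRROR_NONE;
        }

        int main(void)
        {
            struct disk d = { MIRROR_ABORT };      /* QEMU failed the mirror already */
            cancel_mirror(&d);
            printf("%s\n", d.mirror_state == MIRROR_NONE ? "reusable" : "stuck");
            return 0;
        }
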
    • qemu: fix error propagation in qemuMigrationBegin · 188e5367
      Committed by Michael Chapman
      If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
      indicate an error occurred.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit 77ddd0bb)
    • qemu: fix crash in qemuProcessAutoDestroy · 9b4dd2c7
      Committed by Michael Chapman
      The destination libvirt daemon in a migration may segfault if the client
      disconnects immediately after the migration has begun:
      
        # virsh -c qemu+tls://remote/system list --all
         Id    Name                           State
        ----------------------------------------------------
        ...
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh -c qemu+tls://remote/system list --all
        error: failed to connect to the hypervisor
        error: unable to connect to server at 'remote:16514': Connection refused
      
      The crash is in:
      
         1531 void
         1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
         1533 {
         1534     qemuDomainObjPrivatePtr priv = obj->privateData;
         1535     qemuDomainJob job = priv->job.active;
         1536
         1537     priv->jobs_queued--;
      
      Backtrace:
      
        #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
        #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
        #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
        #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
        #4  in qemuConnectClose at qemu/qemu_driver.c:1154
        ...
      
      qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
      case is holding the last remaining reference to the domain.
      qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
      object has been freed and poisoned by then.
      
      This patch bumps the domain's refcount until qemuDomainRemoveInactive
      has completed. We also ensure qemuProcessAutoDestroy does not return the
      domain to virCloseCallbacksRun to be unlocked in this case. There is
      similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
      (which call virDomainObjListRemove directly).
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit 7578cc17)
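
      A standalone sketch of the reference-counting pattern described above (a hypothetical object type, not libvirt's virObject API): taking an extra reference before removing the domain from the list keeps the object alive for the rest of the cleanup, even when the list held the last reference.

        #include <stdio.h>
        #include <stdlib.h>

        struct dom { int refs; };

        static struct dom *dom_ref(struct dom *d) { d->refs++; return d; }

        static void dom_unref(struct dom *d)
        {
            if (--d->refs == 0) {
                printf("object freed\n");
                free(d);
            }
        }

        static void remove_from_list(struct dom *d)
        {
            dom_unref(d);            /* the list held a reference; may be the last one */
        }

        static void remove_inactive(struct dom *d)
        {
            remove_from_list(d);
            /* more cleanup touching d follows; it must still be alive here */
        }

        int main(void)
        {
            struct dom *d = calloc(1, sizeof(*d));
            d->refs = 1;             /* only the list references the domain */
            dom_ref(d);              /* the fix: pin it for the duration ...   */
            remove_inactive(d);      /* ... so this cannot free it under us    */
            dom_unref(d);            /* drop our extra reference when done     */
            return 0;
        }
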
    • qemu: blockCopy: Pass adjusted bandwidth when called via blockRebase · fd270808
      Committed by Peter Krempa
      The block copy API takes the speed in bytes/s rather than the MiB/s that was
      the prior approach in virDomainBlockRebase. We correctly converted the
      speed to bytes/s in the old API, but we still called the common helper
      virDomainBlockCopyCommon with the unadjusted variable.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
      (cherry picked from commit 3c6a72d5)
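
      A small sketch of the unit conversion involved (mib_to_bytes() is a hypothetical helper): the legacy API takes MiB/s while the copy API takes bytes/s, so it is the converted value that has to be handed on, and the multiplication should be checked for overflow.

        #include <limits.h>
        #include <stdio.h>

        static int mib_to_bytes(unsigned long mib, unsigned long long *bytes)
        {
            if (mib > ULLONG_MAX >> 20)               /* would overflow */
                return -1;
            *bytes = (unsigned long long)mib << 20;   /* 1 MiB = 2^20 bytes */
            return 0;
        }

        int main(void)
        {
            unsigned long long bw;
            if (mib_to_bytes(10, &bw) == 0)
                printf("%llu bytes/s\n", bw);         /* 10485760: pass *this* value on */
            return 0;
        }
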
    • qemu: end the job when try to blockcopy to non-file destination · 52ef86af
      Committed by Shanzhi Yu
      Blockcopy to a non-file destination is not supported according to the code,
      but a 'goto endjob' was missing after the destination check.

      This led to calling drive-mirror with the wrong parameters.

      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1206406
      Signed-off-by: Shanzhi Yu <shyu@redhat.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
      (cherry picked from commit c5fbad66)
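
      A standalone sketch of the control flow described above (hypothetical helpers): once the job has been started, every error path, including the "destination is not a file" check, has to leave through the endjob label so the job is always ended and drive-mirror is never reached with bad parameters.

        #include <stdio.h>

        static int begin_job(void)  { printf("job started\n"); return 0; }
        static void end_job(void)   { printf("job ended\n"); }
        static int dest_is_file(const char *dest) { return dest[0] == '/'; }

        static int blockcopy(const char *dest)
        {
            int ret = -1;

            if (begin_job() < 0)
                return -1;

            if (!dest_is_file(dest)) {
                fprintf(stderr, "only local file destinations are supported\n");
                goto endjob;             /* the fix: do not fall through to the
                                          * drive-mirror call with bad parameters */
            }

            /* ... issue drive-mirror here ... */
            ret = 0;

         endjob:
            end_job();
            return ret;
        }

        int main(void) { return blockcopy("nbd://example") == -1 ? 0 : 1; }
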
    • qemu: Give hint about -noTSX CPU model · a7c8b30e
      Committed by Jiri Denemark
      Because of the microcode update to Haswell/Broadwell CPUs, existing
      domains using these CPUs may fail to start even though they used to run
      just fine. To help users solve this issue we try to suggest switching to
      -noTSX variant of the CPU model:
      
          virsh # start cd
          error: Failed to start domain cd
          error: unsupported configuration: guest and host CPU are not
          compatible: Host CPU does not provide required features: rtm, hle;
          try using 'Haswell-noTSX' CPU model
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      (cherry picked from commit 53c8062f)
    • Fix typo in error message · 6d1430ec
      Committed by Ján Tomko
      by rewriting it completely from:
      error: unsupported configuration: virtio only support device address
      type 'PCI'
      
      to:
      
      error: unsupported configuration: virtio disk cannot have an address of type
      drive
      
      Since we now support CCW addresses as well.
      
      (cherry picked from commit 68545ea6)
    • qemu: change accidental VIR_WARNING back to VIR_DEBUG · b1972ce5
      Committed by Laine Stump
      While debugging the support for responding to qemu RX_FILTER_CHANGED
      events, I had changed the "ignoring this event" log message from
      VIR_DEBUG to VIR_WARN, but forgot to change it back before
      pushing. Since many guest OSes make enough changes to multicast lists
      and/or promiscuous mode settings to trigger this message, it's
      starting to show up as a red herring in bug reports.
      
      (cherry picked from commit dae3e246)
    • qemu: skip precreation of network disks · a883fb9c
      Committed by Michael Chapman
      Commit cf54c606 introduced the ability
      to create missing storage volumes during migration. For network disks,
      however, we may not necessarily be able to detect whether they already
      exist -- there is no straightforward way to map the disk to a storage
      volume, and even if there were, it's possible no configured storage pool
      actually contains the disk.
      
      It is better to assume the network disk exists in this case, rather than
      aborting the migration completely. If the volume really is missing, QEMU
      will generate an appropriate error later in the migration.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      (cherry picked from commit a1b18051)
    • qemu: do not overwrite the error in qemuDomainObjExitMonitor · 93c5841e
      Committed by Luyao Huang
      https://bugzilla.redhat.com/show_bug.cgi?id=1196934
      
      When qemu exits during startup, libvirt includes the error from
      /var/log/libvirt/qemu/vm.log in the error message:
      
      $ virsh start test3
      error: Failed to start domain test3
      error: internal error: early end of file from monitor: possible problem:
      2015-02-27T03:03:16.985494Z qemu-kvm: -numa memdev is not supported by
      machine rhel6.5.0
      
      The check for domain liveness added to qemuDomainObjExitMonitor
      in commit dc2fd51f sometimes overwrites this error:
      $ virsh start test3
      error: Failed to start domain test3
      error: operation failed: domain is no longer running
      
      Fix the check to only report an error if there is none set.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
      (cherry picked from commit 4f068209)
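
      A standalone sketch of the rule described above, with a plain global standing in for libvirt's thread-local "last error": the liveness check only reports its generic error when nothing more specific has been reported yet, so the useful message read from the qemu log is preserved.

        #include <stdio.h>

        static const char *last_error;                 /* stand-in for thread-local state */

        static void report_error(const char *msg)      { last_error = msg; }
        static const char *get_last_error(void)        { return last_error; }

        static void exit_monitor_check_alive(int domain_alive)
        {
            if (!domain_alive && get_last_error() == NULL)   /* the fix: only if unset */
                report_error("operation failed: domain is no longer running");
        }

        int main(void)
        {
            report_error("early end of file from monitor: ...");  /* the useful error */
            exit_monitor_check_alive(0);
            printf("%s\n", get_last_error());   /* still the monitor error, not the generic one */
            return 0;
        }
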
    • qemu: hotplug: Use checker function to check if disk is empty · 2c3aba71
      Committed by Peter Krempa
      (cherry picked from commit e7974b4f)
    • qemu: driver: Fix cold-update of removable storage devices · 7d11e8de
      Committed by Peter Krempa
      Only selected fields from the disk source were copied when cold-updating
      the source in a CDROM drive. When such a drive was backed by a network file,
      this resulted in corruption of the definition:
      
          <disk type='network' device='cdrom'>
            <driver name='qemu' type='raw' cache='none'/>
            <source protocol='gluster' name='gluster-vol1(null)'>
              <host name='localhost'/>
            </source>
            <target dev='vdc' bus='virtio'/>
            <readonly/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
          </disk>
      
      Update the whole source instead of cherry-picking elements.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1166024
      (cherry picked from commit d0dc6c03)
    • qemu: Check for negative port values in network drive configuration · b41d99b7
      Committed by Erik Skultety
      We interpret port values as a signed int (converting them from char *),
      so if a negative value is provided in a network disk's configuration,
      we accept it as valid; however, an 'unknown cause' error is raised later.
      That error is only accidental, because we return the port value in the return code.
      This patch adds just a minor tweak to the already existing check so that we
      reject negative values the same way as we reject non-numerical strings.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1163553
      (cherry picked from commit 84646165)
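
      A small sketch of the stricter parsing described above, using plain strtol() rather than libvirt's own string helpers: a negative number now fails the same check as a non-numeric string instead of slipping through and surfacing later as an 'unknown cause' error.

        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int parse_port(const char *s, int *port)
        {
            char *end;
            long val;

            errno = 0;
            val = strtol(s, &end, 10);
            if (errno || end == s || *end != '\0' || val <= 0 || val > 65535)
                return -1;              /* rejects "abc" and "-1" alike */
            *port = (int)val;
            return 0;
        }

        int main(void)
        {
            int p;
            printf("%d %d\n", parse_port("6789", &p), parse_port("-1", &p));  /* 0 -1 */
            return 0;
        }
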
    • qemu: Remove unnecessary virReportError on networkGetNetworkAddress return · 5f3db971
      Committed by Luyao Huang
      Error messages are already set in all code paths returning -1 from
      networkGetNetworkAddress, so we don't want to overwrite them.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      (cherry picked from commit 64595431)
    • virQEMUCapsInitQMP: Don't dispose locked @vm · 55576d35
      Committed by Michal Privoznik
      When creating qemu capabilities, a dummy virDomainObj is created just
      because our monitor code expects that. However, the object is created
      already locked. Then, under the cleanup label, we simply unref the object,
      which results in the whole domain object being disposed. The object lock
      is destroyed subsequently, but hey - it's still locked:
      
      ==24845== Thread #14's call to pthread_mutex_destroy failed
      ==24845==    with error code 16 (EBUSY: Device or resource busy)
      ==24845==    at 0x4C3024E: pthread_mutex_destroy (in /usr/lib64/valgrind/vgpreload_helgrind-amd64-linux.so)
      ==24845==    by 0x531F72E: virMutexDestroy (virthread.c:83)
      ==24845==    by 0x5302977: virObjectLockableDispose (virobject.c:237)
      ==24845==    by 0x5302A89: virObjectUnref (virobject.c:265)
      ==24845==    by 0x1DD37866: virQEMUCapsInitQMP (qemu_capabilities.c:3397)
      ==24845==    by 0x1DD37CC6: virQEMUCapsNewForBinary (qemu_capabilities.c:3481)
      ==24845==    by 0x1DD381E2: virQEMUCapsCacheLookup (qemu_capabilities.c:3609)
      ==24845==    by 0x1DD30F8A: virQEMUCapsInitGuest (qemu_capabilities.c:744)
      ==24845==    by 0x1DD31889: virQEMUCapsInit (qemu_capabilities.c:1020)
      ==24845==    by 0x1DD7DD36: virQEMUDriverCreateCapabilities (qemu_conf.c:888)
      ==24845==    by 0x1DDC57C0: qemuStateInitialize (qemu_driver.c:803)
      ==24845==    by 0x53DC743: virStateInitialize (libvirt.c:777)
      ==24845==
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 954427c3)
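
      A standalone sketch of the cleanup-order problem described above, using a bare pthread mutex in place of the lockable object: the lock must be released before the object is disposed, otherwise pthread_mutex_destroy() fails with EBUSY exactly as in the helgrind report.

        #include <pthread.h>
        #include <stdio.h>

        struct obj { pthread_mutex_t lock; };

        static void obj_dispose(struct obj *o)
        {
            int rc = pthread_mutex_destroy(&o->lock);
            printf("destroy: %s\n", rc == 0 ? "ok" : "EBUSY");
        }

        int main(void)
        {
            struct obj o;

            pthread_mutex_init(&o.lock, NULL);
            pthread_mutex_lock(&o.lock);      /* object is created/returned locked */

            /* ... use the object ... */

            pthread_mutex_unlock(&o.lock);    /* the fix: unlock before disposing */
            obj_dispose(&o);
            return 0;
        }
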
    • qemu: Allow spaces in disk serial · 61ab2921
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1195660
      
      A bug report appeared on the qemu-devel list that
      libvirt is unable to pass spaces in a disk serial number [1]. Not only
      does our RNG schema forbid that, the code is not prepared for it either. However,
      with a bit of escaping (if needed) we can allow spaces there.

      1: https://lists.gnu.org/archive/html/qemu-devel/2015-02/msg04041.html
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 5aee81a0)
    • qemu: snapshot: Don't skip check for qcow2 format with network disks · 97015f2f
      Committed by Shanzhi Yu
      When the domain's source disk type is network and the source protocol is rbd
      or sheepdog, the 'if().. break' ends the current case early, which skips
      the check whether the driver type is raw or qcow2. Libvirt would therefore
      allow creating an internal snapshot for a running domain with a raw-format
      disk backed by rbd storage.

      While both protocols support internal snapshots of the disk, qemu is not
      able to use them here, as it requires some place to store the memory image. The
      check that the disk is backed by a qcow2 image needs to be executed
      always.

      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179533
      Signed-off-by: Shanzhi Yu <shyu@redhat.com>
      (cherry picked from commit f7c1410b)
    • qemuProcessReconnect: Fill in pid file path · 91cf6052
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1197600
      
      So, libvirt uses a pid file to track the pid of each started qemu. Whenever
      a domain is started, its pid is put into the corresponding pid file.
      The pid file path is generated based on the domain name and stored
      in the domain object internals. However, it's not stored in the
      status XML and is therefore lost on daemon restarts. Hence, later,
      when the domain is being shut down, the daemon does not know which
      pid file to unlink, and the pid file is left behind. To
      avoid this, let's generate the pid file path again in
      qemuProcessReconnect().
      Reported-by: Luyao Huang <lhuang@redhat.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 63889e0c)
    • conf: De-duplicate scheduling policy enums · d048e8ec
      Committed by Martin Kletzander
      Since adding the support for scheduler policy settings in commit
      8680ea97, there have been two enums with the same information.  That was
      caused by rewriting the patch since the first draft.

      Found thanks to clang; there was no impact whatsoever.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit 2fd5880b)
    • qemu: Don't crash in qemuDomainOpenChannel() · 5b3d6873
      Committed by Martin Kletzander
      The problem here was that when opening a channel, we were checking
      whether the channel given matches by alias (which can't be NULL for a
      running domain) or by its name, which can be NULL (for example with spicevmc).
      In the case of such a domain, qemuDomainOpenChannel() made the daemon crash.
      STREQ_NULLABLE() is safe to use since the code in question is wrapped in
      "if (name)" and is more readable, so use that instead of checking for
      a non-NULL "vm->def->channels[i]->target.name".
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      (cherry picked from commit b3ea0a8f)
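
      A small sketch of the NULL-safe comparison described above; streq_nullable() is a local stand-in for libvirt's STREQ_NULLABLE() macro: two NULLs compare equal, a NULL never matches a real name, and strcmp() is never handed a NULL pointer, so a channel without a target name (such as spicevmc) cannot crash the lookup.

        #include <stdio.h>
        #include <string.h>

        static int streq_nullable(const char *a, const char *b)
        {
            if (!a || !b)
                return a == b;          /* equal only if both are NULL */
            return strcmp(a, b) == 0;
        }

        int main(void)
        {
            const char *target_name = NULL;   /* e.g. a spicevmc channel has no name */

            printf("%d\n", streq_nullable("org.qemu.guest_agent.0", target_name)); /* 0, no crash */
            printf("%d\n", streq_nullable(NULL, target_name));                     /* 1 */
            return 0;
        }
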
    • Check if domain is running in qemuDomainAgentIsAvailable · 37448a86
      Committed by Ján Tomko
      If the domain is not running, the agent will not respond.
      Do not even try.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=872424
      (cherry picked from commit 72352232)
    • Pass virDomainObjPtr to qemuDomainAgentAvailable · 4f712a2e
      Committed by Ján Tomko
      Not just the DomainObj's private data.
      
      (cherry picked from commit fbb94044)
    • Check for qemu guest agent availability after getting the job · 99680243
      Committed by Ján Tomko
      This way checks requiring the job can be done in qemuDomainAgentAvailable.
      
      (cherry picked from commit c8b80b49)
    • domcaps: Check for architecture more wisely · 6d79e8b3
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1209948
      
      So we have this bug. The virConnectGetDomainCapabilities() API
      performs a couple of checks before it produces any result. One of
      the checks is whether the architecture requested by the user can be run by
      the binary (again user-provided). However, the check is pretty
      dumb. It merely compares whether the binary's default architecture
      matches the one provided by the user. However, a qemu binary can run
      multiple architectures. For instance, qemu-system-ppc64 can run
      ppc, ppcle, ppc64, ppc64le and ppcemb. The default is ppc64, so
      if the user requested something else, like ppc64le, the check would
      have failed without an obvious reason.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 0af9325e)
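
      A small sketch of the relaxed check described above (the architecture table is a hypothetical stand-in for the binary's capability data): instead of comparing the request only against the binary's default architecture, the request is accepted if it appears anywhere in the list of architectures the binary can run.

        #include <stdio.h>
        #include <string.h>

        /* architectures runnable by qemu-system-ppc64; the default is listed first */
        static const char *qemu_system_ppc64_archs[] = {
            "ppc64", "ppc64le", "ppc", "ppcle", "ppcemb", NULL
        };

        static int binary_can_run(const char **archs, const char *requested)
        {
            for (size_t i = 0; archs[i]; i++)
                if (strcmp(archs[i], requested) == 0)
                    return 1;
            return 0;
        }

        int main(void)
        {
            /* a default-only comparison would reject this; the list check accepts it */
            printf("%d\n", binary_can_run(qemu_system_ppc64_archs, "ppc64le"));  /* 1 */
            return 0;
        }
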
    • qemu: Always refresh capabilities if no <guests> found · dc7c0de5
      Committed by Cole Robinson
      - Remove all qemu emulators
      - Restart libvirtd
      - Install qemu emulators
      - Call 'virsh version' -> errors
      
      The only thing that will force the qemu driver to refresh its cached
      capabilities info is an explicit API call to GetCapabilities.
      
      However in the case when the initial caps lookup at driver connect didn't
      find a single qemu emulator to poll, the driver is effectively useless
      and really can't do anything until it's populated some qemu capabilities
      info.
      
      With the above steps, the user would have to either know about the
      magic refresh capabilities call, or restart libvirtd to pick up the
      changes.
      
      Instead, this patch changes things so that every time a part of the
      driver requests access to capabilities info, we check to see whether
      we've previously seen any emulators. If not, force a refresh.
      
      In the case of 'still no emulators found', this is still very quick, so
      I can't think of a downside.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1000116
      (cherry picked from commit 95546c43)
      (cherry picked from commit 9ebc1631)
    • qemu: Build nvram directory at driver startup · ab87fb1c
      Committed by Cole Robinson
      Similar to what was done for the channel socket in the previous commit.
      
      (cherry picked from commit 19425d11)
    • qemu: Build channel autosocket directory at driver startup · 9ed89d78
      Committed by Cole Robinson
      Rather than depend on the RPM to put it in place, since that doesn't
      cover the qemu:///session case. Currently the auto-allocated socket path is
      completely busted with qemu:///session.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1105274
      
      And because we chown the directory at driver startup now, this also fixes
      autosocket startup failures when using user/group=root
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1044561
      https://bugzilla.redhat.com/show_bug.cgi?id=1146886
      (cherry picked from commit e31ab02f)
    • virQEMUDriverGetConfig: Fix memleak · c163111b
      Committed by Michal Privoznik
      ==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
      ==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
      ==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
      ==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
      ==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
      ==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
      ==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
      ==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
      ==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
      ==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 225aa802)
    • qemu: chown autoDumpPath on driver startup · be95a1f1
      Committed by Cole Robinson
      Not sure if this is required, but it makes things consistent with the
      rest of the directories.
      
      (cherry picked from commit db3ccd58)
    • qemu: conf: Clarify paths that are relative to libDir · 0dea832a
      Committed by Cole Robinson
      Rather than duplicate libDir for each new path
      
      (cherry picked from commit c19f43ae)
  4. 03 April 2015, 1 commit
    • qemu: read backing chain names from qemu · ece6debb
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=1199182 documents that
      after a series of disk snapshots into existing destination images,
      followed by active commits of the top image, it is possible for
      qemu 2.2 and earlier to end up tracking a different name for the
      image than what it would have had when opening the chain afresh.
      That is, when starting with the chain 'a <- b <- c', the name
      associated with 'b' is how it was spelled in the metadata of 'c',
      but when starting with 'a', taking two snapshots into 'a <- b <- c',
      then committing 'c' back into 'b', the name associated with 'b' is
      now the name used when taking the first snapshot.
      
      Sadly, older qemu doesn't know how to treat different spellings of
      the same filename as identical files (it uses strcmp() instead of
      checking for the same inode), which means libvirt's attempt to
      commit an image using solely the names learned from qcow2 metadata
      fails with a cryptic:
      
      error: internal error: unable to execute QEMU command 'block-commit': Top image file /tmp/images/c/../b/b not found
      
      even though the file exists.  Trying to teach libvirt the rules on
      which name qemu will expect is not worth the effort (besides, we'd
      have to remember it across libvirtd restarts, and track whether a
      file was opened via metadata or via snapshot creation for a given
      qemu process); it is easier to just always directly ask qemu what
      string it expects to see in the first place.
      
      As a safety valve, we validate that any name returned by qemu
      still maps to the same local file as we have tracked it, so that
      a compromised qemu cannot accidentally cause us to act on an
      incorrect file.
      
      * src/qemu/qemu_monitor.h (qemuMonitorDiskNameLookup): New
      prototype.
      * src/qemu/qemu_monitor_json.h (qemuMonitorJSONDiskNameLookup):
      Likewise.
      * src/qemu/qemu_monitor.c (qemuMonitorDiskNameLookup): New function.
      * src/qemu/qemu_monitor_json.c (qemuMonitorJSONDiskNameLookup)
      (qemuMonitorJSONDiskNameLookupOne): Likewise.
      * src/qemu/qemu_driver.c (qemuDomainBlockCommit)
      (qemuDomainBlockJobImpl): Use it.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit f9ea3d60)
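
      A small sketch of the safety valve described above (same_file() is a hypothetical helper): a path string returned by qemu is only trusted if it resolves to the same on-disk file as the path libvirt has been tracking, compared by device and inode numbers rather than by string equality, which is exactly the comparison older qemu gets wrong with spellings like /tmp/images/c/../b/b.

        #include <stdio.h>
        #include <sys/stat.h>

        /* Two paths name the same file iff they resolve to the same device/inode. */
        static int same_file(const char *a, const char *b)
        {
            struct stat sa, sb;

            if (stat(a, &sa) < 0 || stat(b, &sb) < 0)
                return 0;
            return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
        }

        int main(void)
        {
            /* different spellings of one file; strcmp() would call them different */
            printf("%d\n", same_file("/etc/passwd", "/etc/../etc/passwd"));   /* 1 */
            return 0;
        }
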