1. 10 Jul 2015 (3 commits)
    • qemu: Enable migration events on QMP monitor · 3df4d2a4
      Committed by Jiri Denemark
      Even if QEMU supports migration events, it doesn't send them by default.
      We have to enable them by calling migrate-set-capabilities. Let's enable
      migration events every time we can and clear QEMU_CAPS_MIGRATION_EVENT in
      case migrate-set-capabilities does not support events.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
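      For illustration, a minimal C sketch that sends the same migrate-set-capabilities
      QMP command through libvirt's qemu-monitor-command passthrough API. The domain
      name "example-vm" is an assumption; libvirt itself enables the capability through
      its internal monitor layer, not through this public call.

        /* Sketch only: enable QEMU migration events for a running domain by sending
         * the QMP command libvirt issues internally. Compile with -lvirt -lvirt-qemu. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>
        #include <libvirt/libvirt-qemu.h>

        int main(void)
        {
            const char *qmp =
                "{\"execute\": \"migrate-set-capabilities\","
                " \"arguments\": {\"capabilities\":"
                "  [{\"capability\": \"events\", \"state\": true}]}}";
            virConnectPtr conn = virConnectOpen("qemu:///system");
            virDomainPtr dom = NULL;
            char *reply = NULL;

            if (!conn)
                return EXIT_FAILURE;
            dom = virDomainLookupByName(conn, "example-vm");  /* hypothetical domain */
            if (dom && virDomainQemuMonitorCommand(dom, qmp, &reply, 0) == 0)
                printf("QMP reply: %s\n", reply);             /* {"return": {}} on success */

            free(reply);
            if (dom)
                virDomainFree(dom);
            virConnectClose(conn);
            return EXIT_SUCCESS;
        }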
    • qemu_hotplug: try harder to eject media · 28554080
      Committed by Pavel Hrdina
      Some guests lock the tray, so the QEMU eject command will simply fail to
      eject the media. But the guest OS can handle this eject attempt and can
      unlock the tray and open it. In this case, we should try again to
      actually eject the media.

      If the first attempt fails and we do not detect a tray_open event, we
      fail with the error from the monitor. If we do receive that event, we
      know that the guest reacted properly to the eject request, unlocked the
      tray and opened it. In that case, we need to run the command again to
      actually eject the media from the device. The reason to call it again is
      that QEMU doesn't wait for the guest to react; it just reports an error
      that the tray is locked.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1147471
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
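      A minimal sketch of the retry flow described above, assuming hypothetical
      helpers monitor_eject() and wait_for_tray_open_event() in place of libvirt's
      real monitor and event-handling code:

        /* Sketch only: retry the eject once the guest has reacted to the first
         * attempt by unlocking and opening the tray. */
        #include <stdbool.h>
        #include <stdio.h>

        /* Pretend monitor call: the first attempt fails because the tray is locked. */
        static bool monitor_eject(const char *dev, bool *tray_locked_error)
        {
            static int calls;
            (void)dev;
            *tray_locked_error = (calls++ == 0);
            return !*tray_locked_error;
        }

        /* Pretend event wait: the guest unlocks the tray and opens it. */
        static bool wait_for_tray_open_event(void)
        {
            return true;
        }

        static int eject_media(const char *dev)
        {
            bool locked = false;

            if (monitor_eject(dev, &locked))
                return 0;                        /* ejected on the first try */
            if (!locked || !wait_for_tray_open_event())
                return -1;                       /* no tray_open event: report the monitor error */
            return monitor_eject(dev, &locked) ? 0 : -1;  /* tray is open now, eject for real */
        }

        int main(void)
        {
            printf("eject %s\n", eject_media("hdc") == 0 ? "succeeded" : "failed");
            return 0;
        }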
    • virDomainObjSignal: drop this function · 6b278f3a
      Committed by Pavel Hrdina
      There are multiple consumers for the domain condition and we should
      always wake them all.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
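      The idea, sketched with plain pthreads rather than libvirt's internal virCond
      wrappers: when several consumers wait on the same condition, only a broadcast
      reliably wakes all of them. Compile with -lpthread.

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static bool ready;

        static void *waiter(void *name)
        {
            pthread_mutex_lock(&lock);
            while (!ready)
                pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);
            printf("%s woke up\n", (const char *)name);
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;

            pthread_create(&t1, NULL, waiter, (void *)"consumer-1");
            pthread_create(&t2, NULL, waiter, (void *)"consumer-2");

            pthread_mutex_lock(&lock);
            ready = true;
            pthread_cond_broadcast(&cond);   /* wake *all* waiting consumers;
                                              * pthread_cond_signal() would wake only one */
            pthread_mutex_unlock(&lock);

            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }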
  2. 19 Jun 2015 (3 commits)
  3. 05 Jun 2015 (1 commit)
  4. 04 Jun 2015 (1 commit)
    • qemu: process: Update current balloon state to maximum on vm startup · 641a145d
      Committed by Peter Krempa
      After libvirt issues the balloon resize command at startup, the current
      balloon size needs to be set to the maximum memory size, since the vCPUs
      have not been started yet and thus the balloon driver could not have
      returned any memory.

      Since GetXMLDesc and other APIs return the balloon size without updating
      it when they are not able to obtain the job, and the memory balloon does
      not support the asynchronous change event, the reported size might be
      incorrect.
  5. 03 Jun 2015 (3 commits)
    • qemu: process: Refactor setup of memory ballooning · f4c67f07
      Committed by Peter Krempa
      Since the monitor code now supports ullongs when setting balloon size,
      drop the legacy code with overflow checking.
      
      Additionally, the comment mentioning that the job is treated as a sync
      job no longer makes sense since the monitor is entered asynchronously.
    • conf: Refactor emulatorpin handling · ee3da892
      Committed by Peter Krempa
      Store the emulator pinning CPU mask as a pure virBitmap rather than a
      virDomainPinDef, since the latter stores only the bitmap anyway, and
      refactor qemuDomainPinEmulator to do the same operations in a much saner way.
      
      As a side effect virDomainEmulatorPinAdd and virDomainEmulatorPinDel can
      be removed since they don't add any value.
    • qemu: Fix possible crash in qemuProcessSetVcpuAffinities · ff4c42ed
      Committed by Peter Krempa
      When <vcpu ... cpuset=""> is not specified, the vcpupin array is not
      guaranteed to be allocated with def->vcpus elements. This would cause a
      crash for TCG, since it does not report thread IDs for vCPUs.
  6. 21 May 2015 (2 commits)
  7. 20 May 2015 (1 commit)
  8. 15 May 2015 (1 commit)
  9. 29 Apr 2015 (1 commit)
  10. 28 Apr 2015 (5 commits)
    • qemu: Remove need for qemuMonitorIOThreadInfoFree · b515339f
      Committed by John Ferlan
      Replace with just VIR_FREE.
    • qemu: qemuProcessDetectIOThreadPIDs invert checks · 69b16513
      Committed by John Ferlan
      If we received zero iothreads from the monitor but were perhaps expecting
      to receive something, the code was skipping the check that ensures what's
      in the monitor matches our expectations. So invert the checks: first
      verify that what we get back matches expectations, and only then check
      whether zero iothreads were returned.
    • qemu: Remove need for qemuDomainParseIOThreadAlias · 4c2ca566
      Committed by John Ferlan
      Rather than have a separate routine that parses the alias of an iothread
      returned from qemu in order to get the iothread_id value, parse the alias
      when returning and just return the iothread_id in qemuMonitorIOThreadInfoPtr.

      This set of patches removes the function, changes the "char *name" field
      to an "unsigned int", and handles all the fallout.
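      As a rough sketch of the parsing that gets folded into the monitor code, this
      standalone example turns an alias such as "iothread3" into its numeric
      iothread_id; the helper name is hypothetical and libvirt's real code uses its
      own string utilities.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Parse a QEMU IOThread alias of the form "iothread<N>" into N. */
        static int parse_iothread_alias(const char *alias, unsigned int *id)
        {
            const char *prefix = "iothread";
            size_t plen = strlen(prefix);
            char *end = NULL;
            unsigned long val;

            if (strncmp(alias, prefix, plen) != 0)
                return -1;                       /* not an iothread alias */

            val = strtoul(alias + plen, &end, 10);
            if (end == alias + plen || *end != '\0')
                return -1;                       /* no digits, or trailing junk */

            *id = (unsigned int)val;
            return 0;
        }

        int main(void)
        {
            unsigned int id;

            if (parse_iothread_alias("iothread3", &id) == 0)
                printf("iothread_id = %u\n", id);   /* prints 3 */
            return 0;
        }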
    • Move iothreadspin information into iothreadids · b266486f
      Committed by John Ferlan
      Remove the iothreadspin array from cputune and replace it with a cpumask
      stored in the iothreadids list.
      
      Adjust the test output because our printing goes in order of the iothreadids
      list now.
    • qemu: Use domain iothreadids to IOThread's 'thread_id' · 8d4614a5
      Committed by John Ferlan
      Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
      'thread_id' as returned from the live qemu monitor data.
      
      Remove the iothreadpids list from _qemuDomainObjPrivate and replace with
      the new iothreadids 'thread_id' element.
      
      Rather than use the default numbering scheme of 1..number of iothreads
      defined for the domain, use the iothreadids list for the iothread_id.

      Since the iothreadids list keeps track of the iothread_id's, these are
      now used in the many places where a for loop would otherwise "know" that
      the ID was "+ 1" from the array element.

      The new tests cover using <iothreadid> values for an exact number of
      iothreads, as well as supplying fewer <iothreadid> values than existing
      iothreads (in which case the default numbering scheme is used).
  11. 27 Apr 2015 (1 commit)
  12. 26 Apr 2015 (2 commits)
    • qemu: Connect to guest agent after channel hotplug · a03e2d3a
      Committed by Peter Krempa
      If a user hot-attaches the guest agent channel, libvirt would ignore it
      until libvirtd was restarted or the VM itself was shut down/destroyed and
      started again.

      This patch adds code that opens or closes the guest agent connection
      according to the state of the guest agent channel, as reported by
      connect/disconnect events.

      To allow opening the channel from the event handler, qemuConnectAgent
      needed to be exported.
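      A toy sketch of the event-driven behaviour, using hypothetical stand-ins
      (struct domain, agent_connect, agent_close) for libvirt's real objects; the
      connected branch corresponds to calling the newly exported qemuConnectAgent().

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical domain handle; libvirt's real structures differ. */
        struct domain { const char *name; bool agent_online; };

        static void agent_connect(struct domain *vm)
        {
            vm->agent_online = true;
            printf("%s: guest agent connection opened\n", vm->name);
        }

        static void agent_close(struct domain *vm)
        {
            vm->agent_online = false;
            printf("%s: guest agent connection closed\n", vm->name);
        }

        /* Channel-state callback: keep the agent connection in sync with the
         * connect/disconnect events reported for the guest agent channel. */
        static void on_channel_state_change(struct domain *vm, bool connected)
        {
            if (connected)
                agent_connect(vm);   /* analogous to calling qemuConnectAgent() */
            else
                agent_close(vm);
        }

        int main(void)
        {
            struct domain vm = { "example-vm", false };

            on_channel_state_change(&vm, true);    /* guest opened the virtio channel */
            on_channel_state_change(&vm, false);   /* guest closed it again */
            return 0;
        }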
    • qemu: agent: Differentiate errors when the agent channel was hotplugged · e1c04108
      Committed by Peter Krempa
      When the guest agent channel gets hotplugged to a VM, libvirt would still
      report that "QEMU guest agent is not configured" rather than stating that
      the connection has not been established yet.

      Currently the code won't be able to connect to the agent after hotplug,
      but that will change in a later patch.

      As the qemuFindAgentConfig() helper is quite helpful in this case, move
      it to a more usable place and export it.
  13. 24 Apr 2015 (2 commits)
  14. 15 Apr 2015 (1 commit)
  15. 14 Apr 2015 (1 commit)
  16. 08 Apr 2015 (2 commits)
    • qemuProcessHook: Call virNuma*() only when needed · ea576ee5
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1198645
      
      Once upon a time, there was a little domain. And the domain was pinned
      onto a NUMA node and hadn't fully allocated its memory:
      
        <memory unit='KiB'>2355200</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
      
        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>
      
      Oh little me, said the domain, what will I do with so little memory.
      If I only had a few megabytes more. But the old admin noticed the
      whimpering, barely audible to the untrained human ear. And good admin he
      was, he gave the domain yet more memory. But the old NUMA topology
      witch forbade allocating more memory on node zero. So he decided to
      allocate it on a different node:
      
      virsh # numatune little_domain --nodeset 0-1
      
      virsh # setmem little_domain 2355200
      
      The little domain was happy. For a while. Until bad, sharp teeth
      shaped creature came. Every process in the system was afraid of him.
      The OOM Killer they called him. Oh no, he's after the little domain.
      There's no escape.
      
      Do you kids know why? Because when the little domain was born, her
      father, Libvirt, called numa_set_membind(). So even if the admin
      allowed her to allocate memory from other nodes in the cgroups, the
      membind() forbade it.

      So what's the lesson? Libvirt should rely on cgroups whenever possible
      and use numa_set_membind() only as a last-ditch effort.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
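      A sketch of that lesson, not libvirt's actual code: restrict the memory nodes
      through the cpuset cgroup first (which the admin can later widen), and only
      fall back to libnuma's numa_set_membind(), which cannot be relaxed from
      outside the running process. The cgroup path is an assumed example; link
      with -lnuma.

        #include <numa.h>
        #include <stdio.h>

        static int restrict_memory_node(const char *cgroup_mems_path, unsigned int node)
        {
            FILE *fp = fopen(cgroup_mems_path, "w");

            if (fp) {
                int ok = fprintf(fp, "%u\n", node) > 0;  /* preferred: cgroup limit */
                fclose(fp);
                if (ok)
                    return 0;
            }

            /* Last-ditch effort: hard binding via libnuma. */
            if (numa_available() < 0)
                return -1;
            struct bitmask *mask = numa_allocate_nodemask();
            numa_bitmask_setbit(mask, node);
            numa_set_membind(mask);
            numa_bitmask_free(mask);
            return 0;
        }

        int main(void)
        {
            /* Hypothetical cgroup path for the little domain from the story. */
            if (restrict_memory_node(
                    "/sys/fs/cgroup/cpuset/machine/little_domain/cpuset.mems", 0) < 0)
                fprintf(stderr, "failed to restrict memory allocation\n");
            return 0;
        }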
    • qemu: fix crash in qemuProcessAutoDestroy · 7578cc17
      Committed by Michael Chapman
      The destination libvirt daemon in a migration may segfault if the client
      disconnects immediately after the migration has begun:
      
        # virsh -c qemu+tls://remote/system list --all
         Id    Name                           State
        ----------------------------------------------------
        ...
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh -c qemu+tls://remote/system list --all
        error: failed to connect to the hypervisor
        error: unable to connect to server at 'remote:16514': Connection refused
      
      The crash is in:
      
         1531 void
         1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
         1533 {
         1534     qemuDomainObjPrivatePtr priv = obj->privateData;
         1535     qemuDomainJob job = priv->job.active;
         1536
         1537     priv->jobs_queued--;
      
      Backtrace:
      
        #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
        #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
        #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
        #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
        #4  in qemuConnectClose at qemu/qemu_driver.c:1154
        ...
      
      qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
      case is holding the last remaining reference to the domain.
      qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
      object has been freed and poisoned by then.
      
      This patch bumps the domain's refcount until qemuDomainRemoveInactive
      has completed. We also ensure qemuProcessAutoDestroy does not return the
      domain to virCloseCallbacksRun to be unlocked in this case. There is
      similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
      (which call virDomainObjListRemove directly).
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
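      The shape of the fix, sketched with simplified stand-ins for virObjectRef,
      virObjectUnref and virDomainObjListRemove: take an extra reference before
      removing the domain from the list, so the object cannot be freed while the
      end-of-job cleanup still touches it.

        #include <stdio.h>
        #include <stdlib.h>

        struct domain_obj { int refs; const char *name; };

        static void obj_ref(struct domain_obj *vm) { vm->refs++; }

        static void obj_unref(struct domain_obj *vm)
        {
            if (--vm->refs == 0) {
                printf("%s: freed\n", vm->name);
                free(vm);
            }
        }

        /* The domain list may hold the last reference; removal drops it. */
        static void list_remove(struct domain_obj *vm) { obj_unref(vm); }

        static void remove_inactive(struct domain_obj *vm)
        {
            obj_ref(vm);            /* keep the object alive past list_remove() */
            list_remove(vm);
            printf("%s: end-of-job bookkeeping is still safe\n", vm->name);
            obj_unref(vm);          /* may free the object now */
        }

        int main(void)
        {
            struct domain_obj *vm = malloc(sizeof(*vm));

            vm->refs = 1;           /* the reference held by the domain list */
            vm->name = "example";
            remove_inactive(vm);
            return 0;
        }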
  17. 02 Apr 2015 (3 commits)
  18. 31 Mar 2015 (1 commit)
    • qemu: blockjob: Synchronously update backing chain in XML on ABORT/PIVOT · 630ee5ac
      Committed by Peter Krempa
      When the synchronous pivot option is selected, libvirt would not update
      the backing chain until the job was exited. Some applications then
      received invalid data as their job serialized first.

      This patch removes the polling used to wait for ABORT/PIVOT job
      completion and replaces it with a condition. If a synchronous operation
      is requested, the update of the XML is executed in the job of the caller
      of the synchronous request. Otherwise the monitor event callback uses a
      separate worker to update the backing chain with a new job.

      This is a regression since commit 1a92c719.
      
      When the ABORT job is finished synchronously you get the following call
      stack:
       #0  qemuBlockJobEventProcess
       #1  qemuDomainBlockJobImpl
       #2  qemuDomainBlockJobAbort
       #3  virDomainBlockJobAbort
      
      While previously or while using the _ASYNC flag you'd get:
       #0  qemuBlockJobEventProcess
       #1  processBlockJobEvent
       #2  qemuProcessEventHandler
       #3  virThreadPoolWorker
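      A generic pthread sketch of the change, not libvirt's code: the synchronous
      caller sleeps on a condition variable instead of polling, and the event
      handler signals it once the backing chain has been updated. Compile with
      -lpthread.

        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>
        #include <unistd.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
        static bool job_synced;

        /* Runs in the monitor-event thread once the ABORT/PIVOT event is processed. */
        static void *event_callback(void *arg)
        {
            (void)arg;
            sleep(1);                     /* pretend the event arrives a bit later */
            pthread_mutex_lock(&lock);
            job_synced = true;            /* backing chain updated here */
            pthread_cond_broadcast(&cond);
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void)
        {
            pthread_t ev;

            pthread_create(&ev, NULL, event_callback, NULL);

            /* Synchronous caller: wait on the condition instead of polling. */
            pthread_mutex_lock(&lock);
            while (!job_synced)
                pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);

            printf("backing chain updated before returning to the caller\n");
            pthread_join(ev, NULL);
            return 0;
        }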
  19. 26 Mar 2015 (1 commit)
  20. 23 Mar 2015 (1 commit)
  21. 19 Mar 2015 (1 commit)
    • util: clean up #includes of virnetdevopenvswitch.h · 451547a4
      Committed by Laine Stump
      virnetdevopenvswitch.h declares a few functions that can be called to
      add ports to and remove them from OVS bridges, and retrieve the
      migration data for a port. It does not contain any data definitions
      that are used by domain_conf.h, but for some reason domain_conf.h was
      #including it anyway; instead, any file that actually calls the functions
      declared in virnetdevopenvswitch.h should be directly #including it.
      This adds a few lines to the project, but saves all the files that don't
      need it from the extra computing, and makes the dependencies more clear cut.
  22. 18 Mar 2015 (2 commits)
  23. 17 Mar 2015 (1 commit)