1. 04 Jun 2015 (1 commit)
    • qemu: process: Update current balloon state to maximum on vm startup · 641a145d
      Committed by Peter Krempa
      After libvirt issues the balloon resize command at startup, the current
      balloon size needs to be set to the maximum memory size, since the vCPUs
      have not started yet and the guest's balloon driver therefore cannot
      have returned any memory.
      
      Since GetXMLDesc and other APIs return the balloon size without updating
      it when they cannot obtain the job, and the memory balloon does not
      support the asynchronous change event, the reported size might otherwise
      be incorrect.
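      
      A minimal sketch of the resulting startup logic, assuming the
      qemuMonitorSetBalloon() monitor helper and the def->mem fields of that
      era (not the verbatim patch):
      
        /* Sketch: after asking the monitor to inflate the balloon to the
         * full size during startup, record that size as the current balloon
         * value, because the guest's balloon driver has not run yet and
         * cannot have returned any memory. */
        if (qemuMonitorSetBalloon(priv->mon, vm->def->mem.max_balloon) < 0)
            goto cleanup;
        vm->def->mem.cur_balloon = vm->def->mem.max_balloon;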
  2. 03 Jun 2015 (3 commits)
    • qemu: process: Refactor setup of memory ballooning · f4c67f07
      Committed by Peter Krempa
      Since the monitor code now supports unsigned long long values when
      setting the balloon size, drop the legacy code with its overflow
      checking.
      
      Additionally, the comment stating that the job is treated as a sync
      job no longer makes sense, since the monitor is now entered
      asynchronously.
    • conf: Refactor emulatorpin handling · ee3da892
      Committed by Peter Krempa
      Store the emulator pinning CPU mask as a plain virBitmap rather than a
      virDomainPinDef, since only the bitmap part is ever used, and refactor
      qemuDomainPinEmulator to perform the same operations in a much saner
      way.
      
      As a side effect, virDomainEmulatorPinAdd and virDomainEmulatorPinDel
      can be removed, since they no longer add any value.
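      
      A hedged sketch of the simplified flow; virBitmapNewData() and
      virProcessSetAffinity() are real libvirt utility helpers, while the
      surrounding variable names are illustrative:
      
        /* Convert the caller's raw CPU map into a virBitmap once, apply it
         * to the emulator thread, and store it directly in cputune instead
         * of wrapping it in a virDomainPinDef. */
        virBitmapPtr pcpumap = virBitmapNewData(cpumap, maplen);
        if (!pcpumap)
            goto cleanup;
        if (virProcessSetAffinity(vm->pid, pcpumap) < 0)
            goto cleanup;
        virBitmapFree(def->cputune.emulatorpin);
        def->cputune.emulatorpin = pcpumap;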
    • qemu: Fix possible crash in qemuProcessSetVcpuAffinities · ff4c42ed
      Committed by Peter Krempa
      When <vcpu ... cpuset=""> is not specified, the vcpupin array is not
      guaranteed to be allocated with def->vcpus entries. This could cause a
      crash for TCG, since it does not report thread IDs for vCPUs.
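      
      The gist of the fix as a sketch; virDomainPinFindByVcpu is the lookup
      helper named by a later commit in this log, and the loop context is
      illustrative:
      
        /* Do not assume one vcpupin entry per vCPU: look each entry up and
         * skip vCPUs without pinning info, instead of indexing the possibly
         * shorter vcpupin array directly. */
        for (i = 0; i < def->vcpus; i++) {
            virDomainPinDefPtr pin =
                virDomainPinFindByVcpu(def->cputune.vcpupin,
                                       def->cputune.nvcpupin, i);
            if (!pin)
                continue;
            /* ... apply pin->cpumask to the vCPU thread ... */
        }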
  3. 21 May 2015 (2 commits)
  4. 20 May 2015 (1 commit)
  5. 15 May 2015 (1 commit)
  6. 29 Apr 2015 (1 commit)
  7. 28 Apr 2015 (5 commits)
    • qemu: Remove need for qemuMonitorIOThreadInfoFree · b515339f
      Committed by John Ferlan
      Replace with just VIR_FREE.
    • qemu: qemuProcessDetectIOThreadPIDs invert checks · 69b16513
      Committed by John Ferlan
      If we received zero iothreads from the monitor, but were perhaps
      expecting to receive something, the code skipped the check that what is
      in the monitor matches our expectations. So invert the order: first
      check that what we get back matches expectations, and only then handle
      the case of zero iothreads returned.
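      
      A sketch of the inverted ordering (error text and variable names are
      approximations):
      
        /* First verify the monitor's answer matches the definition, then
         * short-circuit the now-validated zero-iothread case. */
        if (niothreads != vm->def->iothreads) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("got wrong number of IOThread pids from QEMU "
                             "monitor. got %d, wanted %d"),
                           niothreads, vm->def->iothreads);
            goto cleanup;
        }
        if (niothreads == 0) {
            ret = 0;
            goto cleanup;
        }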
    • qemu: Remove need for qemuDomainParseIOThreadAlias · 4c2ca566
      Committed by John Ferlan
      Rather than having a separate routine to parse the alias of an iothread
      returned from qemu in order to get the iothread_id value, parse the
      alias on return and just return the iothread_id in
      qemuMonitorIOThreadInfoPtr.
      
      This set of patches removes the function, changes the "char *name"
      field to an "unsigned int" iothread_id, and handles all the fallout.
    • Move iothreadspin information into iothreadids · b266486f
      Committed by John Ferlan
      Remove the iothreadspin array from cputune and replace it with a
      cpumask stored in the iothreadids list.
      
      Adjust the test output, because printing now follows the order of the
      iothreadids list.
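      
      For reference, the pinning is still expressed the same way in the
      domain XML; only the internal storage moved (example values):
      
        <cputune>
          <iothreadpin iothread='1' cpuset='5-6'/>
          <iothreadpin iothread='2' cpuset='7'/>
        </cputune>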
    • qemu: Use domain iothreadids to IOThread's 'thread_id' · 8d4614a5
      Committed by John Ferlan
      Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
      'thread_id' as returned from the live qemu monitor data.
      
      Remove the iothreadpids list from _qemuDomainObjPrivate and replace it
      with the new iothreadids 'thread_id' element.
      
      Rather than using the default numbering scheme of 1..number of
      iothreads defined for the domain, use the iothreadids list for the
      iothread_id.
      
      Since the iothreadids list keeps track of the iothread_id values, they
      are now used in the many places where a for loop used to "know" that
      the ID was the array index "+ 1".
      
      The new tests cover both usage of exactly as many <iothreadid> values
      as there are iothreads, and usage of fewer <iothreadid> values than
      iothreads (with the remainder using the default numbering scheme).
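      
      Domain XML along the lines the tests exercise (IDs chosen as examples):
      explicit IDs for some iothreads, with any remainder falling back to the
      default numbering:
      
        <iothreads>4</iothreads>
        <iothreadids>
          <iothread id='2'/>
          <iothread id='10'/>
        </iothreadids>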
  8. 27 Apr 2015 (1 commit)
  9. 26 Apr 2015 (2 commits)
    • qemu: Connect to guest agent after channel hotplug · a03e2d3a
      Committed by Peter Krempa
      If a user hot-attaches the guest agent channel, libvirt would ignore it
      until libvirtd was restarted or the VM was shut down/destroyed and
      started again.
      
      This patch adds code that opens or closes the guest agent connection in
      response to connect/disconnect events on the guest agent channel.
      
      To allow opening the channel from the event handler, qemuConnectAgent
      needed to be exported.
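      
      A hedged sketch of the event plumbing; qemuConnectAgent() is the helper
      the commit exports, while the handler shape and the state constant are
      approximations:
      
        /* On a channel state-change event for the agent's virtio channel,
         * open or drop the agent connection accordingly. */
        if (STREQ_NULLABLE(dev->target.name, "org.qemu.guest_agent.0")) {
            if (state == VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED) {
                ret = qemuConnectAgent(driver, vm);
            } else {
                qemuAgentClose(priv->agent);
                priv->agent = NULL;
            }
        }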
    • qemu: agent: Differentiate errors when the agent channel was hotplugged · e1c04108
      Committed by Peter Krempa
      When the guest agent channel gets hotplugged to a VM, libvirt would
      still report that "QEMU guest agent is not configured" rather than
      stating that the connection has not been established yet.
      
      Currently the code is not able to connect to the agent after hotplug,
      but that will change in a later patch.
      
      As the qemuFindAgentConfig() helper is quite useful in this case, move
      it to a more accessible place and export it.
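      
      A sketch of the distinction being drawn (the error codes are plausible
      choices, not necessarily the ones the patch uses):
      
        /* Report "not configured" only when the definition truly lacks an
         * agent channel; otherwise say the agent is simply not connected. */
        if (!qemuFindAgentConfig(vm->def)) {
            virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                           _("QEMU guest agent is not configured"));
            return NULL;
        }
        if (!priv->agent) {
            virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
                           _("QEMU guest agent is not connected"));
            return NULL;
        }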
  10. 24 Apr 2015 (2 commits)
  11. 15 Apr 2015 (1 commit)
  12. 14 Apr 2015 (1 commit)
  13. 08 Apr 2015 (2 commits)
    • qemuProcessHook: Call virNuma*() only when needed · ea576ee5
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1198645
      
      Once upon a time, there was a little domain. And the domain was pinned
      onto a NUMA node and had not fully allocated its memory:
      
        <memory unit='KiB'>2355200</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
      
        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>
      
      Oh little me, said the domain, what will I do with so little memory.
      If only I had a few megabytes more. But the old admin noticed the
      whimpering, barely audible to the untrained human ear. And a good admin
      he was: he gave the domain yet more memory. But the old NUMA topology
      witch forbade allocating more memory on node zero. So he decided to
      allocate it on a different node:
      
      virsh # numatune little_domain --nodeset 0-1
      
      virsh # setmem little_domain 2355200
      
      The little domain was happy. For a while. Until a bad, sharp-toothed
      creature came. Every process in the system was afraid of him.
      The OOM Killer they called him. Oh no, he's after the little domain.
      There's no escape.
      
      Do you kids know why? Because when the little domain was born, her
      father, Libvirt, called numa_set_membind(). So even though the admin
      later allowed her to allocate memory from other nodes via cgroups, the
      membind() forbade it.
      
      So what's the lesson? Libvirt should rely on cgroups whenever possible
      and use numa_set_membind() only as a last-ditch effort (sketched below).
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
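      
      The moral in hedged pseudo-C; virCgroupHasController() and
      virNumaSetupMemoryPolicy() are real helpers, but the exact condition
      and variable names are illustrative:
      
        /* Fall back to the libnuma-based membind policy only when the
         * cpuset cgroup controller cannot enforce the nodeset, so the admin
         * can still widen the nodeset later through cgroups. */
        if (!virCgroupHasController(priv->cgroup,
                                    VIR_CGROUP_CONTROLLER_CPUSET) &&
            virNumaSetupMemoryPolicy(numatune, nodemask) < 0)
            goto cleanup;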
    • qemu: fix crash in qemuProcessAutoDestroy · 7578cc17
      Committed by Michael Chapman
      The destination libvirt daemon in a migration may segfault if the client
      disconnects immediately after the migration has begun:
      
        # virsh -c qemu+tls://remote/system list --all
         Id    Name                           State
        ----------------------------------------------------
        ...
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh -c qemu+tls://remote/system list --all
        error: failed to connect to the hypervisor
        error: unable to connect to server at 'remote:16514': Connection refused
      
      The crash is in:
      
         1531 void
         1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
         1533 {
         1534     qemuDomainObjPrivatePtr priv = obj->privateData;
         1535     qemuDomainJob job = priv->job.active;
         1536
         1537     priv->jobs_queued--;
      
      Backtrace:
      
        #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
        #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
        #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
        #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
        #4  in qemuConnectClose at qemu/qemu_driver.c:1154
        ...
      
      qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
      case is holding the last remaining reference to the domain.
      qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
      object has been freed and poisoned by then.
      
      This patch bumps the domain's refcount until qemuDomainRemoveInactive
      has completed. We also ensure qemuProcessAutoDestroy does not return the
      domain to virCloseCallbacksRun to be unlocked in this case. There is
      similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
      (which call virDomainObjListRemove directly).
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
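      
      The shape of the fix, sketched with the real refcounting primitives
      virObjectRef()/virObjectUnref() (surrounding context abbreviated):
      
        /* Pin the domain object for the duration of the removal so the
         * qemuDomainObjEndJob() call inside qemuDomainRemoveInactive()
         * never runs against a freed object. */
        virObjectRef(vm);
        qemuDomainRemoveInactive(driver, vm);
        virObjectUnref(vm);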
  14. 02 Apr 2015 (3 commits)
  15. 31 Mar 2015 (1 commit)
    • qemu: blockjob: Synchronously update backing chain in XML on ABORT/PIVOT · 630ee5ac
      Committed by Peter Krempa
      When the synchronous pivot option is selected, libvirt would not update
      the backing chain until the job was exited. Some applications then
      received invalid data, as their job was serialized first.
      
      This patch removes the polling used to wait for ABORT/PIVOT job
      completion and replaces it with a condition. If a synchronous operation
      is requested, the update of the XML is executed in the job of the caller
      of the synchronous request. Otherwise the monitor event callback uses a
      separate worker to update the backing chain with a new job.
      
      This is a regression since commit 1a92c719.
      
      When the ABORT job is finished synchronously you get the following call
      stack:
       #0  qemuBlockJobEventProcess
       #1  qemuDomainBlockJobImpl
       #2  qemuDomainBlockJobAbort
       #3  virDomainBlockJobAbort
      
      While previously or while using the _ASYNC flag you'd get:
       #0  qemuBlockJobEventProcess
       #1  processBlockJobEvent
       #2  qemuProcessEventHandler
       #3  virThreadPoolWorker
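      
      In hedged pseudo-code, the polling loop becomes a condition wait;
      virCondWait() is real, while the per-disk field names are hypothetical:
      
        /* Synchronous callers sleep on a per-disk condition instead of
         * polling; the event callback signals it once job status is in. */
        while (diskPriv->blockJobSync && !diskPriv->blockJobStatus) {
            if (virCondWait(&diskPriv->blockJobSyncCond,
                            &vm->parent.lock) < 0)
                goto endjob;
        }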
  16. 26 Mar 2015 (1 commit)
  17. 23 Mar 2015 (1 commit)
  18. 19 Mar 2015 (1 commit)
    • util: clean up #includes of virnetdevopenvswitch.h · 451547a4
      Committed by Laine Stump
      virnetdevopenvswitch.h declares a few functions that can be called to
      add ports to and remove them from OVS bridges, and retrieve the
      migration data for a port. It does not contain any data definitions
      that are used by domain_conf.h, but for some reason domain_conf.h was
      #including it; any file that actually needs virnetdevopenvswitch.h
      should be directly #including it instead. This adds a few lines to the
      project, but saves all the files that don't need it from the extra
      computing, and makes the dependencies more clear cut.
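      
      The net effect on a consumer file is just one explicit include
      (sketch):
      
        /* Files that actually use the OVS helpers now include the header
         * themselves instead of inheriting it through domain_conf.h. */
        #include "virnetdevopenvswitch.h"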
  19. 18 Mar 2015 (2 commits)
  20. 17 Mar 2015 (1 commit)
  21. 16 Mar 2015 (5 commits)
    • Convert virDomainVcpuPinFindByVcpu into virDomainPinFindByVcpu · a8a89270
      Committed by John Ferlan
      Since both the vCPU and IOThreads code use the same APIs, alter the
      naming of the APIs to remove the "Vcpu"-specific reference.
    • Convert virDomainVcpuPinDefPtr to virDomainPinDefPtr · 59ba7023
      Committed by John Ferlan
      As pointed out by jtomko in his review of the IOThreads pinning code:
      
      http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html
      
      there are some comments sprinkled in indicating that IOThreads were
      using the same structure as the VcpuPin code...
      
      This is the first patch of a few that will change the virDomainVcpuPin*
      structures and code to just virDomainPin*, starting with the data
      structure naming...
    • conf: Replace access to def->mem.max_balloon with accessor functions · 4f9907cd
      Committed by Peter Krempa
      As there are two possible approaches to defining a domain's memory size
      - one used with legacy, non-NUMA VMs configured via the <memory>
      element, and a per-node approach on NUMA machines - the user needs to
      make sure that both are specified correctly in the NUMA case.
      
      To avoid this burden on the user I'd like to replace the NUMA case with
      automatic totaling of the memory size. To achieve this I need to
      replace direct access to the virDomainMemtune's 'max_balloon' field
      with two separate getters depending on the desired size.
      
      The two sizes are needed because:
      1) The startup memory size doesn't include memory modules in some
      hypervisors.
      2) After startup, these count toward the usable memory size.
      
      Note that the comments for the functions are future-aware and document
      the state that will be present after a few later patches.
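      
      A hedged sketch of the accessor pair; the names follow the direction
      described above, with the NUMA totaling left to the later patches:
      
        /* Startup size and actual (post-startup) size diverge once memory
         * modules enter the picture, hence two separate getters. */
        unsigned long long
        virDomainDefGetMemoryInitial(const virDomainDef *def)
        {
            return def->mem.max_balloon;
        }
        
        unsigned long long
        virDomainDefGetMemoryActual(const virDomainDef *def)
        {
            /* later patches total the NUMA node sizes and add the sizes
             * of memory modules here */
            return def->mem.max_balloon;
        }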
    • qemu: event: Don't fiddle with disk backing trees without a job · 1a92c719
      Committed by Peter Krempa
      Surprisingly, we did not grab a VM job when a block job finished, and
      we'd happily rewrite the backing chain data. This made it possible to
      crash libvirt by queueing two backing chain updates in quick
      succession, among other badness.
      
      To fix it, add yet another handler to the helper thread that handles
      monitor events that require a job.
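      
      Sketched control flow for the fixed path; virThreadPoolSendJob() and
      the driver worker pool are real, while the event constant follows the
      existing naming pattern:
      
        /* The monitor event callback only queues work now; the worker
         * thread acquires a proper job before touching the backing chain. */
        processEvent->eventType = QEMU_PROCESS_EVENT_BLOCK_JOB;
        if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0)
            goto error;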
    • 5c634730
  22. 03 Mar 2015 (2 commits)