1. 08 April 2015 (14 commits)
    • vbox: Implement virDomainSendKey · 306a242d
      Committed by Dawid Zamirski
      Since holdtime is not supported by the VBOX SDK, it is simulated by
      sleeping before sending the key-up codes. The key-up codes are
      auto-generated from the key-down codes following XT codeset rules
      (adding 0x80 to each key-down code), which matches the behavior of the
      QEMU implementation.
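      A minimal, self-contained sketch of the idea (the sendScancodes()
      helper is hypothetical, standing in for the IKeyboard scancode call):

        #include <stddef.h>
        #include <unistd.h>

        /* Hypothetical helper standing in for the VBOX IKeyboard call that
         * pushes a batch of scancodes to the guest. */
        void sendScancodes(const unsigned int *codes, size_t n);

        void
        sendKeyWithHoldtime(const unsigned int *keydown, size_t nkeys,
                            unsigned int holdtime_ms)
        {
            unsigned int keyup[32];
            size_t n = nkeys < 32 ? nkeys : 32;
            size_t i;

            /* XT codeset rule: a key-up code is the key-down code plus 0x80 */
            for (i = 0; i < n; i++)
                keyup[i] = keydown[i] + 0x80;

            sendScancodes(keydown, n);

            /* holdtime is not supported natively by the SDK, so simulate it
             * by sleeping between the key-down and key-up batches */
            usleep(holdtime_ms * 1000);

            sendScancodes(keyup, n);
        }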
      306a242d
    • vbox: Register IKeyboard with the unified API. · 445733f3
      Committed by Dawid Zamirski
      The IKeyboard COM object is needed to implement virDomainSendKey and is
      available in all supported VBOX versions.
      445733f3
    • qemuProcessHook: Call virNuma*() only when needed · ea576ee5
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1198645
      
      Once upon a time, there was a little domain. And the domain was pinned
      onto a NUMA node and had not fully allocated its memory:
      
        <memory unit='KiB'>2355200</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
      
        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>
      
      Oh little me, said the domain, what will I do with so little memory.
      If only I had a few megabytes more. But the old admin noticed the
      whimpering, barely audible to the untrained human ear. And good admin
      he was, he gave the domain yet more memory. But the old NUMA topology
      witch forbade allocating more memory on node zero. So he decided to
      allocate it on a different node:
      
      virsh # numatune little_domain --nodeset 0-1
      
      virsh # setmem little_domain 2355200
      
      The little domain was happy. For a while. Until a bad, sharp-toothed
      creature came. Every process in the system was afraid of him.
      The OOM Killer, they called him. Oh no, he's after the little domain.
      There's no escape.
      
      Do you kids know why? Because when the little domain was born, her
      father, Libvirt, called numa_set_membind(). So even though the admin
      allowed her to allocate memory from other nodes in the cgroups, the
      membind() setting forbade it.
      
      So what's the lesson? Libvirt should rely on cgroups whenever
      possible and use numa_set_membind() only as a last-ditch effort.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      ea576ee5
    • vircgroup: Introduce virCgroupControllerAvailable · d65acbde
      Committed by Michal Privoznik
      This new internal API checks whether a given CGroup controller is
      available. It will be needed later, when we have to decide whether to
      pin domain memory onto NUMA nodes using the cpuset CGroup controller
      or using numa_set_membind().
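      A rough sketch of how that decision could look once the helper exists
      (the signature of virCgroupControllerAvailable() and the controller
      constant shown here are assumptions, not the final code):

        #include <stdbool.h>
        #include <numa.h>   /* numa_set_membind(), struct bitmask */

        /* Assumed internal API: returns true if the controller is usable. */
        bool virCgroupControllerAvailable(int controller);

        #define VIR_CGROUP_CONTROLLER_CPUSET 1  /* placeholder for the enum */

        void
        pinDomainMemory(struct bitmask *nodemask)
        {
            if (virCgroupControllerAvailable(VIR_CGROUP_CONTROLLER_CPUSET)) {
                /* Preferred: let cgroups constrain the allocation; the admin
                 * can relax it later (numatune) without restarting the guest */
                /* ... write the nodeset to cpuset.mems via the cgroup code ... */
            } else {
                /* Last resort: hard-binds the process and cannot be relaxed
                 * at runtime, which is exactly what bit the little domain */
                numa_set_membind(nodemask);
            }
        }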
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      d65acbde
    • qemu_driver: check caps after starting block job · cfcdf5ff
      Committed by Michael Chapman
      Currently we check qemuCaps before starting the block job. But qemuCaps
      isn't available on a stopped domain, which means we get a misleading
      error message in this case:
      
        # virsh domstate example
        shut off
      
        # virsh blockjob example vda
        error: unsupported configuration: block jobs not supported with this QEMU binary
      
      Move the qemuCaps check into the block job so that we are guaranteed the
      domain is running.
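      A simplified sketch of the intended ordering inside the job (label and
      capability names are illustrative, not the literal patch):

        /* Once the job is acquired, the domain is checked for liveness, so
         * the capability lookup can no longer race with the domain being
         * shut off and report the misleading "not supported" error. */
        if (!virDomainObjIsActive(vm)) {
            virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                           _("domain is not running"));
            goto endjob;
        }

        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("block jobs not supported with this QEMU binary"));
            goto endjob;
        }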
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      cfcdf5ff
    • qemu_migrate: use nested job when adding NBD to cookie · 72df8314
      Committed by Michael Chapman
      qemuMigrationCookieAddNBD is usually called from within an async
      MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
      
      (The one exception is during the Begin phase when change protection
      isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
      as qemuDomainObjEnterMonitor in this case.)
      
      This bug was encountered with a libvirt client that repeatedly queries
      the disk mirroring block job info during a migration. If one of these
      queries occurs just as the Perform migration cookie is baked, libvirt
      crashes.
      
      Relevant logs are as follows:
      
          6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
      [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
      [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
      [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
      [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
          6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
      
      At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
      on mon->notify. At [2] the request is written out to the monitor socket.
      At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
      mon->notify. The reply from the first request is received at [4].
      However, qemuMonitorJSONIOProcessLine is not expecting this reply since
      the second request hadn't completed sending. The reply is dropped and an
      error is returned.
      
      qemuMonitorIO signals mon->notify twice during its error handling,
      waking up both of the threads waiting on it. One of them clears mon->msg
      as it exits qemuMonitorSend; the other crashes:
      
        qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
        975         while (!mon->msg->finished) {
        (gdb) print mon->msg
        $1 = (qemuMonitorMessagePtr) 0x0
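      In outline, the fix is to enter the monitor with the current async job
      so a nested job is taken (a sketch; the surrounding cookie code and
      error handling are abbreviated):

        /* Before: unsafe while an async MIGRATION_IN/OUT job is active */
        /* qemuDomainObjEnterMonitor(driver, vm); */

        /* After: request a nested job under the running async job */
        if (qemuDomainObjEnterMonitorAsync(driver, vm, priv->job.asyncJob) < 0)
            return -1;

        /* ... issue query-block etc. to collect the NBD disk sizes ... */

        if (qemuDomainObjExitMonitor(driver, vm) < 0)
            return -1;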
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      72df8314
    • parallels: delete old networks in prlsdkDoApplyConfig before adding new ones · 9baf87bb
      Committed by Maxim Nestratov
      In order to change an existing domain we delete all existing devices
      and add new ones from scratch. In the case of network devices we
      should also delete the corresponding virtual networks (if any) before
      removing the actual devices from the XML. The patch does this by
      extending prlsdkDoApplyConfig with a new parameter that carries the
      old XML, and calling prlsdkDelNet whenever the old XML is specified.
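      A rough sketch of the control flow (parameter lists are illustrative,
      not the driver's exact prototypes):

        static int
        prlsdkDoApplyConfig(virConnectPtr conn,
                            virDomainDefPtr def,
                            virDomainDefPtr olddef)
        {
            /* When reconfiguring an existing domain, tear down the virtual
             * networks backing the old network devices first; the devices
             * themselves are then removed and re-added from the new def. */
            if (olddef && prlsdkDelNet(conn->privateData, olddef) < 0)
                return -1;

            /* ... remove old devices, apply devices from def ... */
            return 0;
        }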
      Signed-off-by: Maxim Nestratov <mnestratov@parallels.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      9baf87bb
    • util: fix removal of callbacks in virCloseCallbacksRun · fa2607d5
      Committed by Michael Chapman
      The close callbacks hash is keyed by UUID string, but
      virCloseCallbacksRun was attempting to remove entries by raw UUID.
      This patch ensures the callback entries are removed by UUID string as
      well.
      
      This bug caused problems when guest migrations were abnormally aborted:
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh migrate example qemu+tls://remote/system \
            --verbose --compressed --live --auto-converge \
            --abort-on-error --unsafe --persistent \
            --undefinesource --copy-storage-all --xml example.xml
        error: Requested operation is not valid: domain 'example' is not being migrated
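      The essence of the fix, sketched as a self-contained helper
      (virCloseCallbacks internals are simplified; only the UUID formatting
      step is the point):

        #include <stdio.h>

        /* Mirrors virUUIDFormat(): render the 16 raw bytes as the canonical
         * 8-4-4-4-12 string that the hash table is keyed by. The bug was
         * passing the raw 16-byte UUID as the key, which never matches an
         * entry stored under its string form, so stale callbacks lingered. */
        void
        formatUuidKey(const unsigned char uuid[16], char *out /* >= 37 bytes */)
        {
            sprintf(out,
                    "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                    "%02x%02x%02x%02x%02x%02x",
                    uuid[0], uuid[1], uuid[2], uuid[3],
                    uuid[4], uuid[5], uuid[6], uuid[7],
                    uuid[8], uuid[9], uuid[10], uuid[11],
                    uuid[12], uuid[13], uuid[14], uuid[15]);
        }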
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      fa2607d5
    • qemu: fix race between disk mirror fail and cancel · e5d729ba
      Committed by Michael Chapman
      If a VM migration is aborted, a disk mirror may be failed by QEMU before
      libvirt has a chance to cancel it. The disk->mirrorState remains at
      _ABORT in this case, and this breaks subsequent mirrorings of that disk.
      
      We should instead check the mirrorState directly and transition to _NONE
      if it is already aborted. Do the check *after* aborting the block job in
      QEMU to avoid a race.
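      Roughly (field and constant names follow the message; the surrounding
      cancel path is omitted):

        /* ... abort the block job via the QEMU monitor first ... */

        /* Then, only after the abort, reset a mirror that QEMU already
         * failed on its own; otherwise the stale _ABORT state blocks any
         * future mirroring of this disk. */
        if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
            disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;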
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      e5d729ba
    • qemu: fix error propagation in qemuMigrationBegin · 77ddd0bb
      Committed by Michael Chapman
      If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
      indicate an error occurred.
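      In outline (a sketch; the real function's cleanup labels may differ):

        if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn,
                                 qemuMigrationCleanup) < 0) {
            VIR_FREE(xml);   /* discard the already-built migration XML ...  */
            goto endjob;     /* ... and fall through to return NULL, so the  */
        }                    /* caller actually sees the failure             */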
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      77ddd0bb
    • qemu: fix crash in qemuProcessAutoDestroy · 7578cc17
      Committed by Michael Chapman
      The destination libvirt daemon in a migration may segfault if the client
      disconnects immediately after the migration has begun:
      
        # virsh -c qemu+tls://remote/system list --all
         Id    Name                           State
        ----------------------------------------------------
        ...
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh -c qemu+tls://remote/system list --all
        error: failed to connect to the hypervisor
        error: unable to connect to server at 'remote:16514': Connection refused
      
      The crash is in:
      
         1531 void
         1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
         1533 {
         1534     qemuDomainObjPrivatePtr priv = obj->privateData;
         1535     qemuDomainJob job = priv->job.active;
         1536
         1537     priv->jobs_queued--;
      
      Backtrace:
      
        #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
        #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
        #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
        #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
        #4  in qemuConnectClose at qemu/qemu_driver.c:1154
        ...
      
      qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
      case is holding the last remaining reference to the domain.
      qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
      object has been freed and poisoned by then.
      
      This patch bumps the domain's refcount until qemuDomainRemoveInactive
      has completed. We also ensure qemuProcessAutoDestroy does not return the
      domain to virCloseCallbacksRun to be unlocked in this case. There is
      similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
      (which call virDomainObjListRemove directly).
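      The shape of the fix as described above, sketched (error handling and
      the persistent-config cleanup are trimmed):

        static void
        qemuDomainRemoveInactive(virQEMUDriverPtr driver, virDomainObjPtr vm)
        {
            /* Hold an extra reference so that dropping the list's reference
             * in virDomainObjListRemove() cannot free the object while it is
             * still needed for qemuDomainObjEndJob(). */
            virObjectRef(vm);

            /* ... virDomainObjListRemove() and other cleanup ... */

            qemuDomainObjEndJob(driver, vm);

            virObjectUnref(vm);
        }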
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      7578cc17
    • virQEMUDriverGetConfig: Fix memleak · 225aa802
      Committed by Michal Privoznik
      ==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
      ==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
      ==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
      ==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
      ==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
      ==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
      ==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
      ==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
      ==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
      ==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      225aa802
    • virDomainVirtioSerialAddrSetFree: Fix memleak · 8d971cec
      Committed by Michal Privoznik
      ==19015== 8 bytes in 1 blocks are definitely lost in loss record 34 of 1,049
      ==19015==    at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x4C2C32F: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x52AD888: virReallocN (viralloc.c:245)
      ==19015==    by 0x52AD97E: virExpandN (viralloc.c:294)
      ==19015==    by 0x52ADC51: virInsertElementsN (viralloc.c:436)
      ==19015==    by 0x5335864: virDomainVirtioSerialAddrSetAddController (domain_addr.c:816)
      ==19015==    by 0x53358E0: virDomainVirtioSerialAddrSetAddControllers (domain_addr.c:839)
      ==19015==    by 0x1DD5513B: qemuDomainAssignVirtioSerialAddresses (qemu_command.c:1422)
      ==19015==    by 0x1DD55A6E: qemuDomainAssignAddresses (qemu_command.c:1711)
      ==19015==    by 0x1DDA5818: qemuProcessStart (qemu_process.c:4616)
      ==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
      ==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      8d971cec
    • qemuSetupCgroupForVcpu: Fix memleak · 9dbe6f31
      Committed by Michal Privoznik
      ==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
      ==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x52AD74B: virAlloc (viralloc.c:144)
      ==19015==    by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
      ==19015==    by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
      ==19015==    by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
      ==19015==    by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
      ==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
      ==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
      ==19015==    by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
      ==19015==    by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
      ==19015==    by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
      ==19015==    by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      9dbe6f31
  2. 07 April 2015 (6 commits)
  3. 03 April 2015 (8 commits)
    • 6b55c18f
    • libvirt: virsh: Kill all uses of __FUNCTION__ in error messages · c4db8c5e
      Committed by Noella Ashu
      The error output of snapshot-revert should be friendlier; there is no
      need to show virDomainRevertToSnapshot to the user. virReportError
      already records __FUNCTION__ in a separate member of the struct, so
      repeating it in the message is redundant and leads to situations where
      higher-level code ends up reporting the lower-level name. The error
      output has been converted accordingly, making it more succinct and
      user-friendly.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1086726
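      An illustrative before/after (not the exact call sites touched by the
      patch):

        /* Before: the function name is duplicated into the message */
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("%s: failed to revert snapshot"), __FUNCTION__);

        /* After: virReportError() already records __FUNCTION__ internally,
         * so the message can stay focused on what actually went wrong */
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("failed to revert snapshot"));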
      c4db8c5e
    • virsh: Fix domifaddr output in quiet mode · 156fde0b
      Committed by Luyao Huang
      In virsh we have two printing functions: vshPrint(), which prints a
      string to stdout, and vshPrintExtra(), which prints nothing when virsh
      is run in quiet mode. Usually the former is used to print actual
      results, while the latter prints things like table headers and other
      formatting. However, in cmdDomIfAddr we mistakenly used vshPrintExtra
      even for actual data. After this patch, the output looks like the
      following:
      
        # virsh -q domifaddr test3 --source agent
        lo         00:00:00:00:00:00    ipv4         127.0.0.1/8
        -          -                    ipv6         ::1/128
        ens8       52:54:00:1a:cb:3f    ipv6         fe80::5054:ff:fe1a:cb3f/64
        virbr0     52:54:00:db:51:e7    ipv4         192.168.122.1/24
        virbr0-nic 52:54:00:db:51:e7    N/A          N/A
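      The change amounts to printing the per-interface rows with vshPrint()
      instead of vshPrintExtra() (a sketch of the pattern, not the exact
      cmdDomIfAddr code):

        /* Header/formatting: hidden in quiet (-q) mode */
        vshPrintExtra(ctl, " %-10s %-20s %-8s     %s\n",
                      _("Name"), _("MAC address"), _("Protocol"), _("Address"));

        /* Actual data: must be printed even in quiet mode */
        vshPrint(ctl, " %-10s %-20s %-8s     %s\n",
                 iface_name, mac_addr, proto, addr_prefix);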
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      156fde0b
    • esx: esxNodeGetFreeMemory return info from host. · 66fe31d1
      Committed by Dawid Zamirski
      Before this patch, when connected via vCenter, the free memory returned
      was from the resourcePool (usually a cluster). This conflicts with
      e.g. esxNodeGetInfo, which always pulls info from the ESX host.
      Since the libvirt ESX driver works primarily with ESX hosts, this patch
      changes esxNodeGetFreeMemory to pull that information from the ESX host
      so it is consistent with the behavior of esxNodeGetInfo.
      66fe31d1
    • esx: add esxVI_GetInt · 486a8e47
      Committed by Dawid Zamirski
      Modeled after the already existing esxVI_GetLong.
      486a8e47
    • conf: Change virStoragePoolSaveConfig prototype s/configDir/configFile · 17ab5bc0
      Committed by Erik Skultety
      Just a minor change; the old parameter name might be a little confusing
      for someone looking only at the API.
      17ab5bc0
    • conf: Introduce virStoragePoolSaveState · 39b183b4
      Committed by Erik Skultety
      Introduce virStoragePoolSaveState to properly format the state XML in
      the same manner as virStoragePoolDefFormat, except for adding a
      <poolstate> ... </poolstate> around the definition. This is similar to
      virNetworkObjFormat used to save the live/active network information.
      39b183b4
    • conf: Introduce virStoragePoolDefFormatBuf · 6ae11909
      Committed by Erik Skultety
      When modifying config/status XML, it might be handy to include some
      additional XML elements (e.g. <poolstate>). In order to do so,
      introduce a new formatting function virStoragePoolDefFormatBuf and make
      virStoragePoolDefFormat call it.
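      The refactoring follows a common libvirt pattern: the buffer-based
      function does the real work and the string-returning one wraps it (a
      sketch; the real parameter lists are longer):

        static int
        virStoragePoolDefFormatBuf(virBufferPtr buf,
                                   virStoragePoolDefPtr def)
        {
            /* ... emit <pool> ... </pool> into buf ... */
            return 0;
        }

        char *
        virStoragePoolDefFormat(virStoragePoolDefPtr def)
        {
            virBuffer buf = VIR_BUFFER_INITIALIZER;

            if (virStoragePoolDefFormatBuf(&buf, def) < 0)
                goto error;
            if (virBufferCheckError(&buf) < 0)
                goto error;
            return virBufferContentAndReset(&buf);

         error:
            virBufferFreeAndReset(&buf);
            return NULL;
        }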
      6ae11909
  4. 02 April 2015 (12 commits)
    • libxl: fix dom0 balloon logic · d685c0f9
      Committed by Jim Fehlig
      Recent testing on large memory systems revealed a bug in the Xen xl
      tool's freemem() function.  When autoballooning is enabled, freemem()
      is used to ensure enough memory is available to start a domain,
      ballooning dom0 if necessary.  When ballooning large amounts of memory
      from dom0, freemem() would exceed its self-imposed wait time and
      return an error.  Meanwhile, dom0 continued to balloon.  Starting the
      domain later, after sufficient memory was ballooned from dom0, would
      succeed.  The libvirt implementation in libxlDomainFreeMem() suffers
      the same bug since it is modeled after freemem().
      
      In the end, the best place to fix the bug on the Xen side was to
      slightly change the behavior of libxl_wait_for_memory_target().
      Instead of failing after caller-provided wait_sec, the function now
      blocks as long as dom0 memory ballooning is progressing.  It will return
      failure only when more memory is needed to reach the target and wait_sec
      have expired with no progress being made.  See xen.git commit fd3aa246.
      There was a discussion on how this would affect other libxl apps like
      libvirt:
      
      http://lists.xen.org/archives/html/xen-devel/2015-03/msg00739.html
      
      If libvirt containing this patch is built against a Xen containing
      the old libxl_wait_for_memory_target() behavior, libxlDomainFreeMem()
      will fail after 30 sec and domain creation will be terminated.
      Without this patch and with the old libxl_wait_for_memory_target()
      behavior, libxlDomainFreeMem() does not succeed after 30 sec, but
      returns success anyway.  Domain creation continues, resulting in all
      sorts of fun stuff
      like cpu soft lockups in the guest OS.  It was decided to properly fix
      libxl_wait_for_memory_target(), and if anything improve the default
      behavior of apps using the freemem reference impl in xl.
      
      xl was patched to accommodate the change in libxl_wait_for_memory_target()
      with xen.git commit 883b30a0.  This patch does the same in the libxl
      driver.  While at it, I changed the logic to essentially match
      freemem() in $xensrc/tools/libxl/xl_cmdimpl.c.  It was a bit cleaner
      IMO and will make it easier to spot future, potentially interesting
      divergences.
      d685c0f9
    • Typos: Get rid of dependan(t|cies) · 2a15fef0
      Committed by Martin Kletzander
      Dependant is flagged as wrong in the US dictionary (it is only valid in
      the UK dictionary, and even then it has only the financial sense, not
      the inter-relatedness sense that we mostly want throughout the code).
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
      2a15fef0
    • b2089588
    • hostdev: Fix index error in loop after remove an element · 7adb4bfc
      Committed by Huanle Han
      'virPCIDeviceList' is actually an array. Removing one element makes the
      remaining elements move.
      
      Use a while loop, and only increment the index when the current element
      is not removed by virPCIDeviceListDel(pcidevs, dev), as sketched below.
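      The general pattern, shown on a plain array (self-contained; not the
      libvirt code itself):

        #include <stddef.h>
        #include <string.h>

        /* Remove all negative values from arr, keeping the survivors packed.
         * Returns the new element count. */
        size_t
        filterNegative(int *arr, size_t count)
        {
            size_t i = 0;

            while (i < count) {
                if (arr[i] < 0) {
                    /* shift the tail left; a new element now occupies slot i,
                     * so do NOT advance the index */
                    memmove(&arr[i], &arr[i + 1],
                            (count - i - 1) * sizeof(arr[0]));
                    count--;
                } else {
                    i++;
                }
            }
            return count;
        }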
      Signed-off-by: Huanle Han <hanxueluo@gmail.com>
      7adb4bfc
    • Fix xlconfigtest with older libxl · b84c5729
      Committed by Ján Tomko
      Commit cd5dc303 added this test, but it fails if
      LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST is not defined:
      
      6) Xen XM-2-XML Format fullvirt-multiusb
      ... libvirt:  error : unsupported configuration: multiple USB
      devices not supported
      FAILED
      b84c5729
    • Auto add virtio-serial controllers · 1371ea92
      Committed by Ján Tomko
      In virDomainVirtioSerialAddrNext, add another controller
      if we've exhausted all ports of the existing controllers.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1076708
      1371ea92
    • 89e991a2
    • ee0d97a7
    • Allocate virtio-serial addresses when starting a domain · 59033788
      Committed by Ján Tomko
      Instead of always using controller 0 and incrementing port number,
      respect the maximum port numbers of controllers and use all of them.
      
      Ports for virtio consoles are quietly reserved, but not formatted
      (neither in XML nor on QEMU command line).
      
      Also rejects duplicate virtio-serial addresses.
      https://bugzilla.redhat.com/show_bug.cgi?id=890606
      https://bugzilla.redhat.com/show_bug.cgi?id=1076708
      
      Test changes:
      * virtio-auto.args
        Filling out the port when just the controller is specified:
        switched from using maxport + 1 to the first free port on the
        controller.
      * virtio-autoassign.args
        Filling out the address when no <address> is specified.
        Started using all the controllers instead of 0, also discards
        the bus value.
      * xml -> xml output of virtio-auto
        The port assignment is no longer done as a part of XML parsing,
        so the unspecified values stay 0.
      59033788
    • Add functions to track virtio-serial addresses · 16db8d2e
      Committed by Ján Tomko
      Create a sorted array of virtio-serial controllers.
      Each of the elements contains the controller index
      and a bitmap of available ports.
      
      Buses are not tracked, because they aren't supported by QEMU.
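      The data structure is roughly this shape (a sketch; the real code uses
      libvirt's virBitmap rather than a raw mask):

        #include <stddef.h>
        #include <stdint.h>

        typedef struct {
            unsigned int idx;    /* controller index from the XML           */
            uint32_t     ports;  /* bit N set => port N is still free       */
        } VioSerialController;

        typedef struct {
            VioSerialController *controllers;  /* kept sorted by idx */
            size_t ncontrollers;
        } VioSerialAddrSet;

        /* Find the first controller with a free port; returns the port
         * number or -1 when every controller is exhausted (the caller may
         * then add a new controller, as the related commit above does). */
        static int
        vioSerialNextPort(VioSerialAddrSet *set, unsigned int *ctrl_idx)
        {
            size_t i;
            int port;

            for (i = 0; i < set->ncontrollers; i++) {
                for (port = 0; port < 32; port++) {
                    if (set->controllers[i].ports & (1u << port)) {
                        set->controllers[i].ports &= ~(1u << port);
                        *ctrl_idx = set->controllers[i].idx;
                        return port;
                    }
                }
            }
            return -1;
        }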
      16db8d2e
    • Add test for virtio serial port assignment · 8945d6a8
      Committed by Ján Tomko
      Add a test to demonstrate the effect of automatic virtio-serial
      address assignment.
      8945d6a8
    • scsi: Remove unused 'type_path' in processLU · f51fbdd1
      Committed by John Ferlan
      Seems to be a remnant that was never cleaned up from the original submission...
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      f51fbdd1