1. 10 April 2015, 17 commits
    • conf: add input device type for parallels containers · 6cc2cdf6
      Committed by Dmitry Guryanov
      Add the VIR_DOMAIN_INPUT_BUS_PARALLELS device type
      to handle domain configuration properly for
      parallels containers when VNC is enabled.

      When the domain configuration has at least one
      'graphics' element, there should be a mouse and keyboard.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      6cc2cdf6
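      A rough sketch of the kind of change this implies (the surrounding enum
      values and string list are quoted from memory, so check the tree):

        /* src/conf/domain_conf.h: append the new bus before _LAST */
        typedef enum {
            VIR_DOMAIN_INPUT_BUS_PS2,
            VIR_DOMAIN_INPUT_BUS_USB,
            VIR_DOMAIN_INPUT_BUS_XEN,
            VIR_DOMAIN_INPUT_BUS_PARALLELS, /* pseudo input bus for containers */

            VIR_DOMAIN_INPUT_BUS_LAST
        } virDomainInputBus;

        /* src/conf/domain_conf.c: matching XML attribute strings */
        VIR_ENUM_IMPL(virDomainInputBus, VIR_DOMAIN_INPUT_BUS_LAST,
                      "ps2", "usb", "xen", "parallels")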
    • conf: return proper default video type for parallels · 756f8dcd
      Committed by Dmitry Guryanov
      Fix the virDomainVideoDefaultType function for
      parallels VMs and containers: it should return
      VGA for VMs and VIR_DOMAIN_VIDEO_TYPE_PARALLELS
      for containers.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      756f8dcd
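      A minimal sketch of the intended behaviour inside virDomainVideoDefaultType(),
      assuming (as for other container drivers) that parallels containers are
      identified by an os type of "exe":

        case VIR_DOMAIN_VIRT_PARALLELS:
            /* containers get the pseudo video device, VMs get plain VGA */
            if (STREQ_NULLABLE(def->os.type, "exe"))
                return VIR_DOMAIN_VIDEO_TYPE_PARALLELS;
            return VIR_DOMAIN_VIDEO_TYPE_VGA;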
    • conf: add VIR_DOMAIN_VIDEO_TYPE_PARALLELS video type · 0d572b69
      Committed by Dmitry Guryanov
      We support VNC for containers so that they have the
      same interface as VMs. At the moment it just renders
      the Linux text console.

      Of course we don't pass through any physical devices and
      don't emulate virtual devices. Our VNC server
      renders text from the terminal master and sends
      input events from the VNC client to the terminal.

      So add a special video type, VIR_DOMAIN_VIDEO_TYPE_PARALLELS,
      for these pseudo-devices.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      0d572b69
    • parallels: don't fill net adapter model for containers · b16868a1
      Committed by Dmitry Guryanov
      A network adapter model makes no sense for a container,
      so we shouldn't set it to e1000 in
      parallelsDomainDeviceDefPostParse.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      b16868a1
    • parallels: fill adapter model in virDomainNetDef · 6a06b467
      Committed by Dmitry Guryanov
      We handle this parameter for VMs while defining
      domains, so let's get this property from PCS and
      set the corresponding field of virDomainNetDef in
      the prlsdkLoadDomains function.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      6a06b467
    • parallels: add controllers in prlsdkLoadDomain · b204afa1
      Committed by Dmitry Guryanov
      Call virDomainDefAddImplicitControllers to add disk
      controllers, so that the virDomainDef filled by this function
      looks exactly like the one returned by virDomainDefParseString.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      b204afa1
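      The call itself is small; a sketch of where it would sit (the error
      label name is illustrative):

        /* at the end of prlsdkLoadDomain(), once all disks are in the def */
        if (virDomainDefAddImplicitControllers(def) < 0)
            goto error;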
    • parallels: report, that cdroms are readonly · 66aee375
      Committed by Dmitry Guryanov
      Set the readonly flag for cdrom devices when we
      retrieve the list of domains from PCS.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      66aee375
    • parallels: implement virDomainManagedSave · 8951ad86
      Committed by Dmitry Guryanov
      Implement the virDomainManagedSave API function. In PCS
      this feature is called "suspend". You can suspend a VM or
      CT while it is in the running or paused state, and after
      resuming (or starting) it will have the same state as
      before the suspend.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      8951ad86
    • parallels: split prlsdkDomainChangeState function · 233b799d
      Committed by Dmitry Guryanov
      Split the prlsdkDomainChangeState function into
      prlsdkDomainChangeStateLocked and prlsdkDomainChangeState,
      so it can be used from places where the virDomainObj is
      already found and locked.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      233b799d
    • parallels: fix headers in parallels_sdk.h · 18558ae8
      Committed by Dmitry Guryanov
      The return value of the prlsdkStart/Kill/Stop etc. functions
      is PRL_RESULT in parallels_sdk.c but int in parallels_sdk.h.
      PRL_RESULT is an int, so the compiler didn't report errors.
      Let's fix the discrepancy.
      Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
      18558ae8
    • qemu: qemuDomainHotplugVcpus - separate out the del cgroup and pin · 97a1d94f
      Committed by John Ferlan
      Future IOThread setting patches would copy the code anyway, so split out
      and generalize deleting the cgroup and pindef for the vcpu into its own API.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      97a1d94f
    • qemu: qemuDomainHotplugVcpus - separate out the add cgroup · 0ed8e47a
      Committed by John Ferlan
      Future IOThread setting patches would copy the code anyway, so split out
      and generalize adding the vcpu to a cgroup into its own API.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      0ed8e47a
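      A rough, hypothetical shape for such a helper (the name, parameters and
      body below are illustrative, not the actual patch):

        static int
        qemuDomainHotplugAddVcpuCgroup(virQEMUDriverPtr driver,
                                       virDomainObjPtr vm,
                                       unsigned int vcpu)
        {
            qemuDomainObjPrivatePtr priv = vm->privateData;
            virCgroupPtr cgroup_vcpu = NULL;
            int ret = -1;

            /* create (or attach to) the per-vcpu cgroup, then apply pinning */
            if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU,
                                   vcpu, true, &cgroup_vcpu) < 0)
                goto cleanup;

            ret = 0;
         cleanup:
            virCgroupFree(&cgroup_vcpu);
            return ret;
        }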
    • cgroup: Use virCgroupNewThread · 0456eda3
      Committed by John Ferlan
      Replace the virCgroupNew{Vcpu|Emulator|IOThread} calls with the common
      virCgroupNewThread API.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      0456eda3
    • cgroup: Introduce virCgroupNewThread · 2cd3a980
      Committed by John Ferlan
      Create a new common API to replace the virCgroupNew{Vcpu|Emulator|IOThread}
      APIs, using an enum to generate the cgroup name.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      2cd3a980
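      A sketch of what such an API looks like (names and argument order are
      quoted from memory and may differ slightly from the tree):

        typedef enum {
            VIR_CGROUP_THREAD_VCPU = 0,
            VIR_CGROUP_THREAD_EMULATOR,
            VIR_CGROUP_THREAD_IOTHREAD,

            VIR_CGROUP_THREAD_LAST
        } virCgroupThreadName;

        int virCgroupNewThread(virCgroupPtr domain,
                               virCgroupThreadName nameval,
                               int id,
                               bool create,
                               virCgroupPtr *group);

        /* so, e.g., virCgroupNewVcpu(priv->cgroup, i, true, &cg) becomes
         * virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i,
         *                    true, &cg) */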
    • storage: Don't duplicate efforts of backend driver · 2ac0e647
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1206521
      
      If the backend driver updates the pool available and/or allocation values,
      then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
      should not change the values; otherwise, it will appear as if the values
      were "doubled" for each change.  Additionally, since unsigned arithmetic will
      be used depending on the size and operation, either or both values could
      appear to be much larger than they should be (in the EiB range).
      
      Currently only the disk pool updates the values, but other pools could.
      Assume a "fresh" disk pool of 500 MiB using /dev/sde:
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     0.00 B
      Available:      509.84 MiB
      
      $ virsh vol-create-as disk-pool sde1 --capacity 300M
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     600.47 MiB
      Available:      16.00 EiB
      
      The following assumes the disk backend has been updated to refresh the disk
      pool at deletion of a primary partition as well as an extended partition:
      
      $ virsh vol-delete --pool disk-pool sde1
      Vol sde1 deleted
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     9.73 EiB
      Available:      6.27 EiB
      
      This patch will check if the backend updated the pool values and honor that
      update.
      2ac0e647
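      A rough sketch of the idea (variable and callback names follow the storage
      driver's usual style but are quoted from memory):

        /* remember the pool values before handing off to the backend */
        unsigned long long orig_alloc = pool->def->allocation;
        unsigned long long orig_avail = pool->def->available;

        if (backend->createVol(conn, pool, voldef) < 0)
            goto cleanup;

        if (pool->def->allocation == orig_alloc &&
            pool->def->available == orig_avail) {
            /* the backend did not refresh the pool itself,
             * so account for the new volume here */
            pool->def->allocation += voldef->target.allocation;
            pool->def->available -= voldef->target.allocation;
        }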
    • storage: Need to update freeExtent at delete primary partition · 1ffd82bb
      Committed by John Ferlan
      Commit id '471e1c4e' only considered updating the pool if the extended
      partition was removed. As it turns out, removing a primary partition
      also needs to update the freeExtent list, otherwise the following
      sequence fails (assuming a "fresh" disk pool for /dev/sde of 500M):
      
      $  virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     0.00 B
      Available:      509.84 MiB
      
      $ virsh vol-create-as disk-pool sde1 --capacity 300M
      $ virsh vol-delete --pool disk-pool sde1
      $ virsh vol-create-as disk-pool sde1 --capacity 300M
      error: Failed to create vol sde1
      error: internal error: no large enough free extent
      
      $
      
      This patch will refresh the pool, rereading the partitions, and
      return
      1ffd82bb
    • storage: Fix issues in storageVolResize · 1095230d
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1073305
      
      When creating a volume in a pool, the creation allows the 'capacity'
      value to be larger than the available space in the pool. As long as
      the 'allocation' value will fit in the space, the volume will be created.
      
      However, when resizing the volume, the checks compared the new absolute
      capacity value against the existing capacity plus the available space,
      without regard for whether the new absolute capacity was actually allocating
      space or not.  For example, in a pool with 75G of available space, creating
      a 10G volume with a capacity of 100G and an allocation of 10G will succeed;
      however, if the volume instead used a capacity of 10G and then tried
      to resize the capacity to 100G, the code would fail to allow the backend
      to try the resize.
      
      Furthermore, when updating the pool "available" and "allocation" values,
      the resize code would just "blindly" adjust them regardless of whether
      space was "allocated" or just "capacity" was being adjusted.  This left
      a scenario whereby a resize to 100G would fail; however, a resize to 50G
      followed by one to 100G would both succeed.  Again, neither was adjusting
      the allocation value, just the "capacity" value.
      
      This patch adds more logic to the resize code to understand whether the
      new capacity value is actually "allocating" space as well and whether it is
      shrinking or expanding. Since unsigned arithmetic is involved, adjusting
      the pool size values incorrectly would otherwise be a real possibility.
      
      This patch also ensures that updates to the pool values only occur if we
      actually performed the allocation.
      
      NB: The storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
      APIs each only update the pool allocation/availability values by the target
      volume's allocation value.
      1095230d
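      A rough sketch of the extra accounting (the flag is the public API's
      VIR_STORAGE_VOL_RESIZE_ALLOCATE; variable and label names are illustrative):

        unsigned long long delta = 0;

        if (flags & VIR_STORAGE_VOL_RESIZE_ALLOCATE) {
            if (abs_capacity < vol->target.allocation) {
                /* shrinking: space goes back to the pool */
                delta = vol->target.allocation - abs_capacity;
                pool->def->available += delta;
                pool->def->allocation -= delta;
            } else {
                /* growing: make sure the pool can back the new allocation */
                delta = abs_capacity - vol->target.allocation;
                if (delta > pool->def->available)
                    goto out_of_space;   /* illustrative error label */
                pool->def->available -= delta;
                pool->def->allocation += delta;
            }
        }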
  2. 09 April 2015, 9 commits
  3. 08 April 2015, 14 commits
    • virLXCControllerSetupResourceLimits: Call virNuma*() iff needed · 36256688
      Committed by Michal Privoznik
      Like we do in the qemu driver (ea576ee5), let's call
      virNumaSetupMemoryPolicy() only if really needed. The problem is,
      if we numa_set_membind() the child, there's no way to change it from
      the daemon afterwards. So any later attempts to change the pinning
      will fail, but in a very weird way: CGroups will be set, but due
      to the membind the child will not allocate memory from any other node.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      36256688
    • fix memleak in qemuRestoreCgroupState · 7cd0cf05
      Committed by Luyao Huang
       131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
          at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
          by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
          by 0x52A026F: virReallocN (viralloc.c:245)
          by 0x52BFCB5: saferead_lim (virfile.c:1268)
          by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
          by 0x52C019A: virFileReadAll (virfile.c:1351)
          by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
          by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
          by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
          by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
          by 0x52FD171: virThreadHelper (virthread.c:206)
          by 0x82B8DF4: start_thread (pthread_create.c:308)
          by 0x85C31AC: clone (clone.S:113)
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      7cd0cf05
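      The leaked buffer is the string read back from the cgroup; the usual fix
      is to free it on every exit path. A minimal sketch (the variable name is
      illustrative):

        char *mem_mask = NULL;

        if (virCgroupGetCpusetMems(priv->cgroup, &mem_mask) < 0)
            goto cleanup;
        /* ... use mem_mask ... */

     cleanup:
        VIR_FREE(mem_mask);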
    • vbox: Implement virDomainSendKey · 306a242d
      Committed by Dawid Zamirski
      Since the holdtime is not supported by the VBOX SDK, it is simulated
      by sleeping before sending the key-up codes. The key-up codes are
      auto-generated based on XT codeset rules (adding 0x80 to the key-down
      code), which results in the same behavior as the QEMU implementation.
      306a242d
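      A rough sketch of that approach (not the driver's exact code; the
      scancodes ultimately go out through the IKeyboard scancode interface):

        /* XT set-1 key-up scancodes are the key-down codes with bit 0x80 set */
        for (i = 0; i < nkeycodes; i++)
            keyUpCodes[i] = keyDownCodes[i] + 0x80;

        usleep(holdtime * 1000);    /* holdtime is given in milliseconds */
        /* then send keyUpCodes via IKeyboard */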
    • vbox: Register IKeyboard with the unified API. · 445733f3
      Committed by Dawid Zamirski
      The IKeyboard COM object is needed to implement virDomainSendKey and is
      available in all supported VBOX versions.
      445733f3
    • qemuProcessHook: Call virNuma*() only when needed · ea576ee5
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1198645
      
      Once upon a time, there was a little domain. And the domain was pinned
      onto a NUMA node and hadn't fully allocated its memory:
      
        <memory unit='KiB'>2355200</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
      
        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>
      
      Oh little me, said the domain, what will I do with so little memory.
      If I only had a few megabytes more. But the old admin noticed the
      whimpering, barely audible to the untrained human ear. And good admin he
      was, he gave the domain yet more memory. But the old NUMA topology
      witch forbade allocating more memory on node zero. So he
      decided to allocate it on a different node:
      
      virsh # numatune little_domain --nodeset 0-1
      
      virsh # setmem little_domain 2355200
      
      The little domain was happy. For a while. Until a bad, sharp-toothed
      creature came. Every process in the system was afraid of him.
      The OOM Killer they called him. Oh no, he's after the little domain.
      There's no escape.
      
      Do you kids know why? Because when the little domain was born, her
      father, Libvirt, called numa_set_membind(). So even if the admin
      allowed her to allocate memory from other nodes in the cgroups, the
      membind() forbade it.
      
      So what's the lesson? Libvirt should rely on cgroups whenever
      possible and use numa_set_membind() only as a last-ditch effort.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      ea576ee5
    • vircgroup: Introduce virCgroupControllerAvailable · d65acbde
      Committed by Michal Privoznik
      This new internal API checks whether a given CGroup controller is
      available.  It is going to be needed later, when we need to decide
      whether to pin domain memory onto NUMA nodes using the cpuset
      CGroup controller or using numa_set_membind().
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      d65acbde
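      A sketch of the intended use, assuming the helper takes one of the
      VIR_CGROUP_CONTROLLER_* values and returns a bool:

        if (virCgroupControllerAvailable(VIR_CGROUP_CONTROLLER_CPUSET)) {
            /* let the cpuset controller handle NUMA memory pinning,
             * so the pinning can still be changed at runtime */
        } else {
            /* fall back to numa_set_membind() via virNumaSetupMemoryPolicy() */
        }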
    • qemu_driver: check caps after starting block job · cfcdf5ff
      Committed by Michael Chapman
      Currently we check qemuCaps before starting the block job. But qemuCaps
      isn't available on a stopped domain, which means we get a misleading
      error message in this case:
      
        # virsh domstate example
        shut off
      
        # virsh blockjob example vda
        error: unsupported configuration: block jobs not supported with this QEMU binary
      
      Move the qemuCaps check into the block job so that we are guaranteed the
      domain is running.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      cfcdf5ff
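      A rough sketch of the reordering (error messages and capability flags as
      typically used in the QEMU driver; the exact details may differ from the
      patch):

        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
            goto cleanup;

        if (!virDomainObjIsActive(vm)) {
            virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                           _("domain is not running"));
            goto endjob;
        }

        /* only now is it safe to consult qemuCaps */
        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_ASYNC) &&
            !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKJOB_SYNC)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("block jobs not supported with this QEMU binary"));
            goto endjob;
        }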
    • qemu_migrate: use nested job when adding NBD to cookie · 72df8314
      Committed by Michael Chapman
      qemuMigrationCookieAddNBD is usually called from within an async
      MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
      
      (The one exception is during the Begin phase when change protection
      isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
      as qemuDomainObjEnterMonitor in this case.)
      
      This bug was encountered with a libvirt client that repeatedly queries
      the disk mirroring block job info during a migration. If one of these
      queries occurs just as the Perform migration cookie is baked, libvirt
      crashes.
      
      Relevant logs are as follows:
      
          6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
      [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
      [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
      [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
      [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
          6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
      
      At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
      on mon->notify. At [2] the request is written out to the monitor socket.
      At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
      mon->notify. The reply from the first request is received at [4].
      However, qemuMonitorJSONIOProcessLine is not expecting this reply since
      the second request hadn't completed sending. The reply is dropped and an
      error is returned.
      
      qemuMonitorIO signals mon->notify twice during its error handling,
      waking up both of the threads waiting on it. One of them clears mon->msg
      as it exits qemuMonitorSend; the other crashes:
      
        qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
        975         while (!mon->msg->finished) {
        (gdb) print mon->msg
        $1 = (qemuMonitorMessagePtr) 0x0
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      72df8314
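      A rough sketch of the fix in qemuMigrationCookieAddNBD (argument details
      quoted from memory):

        /* enter the monitor as a nested job under the async migration job,
         * instead of calling plain qemuDomainObjEnterMonitor() */
        if (qemuDomainObjEnterMonitorAsync(driver, vm, priv->job.asyncJob) < 0)
            return -1;
        rc = qemuMonitorBlockStatsUpdateCapacity(priv->mon, stats, false);
        qemuDomainObjExitMonitor(driver, vm);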
    • parallels: delete old networks in prlsdkDoApplyConfig before adding new ones · 9baf87bb
      Committed by Maxim Nestratov
      In order to change an existing domain we delete all existing devices and add
      new ones from scratch. In the case of network devices we should also delete the
      corresponding virtual networks (if any) before removing the actual devices from
      the XML. In this patch we do it by extending prlsdkDoApplyConfig with a new
      parameter, which stands for the old XML, and calling prlsdkDelNet every time
      the old XML is specified.
      Signed-off-by: Maxim Nestratov <mnestratov@parallels.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      9baf87bb
    • util: fix removal of callbacks in virCloseCallbacksRun · fa2607d5
      Committed by Michael Chapman
      The close callbacks hash is keyed by UUID-string, but
      virCloseCallbacksRun was attempting to remove entries by raw UUID. This
      patch ensures the callback entries are removed by UUID-string as well.
      
      This bug caused problems when guest migrations were abnormally aborted:
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh migrate example qemu+tls://remote/system \
            --verbose --compressed --live --auto-converge \
            --abort-on-error --unsafe --persistent \
            --undefinesource --copy-storage-all --xml example.xml
        error: Requested operation is not valid: domain 'example' is not being migrated
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      fa2607d5
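      A rough sketch of the fix (the hash field and warning text are
      illustrative):

        char uuidstr[VIR_UUID_STRING_BUFLEN];

        /* remove by the same formatted key used when the callback was set */
        virUUIDFormat(vm->def->uuid, uuidstr);
        if (virHashRemoveEntry(closeCallbacks->list, uuidstr) < 0)
            VIR_WARN("Failed to remove close callback for domain %s", uuidstr);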
    • qemu: fix race between disk mirror fail and cancel · e5d729ba
      Committed by Michael Chapman
      If a VM migration is aborted, a disk mirror may be failed by QEMU before
      libvirt has a chance to cancel it. The disk->mirrorState remains at
      _ABORT in this case, and this breaks subsequent mirrorings of that disk.
      
      We should instead check the mirrorState directly and transition to _NONE
      if it is already aborted. Do the check *after* aborting the block job in
      QEMU to avoid a race.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      e5d729ba
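      A minimal sketch of that check, performed after the abort request has
      been sent to QEMU:

        /* QEMU may already have failed the mirror on its own; in that case
         * just reset the state instead of waiting for a cancel event */
        if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
            disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;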
    • qemu: fix error propagation in qemuMigrationBegin · 77ddd0bb
      Committed by Michael Chapman
      If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
      indicate an error occurred.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      77ddd0bb
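      A minimal sketch of the intended control flow:

        /* the return value (the domain XML) starts out NULL, so simply
         * jumping to cleanup on failure propagates the error */
        if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn,
                                 qemuMigrationCleanup) < 0)
            goto cleanup;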
    • qemu: fix crash in qemuProcessAutoDestroy · 7578cc17
      Committed by Michael Chapman
      The destination libvirt daemon in a migration may segfault if the client
      disconnects immediately after the migration has begun:
      
        # virsh -c qemu+tls://remote/system list --all
         Id    Name                           State
        ----------------------------------------------------
        ...
      
        # timeout --signal KILL 1 \
            virsh migrate example qemu+tls://remote/system \
              --verbose --compressed --live --auto-converge \
              --abort-on-error --unsafe --persistent \
              --undefinesource --copy-storage-all --xml example.xml
        Killed
      
        # virsh -c qemu+tls://remote/system list --all
        error: failed to connect to the hypervisor
        error: unable to connect to server at 'remote:16514': Connection refused
      
      The crash is in:
      
         1531 void
         1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
         1533 {
         1534     qemuDomainObjPrivatePtr priv = obj->privateData;
         1535     qemuDomainJob job = priv->job.active;
         1536
         1537     priv->jobs_queued--;
      
      Backtrace:
      
        #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
        #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
        #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
        #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
        #4  in qemuConnectClose at qemu/qemu_driver.c:1154
        ...
      
      qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
      case is holding the last remaining reference to the domain.
      qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
      object has been freed and poisoned by then.
      
      This patch bumps the domain's refcount until qemuDomainRemoveInactive
      has completed. We also ensure qemuProcessAutoDestroy does not return the
      domain to virCloseCallbacksRun to be unlocked in this case. There is
      similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
      (which call virDomainObjListRemove directly).
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
      7578cc17
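      A minimal sketch of the refcount pattern in qemuProcessAutoDestroy
      (surrounding details omitted):

        virObjectRef(dom);                   /* keep the domain alive across removal */
        qemuDomainRemoveInactive(driver, dom);
        /* do not hand the (possibly released) domain back to virCloseCallbacksRun */
        virObjectUnref(dom);
        dom = NULL;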
    • virQEMUDriverGetConfig: Fix memleak · 225aa802
      Committed by Michal Privoznik
      ==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
      ==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
      ==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
      ==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
      ==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
      ==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
      ==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
      ==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
      ==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
      ==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      225aa802
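      The general pattern the fix enforces (a sketch, not the exact leaked call
      site): every config reference taken must be released again.

        virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);

        /* ... use cfg ... */

        virObjectUnref(cfg);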