1. 29 Sep 2015, 1 commit
    • Update pool allocation with new values on volume creation · 56a4e9cb
      Authored by Ján Tomko
      Since commit e0139e30, we update the pool allocation with
      the user-provided allocation values.
      
      For qcow2, the allocation is ignored for volume building,
      but we still subtracted it from the pool's allocation.
      This can result in interesting values if the user-provided
      allocation is large enough:
      
      Capacity:       104.71 GiB
      Allocation:     109.13 GiB
      Available:      16.00 EiB
      
      We already do a VolRefresh on volume creation. Also refresh
      the volume after creating it and use the new value to update the pool.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1163091
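      The fixed accounting can be sketched as follows, with hypothetical types and names (not libvirt's actual structures): refresh the volume after building it, and charge the pool with the refreshed, actual allocation rather than the user-requested value that qcow2 volume building ignores.

```c
/* Hypothetical, simplified structures -- not libvirt's real ones. */
typedef struct { long long capacity, allocation, available; } Pool;
typedef struct { long long requested_alloc, actual_alloc; } Vol;

/* Pretend refresh: re-reads the real on-disk allocation of the volume. */
static void volRefresh(Vol *vol, long long on_disk)
{
    vol->actual_alloc = on_disk;
}

/* Charge the pool with the refreshed allocation, not the requested one,
 * so a large ignored request can no longer underflow 'available'. */
static void poolAccountVolume(Pool *pool, const Vol *vol)
{
    pool->allocation += vol->actual_alloc;
    pool->available = pool->capacity - pool->allocation;
}
```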
  2. 26 Sep 2015, 3 commits
    • conf: Fix virtType check · 5e06a4f0
      Authored by John Ferlan
      Commit id '7383b8cc' changed virDomainDef 'virtType' to an enum, which
      caused a build failure on some arches due to comparing an unsigned value
      to < 0. Fetch 'type' into a temporary 'int virtType' first and
      then assign that virtType to def->virtType.
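      The pattern can be sketched like this (illustrative names, not libvirt's actual code). On some ABIs enums are unsigned, so range-checking an enum-typed variable with "< 0" either fails to build or is optimized away; the fix is to parse into a plain int first.

```c
/* Hypothetical enum standing in for virDomainVirtType. */
typedef enum { VIRT_NONE = 0, VIRT_QEMU, VIRT_KVM, VIRT_LAST } VirtType;

static int parseVirtType(int raw, VirtType *out)
{
    int virtType = raw;                 /* temporary 'int virtType' */
    if (virtType < 0 || virtType >= VIRT_LAST)
        return -1;                      /* reject out-of-range values */
    *out = (VirtType)virtType;          /* only then assign to the enum field */
    return 0;
}
```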
    • qemu: Make virtType of type virDomainVirtType · 7383b8cc
      Authored by Shivangi Dhir
      Earlier virtType was of type int. After introducing the enum VIR_DOMAIN_VIRT_NONE,
      the type of virtType is changed to virDomainVirtType.
    • conf: Add new VIR_DOMAIN_VIRT_NONE enum · 62569e45
      Authored by Shivangi Dhir
      Introduce VIR_DOMAIN_VIRT_NONE to give domaintype the default value of zero.
      This is especially helpful in constructing better error messages
      when we don't want to look up the default emulator by virtType.
      
      The test data in vircapstest.c is also modified to reflect this change.
  3. 25 Sep 2015, 3 commits
  4. 24 Sep 2015, 15 commits
  5. 23 Sep 2015, 10 commits
    • virsh: Fix job status indicator for 0 length block jobs · 7acfb940
      Authored by Peter Krempa
      Although 0-length block jobs aren't entirely useful, the output of virsh
      blockjob is empty due to the condition that suppresses the output for
      migration jobs that did not start. Since the only place that actually
      uses this suppression is migration, move the check there and thus
      add support for 0 of 0 equaling 100%.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1196711
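      The progress computation described above can be sketched as follows (illustrative, not virsh's actual code): a completed 0-length job reports cur == end == 0, which is now treated as 100% instead of having its output suppressed.

```c
/* Percentage complete of a block job given current and end offsets. */
static int jobPercent(unsigned long long cur, unsigned long long end)
{
    if (end == 0)
        return 100;   /* 0 of 0: nothing to copy, the job is complete */
    return (int)(cur * 100 / end);
}
```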
    • qemu: Refresh memory size only on fresh starts · d7a0386e
      Authored by Peter Krempa
      Qemu unfortunately doesn't update its internal state right after migration,
      so the actual balloon size returned by 'query-balloon' is invalid for a
      while after the CPUs are started after migration. If we refreshed our
      internal state at this point we would report an invalid current
      memory size until the next balloon event arrived.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242940
    • qemu: Align memory module sizes to 2MiB · 624ec1c2
      Authored by Peter Krempa
      My original implementation was based on a qemu version that did not yet
      have all the checks in place. Using sizes that align to odd
      megabyte increments produces the following error:
      
      qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: backend memory size must be multiple of 0x200000
      qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: Device 'pc-dimm' could not be initialized
      
      Introduce an alignment retrieval function for memory devices and use it
      to align the devices separately and modify a test case to verify it.
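      The alignment can be sketched as a simple round-up (illustrative; libvirt uses its own rounding helpers such as VIR_DIV_UP). Memory sizes are tracked in KiB, so the 2 MiB (0x200000 byte) backend requirement is 2048 KiB.

```c
/* Round a memory-device size in KiB up to the next 2 MiB boundary. */
static unsigned long long alignMemoryModuleSize(unsigned long long kib)
{
    const unsigned long long align = 2048;        /* 2 MiB in KiB */
    return (kib + align - 1) / align * align;     /* round up */
}
```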
    • virsh: Notify users about disconnects · 035947eb
      Authored by Jiri Denemark
      After my "client rpc: Report proper error for keepalive disconnections"
      patch, virsh would no longer print a warning when it closes a connection
      to a daemon after a keepalive timeout. Although the warning
      
          virsh # 2015-09-15 10:59:26.729+0000: 642080: info :
          libvirt version: 1.2.19
          2015-09-15 10:59:26.729+0000: 642080: warning :
          virKeepAliveTimerInternal:143 : No response from client
          0x7efdc0a46730 after 1 keepalive messages in 2 seconds
      
      was pretty ugly, it was still useful. This patch brings the useful part
      back while making it much nicer:
      
      virsh # error: Disconnected from qemu:///system due to keepalive timeout
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • client rpc: Process pending data on error · adf3be57
      Authored by Jiri Denemark
      Even though we hit an error in the client's IO loop, we still want to
      process any pending data. So instead of reporting the error right away,
      we finish the current iteration and report the error once we're done
      with it. Note that the error is stored in client->error by
      virNetClientMarkClose so we don't need to worry about it being reset or
      rewritten by any API we call in the meantime.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
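      The pattern can be sketched like this (illustrative; not the real virNetClient code): on error, remember it the way virNetClientMarkClose stores it in client->error, finish draining pending data, and only then report the stored error.

```c
#include <stdbool.h>

/* Hypothetical, simplified client standing in for virNetClient. */
typedef struct { int error; int pending; int processed; } Client;

static void markClose(Client *c, int err)
{
    if (!c->error)
        c->error = err;   /* stored once; not rewritten by later calls */
}

static int ioIteration(Client *c, bool hit_error)
{
    if (hit_error)
        markClose(c, -1);     /* remember the error, don't return yet */
    while (c->pending > 0) {  /* still process everything pending */
        c->pending--;
        c->processed++;
    }
    return c->error;          /* report the stored error once done */
}
```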
    • client rpc: Report proper error for keepalive disconnections · c91776d5
      Authored by Jiri Denemark
      Whenever a connection was closed due to a keepalive timeout, we would log
      a warning, but the interrupted API would return a rather useless generic
      error:
      
          internal error: received hangup / error event on socket
      
      Let's report a proper keepalive timeout error and make sure it is
      propagated to all pending APIs. The error should be better now:
      
          internal error: connection closed due to keepalive timeout
      
      Based on an old patch from Martin Kletzander.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    • conf: escape string for disk driver name attribute · 363995b0
      Authored by Luyao Huang
      Just like e92e5ba1, this attribute was missed.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
    • Use VIR_DIV_UP macro where possible · d772a70f
      Authored by Martin Kletzander
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    • Makefile: fix build fail when make rpm · 789bdd7d
      Authored by Luyao Huang
      The build fails with an error like this:
      
        CC       qemu/libvirt_driver_qemu_impl_la-qemu_command.lo
      qemu/qemu_capabilities.c:46:27: fatal error: qemu_capspriv.h: No such file or directory
       #include "qemu_capspriv.h"
      
      Add qemu_capspriv.h to the source list.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
    • spec: Fix some warnings with latest rpmbuild · dae1250b
      Authored by Cole Robinson
      $ rpmbuild -ba libvirt.spec
      warning: Macro expanded in comment on line 5: # If neither fedora nor rhel was defined, try to guess them from %{dist}
      
      warning: Macro %enable_autotools defined but not used within scope
      warning: Macro %client_only defined but not used within scope
      ...
  6. 22 Sep 2015, 8 commits
    • tests: Avoid use of virQEMUDriverCreateXMLConf(NULL) · 086f37e9
      Authored by Michal Privoznik
      We use the function to create a virDomainXMLOption object that is
      required for some functions. However, we don't pass the driver
      pointer to the object anywhere - rather we pass NULL. This
      causes trouble later when parsing a domain XML and calling post
      parse callbacks:
      
        Program received signal SIGSEGV, Segmentation fault.
        0x000000000043fa3e in qemuDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, opaque=0x0) at qemu/qemu_domain.c:1043
        1043        qemuCaps = virQEMUCapsCacheLookup(driver->qemuCapsCache, def->emulator);
        (gdb) bt
        #0  0x000000000043fa3e in qemuDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, opaque=0x0) at qemu/qemu_domain.c:1043
        #1  0x00007ffff2928bf9 in virDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, xmlopt=0x7c82c0) at conf/domain_conf.c:4269
        #2  0x00007ffff294de04 in virDomainDefParseXML (xml=0x7da8c0, root=0x7dab80, ctxt=0x7da980, caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16400
        #3  0x00007ffff294e5b5 in virDomainDefParseNode (xml=0x7da8c0, root=0x7dab80, caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16582
        #4  0x00007ffff294e424 in virDomainDefParse (xmlStr=0x0, filename=0x7c7ef0 "/home/zippy/work/libvirt/libvirt.git/tests/securityselinuxlabeldata/disks.xml", caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16529
        #5  0x00007ffff294e4b2 in virDomainDefParseFile (filename=0x7c7ef0 "/home/zippy/work/libvirt/libvirt.git/tests/securityselinuxlabeldata/disks.xml", caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16553
        #6  0x00000000004303ca in testSELinuxLoadDef (testname=0x53c929 "disks") at securityselinuxlabeltest.c:192
        #7  0x00000000004309e8 in testSELinuxLabeling (opaque=0x53c929) at securityselinuxlabeltest.c:313
        #8  0x0000000000431207 in virtTestRun (title=0x53c92f "Labelling \"disks\"", body=0x430964 <testSELinuxLabeling>, data=0x53c929) at testutils.c:211
        #9  0x0000000000430c5d in mymain () at securityselinuxlabeltest.c:373
        #10 0x00000000004325c2 in virtTestMain (argc=1, argv=0x7fffffffd7e8, func=0x430b4a <mymain>) at testutils.c:863
        #11 0x0000000000430deb in main (argc=1, argv=0x7fffffffd7e8) at securityselinuxlabeltest.c:381
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • qemuTestDriverInit: init the driver lock too · b1dc59b3
      Authored by Michal Privoznik
      Even though usage of the lock is limited to just a few cases,
      it's still needed. Therefore we should initialize it too.
      Otherwise we may get random test failures:
      
      ==1204== Conditional jump or move depends on uninitialised value(s)
      ==1204==    at 0xEF7F7CF: pthread_mutex_lock (in /lib64/libpthread-2.20.so)
      ==1204==    by 0x9CA89A5: virMutexLock (virthread.c:89)
      ==1204==    by 0x450B2A: qemuDriverLock (qemu_conf.c:83)
      ==1204==    by 0x45549C: virQEMUDriverGetConfig (qemu_conf.c:869)
      ==1204==    by 0x448E29: qemuDomainDeviceDefPostParse (qemu_domain.c:1240)
      ==1204==    by 0x9CC9B13: virDomainDeviceDefPostParse (domain_conf.c:4224)
      ==1204==    by 0x9CC9B91: virDomainDefPostParseDeviceIterator (domain_conf.c:4251)
      ==1204==    by 0x9CC7843: virDomainDeviceInfoIterateInternal (domain_conf.c:3440)
      ==1204==    by 0x9CC9C25: virDomainDefPostParse (domain_conf.c:4276)
      ==1204==    by 0x9CEEE03: virDomainDefParseXML (domain_conf.c:16400)
      ==1204==    by 0x9CEF5B4: virDomainDefParseNode (domain_conf.c:16582)
      ==1204==    by 0x9CEF423: virDomainDefParse (domain_conf.c:16529)
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
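      The fix can be sketched like this (illustrative, not the real test driver code): initialize the driver mutex during driver init, before anything calls the lock. Locking an uninitialized pthread_mutex_t is undefined behavior, which valgrind reports as the conditional jump on uninitialised values above.

```c
#include <pthread.h>

/* Hypothetical driver struct standing in for the qemu test driver. */
typedef struct { pthread_mutex_t lock; } TestDriver;

static int testDriverInit(TestDriver *drv)
{
    /* Initialize the mutex too -- this was the missing step. */
    return pthread_mutex_init(&drv->lock, NULL);
}
```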
    • Revert "qemu: Fix integer/boolean logic in qemuSetUnprivSGIO" · ec6754db
      Authored by John Ferlan
      This reverts commit 69b850fe.
      
      This change broke the ability to "clear" or reset unfiltered back
      to filtered.
    • qemu: ppc64: Align memory sizes to 256MiB blocks · bd874b6c
      Authored by Peter Krempa
      Some ppc64 machine types now require that memory sizes are aligned
      to 256MiB increments (due to the dynamically reconfigurable
      memory). Since we now treat existing configs reasonably with regard to
      migration, we can round all the sizes unconditionally. The only drawback
      is that the memory size of a VM can potentially increase by
      (256MiB - 1 byte) * number_of_NUMA_nodes.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249006
    • qemu: command: Align memory sizes only on fresh starts · c7d7ba85
      Authored by Peter Krempa
      When we are starting a qemu process for an incoming migration or
      snapshot reload, we should not modify the memory sizes in the domain,
      since we could potentially change the guest ABI that was tediously
      checked before. Additionally, the function now updates the initial memory
      size according to the NUMA node sizes, which should not happen if we are
      restoring state.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1252685
    • conf: Don't always recalculate initial memory size from NUMA size totals · 0fed5a7b
      Authored by Peter Krempa
      When implementing memory hotplug I opted to recalculate the initial
      memory size (the contents of the <memory> element) as the sum of the sizes of
      the NUMA nodes when NUMA was enabled. This was based on the assumption that
      qemu did not allow starting when the NUMA node size total didn't equal
      the initial memory size. Unfortunately that check was added to
      qemu only recently.
      
      This patch uses the new XML parser flag to decide whether it's safe to
      update the memory size total from the NUMA cell sizes or not.
      
      As an additional improvement, we now report an error when the
      size of hotplug memory would exceed the total memory size.
      
      The rest of the changes assures that the function is called with correct
      flags.
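      The decision can be sketched like this (illustrative names; the real code uses an XML parser flag): only when parsing a fresh config is it safe to replace the memory total with the NUMA cell total, while restored or migrated configs keep the old value to preserve guest ABI. Sizes are in KiB.

```c
#include <stdbool.h>

/* Return the initial memory size to use for a domain definition. */
static unsigned long long effectiveInitialMemory(unsigned long long memory_kib,
                                                 unsigned long long numa_total_kib,
                                                 bool fresh_config)
{
    if (fresh_config && numa_total_kib > 0)
        return numa_total_kib;   /* safe to recalculate from NUMA cells */
    return memory_kib;           /* keep the existing total */
}
```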
    • conf: Pre-calculate initial memory size instead of always calculating it · 403e8606
      Authored by Peter Krempa
      Add 'initial_memory' member to struct virDomainMemtune so that the
      memory size can be pre-calculated once instead of inferring it always
      again and again.
      
      Separating the fields will also allow finer-grained decisions
      in later patches, where it will allow keeping the old initial memory
      value when handling incoming migration from older
      versions that did not always update the size from NUMA as the code
      did previously.
      
      The change also requires modification of the qemu memory alignment
      function since at the point where we are modifying the size of NUMA
      nodes the total size needs to be recalculated too.
      
      The refactoring done in this patch also fixes a crash in the hyperv
      driver, which did not properly initialize def->numa and thus
      crashed in virDomainNumaGetMemorySize(def->numa).
      
      In summary this patch should have no functional impact at this point.