1. 16 Jan 2015, 4 commits
  2. 15 Jan 2015, 7 commits
  3. 14 Jan 2015, 6 commits
    • qemu_process: detect updated video ram size values from QEMU · ce745914
      Committed by Pavel Hrdina
      QEMU internally updates the size of video memory if the domain XML
      provided too small a memory size, or when there are dependencies
      between a QXL device's 'vgamem' and 'ram' sizes. We need to know about
      these changes and store them in the status XML so that migration and
      managedsave keep working across different libvirt versions.
      
      The values are loaded only if the "vgamem_mb" property exists for
      the device.  The presence of "vgamem_mb" also indicates that
      "ram_size" and "vram_size" exist for QXL devices.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
    • qemu_monitor: introduce new function to get QOM path · cc41c648
      Committed by Pavel Hrdina
      The search recurses only through QOM objects whose type is prefixed
      with "child<", as this indicates that the object is a parent of
      other QOM objects.
      
      The caller supplies a known device name and a starting path from
      which to search.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
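      The recursion rule can be sketched in a few lines; this is an illustrative Python model (not libvirt's actual C code), with a made-up tree standing in for QMP's qom-list replies:

```python
# Sketch of the rule described above: descend only into properties
# whose type starts with "child<", since those are the containers
# that can hold other QOM objects. QOM_TREE is a hypothetical
# stand-in for what QMP's qom-list would return for each path.
QOM_TREE = {
    "/machine": [
        {"name": "peripheral", "type": "child<container>"},
        {"name": "rtc-time", "type": "struct tm"},   # not descended into
    ],
    "/machine/peripheral": [
        {"name": "video0", "type": "child<qxl-vga>"},
    ],
    "/machine/peripheral/video0": [],
}

def find_qom_path(start, device):
    """Return the QOM path of `device`, searching from `start`."""
    for prop in QOM_TREE.get(start, []):
        if not prop["type"].startswith("child<"):
            continue  # only child<> properties are parents
        path = start.rstrip("/") + "/" + prop["name"]
        if prop["name"] == device:
            return path
        found = find_qom_path(path, device)
        if found:
            return found
    return None

print(find_qom_path("/machine", "video0"))  # → /machine/peripheral/video0
```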
    • qemu_driver: fix setting vcpus for offline domain · e105dc98
      Committed by Pavel Hrdina
      Commit e3435caf fixed hot-plugging of vcpus with strict memory pinning
      on NUMA hosts, but unfortunately it also broke updating the number of
      vcpus for offline guests via our API.
      
      The issue is that we try to create a cpu cgroup for a non-running
      guest, which fails because no cgroups exist for that domain. We should
      create cgroups and update cpuset.mems only when hot-plugging.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
    • qemu, lxc: Warn if setting QoS on unsupported vNIC types · 04cf99a6
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1165993
      
      There are still plenty of vNIC types on which we don't know how to
      set bandwidth. Warn explicitly when the user has requested it,
      instead of pretending everything was set.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • qemu: snapshot: inactive external snapshot can't work after libvirtd restart · 9f974858
      Committed by Shanzhi Yu
      When creating an inactive external snapshot, virDomainSaveConfig must
      be called after updating the disk definitions; otherwise, after a
      libvirtd restart, the new snapshot file definitions in the XML are lost.
      
      Reproduce steps:
      
      1. prepare a shut off guest
      $ virsh domstate rhel7 && virsh domblklist rhel7
      shut off
      
      Target     Source
      ------------------------------------------------
      vda        /var/lib/libvirt/images/rhel7.img
      
      2. create external disk snapshot
      $ virsh snapshot-create rhel7 --disk-only && virsh domblklist rhel7
      Domain snapshot 1417882967 created
      Target     Source
      ------------------------------------------------
      vda        /var/lib/libvirt/images/rhel7.1417882967
      
      3. restart libvirtd, then check the guest's source file
      $ service  libvirtd restart && virsh domblklist rhel7
      Redirecting to /bin/systemctl restart  libvirtd.service
      Target     Source
      ------------------------------------------------
      vda        /var/lib/libvirt/images/rhel7.img
      
      This was first reported by Eric Blake:
      http://www.redhat.com/archives/libvir-list/2014-December/msg00369.html
      Signed-off-by: Shanzhi Yu <shyu@redhat.com>
    • Give virDomainDef parser & formatter their own flags · 0ecd6851
      Committed by Daniel P. Berrange
      The virDomainDefParse* and virDomainDefFormat* methods both
      accept the VIR_DOMAIN_XML_* flags defined in the public API,
      along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
      defined in domain_conf.c.
      
      This is seriously confusing & error prone for a number of
      reasons:
      
       - VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
         VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
         formatting operation
       - Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
         to parse or to format, but not both.
      
      This patch cleanly separates out the flags. There are two
      distinct sets, VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*,
      used by the corresponding methods. The VIR_DOMAIN_XML_* flags
      received via public API calls must be converted to the
      VIR_DOMAIN_DEF_FORMAT_* flags where needed.
      
      The various calls to virDomainDefParse which hardcoded the use of
      the VIR_DOMAIN_XML_INACTIVE flag are changed to use the
      VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
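      The separation can be pictured as a one-way conversion at the API boundary. The flag values below are invented for this sketch; only the conversion pattern reflects the commit's idea:

```python
# Illustrative sketch of splitting public XML flags from internal
# formatter flags. The numeric values are made up: the point is that
# the two namespaces are independent, so public flags must be
# explicitly converted rather than passed through.
VIR_DOMAIN_XML_SECURE = 1 << 0
VIR_DOMAIN_XML_INACTIVE = 1 << 1

VIR_DOMAIN_DEF_FORMAT_SECURE = 1 << 4    # deliberately different bits
VIR_DOMAIN_DEF_FORMAT_INACTIVE = 1 << 5

def convert_xml_flags(public_flags):
    """Map public VIR_DOMAIN_XML_* flags to VIR_DOMAIN_DEF_FORMAT_*."""
    internal = 0
    if public_flags & VIR_DOMAIN_XML_SECURE:
        internal |= VIR_DOMAIN_DEF_FORMAT_SECURE
    if public_flags & VIR_DOMAIN_XML_INACTIVE:
        internal |= VIR_DOMAIN_DEF_FORMAT_INACTIVE
    return internal
```

      Because the formatter only ever sees VIR_DOMAIN_DEF_FORMAT_* values, a stray public flag can no longer be misinterpreted by the parser, which was the confusion the commit removes.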
  4. 13 Jan 2015, 3 commits
    • qemu: forbid second blockcommit during active commit · e1125ceb
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=1135339 documents some
      confusing behavior when a user tries to start an inactive block
      commit in a second connection while there is already an on-going
      active commit from a first connection.  Eventually, qemu will
      support multiple simultaneous block jobs, but as of now, it does
      not; furthermore, libvirt also needs an overhaul before we can
      support simultaneous jobs.  So, the best way to avoid confusing
      ourselves is to quit relying on qemu to tell us about the situation
      (where we risk getting in weird states) and instead forbid a
      duplicate block commit ourselves.
      
      Note that we are still relying on qemu to diagnose attempts to
      interrupt an inactive commit (since we only track XML of an active
      commit), but as inactive commit is less confusing for libvirt to
      manage, there is less that can go wrong by leaving that detection
      up to qemu.
      
      * src/qemu/qemu_driver.c (qemuDomainBlockCommit): Hoist check for
      active commit to occur earlier outside of conditions.
      Signed-off-by: Eric Blake <eblake@redhat.com>
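      The "forbid it ourselves" idea boils down to consulting our own job bookkeeping before talking to qemu at all. A minimal sketch with a hypothetical per-disk job table (not the driver's actual data structures):

```python
# Sketch: refuse a second commit based on our own tracked state rather
# than relying on qemu's error reporting. `active_jobs` is a
# hypothetical per-disk table standing in for libvirt's job tracking.
active_jobs = {}

def block_commit(disk, active=False):
    """Start a block commit on `disk`; `active` marks an active-layer commit."""
    if active_jobs.get(disk) == "active-commit":
        # fail early, before qemu can get us into a weird state
        raise RuntimeError(f"disk {disk} already has an active commit job")
    if active:
        active_jobs[disk] = "active-commit"
    return "job started"
```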
    • Add stub virDomainDefineXMLFlags impls · 4d2ebc71
      Committed by Daniel P. Berrange
      Make sure every virt driver implements virDomainDefineXMLFlags
      by adding a trivial passthrough from the existing impl with
      no flags set.
    • qemu: Allow enabling/disabling features with host-passthrough · adff345e
      Committed by Martin Kletzander
      QEMU supports feature specification with -cpu host, but we simply
      skip using it.  Since QEMU developers themselves would like to use
      this feature, this patch modifies the code to support it.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1178850
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
  5. 12 Jan 2015, 1 commit
    • qxl: change the default value for vgamem_mb to 16 MiB · 0e502466
      Committed by Pavel Hrdina
      The default value should be 16 MiB instead of 8 MiB. Only really old
      versions of upstream QEMU used 8 MiB as the default vga framebuffer
      size.
      
      Without this change, if you update to a libvirt version that
      introduced the "vgamem" attribute for the QXL video device, the value
      will be set to 8 MiB, whereas previously your guest had 16 MiB,
      because we didn't pass any value on the QEMU command line and QEMU
      used its own 16 MiB default.
      
      This will affect all users whose guest's display resolution is
      higher than 1920x1080.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
  6. 08 Jan 2015, 1 commit
    • qemu: Fix system pages handling in <memoryBacking/> · 732586d9
      Committed by Michal Privoznik
      In one of my previous commits (311b4a67) I tried to allow passing
      regular system pages to <hugepages>. However, a little bug wasn't
      caught. If a domain has a guest NUMA topology defined, the
      qemuBuildNumaArgStr() function takes care of generating the
      corresponding command line, and the hugepages backing for guest NUMA
      nodes is handled there too. And here comes the bug: the hugepages
      setting from the XML is stored internally in KiB, whereas the system
      page size was queried and stored in bytes. So the check whether the
      two are equal failed even when it shouldn't have.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
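      The mismatch is easy to demonstrate: a KiB value never equals the same quantity expressed in bytes, so the check must normalize units first. A small illustrative example (not the libvirt code itself):

```python
# The bug in miniature: the XML hugepage size is kept in KiB, while
# the queried system page size came back in bytes, so the equality
# check always failed even for matching sizes.
xml_page_size_kib = 4            # 4 KiB, as stored from the domain XML
system_page_size_bytes = 4096    # sysconf-style answer, in bytes

# Broken comparison (what the buggy code effectively did):
broken_match = (xml_page_size_kib == system_page_size_bytes)   # False

# Fixed comparison: convert to a common unit before comparing.
fixed_match = (xml_page_size_kib == system_page_size_bytes // 1024)  # True
```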
  7. 07 Jan 2015, 2 commits
    • Indentation · b0731790
      Committed by Ján Tomko
    • qemu: Don't unref domain after exit from nested async job · 79bb49a8
      Committed by Peter Krempa
      In commit 540c339a the whole domain
      reference counting was refactored in the qemu driver. Domain jobs now
      don't need to reference the domain object as they now expect the
      reference from the calling function.
      
      However, the patch forgot to remove the unref call in case we exit the
      monitor when we were acquiring a nested job. This caused the daemon to
      crash on a subsequent access to the domain object once we've done an
      operation requiring a nested job for a monitor access.
      
      An easy reproducer case:
      
      1) Start a vm with qcow disks
      2) virsh snapshot-create-as DOMNAME
      3) virsh dumpxml DOMNAME
      4) daemon crashes in a semi-random spot while accessing a now-removed VM
      object.
      
      Fortunately, the commit wasn't released yet, so there are no security
      implications.
      Reported-by: Shanzi Yu <shyu@redhat.com>
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
  8. 06 Jan 2015, 4 commits
  9. 23 Dec 2014, 1 commit
  10. 21 Dec 2014, 1 commit
    • qemu: completely rework reference counting · 540c339a
      Committed by Martin Kletzander
      There is one problem that causes various errors in the daemon.  When
      a domain is waiting for a job, it is unlocked while waiting on the
      condition.  However, if that domain is, for example, transient and
      being removed in another API (e.g. cancelling an incoming migration),
      it gets unref'd.  If the first call, the one that was waiting, fails
      to get the job, it unrefs the domain object, and because that was the
      last reference, the whole domain object is cleared.  However, when
      finishing the call, the domain must be unlocked, but there is no way
      for the API to know whether it was cleaned up or not (unless we add
      some ugly temporary variable, but let's scratch that).
      
      The root cause is that our APIs don't ref the objects they are using
      and all rely on the implicit reference that the object holds while it
      is in the domain list.  That reference can be removed while the API
      is waiting for a job.  And because each API doesn't do its own
      ref'ing, it results in the ugly checking of the return value of
      virObjectUnref() that we have everywhere.
      
      This patch changes qemuDomObjFromDomain() to ref the domain (using
      virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
      should be the only function in which the return value of
      virObjectUnref() is checked.  This makes all reference counting
      deterministic and makes the code a bit clearer.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
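      The pattern can be modeled in a few lines; the helper names echo qemuDomObjFromDomain/qemuDomObjEndAPI, but this is an illustrative sketch, not the driver implementation:

```python
# Sketch of the pattern: the lookup takes an extra reference for the
# API call, and the reference is dropped in exactly one place at the
# end of the call. The class is a toy model of a refcounted object.
class DomainObj:
    def __init__(self, name):
        self.name = name
        self.refs = 1  # implicit reference held by the domain list

    def ref(self):
        self.refs += 1

    def unref(self):
        self.refs -= 1
        return self.refs  # 0 means the object is now gone

def dom_obj_from_domain(dom):
    dom.ref()          # the API now holds its own reference
    return dom

def dom_obj_end_api(dom):
    # the single place where the return value of unref is checked
    return dom.unref() == 0

vm = DomainObj("testvm")
api_vm = dom_obj_from_domain(vm)
# ... the API body may sleep waiting for a job; the list reference can
# disappear meanwhile, but our own reference keeps the object alive.
print(dom_obj_end_api(api_vm))  # → False: the list reference remains
```

      With every API holding its own reference, the "was it already freed?" guesswork after waiting on a job disappears, which is what makes the counting deterministic.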
  11. 19 Dec 2014, 3 commits
    • disable vCPU pinning with TCG mode · 65686e5a
      Committed by Daniel P. Berrange
      Although QMP returns info about vCPU threads in TCG mode, the
      data it returns is mostly lies. Only the first vCPU has a valid
      thread_id returned. The thread_id given for the other vCPUs is
      in fact the main emulator thread. All vCPUs actually run under
      the same thread in TCG mode.
      
      Our vCPU pinning code is not at all able to cope with this,
      so if you try to set CPU affinity per-vCPU you end up with
      weird errors:
      
      error: Failed to start domain instance-00000007
      error: cannot set CPU affinity on process 24365: Invalid argument
      
      Since few people will care about the performance of TCG with
      strict CPU pinning, let's just disable it for now, so we get
      a clear error message:
      
      error: Failed to start domain instance-00000007
      error: Requested operation is not valid: cpu affinity is not supported
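      The fix amounts to an up-front validation instead of a late sched_setaffinity() failure. A hedged sketch, using domain type "qemu" to stand for TCG guests (as in libvirt domain XML, where KVM guests use type "kvm"):

```python
# Sketch: reject per-vCPU pinning for TCG guests before starting the
# domain, since TCG runs all vCPUs on one thread and pinning cannot
# work. The string constants are illustrative.
def check_cpu_affinity(domain_type, has_vcpu_pinning):
    """Raise a clear error when pinning is requested on a TCG guest."""
    if has_vcpu_pinning and domain_type == "qemu":  # TCG guests
        raise ValueError(
            "Requested operation is not valid: "
            "cpu affinity is not supported")
    return True
```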
    • Don't setup fake CPU pids for old QEMU · b07f3d82
      Committed by Daniel P. Berrange
      The code assumes that def->vcpus == nvcpupids, so when we set up
      fake CPU pids for old QEMU with nvcpupids == 1, the later code
      reads off the end of the array. This has fun results like
      sched_setaffinity(0, ...), which changes libvirtd's own CPU
      affinity, or even better sched_setaffinity($RANDOM, ...), which
      changes the affinity of a random OS process.
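      The overrun can be modeled as indexing the reported thread list by vCPU number; this sketch shows the guard that the def->vcpus == nvcpupids assumption was missing (Python raises on the out-of-bounds read, where C silently returns garbage pids):

```python
# cpupids is the list of vCPU thread ids QEMU reported. Old QEMU
# reports only one entry even for multi-vCPU guests, so indexing by
# vCPU number walks off the end; in C that yields a garbage pid that
# then gets fed to sched_setaffinity().
def pid_for_vcpu(cpupids, vcpu):
    """Return the thread id for `vcpu`, refusing to read past the array."""
    if vcpu >= len(cpupids):
        raise IndexError("no per-vCPU thread info for this QEMU; refuse to pin")
    return cpupids[vcpu]
```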
    • qemu: Create memory-backend-{ram,file} iff needed · f309db1f
      Committed by Michal Privoznik
      Libvirt BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1175397
      QEMU BZ:    https://bugzilla.redhat.com/show_bug.cgi?id=1170093
      
      In qemu there are two interesting arguments:
      
      1) -numa to create a guest NUMA node
      2) -object memory-backend-{ram,file} to tell qemu which memory
      region on which host's NUMA node it should allocate the guest
      memory from.
      
      Combining these two together we can instruct qemu to create a
      guest NUMA node that is tied to a host NUMA node. And it works
      just fine. However, depending on the machine type used, there might
      be some issues during migration when OVMF is enabled (see the QEMU
      BZ). While this truly is a QEMU bug, we can help avoid it. The
      problem lies somewhere within the memory backend objects. Having
      said that, the fix on our side consists of putting those objects on
      the command line if and only if needed. For instance, while
      previously we would construct this (in all ways correct) command
      line:
      
          -object memory-backend-ram,size=256M,id=ram-node0 \
          -numa node,nodeid=0,cpus=0,memdev=ram-node0
      
      now we create just:
      
          -numa node,nodeid=0,cpus=0,mem=256
      
      because the backend object is obviously not tied to any specific
      host NUMA node.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
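      The conditional construction might look like the sketch below. The mem=/memdev= argument shapes follow the commit's own examples; the host-nodes/policy properties for the pinned case are an assumption about how the backend would be tied to a host node, not taken from this commit:

```python
# Sketch: emit -object memory-backend-ram only when the guest NUMA
# node is actually bound to host memory; otherwise use the plain
# mem= form, avoiding needless backend objects on the command line.
def numa_args(nodeid, cpus, size_mb, host_node=None):
    if host_node is None:
        # no host binding requested: no backend object needed
        return ["-numa", f"node,nodeid={nodeid},cpus={cpus},mem={size_mb}"]
    backend = (f"memory-backend-ram,size={size_mb}M,"
               f"id=ram-node{nodeid},host-nodes={host_node},policy=bind")
    return ["-object", backend,
            "-numa", f"node,nodeid={nodeid},cpus={cpus},memdev=ram-node{nodeid}"]
```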
  12. 18 Dec 2014, 3 commits
  13. 17 Dec 2014, 4 commits
    • Fix error message on redirdev caps detection · 952f8a73
      Committed by Ján Tomko
    • conf: fix cannot start a guest have a shareable network iscsi hostdev · dddd8327
      Committed by Luyao Huang
      https://bugzilla.redhat.com/show_bug.cgi?id=1174569
      
      There's nothing we need to do for shared iSCSI devices in
      qemuAddSharedHostdev and qemuRemoveSharedHostdev. The iSCSI layer
      takes care of that for us.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • getstats: crawl backing chain for qemu · 3937ef9c
      Committed by Eric Blake
      Wire up backing chain recursion.  For the first time, it is now
      possible to get libvirt to expose that qemu tracks read statistics
      on backing files, as well as report maximum extent written on a
      backing file during a block-commit operation.
      
      For a running domain, where one of the two images has a backing
      file, I see the traditional output:
      
      $ virsh domstats --block testvm2
      Domain: 'testvm2'
        block.count=2
        block.0.name=vda
        block.0.path=/tmp/wrapper.qcow2
        block.0.rd.reqs=1
        block.0.rd.bytes=512
        block.0.rd.times=28858
        block.0.wr.reqs=0
        block.0.wr.bytes=0
        block.0.wr.times=0
        block.0.fl.reqs=0
        block.0.fl.times=0
        block.0.allocation=0
        block.0.capacity=1310720000
        block.0.physical=200704
        block.1.name=vdb
        block.1.path=/dev/sda7
        block.1.rd.reqs=0
        block.1.rd.bytes=0
        block.1.rd.times=0
        block.1.wr.reqs=0
        block.1.wr.bytes=0
        block.1.wr.times=0
        block.1.fl.reqs=0
        block.1.fl.times=0
        block.1.allocation=0
        block.1.capacity=1310720000
      
      vs. the new output:
      
      $ virsh domstats --block --backing testvm2
      Domain: 'testvm2'
        block.count=3
        block.0.name=vda
        block.0.path=/tmp/wrapper.qcow2
        block.0.rd.reqs=1
        block.0.rd.bytes=512
        block.0.rd.times=28858
        block.0.wr.reqs=0
        block.0.wr.bytes=0
        block.0.wr.times=0
        block.0.fl.reqs=0
        block.0.fl.times=0
        block.0.allocation=0
        block.0.capacity=1310720000
        block.0.physical=200704
        block.1.name=vda
        block.1.path=/dev/sda6
        block.1.backingIndex=1
        block.1.rd.reqs=0
        block.1.rd.bytes=0
        block.1.rd.times=0
        block.1.wr.reqs=0
        block.1.wr.bytes=0
        block.1.wr.times=0
        block.1.fl.reqs=0
        block.1.fl.times=0
        block.1.allocation=327680
        block.1.capacity=786432000
        block.2.name=vdb
        block.2.path=/dev/sda7
        block.2.rd.reqs=0
        block.2.rd.bytes=0
        block.2.rd.times=0
        block.2.wr.reqs=0
        block.2.wr.bytes=0
        block.2.wr.times=0
        block.2.fl.reqs=0
        block.2.fl.times=0
        block.2.allocation=0
        block.2.capacity=1310720000
      
      I may later do a patch that trims the output to avoid 0 stats,
      particularly for backing files (which are more likely to have
      0 stats, at least for write statistics when no block-commit
      is performed).  Also, I still plan to expose physical size
      information (qemu doesn't expose it yet, so it requires a stat,
      and for block devices, a further open/seek operation).  But
      this patch is good enough without worrying about that yet.
      
      * src/qemu/qemu_driver.c (QEMU_DOMAIN_STATS_BACKING): New internal
      enum bit.
      (qemuConnectGetAllDomainStats): Recognize new user flag, and pass
      details to...
      (qemuDomainGetStatsBlock): ...here, where we can do longer recursion.
      (qemuDomainGetStatsOneBlock): Output new field.
      Signed-off-by: Eric Blake <eblake@redhat.com>
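      The traversal can be sketched as a walk over each disk's backing chain, numbering entries globally the way the --backing output above interleaves vda, its backing file, then vdb. The data structures here are invented for illustration:

```python
# Sketch of the recursion: emit one block.N.* group per image in each
# disk's backing chain, with backingIndex marking depth > 0, mirroring
# the shape of the `virsh domstats --block --backing` output above.
def collect_block_stats(disks):
    stats = {}
    n = 0
    for disk in disks:
        src, depth = disk["source"], 0
        while src is not None:
            stats[f"block.{n}.name"] = disk["name"]
            stats[f"block.{n}.path"] = src["path"]
            if depth > 0:
                stats[f"block.{n}.backingIndex"] = depth
            n += 1
            depth += 1
            src = src.get("backing")   # descend into the backing image
    stats["block.count"] = n
    return stats

disks = [
    {"name": "vda", "source": {"path": "/tmp/wrapper.qcow2",
                               "backing": {"path": "/dev/sda6"}}},
    {"name": "vdb", "source": {"path": "/dev/sda7"}},
]
print(collect_block_stats(disks)["block.count"])  # → 3
```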
    • getstats: split block stats reporting for easier recursion · c2d380bf
      Committed by Eric Blake
      In order to report stats on backing chains, we need to separate
      the output of stats for one block from how we traverse blocks.
      
      * src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Split...
      (qemuDomainGetStatsOneBlock): ...into new helper.
      Signed-off-by: Eric Blake <eblake@redhat.com>