1. 06 Jan 2015 (9 commits)
  2. 05 Jan 2015 (4 commits)
  3. 04 Jan 2015 (4 commits)
    • libxl: Add support for parsing/formatting Xen XL config · 4f524212
      Committed by Kiarie Kahurani
      Now that xenconfig supports parsing and formatting Xen's
      XL config format, integrate it into the libxl driver's
      connectDomainXML{From,To}Native functions.
      Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • tests: Tests for the xen-xl parser · 6b818d3b
      Committed by Kiarie Kahurani
      Add tests for the xen_xl config parser.
      Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • src/xenconfig: Xen-xl parser · 2c78051a
      Committed by Kiarie Kahurani
      Introduce a Xen xl parser.
      
      This parser allows users to convert the new xl disk format and
      spice graphics config to libvirt XML format and vice versa. The
      spice graphics config handling is pretty much straightforward.
      For disk {formatting, parsing}, this parser takes care of the new
      xl format, which includes both positional parameters and key/value
      parameters. In xl format disk config a <diskspec> consists of
      parameters separated by commas. Parameters that do not contain an
      '=' are automatically assigned to the following options, in order:
      
         target, format, vdev, access
      
      These are the only mandatory parameters in the <diskspec>, but
      there are many more disk config options. Those can be specified
      as key=value pairs, which takes care of the rest of the options,
      such as

        devtype, backend, backendtype, script, direct-io-safe

      The positional parameters can also be specified in key/value
      form, for example:
      
          /dev/vg/guest-volume,,hda
          /dev/vg/guest-volume,raw,hda,rw
          format=raw, vdev=hda, access=rw, target=/dev/vg/guest-volume
      
      are all interpreted as the same configuration.
      
      In xm format, the above diskspec would be written as
      
      phy:/dev/vg/guest-volume,hda,w
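      The positional and key=value rules above can be sketched like this
      (a Python illustration only; the actual parser is a flex-based
      scanner in C, and this sketch covers only the rules described
      above):

```python
# Illustration only: the actual parser is a flex-based scanner in C.
# Parameters without '=' fill target, format, vdev, access in order
# (empty fields still consume a slot); key=value pairs may set those
# or any other disk option.

POSITIONAL = ["target", "format", "vdev", "access"]

def parse_diskspec(spec):
    opts = {}
    pos = 0
    for param in (p.strip() for p in spec.split(",")):
        if "=" in param:
            key, _, value = param.partition("=")
            opts[key.strip()] = value.strip()
        else:
            if pos < len(POSITIONAL) and param:
                opts[POSITIONAL[pos]] = param
            pos += 1
    return opts

# The positional and key=value example forms resolve to one config:
assert (parse_diskspec("/dev/vg/guest-volume,raw,hda,rw") ==
        parse_diskspec("format=raw, vdev=hda, access=rw, "
                       "target=/dev/vg/guest-volume"))
```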
      
      The disk parser is based on the same parser used successfully by
      the Xen project for several years now.  Ian Jackson authored the
      scanner, which is used by this commit with minimal changes.  Only
      the PREFIX option is changed, to produce function and file names
      more consistent with libvirt's conventions.
      Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
    • src/xenconfig: Export helper functions · 7ad117b2
      Committed by Kiarie Kahurani
      Export helper functions, for reuse, that get values
      from a virConfPtr object.
      Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
  4. 25 Dec 2014 (1 commit)
  5. 23 Dec 2014 (4 commits)
  6. 21 Dec 2014 (4 commits)
    • qemu: completely rework reference counting · 540c339a
      Committed by Martin Kletzander
      There is one problem that causes various errors in the daemon.  When
      a domain is waiting for a job, it is unlocked while waiting on the
      condition.  However, if that domain is, for example, transient and is
      being removed in another API (e.g. cancelling an incoming migration),
      it gets unref'd.  If the first call, the one that was waiting, fails
      to get the job, it unrefs the domain object, and because that was the
      last reference, the whole domain object is freed.  However, when
      finishing the call, the domain must be unlocked, but there is no way
      for the API to know whether it was cleaned up or not (unless there is
      some ugly temporary variable, but let's scratch that).
      
      The root cause is that our APIs don't ref the objects they are using;
      they all rely on the implicit reference that the object holds while
      it is in the domain list.  That reference can be removed while the
      API is waiting for a job.  And because each API call doesn't do its
      own ref'ing, it results in the ugly checking of the return value of
      virObjectUnref() that we have everywhere.
      
      This patch changes qemuDomObjFromDomain() to ref the domain (using
      virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
      should be the only function in which the return value of
      virObjectUnref() is checked.  This makes all reference counting
      deterministic and makes the code a bit clearer.
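      The pattern this commit moves to can be sketched as follows (a
      Python stand-in for the C code; the names mirror
      qemuDomObjFromDomain/qemuDomObjEndAPI, but the classes and the
      domain list here are illustrative assumptions):

```python
import threading

class DomainObj:
    """Minimal stand-in for virDomainObj: a lock plus a reference count."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()
        self.refs = 1            # the implicit reference held by the domain list

    def ref(self):
        self.refs += 1

    def unref(self):
        self.refs -= 1
        return self.refs         # 0 would mean the object is now freed

# Hypothetical domain list, for the sketch only.
domain_list = {"testvm": DomainObj("testvm")}

def dom_obj_from_domain(name):
    # Like virDomainObjListFindByUUIDRef(): look up, lock, and take an
    # *extra* reference, so the caller survives even if the list drops
    # its own reference while we wait on a job condition.
    dom = domain_list[name]
    dom.lock.acquire()
    dom.ref()
    return dom

def dom_obj_end_api(dom):
    # The single place where unref's return value is checked:
    # unlock only if our unref was not the last reference.
    if dom.unref() > 0:
        dom.lock.release()

dom = dom_obj_from_domain("testvm")
# ... API work happens here while we hold our own reference ...
dom_obj_end_api(dom)
```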
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    • util: Fix possible NULL dereference · 3b0f0557
      Committed by Martin Kletzander
      Commit 1a80b97d, which added the virCgroupHasEmptyTasks() function,
      forgot that the parameter @cgroup may be NULL and did not check for
      that.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    • maint: update .mailmap · f5070e3a
      Committed by Claudio Bley
      Add an email alias after updating my email address in commit 738a2aec.
    • Fix typo s/interpetation/interpretation/ · 6c355c51
      Committed by Claudio Bley
      Fix the typo in struct virSecurityModel's comment for its doi field.
  7. 20 Dec 2014 (2 commits)
    • Update my email address in AUTHORS.in · 738a2aec
      Committed by Claudio Bley
    • docs: split typedef and struct definition for apibuild.py · f55572ca
      Committed by Claudio Bley
      The members of struct virSecurityLabel and struct virSecurityModel
      were not shown in the libvirt API docs because the corresponding
      <field> elements were missing from the libvirt-api.xml.
      
      The reason is that apibuild.py does not cope well with typedefs
      using inline struct definitions. It fails to associate the comment
      with the typedef and because of this refuses to write out the
      fields of the struct.
  8. 19 Dec 2014 (3 commits)
    • disable vCPU pinning with TCG mode · 65686e5a
      Committed by Daniel P. Berrange
      Although QMP returns info about vCPU threads in TCG mode, the
      data it returns is mostly lies. Only the first vCPU has a valid
      thread_id returned. The thread_id given for the other vCPUs is
      in fact the main emulator thread. All vCPUs actually run under
      the same thread in TCG mode.
      
      Our vCPU pinning code is not at all able to cope with this,
      so if you try to set CPU affinity per-vCPU you end up with
      weird errors:
      
      error: Failed to start domain instance-00000007
      error: cannot set CPU affinity on process 24365: Invalid argument
      
      Since few people will care about the performance of TCG with
      strict CPU pinning, let's just disable that for now, so we get
      a clear error message:
      
      error: Failed to start domain instance-00000007
      error: Requested operation is not valid: cpu affinity is not supported
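      A minimal sketch of the guard this commit introduces (a Python
      stand-in; the function name and the domain-type check are
      illustrative assumptions, not libvirt's actual code):

```python
# Minimal sketch: with TCG, QMP reports the main emulator thread as the
# thread_id of every vCPU but the first, so per-vCPU pinning cannot
# work; reject it up front with a clear error instead.
# Function name and signature are illustrative, not libvirt's real API.

def validate_cpu_pinning(domain_type, has_per_vcpu_pinning):
    if has_per_vcpu_pinning and domain_type != "kvm":
        raise ValueError("Requested operation is not valid: "
                         "cpu affinity is not supported")

validate_cpu_pinning("kvm", True)       # fine: KVM gives each vCPU a real thread
try:
    validate_cpu_pinning("qemu", True)  # TCG: rejected with a clear message
except ValueError as err:
    print(err)
```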
    • Don't setup fake CPU pids for old QEMU · b07f3d82
      Committed by Daniel P. Berrange
      The code assumes that def->vcpus == nvcpupids, so when we set up
      fake CPU pids for old QEMU with nvcpupids == 1, we cause the
      later code to read off the end of the array. This has fun results
      like sched_setaffinity(0, ...), which changes libvirtd's own CPU
      affinity, or even better sched_setaffinity($RANDOM, ...), which
      changes the affinity of a random OS process.
    • qemu: Create memory-backend-{ram,file} iff needed · f309db1f
      Committed by Michal Privoznik
      Libvirt BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1175397
      QEMU BZ:    https://bugzilla.redhat.com/show_bug.cgi?id=1170093
      
      In qemu there are two interesting arguments:
      
      1) -numa to create a guest NUMA node
      2) -object memory-backend-{ram,file} to tell qemu which memory
      region on which host NUMA node it should allocate the guest
      memory from.
      
      Combining these two together, we can instruct qemu to create a
      guest NUMA node that is tied to a host NUMA node. And it works
      just fine. However, depending on the machine type used, there
      might be some issues during migration when OVMF is enabled (see
      the QEMU BZ). While this truly is a QEMU bug, we can help avoid
      it. The problem lies somewhere within the memory backend objects.
      Having said that, the fix on our side consists of putting those
      objects on the command line if and only if needed. For instance,
      while previously we would construct this (in all ways correct)
      command line:
      
          -object memory-backend-ram,size=256M,id=ram-node0 \
          -numa node,nodeid=0,cpus=0,memdev=ram-node0
      
      now we create just:
      
          -numa node,nodeid=0,cpus=0,mem=256
      
      because the backend object is obviously not tied to any specific
      host NUMA node.
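      The "if and only if needed" decision can be sketched as follows
      (a Python stand-in; the helper name and the conditions that
      trigger a backend object are assumptions for illustration; the
      real logic lives in libvirt's qemu command-line builder):

```python
# Sketch of the "backend object iff needed" rule. Assumption for
# illustration: a guest NUMA node needs a memory-backend object only
# when it requires host-side placement (hugepages or a host-node
# binding); a real backend line would also carry host-nodes/policy
# attributes.

def numa_node_args(nodeid, cpus, size_mb, host_nodes=None, hugepages=False):
    if hugepages or host_nodes is not None:
        return [
            f"-object memory-backend-ram,size={size_mb}M,id=ram-node{nodeid}",
            f"-numa node,nodeid={nodeid},cpus={cpus},memdev=ram-node{nodeid}",
        ]
    # No host-side constraints: plain mem= is enough, and it avoids
    # the migration issue described above.
    return [f"-numa node,nodeid={nodeid},cpus={cpus},mem={size_mb}"]

print(numa_node_args(0, "0", 256))
print(numa_node_args(0, "0", 256, host_nodes=[1]))
```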
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
  9. 18 Dec 2014 (4 commits)
  10. 17 Dec 2014 (5 commits)
    • Fix error message on redirdev caps detection · 952f8a73
      Committed by Ján Tomko
    • logical: Add "--type snapshot" to lvcreate command · cafb934d
      Committed by John Ferlan
      A recent lvm change has resulted in a change to the "default" type of
      logical volume created when "--virtualsize" or "-V" is supplied on
      the command line (e.g. when the allocation and capacity values of a
      volume to be created differ). It seems that at the very least the
      following change adjusts the default type:
      
      https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=e0164f21
      
      and the following may also have some impact.
      
      https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=87fc3b71
      
      When using the virsh vol-create-as or vol-create xmlfile commands, the
      result is that libvirt will now create a "thin logical volume" and a
      "thin logical volume pool" rather than just a "thin snapshot logical
      volume". For example, the following sequence:
      
        # lvcreate --name test -L 2M -V 5M lvm_test
          Rounding up size to full physical extent 4.00 MiB
          Rounding up size to full physical extent 8.00 MiB
          Logical volume "test" created.
        # lvs lvm_test
          LV    VG       Attr       LSize Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
          lvol1 lvm_test twi-a-tz-- 4.00m              0.00   0.98
          test  lvm_test Vwi-a-tz-- 8.00m lvol1        0.00
      
      compared to the former code which had the following:
      
          LV   VG       Attr       LSize  Pool Origin         Data%  Move Log Cpy%Sync Convert
          test LVM_Test swi-a-s---  4.00m      [test_vorigin]   0.00
      
      Since libvirt doesn't know how to parse the thin logical volume
      and pool, it will fail to find the newly created volume and pool
      even though they exist in the volume group.
      
      It cannot find them, since the command used to find/parse returns a
      thin volume 'test' with no associated device; for example, the
      output is:
      
        lvol1##UgUwkp-fTFP-C0rc-ufue-xrYh-dkPr-FGPFPx#lvol1_tdata(0)#thin-pool#1#4194304#4194304#4194304#twi-a-tz--
        test##NcaIoH-4YWJ-QKu3-sJc3-EOcS-goff-cThLIL##thin#0#8388608#4194304#8388608#Vwi-a-tz--
      
      as compared to the former which had the following:
      
            test#[test_vorigin]#Dt5Of3-4WE6-buvw-CWJ4-XOiz-ywOU-YULYw6#/dev/sda3(1300)#linear#1#4194304#4194304#4194304#swi-a-s---
      
      While it's possible to write code to handle the new thin LV and pool,
      this patch adds "--type snapshot" to the lvcreate command libvirt
      uses, in order to, for now, continue to utilize thin snapshots.
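      The fix can be sketched like this (a Python stand-in for the C
      command builder; the helper name and the KiB size handling are
      illustrative assumptions):

```python
# Sketch of the fix: when capacity differs from allocation (a
# -V/--virtualsize volume), force "--type snapshot" so newer lvm does
# not default to creating a thin pool plus thin LV that libvirt cannot
# parse. Helper name and KiB units are illustrative assumptions.

def build_lvcreate(name, vg, allocation_kib, capacity_kib=None):
    cmd = ["lvcreate", "--name", name, "-L", f"{allocation_kib}K"]
    if capacity_kib is not None and capacity_kib != allocation_kib:
        cmd += ["--type", "snapshot", "--virtualsize", f"{capacity_kib}K"]
    cmd.append(vg)
    return cmd

print(" ".join(build_lvcreate("test", "lvm_test", 2048, 5120)))
```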
    • conf: fix inability to start a guest that has a shareable network iscsi hostdev · dddd8327
      Committed by Luyao Huang
      https://bugzilla.redhat.com/show_bug.cgi?id=1174569
      
      There's nothing we need to do for shared iSCSI devices in
      qemuAddSharedHostdev and qemuRemoveSharedHostdev. The iSCSI layer
      takes care of that for us.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • getstats: crawl backing chain for qemu · 3937ef9c
      Committed by Eric Blake
      Wire up backing chain recursion.  For the first time, it is now
      possible to get libvirt to expose that qemu tracks read statistics
      on backing files, as well as report maximum extent written on a
      backing file during a block-commit operation.
      
      For a running domain, where one of the two images has a backing
      file, I see the traditional output:
      
      $ virsh domstats --block testvm2
      Domain: 'testvm2'
        block.count=2
        block.0.name=vda
        block.0.path=/tmp/wrapper.qcow2
        block.0.rd.reqs=1
        block.0.rd.bytes=512
        block.0.rd.times=28858
        block.0.wr.reqs=0
        block.0.wr.bytes=0
        block.0.wr.times=0
        block.0.fl.reqs=0
        block.0.fl.times=0
        block.0.allocation=0
        block.0.capacity=1310720000
        block.0.physical=200704
        block.1.name=vdb
        block.1.path=/dev/sda7
        block.1.rd.reqs=0
        block.1.rd.bytes=0
        block.1.rd.times=0
        block.1.wr.reqs=0
        block.1.wr.bytes=0
        block.1.wr.times=0
        block.1.fl.reqs=0
        block.1.fl.times=0
        block.1.allocation=0
        block.1.capacity=1310720000
      
      vs. the new output:
      
      $ virsh domstats --block --backing testvm2
      Domain: 'testvm2'
        block.count=3
        block.0.name=vda
        block.0.path=/tmp/wrapper.qcow2
        block.0.rd.reqs=1
        block.0.rd.bytes=512
        block.0.rd.times=28858
        block.0.wr.reqs=0
        block.0.wr.bytes=0
        block.0.wr.times=0
        block.0.fl.reqs=0
        block.0.fl.times=0
        block.0.allocation=0
        block.0.capacity=1310720000
        block.0.physical=200704
        block.1.name=vda
        block.1.path=/dev/sda6
        block.1.backingIndex=1
        block.1.rd.reqs=0
        block.1.rd.bytes=0
        block.1.rd.times=0
        block.1.wr.reqs=0
        block.1.wr.bytes=0
        block.1.wr.times=0
        block.1.fl.reqs=0
        block.1.fl.times=0
        block.1.allocation=327680
        block.1.capacity=786432000
        block.2.name=vdb
        block.2.path=/dev/sda7
        block.2.rd.reqs=0
        block.2.rd.bytes=0
        block.2.rd.times=0
        block.2.wr.reqs=0
        block.2.wr.bytes=0
        block.2.wr.times=0
        block.2.fl.reqs=0
        block.2.fl.times=0
        block.2.allocation=0
        block.2.capacity=1310720000
      
      I may later do a patch that trims the output to avoid 0 stats,
      particularly for backing files (which are more likely to have
      0 stats, at least for write statistics when no block-commit
      is performed).  Also, I still plan to expose physical size
      information (qemu doesn't expose it yet, so it requires a stat,
      and for block devices, a further open/seek operation).  But
      this patch is good enough without worrying about that yet.
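      The recursion wired up here can be sketched as follows (a Python
      stand-in for qemuDomainGetStatsBlock/qemuDomainGetStatsOneBlock;
      the dict-based disk representation is an assumption, and only the
      name/path/backingIndex fields from the output above are shown):

```python
# Sketch of the backing-chain crawl: one helper emits the stats for a
# single block, while the traversal walks each disk's backing chain
# (only when the --backing flag is given) and keeps a flat index.
# Disk dicts and field subset are illustrative, not libvirt's real types.

def stats_one_block(out, idx, disk, backing_index=0):
    out[f"block.{idx}.name"] = disk["name"]
    out[f"block.{idx}.path"] = disk["path"]
    if backing_index:                      # only backing files get this field
        out[f"block.{idx}.backingIndex"] = backing_index

def stats_block(disks, backing=False):
    out, idx = {}, 0
    for disk in disks:
        node, depth = disk, 0
        while node is not None:
            stats_one_block(out, idx, node, backing_index=depth)
            idx += 1
            node = node.get("backing") if backing else None
            depth += 1
    out["block.count"] = idx
    return out

chain = [{"name": "vda", "path": "/tmp/wrapper.qcow2",
          "backing": {"name": "vda", "path": "/dev/sda6"}},
         {"name": "vdb", "path": "/dev/sda7"}]
print(stats_block(chain)["block.count"])                # 2 without --backing
print(stats_block(chain, backing=True)["block.count"])  # 3 with --backing
```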
      
      * src/qemu/qemu_driver.c (QEMU_DOMAIN_STATS_BACKING): New internal
      enum bit.
      (qemuConnectGetAllDomainStats): Recognize new user flag, and pass
      details to...
      (qemuDomainGetStatsBlock): ...here, where we can do longer recursion.
      (qemuDomainGetStatsOneBlock): Output new field.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • getstats: split block stats reporting for easier recursion · c2d380bf
      Committed by Eric Blake
      In order to report stats on backing chains, we need to separate
      the output of stats for one block from how we traverse blocks.
      
      * src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Split...
      (qemuDomainGetStatsOneBlock): ...into new helper.
      Signed-off-by: Eric Blake <eblake@redhat.com>