1. 26 Feb 2015, 1 commit
    • qemu: fix ifindex array reported to systemd · 4bbe1029
      Committed by Laine Stump
      Commit f7afeddc added code to report to systemd an array of interface
      indexes for all tap devices used by a guest. Unfortunately, it not only
      failed to add code to report the ifindexes for macvtap interfaces
      (interface type='direct') or the tap devices used by type='ethernet',
      it also ended up sending "-1" as the ifindex for each macvtap or hostdev
      interface. This resulted in a failure to start any domain that had a
      macvtap or hostdev interface (or, in fact, any interface type other than
      "network" or "bridge").
      
      This patch does the following with the nicindexes array:
      
      1) Modifies qemuBuildInterfaceCommandLine() to fill in the
      nicindexes array only when given a non-NULL pointer to an array (and
      changes the test jig's calls to the function to send NULL). This is because
      there are tests in the test suite that have type='ethernet' and still
      have an ifname specified, but that device of course doesn't actually
      exist on the test system, so attempts to call virNetDevGetIndex() will
      fail.
      
      2) Even then, adds an entry to the nicindexes array only for
      appropriate types, and does so for all appropriate types ("network",
      "bridge", and "direct"), but only if the ifname is known (since that
      is required to call virNetDevGetIndex()).
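      A minimal sketch of the intended behaviour, with illustrative names (the
      real code lives in qemu_command.c and uses virNetDevGetIndex(); here the
      standard if_nametoindex() stands in for it):

        #include <net/if.h>    /* if_nametoindex() */
        #include <stdlib.h>    /* realloc(), size_t */

        /* Hypothetical helper: append the ifindex of 'ifname' to the
         * caller-supplied array, but only when the caller actually wants
         * the array and the interface type/name allow a lookup. */
        static int
        appendNicIndex(const char *ifname, int typeIsIndexable,
                       int **nicindexes, size_t *nnicindexes)
        {
            unsigned int idx;
            int *tmp;

            if (!nicindexes || !nnicindexes)    /* test jig passes NULL */
                return 0;
            if (!typeIsIndexable || !ifname)    /* e.g. hostdev, or no ifname yet */
                return 0;

            if ((idx = if_nametoindex(ifname)) == 0)
                return -1;                      /* lookup failed */

            tmp = realloc(*nicindexes, sizeof(int) * (*nnicindexes + 1));
            if (!tmp)
                return -1;
            *nicindexes = tmp;
            (*nicindexes)[(*nnicindexes)++] = (int)idx;
            return 0;
        }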
  2. 20 Feb 2015, 1 commit
    • virQEMUCapsCacheLookupCopy: Filter qemuCaps based on machineType · af204232
      Committed by Michal Privoznik
      Not all machine types support all devices, device properties, backends,
      etc. So until we create a matrix of [machineType, qemuCaps], let's just
      filter out some capabilities before we return them to the consumer
      (which is going to make decisions based on them straight away).
      Currently, since qemu is unable to tell which capabilities are (not)
      enabled for a given machine type, we are the ones who have to hardcode the matrix.
      One day maybe the hardcoding will go away and we can create the matrix
      dynamically on the fly based on a few monitor calls.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
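      A rough, self-contained sketch of this kind of filtering, using made-up
      capability bits and a simple machine-type prefix check (the real logic
      lives in libvirt's qemu capabilities code):

        #include <string.h>

        /* Made-up capability bits, for illustration only. */
        enum { CAP_ACPI = 1 << 0, CAP_USB = 1 << 1, CAP_PCI_BRIDGE = 1 << 2 };

        /* Return a copy of 'caps' with bits masked out that are known not
         * to work on the given machine type (the hardcoded matrix). */
        static unsigned int
        filterCapsForMachine(unsigned int caps, const char *machineType)
        {
            /* Example rule: assume an s390-style machine has no ACPI or USB. */
            if (machineType && strncmp(machineType, "s390", 4) == 0)
                caps &= ~(CAP_ACPI | CAP_USB);
            return caps;
        }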
  3. 17 Feb 2015, 2 commits
    • qemuBuildMemoryBackendStr: Report backend requirement more appropriately · 7832fac8
      Committed by Michal Privoznik
      So, when building the '-numa' command line, the
      qemuBuildMemoryBackendStr() function does quite a lot of checks to
      choose the best backend, or to check whether one is in fact needed.
      However, it reported that a backend was needed even for this little fella:
      
        <numatune>
          <memory mode="strict" nodeset="0,2"/>
        </numatune>
      
      This can be guaranteed entirely via CGroups; there's no need to use
      memory-backend-ram to let qemu know where to get memory from. Well, as
      long as there's no <memnode/> element, which explicitly requires the
      backend. Long story short, we wouldn't have to care, as qemu works
      either way. However, the problem is migration (as always). Previously,
      libvirt would have started qemu with:
      
        -numa node,memory=X
      
      in this case and restricted memory placement in CGroups. Today, libvirt
      creates more complicated command line:
      
        -object memory-backend-ram,id=ram-node0,size=X
        -numa node,memdev=ram-node0
      
      Again, one wouldn't find anything wrong with these two approaches.
      Both work just fine, unless you try to migrate from an older libvirt
      to a newer one: the two approaches are, unfortunately, not compatible.
      My suggestion is, in order to allow users to migrate, let's use the
      older approach for as long as the newer one is not needed.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
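      The decision boils down to something like the sketch below (simplified,
      with made-up parameter names): only require a memory-backend object when
      plain '-numa node,memory=X' plus CGroups cannot express the configuration.

        #include <stdbool.h>

        /* Simplified decision: does this guest NUMA node really need a
         * memory-backend-* object, or is the legacy syntax enough? */
        static bool
        memoryBackendNeeded(bool hasMemnode,        /* <memnode/> present */
                            bool usesHugepages,     /* per-node huge pages */
                            bool needsHostNodePin)  /* host-nodes= must be passed to qemu */
        {
            if (hasMemnode || usesHugepages || needsHostNodePin)
                return true;
            /* mode="strict" with a plain nodeset can be enforced via CGroups,
             * so stay with "-numa node,memory=X" for migration compatibility. */
            return false;
        }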
    • qemuxml2argvtest: Fake response from numad · 38064806
      Committed by Michal Privoznik
      Well, we can pretend that we've asked numad for its suggestion and let
      the qemu command line be built accordingly. Again, this alone has no
      big value, but see later commits which build on top of this.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
  4. 12 Feb 2015, 1 commit
  5. 11 Feb 2015, 1 commit
  6. 10 Feb 2015, 1 commit
  7. 03 Feb 2015, 1 commit
  8. 27 Jan 2015, 2 commits
    • qemu: report TAP device indexes to systemd · f7afeddc
      Committed by Daniel P. Berrange
      Record the index of each TAP device created and report the indexes to
      systemd, so they show up in machinectl status for the VM.
    • Removing probing of secondary drivers · 55ea7be7
      Committed by Daniel P. Berrange
      For stateless, client-side drivers, it is never correct to
      probe for secondary drivers. It is only ever appropriate to
      use the secondary driver that is associated with the
      hypervisor in question. As a result the ESX & HyperV drivers
      have both been forced to do hacks where they register no-op
      drivers for the ones they don't implement.
      
      For stateful, server-side drivers, we always just want to
      use the same built-in shared driver. The exception is
      virtualbox, which is really a stateless driver and so wants
      to use its own server-side secondary drivers. To deal with
      this, virtualbox has to be built as 3 separate loadable
      modules to allow registration to work in the right order.
      
      This can all be simplified by introducing a new struct
      recording the precise set of secondary drivers each
      hypervisor driver wants:
      
      struct _virConnectDriver {
          virHypervisorDriverPtr hypervisorDriver;
          virInterfaceDriverPtr interfaceDriver;
          virNetworkDriverPtr networkDriver;
          virNodeDeviceDriverPtr nodeDeviceDriver;
          virNWFilterDriverPtr nwfilterDriver;
          virSecretDriverPtr secretDriver;
          virStorageDriverPtr storageDriver;
      };
      
      Instead of registering the hypervisor driver, we now
      just register a virConnectDriver. This allows
      us to remove all probing of secondary drivers. Once we
      have chosen the primary driver, we immediately know the
      correct secondary drivers to use.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
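      A self-contained sketch of the registration idea, using stand-in types
      and only two of the secondary drivers (the real struct is the one quoted
      above; the real registration entry point is in libvirt.c):

        /* Stand-in types so the sketch compiles on its own; in libvirt
         * these come from the driver headers. */
        typedef struct { const char *name; } virHypervisorDriver;
        typedef struct { const char *name; } virNetworkDriver;

        typedef struct {
            virHypervisorDriver *hypervisorDriver;
            virNetworkDriver *networkDriver;
            /* ... the other secondary drivers elided ... */
        } virConnectDriver;

        static virHypervisorDriver fooHypervisorDriver = { "FOO" };
        static virNetworkDriver fooNetworkDriver = { "FOO_network" };

        /* The driver now registers one table; once the primary driver is
         * chosen, its secondary drivers are known without any probing. */
        static virConnectDriver fooConnectDriver = {
            .hypervisorDriver = &fooHypervisorDriver,
            .networkDriver = &fooNetworkDriver,
        };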
  9. 16 Jan 2015, 2 commits
  10. 14 Jan 2015, 1 commit
    • Give virDomainDef parser & formatter their own flags · 0ecd6851
      Committed by Daniel P. Berrange
      The virDomainDefParse* and virDomainDefFormat* methods both
      accept the VIR_DOMAIN_XML_* flags defined in the public API,
      along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
      defined in domain_conf.c.
      
      This is seriously confusing & error prone for a number of
      reasons:
      
       - VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
         VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
         formatting operation
       - Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
         to parse or to format, but not both.
      
      This patch cleanly separates out the flags. There are two
      distinct sets of flags, VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*,
      that are used by the corresponding methods. The
      VIR_DOMAIN_XML_* flags received via public API calls must
      be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
      needed.
      
      The various calls to virDomainDefParse which hardcoded the
      use of the VIR_DOMAIN_XML_INACTIVE flag are changed to use the
      VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
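      Conceptually the public-to-internal conversion is a small translation
      helper; a sketch with illustrative flag values (not the exact libvirt
      enums):

        /* Illustrative flag values only. */
        enum {
            XML_SECURE     = 1 << 0,    /* public VIR_DOMAIN_XML_SECURE */
            XML_UPDATE_CPU = 1 << 2,    /* public VIR_DOMAIN_XML_UPDATE_CPU */
        };
        enum {
            DEF_FORMAT_SECURE     = 1 << 0,    /* internal format-only flag */
            DEF_FORMAT_UPDATE_CPU = 1 << 1,
        };

        /* Map public VIR_DOMAIN_XML_* flags onto internal
         * VIR_DOMAIN_DEF_FORMAT_* flags at the API boundary. */
        static unsigned int
        convertXMLFlags(unsigned int xmlFlags)
        {
            unsigned int defFlags = 0;
            if (xmlFlags & XML_SECURE)
                defFlags |= DEF_FORMAT_SECURE;
            if (xmlFlags & XML_UPDATE_CPU)
                defFlags |= DEF_FORMAT_UPDATE_CPU;
            return defFlags;
        }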
  11. 13 Jan 2015, 1 commit
  12. 15 Dec 2014, 1 commit
    • qemu: Allow system pages to <memoryBacking/> · 311b4a67
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1173507
      
      It occurred to me that OpenStack uses the following XML even when not
      using huge pages:
      
        <memoryBacking>
          <hugepages>
            <page size='4' unit='KiB'/>
          </hugepages>
        </memoryBacking>
      
      However, since we are expecting to see huge pages only, we fail to
      start up the domain with the following error:
      
        libvirtError: internal error: Unable to find any usable hugetlbfs
        mount for 4 KiB
      
      While regular system pages are technically not huge pages, our code is
      prepared to handle them, and if it helps OpenStack (or other management
      applications) we should cope with this case.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
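      Conceptually the fix is just an "is this the ordinary system page size?"
      check before insisting on a hugetlbfs mount; a hedged sketch:

        #include <stdbool.h>
        #include <unistd.h>    /* sysconf() */

        /* Return true if 'pageKiB' is simply the host's regular page size,
         * in which case no hugetlbfs mount needs to be located at all. */
        static bool
        isSystemPageSize(unsigned long long pageKiB)
        {
            long sz = sysconf(_SC_PAGESIZE);    /* bytes, e.g. 4096 */
            return sz > 0 && pageKiB == (unsigned long long)sz / 1024;
        }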
  13. 11 Dec 2014, 1 commit
  14. 25 Nov 2014, 2 commits
  15. 21 Nov 2014, 1 commit
  16. 15 Nov 2014, 1 commit
  17. 12 Nov 2014, 1 commit
    • qemuxml2argvtest: Run some tests only on Linux · 8d659b17
      Committed by Michal Privoznik
      As I was reviewing bhyve commits, I noticed qemuxml2argvtest
      failing for some test cases. This is not a bug in the qemu driver code;
      rather, it is caused by the inability to load qemuxml2argvmock on
      non-Linux platforms. For instance:
      
      318) QEMU XML-2-ARGV numatune-memnode
      ... libvirt:  error : internal error: NUMA node 0 is unavailable
      FAILED
      
      Rather than disabling qemuxml2argvtest on BSD entirely (we do compile
      the qemu driver there), disable only those test cases which require
      mocking. To achieve that goal, a new DO_TEST_LINUX() macro is
      introduced which invokes the test case on Linux only and merely
      consumes its arguments on other systems.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
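      The macro could look roughly like this (a sketch assuming the existing
      DO_TEST() macro of qemuxml2argvtest.c; the real definition may differ):
      on Linux it forwards to DO_TEST(), elsewhere it just swallows its
      arguments.

        /* Sketch only: run the test case on Linux, become a no-op elsewhere
         * (variadic so capability flags can still be listed). */
        #ifdef __linux__
        # define DO_TEST_LINUX(name, ...) DO_TEST(name, ##__VA_ARGS__)
        #else
        # define DO_TEST_LINUX(name, ...) do { (void)(name); } while (0)
        #endif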
  18. 07 Nov 2014, 1 commit
  19. 06 Nov 2014, 1 commit
  20. 04 Nov 2014, 1 commit
  21. 03 Nov 2014, 1 commit
  22. 20 Oct 2014, 1 commit
    • tests: fix incorrect caps for shmem-invalid-size, shmem-small-size · e80be99f
      Committed by Maxime Leroy
      VIR_TEST_DEBUG=2 ./qemuxml2argvtest generates the following output:
      
      409) QEMU XML-2-ARGV shmem-invalid-size
      ... Got expected error: unsupported configuration: ivshmem device is not supported with this QEMU binary
      OK
      410) QEMU XML-2-ARGV shmem-small-size
      ... Got expected error: unsupported configuration: ivshmem device is not supported with this QEMU binary
      OK
      
      We should have:
      
      409) QEMU XML-2-ARGV shmem-invalid-size
      ... Got expected error: XML error: shmem size must be a power of two
      OK
      410) QEMU XML-2-ARGV shmem-small-size
      ... Got expected error: XML error: shmem size must be at least 1 MiB
      OK
      
      This commit fixes the issue by providing the QEMU_CAPS_DEVICE_IVSHMEM
      caps for the shmem-invalid-size and shmem-small-size tests.
      Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
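      The change itself amounts to listing the capability in the affected test
      invocations, along these lines (a sketch; the exact macro used in
      qemuxml2argvtest.c may differ):

        /* Give the shmem tests the ivshmem capability so parsing gets far
         * enough to hit the size validation instead of the caps check. */
        DO_TEST("shmem-invalid-size", QEMU_CAPS_DEVICE_IVSHMEM);
        DO_TEST("shmem-small-size", QEMU_CAPS_DEVICE_IVSHMEM);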
  23. 04 Oct 2014, 2 commits
  24. 03 Oct 2014, 1 commit
    • qemu: Don't compare CPU against host for TCG · 445a09bd
      Committed by Cole Robinson
      Right now, when building the qemu command line, we try to do various
      unconditional validations of the guest CPU against the host
      CPU. However, these checks are applied too broadly. The only times we
      should use them are:
      
      - The user requests host-model/host-passthrough, or
      
      - When KVM is requested. CPU features requested in TCG mode are always
        emulated by qemu and are independent of the host CPU, so no host CPU
        checks should be performed.
      
      Right now, trying to specify a CPU for arm on an x86 host attempts
      nonsensical validation and falls over.
      
      Switch all the test cases that were intending to test CPU validation to
      use KVM, so they continue to test the intended code.
      
      Amend some aarch64 XML tests with a CPU model, to ensure things work
      correctly.
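      A boolean sketch of when the host CPU comparison should happen under
      these rules (names are illustrative):

        #include <stdbool.h>

        typedef enum { CPU_MODE_CUSTOM, CPU_MODE_HOST_MODEL,
                       CPU_MODE_HOST_PASSTHROUGH } cpuMode;

        /* Compare the guest CPU against the host only when the result can
         * matter: host-model/host-passthrough, or real KVM.  Pure TCG
         * emulates whatever CPU features were asked for. */
        static bool
        shouldCheckAgainstHostCPU(cpuMode mode, bool usingKVM)
        {
            if (mode == CPU_MODE_HOST_MODEL || mode == CPU_MODE_HOST_PASSTHROUGH)
                return true;
            return usingKVM;
        }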
  25. 26 Sep 2014, 1 commit
  26. 24 Sep 2014, 1 commit
  27. 19 Sep 2014, 1 commit
  28. 18 Sep 2014, 2 commits
    • qemu: Honor hugepages for UMA domains · 281f7001
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1135396
      
      There are two ways to tell qemu to use huge pages. The first one
      is suitable for domains with NUMA nodes: the path to the hugetlbfs mount
      is appended to the NUMA node definition on the command line. The second
      one is suitable for UMA domains: here there's the global '-mem-path'
      argument that accepts the path to the hugetlbfs mount point. However,
      the latter was not used in all the cases where it should have been. For
      instance:
      
        <memoryBacking>
          <hugepages>
            <page size='2048' unit='KiB' nodeset='0'/>
          </hugepages>
        </memoryBacking>
      
      didn't trigger the '-mem-path' so the huge pages - despite being
      configured - were not used at all.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
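      The missing case is roughly the one below: any hugepage configuration on
      a guest without NUMA cells should fall back to the global '-mem-path'
      (a sketch; names are illustrative):

        #include <stdbool.h>
        #include <stddef.h>

        /* A UMA guest (no guest NUMA cells) with any <hugepages> configured
         * must get the global "-mem-path <hugetlbfs mount>" argument,
         * because there is no per-node definition to attach the path to. */
        static bool
        needGlobalMemPath(size_t nGuestNumaCells, size_t nHugepageEntries)
        {
            return nGuestNumaCells == 0 && nHugepageEntries > 0;
        }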
    • conf: Disallow nonexistent NUMA nodes for hugepages · ec982f6d
      Committed by Michal Privoznik
      As of 136ad497 it is possible to specify different huge pages per
      guest NUMA node. However, there's no check that the nodeset specified in
      ./hugepages/page contains only those guest NUMA nodes that actually
      exist. In other words, with the current code it is possible to define a
      meaningless combination:
      
        <memoryBacking>
          <hugepages>
            <page size='1048576' unit='KiB' nodeset='0,2-3'/>
            <page size='2048' unit='KiB' nodeset='1,4'/>
          </hugepages>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cpu>
          <numa>
            <cell id='0' cpus='0' memory='1048576'/>
            <cell id='1' cpus='1' memory='1048576'/>
            <cell id='2' cpus='2' memory='1048576'/>
            <cell id='3' cpus='3' memory='1048576'/>
          </numa>
        </cpu>
      
      Notice node 4 in <hugepages/>? There is no guest NUMA cell with that id.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
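      The validation is essentially a subset check: every node number mentioned
      in a ./hugepages/page nodeset must be a defined guest NUMA cell. A
      self-contained sketch using a plain bool array instead of libvirt's
      virBitmap:

        #include <stdbool.h>
        #include <stddef.h>

        /* Reject hugepage nodesets that name guest NUMA nodes which do not
         * exist: requested[i] is true if node i appears in some <page/>
         * nodeset, nCells is the number of <cell/> elements defined. */
        static bool
        hugepageNodesetIsValid(const bool *requested, size_t nRequested,
                               size_t nCells)
        {
            for (size_t i = nCells; i < nRequested; i++)
                if (requested[i])
                    return false;    /* e.g. node 4 with only cells 0..3 */
            return true;
        }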
  29. 17 Sep 2014, 1 commit
  30. 16 Sep 2014, 1 commit
  31. 10 Sep 2014, 1 commit
    • qemu: Implement extended loader and nvram · 54289916
      Committed by Michal Privoznik
      QEMU now supports UEFI with the following command line:
      
        -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
        -drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
      
      where the first line reflects <loader> and the second one <nvram>.
      Moreover, these two lines obsolete the -bios argument.
      
      Note that UEFI is unusable without ACPI. This is handled properly now.
      Along with this extension, the variable file is expected to be
      writable and hence we need the security drivers to label it.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Acked-by: Laszlo Ersek <lersek@redhat.com>
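      Building the two pflash drives is mostly string formatting; a sketch
      with simplified buffer handling (paths as in the example above):

        #include <stdio.h>

        /* Format the two -drive arguments for UEFI: unit=0 is the read-only
         * <loader> image, unit=1 the writable <nvram> variable store. */
        static int
        formatPflashArgs(const char *loader, const char *nvram,
                         char *buf, size_t buflen)
        {
            return snprintf(buf, buflen,
                            "-drive file=%s,if=pflash,format=raw,unit=0,readonly=on "
                            "-drive file=%s,if=pflash,format=raw,unit=1",
                            loader, nvram);
        }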
  32. 29 Aug 2014, 2 commits
    • qemu: Allow use of iothreads for disk definitions · ef8da2ad
      Committed by John Ferlan
      For virtio-blk-pci disks that have the disk iothread attribute and are
      run on a capable emulator, add "iothread=iothread#" to the -device
      command line in order to enable iothreads for the disk, as long as the
      capability is available, the disk iothread value provided is valid, and
      iothreads are supported for the disk device being added.
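      On the command line side this amounts to appending one more -device
      property when the disk has a valid iothread number; a sketch with
      simplified buffer handling:

        #include <stdio.h>
        #include <string.h>

        /* Append ",iothread=iothreadN" to a -device string for a
         * virtio-blk-pci disk whose definition requested iothread N
         * (0 here means "no iothread requested"). */
        static int
        appendDiskIOThread(char *devstr, size_t len, unsigned int iothread)
        {
            size_t used = strlen(devstr);

            if (iothread == 0)
                return 0;    /* nothing to add */
            return snprintf(devstr + used, len - used,
                            ",iothread=iothread%u", iothread) < 0 ? -1 : 0;
        }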
    • qemu: Add support for iothreads · 72edaae7
      Committed by John Ferlan
      Add a new capability to ensure the iothreads feature exists for the qemu
      emulator being run; it requires the "query-iothreads" QMP command. Based
      on the domain XML, add the corresponding command line arguments in order
      to generate the threads. The iothreads use the name space "iothread#";
      a future patch adding support for using an iothread with a disk
      definition will merely specify which of the available threads to use.
      
      Add tests to ensure the xml/argv processing is correct.  Note that no
      change was made to qemuargv2xmltest.c as processing the -object element
      would require knowing more than just iothreads.
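      Generating the thread objects themselves is a simple loop over the
      requested count, one '-object iothread,id=iothread#' per thread
      (a sketch; names are illustrative):

        #include <stdio.h>

        /* Emit "-object iothread,id=iothread1" .. "id=iothreadN" for the
         * number of iothreads requested in the domain XML. */
        static void
        printIOThreadObjects(FILE *out, unsigned int niothreads)
        {
            for (unsigned int i = 1; i <= niothreads; i++)
                fprintf(out, "-object iothread,id=iothread%u\n", i);
        }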
  33. 26 Aug 2014, 1 commit