1. 21 January 2015, 1 commit
  2. 16 January 2015, 4 commits
  3. 15 January 2015, 1 commit
  4. 14 January 2015, 1 commit
  5. 13 January 2015, 1 commit
  6. 12 January 2015, 1 commit
    • qxl: change the default value for vgamem_mb to 16 MiB · 0e502466
      Committed by Pavel Hrdina
      The default value should be 16 MiB instead of 8 MiB. Only really old
      version of upstream QEMU used the 8 MiB as default for vga framebuffer.
      
      Without this change if you update your libvirt where we introduced the
      "vgamem" attribute for QXL video device the value will be set to 8 MiB,
      but previously your guest had 16 MiB because we didn't pass any value to
      QEMU command line which means QEMU used its own 16 MiB as default.
      
      This will affect all users with guest's display resolution higher than
      1920x1080.
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
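      
      For illustration (an assumed example, not taken from the commit): a
      domain XML video element that makes the 16 MiB framebuffer explicit.
      libvirt expresses vgamem in KiB, and the ram/vram values here are
      just typical QXL settings:
      
          <video>
            <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
          </video>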
  7. 08 January 2015, 1 commit
    • qemu: Fix system pages handling in <memoryBacking/> · 732586d9
      Committed by Michal Privoznik
      In one of my previous commits (311b4a67) I tried to allow passing
      regular system pages to <hugepages>. However, there was a little bug
      that wasn't caught. If a domain has a guest NUMA topology defined, the
      qemuBuildNumaArgStr() function takes care of generating the
      corresponding command line, and the hugepages backing for guest NUMA
      nodes is handled there too. Here comes the bug: the hugepages setting
      from the XML is stored internally in KiB, while the system page size
      was queried and stored in bytes. So the check whether the two are
      equal failed even when it shouldn't have.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
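      
      A minimal C sketch of the mismatch (hypothetical names, not the
      actual libvirt code):
      
          #include <unistd.h>
          
          /* The page size parsed from the XML is in KiB, while
           * sysconf() reports bytes; normalize before comparing. */
          static int page_matches_system(unsigned long long xml_page_kib)
          {
              long sys_bytes = sysconf(_SC_PAGESIZE);   /* e.g. 4096 */
              /* buggy check: xml_page_kib == sys_bytes (4 != 4096) */
              return xml_page_kib == (unsigned long long) sys_bytes / 1024;
          }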
  8. 19 December 2014, 1 commit
    • qemu: Create memory-backend-{ram,file} iff needed · f309db1f
      Committed by Michal Privoznik
      Libvirt BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1175397
      QEMU BZ:    https://bugzilla.redhat.com/show_bug.cgi?id=1170093
      
      In qemu there are two interesting arguments:
      
      1) -numa to create a guest NUMA node
      2) -object memory-backend-{ram,file} to tell qemu which memory
      region on which host NUMA node it should allocate the guest
      memory from.
      
      Combining the two, we can instruct qemu to create a guest NUMA
      node that is tied to a host NUMA node, and it works just fine.
      However, depending on the machine type used, there might be some
      issues during migration when OVMF is enabled (see the QEMU BZ).
      While this truly is a QEMU bug, we can help avoid it. The problem
      lies somewhere within the memory backend objects. That said, the
      fix on our side consists of putting those objects on the command
      line if and only if they are needed. For instance, while
      previously we would construct this (in all ways correct) command
      line:
      
          -object memory-backend-ram,size=256M,id=ram-node0 \
          -numa node,nodeid=0,cpus=0,memdev=ram-node0
      
      now we create just:
      
          -numa node,nodeid=0,cpus=0,mem=256
      
      because the backend object is obviously not tied to any specific
      host NUMA node.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
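      
      For context, a hedged sketch of the guest NUMA XML such command
      lines are generated from (the cell memory attribute is in KiB, so
      262144 KiB corresponds to the 256M above):
      
          <cpu>
            <numa>
              <cell cpus='0' memory='262144'/>
            </numa>
          </cpu>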
  9. 17 December 2014, 1 commit
  10. 15 December 2014, 2 commits
    • qemu: add/remove bridge fdb entries as guest CPUs are started/stopped · 44292e48
      Committed by Laine Stump
      When libvirt is managing a bridge's forwarding database (FDB)
      (macTableManager='libvirt'), if we add FDB entries for a new guest
      interface even before the qemu process is created, then in the case of
      a migration any other guest attached to the "destination" bridge will
      have its traffic immediately sent to the destination of the migration
      even while the source domain is still running (and the destination, of
      course, isn't). To make sure that traffic from other guests on the new
      host continues flowing to the old guest until the new one is ready, we
      have to wait until the new guest CPUs are started to add the FDB
      entries.
      
      Conversely, we need to remove the FDB entries from the bridge any time
      the guest CPUs are stopped; among other things, this will assure
      proper operation during a post-copy migration (which is just the
      opposite of the problem described in the previous paragraph).
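      
      The equivalent manual steps, shown only as a hedged illustration
      with a made-up MAC address and tap device name (libvirt does this
      via netlink rather than by running commands):
      
          # add a static FDB entry when the guest CPUs start
          bridge fdb add 52:54:00:12:34:56 dev vnet0 master
          # remove it again when the guest CPUs stop
          bridge fdb del 52:54:00:12:34:56 dev vnet0 master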
    • qemu: Allow system pages to <memoryBacking/> · 311b4a67
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1173507
      
      It occurred to me that OpenStack uses the following XML when using
      regular system pages rather than huge pages:
      
        <memoryBacking>
          <hugepages>
            <page size='4' unit='KiB'/>
          </hugepages>
        </memoryBacking>
      
      However, since we expect to see huge pages only, we fail to start
      the domain with the following error:
      
        libvirtError: internal error: Unable to find any usable hugetlbfs
        mount for 4 KiB
      
      While regular system pages are technically not huge pages, our code
      is prepared for them, and if it helps OpenStack (or other management
      applications) we should cope with them.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
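      
      As an aside (not from the commit), the system page size that the
      check compares against can be queried like this:
      
          $ getconf PAGESIZE
          4096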
  11. 09 December 2014, 2 commits
    • qemu: always use virDomainNetGetActualBridgeName to get interface's bridge · 4aae2ed6
      Committed by Laine Stump
      qemuNetworkIfaceConnect() used to have a special case for
      actualType='network' (a network with forward mode of route, nat, or
      isolated) to call the libvirt public API to retrieve the bridge being
      used by a network. That is no longer necessary: since all network
      types that use a bridge and tap device now have the bridge name
      stored in the ActualNetDef, we can now always use
      virDomainNetGetActualBridgeName() instead.
      
      (An audit of the two callers of qemuNetworkIfaceConnect() confirms
      that it is never called for any other type of network, so the dead
      code in the else branch, which logged an internal error in that
      case, is eliminated in the process.)
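      
      A hedged sketch of the simplified pattern (error handling
      abbreviated; not the exact diff):
      
          /* works for every actual type that uses a bridge + tap */
          const char *brname = virDomainNetGetActualBridgeName(net);
          if (!brname) {
              virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                             _("Missing bridge name"));
              goto cleanup;
          }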
    • qemu: setup tap devices for macTableManager='libvirt' · 7cb822c2
      Committed by Laine Stump
      When libvirt is managing the MAC table of a Linux host bridge, it must
      turn off learning and unicast_flood for each tap device attached to
      that bridge, then add a Forwarding Database (fdb) entry for the tap
      device using the MAC address from the domain interface config.
      
      Once we have disabled learning and flooding, any packet that has a
      destination MAC address not present in the fdb will be dropped by the
      bridge. This, along with the opportunistic disabling of promiscuous
      mode[*], can result in enhanced network performance and a potential
      slight security improvement.
      
      [*] If there is only one device on the bridge with learning/unicast_flood
      enabled, then that device will automatically have promiscuous mode
      disabled. If there are *no* devices with learning/unicast_flood
      enabled (e.g. for a libvirt "route", "nat", or isolated network that
      has no physical device attached), then all non-tap devices will have
      promiscuous mode disabled (tap devices always have promiscuous mode
      enabled, which may be a bug in the kernel, but in practice has no
      effect).
      
      None of this has any effect for kernels prior to 3.15 (upstream kernel
      commit 2796d0c648c940b4796f84384fbcfb0a2399db84 "bridge: Automatically
      manage port promiscuous mode"). Even after that, until kernel 3.17
      (upstream commit 5be5a2df40f005ea7fb7e280e87bbbcfcf1c2fc0 "bridge: Add
      filtering support for default_pvid") traffic will not be properly
      forwarded without manually adding vlan table entries. Unfortunately,
      although the presence of the first patch is signalled by the existence
      of the "learning" and "unicast_flood" options in sysfs, there is no
      reliable way to query whether or not the system's kernel has the
      second of those patches installed; the only thing that can be done is
      to try the setting and see if traffic continues to pass.
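      
      Expressed as hedged equivalent shell (bridge and device names
      illustrative; libvirt uses sysfs and netlink directly):
      
          echo 0 > /sys/class/net/br0/brif/vnet0/learning
          echo 0 > /sys/class/net/br0/brif/vnet0/unicast_flood
          bridge fdb add 52:54:00:12:34:56 dev vnet0 master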
  12. 03 December 2014, 1 commit
    • Replace virNetworkFree with virObjectUnref · 121c09a9
      Committed by John Ferlan
      Since virNetworkFree will call virObjectUnref anyway, let's just use that
      directly so as to avoid the possibility that we inadvertently clear out
      a pending error message when using the public API.
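      
      In sketch form (illustrative, not the exact diff):
      
          - virNetworkFree(net);    /* public API entry point: may clear a pending error */
          + virObjectUnref(net);    /* plain unref: leaves the thread-local error alone */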
  13. 25 November 2014, 4 commits
    • qemu-command: introduce new vgamem attribute for QXL video device · 742d49fa
      Committed by Pavel Hrdina
      Add an attribute to set the vgamem_mb parameter of the QXL device for
      QEMU. This value sets the size of the VGA framebuffer for the QXL
      device. The default value in QEMU is 8 MB, so reuse it in libvirt too
      in order not to break things.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
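      
      On the QEMU command line this maps to the device property of the
      same name, e.g. (illustrative):
      
          -device qxl-vga,vgamem_mb=8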
    • qemu-command: use vram attribute for all video devices · 24c6ca86
      Committed by Pavel Hrdina
      So far we didn't have any option to set the video memory size for
      qemu video devices. There was only the vram attribute (ram for QXL),
      but it was valid only for the QXL video device.
      
      To provide this feature to users, QEMU has a dedicated device
      attribute called 'vgamem_mb' to set the video memory size. We will
      use the 'vram' attribute to set the video memory size for the other
      QEMU video devices.
      
      For the cirrus device we will ignore the vram value because its
      video memory size is hardcoded in QEMU.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
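      
      A hedged example of the resulting mapping for a non-QXL device
      (values illustrative; vram is in KiB, vgamem_mb in MiB):
      
          <video>
            <model type='vga' vram='16384'/>
          </video>
      
      which would translate to:
      
          -device VGA,vgamem_mb=16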
    • QXL: fix setting ram and vram values for QEMU QXL device · c32cfc6d
      Committed by Pavel Hrdina
      QEMU has two different types of QXL display device: "qxl-vga" is the
      primary video device and "qxl" is a secondary video device.
      
      There are also two different ways to specify those devices on the
      qemu command line. The first, now obsolete, way uses the "-vga"
      option; the current one uses the "-device" option. "-vga" can only
      set up the primary video device, so "-vga qxl" is equal to
      "-device qxl-vga". Unfortunately "-vga qxl" doesn't support setting
      additional parameters for the device, and the "-global" option must
      be used for this purpose: it's mandatory to use "-global qxl-vga...."
      to set the parameters of a primary video device previously defined
      with "-vga qxl".
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
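      
      For example (a hedged sketch; qxl-vga property names as exposed by
      QEMU, sizes in bytes):
      
          -vga qxl \
          -global qxl-vga.ram_size=67108864 \
          -global qxl-vga.vram_size=67108864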
    • video: cleanup usage of vram attribute and update documentation · 81ba2298
      Committed by Pavel Hrdina
      The vram attribute was introduced to set the video memory, but it is
      usable only for a few hypervisors, excluding QEMU/KVM and the old Xen
      driver. Only in the case of QEMU was vram used, and only for QXL.
      
      This patch updates the documentation to reflect the current code in
      libvirt and also changes the cases in which we set a default vram
      attribute. It also fixes the existing strange default value for VGA
      devices, 9 MB, raising it to 16 MB, because video RAM should be
      rounded to a power of two.
      
      The change of default value could affect migration, but I found out
      that QEMU always rounds the video RAM to a power of two internally,
      so it's safe to change the default value to the next closest power
      of two and also to silently correct every domain XML definition.
      It's also safe because we don't pass the value to QEMU.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
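      
      The rounding rule can be sketched as follows (a hypothetical
      helper, not libvirt's actual macro):
      
          /* round up to the next power of two: 9216 KiB -> 16384 KiB */
          static unsigned long long round_up_pow2(unsigned long long v)
          {
              unsigned long long p = 1;
              while (p < v)
                  p <<= 1;
              return p;
          }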
  14. 21 11月, 2014 4 次提交
  15. 20 11月, 2014 2 次提交
  16. 19 11月, 2014 1 次提交
  17. 15 11月, 2014 2 次提交
  18. 12 11月, 2014 1 次提交
  19. 11 11月, 2014 1 次提交
  20. 07 11月, 2014 2 次提交
  21. 06 11月, 2014 4 次提交
  22. 05 11月, 2014 1 次提交
  23. 04 11月, 2014 1 次提交