1. 06 May 2020 (10 commits)
    • gitlab: move some jobs onto CentOS 8 · 019b71de
      Daniel P. Berrangé committed
      So that we don't have to chase frequent Fedora releases, move the
      non-build-related jobs onto the long-life CentOS 8 distro.
      Reviewed-by: Andrea Bolognani <abologna@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    • tests: Enable directory override for qemucapsprobe · bab946e3
      Andrea Bolognani committed
      Currently, qemucapsprobe fails when libvirt is not already installed
      on the system:
      
        $ ./tests/qemucapsprobe /path/to/qemu-system-ppc64 >/dev/null
        I/O warning : failed to load external entity "/usr/share/libvirt/cpu_map/index.xml"
        2020-05-06 09:49:59.136+0000: 269822: info : libvirt version: 6.4.0
        2020-05-06 09:49:59.136+0000: 269822: info : hostname: [...]
        2020-05-06 09:49:59.136+0000: 269822: warning : virQEMUCapsLogProbeFailure:5127 :
        Failed to probe capabilities for /path/to/qemu-system-ppc64: XML error: failed to
        parse xml document '/usr/share/libvirt/cpu_map/index.xml'
      
      It would be great if the tool could work entirely out of the build
      directory, and this patch achieves just that.
      Suggested-by: Peter Krempa <pkrempa@redhat.com>
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
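      A minimal sketch of how such a build-directory override could be wired
      up, assuming the virFileActivateDirOverrideForProg() helper from
      src/util/virfile.h is used the way other libvirt binaries (e.g. virsh)
      use it; the actual patch may differ in detail:

        /* hypothetical excerpt from tests/qemucapsprobe.c */
        #include "virfile.h"

        int
        main(int argc, char **argv)
        {
            /* When argv[0] points into the build tree, later lookups of data
             * files such as cpu_map/index.xml resolve against the build
             * directory instead of /usr/share/libvirt, so the tool works
             * without libvirt installed. */
            virFileActivateDirOverrideForProg(argv[0]);

            /* ... existing capability probing code ... */
            return 0;
        }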
    • qemu: Don't use CPU from host capabilities as host-model on ARM · 3af4c75d
      Jiri Denemark committed
      We never supported host-model CPUs on ARM and we don't want to support
      them even once patches for direct detection of the host CPU are merged.
      And since using the host CPU definition for host-model CPUs exists only
      for backward compatibility, we should not use it for any host-model
      support added in the future. Such an enhancement should exclusively use
      the result of query-cpu-model-expansion. Until proper host-model support
      is implemented for ARM (if ever), we need to make sure the detected host
      CPU is not accidentally used for host-model CPUs.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    • qemucapabilitiesdata: Add test data for x86_64 for the qemu-5.1 dev cycle · 22f0da4e
      Peter Krempa committed
      Start the new capability file for the new development cycle of QEMU.
      
      Note that, compared to the previous version, this one was generated on an AMD CPU.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
    • systemd: start libvirtd after firewalld/iptables services · 0756415f
      Laine Stump committed
      When a system has enabled the iptables/ip6tables services rather than
      firewalld, there is no explicit ordering of the start of those
      services vs. libvirtd. This creates a problem when libvirtd.service is
      started before ip[6]tables, as the latter, when it finally is started,
      will remove all of the iptables rules that had previously been added
      by libvirt, including the custom chains where libvirt's rules are
      kept. This results in an error message similar to the following when a
      user subsequently tries to start a new libvirt network:
      
       "Error while activating network: Call to virNetworkCreate failed:
       internal error: Failed to apply firewall rules
       /usr/sbin/ip6tables -w --table filter --insert LIBVIRT_FWO \
         --in-interface virbr2 --jump REJECT:
       ip6tables: No chain/target/match by that name."
      
      (Prior to logging this error, it also would have caused failure to
      forward (or block) traffic in some cases, e.g. for guests on a NATed
      network, since libvirt's rules to forward/block had all been deleted
      and libvirt didn't know about it, so it couldn't fix the problem.)
      
      When this happens, the problem can be remedied by simply restarting
      libvirtd.service (which has the side effect of reloading all
      libvirt-generated firewall rules).
      
      Instead, we can just explicitly state in the libvirtd.service file
      that libvirtd.service should start after iptables.service and
      ip6tables.service, eliminating the race condition that leads to the
      error.
      
      There is also nothing (that I can see) in the systemd .service files
      to guarantee that firewalld.service will be started (if enabled) prior
      to libvirtd.service. The same error scenario given above would occur
      if libvirtd.service started before firewalld.service. Even before
      that, though, libvirtd would have detected that firewalld.service was
      disabled and turned off all firewalld support. So, for example,
      firewalld's libvirt zone wouldn't be used, and most likely traffic
      from guests would therefore be blocked (all with no external
      indication of the source of the problem other than a debug-level log
      when libvirtd was started saying that firewalld wasn't in use);
      libvirtd also wouldn't notice when firewalld reloaded its rules
      (which simultaneously deletes all of libvirt's rules).
      
      I'm not aware of any reports that have been traced back to
      libvirtd.service starting before firewalld.service, but have seen that
      error reported multiple times, and also don't see an existing
      dependency that would guarantee firewalld.service starts before
      libvirtd.service, so it's possible it's been happening and we just
      haven't gotten to the bottom of it.
      
      This patch adds an After= line to the libvirtd.service file for each
      of iptables.service, ip6tables.service, and firewalld.service, which
      should guarantee that libvirtd.service isn't started until systemd has
      started whichever of the others is enabled.
      
      This race was diagnosed, and patch proposed, by Jason Montleon in
      https://bugzilla.redhat.com/1723698 . At the time (April 2019) danpb
      agreed with him that this change to libvirtd.service was a reasonable
      thing to do, but I guess everyone thought someone else was going to
      post a patch, so in the end nobody did.
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
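      A minimal sketch of the ordering described above, as it might appear in
      the generated libvirtd.service unit (the exact layout of the upstream
      unit template may differ):

        [Unit]
        Description=Virtualization daemon
        # Order libvirtd after whichever firewall service is enabled, so that
        # libvirt's chains and rules are not wiped out after being installed.
        After=firewalld.service
        After=iptables.service
        After=ip6tables.service

      After= only expresses ordering, not a requirement, so units that are
      disabled or not installed are simply ignored and listing all three is
      safe.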
    • docs: note that <dnsmasq:option> was added in libvirt 5.6.0 · 695219a5
      Laine Stump committed
      To make it simpler to answer questions of "Why doesn't this thing work
      for me?"
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Andrea Bolognani <abologna@redhat.com>
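      For reference, usage of the element looks roughly like the following
      network XML (namespace URI and element names as documented in
      formatnetwork; the option value shown is just an illustration):

        <network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
          <name>example</name>
          ...
          <dnsmasq:options>
            <dnsmasq:option value='dhcp-option=option:router,192.168.122.1'/>
          </dnsmasq:options>
        </network>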
    • docs: Xen improvements · 57687260
      Jim Fehlig committed
      In formatdomain, using 'libxl' and 'xen' is redundant since they now
      both refer to the same driver. 'xen' predates 'libxl' and unambiguously
      identifies the Xen hypervisor, so drop the use of 'libxl'.
      
      In aclpolkit, the connection URI was erroneously identified as 'libxl'
      and the driver name as 'xenlight'. Change the URI to 'xen' and the
      driver name to 'Xen'.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    • libxl: Clarify that 'xenlight' should only be used internally · 836ea91d
      Jim Fehlig committed
      The libxl driver has suffered an identity crisis since its introduction.
      It took on the name 'libxl' since at the time libvirt already contained
      a 'xen' driver for the old Xen toolstack implementation. 'libxl' is short
      for libxenlight, which is often called xenlight. Unfortunately all forms
      of the name are used in the libxl driver.
      
      The only remaining use of the 'xenlight' form is when interacting with
      the host device manager, which is difficult to change since it would
      cause problems when upgrading the driver.
      
      Rename the #define to make it clear the 'xenlight' form is internal and
      add a comment describing why the name exists and that its use should be
      discouraged.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
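      A sketch of the kind of rename and comment the commit describes (the
      macro name used here is illustrative, not necessarily the one chosen
      upstream):

        /*
         * The 'xenlight' form of the name survives only for interaction with
         * the host device manager, where changing it would break upgrades of
         * existing hosts. It should not be used for anything new; the
         * user-visible name of the driver is 'Xen'.
         */
        #define LIBXL_DRIVER_INTERNAL_NAME "xenlight"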
    • libxl: Use the name 'Xen' in driver tables · d218a9c2
      Jim Fehlig committed
      The libxl driver declares its name as 'Xen' through the public
      virConnectGetType() API. In the virHypervisorDriver table the name is
      set to 'xenlight'. To add more confusion, the name is set to 'LIBXL'
      in the virStateDriver. For consistency, use the same name in the driver
      tables as reported in the public virConnectGetType() API.
      Signed-off-by: Jim Fehlig <jfehlig@suse.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
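      An illustrative sketch of the change to the driver tables (field subset
      only; the table variable names are assumed from the libxl driver
      sources):

        static virHypervisorDriver libxlHypervisorDriver = {
            .name = "Xen",   /* previously "xenlight" */
            /* ... */
        };

        static virStateDriver libxlStateDriver = {
            .name = "Xen",   /* previously "LIBXL" */
            /* ... */
        };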
  2. 05 May 2020 (12 commits)
  3. 04 May 2020 (3 commits)
  4. 01 May 2020 (1 commit)
  5. 28 April 2020 (4 commits)
  6. 27 April 2020 (10 commits)