1. 26 Jun 2020, 1 commit
  2. 16 Jun 2020, 1 commit
  3. 15 Jun 2020, 1 commit
  4. 06 May 2020, 1 commit
    • systemd: start libvirtd after firewalld/iptables services · 0756415f
      Committed by Laine Stump
      When a system has enabled the iptables/ip6tables services rather than
      firewalld, there is no explicit ordering of the start of those
      services vs. libvirtd. This creates a problem when libvirtd.service is
      started before ip[6]tables, as the latter, when it finally is started,
      will remove all of the iptables rules that had previously been added
      by libvirt, including the custom chains where libvirt's rules are
      kept. This results in an error message similar to the following when a
      user subsequently tries to start a new libvirt network:
      
       "Error while activating network: Call to virNetworkCreate failed:
       internal error: Failed to apply firewall rules
       /usr/sbin/ip6tables -w --table filter --insert LIBVIRT_FWO \
         --in-interface virbr2 --jump REJECT:
       ip6tables: No chain/target/match by that name."
      
      (Before this error is ever logged, the rule removal would already
      have broken traffic forwarding (or blocking) in some cases, e.g. for
      guests on a NATed network, since libvirt's forward/block rules had
      all been deleted without libvirt's knowledge, so it couldn't fix the
      problem.)
      
      When this happens, the problem can be remedied by simply restarting
      libvirtd.service (which has the side effect of reloading all
      libvirt-generated firewall rules).
      
      Instead, we can explicitly state in the libvirtd.service file that
      libvirtd.service should start after iptables.service and
      ip6tables.service, eliminating the race condition that leads to the
      error.
      
      There is also nothing (that I can see) in the systemd .service files
      to guarantee that firewalld.service will be started (if enabled)
      prior to libvirtd.service. The same error scenario given above would
      occur if libvirtd.service started before firewalld.service. Even
      before that, though, libvirtd would have detected that firewalld
      wasn't running and turned off all firewalld support. So, for
      example, firewalld's libvirt zone wouldn't be used, and most likely
      traffic from guests would therefore be blocked (with no external
      indication of the source of the problem other than a debug-level
      log message at libvirtd startup saying that firewalld wasn't in
      use); also, libvirtd wouldn't notice when firewalld reloaded its
      rules (which simultaneously deletes all of libvirt's rules).
      
      I'm not aware of any reports that have been traced back to
      libvirtd.service starting before firewalld.service, but have seen that
      error reported multiple times, and also don't see an existing
      dependency that would guarantee firewalld.service starts before
      libvirtd.service, so it's possible it's been happening and we just
      haven't gotten to the bottom of it.
      
      This patch adds an After= line to the libvirtd.service file for each
      of iptables.service, ip6tables.service, and firewalld.service, which
      should guarantee that libvirtd.service isn't started until systemd
      has started whichever of the others is enabled.
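
      Expressed as a sketch of the [Unit] section (based on this
      description; the rest of the unit file is elided and the comments
      are illustrative):

        [Unit]
        # Wait for whichever firewall service is enabled to finish
        # starting (and install its baseline rules) before libvirtd
        # starts adding its own chains and rules. After= only affects
        # ordering; it does not pull any of these services in.
        After=firewalld.service
        After=iptables.service
        After=ip6tables.service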
      
      This race was diagnosed, and a patch proposed, by Jason Montleon in
      https://bugzilla.redhat.com/1723698 . At the time (April 2019) danpb
      agreed with him that this change to libvirtd.service was a
      reasonable thing to do, but I guess everyone thought someone else
      was going to post a patch, so in the end nobody did.
      Signed-off-by: Laine Stump <laine@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
  5. 05 May 2020, 2 commits
  6. 24 Apr 2020, 1 commit
  7. 03 Apr 2020, 4 commits
  8. 27 Mar 2020, 1 commit
  9. 17 Mar 2020, 1 commit
    • rpc: avoid name lookup when dispatching node device APIs · 69eee587
      Committed by Daniel P. Berrangé
      The node device APIs are a little unusual because we don't use a
      "remote_nonnull_node_device" object on the wire; instead we just
      have a "remote_string" for the device name. This meant the
      dispatcher code generation needed special cases. In doing so we
      mistakenly used the virNodeDeviceLookupByName() API, which gets
      dispatched into the driver, instead of get_nonnull_node_device(),
      which directly populates a virNodeDevicePtr object (see the sketch
      below).
      
      This wasn't a problem with monolithic libvirtd, as the
      virNodeDeviceLookupByName() API call was trivially satisfied
      by the registered driver, albeit with an extra (undesirable)
      authentication check. With the split daemons, the call to
      virNodeDeviceLookupByName() fails in virtqemud, because the
      node device driver obviously doesn't exist in that daemon.
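
      The distinction can be sketched roughly like this (simplified; the
      real dispatchers are generated code, and the helper on the fixed
      path is daemon-internal rather than public API):

        #include <libvirt/libvirt.h>

        /* Buggy path: a full public-API lookup, dispatched into the node
         * device driver. In virtqemud no node device driver is
         * registered, so this fails; even in monolithic libvirtd it
         * added an extra (undesirable) authentication check. */
        static virNodeDevicePtr
        dispatch_lookup_buggy(virConnectPtr conn, const char *name)
        {
            return virNodeDeviceLookupByName(conn, name);
        }

        /* Fixed path: directly populate a virNodeDevicePtr that wraps
         * the wire-level name, with no driver round trip. The daemon
         * uses its internal get_nonnull_node_device() helper for this;
         * as it is not public API, it is only described by this
         * comment. */
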
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  10. 14 Mar 2020, 1 commit
  11. 05 Mar 2020, 1 commit
  12. 27 Feb 2020, 1 commit
    • daemon: set default memlock limit for systemd service · b379fee1
      Committed by Pavel Hrdina
      The default memlock limit is 64k, which is not enough to start a
      single VM. The requirement for one VM is 12k (8k for the eBPF map
      and 4k for the eBPF program), yet with the 64k limit creating the
      eBPF map and program fails. By testing I figured out that the
      minimal limit to start a single VM with functional eBPF is 80k, and
      adding another 12k lets me start one more.
      
      This leads to the following calculation:
      
      An 80k memlock limit was enough to start a VM with eBPF, which
      means there is 68k of locked memory that I was not able to
      attribute to anything specific. So, to get a number for 4096 VMs:
      
              68 + 12 * 4096 = 49220
      
      Rounding that up gives a memlock limit of 64M, enough to support
      4096 VMs with the default map size, which can hold 64 device
      entries.
      
      This should be good enough as a sane default, and users can change
      it if they need to.
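
      In the systemd service file this boils down to a single directive,
      roughly as follows (a sketch based on the description above):

        [Service]
        # 68k of baseline locked memory plus 12k per VM; 64M rounds that
        # up generously and covers ~4096 VMs with functional eBPF.
        LimitMEMLOCK=64M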
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
  13. 25 Feb 2020, 1 commit
  14. 07 Feb 2020, 1 commit
  15. 06 Feb 2020, 1 commit
  16. 04 Feb 2020, 2 commits
  17. 30 Jan 2020, 1 commit
  18. 29 Jan 2020, 1 commit
  19. 27 Jan 2020, 1 commit
    • libvirt: pass a directory path into drivers for embedded usage · 207709a0
      Committed by Daniel P. Berrangé
      The intent here is to allow the virt drivers to be run directly embedded
      in an arbitrary process without interfering with libvirtd. To achieve
      this they need to store all their configuration & state in a separate
      directory tree from the main system or session libvirtd instances.
      
      This can be useful for testing the virt drivers in "make check"
      without interfering with the user's own libvirtd instances.
      
      It can also be used by applications that use KVM/QEMU as a piece of
      infrastructure to build a service, rather than for general purpose
      OS hosting. A long-standing example is libguestfs, which would
      prefer that its temporary VMs did not show up in the main libvirtd
      VM list, because this confuses apps such as OpenStack Nova. A more
      recent example would be Kata, which uses KVM as a technology to
      build containers.
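
      As a minimal sketch of what embedded usage can look like from an
      application (the qemu:///embed URI with a root parameter is what
      this work enables; the directory path here is made up):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            virConnectPtr conn;

            /* The driver runs inside this process, which therefore has
             * to provide an event loop implementation. */
            if (virEventRegisterDefaultImpl() < 0)
                return 1;

            /* All configuration & state lives under the private root
             * directory, not the system/session libvirtd trees. */
            conn = virConnectOpen("qemu:///embed?root=/var/tmp/myapp-virt");
            if (!conn) {
                fprintf(stderr, "failed to open embedded QEMU driver\n");
                return 1;
            }

            /* ... define and run VMs private to this process ... */

            virConnectClose(conn);
            return 0;
        }
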
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  20. 23 Jan 2020, 1 commit
  21. 17 Jan 2020, 2 commits
  22. 16 Jan 2020, 2 commits
  23. 08 Jan 2020, 1 commit
    • remote_daemon: Initialize host boot time global variable · 35d603d5
      Committed by Michal Privoznik
      This is not strictly needed, but it makes sure we initialize the
      @bootTime global variable. Thing is, in order to validate XATTRs
      and prune those set in some previous run of the host, a timestamp
      is recorded in the XATTRs. The host boot time was unique enough, so
      it was chosen as the timestamp value. And to avoid querying and
      parsing /proc/uptime every time, the query function does that only
      once and stores the boot time in a global variable. However, the
      only time the query function is called is in a child process that
      locks files and changes seclabels, so the cached value never makes
      it back to the parent and every child parses /proc/uptime again.
      Effectively, we are doing exactly what we wanted to prevent.
      
      The fix is simple: call the virHostBootTimeInit() function from the
      daemon itself, so the global variable is set up front.
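
      A simplified sketch of the intended pattern (the function name
      mirrors the commit; the body here is illustrative, not libvirt's
      actual implementation):

        #include <stdio.h>
        #include <time.h>

        /* Cached host boot time, seconds since the epoch; 0 = unset. */
        static unsigned long long bootTime;

        /* Parse /proc/uptime once, in the main daemon process, so that
         * forked children inherit the cached value instead of each one
         * parsing the file again. */
        static int
        virHostBootTimeInit(void)
        {
            FILE *fp = fopen("/proc/uptime", "r");
            double up;

            if (!fp)
                return -1;
            if (fscanf(fp, "%lf", &up) != 1) {
                fclose(fp);
                return -1;
            }
            fclose(fp);

            /* boot time = current time - uptime */
            bootTime = (unsigned long long)time(NULL)
                       - (unsigned long long)up;
            return 0;
        }
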
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Cole Robinson <crobinso@redhat.com>
  24. 07 Jan 2020, 1 commit
  25. 03 Jan 2020, 1 commit
  26. 20 Dec 2019, 3 commits
  27. 17 Dec 2019, 1 commit
  28. 16 Dec 2019, 1 commit
    • configure: Provide OpenRC scripts for sub-daemons · 49c6fe62
      Committed by Michal Privoznik
      There are plenty of distributions that haven't switched to systemd
      and don't force their users to (Gentoo and Alpine Linux, to name a
      few). With the daemon split merged, their only option is to keep
      using the monolithic daemon, which will go away eventually. Provide
      init scripts for these distros too.
      
      For now, I'm not introducing config files corresponding to the init
      scripts, except for the libvirtd and virtproxyd init scripts, where
      it might be desirable to tweak the command line of the
      corresponding daemons.
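
      For context, a minimal OpenRC script for one of the sub-daemons
      could look roughly like this (a hypothetical sketch, not the
      generated script; the daemon, pidfile path, and dependencies are
      assumptions):

        #!/sbin/openrc-run

        description="libvirt QEMU management daemon"
        command="/usr/sbin/virtqemud"
        # -d detaches the daemon; extra options could come from a
        # conf.d file (hypothetical variable name)
        command_args="-d ${VIRTQEMUD_OPTS}"
        pidfile="/run/virtqemud.pid"

        depend() {
            need virtlogd
            after firewalld iptables
        }
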
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  29. 14 Dec 2019, 2 commits
  30. 11 Dec 2019, 1 commit