1. 03 May, 2018 (9 commits)
  2. 02 May, 2018 (2 commits)
  3. 30 April, 2018 (5 commits)
  4. 28 April, 2018 (1 commit)
    •
      nwfilter: increase pcap buffer size to be compatible with TPACKET_V3 · ce5aebea
      Committed by Laine Stump
      When an nwfilter rule sets the parameter CTRL_IP_LEARNING to "dhcp",
      this turns on the "dhcpsnoop" thread, which uses libpcap to monitor
      traffic on the domain's tap device and extract the IP address from the
      DHCP response.
      
      If libpcap on the host is built with HAVE_TPACKET3 defined (to enable
      support for TPACKET_V3), the dhcpsnoop code's initialization of the
      libpcap socket would fail with the following error:
      
        virNWFilterSnoopDHCPOpen:1134 : internal error: pcap_setfilter: can't remove kernel filter: Bad file descriptor
      
      It turns out that this was because TPACKET_V3 requires a larger buffer
      size than libvirt was setting (we were setting it to 128k). Changing
      the buffer size to 256k eliminates the error, and the dhcpsnoop thread
      once again works properly.
      
      A fuller explanation of why TPACKET_V3 requires such a large buffer,
      for future git spelunkers:
      
      libpcap calls setsockopt(... SOL_PACKET, PACKET_RX_RING...) to set up a
      ring buffer for receiving packets; two of the attributes sent to this
      API are called tp_frame_size and tp_frame_nr. If libpcap was built
      with HAVE_TPACKET3 defined, tp_frame_size is set to MAXIMUM_SNAPLEN
      (defined in libpcap sources as 262144) and tp_frame_nr is set to:
      
       [the buffer size we set, i.e. PCAP_BUFFERSIZE i.e. 262144] / tp_frame_size.
      
      So if PCAP_BUFFERSIZE < MAXIMUM_SNAPLEN, then tp_frame_nr (the number
      of frames in the ring buffer) is 0, which is nonsensical. This same
      value is later used as a multiplier to determine the size for a call
      to malloc() (which would also fail).
      
      (NB: if HAVE_TPACKET3 is *not* defined, then tp_frame_size is set to
      the snaplen set by the user (in our case 576) plus a small amount to
      account for ethernet headers, so 256k is far more than adequate)
      
      Since the TPACKET_V3 code in libpcap actually reads multiple packets
      into each frame, it's not a problem to have only a single frame
      (especially when we are monitoring such infrequent traffic), so it's
      okay to set this relatively small buffer size (in comparison to the
      default, which is 2MB), which is important since every guest using
      dhcp snooping in a nwfilter rule will hold 2 of these buffers for the
      entire life of the guest.
      
      Thanks to Christian Ehrhardt for discovering that the buffer size was
      the problem (this was not at all obvious from the error that was logged!)
      
      Resolves: https://bugzilla.redhat.com/1547237
      Fixes: https://bugs.launchpad.net/libvirt/+bug/1758037
      Signed-off-by: Laine Stump <laine@laine.org>
      Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com> (V1)
      Reviewed-by: John Ferlan <jferlan@redhat.com>
      Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
  5. 27 April, 2018 (3 commits)
  6. 26 April, 2018 (2 commits)
  7. 25 April, 2018 (10 commits)
  8. 24 April, 2018 (5 commits)
    •
      build: prevent unloading of dlopen'd modules · 71feef92
      Committed by Daniel P. Berrangé
      We previously added "-z nodelete" to the build of libvirt.so to prevent
      crashes when thread-local destructors run that point to code that has
      been dlclose()d:
      
        commit 8e44e559
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Thu Sep 1 17:57:06 2011 +0100
      
            Prevent crash from dlclose() of libvirt.so
      
      The libvirtd loadable modules can suffer from the same problem if they
      were ever unloaded. Fortunately we don't ever call dlclose() on them,
      but let's add a second layer of protection by linking them with the
      "-z nodelete" flag. While we're doing this, let's add a third layer of
      protection by passing RTLD_NODELETE to dlopen().
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    •
      remote: stop trying to load Xen driver module · 87680332
      Committed by Daniel P. Berrangé
      The Xen driver was recently deleted, but libvirtd has leftover code
      that tries to use it. Fortunately this is dead code, because WITH_XEN
      will never be defined again.
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    •
      build: prevent unloading of all public libraries · 419607c4
      Committed by Daniel P. Berrangé
      We previously added "-z nodelete" to the build of libvirt.so to prevent
      crashes when thread-local destructors run that point to code that has
      been dlclose()d:
      
        commit 8e44e559
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Thu Sep 1 17:57:06 2011 +0100
      
            Prevent crash from dlclose() of libvirt.so
      
      We forgot to copy this protection into the libvirt-qemu.so, libvirt-lxc.so
      and libvirt-admin.so libraries when we introduced them.
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    •
      Check return status for virUUIDGenerate · da613819
      Committed by John Ferlan
      Although legal, a few paths were checking for a non-zero return
      value rather than a return value < 0 to detect failure.
      
      Clean them all up to be consistent.
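      The convention can be sketched with a hypothetical stand-in
      (virUUIDGenerate itself is libvirt-internal; the names below are
      invented for illustration):

```c
#include <assert.h>

/* Hypothetical stand-in for virUUIDGenerate(): returns 0 on success,
 * -1 on failure, following the convention described above. */
static int fake_uuid_generate(int simulate_failure)
{
    return simulate_failure ? -1 : 0;
}

/* The consistent pattern the commit settles on: check "< 0", not
 * "!= 0". Both happen to work for a function returning 0 or -1, but
 * "< 0" matches the documented contract. */
static int generate_checked(int simulate_failure)
{
    if (fake_uuid_generate(simulate_failure) < 0)
        return -1;      /* error path */
    return 0;           /* success path */
}
```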
      Signed-off-by: John Ferlan <jferlan@redhat.com>
    •
      virNumaGetHugePageInfo: Return page_avail and page_free as ULL · 31daccf5
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1569678
      
      On some large systems (with ~400GB of RAM) it is possible for an
      unsigned int to overflow, in which case we report an invalid size
      for the 4K page pool. Switch to unsigned long long.
      
      We hit overflow in virNumaGetPages when doing:
      
          huge_page_sum += 1024 * page_size * page_avail;
      
      because although 'huge_page_sum' is an unsigned long long, page_size
      and page_avail are both unsigned int, so the multiplication is done
      in unsigned int arithmetic and overflows before the result is
      promoted to unsigned long long for the addition.
      
      Turning page_avail into an unsigned long long is not strictly
      needed until we need the ability to represent more than 2^32
      4K pages, which equates to 16 TB of RAM. That's not outside the
      realm of possibility, so it makes sense to change it to unsigned
      long long now to avoid future problems.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  9. 23 April, 2018 (3 commits)