1. 04 Jun 2014, 2 commits
    • virCaps: Expose distance between host NUMA nodes · 8ba0a58f
      Authored by Michal Privoznik
      If a user or management application wants to create a guest, it may
      be useful to know the cost of internode latencies before the guest's
      resources are pinned. For example:
      
      <capabilities>
      
        <host>
          ...
          <topology>
            <cells num='2'>
              <cell id='0'>
                <memory unit='KiB'>4004132</memory>
                <distances>
                  <sibling id='0' value='10'/>
                  <sibling id='1' value='20'/>
                </distances>
                <cpus num='2'>
                  <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                  <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
                </cpus>
              </cell>
              <cell id='1'>
                <memory unit='KiB'>4030064</memory>
                <distances>
                  <sibling id='0' value='20'/>
                  <sibling id='1' value='10'/>
                </distances>
                <cpus num='2'>
                  <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
                  <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
                </cpus>
              </cell>
            </cells>
          </topology>
          ...
        </host>
        ...
      </capabilities>
      
      We can see that the distance from node 1 to node 0 is 20, while the
      distance within a node is 10.
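      
      A client can retrieve this XML through the public API; a minimal
      sketch (error reporting omitted):
      
      #include <stdio.h>
      #include <stdlib.h>
      #include <libvirt/libvirt.h>
      
      int main(void)
      {
          virConnectPtr conn = virConnectOpenReadOnly(NULL);
          char *caps;
      
          if (!conn)
              return EXIT_FAILURE;
      
          /* Returns the capabilities XML shown above; the new <distances>
           * element sits under host/topology/cells/cell. */
          caps = virConnectGetCapabilities(conn);
          if (caps) {
              printf("%s", caps);
              free(caps);
          }
      
          virConnectClose(conn);
          return EXIT_SUCCESS;
      }
      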
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      8ba0a58f
    • virnuma: Introduce virNumaGetDistances · 77c830d8
      Authored by Michal Privoznik
      The API takes a NUMA node and finds the distances to the other
      nodes. The distances are returned in an array. If item X within the
      array is zero, then there is no such node as X.
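      
      A minimal caller sketch, assuming the helper keeps the signature
      int virNumaGetDistances(int node, int **distances, int *ndistances)
      (virnuma.h is a libvirt-internal header, so this is illustrative
      rather than public API):
      
      #include <stdio.h>
      #include <stdlib.h>
      #include "virnuma.h"   /* libvirt-internal */
      
      static void
      printDistances(int node)
      {
          int *distances = NULL;
          int ndistances = 0;
          int i;
      
          if (virNumaGetDistances(node, &distances, &ndistances) < 0)
              return;
      
          for (i = 0; i < ndistances; i++) {
              if (distances[i] == 0)
                  continue;   /* zero: no such node as i */
              printf("node %d -> node %d: %d\n", node, i, distances[i]);
          }
      
          free(distances);
      }
      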
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      77c830d8
  2. 03 Jun 2014, 28 commits
  3. 02 Jun 2014, 6 commits
    • Don't use AI_ADDRCONFIG when binding to wildcard addresses · 819ca36e
      Authored by Ján Tomko
      https://bugzilla.redhat.com/show_bug.cgi?id=1098659
      
      With parallel boot, network addresses might not yet be assigned [1],
      but binding to wildcard addresses should work.
      
      For non-wildcard addresses, ADDRCONFIG is still used. Document this
      in libvirtd.conf.
      
      [1] http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
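      
      The gist of the change, as a hedged sketch of the getaddrinfo()
      hint setup (not the exact libvirt code):
      
      #include <sys/socket.h>
      #include <netdb.h>
      #include <string.h>
      
      /* Only request AI_ADDRCONFIG for a concrete address; a NULL node
       * means the wildcard address, which must be bindable even before
       * any interface has an address assigned. */
      static void
      setupHints(const char *node, struct addrinfo *hints)
      {
          memset(hints, 0, sizeof(*hints));
          hints->ai_family = AF_UNSPEC;
          hints->ai_socktype = SOCK_STREAM;
          hints->ai_flags = AI_PASSIVE;
          if (node)
              hints->ai_flags |= AI_ADDRCONFIG;
      }
      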
      819ca36e
    • 25a5df16
    • qemu: Process DEVICE_DELETED event in a separate thread · 47f424c2
      Authored by Jiri Denemark
      Currently, we do not acquire any job when removing a device after a
      DEVICE_DELETED event is received from QEMU. This means that if
      another API is running at the time DEVICE_DELETED is delivered and
      that API has acquired a job, we may happily change the definition of
      the domain the API is working with whenever it unlocks the domain
      object (e.g., to talk to its monitor). Thus, we have to acquire a
      job before finishing device removal to make things safe. However,
      doing so in the main event loop would cause a deadlock, so we need
      to move most of the event handler into a separate thread.
      
      Another good reason for both acquiring a job and handling the event
      in a separate thread is that we currently remove a device backend
      immediately after removing its frontend, while we should only remove
      the backend once we have received the DEVICE_DELETED event. That is,
      we will have to talk to the QEMU monitor from the event handler.
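      
      Schematically, the handler now merely spawns a worker; virThread*
      is libvirt's internal thread API and processDeviceDeleted is a
      placeholder name:
      
      #include <stdbool.h>
      #include "virthread.h"   /* libvirt-internal */
      
      static void
      processDeviceDeleted(void *opaque)
      {
          /* runs outside the event loop: begin a job, talk to the
           * monitor, remove the frontend and backend, end the job */
      }
      
      static void
      deviceDeletedCallback(void *opaque)
      {
          virThread thread;
      
          /* never block the main event loop by acquiring a job here;
           * hand the event off to a worker thread instead */
          if (virThreadCreate(&thread, false, processDeviceDeleted, opaque) < 0) {
              /* worker could not be spawned; the event is dropped */
          }
      }
      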
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      47f424c2
    • qemu: Finish device removal in the original thread · 4670f1dd
      Authored by Jiri Denemark
      If QEMU supports the DEVICE_DELETED event, we always call
      qemuDomainRemoveDevice from the event handler. However, since we
      will need to push this call out of the main event loop and begin a
      job for it (see the following commit), we need to make sure the
      device is fully removed by the original thread (and within its
      existing job) in case the DEVICE_DELETED event arrives before
      qemuDomainWaitForDeviceRemoval times out.
      
      Without this patch, device removals would be guaranteed never to
      finish before the timeout, because the code would be blocked by the
      original job still being active.
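      
      The hand-off can be pictured as a plain condition-variable wait (a
      generic pthread sketch of the pattern, not the qemu driver code):
      
      #include <pthread.h>
      #include <stdbool.h>
      
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t removed = PTHREAD_COND_INITIALIZER;
      static bool deviceGone;
      
      /* original thread: holds the job and waits (with a timeout in the
       * real code) for the event handler's signal */
      static void
      waitForRemoval(void)
      {
          pthread_mutex_lock(&lock);
          while (!deviceGone)
              pthread_cond_wait(&removed, &lock);
          /* finish the removal here, inside the existing job */
          pthread_mutex_unlock(&lock);
      }
      
      /* event handler: records that DEVICE_DELETED arrived and wakes the
       * waiter instead of removing the device itself */
      static void
      onDeviceDeleted(void)
      {
          pthread_mutex_lock(&lock);
          deviceGone = true;
          pthread_cond_signal(&removed);
          pthread_mutex_unlock(&lock);
      }
      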
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      4670f1dd
    • Fix build on freebsd · f8a0c9ed
      Authored by Pavel Hrdina
      On FreeBSD, setlocale is not declared by the headers we already
      include, so we have to include locale.h.
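      
      The fix boils down to making the declaration visible:
      
      #include <locale.h>   /* declares setlocale(); not pulled in
                               transitively by other headers on FreeBSD */
      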
      f8a0c9ed
    • Add helper program to create custom leases · baafe668
      Authored by Nehal J Wani
      Introduce a helper program to catch events from dnsmasq and maintain
      a custom lease file per network. It supports DHCPv4 and DHCPv6. The
      file is saved as "<interface-name>.status". (A sketch of the
      helper's dnsmasq-facing entry point follows the file list below.)
      
      Each lease contains the following info:
      <expiry-time (epoch time)> <mac> <iaid> <ip-address> <hostname> <clientid>
      
      Example of custom leases file content:
      [
          {
              "iaid": "1221229",
              "ip-address": "2001:db8:ca2:2:1::95",
              "mac-address": "52:54:00:12:a2:6d",
              "hostname": "Fedora20",
              "client-id": "00:04:1a:c1:d9:6b:5a:0a:e2:bc:f8:4b:1e:37:2e:38:22:55",
              "expiry-time": 1393244216
          },
          {
              "ip-address": "192.168.150.208",
              "mac-address": "52:54:00:11:56:b3",
              "hostname": "Wani-PC",
              "client-id": "01:52:54:00:11:56:b3",
              "expiry-time": 1393244248
          }
      ]
      
      src/Makefile.am:
         * Add options to compile the helper program
      
      src/network/bridge_driver.c:
         * Introduce networkDnsmasqLeaseFileNameCustom()
         * Invoke helper program along with dnsmasq
         * Delete the .status file when the corresponding network is destroyed.
      
      src/network/leaseshelper.c
         * Helper program to create the custom lease file
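      
      For orientation: dnsmasq runs its --dhcp-script helper with the
      action and core lease fields as command-line arguments and the
      remaining data in environment variables. A rough sketch of such an
      entry point (the merging into the JSON .status file is elided):
      
      #include <stdio.h>
      #include <stdlib.h>
      
      int main(int argc, char **argv)
      {
          /* invoked as: <helper> <action> <mac|duid> <ip> [hostname] */
          const char *action   = argc > 1 ? argv[1] : "";  /* add/old/del */
          const char *mac      = argc > 2 ? argv[2] : "";
          const char *ip       = argc > 3 ? argv[3] : "";
          const char *hostname = argc > 4 ? argv[4] : "";
      
          /* additional lease data arrives in the environment */
          const char *expires  = getenv("DNSMASQ_LEASE_EXPIRES");
          const char *clientid = getenv("DNSMASQ_CLIENT_ID");
          const char *iaid     = getenv("DNSMASQ_IAID");   /* DHCPv6 only */
      
          /* a real helper would merge this record into the per-network
           * "<interface-name>.status" file shown above */
          printf("%s %s %s %s %s %s %s\n", action, mac, ip, hostname,
                 expires ? expires : "-", clientid ? clientid : "-",
                 iaid ? iaid : "-");
          return EXIT_SUCCESS;
      }
      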
      baafe668
  4. 29 May 2014, 4 commits
    • qemu: snapshot: Improve detection of mixed snapshots · 23f38f88
      Authored by Peter Krempa
      Currently we don't support mixed (external + internal) snapshots.
      The code detecting the snapshot type didn't make sure that the
      memory image was consistent with the snapshot type, leading to a
      strange error message:
      
       $ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
       error: internal error: unexpected code path
      
      Fix the mixed detection code to detect this kind of mistake:
      
       $ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
       error: unsupported configuration: mixing internal and external targets for a snapshot is not yet supported
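      
      The detection amounts to tallying internal vs. external targets
      across all disks plus the memory image; a simplified sketch with
      illustrative names, not libvirt's exact ones:
      
      #include <stdbool.h>
      #include <stddef.h>
      
      enum { TARGET_INTERNAL, TARGET_EXTERNAL };
      
      static int
      checkMixedSnapshot(size_t ndisks, const int *diskTarget,
                         int memoryTarget, bool domainActive)
      {
          size_t i;
          int internal = 0, external = 0;
      
          for (i = 0; i < ndisks; i++) {
              if (diskTarget[i] == TARGET_INTERNAL)
                  internal++;
              else
                  external++;
          }
      
          /* a memory image exists only for snapshots of a running domain */
          if (domainActive) {
              if (memoryTarget == TARGET_INTERNAL)
                  internal++;
              else
                  external++;
          }
      
          if (internal > 0 && external > 0)
              return -1;   /* mixing targets is not yet supported */
          return 0;
      }
      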
      23f38f88
    • qemu: snapshot: Reject internal active snapshot without memory state · d2e668e5
      Authored by Peter Krempa
      An internal snapshot of an active VM with the memory snapshot
      explicitly disabled would actually still take the memory snapshot.
      Reject such a request explicitly.
      
      Before:
       $ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
       Domain snapshot 1401353155 created
      
      After:
       $ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
       error: Operation not supported: internal snapshot of a running VM must include the memory state
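      
      The new rule as a standalone predicate (names are illustrative):
      
      #include <stdbool.h>
      
      /* An internal snapshot of a running domain stores disk and memory
       * state together in the qcow2 image, so the memory part cannot be
       * skipped; reject such a request up front. */
      static bool
      internalSnapshotAllowed(bool domainActive, bool disksInternal,
                              bool memoryIncluded)
      {
          if (domainActive && disksInternal && !memoryIncluded)
              return false;   /* must include the memory state */
          return true;
      }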
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1083345
      d2e668e5
    • util: storage: Fix crash of libvirtd on network backed guest block-pull · 4a051b80
      Authored by Peter Krempa
      For guests backed by gluster volumes (or other network storage) we don't
      fill the backing chain (see qemuDomainDetermineDiskChain). This leaves
      the "relPath" field of the top image NULL. This causes a crash in
      virStorageFileChainLookup() when looking up a backing element for such a
      disk.
      
      Since I'm working on adding support for network storage, and one of
      the steps will make the "relPath" field optional, let's use
      STREQ_NULLABLE instead of STREQ in virStorageFileChainLookup() to
      avoid the problem.
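      
      For reference, the difference between the two macros, rendered
      approximately (see libvirt's internal.h for the authoritative
      definitions):
      
      #include <string.h>
      
      /* STREQ dereferences both arguments, so a NULL operand crashes;
       * the NULL-tolerant variant treats two NULLs as equal and a NULL
       * vs. non-NULL pair as unequal. */
      #define STREQ(a, b) (strcmp((a), (b)) == 0)
      #define STREQ_NULLABLE(a, b) \
          ((a) ? ((b) && STREQ((a), (b))) : !(b))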
      4a051b80
    • util: fix virTimeLocalOffsetFromUTC DST processing · 26d43113
      Authored by Laine Stump
      The original version of virTimeLocalOffsetFromUTC() would fail for
      certain times of the day if daylight savings time was active. This
      could most easily be seen by uncommenting the TEST_LOCALOFFSET() cases
      that include a DST setting.
      
      After a lot of experimenting, I found that the way to solve it in
      almost all test cases is to set tm_isdst = -1 in the struct tm prior
      to calling mktime(). Once this is done, the correct offset is returned
      for all test cases at all times except the two hours just after
      00:00:00 Jan 1 UTC - during that time, any timezone that is *behind*
      UTC, and that is supposed to always be in DST will not have DST
      accounted for in its offset.
      
      I believe that the code of virTimeLocalOffsetFromUTC() actually is
      correct for all cases, but the problem still encountered is due to our
      inability to come up with a TZ string that properly forces DST to
      *always* be active. Since a modification of the (currently fixed)
      expected result data to account for this would necessarily use the
      same functions that we're trying to test, I've instead just made the
      test program conditionally bypass the problematic cases if the current
      date is either December 31 or January 1. This way we get maximum
      testing during 363 days of the year, but don't get false failures on
      Dec 31 and Jan 1.
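      
      The crux of the fix can be shown standalone (a sketch, not the
      libvirt function itself):
      
      #include <stdio.h>
      #include <time.h>
      
      /* Offset of local time from UTC in seconds: break the current
       * time down as UTC, then reinterpret those fields as local time.
       * With tm_isdst = -1, mktime() decides itself whether DST applies
       * instead of trusting a possibly wrong flag, which was the bug. */
      static long
      localOffsetFromUTC(void)
      {
          time_t now = time(NULL);
          struct tm gm;
      
          gmtime_r(&now, &gm);
          gm.tm_isdst = -1;   /* the crucial assignment */
          return (long)(now - mktime(&gm));
      }
      
      int main(void)
      {
          printf("local offset: %ld seconds\n", localOffsetFromUTC());
          return 0;
      }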
      26d43113