1. 12 May 2015, 8 commits
  2. 11 May 2015, 11 commits
  3. 08 May 2015, 1 commit
    • caps: Fix regression defaulting to host arch · 8910e063
      Committed by Cole Robinson
      My commit 747761a7 (v1.2.15 only) dropped this bit of logic when filling
      in a default arch in the XML:
      
      -    /* First try to find one matching host arch */
      -    for (i = 0; i < caps->nguests; i++) {
      -        if (caps->guests[i]->ostype == ostype) {
      -            for (j = 0; j < caps->guests[i]->arch.ndomains; j++) {
      -                if (caps->guests[i]->arch.domains[j]->type == domain &&
      -                    caps->guests[i]->arch.id == caps->host.arch)
      -                    return caps->guests[i]->arch.id;
      -            }
      -        }
      -    }
      
      That attempt to match host.arch is important; otherwise we end up
      defaulting to i686 on an x86_64 host for KVM, which is not intended.
      Duplicate it in the centralized CapsLookup function (a sketch of the
      preference logic follows below).
      
      Additionally, add some test cases that would have caught this.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1219191
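      
      The following is a minimal, self-contained sketch of the preference
      logic described above. It uses simplified stand-in types rather than
      libvirt's real capability structures (guest_caps, default_guest_arch
      and friends are hypothetical): among the guests matching the requested
      ostype and domain type, the one whose arch equals the host arch wins;
      otherwise the first match is used.
      
          #include <stdio.h>
          #include <stddef.h>
          #include <string.h>
          
          /* Hypothetical, simplified stand-in for a guest capability entry. */
          typedef struct {
              const char *ostype;
              const char *domain;
              const char *arch;
          } guest_caps;
          
          /* Prefer a guest whose arch matches the host arch; otherwise fall
           * back to the first guest matching ostype/domain, or NULL. */
          static const char *
          default_guest_arch(const guest_caps *guests, size_t nguests,
                             const char *host_arch,
                             const char *ostype, const char *domain)
          {
              const char *fallback = NULL;
              size_t i;
          
              for (i = 0; i < nguests; i++) {
                  if (strcmp(guests[i].ostype, ostype) != 0 ||
                      strcmp(guests[i].domain, domain) != 0)
                      continue;
          
                  if (strcmp(guests[i].arch, host_arch) == 0)
                      return guests[i].arch;     /* host-arch match wins */
          
                  if (!fallback)
                      fallback = guests[i].arch; /* remember first match */
              }
              return fallback;
          }
          
          int main(void)
          {
              guest_caps guests[] = {
                  { "hvm", "kvm", "i686" },
                  { "hvm", "kvm", "x86_64" },
              };
          
              /* Without the host-arch preference this would pick i686 on an
               * x86_64 host; with it, x86_64 is chosen. */
              printf("%s\n", default_guest_arch(guests, 2, "x86_64", "hvm", "kvm"));
              return 0;
          }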
  4. 07 May 2015, 4 commits
    • tests: Remove redundant aarch64 tests · fd74e231
      Committed by Cole Robinson
      My commit 7b9de914 added some aarch64 CPU test cases. I wanted to test
      two different code paths but inadvertently added two of the same test
      cases.
      
      The second code path (using <cpu><model>host</model></cpu>) isn't easily
      exercised via the qemu tests anyway; I'll need to look elsewhere.
      
      Regardless, remove the redundant tests for now.
    • processSerialChangedEvent: Close agent monitor early · 2af51483
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=890648
      
      So, imagine you've issued an API call that involves the guest agent.
      For instance, you want to query the guest's IP addresses. The API
      acquires QUERY_JOB, locks the guest agent, and issues the agent
      command. However, for some reason the guest agent replies to the
      initial ping correctly but then crashes while executing the real
      command (in this case guest-network-get-interfaces). Since the
      initial ping went well, libvirt thinks the guest agent is accessible
      and awaits a reply to the real command. That reply will never come;
      what does come is a monitor event. Our handler
      (processSerialChangedEvent) tries to acquire MODIFY_JOB, which
      obviously fails because the other thread executing the API already
      holds a job. So the event handler exits early, and the QUERY_JOB is
      never released nor ended.
      
      The way to solve this is to put a flag in the agent monitor
      internals. The flag is called @running and agent commands are issued
      only if the flag is set. The flag is set when we connect to the agent
      socket and unset whenever we see a DISCONNECT event from the agent.
      Moreover, we must wake up all the threads waiting for the agent,
      which is done by signalling the condition they're waiting on (a rough
      sketch of the idea follows below).
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
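      
      A rough sketch of the idea using plain pthreads instead of libvirt's
      own monitor and condition primitives; the names and structure below
      are illustrative, not the actual patch. Commands are issued only
      while a running flag is set; the DISCONNECT handler clears the flag
      and wakes every thread blocked waiting for an agent reply so it can
      fail gracefully instead of waiting forever.
      
          #include <pthread.h>
          #include <stdbool.h>
          #include <stdio.h>
          
          /* Hypothetical, simplified agent-monitor state. */
          typedef struct {
              pthread_mutex_t lock;
              pthread_cond_t  reply;    /* signalled on reply or disconnect */
              bool            running;  /* set on connect, cleared on DISCONNECT */
          } agent_monitor;
          
          /* Called when we connect to the agent socket. */
          static void agent_connected(agent_monitor *mon)
          {
              pthread_mutex_lock(&mon->lock);
              mon->running = true;
              pthread_mutex_unlock(&mon->lock);
          }
          
          /* Called from the event handler on an agent DISCONNECT event. */
          static void agent_disconnected(agent_monitor *mon)
          {
              pthread_mutex_lock(&mon->lock);
              mon->running = false;
              pthread_cond_broadcast(&mon->reply);  /* wake all waiters */
              pthread_mutex_unlock(&mon->lock);
          }
          
          /* Issue a command only if the agent is running; give up as soon
           * as the agent disappears (reply handling is elided here). */
          static int agent_command(agent_monitor *mon)
          {
              int ret = -1;
          
              pthread_mutex_lock(&mon->lock);
              if (!mon->running) {
                  fprintf(stderr, "guest agent is not connected\n");
                  goto cleanup;
              }
          
              /* ... send the command on the agent socket here ... */
          
              while (mon->running /* && no reply received yet */)
                  pthread_cond_wait(&mon->reply, &mon->lock);
          
              /* With reply handling elided, leaving the loop means the agent
               * went away; with it, we would check which of the two happened. */
              if (!mon->running) {
                  fprintf(stderr, "guest agent disappeared while waiting\n");
                  goto cleanup;
              }
              ret = 0;
          
           cleanup:
              pthread_mutex_unlock(&mon->lock);
              return ret;
          }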
    • qemuDomainShutdownFlags: check for domain activeness prior to guest presence · 21e8fc36
      Committed by Michal Privoznik
      Running shutdown with mode agent on a shutoff domain gives a cryptic
      error message:
      
          virsh # shutdown --mode agent gentoo
          error: Failed to shutdown domain gentoo
          error: Guest agent is not responding: QEMU guest agent is not connected
      
      After this patch, the error is clearer (the reordered checks are
      sketched below):
      
          virsh # shutdown --mode agent gentoo
          error: Failed to shutdown domain gentoo
          error: Requested operation is not valid: domain is not running
      Reported-by: Martin Kletzander <mkletzan@redhat.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
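      
      A minimal illustration of the reordered checks, with stand-in state
      instead of the real qemu driver code (the struct and function names
      here are hypothetical): report that the domain is not running before
      ever looking at the agent, so a shutoff domain no longer produces the
      confusing agent error.
      
          #include <stdbool.h>
          #include <stdio.h>
          
          /* Hypothetical, simplified domain state. */
          typedef struct {
              bool active;           /* is the domain running at all? */
              bool agent_connected;  /* is the guest agent channel up? */
          } domain;
          
          static int domain_shutdown_via_agent(domain *dom)
          {
              /* Check domain liveness first: a shutoff domain can never have
               * a responsive agent, so this is the meaningful error. */
              if (!dom->active) {
                  fprintf(stderr, "Requested operation is not valid: "
                                  "domain is not running\n");
                  return -1;
              }
          
              /* Only now does the agent-specific check make sense. */
              if (!dom->agent_connected) {
                  fprintf(stderr, "QEMU guest agent is not connected\n");
                  return -1;
              }
          
              /* ... issue the guest shutdown command via the agent ... */
              return 0;
          }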
    • lxc: don't up the veth interfaces unless explicitly asked to · c3cf3c43
      Committed by Lubomir Rintel
      Upping an interface for no reason and not configuring it is a cardinal sin.
      
      With the default addrgenmode of eui64 it sticks a link-local address
      on the interface. That is not good, as NetworkManager would see an
      address configured, assume the interface is already configured, and
      not touch it itself; the interface might then stay unconfigured until
      the end of days.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1124721
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
  5. 06 May 2015, 9 commits
    • qemu: multiqueue for ccw devices · 808e771e
      Committed by Boris Fiuczynski
      Allow ccw devices to be used with multiqueue. ccw provides a
      one-to-one relation of fds to queues and does not support the
      vectors option.
      Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
      Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
      Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    • qemu: Resolve Coverity FORWARD_NULL · b8e60f00
      Committed by John Ferlan
      Coverity points out that qemuMonitorGetAllBlockStatsInfo could return
      -1 and thus not fill in 'stats' (leaving it NULL); the subsequent call
      to qemuMonitorBlockStatsUpdateCapacity would then dereference it. The
      pattern of the fix is sketched below.
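      
      The general shape of the fix is the usual one for FORWARD_NULL
      reports; the sketch below uses hypothetical names in place of the
      monitor calls: check the return value before handing the possibly
      NULL result to the next call.
      
          #include <stdbool.h>
          #include <stdio.h>
          #include <stdlib.h>
          
          /* Hypothetical stand-ins for the two monitor calls involved. */
          typedef struct { long long capacity; } block_stats;
          
          /* A query that can fail, returning -1 and leaving *stats NULL. */
          static int get_all_block_stats(block_stats **stats, bool fail)
          {
              if (fail)
                  return -1;
              *stats = calloc(1, sizeof(**stats));
              return *stats ? 0 : -1;
          }
          
          static int collect_stats(bool fail)
          {
              block_stats *stats = NULL;
          
              /* The fix: bail out on failure instead of passing a NULL
               * 'stats' on to the capacity update, which dereferences it. */
              if (get_all_block_stats(&stats, fail) < 0)
                  return -1;
          
              printf("capacity: %lld\n", stats->capacity);
              free(stats);
              return 0;
          }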
    • qemu: Resolve Coverity FORWARD_NULL · 3e4ce359
      Committed by John Ferlan
      Coverity complains about the [n]values pairing in
      virQEMUCapsFreeStringList. Rather than adding a bunch of "if values"
      checks prior to calling, just add the values check inside the free
      function; that avoids the chance that nvalues is > 0 while
      values == NULL (see the sketch below).
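      
      A sketch of the NULL-tolerant free function; the name below is
      illustrative rather than libvirt's: the single check inside the free
      function saves every caller from guarding the call.
      
          #include <stdlib.h>
          
          /* Free a list of nvalues strings plus the list itself; tolerate
           * values == NULL regardless of what nvalues claims. */
          static void free_string_list(char **values, size_t nvalues)
          {
              size_t i;
          
              if (!values)
                  return;
          
              for (i = 0; i < nvalues; i++)
                  free(values[i]);
              free(values);
          }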
    • qemu: Resolve Coverity FORWARD_NULL · e7664eed
      Committed by John Ferlan
      Coverity points out it was possible to get a zero return from
      qemuBuildRNGBackendProps without 'props' being filled in, causing a
      NULL dereference on the next call.
    • xen: Resolve Coverity FORWARD_NULL · c9a8e594
      Committed by John Ferlan
      Coverity found that xenXMConfigCacheAddFile has an error path in
      which no error message was set and -1 was not returned, which could
      have resulted in a NULL dereference in a VIR_DEBUG statement and, of
      course, an erroneous 0 return value. The general shape of such an
      error path is sketched below.
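      
      The general shape of such an error path, with purely illustrative
      names: the failure branch must both report the error and return -1,
      otherwise the caller sees a bogus success and may go on to use data
      that was never filled in.
      
          #include <stdio.h>
          
          /* Illustrative cache-add helper. */
          static int cache_add_file(const char *filename)
          {
              if (!filename) {
                  /* Previously this kind of path neither reported anything
                   * nor returned -1, so callers saw an erroneous 0. */
                  fprintf(stderr, "cache_add_file: no filename given\n");
                  return -1;
              }
          
              /* ... parse the file and add it to the cache ... */
              return 0;
          }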
    • qemu: Resolve Coverity FORWARD_NULL · 75dfbb85
      Committed by John Ferlan
      Coverity notes that ->ifname is used after the VIR_FREE() done in the
      code path following the call to virNetDevMacVLanDeleteWithVPortProfile,
      namely by a call to virNetDevOpenvswitchRemovePort.
      
      Since ->ifname will eventually be VIR_FREE()'d in virDomainNetDefFree,
      just remove the extraneous VIR_FREE() here (the ownership pattern is
      sketched below).
      
      When this code was originally added, the Openvswitch code wasn't
      present and non-NULL checks were made prior to use.
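      
      A sketch of the ownership rule behind the fix, with illustrative
      types: the interface name is freed in exactly one place (the
      definition's own free function), so intermediate teardown code may
      keep using it.
      
          #include <stdlib.h>
          
          /* Illustrative net definition owning its ifname. */
          typedef struct {
              char *ifname;
          } net_def;
          
          static void remove_port(const char *ifname)
          {
              (void)ifname;  /* would remove the switch port named ifname */
          }
          
          static void teardown(net_def *net)
          {
              /* Do NOT free net->ifname here: it is still needed by later
               * calls such as remove_port(), and the definition's free
               * function owns it anyway. */
              remove_port(net->ifname);
          }
          
          static void net_def_free(net_def *net)
          {
              if (!net)
                  return;
              free(net->ifname);  /* the one place ifname is freed */
              free(net);
          }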
    • qemu: Resolve Coverity IDENTICAL_BRANCHES · 9ad32e50
      Committed by John Ferlan
      Coverity complains that in the error paths both the < 0 condition and
      the success path after a qemuDomainObjExitMonitor failure end up
      going to cleanup. So just use ignore_value() in this error path to
      resolve the complaint (see the sketch below).
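      
      A sketch of the pattern; ignore_value() mimics libvirt's macro of the
      same name, while the other names are illustrative: on the error path
      the exit-monitor result cannot change the outcome, so it is discarded
      explicitly rather than checked in two identical branches.
      
          #define ignore_value(x) ((void)(x))
          
          static int enter_monitor(void) { return 0; }   /* stand-in */
          static int run_command(void)   { return -1; }  /* stand-in: fails */
          static int exit_monitor(void)  { return -1; }  /* stand-in: may fail */
          
          static int do_job(void)
          {
              int ret = -1;
          
              if (enter_monitor() < 0)
                  goto cleanup;
          
              if (run_command() < 0) {
                  /* Already failing: whether exit_monitor() succeeds or not
                   * we go to cleanup either way, so checking it would only
                   * create two identical branches. */
                  ignore_value(exit_monitor());
                  goto cleanup;
              }
          
              if (exit_monitor() < 0)
                  goto cleanup;
          
              ret = 0;
          
           cleanup:
              return ret;
          }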
    • vbox: Resolve Coverity RESOURCE_LEAK · 74aab575
      Committed by John Ferlan
      If virStringSearch() returns 0 (zero), each of its callers just jumps
      to cleanup, forgetting to free the returned empty list. Expand the
      scope of each use a bit and free the list at cleanup (sketched below).
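      
      A sketch of the leak and its fix, with simplified stand-ins for the
      search helper: the result list is declared at a scope the cleanup
      label can see, so even a zero-match (but still allocated) list gets
      freed.
      
          #include <stdlib.h>
          
          static void free_string_list(char **list)
          {
              char **p;
          
              if (!list)
                  return;
              for (p = list; *p; p++)
                  free(*p);
              free(list);
          }
          
          /* Stand-in: always allocates a NULL-terminated result list, even
           * when there are zero matches, and returns the match count. */
          static int search(char ***matches)
          {
              *matches = calloc(1, sizeof(char *));
              return *matches ? 0 : -1;
          }
          
          static int use_search(void)
          {
              char **matches = NULL;  /* wide enough scope for cleanup */
              int ret = -1;
          
              if (search(&matches) <= 0)
                  goto cleanup;       /* zero matches is not enough here */
          
              /* ... use matches[0] ... */
              ret = 0;
          
           cleanup:
              free_string_list(matches);  /* frees even the empty list */
              return ret;
          }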
    • libxl: Resolve Coverity RESOURCE_LEAK · 11b91679
      Committed by John Ferlan
      The socks array returned from virNetSocketNewListenTCP needs to be
      VIR_FREE()'d in addition to the Close/Unref on each of the socks[i]
      that is already done (see the sketch below).
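      
      A sketch of the complete cleanup, with an illustrative socket type:
      each element gets its own cleanup (Close/Unref in the real code) and
      the array that held them is freed as well.
      
          #include <stdlib.h>
          
          typedef struct { int fd; } sock;  /* illustrative socket type */
          
          static void cleanup_socks(sock **socks, size_t nsocks)
          {
              size_t i;
          
              if (!socks)
                  return;
          
              for (i = 0; i < nsocks; i++)
                  free(socks[i]);  /* per-element cleanup */
          
              free(socks);         /* the array itself must be freed too */
          }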
  6. 05 May 2015, 7 commits