1. 09 Aug, 2019 1 commit
    • qemu: Pass qemuCaps to qemuDomainDefCopy · a42f8895
      Jiri Denemark authored
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
      make sure it gets the capabilities stored in the domain's private data
      if the domain is running. Passing NULL may cause QEMU capabilities
      probing to be triggered in case the QEMU binary changed in the meantime.
      When this happens while a running domain object is locked, a QMP event
      delivered to the domain before the capabilities probing finishes will
      deadlock the event loop.
      
      This patch fixes all paths leading to qemuDomainDefCopy.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
      a42f8895
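      A minimal sketch of the calling convention this change implies, assuming
      the usual libvirt layout where a running domain caches its probed
      capabilities in priv->qemuCaps; the exact call sites touched by the
      patch may differ:

        /* Sketch only: hand the already-probed capabilities from the domain's
         * private data to qemuDomainDefCopy() instead of NULL, so copying the
         * definition can never trigger a capabilities re-probe while the
         * running domain object is locked. */
        qemuDomainObjPrivatePtr priv = vm->privateData;
        virDomainDefPtr copy = NULL;

        if (!(copy = qemuDomainDefCopy(driver, priv->qemuCaps, vm->def,
                                       VIR_DOMAIN_XML_MIGRATABLE)))
            goto cleanup;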
  2. 29 Jul, 2019 2 commits
    • backup: qemu: Add helper API for looking up node name · e3a4b8f4
      Eric Blake authored
      Qemu bitmap operations require knowing the node name associated with
      the format layer (the qcow2 file); as upcoming patches will be
      grabbing that information frequently, make a helper function to access
      it.
      
      Another potential benefit of this function is that it gives us a single
      place where we could insert a QMP node-name scraping call when -blockdev
      is not supported and we don't currently know the node name; however, the
      goal is that we hopefully never have to do that, because we instead
      scrape node names only at the point where they change.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      e3a4b8f4
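      An illustrative sketch of the kind of lookup such a helper performs,
      assuming -blockdev support where each storage source carries its
      format-layer node name in src->nodeformat; the helper's real name,
      signature, and error handling in the patch may differ:

        /* Hypothetical helper: return the node name of the format layer
         * (e.g. the qcow2 driver node) for a disk, or NULL with an error
         * reported if it is not known. */
        static const char *
        qemuDomainDiskNodeFormatLookup(virDomainObjPtr vm,
                                       const char *diskdst)
        {
            virDomainDiskDefPtr disk;

            if (!(disk = virDomainDiskByName(vm->def, diskdst, false)) ||
                !disk->src->nodeformat) {
                virReportError(VIR_ERR_INTERNAL_ERROR,
                               _("could not find format node name for disk '%s'"),
                               diskdst);
                return NULL;
            }

            return disk->src->nodeformat;
        }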
    • backup: qemu: Implement metadata tracking for checkpoint APIs · 5f4e0796
      Eric Blake authored
      A lot of this work heavily copies from the existing snapshot APIs.
      What's more, this patch is (intentionally) very similar to the
      checkpoint code just added in the test driver, to the point that qemu
      checkpoints are not fully usable in this patch, but it at least
      bisects and builds cleanly.  The separation between patches is done
      because the grunt work of saving and restoring XML and tracking
      relations between checkpoints is common to the test driver, while the
      later patch adding integration with QMP is specific to qemu.
      
      Also note that the interlocking to prevent checkpoints and snapshots
      from existing at the same time will be a separate patch, to make it
      easier to revert that restriction when we finally round out the design
      for supporting interaction between the two concepts.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      5f4e0796
  3. 18 Jul, 2019 6 commits
  4. 03 Jul, 2019 1 commit
  5. 21 Jun, 2019 2 commits
  6. 20 Jun, 2019 1 commit
  7. 19 Jun, 2019 1 commit
  8. 09 May, 2019 3 commits
  9. 10 Apr, 2019 1 commit
  10. 04 Apr, 2019 2 commits
  11. 03 Apr, 2019 1 commit
  12. 27 Mar, 2019 1 commit
  13. 22 Mar, 2019 1 commit
  14. 15 Mar, 2019 1 commit
    • qemu_hotplug: Fix a rare race condition when detaching a device twice · c2bc4191
      Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1623389
      
      If a device is detached twice from the same domain the following
      race condition may happen:
      
      1) The first DetachDevice() call will issue "device_del" on the qemu
      monitor, but since the DEVICE_DELETED event did not arrive in
      time, the API ends up claiming "Device detach request sent
      successfully".
      
      2) The second DetachDevice() therefore still finds the device in
      the domain and thus proceeds to detach it again. It calls
      EnterMonitor() and qemuMonitorSend() trying to issue the "device_del"
      command again. This releases both the domain lock and the monitor
      lock.
      
      3) At this point, qemu sends us the DEVICE_DELETED event, which is
      handled by the event loop and ends up calling
      qemuDomainSignalDeviceRemoval() to determine who is going to
      remove the device from the domain definition: either the caller
      that marked the device for removal, or the event processing
      thread.
      
      4) Because the device was marked for removal,
      qemuDomainSignalDeviceRemoval() returns true, which means the
      event is to be processed by the thread that marked the device
      for removal (and is currently still trying to issue the
      "device_del" command).
      
      5) The thread finally issues the "device_del" command, which
      (obviously) fails, and therefore it calls
      qemuDomainResetDeviceRemoval() to reset the device marking and
      quits immediately afterwards, NOT removing any device from the
      domain definition.
      
      At this point, the device is still present in the domain
      definition but doesn't exist in qemu anymore. Worse, there is no
      way to remove it from the domain definition.
      
      The solution is to note down that we have seen the event and, if the
      second "device_del" fails, not take it as a failure but carry on
      with the usual execution.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      ACKed-by: Peter Krempa <pkrempa@redhat.com>
      c2bc4191
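      A minimal sketch of the idea in the last paragraph, with assumed field
      names ('deletedEventSeen' is a stand-in for whatever flag the real patch
      records); the point is only that a failed repeat "device_del" is ignored
      once DEVICE_DELETED has already been seen:

        rc = qemuMonitorDelDevice(priv->mon, info->alias);
        if (rc < 0) {
            if (priv->unplug.deletedEventSeen) {
                /* qemu already removed the device; the failure of the second
                 * "device_del" is expected, so clear it and carry on with the
                 * normal removal processing. */
                virResetLastError();
                rc = 0;
            } else {
                goto cleanup;
            }
        }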
  15. 12 Mar, 2019 1 commit
  16. 08 Mar, 2019 1 commit
  17. 08 Feb, 2019 5 commits
  18. 04 Feb, 2019 1 commit
  19. 31 Jan, 2019 2 commits
  20. 22 Jan, 2019 2 commits
  21. 18 Jan, 2019 1 commit
  22. 09 Jan, 2019 1 commit
    • qemu: Process RDMA GID state change event · ed357cef
      Yuval Shaia authored
      This event is emitted on the monitor when the GID table of a pvrdma
      device is modified and the change needs to be propagated to the backend
      RDMA device's GID table.
      
      The RDMA device's GID table is controlled by updating the device's
      Ethernet function addresses.
      Usually the first GID entry is determined by the MAC address, the second
      by the first IPv6 address and the third by the IPv4 address. Other
      entries can be added by adding more IP addresses. The reverse holds as
      well, i.e. whenever an address is removed, the corresponding GID entry
      is removed.
      
      The process is done by the network and RDMA stacks. Whenever an address
      is added, the ib_core driver is notified and calls the device driver's
      add_gid function, which in turn updates the device.
      
      To support this in the pvrdma device we need to hook into the create_bind
      and destroy_bind HW commands triggered by the pvrdma driver in the guest.
      Whenever a change is made to the pvrdma device's GID table, a special
      QMP message is sent to be processed by libvirt, which updates the address
      of the backend Ethernet device.
      Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      ed357cef
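      A conceptual sketch only, with assumed handler and parameter names (not
      the actual code from this patch): on such an event libvirt adds or
      removes the matching IP address on the backend Ethernet device, and the
      kernel network/RDMA stacks then regenerate the GID entry as described
      above:

        /* Assumed names throughout; the real event handler differs. */
        static void
        handleRdmaGidStatusChanged(const char *ifname,   /* backend netdev */
                                   virSocketAddr *addr,  /* derived from GID */
                                   unsigned int prefix,
                                   bool gidNowPresent)
        {
            if (gidNowPresent)
                ignore_value(virNetDevIPAddrAdd(ifname, addr, NULL, prefix));
            else
                ignore_value(virNetDevIPAddrDel(ifname, addr, prefix));
        }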
  23. 21 Dec, 2018 1 commit
    • qemu: don't log error for missing optional storage sources on stats · 318d807a
      Nikolay Shirokovskiy authored
      Every time we query all domain stats for an inactive domain with an
      unavailable storage source we get an error message in the logs [1]. It's
      a bit noisy. While it's arguable whether we need such a message for
      mandatory disks, we would like not to see messages for optional disks.
      Let's filter at least the cases of local files. Fixing other cases would
      require passing a flag down the stack to the storage .backendInit, which
      is ugly.
      
      Stats for an active domain are fine because we either drop disks
      with unavailable sources or clear the source, which is handled
      by virStorageSourceIsEmpty in qemuDomainGetStatsOneBlockFallback.
      
      We have had these logs for successful stats since 25aa7035 (version
      1.2.15), which in turn fixes 596a1371 (version 1.2.12), which added
      substantial stats for offline disks.
      
      [1] error message example:
      qemuOpenFileAs:3324 : Failed to open file '/path/to/optional/disk': No such file or directory
      Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
      318d807a
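      A sketch of the kind of local-file filter this describes, using assumed
      surrounding context (iterating over disks while collecting stats for an
      inactive domain); the exact placement and condition in the patch may
      differ:

        /* Assumed context: quietly skip an optional disk whose local file is
         * missing, instead of letting qemuOpenFileAs() log an error for it. */
        if (disk->startupPolicy == VIR_DOMAIN_STARTUP_POLICY_OPTIONAL &&
            virStorageSourceIsLocalStorage(disk->src) &&
            disk->src->path &&
            !virFileExists(disk->src->path))
            continue;  /* report no stats for it, but no error either */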
  24. 14 Dec, 2018 1 commit