1. 06 Sep 2019, 3 commits
  2. 29 Aug 2019, 1 commit
  3. 21 Aug 2019, 1 commit
  4. 09 Aug 2019, 2 commits
    • qemu: Pass qemuCaps to qemuDomainDefFormatBufInternal · 900c5952
      Jiri Denemark authored
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
      make sure it gets the capabilities stored in the domain's private data
      if the domain is running. Passing NULL may trigger QEMU capabilities
      probing if the QEMU binary changed in the meantime. When this happens
      while a running domain object is locked, a QMP event delivered to the
      domain before the probing finishes will deadlock the event loop.
      
      This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
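      A minimal sketch of the pattern this commit describes, assuming
      hypothetical names (exampleCaps, examplePrivateData and exampleFormatDef
      are illustrative placeholders, not libvirt API): a caller formatting a
      running domain's definition hands over the capabilities cached in the
      domain's private data instead of NULL, so no capability probe can be
      triggered while the domain object is locked.

        /* Hedged sketch: prefer the qemuCaps already cached for a running
         * domain over re-probing the QEMU binary. All names here are
         * placeholders, not the real libvirt symbols. */
        typedef struct _exampleCaps exampleCaps;

        typedef struct {
            exampleCaps *qemuCaps;   /* capabilities probed at domain start */
            int active;              /* non-zero while the domain is running */
        } examplePrivateData;

        /* Formatting a definition runs the post-parse callback, which needs
         * qemuCaps; for running domains the cached object is passed through. */
        static int
        exampleFormatDef(examplePrivateData *priv, exampleCaps *freshCaps)
        {
            exampleCaps *caps = priv->active ? priv->qemuCaps : freshCaps;

            if (!caps)
                return -1;           /* would otherwise force a capability probe */

            /* ... format the definition using 'caps' ... */
            return 0;
        }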
    • qemu: Pass qemuCaps to qemuDomainDefCopy · a42f8895
      Jiri Denemark authored
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
      make sure it gets the capabilities stored in the domain's private data
      if the domain is running. Passing NULL may trigger QEMU capabilities
      probing if the QEMU binary changed in the meantime. When this happens
      while a running domain object is locked, a QMP event delivered to the
      domain before the probing finishes will deadlock the event loop.
      
      This patch fixes all paths leading to qemuDomainDefCopy.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
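      The copy path follows the same rule. A hedged illustration, again with
      placeholder names rather than the real libvirt functions: copying a
      definition amounts to formatting it to XML and parsing it back, and the
      cached capabilities are threaded through both steps so the post-parse
      callback never has to probe.

        /* Hedged sketch with placeholder names: copy = format + re-parse,
         * and both steps receive the cached capabilities instead of NULL. */
        #include <stdlib.h>
        #include <string.h>

        typedef struct { int unused; } exampleCaps;  /* stand-in, not virQEMUCaps */
        typedef struct { char *xml; } exampleDef;

        /* Trivial stand-ins for the real format/parse helpers. */
        static char *
        exampleFormat(const exampleDef *def, exampleCaps *caps)
        {
            return (caps && def->xml) ? strdup(def->xml) : NULL;
        }

        static exampleDef *
        exampleParse(const char *xml, exampleCaps *caps)
        {
            exampleDef *def;

            if (!caps || !(def = calloc(1, sizeof(*def))))
                return NULL;         /* NULL caps would mean "probe now" */
            def->xml = strdup(xml);  /* the post-parse callback would use caps here */
            return def;
        }

        static exampleDef *
        exampleDefCopy(const exampleDef *src, exampleCaps *cachedCaps)
        {
            exampleDef *copy = NULL;
            char *xml = exampleFormat(src, cachedCaps);

            if (xml) {
                copy = exampleParse(xml, cachedCaps);
                free(xml);
            }
            return copy;
        }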
  5. 29 Jul 2019, 2 commits
    • backup: qemu: Add helper API for looking up node name · e3a4b8f4
      Eric Blake authored
      QEMU bitmap operations require knowing the node name associated with
      the format layer (the qcow2 file); as upcoming patches will need that
      information frequently, add a helper function to access it.
      
      Another potential benefit of this helper is that it gives us a single
      place where we could insert a QMP node-name scraping call when the node
      name is not currently known (i.e. when -blockdev is not supported);
      however, the goal is to never have to do that, because node names are
      instead scraped only at the point where they change.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
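      A hedged sketch of what such an accessor could look like, using
      placeholder structures (exampleDiskDef and exampleDiskSource are not the
      real libvirt types): the helper returns the format-layer node name when
      it is known, and NULL when it is not, e.g. when -blockdev is not in use
      and node names were never scraped.

        /* Placeholder structures for illustration only. */
        #include <stddef.h>

        typedef struct {
            char *nodeformat;    /* node name of the format layer (qcow2) */
            char *nodestorage;   /* node name of the protocol/storage layer */
        } exampleDiskSource;

        typedef struct {
            const char *dst;     /* disk target, e.g. "vda" */
            exampleDiskSource *src;
        } exampleDiskDef;

        /* Single place for bitmap-related callers to fetch the node name. */
        static const char *
        exampleDiskGetFormatNodename(const exampleDiskDef *disk)
        {
            if (!disk->src)
                return NULL;
            return disk->src->nodeformat;   /* may be NULL if never scraped */
        }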
    • backup: qemu: Implement metadata tracking for checkpoint APIs · 5f4e0796
      Eric Blake authored
      Much of this work borrows heavily from the existing snapshot APIs.
      What's more, this patch is (intentionally) very similar to the
      checkpoint code just added to the test driver, to the point that qemu
      checkpoints are not fully usable with this patch alone, but it at least
      bisects and builds cleanly.  The work is split between patches because
      the grunt work of saving and restoring XML and tracking relations
      between checkpoints is shared with the test driver, while the later
      patch adding QMP integration is specific to qemu.
      
      Also note that the interlocking to prevent checkpoints and snapshots
      from existing at the same time will be a separate patch, to make it
      easier to revert that restriction when we finally round out the design
      for supporting interaction between the two concepts.
      Signed-off-by: Eric Blake <eblake@redhat.com>
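      As a rough illustration of the "tracking relations between checkpoints"
      bookkeeping mentioned above, a hedged sketch with invented types
      (exampleCheckpoint is not the libvirt structure): checkpoints form a
      parent/child tree, much like snapshot metadata.

        /* Invented type for illustration; not the real libvirt metadata. */
        typedef struct exampleCheckpoint exampleCheckpoint;

        struct exampleCheckpoint {
            char *name;
            exampleCheckpoint *parent;        /* NULL for a root checkpoint */
            exampleCheckpoint *first_child;   /* children kept as a linked list */
            exampleCheckpoint *next_sibling;
        };

        /* Attach a freshly defined checkpoint under its parent (if any). */
        static void
        exampleCheckpointAddChild(exampleCheckpoint *parent, exampleCheckpoint *chk)
        {
            chk->parent = parent;
            if (parent) {
                chk->next_sibling = parent->first_child;
                parent->first_child = chk;
            }
        }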
  6. 18 Jul 2019, 6 commits
  7. 03 Jul 2019, 1 commit
  8. 21 Jun 2019, 2 commits
  9. 20 Jun 2019, 1 commit
  10. 19 Jun 2019, 1 commit
  11. 09 May 2019, 3 commits
  12. 10 Apr 2019, 1 commit
  13. 04 Apr 2019, 2 commits
  14. 03 Apr 2019, 1 commit
  15. 27 Mar 2019, 1 commit
  16. 22 Mar 2019, 1 commit
  17. 15 Mar 2019, 1 commit
    • qemu_hotplug: Fix a rare race condition when detaching a device twice · c2bc4191
      Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1623389
      
      If a device is detached twice from the same domain, the following
      race condition may occur:
      
      1) The first DetachDevice() call issues "device_del" on the qemu
      monitor, but since the DEVICE_DELETED event does not arrive in
      time, the API ends up claiming "Device detach request sent
      successfully".
      
      2) The second DetachDevice() therefore still finds the device in
      the domain and proceeds to detach it again. It calls
      EnterMonitor() and qemuMonitorSend() to issue the "device_del"
      command again, which releases both the domain lock and the
      monitor lock.
      
      3) At this point, qemu sends us the DEVICE_DELETED event, which is
      handled by the event loop and ends up calling
      qemuDomainSignalDeviceRemoval() to determine who is going to
      remove the device from the domain definition: either the caller
      that marked the device for removal, or the event processing
      thread.
      
      4) Because the device was marked for removal,
      qemuDomainSignalDeviceRemoval() returns true, which means the
      event is to be processed by the thread that marked the device
      for removal (and is currently still trying to issue the
      "device_del" command).
      
      5) The thread finally issues the "device_del" command, which
      (obviously) fails, so it calls qemuDomainResetDeviceRemoval() to
      reset the device marking and quits immediately afterwards, NOT
      removing the device from the domain definition.
      
      At this point, the device is still present in the domain
      definition but doesn't exist in qemu anymore. Worse, there is no
      way to remove it from the domain definition.
      
      The solution is to note down that we have seen the event and, if
      the second "device_del" fails, not treat it as a failure but carry
      on with the usual execution.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      ACKed-by: Peter Krempa <pkrempa@redhat.com>
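      A minimal sketch of the fix described above, with invented names
      (exampleDeviceRemoval and the two helpers are placeholders, not the
      libvirt functions): the event handler records that DEVICE_DELETED has
      already arrived, and a subsequent failure of "device_del" is then
      tolerated instead of aborting the removal.

        /* Hedged sketch of the pattern: remember whether DEVICE_DELETED was
         * already seen for the device being detached; if the follow-up
         * "device_del" then fails, carry on and remove the device from the
         * definition anyway. Names are illustrative only. */
        #include <stdbool.h>

        typedef struct {
            const char *alias;   /* device alias being detached */
            bool eventSeen;      /* set when DEVICE_DELETED is handled */
        } exampleDeviceRemoval;

        /* Called from the event loop when qemu reports DEVICE_DELETED. */
        static void
        exampleSignalDeviceRemoval(exampleDeviceRemoval *rm)
        {
            rm->eventSeen = true;
        }

        /* Called by the detaching thread after issuing "device_del". */
        static int
        exampleHandleDeviceDelReply(exampleDeviceRemoval *rm, int monitorRet)
        {
            if (monitorRet < 0 && !rm->eventSeen)
                return -1;       /* genuine failure: device is still present */

            /* Either device_del succeeded, or it failed only because the
             * device is already gone (the event raced ahead of us); proceed
             * with removing the device from the domain definition. */
            return 0;
        }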
  18. 12 Mar 2019, 1 commit
  19. 08 Mar 2019, 1 commit
  20. 08 Feb 2019, 5 commits
  21. 04 Feb 2019, 1 commit
  22. 31 Jan 2019, 2 commits