1. 06 Sep 2019 (5 commits)
  2. 19 Aug 2019 (1 commit)
  3. 09 Aug 2019 (3 commits)
    • qemu: Pass correct qemuCaps to virDomainDefPostParse · c90fb5a8
      Jiri Denemark committed
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need
      to make sure it gets the capabilities stored in the domain's private
      data if the domain is running. Passing NULL may cause QEMU
      capabilities probing to be triggered in case the QEMU binary changed
      in the meantime. When this happens while a running domain object is
      locked, a QMP event delivered to the domain before QEMU capabilities
      probing finishes will deadlock the event loop.
      
      This patch fixes all paths leading to virDomainDefPostParse.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    • qemu: Pass correct qemuCaps to virDomainDefCopy · bbcfa07b
      Jiri Denemark committed
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need
      to make sure it gets the capabilities stored in the domain's private
      data if the domain is running. Passing NULL may cause QEMU
      capabilities probing to be triggered in case the QEMU binary changed
      in the meantime. When this happens while a running domain object is
      locked, a QMP event delivered to the domain before QEMU capabilities
      probing finishes will deadlock the event loop.
      
      Several general functions from domain_conf.c were lazily passing NULL as
      the parseOpaque pointer instead of letting their callers pass the right
      data. This patch fixes all paths leading to virDomainDefCopy to do the
      right thing.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    • qemu: Pass qemuCaps to qemuDomainDefFormatBufInternal · 900c5952
      Jiri Denemark committed
      Since the qemuDomainDefPostParse callback requires qemuCaps, we need
      to make sure it gets the capabilities stored in the domain's private
      data if the domain is running. Passing NULL may cause QEMU
      capabilities probing to be triggered in case the QEMU binary changed
      in the meantime. When this happens while a running domain object is
      locked, a QMP event delivered to the domain before QEMU capabilities
      probing finishes will deadlock the event loop.
      
      This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
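      The three commits above apply one shared pattern at different call
      sites. A minimal sketch of that pattern, assuming illustrative
      (non-libvirt) type and helper names such as domainCapsForPostParse:

      ```c
      #include <stdbool.h>

      /* Stand-ins for libvirt's internal types; names are illustrative only. */
      typedef struct { int unused; } virQEMUCaps;

      typedef struct {
          virQEMUCaps *qemuCaps;          /* caps probed when QEMU was started */
      } qemuDomainPrivate;

      typedef struct {
          bool active;                    /* true while the domain is running */
          qemuDomainPrivate *privateData;
      } virDomainObj;

      /* For a running domain, post-parse/copy/format code must receive the
       * capabilities cached in the domain's private data.  Passing NULL would
       * let the post-parse callback re-probe the QEMU binary, which can
       * deadlock the event loop if a QMP event arrives while the domain
       * object is locked. */
      static virQEMUCaps *
      domainCapsForPostParse(virDomainObj *vm)
      {
          if (vm->active && vm->privateData)
              return vm->privateData->qemuCaps;
          return NULL;                    /* offline: probing at parse time is fine */
      }

      int
      main(void)
      {
          virQEMUCaps caps = { 0 };
          qemuDomainPrivate priv = { &caps };
          virDomainObj vm = { true, &priv };
          return domainCapsForPostParse(&vm) ? 0 : 1;
      }
      ```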
  4. 06 Aug 2019 (1 commit)
    • util, conf: Handle default monitor group of an allocation properly · 816cef07
      Wang Huaqiang committed
      The 'default monitor group of an allocation' is the resctrl monitor
      group that the resctrl file system creates automatically along with a
      resctrl allocation. If the monitor group specified in the domain
      configuration file happens to be the default monitor group of an
      allocation, there is no need to create it, since it already exists.
      But if a monitor group is not an allocation's default group, it must
      be created under '/sys/fs/resctrl/mon_groups' and the vCPU PIDs
      written to its 'tasks' file.
      Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
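      A minimal sketch of the non-default path described above, assuming a
      mounted resctrl file system and sufficient privileges;
      resctrlCreateMonGroup is a made-up helper name and error handling is
      reduced to the essentials:

      ```c
      #include <limits.h>
      #include <stdio.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <unistd.h>

      /* Create a monitoring-only group under /sys/fs/resctrl/mon_groups and
       * put the given vCPU PIDs into its 'tasks' file.  A group that is the
       * default monitor of an allocation already exists inside the
       * allocation's own directory, so this must not be called for that case. */
      int
      resctrlCreateMonGroup(const char *name, const pid_t *pids, size_t npids)
      {
          char path[PATH_MAX];
          FILE *fp;
          size_t i;

          snprintf(path, sizeof(path), "/sys/fs/resctrl/mon_groups/%s", name);
          if (mkdir(path, 0755) < 0)      /* the kernel populates mon_data/ etc. */
              return -1;

          snprintf(path, sizeof(path), "/sys/fs/resctrl/mon_groups/%s/tasks", name);
          if (!(fp = fopen(path, "w")))
              return -1;
          for (i = 0; i < npids; i++) {
              fprintf(fp, "%ld\n", (long)pids[i]);
              fflush(fp);                 /* resctrl wants one PID per write() */
          }
          fclose(fp);
          return 0;
      }

      int
      main(void)
      {
          /* Example only: move the calling process into hypothetical group "vm1". */
          pid_t self[] = { getpid() };
          return resctrlCreateMonGroup("vm1", self, 1) == 0 ? 0 : 1;
      }
      ```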
  5. 30 Jul 2019 (1 commit)
  6. 29 Jul 2019 (1 commit)
    • qemu: Fix hyperv features with QEMU 4.1 · 0ccdd476
      Jiri Denemark committed
      Originally the names of the hyperv CPU features were only used
      internally for looking up their CPUID bits, so we used a "__kvm_hv_"
      prefix for them to make sure the names do not collide with normal CPU
      features stored in our CPU map.
      
      But with QEMU 4.1 we check which features were enabled or disabled by
      a freshly started QEMU process using their names rather than their
      CPUID bits (mostly because of MSR features). Thus we need to change
      our made-up internal names into the actual names used by QEMU. Most of
      the names are only used with QEMU 4.1 and newer, and the rest were
      introduced to QEMU recently enough to already support the spelling
      with "-". Thus we don't need to define them as "hv_*" with a
      translation to "hv-*" for new QEMU.
      
      Without this patch libvirt would mistakenly report all hyperv features
      as unavailable and refuse to start any domain using them with QEMU 4.1.
      Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
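      An illustrative sketch of the renaming, using a small, hypothetical
      subset of the Hyper-V enlightenment names; hypervQemuName and the
      table layout are not libvirt's actual code:

      ```c
      #include <string.h>

      /* Internal placeholder names become the names QEMU itself accepts and
       * reports, so matching enabled/disabled features by name works with
       * the reply of a freshly started QEMU 4.1. */
      static const struct {
          const char *internalName;   /* old made-up spelling */
          const char *qemuName;       /* spelling QEMU uses */
      } hypervNames[] = {
          { "__kvm_hv_relaxed",   "hv-relaxed" },
          { "__kvm_hv_vapic",     "hv-vapic" },
          { "__kvm_hv_spinlocks", "hv-spinlocks" },
          { "__kvm_hv_vpindex",   "hv-vpindex" },
      };

      /* Translate an internal name to the QEMU spelling; NULL if unknown. */
      const char *
      hypervQemuName(const char *internal)
      {
          size_t i;
          for (i = 0; i < sizeof(hypervNames) / sizeof(hypervNames[0]); i++) {
              if (strcmp(hypervNames[i].internalName, internal) == 0)
                  return hypervNames[i].qemuName;
          }
          return NULL;
      }
      ```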
  7. 27 Jul 2019 (2 commits)
    • tpm: Run swtpm_setup with fewer parameters on incoming migration · 72299db6
      Stefan Berger committed
      In case of an incoming migration we do not need to run swtpm_setup
      with all the parameters; we only want the benefit of it creating a TPM
      state file for us that we can then label with an SELinux label. The
      actual state will be overwritten by the incoming state, so we have to
      pass an indicator for incomingMigration all the way down to the
      command-line parameter generation for swtpm_setup.
      Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
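      A sketch of threading the indicator down to the argument builder; the
      helper names and the exact swtpm_setup options shown here are
      illustrative placeholders, not necessarily the flags libvirt passes:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Stub standing in for libvirt's command-line builder; prints instead. */
      static void
      addArg(const char *arg)
      {
          printf("swtpm_setup %s\n", arg);
      }

      /* Thread the incoming-migration indicator down to the argument builder.
       * On incoming migration only the state file has to exist (so it can get
       * its SELinux label); the migrated state overwrites its content anyway,
       * so the full provisioning options are skipped. */
      static void
      buildSwtpmSetupArgs(bool incomingMigration)
      {
          if (incomingMigration) {
              addArg("--not-overwrite");      /* keep/create state, nothing more */
              return;
          }
          /* first-time provisioning (illustrative subset of the real options) */
          addArg("--create-ek-cert");
          addArg("--create-platform-cert");
      }

      int
      main(void)
      {
          buildSwtpmSetupArgs(true);   /* incoming migration: minimal invocation */
          return 0;
      }
      ```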
    • backup: qemu: Detect node names at domain startup · c82abfde
      Eric Blake committed
      If we are using -blockdev, then node names are always available
      (because we set them).  But when not using it, we have to scrape node
      names from QMP, and want to do so as infrequently as possible.  We
      were scraping node names after reconnecting a new libvirtd to an
      existing guest (see qemuProcessReconnect), and after any block job
      that may have changed the set of node names we care about (legacy
      block jobs), but forgot to scrape the names when first starting a
      guest.  Do so now in order to allow the checkpoint code to always have
      access to a node name without having to repeat a node name scrape
      itself.
      
      Future patches may need to clean up qemuDomainSetBlockThreshold (if
      node names are always available, then it doesn't need to repeat a
      scrape) and/or hotplug and media changes (if the addition of new nodes
      can result in a null node name, then scraping at that point in time
      would be appropriate).  But for now, this patch addresses only the
      most common instance of a missing node name.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
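      A minimal sketch of the startup-path fix, under stated assumptions:
      the stubbed scrapeNodeNamesFromQMP stands in for the QMP query
      (query-named-block-nodes) and processLaunchFinish for the startup
      code path; neither is libvirt's exact function:

      ```c
      #include <stdbool.h>

      /* Stubs; the real libvirt types and signatures are richer. */
      typedef struct { bool blockdevInUse; } qemuDomainPrivate;
      typedef struct { qemuDomainPrivate *privateData; } virDomainObj;

      static int
      scrapeNodeNamesFromQMP(virDomainObj *vm)
      {
          (void)vm;   /* would issue query-named-block-nodes and store the names */
          return 0;
      }

      /* Mirror the reconnect path at startup: with -blockdev libvirt assigns
       * the node names itself, but without it the names are QEMU's choice and
       * must be queried once the process is up, so later checkpoint code can
       * rely on them without doing a scrape of its own. */
      static int
      processLaunchFinish(virDomainObj *vm)
      {
          if (!vm->privateData->blockdevInUse)
              return scrapeNodeNamesFromQMP(vm);
          return 0;
      }

      int
      main(void)
      {
          qemuDomainPrivate priv = { false };
          virDomainObj vm = { &priv };
          return processLaunchFinish(&vm);
      }
      ```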
  8. 19 Jul 2019 (1 commit)
    • qemu: process: Don't use qemuBlockJobStartupFinalize in qemuProcessHandleBlockJob · 00c4c971
      Peter Krempa committed
      The block job event handler qemuProcessHandleBlockJob looks at the
      block job data to see whether the job requires synchronous handling.
      Since the block job event may arrive before we continue the job
      handling (if the job has no data to copy), we could hit a state where
      the job is still set as QEMU_BLOCKJOB_STATE_NEW (we move it to
      QEMU_BLOCKJOB_STATE_RUNNING only after returning from the monitor).
      
      If the event handler used qemuBlockJobStartupFinalize, it would
      unregister and free the job. Thankfully this is not a big problem for
      legacy block jobs, as we don't need much data for them, but since we'd
      re-instantiate the job data structure, we'd report the wrong job type
      for an active commit, because QEMU reports it as a regular commit job.
      
      Fix it by not using the qemuBlockJobStartupFinalize function in
      qemuProcessHandleBlockJob, as it is not starting the job anyway.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1721375
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
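      A minimal sketch of the fixed handler logic, with made-up type and
      function names standing in for libvirt's block job machinery:

      ```c
      #include <stdbool.h>

      typedef enum {
          JOB_STATE_NEW,        /* registered; monitor call not yet finished */
          JOB_STATE_RUNNING,    /* starting thread confirmed the job */
      } jobState;

      typedef struct {
          jobState state;
          bool synchronous;     /* a thread is waiting to consume this event */
      } blockJob;

      /* Record or process the event, but never finalize here: calling a
       * startup-finalize helper on a job still in JOB_STATE_NEW would
       * unregister and free it while the thread that started it still holds
       * a pointer to it. */
      static void
      handleBlockJobEvent(blockJob *job)
      {
          if (job->synchronous) {
              /* stash the new state and wake the waiting thread */
          } else {
              /* handle the legacy (non-blockdev) job event directly */
          }
          /* intentionally no startup-finalize call: the starting thread moves
           * the job NEW -> RUNNING and owns the cleanup on failure */
      }

      int
      main(void)
      {
          blockJob job = { JOB_STATE_NEW, true };
          handleBlockJobEvent(&job);
          return 0;
      }
      ```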
  9. 18 Jul 2019 (7 commits)
  10. 21 Jun 2019 (2 commits)
  11. 20 Jun 2019 (11 commits)
  12. 14 Jun 2019 (1 commit)
    • qemu: Try harder to remove pr-helper object and kill pr-helper process · 7a232286
      Jie Wang committed
      If libvirt receives a DISCONNECTED event and prDaemonRunning is set to
      false while qemuDomainRemoveDiskDevice() is executing, then
      qemuDomainRemoveDiskDevice() will fail to remove the pr-helper object
      because prDaemonRunning is false. But removing that check from
      qemuHotplugRemoveManagedPR() is not enough, because after the object
      is removed through the monitor, qemuProcessKillManagedPRDaemon() is
      called, which contains the same check. Thus the pr-helper process
      might be left behind.
      Signed-off-by: Jie Wang <wangjie88@huawei.com>
      Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
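      A sketch of the unconditional cleanup the commit aims for;
      removeManagedPR and the stubbed helpers are illustrative, not
      libvirt's actual functions:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct {
          bool prDaemonRunning;   /* can flip to false on a DISCONNECTED event */
      } domainPrivate;

      /* Stubs; the real code talks QMP and sends signals. */
      static void monitorDelPRObject(void) { puts("object-del pr-helper"); }
      static void killPRDaemonProcess(void) { puts("kill qemu-pr-helper"); }

      /* Attempt both cleanup steps unconditionally instead of gating each on
       * prDaemonRunning, which may already be false if the helper dropped its
       * connection while the disk was being unplugged. */
      static void
      removeManagedPR(domainPrivate *priv)
      {
          /* before the fix: if (!priv->prDaemonRunning) return;  -- racy exit */
          monitorDelPRObject();       /* ignore "not found": it may be gone */
          killPRDaemonProcess();      /* likewise run even if flagged stopped */
          priv->prDaemonRunning = false;
      }

      int
      main(void)
      {
          domainPrivate priv = { false };
          removeManagedPR(&priv);
          return 0;
      }
      ```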
  13. 12 Jun 2019 (1 commit)
    • qemu: Use proper block job name when reconnecting to VM · 56c6893f
      Peter Krempa committed
      The hash table returned by qemuMonitorGetAllBlockJobInfo is organized
      by the frontend name (which skips the 'drive-' prefix). While our code
      properly matches the jobs to the disks, QEMU needs the full job name,
      including the 'drive-' prefix, to be able to identify jobs.
      
      Fix this by adding an argument to qemuMonitorGetAllBlockJobInfo which
      makes it fill the hash without modifying the job names.
      
      This fixes a regression where users could not cancel/pivot block jobs
      after restarting libvirtd while a block job is running.
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
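      A small sketch of the distinction the new argument draws, with a
      hypothetical rawjobname flag (the parameter name mirrors the idea,
      not necessarily libvirt's spelling):

      ```c
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      /* The stripped ("frontend") name is only good for matching a job to a
       * disk; anything sent back to QEMU must use the full job name. */
      static const char *
      blockJobName(const char *qemuJobName, bool rawjobname)
      {
          if (!rawjobname && strncmp(qemuJobName, "drive-", 6) == 0)
              return qemuJobName + 6;   /* frontend name, for disk matching */
          return qemuJobName;           /* full name, usable in QEMU commands */
      }

      int
      main(void)
      {
          const char *job = "drive-virtio-disk0";
          printf("match key: %s\n", blockJobName(job, false));
          printf("qemu name: %s\n", blockJobName(job, true));
          return 0;
      }
      ```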
  14. 06 Jun 2019 (1 commit)
  15. 04 Jun 2019 (2 commits)