1. 14 Dec 2018, 2 commits
  2. 05 Dec 2018, 1 commit
  3. 03 Dec 2018, 2 commits
  4. 29 Nov 2018, 1 commit
    • qemu: Fix post-copy migration on the source · eca9d21e
      Authored by Jiri Denemark
      Post-copy migration has been broken on the source since commit
      v3.8.0-245-g32c29f10 which implemented support for
      pause-before-switchover QEMU migration capability.
      
      Even though the migration itself went well, the source did not
      really know when it switched to the post-copy mode despite the
      messages logged by the MIGRATION event handler. As a result, the
      events emitted by the source libvirtd were not accurate and the
      statistics of the completed migration would cover only the pre-copy
      part. Moreover, if migration failed during the post-copy phase for
      some reason, the source libvirtd would just happily resume the
      domain, which could lead to disk corruption.
      
      With the pause-before-switchover capability enabled, the order of events
      emitted by QEMU changed:
      
                          pause-before-switchover
                 disabled                        enabled
          MIGRATION, postcopy-active      STOP
          STOP                            MIGRATION, pre-switchover
                                          MIGRATION, postcopy-active
      
      The STOP event handler checks the migration status (postcopy-active)
      and sets the domain state accordingly. This is sufficient when
      pause-before-switchover is disabled, but once we enable it, the
      migration status is still active when we get STOP from QEMU. Thus the
      domain state set in the STOP handler has to be corrected once we are
      notified that migration changed to postcopy-active.
      
      This results in two SUSPENDED events being emitted by the source
      libvirtd during post-copy migration. The first one carries the
      VIR_DOMAIN_EVENT_SUSPENDED_MIGRATED detail, while the second one
      reports the corrected VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY detail.
      This is inevitable because we don't know whether migration will
      eventually switch to post-copy at the time we emit the first event.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1647365
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
  5. 16 Nov 2018, 7 commits
    • qemu_domain: Track if domain remembers original owner · 7a44ffa6
      Authored by Michal Privoznik
      For metadata locking we might need an extra fork(), which, given
      the recent effort to do fewer fork()-s, is suboptimal. Therefore,
      there will be a qemu.conf knob to {en|dis}able this feature. But
      since the feature is not metadata locking itself but rather
      remembering the original owner of the file, it is named
      'rememberOwner'. The patches for that feature are not posted yet,
      so there is no qemu.conf entry in this patch, nor a way to enable
      the feature.
      
      Even though this is effectively dead code for now, it is still
      desired.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_tpm: Pass virDomainObjPtr instead of virDomainDefPtr · 592ed505
      Authored by Michal Privoznik
      The TPM code currently accepts a pointer to a domain definition.
      This is okay for now, but in the near future the security driver
      APIs it calls will require the domain object. Therefore, change the
      TPM code to accept the domain object pointer.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_process.c: removing qemuProcessStartValidateXML · 91afd53c
      Authored by Daniel Henrique Barboza
      Commit ("qemu_domain.c: moving maxCpu validation to
      qemuDomainDefValidate") shortened the code of qemuProcessStartValidateXML.
      The function is called only by qemuProcessStartValidate, in the
      same file, and its code is now a single check that calls virDomainDefValidate.
      
      Instead of leaving a function call just to execute a single check,
      this patch puts the check in the body of qemuProcessStartValidate in the
      place where qemuProcessStartValidateXML was being called. The function can
      now be removed.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_process.c: moving qemuValidateCpuCount to qemu_domain.c · 9c2fbe97
      Authored by Daniel Henrique Barboza
      Previous patch removed the call to qemuProcessValidateCpuCount
      from qemuProcessStartValidateXML, in qemu_process.c. The only
      caller left is qemuDomainDefValidate, in qemu_domain.c.
      
      Instead of having a public function declared inside qemu_process.c
      that isn't used in that file, this patch moves the function to
      qemu_domain.c, making it static and renaming it to
      qemuDomainValidateCpuCount to be consistent with the names of other
      static functions in the file.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_domain.c: moving maxCpu validation to qemuDomainDefValidate · 2c4a6a34
      Authored by Daniel Henrique Barboza
      Adding maxCpu validation in qemuDomainDefValidate allows the user to
      spot out-of-range maxCpus counts at editing time, instead of facing
      a runtime error when starting the domain. This check is also arch
      independent.
      
      This leaves us with 2 calls to qemuProcessValidateCpuCount: one in
      qemuProcessStartValidateXML and the new one at qemuDomainDefValidate.
      
      The call in qemuProcessStartValidateXML is redundant. Following that
      code path, there is a call to virDomainDefValidate, which in turn
      calls config.domainValidateCallback. In this case, the callback
      function is qemuDomainDefValidate. This means that, at startup time,
      qemuProcessValidateCpuCount would be called twice.
      
      To avoid that, let's also remove the qemuProcessValidateCpuCount call
      from qemuProcessStartValidateXML.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_process.c: make qemuValidateCpuCount public · 9a8e0402
      Authored by Daniel Henrique Barboza
      qemuValidateCpuCount validates the maxCpus value of a domain at
      startup time, preventing it from starting if the value exceeds a
      maximum.

      This check is also done in qemu_domain.c, in qemuDomainDefValidate.
      However, it is done only for x86 (and even then, only in a specific
      scenario). We want this check to be done for all archs.
      
      To accomplish this, let's first make qemuValidateCpuCount public so
      it can be used inside qemuDomainDefValidate. The function was renamed
      to qemuProcessValidateCpuCount to be consistent with the other public
      methods in qemu_process.h. The method signature was slightly adapted
      to fit the const 'def' variable used in qemuDomainDefValidate. This
      change has no downside in its original usage in
      qemuProcessStartValidateXML.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu_process.c: adding maxCpus value to error message · 8aad8432
      Authored by Daniel Henrique Barboza
      Adding the maxCpus value to the error message of qemuValidateCpuCount
      allows the user to set an acceptable maxCpus count without knowing
      QEMU internals.

      x86 guests that might have been created prior to the x86
      qemuDomainDefValidate maxCpus check code (which validates the maxCpus
      value at editing time) will also benefit from this change.
      Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
  6. 15 Nov 2018, 6 commits
  7. 14 Nov 2018, 1 commit
  8. 12 Nov 2018, 2 commits
  9. 08 Nov 2018, 1 commit
    • qemu: Don't ignore resume events · e4794935
      Authored by Jiri Denemark
      Since commit v4.7.0-302-ge6d77a75, processing the RESUME event is
      mandatory for updating the domain state. But the event handler
      explicitly ignored this event in some cases. Thus the state would be
      wrong after a fake reboot or when a domain was rebooted after it
      crashed.

      BTW, the code ignoring the RESUME event after SHUTDOWN didn't make
      sense even before making the RESUME event mandatory. Most likely it
      was there as a result of careless copy&paste from
      qemuProcessHandleStop.

      The corresponding debug message was clarified since the original
      state does not have to be "paused" only, and while the event is
      called "resumed", the state is called "running".
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1612943
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
  10. 07 Nov 2018, 2 commits
    • qemu: Narrow the shutdown reconnection failure reason window · 8f0f8425
      Authored by John Ferlan
      The current qemuProcessReconnect logic paints with a broad brush,
      determining that the shutdown reason must be crashed if the domain
      was started with -no-shutdown; however, there are many other ways to
      reach the error label, so let's narrow our reasoning window for using
      VIR_DOMAIN_SHUTOFF_CRASHED to the period where we essentially know
      we've tried to connect to the monitor and before we were successful
      in opening the connection.

      Failures that occur outside that window are thus considered
      VIR_DOMAIN_SHUTOFF_UNKNOWN, at least for now.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
    • qemu: Restore lost shutdown reason · 296e05b5
      Authored by John Ferlan
      When qemuProcessReconnectHelper was introduced (commit d38897a5)
      reconnection failure used VIR_DOMAIN_SHUTOFF_FAILED; however, that
      was changed in commit bda2f17d to either VIR_DOMAIN_SHUTOFF_CRASHED
      or VIR_DOMAIN_SHUTOFF_UNKNOWN.
      
      When QEMU_CAPS_NO_SHUTDOWN checking was removed in commit fe35b1ad
      the conditional state was just left at VIR_DOMAIN_SHUTOFF_CRASHED.
      
      So introduce qemuDomainIsUsingNoShutdown which will manage the
      condition when the domain was started with -no-shutdown so that
      when/if reconnection failure occurs we can restore the decision
      point used to determine whether CRASHED or UNKNOWN is provided.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
  11. 06 Nov 2018, 1 commit
  12. 20 Oct 2018, 1 commit
  13. 18 Oct 2018, 1 commit
  14. 02 Oct 2018, 1 commit
  15. 26 Sep 2018, 4 commits
  16. 25 Sep 2018, 2 commits
  17. 22 Sep 2018, 2 commits
    • qemu: Update hostdevs device lists before connecting qemu monitor · 2f754b26
      Authored by Wu Zongyong
      In the following case:

          virsh start $domain
          service libvirtd stop
          <shutdown> the guest from within the $domain
          service libvirtd start

      Notice that PCI devices which have been assigned to the $domain will
      still be bound to stub drivers instead of being rebound to host
      drivers. In that case the call stack is like below:
      
          libvirtd start
              qemuProcessReconnect
                  qemuProcessStop (because $domain was shutdown without
                                   libvirtd event to process that)
                      qemuHostdevReAttachDomainDevices
                          qemuHostdevReAttachPCIDevices
                              virHostdevReAttachPCIDevices
      
      However, because qemuHostdevUpdateActiveDomainDevices was called
      after qemuConnectMonitor, the tracking of each host device in the
      $domain on either the activePCIHostdevs list or the
      inactivePCIHostdevs list was not set up in time. Therefore,
      virHostdevReAttachPCIDevices simply neglects those host PCI devices
      which are bound to stub drivers and doesn't rebind them to host
      drivers.

      This patch fixes that by moving qemuHostdevUpdateActiveDomainDevices
      before qemuConnectMonitor during libvirtd reconnection processing.
      Signed-off-by: Wu Zongyong <cordius.wu@huawei.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
    • qemu: Fix deadlock if create qemuProcessReconnect thread failed · fad65432
      Authored by Wang Yechao
      Use the new qemuDomainRemoveInactiveJobLocked to remove the
      @obj during the virDomainObjListForEach call which holds a
      lock on the domain object list.
      Signed-off-by: Wang Yechao <wang.yechao255@zte.com.cn>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
  18. 20 Sep 2018, 1 commit
    • qemu: Ignore nwfilter binding instantiation issues during reconnect · 9e52c649
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1607202
      
      It's essentially stated in nwfilterBindingDelete that we will allow
      the admin to shoot themselves in the foot by deleting the nwfilter
      binding, which then allows them to undefine the nwfilter that is in
      use for the running guest...

      However, by allowing this we cause a problem for libvirtd restart
      reconnect processing, which would then try to recreate the missing
      binding, attempting to use the deleted filter, resulting in an error
      and thus shutting the guest down.

      So rather than keep adding virDomainConfNWFilterInstantiate flags to
      "ignore" specific error conditions, modify the logic to ignore, but
      VIR_WARN about, errors other than ignoreExists. This will at least
      allow the guest not to be shut down due to nwfilter binding errors
      that we can now perhaps recover from, since we have the binding
      create/delete capability.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      ACKed-by: Michal Privoznik <mprivozn@redhat.com>
  19. 17 Sep 2018, 2 commits