1. 19 June 2015, 7 commits
    • qemu: Properly report failed migration · 39564891
      Jiri Denemark authored
      Because we are polling, we may detect some errors only after we have
      asked QEMU for the migration status, even though they occurred
      earlier. If this happens and QEMU reports that the migration
      completed successfully, we would happily report success even though
      we should have cancelled the migration because of the other error.
      
      In practice it is not a big issue now, but it will become a much
      bigger issue once the check for storage migration status is moved
      inside the loop in qemuMigrationWaitForCompletion.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
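      The race the message describes can be sketched as follows. This is a
      minimal illustration with made-up names (not libvirt's actual API):
      an error recorded asynchronously must be re-checked after the poll
      returns, before a "completed" status is trusted.

      ```c
      /* Minimal illustration (made-up names): re-check for an asynchronously
       * recorded error before trusting a "completed" answer from the poll. */
      enum mig_status { MIG_ACTIVE, MIG_COMPLETED, MIG_FAILED };

      static enum mig_status
      poll_result(enum mig_status reported, int error_seen_after_poll)
      {
          /* The error may have been noticed only after we queried QEMU,
           * even though it happened before -- it must still override a
           * "successful" result and cancel the migration. */
          if (error_seen_after_poll)
              return MIG_FAILED;
          return reported;
      }
      ```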
    • qemu: Introduce qemuBlockJobUpdate · e2cc0e66
      Jiri Denemark authored
      The wrapper is useful for calling qemuBlockJobEventProcess with the
      event details stored in the disk's privateData, which is the most
      likely usage of qemuBlockJobEventProcess.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
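      The idea of such a wrapper can be sketched like this (struct layout
      and names are assumptions for illustration, not libvirt's actual
      definitions): the event handler caches block-job event details in the
      disk's private data, and the wrapper replays them into the processor,
      then clears them so the event is not processed twice.

      ```c
      /* Illustrative sketch: cached event details replayed by a wrapper. */
      struct disk_private {
          int job_event_type;      /* 0 means "no pending event" */
          int job_event_status;
      };

      static int processed_sum;    /* records what the processor last saw */

      /* Stand-in for the real event-processing function. */
      static void block_job_event_process(int type, int status)
      {
          processed_sum = type + status;
      }

      /* Stand-in for the wrapper: process the cached event, then clear it. */
      static int block_job_update(struct disk_private *priv)
      {
          if (!priv->job_event_type)
              return 0;            /* nothing pending */
          block_job_event_process(priv->job_event_type,
                                  priv->job_event_status);
          priv->job_event_type = 0;
          priv->job_event_status = 0;
          return 1;
      }
      ```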
    • conf: Introduce per-domain condition variable · e0713c4b
      Jiri Denemark authored
      Complex jobs, such as migration, need to monitor several events at
      once, which is impossible when each event uses its own condition
      variable. This patch adds a single condition variable to each domain
      object, which can be used instead of the other event-specific
      conditions.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
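      The pattern can be sketched with pthreads (all names here are
      illustrative, not libvirt's actual structures): one condition variable
      per domain object is broadcast on every event, and each waiter
      re-checks its own predicate, so several different events can share the
      single condition.

      ```c
      #include <pthread.h>
      #include <stdbool.h>

      /* Illustrative sketch: one per-domain condition, many predicates. */
      struct domain_obj {
          pthread_mutex_t lock;
          pthread_cond_t cond;    /* the single per-domain condition */
          bool job_done;          /* one of several possible predicates */
      };

      /* Any event source wakes all waiters; each filters by predicate. */
      static void domain_broadcast(struct domain_obj *dom)
      {
          pthread_cond_broadcast(&dom->cond);
      }

      /* Called with dom->lock held: wait until this waiter's predicate holds. */
      static void domain_wait_job(struct domain_obj *dom)
      {
          while (!dom->job_done)
              pthread_cond_wait(&dom->cond, &dom->lock);
      }

      /* Single-threaded demo: a predicate already true returns immediately. */
      static bool demo(void)
      {
          struct domain_obj dom;
          pthread_mutex_init(&dom.lock, NULL);
          pthread_cond_init(&dom.cond, NULL);
          dom.job_done = false;
          pthread_mutex_lock(&dom.lock);
          dom.job_done = true;    /* the event already happened */
          domain_broadcast(&dom); /* no waiters yet; harmless */
          domain_wait_job(&dom);  /* predicate true: returns at once */
          pthread_mutex_unlock(&dom.lock);
          return dom.job_done;
      }
      ```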
    • virNetServerServiceClose: Don't leak sockets · 355d8f47
      Michal Privoznik authored
      Well, if a server is being destructed, all underlying services and
      their sockets should disappear with it. But due to a bug in our
      implementation this is not the case. Yes, we are closing the sockets,
      but that's not enough. We must also:
      
      1) Unregister them from the event loop
      2) Unref the service for each socket
      
      The last step is needed because each socket callback holds a
      reference to the service object. Since the first step unregisters
      the callbacks, they no longer need the reference.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
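      The two teardown steps can be sketched as follows (illustrative names
      and a simplified refcount, not libvirt's actual types): each
      registered socket callback holds a reference on the service, so
      closing must both unregister the callback and drop that reference, or
      the service leaks.

      ```c
      /* Illustrative sketch of the close path the message describes. */
      struct service {
          int refs;
          int nsockets;
          int registered[8];      /* 1 = callback registered in event loop */
      };

      static void service_unref(struct service *svc) { svc->refs--; }

      static void service_close(struct service *svc)
      {
          for (int i = 0; i < svc->nsockets; i++) {
              /* 1) unregister the socket's callback from the event loop */
              svc->registered[i] = 0;
              /* 2) drop the reference that callback held on the service */
              service_unref(svc);
              /* (closing the socket fd itself is elided here) */
          }
      }
      ```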
    • virNetSocket: Fix @watch corner case · 9ee5a798
      Michal Privoznik authored
      Although highly unlikely, nobody says that virEventAddHandle() can't
      return 0 as the handle for a socket callback. That can't happen with
      our default implementation, since all watches have a value of 1 or
      greater, but users can register their own callback functions (which
      can, for instance, re-use unused watch IDs). If that happens, weird
      things may occur.
      
      Also, there's a little bug I'm fixing too: in
      virNetSocketRemoveIOCallback(), the variable holding the callback ID
      was not reset. Therefore, calling AddIOCallback() once again would
      fail. Not that we are doing that right now, but we might.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
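      Both points can be sketched together (illustrative names, not the
      real virNetSocket code): use -1, not 0, as the "no callback"
      sentinel, since 0 may be a perfectly valid watch ID, and reset the
      sentinel on removal so a later registration succeeds.

      ```c
      /* Illustrative sketch: a watch ID of 0 must be treated as valid. */
      struct sock { int watch; };   /* initialize watch to -1, not 0 */

      static int add_io_callback(struct sock *s, int new_watch)
      {
          if (s->watch >= 0)
              return -1;            /* a callback is already registered */
          s->watch = new_watch;     /* 0 is accepted as a valid watch */
          return 0;
      }

      static void remove_io_callback(struct sock *s)
      {
          /* forgetting this reset made a later add_io_callback() fail */
          s->watch = -1;
      }
      ```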
    • virNetSocketRemoveIOCallback: Be explicit about unref · 899e49a2
      Michal Privoznik authored
      When going through the code I noticed that
      virNetSocketAddIOCallback() increases the reference counter of
      @socket. However, its counterpart, RemoveIOCallback(), does not. It
      took me a while to realize this disproportion. AddIOCallback()
      registers our own callback, which eventually calls the desired
      callback and then unrefs @sock. Yeah, a bit complicated, but it
      works. So let's note this hard-learned fact in a comment in
      RemoveIOCallback().
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
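      The asymmetry the message describes can be sketched as follows
      (illustrative names and a simplified refcount, following the
      message's own description rather than the real virNetSocket code):
      the reference taken in the add path is dropped by the dispatch
      wrapper, so the remove path must not drop it again.

      ```c
      /* Illustrative sketch of the add/remove refcount asymmetry. */
      struct sock { int refs; int registered; };

      static void sock_ref(struct sock *s)   { s->refs++; }
      static void sock_unref(struct sock *s) { s->refs--; }

      static void add_io_callback(struct sock *s)
      {
          sock_ref(s);              /* reference held for the wrapper */
          s->registered = 1;
      }

      /* Our wrapper around the user's callback drops the reference itself. */
      static void dispatch_event(struct sock *s)
      {
          /* ... invoke the desired callback here ... */
          sock_unref(s);
      }

      /* Note: no unref here -- the dispatch wrapper already dropped it. */
      static void remove_io_callback(struct sock *s)
      {
          s->registered = 0;
      }
      ```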
    • lib: setvcpus: Remove bogus flag check · 1d0fc808
      Peter Krempa authored
      Since VIR_DOMAIN_AFFECT_CURRENT is 0, the flag check does not make
      sense: masking @flags with 0 always evaluates to false.
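      The bogus pattern is easy to demonstrate (constant values mirrored
      from libvirt's virDomainModificationImpact, where AFFECT_CURRENT is
      0, AFFECT_LIVE is 1 and AFFECT_CONFIG is 2; the check function itself
      is illustrative):

      ```c
      /* Masking with a zero-valued flag can never be non-zero. */
      enum { AFFECT_CURRENT = 0, AFFECT_LIVE = 1, AFFECT_CONFIG = 2 };

      static int bogus_flag_check(unsigned int flags)
      {
          return (flags & AFFECT_CURRENT) != 0;  /* always 0, for any flags */
      }
      ```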
  2. 18 June 2015, 33 commits