1. 19 June 2015, 11 commits
    • qemu: Abort migration early if disk mirror failed · a9ba39a1
      Jiri Denemark authored
      Abort migration as soon as we detect that some of the disk mirrors
      failed. There's no sense in trying to finish memory migration first.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
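      A minimal sketch of the control flow the commit above describes: the migration wait loop checks the disk mirrors on every iteration and bails out immediately instead of letting memory migration finish first. All names here (diskMirrorFailed(), waitForMigration()) are hypothetical stand-ins, not libvirt's actual functions.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        enum mirrorState { MIRROR_OK, MIRROR_FAILED };

        static enum mirrorState mirrors[] = { MIRROR_OK, MIRROR_FAILED };

        static bool diskMirrorFailed(void)
        {
            for (size_t i = 0; i < sizeof(mirrors) / sizeof(mirrors[0]); i++)
                if (mirrors[i] == MIRROR_FAILED)
                    return true;
            return false;
        }

        static int waitForMigration(void)
        {
            int rounds = 3; /* stands in for "memory migration still running" */

            while (rounds-- > 0) {
                /* Abort as soon as any disk mirror failed; there is no
                 * point in finishing memory migration first. */
                if (diskMirrorFailed()) {
                    fprintf(stderr, "migration aborted: disk mirror failed\n");
                    return -1;
                }
                /* ... otherwise poll QEMU for memory migration progress ... */
            }
            return 0;
        }

        int main(void)
        {
            return waitForMigration() == 0 ? 0 : 1;
        }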
    • qemu: Cancel storage migration in parallel · cebb110f
      Jiri Denemark authored
      Instead of cancelling disk mirrors sequentially, let's just call
      block-job-cancel for all migrating disks and then wait until all
      disappear.
      
      In case we cancel the disk mirrors at the end of a successful
      migration, we also need to check that all block jobs completed
      successfully. Otherwise we have to abort the migration.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
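      The two-phase pattern this commit describes, as a standalone sketch: first fire the cancel for every disk, then wait for all jobs to disappear. cancelBlockJob() and blockJobActive() are hypothetical stand-ins for the monitor calls that drive QEMU's block-job-cancel; the simulation completes each cancellation immediately.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        #define NDISKS 3

        static bool jobActive[NDISKS] = { true, true, true };

        /* Stand-in for sending block-job-cancel through the QEMU monitor. */
        static void cancelBlockJob(size_t disk)
        {
            jobActive[disk] = false;
        }

        static bool blockJobActive(size_t disk)
        {
            return jobActive[disk];
        }

        static void cancelAllMirrors(void)
        {
            size_t i;

            /* Phase 1: request cancellation for every disk up front
             * instead of cancelling one and waiting before the next. */
            for (i = 0; i < NDISKS; i++)
                cancelBlockJob(i);

            /* Phase 2: wait until every job has disappeared; the real
             * code blocks on block job events rather than spinning. */
            bool pending = true;
            while (pending) {
                pending = false;
                for (i = 0; i < NDISKS; i++)
                    if (blockJobActive(i))
                        pending = true;
            }
        }

        int main(void)
        {
            cancelAllMirrors();
            printf("all mirror jobs cancelled\n");
            return 0;
        }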
    • qemu: Use domain condition for synchronous block jobs · 4172b96a
      Jiri Denemark authored
      By switching block jobs to use domain conditions, we can drop some
      pretty complicated code in NBD storage migration.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
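      How a synchronous block-job wait can look once everything funnels through one per-domain condition, sketched with plain pthreads. The struct and helper names are illustrative, not libvirt's actual API: the waiter sleeps on the shared condition and re-checks its own predicate, while the event handler records the new state and broadcasts.

        #include <pthread.h>
        #include <stdio.h>

        enum jobState { JOB_RUNNING, JOB_COMPLETED, JOB_FAILED };

        struct domainObj {
            pthread_mutex_t lock;
            pthread_cond_t cond;    /* the single per-domain condition */
            enum jobState blockJob;
        };

        /* Event handler: record the new job state and wake all waiters. */
        static void *eventThread(void *opaque)
        {
            struct domainObj *dom = opaque;

            pthread_mutex_lock(&dom->lock);
            dom->blockJob = JOB_COMPLETED;
            pthread_cond_broadcast(&dom->cond);
            pthread_mutex_unlock(&dom->lock);
            return NULL;
        }

        int main(void)
        {
            struct domainObj dom = {
                PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, JOB_RUNNING
            };
            pthread_t tid;

            pthread_create(&tid, NULL, eventThread, &dom);

            /* Synchronous wait: sleep on the shared condition and
             * re-check our own predicate after each wakeup. */
            pthread_mutex_lock(&dom.lock);
            while (dom.blockJob == JOB_RUNNING)
                pthread_cond_wait(&dom.cond, &dom.lock);
            pthread_mutex_unlock(&dom.lock);

            pthread_join(tid, NULL);
            printf("block job finished: %s\n",
                   dom.blockJob == JOB_COMPLETED ? "completed" : "failed");
            return 0;
        }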
    • qemu: Properly report failed migration · 39564891
      Jiri Denemark authored
      Because we are polling, we may detect some errors only after we
      have asked QEMU for the migration status, even though they
      occurred earlier. If this happens and QEMU reports the migration
      completed successfully, we would happily report success even
      though we should have cancelled the migration because of the
      other error.
      
      In practice it is not a big issue now, but it will become a much
      bigger issue once the check for storage migration status is moved
      inside the loop in qemuMigrationWaitForCompletion.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
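      The race in miniature: QEMU's answer is captured first, so an error recorded in the meantime (but caused earlier) must be checked afterwards and must win over a stale "completed". The helper names are hypothetical, and the badly timed error is simulated inside the query itself.

        #include <stdbool.h>
        #include <stdio.h>

        static bool storedError = false;

        /* Stand-in for the QMP query; in the real race, an error can be
         * recorded elsewhere right after this answer is produced. */
        static int queryQemuMigrationStatus(void)
        {
            storedError = true;   /* simulate the badly timed error */
            return 0;             /* QEMU says: completed successfully */
        }

        static int checkMigrationResult(void)
        {
            int status = queryQemuMigrationStatus();

            /* Check our own error state *after* the query, so an error
             * that slipped in between overrides QEMU's "completed". */
            if (storedError) {
                fprintf(stderr, "migration failed despite completed status\n");
                return -1;
            }
            return status;
        }

        int main(void)
        {
            return checkMigrationResult() == 0 ? 0 : 1;
        }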
    • qemu: Introduce qemuBlockJobUpdate · e2cc0e66
      Jiri Denemark authored
      The wrapper is useful for calling qemuBlockJobEventProcess with
      the event details stored in the disk's privateData, which is the
      most likely usage of qemuBlockJobEventProcess.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
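      The rough shape of such a wrapper: pull the last event out of the disk's private data, feed it to the processing function, and mark it consumed. The structs and signatures below are assumptions for illustration, not libvirt's actual definitions.

        #include <stdio.h>

        struct diskPrivate {
            int blockJobType;     /* last block job event, if any */
            int blockJobStatus;   /* 0 means "no event pending" */
        };

        struct disk {
            struct diskPrivate *privateData;
        };

        /* Stand-in for the real event-processing function. */
        static void blockJobEventProcess(struct disk *disk, int type, int status)
        {
            (void)disk;
            printf("processing block job event: type=%d status=%d\n",
                   type, status);
        }

        /* The wrapper: process the event stored in privateData, consume
         * it, and tell the caller whether there was one. */
        static int blockJobUpdate(struct disk *disk)
        {
            struct diskPrivate *priv = disk->privateData;
            int status = priv->blockJobStatus;

            if (status != 0) {
                blockJobEventProcess(disk, priv->blockJobType, status);
                priv->blockJobStatus = 0;
            }
            return status;
        }

        int main(void)
        {
            struct diskPrivate priv = { 1, 2 };
            struct disk d = { &priv };

            blockJobUpdate(&d);   /* processes the stored event */
            blockJobUpdate(&d);   /* nothing left to do */
            return 0;
        }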
    • conf: Introduce per-domain condition variable · e0713c4b
      Jiri Denemark authored
      Complex jobs, such as migration, need to monitor several events at
      once, which is impossible when each event uses its own condition
      variable. This patch adds a single condition variable to each
      domain object. This variable can be used instead of the other
      event-specific conditions.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
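      The design point in miniature: one condition serves arbitrarily many events because every state change broadcasts and each waiter re-checks its own predicate. A sketch with names modelled loosely on (but not identical to) the API this commit introduces.

        #include <pthread.h>
        #include <stdbool.h>

        struct domainObj {
            pthread_mutex_t lock;
            pthread_cond_t cond;   /* one condition for all events */
            /* ... all the domain state the predicates look at ... */
        };

        /* Must be called with dom->lock held. Because the condition is
         * shared, a waiter may be woken by an unrelated event, so it
         * loops until its own predicate holds. */
        static void domainObjWait(struct domainObj *dom,
                                  bool (*predicate)(struct domainObj *))
        {
            while (!predicate(dom))
                pthread_cond_wait(&dom->cond, &dom->lock);
        }

        /* Must be called with dom->lock held, after changing any state a
         * waiter might care about. Broadcast, not signal: several jobs
         * may be waiting for different things. */
        static void domainObjBroadcast(struct domainObj *dom)
        {
            pthread_cond_broadcast(&dom->cond);
        }

        static bool alwaysReady(struct domainObj *dom)
        {
            (void)dom;
            return true;   /* real predicates check job or migration state */
        }

        int main(void)
        {
            struct domainObj dom = {
                PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER
            };

            pthread_mutex_lock(&dom.lock);
            domainObjWait(&dom, alwaysReady);   /* returns at once */
            domainObjBroadcast(&dom);           /* wakes nobody; harmless */
            pthread_mutex_unlock(&dom.lock);
            return 0;
        }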
    • virNetServerServiceClose: Don't leak sockets · 355d8f47
      Michal Privoznik authored
      Well, if a server is being destructed, all underlying services and
      their sockets should disappear with it. But due to a bug in our
      implementation this is not the case. Yes, we are closing the
      sockets, but that's not enough. We must also:
      
      1) Unregister them from the event loop
      2) Unref the service for each socket
      
      The last step is needed because each socket callback holds a
      reference to the service object. Since the first step unregisters
      the callbacks, they no longer need the reference.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
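      The fixed teardown in sketch form: for every socket, the event-loop handle is removed first, and the reference that the callback held on the service is dropped. The refcounting and event-loop helpers below are simplified stand-ins for virObjectUnref() and the libvirt event API, not the real functions.

        #include <stdio.h>

        struct service {
            int refs;
            int nsocks;
            int *watches;   /* one event-loop watch per socket */
        };

        static void serviceUnref(struct service *svc)
        {
            if (--svc->refs == 0)
                printf("service freed\n");
        }

        static void eventRemoveHandle(int watch)
        {
            printf("watch %d unregistered\n", watch);
        }

        static void serviceClose(struct service *svc)
        {
            for (int i = 0; i < svc->nsocks; i++) {
                /* 1) Unregister the socket callback from the event loop. */
                eventRemoveHandle(svc->watches[i]);
                /* 2) The callback held a reference to the service; now
                 *    that it is unregistered, drop that reference too.
                 *    Closing the socket alone would leak it. */
                serviceUnref(svc);
                /* ... close the socket itself here ... */
            }
        }

        int main(void)
        {
            int watches[] = { 1, 2 };
            /* 1 ref for the owner + 1 per registered socket callback. */
            struct service svc = { 3, 2, watches };

            serviceClose(&svc);
            serviceUnref(&svc);   /* owner drops its own reference */
            return 0;
        }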
    • virNetSocket: Fix @watch corner case · 9ee5a798
      Michal Privoznik authored
      Although it is highly unlikely, nobody says that
      virEventAddHandle() can't return 0 as the handle to a socket
      callback. It can't happen with our default implementation, since
      all watches have a value of 1 or greater, but users can register
      their own callback functions (which may, for instance, re-use
      unused watch IDs). If that is the case, weird things may happen.
      
      There's also a little bug being fixed here: upon
      virNetSocketRemoveIOCallback(), the variable holding the callback
      ID was not reset, so calling AddIOCallback() once again would
      fail. Not that we are doing that right now, but we might.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
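      Both fixes in one sketch: treat -1, not 0, as "no callback registered" (a custom virEventAddHandle() may legally hand out watch 0), and reset the stored watch on removal so a later AddIOCallback() succeeds. The event-loop helpers are simplified stand-ins.

        #include <stdio.h>

        #define WATCH_NONE (-1)

        struct sock {
            int watch;
        };

        /* Stand-in event loop that hands out watch IDs starting at 0,
         * which a user-supplied implementation is allowed to do. */
        static int nextWatch;
        static int eventAddHandle(void)
        {
            return nextWatch++;
        }

        static void eventRemoveHandle(int watch)
        {
            printf("watch %d removed\n", watch);
        }

        static int addIOCallback(struct sock *s)
        {
            if (s->watch != WATCH_NONE)   /* not "!= 0": 0 is a valid watch */
                return -1;
            s->watch = eventAddHandle();
            return 0;
        }

        static void removeIOCallback(struct sock *s)
        {
            eventRemoveHandle(s->watch);
            s->watch = WATCH_NONE;   /* reset, so a later re-add succeeds */
        }

        int main(void)
        {
            struct sock s = { WATCH_NONE };

            addIOCallback(&s);      /* gets watch 0; still handled right */
            removeIOCallback(&s);
            return addIOCallback(&s) == 0 ? 0 : 1;   /* re-add works */
        }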
    • virNetSocketRemoveIOCallback: Be explicit about unref · 899e49a2
      Michal Privoznik authored
      When going through the code I noticed that
      virNetSocketAddIOCallback() increases the reference counter of
      @sock, while its counterpart, RemoveIOCallback(), does not. It
      took me a while to realize this disproportion. AddIOCallback()
      registers our own callback, which eventually calls the desired
      callback and then unrefs @sock. Yeah, a bit complicated, but it
      works. So let's note this hard-learned fact in a comment in
      RemoveIOCallback().
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
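      A sketch of the asymmetry being documented: the reference taken in AddIOCallback() is released by the dispatch wrapper that the event loop invokes, which is why RemoveIOCallback() contains no unref of its own. The names and the hand-driven "event loop" in main() are illustrative only.

        #include <stdio.h>

        struct sock {
            int refs;
            void (*userCb)(struct sock *);
        };

        static void sockRef(struct sock *s) { s->refs++; }
        static void sockUnref(struct sock *s)
        {
            if (--s->refs == 0)
                printf("socket freed\n");
        }

        /* The wrapper registered with the event loop: it runs the
         * desired callback and eventually drops the reference that
         * addIOCallback() took. */
        static void dispatchWrapper(struct sock *s)
        {
            s->userCb(s);
            sockUnref(s);   /* balances the ref from addIOCallback() */
        }

        static void addIOCallback(struct sock *s, void (*cb)(struct sock *))
        {
            sockRef(s);     /* keep @sock alive while the callback lives */
            s->userCb = cb;
            /* ... register dispatchWrapper with the event loop ... */
        }

        /* Note the deliberately missing unref: dispatchWrapper() is the
         * one that releases the reference. */
        static void removeIOCallback(struct sock *s)
        {
            /* ... unregister from the event loop ... */
            (void)s;
        }

        static void onReadable(struct sock *s)
        {
            (void)s;
            printf("callback ran\n");
        }

        int main(void)
        {
            struct sock s = { 1, NULL };

            addIOCallback(&s, onReadable);   /* refs == 2 */
            removeIOCallback(&s);            /* still refs == 2 */
            dispatchWrapper(&s);             /* event loop's final call: refs == 1 */
            sockUnref(&s);                   /* owner's ref: freed */
            return 0;
        }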
    • daemonSetupNetworking: Don't leak services · 058d18bd
      Michal Privoznik authored
      When setting up the daemon networking, new services are created.
      These services then have sockets to listen on. Once created, the
      service objects are added to the corresponding server object.
      However, during that process the server increases the reference
      counter of the service object, so at the end of the function we
      should decrease it again. This way the service objects end up with
      only one reference, which is okay, since the servers are the only
      objects holding a reference to them.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
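      The refcounting discipline the fix restores, sketched with simplified helpers standing in for the virNetServerService / virObjectUnref machinery: the creator drops its reference once the server holds its own.

        #include <stdio.h>
        #include <stdlib.h>

        struct service {
            int refs;
        };

        static struct service *serviceNew(void)
        {
            struct service *svc = calloc(1, sizeof(*svc));
            if (!svc)
                abort();
            svc->refs = 1;   /* the creator's reference */
            return svc;
        }

        static void serviceUnref(struct service *svc)
        {
            if (--svc->refs == 0) {
                printf("service freed\n");
                free(svc);
            }
        }

        /* The server keeps its own reference to every added service. */
        static int serverAddService(struct service *svc)
        {
            svc->refs++;
            return 0;
        }

        int main(void)
        {
            struct service *svc = serviceNew();        /* refs == 1 */

            if (serverAddService(svc) < 0) {
                serviceUnref(svc);
                return 1;
            }                                          /* refs == 2 */

            /* The fix: drop the creator's reference so the server's is
             * the only one left; without this unref the service leaks. */
            serviceUnref(svc);                         /* refs == 1 */

            serviceUnref(svc);                         /* server teardown: freed */
            return 0;
        }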
    • lib: setvcpus: Remove bogus flag check · 1d0fc808
      Peter Krempa authored
      Since VIR_DOMAIN_AFFECT_CURRENT is 0, the flag check does not make
      sense: masking @flags with 0 always evaluates to false.
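      A short demonstration of why the check was bogus; the enum values below follow libvirt's public virDomainModificationImpact.

        #include <stdio.h>

        enum {
            VIR_DOMAIN_AFFECT_CURRENT = 0,
            VIR_DOMAIN_AFFECT_LIVE    = 1 << 0,
            VIR_DOMAIN_AFFECT_CONFIG  = 1 << 1,
        };

        int main(void)
        {
            unsigned int flags = VIR_DOMAIN_AFFECT_LIVE;

            /* Masking with 0 yields 0 for every value of @flags, so
             * this branch can never be taken: */
            if (flags & VIR_DOMAIN_AFFECT_CURRENT)
                printf("unreachable\n");

            /* "Current" is instead the absence of the LIVE/CONFIG bits: */
            if (!(flags & (VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG)))
                printf("affect the current state\n");

            return 0;
        }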
  2. 18 June 2015, 29 commits