1. 19 June 2015, 5 commits
  2. 18 June 2015, 3 commits
  3. 11 June 2015, 1 commit
  4. 04 June 2015, 1 commit
  5. 21 May 2015, 1 commit
  6. 15 May 2015, 4 commits
  7. 13 May 2015, 1 commit
  8. 05 May 2015, 1 commit
  9. 04 May 2015, 1 commit
  10. 29 April 2015, 1 commit
    • qemu: migration: use sync block job helpers · 99725f94
      Committed by Michael Chapman
      In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
      We wait until they are all ready, or one of them aborts.
      
      In qemuMigrationCancelDriveMirror, we wait until all mirrors are
      properly stopped. This is necessary to ensure that the destination VM
      is fully in sync with the (paused) source VM.
      
      If a drive mirror cannot be cancelled, then the destination is not in
      a consistent state. In this case it is not safe to continue with the
      migration.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
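
      A minimal, self-contained sketch of the ready/abort check this commit
      describes (stub types only; the real helpers are the sync block job
      functions such as qemuBlockJobSyncBegin, and the mirrors are all
      started before this check runs):

          #include <stdbool.h>
          #include <stddef.h>

          typedef struct { bool ready; bool aborted; } DiskMirror;

          /* One pass over the disks: 1 = all mirrors ready, 0 = still
           * copying, -1 = some mirror failed. The real loop re-runs this
           * after each block-job event, blocking on the per-domain
           * condition variable in between. */
          static int checkMirrors(const DiskMirror *mirrors, size_t n)
          {
              size_t i;
              bool allReady = true;

              for (i = 0; i < n; i++) {
                  if (mirrors[i].aborted)
                      return -1;          /* one failure cancels the set */
                  if (!mirrors[i].ready)
                      allReady = false;
              }
              return allReady ? 1 : 0;
          }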
  11. 24 April 2015, 2 commits
  12. 22 April 2015, 1 commit
  13. 21 April 2015, 1 commit
    • domain: conf: Drop expectedVirtTypes · 835cf84b
      Committed by Cole Robinson
      This needs to be specified in far too many places for a simple
      validation check. The ostype/arch/virttype validation checks later in
      DomainDefParseXML should catch most of the cases this was covering.
  14. 14 April 2015, 2 commits
  15. 13 April 2015, 2 commits
  16. 08 April 2015, 3 commits
    • qemu_migrate: use nested job when adding NBD to cookie · 72df8314
      Committed by Michael Chapman
      qemuMigrationCookieAddNBD is usually called from within an async
      MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
      
      (The one exception is during the Begin phase when change protection
      isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
      as qemuDomainObjEnterMonitor in this case.)
      
      This bug was encountered with a libvirt client that repeatedly queries
      the disk mirroring block job info during a migration. If one of these
      queries occurs just as the Perform migration cookie is baked, libvirt
      crashes.
      
      Relevant logs are as follows:
      
          6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
      [1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
      [2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
      [3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
      [4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
          6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
      
      At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
      on mon->notify. At [2] the request is written out to the monitor socket.
      At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
      mon->notify. The reply from the first request is received at [4].
      However, qemuMonitorJSONIOProcessLine is not expecting this reply since
      the second request hadn't completed sending. The reply is dropped and an
      error is returned.
      
      qemuMonitorIO signals mon->notify twice during its error handling,
      waking up both of the threads waiting on it. One of them clears mon->msg
      as it exits qemuMonitorSend; the other crashes:
      
        qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
        975         while (!mon->msg->finished) {
        (gdb) print mon->msg
        $1 = (qemuMonitorMessagePtr) 0x0
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
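
      The shape of the fix, sketched (qemuDomainObjEnterMonitorAsync and
      qemuDomainObjExitMonitor are the real helpers; the surrounding code
      is condensed):

          /* Inside an async MIGRATION_IN/OUT job, entering the monitor
           * must request a nested job. Passing the current async job also
           * covers the Begin phase: with no async job active, the helper
           * behaves like plain qemuDomainObjEnterMonitor. */
          if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                             priv->job.asyncJob) < 0)
              return -1;
          /* ... query the disk info needed for the NBD cookie ... */
          qemuDomainObjExitMonitor(driver, vm);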
    • qemu: fix race between disk mirror fail and cancel · e5d729ba
      Committed by Michael Chapman
      If a VM migration is aborted, a disk mirror may be failed by QEMU before
      libvirt has a chance to cancel it. The disk->mirrorState remains at
      _ABORT in this case, and this breaks subsequent mirrorings of that disk.
      
      We should instead check the mirrorState directly and transition to _NONE
      if it is already aborted. Do the check *after* aborting the block job in
      QEMU to avoid a race.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
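
      Sketched, with the real mirrorState enum values (the monitor call is
      elided):

          /* 1. Ask QEMU to abort the block job first ... */
          /* 2. ... and only then check the state, so a mirror that QEMU
           *    already failed is reset rather than left stuck at _ABORT: */
          if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT)
              disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;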
    • qemu: fix error propagation in qemuMigrationBegin · 77ddd0bb
      Committed by Michael Chapman
      If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
      indicate an error occurred.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
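
      The shape of the fix, sketched (virCloseCallbacksSet is the real API;
      the error path and xml handling are condensed):

          if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn,
                                   qemuMigrationCleanup) < 0) {
              VIR_FREE(xml);   /* make sure the caller sees NULL == error */
              goto endjob;
          }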
  17. 02 April 2015, 1 commit
    • conf: Rename virDomainHasDiskMirror and detect block jobs properly · ffe3d3e8
      Committed by Shanzhi Yu
      virDomainHasDiskMirror() currently detects only jobs that add the
      mirror elements. Since some operations, such as migration, are
      interlocked by existing block jobs on the given domain, the check
      needs to cover regular block jobs too.
      
      This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
      and adds an argument that makes it return true only for block copy
      jobs, since those interlock making the domain persistent.
      
      The other two callers trigger on any block job type.
      Signed-off-by: Shanzhi Yu <shyu@redhat.com>
      Signed-off-by: Peter Krempa <pkrempa@redhat.com>
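
      A sketch of the renamed helper (simplified; field names as in the
      domain_conf code of this era):

          bool
          virDomainHasDiskBlockjob(virDomainObjPtr obj,
                                   bool copy_only)
          {
              size_t i;

              for (i = 0; i < obj->def->ndisks; i++) {
                  virDomainDiskDefPtr disk = obj->def->disks[i];

                  /* any block job counts unless copy_only was requested */
                  if (!copy_only && disk->blockjob)
                      return true;

                  /* a block copy job interlocks making the domain
                   * persistent, so it always counts */
                  if (disk->mirror &&
                      disk->mirrorJob == VIR_DOMAIN_BLOCK_JOB_TYPE_COPY)
                      return true;
              }

              return false;
          }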
  18. 23 March 2015, 2 commits
    • qemu: migration: Forbid migration with memory modules lacking info · c5710066
      Committed by Peter Krempa
      Make sure that libvirt has all the vital information needed to
      reliably represent the configuration of the guest's memory devices
      across a migration.
      
      This patch forbids migration when the required slot number and module
      base address are not present (i.e. they could not be loaded from qemu
      via the monitor).
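
      Illustrative only (the predicate memoryModuleIsComplete is
      hypothetical; the real check lives in the migration "is allowed"
      path):

          size_t i;
          for (i = 0; i < def->nmems; i++) {
              if (!memoryModuleIsComplete(def->mems[i])) {
                  virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                                 _("cannot migrate: memory module is "
                                   "missing its slot number or base "
                                   "address"));
                  return false;
              }
          }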
    • qemu: skip precreation of network disks · a1b18051
      Committed by Michael Chapman
      Commit cf54c606 introduced the ability to create missing storage
      volumes during migration. For network disks, however, we may not
      necessarily be able to detect whether they already exist: there is no
      straightforward way to map the disk to a storage volume, and even if
      there were, it's possible that no configured storage pool actually
      contains the disk.
      
      It is better to assume the network disk exists in this case, rather
      than aborting the migration completely. If the volume really is
      missing, QEMU will generate an appropriate error later in the
      migration.
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
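
      The skip itself, sketched (virStorageSourceGetActualType and
      VIR_STORAGE_TYPE_NETWORK are real; the enclosing pre-creation loop is
      elided):

          if (virStorageSourceGetActualType(disk->src) ==
              VIR_STORAGE_TYPE_NETWORK) {
              VIR_DEBUG("Skipping network disk '%s': assuming it exists; "
                        "QEMU will fail the migration if it does not",
                        disk->dst);
              continue;
          }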
  19. 19 March 2015, 2 commits
    • qemu: track 'cancelling' migration state · e2660cb8
      Committed by Eric Blake
      In qemu 2.3, the migration status will include 'cancelling' in the
      window between when an asynchronous cancel has been requested and
      when the migration is actually halted.  Previously, qemu hid this
      state and reported 'active'.  Libvirt manages the sequence okay
      even when the string is unrecognized (that is, it will report an
      unknown state:
      
      Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.
      
      but the migration is still cancelled), but recognizing the string
      makes for a smoother user experience.
      
      * src/qemu/qemu_monitor.h
      (QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
      * src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
      * src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
      clients.
      * src/qemu/qemu_monitor_json.c
      (qemuMonitorJSONGetMigrationStatusReply): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
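
      The shape of the change, condensed from the files listed above (enum
      order and neighbouring values are illustrative, not the exact
      listing):

          /* qemu_monitor.h */
          typedef enum {
              QEMU_MONITOR_MIGRATION_STATUS_INACTIVE,
              QEMU_MONITOR_MIGRATION_STATUS_ACTIVE,
              QEMU_MONITOR_MIGRATION_STATUS_CANCELLING, /* new: qemu 2.3 */
              QEMU_MONITOR_MIGRATION_STATUS_CANCELLED,
              /* ... */
          } qemuMonitorMigrationStatus;

          /* qemu_monitor.c: map the enum to qemu's wire string */
          VIR_ENUM_IMPL(qemuMonitorMigrationStatus,
                        QEMU_MONITOR_MIGRATION_STATUS_LAST,
                        "inactive", "active",
                        "cancelling", /* new: qemu 2.3 */
                        "cancelled" /* , ... */)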
    • util: clean up #includes of virnetdevopenvswitch.h · 451547a4
      Committed by Laine Stump
      virnetdevopenvswitch.h declares a few functions that can be called to
      add ports to and remove them from OVS bridges, and retrieve the
      migration data for a port. It does not contain any data definitions
      that are used by domain_conf.h, yet domain_conf.h was #including it
      anyway; the few files that actually need these functions should be
      directly #including it themselves. This adds a few lines to the
      project, but saves all the files that don't need it from the extra
      computing, and makes the dependencies more clear cut.
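
      Schematically (illustrative lines, not the exact diff):

          /* src/conf/domain_conf.h: the include is dropped */
          /* #include "virnetdevopenvswitch.h"   <-- removed */

          /* each .c file that really calls the OVS helpers now does: */
          #include "virnetdevopenvswitch.h"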
  20. 06 March 2015, 1 commit
    • memtune: change the way how we store unlimited value · cf521fc8
      Committed by Pavel Hrdina
      There was a mess in how we store the unlimited value for memory
      limits and how we handle the values provided by the user.  Internally
      there were two possible ways to store the unlimited value: as 0 or as
      VIR_DOMAIN_MEMORY_PARAM_UNLIMITED.  Because we chose to store memory
      limits as unsigned long long, we cannot use -1 to represent unlimited.
      It's much easier for us to say that everything greater than or equal
      to VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a
      valid value, despite the fact that it makes no sense to set a limit
      of 0.
      
      The unnecessary function virCompareLimitUlong is removed.  The test
      update prevents 0 from being misused as unlimited in the future.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
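
      A sketch of the convention the commit settles on (the helper name
      memLimitIsSet is hypothetical):

          #include <libvirt/libvirt-domain.h>  /* VIR_DOMAIN_MEMORY_PARAM_UNLIMITED */
          #include <stdbool.h>

          /* anything >= VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means
           * "unlimited"; 0 remains a valid, if useless, limit */
          static bool memLimitIsSet(unsigned long long lim)
          {
              return lim < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
          }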
  21. 20 February 2015, 1 commit
  22. 19 February 2015, 1 commit
    • qemuMigrationDriveMirror: Listen to events · 80c5f10e
      Committed by Michal Privoznik
      https://bugzilla.redhat.com/show_bug.cgi?id=1179678
      
      When migrating with storage, libvirt iterates over the domain's disks
      and instructs qemu to migrate the ones we are interested in (shared,
      read-only and source-less disks are skipped). The disks are migrated
      in series: no new disk is transferred until the previous one has been
      quiesced. This is checked on the qemu monitor via the
      'query-block-jobs' command. Once a disk has been quiesced, it has
      effectively moved from copying its contents to the mirroring state,
      where all disk writes are mirrored to the other side of the migration
      too. Having said that, there is one inherent error in the design: the
      monitor command we use reports only active jobs, so if a job fails
      for whatever reason, we will no longer see it in the command output.
      And this can happen quite easily: just try to migrate a domain with
      storage. If the storage migration fails (e.g. due to ENOSPC on the
      destination), we resume the guest on the destination and let it run
      on a partly copied disk.
      
      The proper fix is what even the comment in the code says: listen for
      qemu events instead of polling. When a storage migration changes
      state, an event is emitted and we can act accordingly: either
      consider the disk copied and continue the process, or consider the
      disk mangled and abort the migration.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
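
      The event-driven wait, sketched (mirrorState is updated by the
      BLOCK_JOB event handler; the wakeup mechanism is elided):

          while (disk->mirrorState != VIR_DOMAIN_DISK_MIRROR_STATE_READY) {
              if (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT) {
                  /* QEMU failed the job: consider the disk mangled and
                   * abort the whole migration */
                  goto error;
              }
              /* sleep until the event handler signals a state change */
          }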
  23. 11 February 2015, 1 commit
  24. 05 February 2015, 1 commit