1. 29 Sep 2015, 1 commit
  2. 28 Jul 2015, 1 commit
  3. 07 Jul 2015, 1 commit
  4. 24 Jun 2015, 1 commit
  5. 12 Jun 2015, 2 commits
  6. 09 Mar 2015, 1 commit
  7. 10 Feb 2015, 1 commit
  8. 06 Feb 2015, 1 commit
    • migration: Append JSON description of migration stream · 8118f095
      Committed by Alexander Graf
      One of the annoyances of the current migration format is the fact that
      it's not self-describing. In fact, it's not properly described at all:
      some code randomly scattered throughout QEMU roughly defines how the
      stream of bytes is read and written.
      
      We discussed an idea during KVM Forum 2013 to add a JSON description of
      the migration protocol itself to the migration stream. This patch adds a
      section after the VM_END migration end marker that describes what the
      device sections of the stream are composed of.
      
      This approach is backwards compatible with any QEMU version reading the
      stream, because QEMU just stops reading after the VM_END marker and ignores
      any data following it.
      
      With an additional external program this allows us to decipher the
      contents of any migration stream, which should hopefully make migration
      bugs easier to track down (a minimal reader sketch follows this entry).
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      8118f095
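      A minimal sketch (not taken from the patch) of why appending data after
      the end-of-stream marker stays backwards compatible: an old reader
      simply stops at the marker and never inspects the trailing JSON. The
      marker value and the length-prefixed section layout used here are
      assumptions for illustration only.

          #include <stdio.h>
          #include <stdint.h>

          enum { VM_END = 0x00 };   /* assumed end-of-stream marker value */

          /* Skip one device section, assumed here to be length-prefixed. */
          static int skip_section(FILE *f)
          {
              uint8_t len_be[4];
              if (fread(len_be, 1, 4, f) != 4) {
                  return -1;
              }
              uint32_t len = (uint32_t)len_be[0] << 24 | (uint32_t)len_be[1] << 16 |
                             (uint32_t)len_be[2] << 8  | (uint32_t)len_be[3];
              return fseek(f, len, SEEK_CUR);
          }

          int read_migration_stream(FILE *f)
          {
              int type;

              while ((type = fgetc(f)) != EOF) {
                  if (type == VM_END) {
                      /* Old QEMU stops reading here, so a JSON description
                       * appended after the marker is simply never looked at. */
                      return 0;
                  }
                  if (skip_section(f) < 0) {
                      return -1;
                  }
              }
              return -1;   /* stream ended before the end marker */
          }

      An external tool, by contrast, can keep reading past the marker and
      hand the trailing blob to any JSON parser to learn how the device
      sections are laid out.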
  9. 26 Jan 2015, 2 commits
  10. 16 Jan 2015, 1 commit
  11. 14 Oct 2014, 1 commit
  12. 27 Jun 2014, 1 commit
    • vmstate: Add preallocation for migrating arrays (VMS_ALLOC flag) · f32935ea
      Committed by Alexey Kardashevskiy
      There are already a few helpers to support array migration. However,
      they all require the destination side to preallocate the arrays before
      migration, which is not always possible because the array size may be
      unknown: it might be some sort of dynamic state. One example is the
      array of MSIX-enabled devices in the SPAPR PHB - it may vary from 0 to
      65536 entries, and its size depends on the guest's ability to enable
      MSIX or do PCI hotplug.
      
      This adds a new VMSTATE_VARRAY_STRUCT_ALLOC macro which is pretty
      similar to VMSTATE_STRUCT_VARRAY_POINTER_INT32 but can also allocate
      memory for the migrated array on the destination side.

      This defines a new VMS_ALLOC flag for a field.

      This changes vmstate_base_addr() to do the allocation when receiving a
      migration (a usage sketch follows this entry).
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      [agraf: drop g_malloc_n usage]
      Signed-off-by: Alexander Graf <agraf@suse.de>
      f32935ea
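      A hedged sketch of how a device model might describe a dynamically
      sized array with the new flag. The device struct, the field names, and
      the exact macro signature are assumptions modelled on the commit text
      and the existing VMSTATE varray macros, not copied from the patch.

          #include "migration/vmstate.h"   /* QEMU header that provides the VMSTATE macros */

          typedef struct MSIEntryState {
              uint32_t data;
          } MSIEntryState;

          typedef struct MyDeviceState {
              int32_t nentries;         /* number of valid entries, migrated first    */
              MSIEntryState *entries;   /* may be NULL on the destination before load */
          } MyDeviceState;

          static const VMStateDescription vmstate_msi_entry = {
              .name = "msi-entry",
              .version_id = 1,
              .fields = (VMStateField[]) {
                  VMSTATE_UINT32(data, MSIEntryState),
                  VMSTATE_END_OF_LIST()
              }
          };

          static const VMStateDescription vmstate_my_device = {
              .name = "my-device",
              .version_id = 1,
              .fields = (VMStateField[]) {
                  VMSTATE_INT32(nentries, MyDeviceState),
                  /* Like VMSTATE_STRUCT_VARRAY_POINTER_INT32, but VMS_ALLOC lets
                   * the destination allocate 'entries' itself from 'nentries';
                   * the argument order here is assumed. */
                  VMSTATE_VARRAY_STRUCT_ALLOC(entries, MyDeviceState, nentries, 1,
                                              vmstate_msi_entry, MSIEntryState),
                  VMSTATE_END_OF_LIST()
              }
          };

      Without VMS_ALLOC, the destination would have to preallocate a buffer
      of matching size before the migration starts, which is exactly what the
      SPAPR PHB case cannot guarantee.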
  13. 24 Jun 2014, 1 commit
  14. 19 Jun 2014, 2 commits
  15. 06 May 2014, 1 commit
  16. 05 May 2014, 2 commits
  17. 14 Mar 2014, 1 commit
  18. 08 Feb 2014, 1 commit
  19. 04 Feb 2014, 1 commit
  20. 18 Dec 2013, 1 commit
  21. 24 Sep 2013, 1 commit
  22. 28 Jun 2013, 1 commit
  23. 05 Apr 2013, 2 commits
  24. 26 Mar 2013, 5 commits
  25. 12 Mar 2013, 2 commits
  26. 11 Mar 2013, 3 commits
  27. 01 Mar 2013, 1 commit
  28. 21 Dec 2012, 1 commit
    • savevm: New save live migration method: pending · e4ed1541
      Committed by Juan Quintela
      The code currently does the following (simplified for clarity):
      
          if (qemu_savevm_state_iterate(s->file) == 1) {
              vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
              qemu_savevm_state_complete(s->file);
          }
      
      The problem here is that qemu_savevm_state_iterate() returns 1 when it
      knows that sending the remaining memory would take less than the max
      downtime.

      But this means that we could end up spending 2x max_downtime: one
      downtime in qemu_savevm_state_iterate() and the other in
      qemu_savevm_state_complete().
      
      Changed code to:
      
          pending_size = qemu_savevm_state_pending(s->file, max_size);
          DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
          if (pending_size >= max_size) {
              ret = qemu_savevm_state_iterate(s->file);
          } else {
              vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
              qemu_savevm_state_complete(s->file);
          }
      
      So what we do is: at the current network speed, we calculate the maximum
      number of bytes we can send within the max downtime: max_size.

      Then we ask every save_live section how much it has pending. If the
      total is less than max_size, we move to the completion phase; otherwise
      we do another iteration.
      
      This makes things much simpler, because now individual sections don't
      have to calculate the bandwidth themselves (it was impossible to do
      that correctly from there). A standalone sketch of this decision logic
      follows this entry.
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      e4ed1541
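      A standalone sketch (not QEMU code) of the decision logic this commit
      introduces: compute how many bytes fit into the allowed downtime at the
      measured bandwidth, then either keep iterating while the guest runs or
      stop it and complete. The function names and the simulated numbers are
      hypothetical.

          #include <stdint.h>
          #include <stdio.h>

          /* Hypothetical stand-in for the per-section pending counters. */
          static uint64_t pending = 4096ULL * 1024 * 1024;   /* 4 GiB of dirty data left */

          static uint64_t savevm_state_pending(void)
          {
              return pending;
          }

          static void savevm_state_iterate(void)
          {
              /* Pretend one iteration sends 512 MiB while the guest redirties 64 MiB. */
              uint64_t sent = 512ULL * 1024 * 1024;
              pending = (pending > sent ? pending - sent : 0) + 64ULL * 1024 * 1024;
          }

          static void stop_vm_and_complete(void)
          {
              printf("stopping VM, sending final %llu bytes\n",
                     (unsigned long long)pending);
              pending = 0;
          }

          int main(void)
          {
              double bandwidth = 1.0e9;     /* measured transfer rate, bytes/second */
              double max_downtime = 0.3;    /* allowed downtime, seconds            */

              while (pending) {
                  /* Bytes we could still send within the allowed downtime. */
                  uint64_t max_size = (uint64_t)(bandwidth * max_downtime);
                  uint64_t pending_size = savevm_state_pending();

                  if (pending_size >= max_size) {
                      savevm_state_iterate();     /* keep the guest running */
                  } else {
                      stop_vm_and_complete();     /* final stop-and-copy    */
                  }
              }
              return 0;
          }

      Each iteration sends more dirty data than the guest redirties, so the
      pending amount shrinks until it fits into one downtime window and the
      completion phase runs exactly once.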