1. 25 Jan 2017, 1 commit
  2. 10 Jan 2017, 2 commits
    • intel_iommu: allow migration · 8cdcf3c1
      Committed by Peter Xu
      The IOMMU needs to be migrated before all the PCI devices (in case there
      are devices that will request address translation), so we mark it with a
      priority higher than the default (to which PCI devices and others
      belong). The migration framework handles the rest.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
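      Below is a minimal sketch of what this looks like in the device's vmsd.
      It assumes the MIG_PRI_IOMMU enumerator introduced by the companion
      patch f37bc036 below; the field list is elided and the exact upstream
      contents may differ.

        /* Sketch (QEMU internals, simplified): an IOMMU vmsd opting in
         * to early migration via the priority field. */
        #include "migration/vmstate.h"

        static const VMStateDescription vtd_vmstate = {
            .name = "iommu-intel",
            .version_id = 1,
            .minimum_version_id = 1,
            /* Higher than MIG_PRI_DEFAULT, so this state is processed
             * before the PCI devices that depend on DMA remapping. */
            .priority = MIG_PRI_IOMMU,
            .fields = (VMStateField[]) {
                /* ... registers, context-cache state, etc. ... */
                VMSTATE_END_OF_LIST()
            },
        };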
    • migration: allow to prioritize save state entries · f37bc036
      Committed by Peter Xu
      During migration, save state entries are saved/loaded in no specific
      order - we just traverse the savevm_state.handlers list and process them
      one by one. This is not always enough.
      
      Sometimes a specific device's vmstate needs to be loaded before the
      others. For example, the VT-d IOMMU contains DMA address remapping
      information, which all the PCI devices require in order to do address
      translation. We need to make sure the IOMMU's device state is loaded
      before the rest of the PCI devices, so that DMA address translation can
      work properly.
      
      This patch adds a VMStateDescription.priority value that allows
      specifying the priority of the saved states. The loadvm operation
      handles devices with a higher vmsd priority first.
      
      Before this patch, we possibly met the ordering requirement only by
      assuming that the ordering matches the order in which objects are
      created. A better way is to mark it out explicitly in the
      VMStateDescription table, as this patch does.
      
      The current ordering logic is still naive and slow, but it is not on a
      critical path, so it is a workable solution for now.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
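      A sketch of the mechanism, assuming the shape described in the message
      above. The enum values match the companion intel_iommu patch; the
      helpers save_state_priority() and process_one() are illustrative stand-
      ins, not the upstream diff.

        /* Priority levels a vmsd can declare; higher values are
         * processed first. */
        typedef enum {
            MIG_PRI_DEFAULT = 0,
            MIG_PRI_IOMMU,      /* must be handled before PCI devices */
            MIG_PRI_MAX,
        } MigrationPriority;

        /* Naive ordering: one full pass over the handler list per
         * priority level, from highest to lowest.  O(levels * entries),
         * which is acceptable off the critical path. */
        static void process_handlers_by_priority(void)
        {
            int prio;
            SaveStateEntry *se;

            for (prio = MIG_PRI_MAX - 1; prio >= 0; prio--) {
                QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
                    if (save_state_priority(se) == prio) {
                        process_one(se); /* save or load this entry */
                    }
                }
            }
        }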
  3. 12 Jul 2016, 2 commits
  4. 29 Jun 2016, 1 commit
  5. 17 Jun 2016, 1 commit
  6. 07 Jun 2016, 1 commit
  7. 23 May 2016, 1 commit
  8. 26 Feb 2016, 1 commit
  9. 05 Feb 2016, 1 commit
  10. 16 Jan 2016, 2 commits
  11. 10 Jan 2016, 1 commit
  12. 10 Nov 2015, 3 commits
  13. 04 Nov 2015, 1 commit
  14. 29 Sep 2015, 1 commit
  15. 28 Jul 2015, 1 commit
  16. 07 Jul 2015, 1 commit
  17. 24 Jun 2015, 1 commit
  18. 12 Jun 2015, 2 commits
  19. 09 Mar 2015, 1 commit
  20. 10 Feb 2015, 1 commit
  21. 06 Feb 2015, 1 commit
    • migration: Append JSON description of migration stream · 8118f095
      Committed by Alexander Graf
      One of the annoyances of the current migration format is the fact that
      it is not self-describing. In fact, it is not properly described at
      all: code randomly scattered throughout QEMU roughly spells out how to
      read and write the stream of bytes.
      
      We discussed an idea during KVM Forum 2013 to add a JSON description of
      the migration protocol itself to the migration stream. This patch
      adds a section after the VM_END migration end marker that contains
      description data on what the device sections of the stream are composed of.
      
      This approach is backwards compatible with any QEMU version reading the
      stream, because QEMU just stops reading after the VM_END marker and ignores
      any data following it.
      
      With an additional external program this allows us to decipher the
      contents of any migration stream and hopefully make migration bugs easier
      to track down.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
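      A sketch of how such an external tool could pull the trailing JSON out
      of a saved stream. The layout assumed here - a one-byte section tag
      after the end marker, followed by a 32-bit big-endian length and the
      JSON text - follows the description above but is not verified against a
      specific QEMU version.

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdint.h>

        static uint32_t read_be32(FILE *f)
        {
            uint8_t b[4];
            if (fread(b, 1, 4, f) != 4) {
                exit(1);
            }
            return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                   ((uint32_t)b[2] << 8) | (uint32_t)b[3];
        }

        /* 'f' must already be positioned just past the end-of-VM
         * marker; scanning for the marker itself is format-specific
         * and omitted here. */
        static char *read_vm_description(FILE *f)
        {
            int tag = fgetc(f);     /* a real tool would verify this */
            if (tag == EOF) {
                return NULL;        /* old stream: nothing after marker */
            }
            uint32_t len = read_be32(f);
            char *json = malloc((size_t)len + 1);
            if (!json || fread(json, 1, len, f) != len) {
                exit(1);
            }
            json[len] = '\0';
            return json;            /* caller frees */
        }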
  22. 26 Jan 2015, 2 commits
  23. 16 Jan 2015, 1 commit
  24. 14 Oct 2014, 1 commit
  25. 27 Jun 2014, 1 commit
    • vmstate: Add preallocation for migrating arrays (VMS_ALLOC flag) · f32935ea
      Committed by Alexey Kardashevskiy
      There are already a few helpers to support array migration. However,
      they all require the destination side to preallocate arrays before
      migration, which is not always possible because the array size may be
      unknown, as it might be some sort of dynamic state. One example is the
      array of MSIX-enabled devices in the SPAPR PHB: this array may vary
      from 0 to 65536 entries, and its size depends on the guest's ability to
      enable MSIX or do PCI hotplug.
      
      This adds a new VMSTATE_VARRAY_STRUCT_ALLOC macro, which is pretty
      similar to VMSTATE_STRUCT_VARRAY_POINTER_INT32 but can allocate memory
      for the migrated array on the destination side.
      
      This defines a VMS_ALLOC flag for a field.
      
      This changes vmstate_base_addr() to do the allocation on the receiving
      side of a migration.
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      [agraf: drop g_malloc_n usage]
      Signed-off-by: Alexander Graf <agraf@suse.de>
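      A hedged usage sketch: a device with a runtime-sized array whose vmsd
      lets the destination allocate it while loading. The macro spelling
      follows this commit message; the argument order, the int32_t count
      field, and all device/field names here are assumptions for
      illustration.

        #include "migration/vmstate.h"

        /* Hypothetical per-entry state. */
        typedef struct MyMSIEntry {
            uint32_t vector;
            uint64_t addr;
        } MyMSIEntry;

        static const VMStateDescription vmstate_my_msi_entry = {
            .name = "my-msi-entry",
            .version_id = 1,
            .fields = (VMStateField[]) {
                VMSTATE_UINT32(vector, MyMSIEntry),
                VMSTATE_UINT64(addr, MyMSIEntry),
                VMSTATE_END_OF_LIST()
            },
        };

        /* Hypothetical device: msi_table is only sized at runtime. */
        typedef struct MyPHBState {
            int32_t msi_count;      /* number of valid entries */
            MyMSIEntry *msi_table;  /* allocated on demand */
        } MyPHBState;

        static const VMStateDescription vmstate_my_phb = {
            .name = "my-phb",
            .version_id = 1,
            .fields = (VMStateField[]) {
                VMSTATE_INT32(msi_count, MyPHBState),
                /* VMS_ALLOC behind the macro: the destination
                 * allocates msi_count entries during load instead of
                 * requiring a preallocated array. */
                VMSTATE_VARRAY_STRUCT_ALLOC(msi_table, MyPHBState,
                                            msi_count, 1,
                                            vmstate_my_msi_entry,
                                            MyMSIEntry),
                VMSTATE_END_OF_LIST()
            },
        };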
  26. 24 Jun 2014, 1 commit
  27. 19 Jun 2014, 2 commits
  28. 06 May 2014, 1 commit
  29. 05 May 2014, 2 commits
  30. 14 Mar 2014, 1 commit
  31. 08 Feb 2014, 1 commit