1. 04 Feb 2014 (1 commit)
  2. 13 Jan 2014 (3 commits)
  3. 20 Nov 2013 (1 commit)
  4. 29 Sep 2013 (3 commits)
  5. 24 Sep 2013 (3 commits)
  6. 03 Sep 2013 (1 commit)
  7. 23 Aug 2013 (1 commit)
  8. 21 Aug 2013 (1 commit)
      arch_init: align MR size to target page size · 0851c9f7
      Authored by Michael S. Tsirkin
      The migration code assumes that each MR size is a multiple of
      TARGET_PAGE_SIZE: it divides the MR size by TARGET_PAGE_SIZE, so if
      the size is not page-aligned, migration never completes.
      A page-aligned size isn't actually required for regions set up with
      memory_region_init_ram, since that calls qemu_ram_alloc, which rounds
      the size up using TARGET_PAGE_ALIGN.
      
      So align the MR size up to a whole number of target pages as well;
      this way migration completes even if we create a RAM MR whose size is
      not a multiple of the target page size (a sketch of the rounding
      follows below).
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Laszlo Ersek <lersek@redhat.com>
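      A minimal sketch of the round-up described in the message above,
      assuming a 4 KiB target page size (the constant value and the
      function name are illustrative, not QEMU's actual TARGET_PAGE_ALIGN
      macro):
      
        #include <assert.h>
        #include <stdint.h>
        
        #define TARGET_PAGE_SIZE 4096ULL   /* assumption: 4 KiB pages */
        
        /* Round a memory-region size up to whole target pages. */
        static uint64_t mr_size_aligned(uint64_t size)
        {
            return (size + TARGET_PAGE_SIZE - 1) & ~(TARGET_PAGE_SIZE - 1);
        }
        
        int main(void)
        {
            assert(mr_size_aligned(1)    == 4096); /* sub-page MR rounds up */
            assert(mr_size_aligned(4096) == 4096); /* already aligned */
            assert(mr_size_aligned(4097) == 8192);
            return 0;
        }
      
      With sizes rounded up like this, the page-count arithmetic in the
      migration loop comes out exact and the remaining-pages counter can
      reach zero.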
  9. 20 Aug 2013 (1 commit)
  10. 23 Jul 2013 (2 commits)
  11. 13 Jul 2013 (1 commit)
      Force auto-convergence of live migration · 7ca1dfad
      Authored by Chegu Vinod
      If a user chooses to turn on the auto-converge migration capability,
      these changes detect the lack of convergence and throttle the guest
      down, i.e. force the VCPUs out of the guest for some duration so that
      the migration thread can catch up and the migration can converge (a
      toy model of the throttle decision follows at the end of this entry).
      
      Verified the convergence using the following:
       - Java warehouse workload running on a 20-VCPU/256G guest (~80% busy)
       - OLTP-like workload running on an 80-VCPU/512G guest (~80% busy)
      
      Sample results with the Java warehouse workload (migrate speed set to
      20Gb and migrate downtime set to 4 seconds):
      
       (qemu) info migrate
       capabilities: xbzrle: off auto-converge: off  <----
       Migration status: active
       total time: 1487503 milliseconds
       expected downtime: 519 milliseconds
       transferred ram: 383749347 kbytes
       remaining ram: 2753372 kbytes
       total ram: 268444224 kbytes
       duplicate: 65461532 pages
       skipped: 64901568 pages
       normal: 95750218 pages
       normal bytes: 383000872 kbytes
       dirty pages rate: 67551 pages
      
       ---
      
       (qemu) info migrate
       capabilities: xbzrle: off auto-converge: on   <----
       Migration status: completed
       total time: 241161 milliseconds
       downtime: 6373 milliseconds
       transferred ram: 28235307 kbytes
       remaining ram: 0 kbytes
       total ram: 268444224 kbytes
       duplicate: 64946416 pages
       skipped: 64903523 pages
       normal: 7044971 pages
       normal bytes: 28179884 kbytes
      Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
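      A self-contained toy model of the throttle decision described above;
      the thresholds, the 30 ms pause, and all names are assumptions for
      illustration, not the commit's actual code:
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* Stand-in for "force the VCPUs out of the guest". */
        static void throttle_vcpus(unsigned ms)
        {
            printf("throttling VCPUs for %u ms\n", ms);
        }
        
        static unsigned dirty_rate_high_cnt;
        
        /* Called once per iteration with the bytes the guest dirtied and
         * the bytes the migration thread sent during the last period. */
        static void maybe_throttle(uint64_t dirtied, uint64_t sent)
        {
            if (dirtied > sent / 2) {               /* not converging */
                if (++dirty_rate_high_cnt >= 4) {   /* sustained, not a blip */
                    dirty_rate_high_cnt = 0;
                    throttle_vcpus(30);
                }
            } else {
                dirty_rate_high_cnt = 0;            /* converging again */
            }
        }
        
        int main(void)
        {
            for (int i = 0; i < 8; i++) {
                maybe_throttle(1000, 1500);  /* dirtying outpaces sending */
            }
            return 0;
        }
      
      The capability itself defaults to off; in the HMP monitor it is
      enabled with something like "migrate_set_capability auto-converge on"
      before starting the migration.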
  12. 01 Jul 2013 (1 commit)
  13. 29 Jun 2013 (1 commit)
  14. 27 Jun 2013 (4 commits)
  15. 14 Jun 2013 (4 commits)
  16. 25 May 2013 (1 commit)
  17. 30 Apr 2013 (3 commits)
  18. 16 Apr 2013 (1 commit)
  19. 15 Apr 2013 (1 commit)
  20. 09 Apr 2013 (1 commit)
      hw: move headers to include/ · 0d09e41a
      Authored by Paolo Bonzini
      Many of these should be cleaned up with proper qdev-/QOM-ification.
      Right now there are many catch-all headers in include/hw/ARCH depending
      on cpu.h, and this makes it necessary to compile these files per-target.
      However, fixing this does not belong in these patches.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  21. 05 Apr 2013 (3 commits)
  22. 26 Mar 2013 (2 commits)
      Use qemu_put_buffer_async for guest memory pages · 500f0061
      Authored by Orit Wasserman
      This removes an unneeded copy of guest memory pages. For the page
      header and device state we still copy the data into the static
      buffer; the alternative, allocating memory on demand, is more
      expensive (a sketch of the pointer-queueing idea follows below).
      Signed-off-by: Orit Wasserman <owasserm@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
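      A hedged sketch of the no-copy idea: queue pointers to guest pages in
      an iovec array and flush them with a single writev(), instead of
      memcpy'ing every page into a staging buffer (function names are
      illustrative, not QEMUFile's actual API):
      
        #include <stddef.h>
        #include <sys/uio.h>
        
        #define MAX_IOV 64
        
        static struct iovec iov[MAX_IOV];
        static int iov_cnt;
        
        static void buf_flush(int fd)
        {
            if (iov_cnt > 0) {
                writev(fd, iov, iov_cnt);  /* one syscall, all pages */
                iov_cnt = 0;
            }
        }
        
        /* Queue a pointer to the page; the caller must keep the page
         * stable until buf_flush() runs, which is why headers and device
         * state still go through the copying path. */
        static void put_buffer_async(int fd, const void *buf, size_t len)
        {
            if (iov_cnt == MAX_IOV) {
                buf_flush(fd);
            }
            iov[iov_cnt].iov_base = (void *)buf;
            iov[iov_cnt].iov_len = len;
            iov_cnt++;
        }
        
        int main(void)
        {
            static const char page[] = "guest page bytes";
            put_buffer_async(1, page, sizeof(page));  /* fd 1 = stdout */
            buf_flush(1);
            return 0;
        }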
      migration: use XBZRLE only after bulk stage · 5cc11c46
      Authored by Peter Lieven
      At the beginning of migration all pages are marked dirty, and in the
      first round a bulk migration of all pages is performed.
      
      Currently all of these pages are copied into the page cache
      regardless of whether they are frequently updated or not. This makes
      little sense, since most of these pages are never transferred again.
      
      This patch changes the XBZRLE transfer to be used only after the bulk
      stage has completed: a page is added to the page cache the second
      time it is transferred, so XBZRLE can benefit from the third transfer
      onwards (a toy model of this policy follows at the end of this
      entry).
      
      Since the page cache is likely smaller than the number of pages, it
      is also likely that in the second round a page will be missing from
      the cache due to collisions during the bulk phase.
      
      On the other hand, a lot of unnecessary mallocs, memdups and frees
      are saved.
      
      The following results were taken earlier while executing the test
      program from docs/xbzrle.txt, (+) with the patch and (-) without.
      (Thanks to Eric Blake for reformatting and comments.)
      
      + total time: 22185 milliseconds
      - total time: 22410 milliseconds
      
      Shaved roughly 0.2 seconds, about 1%!
      
      + downtime: 29 milliseconds
      - downtime: 21 milliseconds
      
      Not sure why downtime seemed worse, but probably not the end of the world.
      
      + transferred ram: 706034 kbytes
      - transferred ram: 721318 kbytes
      
      Fewer bytes sent - good.
      
      + remaining ram: 0 kbytes
      - remaining ram: 0 kbytes
      + total ram: 1057216 kbytes
      - total ram: 1057216 kbytes
      + duplicate: 108556 pages
      - duplicate: 105553 pages
      + normal: 175146 pages
      - normal: 179589 pages
      + normal bytes: 700584 kbytes
      - normal bytes: 718356 kbytes
      
      Fewer normal bytes...
      
      + cache size: 67108864 bytes
      - cache size: 67108864 bytes
      + xbzrle transferred: 3127 kbytes
      - xbzrle transferred: 630 kbytes
      
      ...and more compressed pages sent - good.
      
      + xbzrle pages: 117811 pages
      - xbzrle pages: 21527 pages
      + xbzrle cache miss: 18750
      - xbzrle cache miss: 179589
      
      And very good improvement on the cache miss rate.
      
      + xbzrle overflow : 0
      - xbzrle overflow : 0
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Orit Wasserman <owasserm@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
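      A self-contained toy model of the policy above: skip the cache
      entirely during the bulk stage, seed it on a page's second transfer,
      and delta-encode only from the third transfer on. The tiny
      direct-mapped cache and all names are illustrative assumptions, not
      QEMU's real page cache:
      
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        
        #define PAGE_SIZE   4096
        #define CACHE_SLOTS 4        /* assumption: tiny cache for demo */
        
        static uint8_t  cache[CACHE_SLOTS][PAGE_SIZE];
        static uint64_t cache_tag[CACHE_SLOTS];
        static bool     cache_valid[CACHE_SLOTS];
        static bool     ram_bulk_stage = true;  /* cleared after 1st pass */
        
        /* Returns true if the page may be sent as an XBZRLE delta. */
        static bool save_page(uint64_t addr, const uint8_t *page)
        {
            if (ram_bulk_stage) {
                return false;        /* bulk stage: never touch the cache */
            }
            unsigned slot = addr % CACHE_SLOTS;
            if (cache_valid[slot] && cache_tag[slot] == addr) {
                memcpy(cache[slot], page, PAGE_SIZE);  /* 3rd+: delta */
                return true;
            }
            memcpy(cache[slot], page, PAGE_SIZE);  /* 2nd: seed cache */
            cache_tag[slot] = addr;
            cache_valid[slot] = true;
            return false;            /* full page goes out */
        }
        
        int main(void)
        {
            static uint8_t page[PAGE_SIZE];
            printf("bulk: %d\n", save_page(7, page));  /* 0: full page */
            ram_bulk_stage = false;
            printf("2nd:  %d\n", save_page(7, page));  /* 0: seeds cache */
            printf("3rd:  %d\n", save_page(7, page));  /* 1: delta */
            return 0;
        }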