1. 11 Oct 2017, 1 commit
  2. 22 Sep 2017, 8 commits
  3. 20 Sep 2017, 1 commit
  4. 01 Aug 2017, 1 commit
    • exec: Add lock parameter to qemu_ram_ptr_length · f5aa69bd
      Authored by Anthony PERARD
      Commit 04bf2526 ("exec: use
      qemu_ram_ptr_length to access guest ram") started using
      qemu_ram_ptr_length instead of qemu_map_ram_ptr, but when used with Xen
      the behavior of the two functions differs.  They both call
      xen_map_cache, but one passes "lock", meaning the mapping of guest
      memory is never released implicitly, and the second one does not, which
      means the mapping can be released later, when needed.
      
      In the context of address_space_{read,write}_continue, the pointer to
      those mappings should not be locked, because it is used immediately and
      never used again.
      
      The lock parameter makes it explicit in which context
      qemu_ram_ptr_length is called.
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Message-Id: <20170726165326.10327-1-anthony.perard@citrix.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
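      A minimal sketch of the idea, not the actual QEMU code (the real
      qemu_ram_ptr_length also takes a RAMBlock argument and only goes
      through the mapcache when running on Xen; xen_map_cache below is a
      stub for the function in hw/i386/xen/xen-mapcache.c):

        #include <stdbool.h>
        #include <stdint.h>

        typedef uint64_t ram_addr_t;

        /* Stub standing in for the real xen_map_cache(). */
        static void *xen_map_cache(ram_addr_t addr, uint64_t size, bool lock)
        {
            (void)addr; (void)size; (void)lock;
            return NULL;
        }

        /* 'lock' makes the caller's intent explicit: a locked mapping is
         * never released implicitly, an unlocked one may be dropped later,
         * when the mapcache needs the slot. */
        static void *qemu_ram_ptr_length(ram_addr_t addr, uint64_t *size,
                                         bool lock)
        {
            return xen_map_cache(addr, *size, lock);
        }

      address_space_read_continue and address_space_write_continue would
      then pass lock = false, since the pointer they obtain is used once and
      discarded.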
  5. 14 Jul 2017, 5 commits
  6. 04 Jul 2017, 2 commits
  7. 15 Jun 2017, 3 commits
  8. 07 Jun 2017, 1 commit
  9. 26 May 2017, 1 commit
  10. 19 May 2017, 2 commits
  11. 18 May 2017, 3 commits
  12. 17 May 2017, 1 commit
    • xen/mapcache: store dma information in revmapcache entries for debugging · 1ff7c598
      Authored by Stefano Stabellini
      The Xen mapcache is able to create long-term mappings, called "locked"
      mappings.  The third parameter of the xen_map_cache call specifies
      whether a mapping is a "locked" mapping.
      
      From the QEMU point of view there are two kinds of long-term mappings:
      
      [a] device memory mappings, such as option roms and video memory
      [b] dma mappings, created by dma_memory_map & friends
      
      After certain operations, ballooning a VM in particular, Xen kindly
      asks QEMU to destroy all mappings.  However, [a] mappings are certainly
      present and cannot be removed.  That's not a problem, as they are not
      affected by ballooning.  The *real* problem is that if there are any
      mappings of type [b], any outstanding dma operations could fail.  This
      is a known shortcoming.  In other words, when Xen asks QEMU to destroy
      all mappings, it is an error if any [b] mappings exist.
      
      However today we have no way of distinguishing [a] from [b]. Because of
      that, we cannot even print a decent warning.
      
      This patch introduces a new "dma" bool field in MapCacheRev entries, to
      remember whether a given mapping is for dma or is a long-term device
      memory mapping.  When xen_invalidate_map_cache is called, we print a
      warning if any [b] mappings exist.  We ignore [a] mappings.
      
      Mappings created by qemu_map_ram_ptr are assumed to be [a], while
      mappings created by address_space_map->qemu_ram_ptr_length are assumed
      to be [b].
      
      The goal of the patch is to make debugging and system understanding
      easier.
      Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Anthony PERARD <anthony.perard@citrix.com>
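      A simplified sketch of the shape of the change (MapCacheRev and the
      "dma" field follow the commit message; the surrounding code is invented
      for illustration and is not QEMU's actual xen-mapcache.c):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct MapCacheRev {
            uint64_t paddr_index;
            void *vaddr_req;
            bool dma;               /* new: set for dma_memory_map mappings */
            struct MapCacheRev *next;
        } MapCacheRev;

        /* On xen_invalidate_map_cache, warn about [b] (dma) mappings that
         * are still locked; [a] device-memory mappings are ignored. */
        static void warn_outstanding_dma(const MapCacheRev *locked)
        {
            for (const MapCacheRev *r = locked; r != NULL; r = r->next) {
                if (r->dma) {
                    fprintf(stderr, "Locked DMA mapping while invalidating "
                            "mapcache: %p\n", r->vaddr_req);
                }
            }
        }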
  13. 24 Apr 2017, 1 commit
    • memory: add support for getting and using a dirty bitmap copy. · 8deaf12c
      Authored by Gerd Hoffmann
      This patch adds support for getting and using a local copy of the dirty
      bitmap.
      
      memory_region_snapshot_and_clear_dirty() will create a snapshot of the
      dirty bitmap for the specified range, clear the dirty bitmap, and
      return the copy.  The returned bitmap can be a bit larger than
      requested: the range is expanded so the code can copy unsigned longs
      from the bitmap and avoid atomic bit-update operations.
      
      memory_region_snapshot_get_dirty() will return the dirty status of
      pages, pretty much like memory_region_get_dirty(), but using the copy
      returned by memory_region_snapshot_and_clear_dirty().
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Message-id: 20170421091632.30900-3-kraxel@redhat.com
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
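      A hypothetical usage fragment, assuming the API as described above
      (mr, addr, size and the DIRTY_MEMORY_VGA client are stand-ins for what
      a display device might pass):

        DirtyBitmapSnapshot *snap =
            memory_region_snapshot_and_clear_dirty(mr, addr, size,
                                                   DIRTY_MEMORY_VGA);
        for (hwaddr off = 0; off < size; off += TARGET_PAGE_SIZE) {
            if (memory_region_snapshot_get_dirty(mr, snap, addr + off,
                                                 TARGET_PAGE_SIZE)) {
                /* redraw only pages dirty at the time of the snapshot */
            }
        }
        g_free(snap);

      Because dirty status is read from the local copy, pages dirtied after
      the snapshot was taken show up in the next snapshot instead of being
      lost to a check-then-clear race.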
  14. 21 Apr 2017, 3 commits
  15. 03 Apr 2017, 1 commit
    • exec: revert MemoryRegionCache · 90c4fe5f
      Authored by Paolo Bonzini
      MemoryRegionCache did not know about virtio support for IOMMUs (because the
      two features were developed at the same time).  Revert MemoryRegionCache
      to "normal" address_space_* operations for 2.9, as it is simpler than
      undoing the virtio patches.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  16. 16 Mar 2017, 1 commit
  17. 14 Mar 2017, 2 commits
    • exec: add cpu_synchronize_state to cpu_memory_rw_debug · 79ca7a1b
      Authored by Christian Borntraeger
      I sometimes got "Cannot access memory" when using the x command
      on the monitor.  It turns out that the cpu env contained stale data
      (e.g. wrong control register content for the page table origin).
      We must synchronize the state of the CPU before walking the page
      tables.  A similar issue happens for a remote gdb, so let's
      do the cpu_synchronize_state in cpu_memory_rw_debug.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Message-Id: <1488896348-13560-1-git-send-email-borntraeger@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
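      The shape of the fix, as an abridged sketch of the exec.c function
      (the translate-and-copy loop is elided):

        int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
                                uint8_t *buf, int len, int is_write)
        {
            /* Pull current register state (e.g. control registers holding
             * the page-table origin) from the accelerator before walking
             * the guest page tables below. */
            cpu_synchronize_state(cpu);

            /* ... existing loop: translate addr page by page, copy bytes ... */
            return 0;
        }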
    • mem-prealloc: reduce large guest start-up and migration time. · 1e356fc1
      Authored by Jitendra Kolhe
      Using "-mem-prealloc" option for a large guest leads to higher guest
      start-up and migration time. This is because with "-mem-prealloc" option
      qemu tries to map every guest page (create address translations), and
      make sure the pages are available during runtime. virsh/libvirt by
      default, seems to use "-mem-prealloc" option in case the guest is
      configured to use huge pages. The patch tries to map all guest pages
      simultaneously by spawning multiple threads. Currently limiting the
      change to QEMU library functions on POSIX compliant host only, as we are
      not sure if the problem exists on win32. Below are some stats with
      "-mem-prealloc" option for guest configured to use huge pages.
      
      ------------------------------------------------------------------------
      Idle Guest      | Start-up time | Migration time
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - single threaded (existing code)
      ------------------------------------------------------------------------
      64 Core - 4TB   | 54m11.796s    | 75m43.843s
      64 Core - 1TB   | 8m56.576s     | 14m29.049s
      64 Core - 256GB | 2m11.245s     | 3m26.598s
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 8 threads
      ------------------------------------------------------------------------
      64 Core - 4TB   | 5m1.027s      | 34m10.565s
      64 Core - 1TB   | 1m10.366s     | 8m28.188s
      64 Core - 256GB | 0m19.040s     | 2m10.148s
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 16 threads
      ------------------------------------------------------------------------
      64 Core - 4TB   | 1m58.970s     | 31m43.400s
      64 Core - 1TB   | 0m39.885s     | 7m55.289s
      64 Core - 256GB | 0m11.960s     | 2m0.135s
      ------------------------------------------------------------------------
      
      Changed in v2:
       - modify number of memset threads spawned to min(smp_cpus, 16).
       - removed 64GB memory restriction for spawning memset threads.
      
      Changed in v3:
       - limit number of threads spawned based on
         min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus)
       - implement memset thread specific siglongjmp in SIGBUS signal_handler.
      
       Changed in v4:
        - remove sigsetjmp/siglongjmp and SIGBUS unblock/block for the main
          thread, as the main thread no longer touches any pages.
        - simplify code by returning memset_thread_failed status from
          touch_all_pages.
      Signed-off-by: Jitendra Kolhe <jitendra.kolhe@hpe.com>
      Message-Id: <1487907103-32350-1-git-send-email-jitendra.kolhe@hpe.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
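      A self-contained sketch of the approach using plain pthreads (QEMU's
      real implementation lives in util/oslib-posix.c and additionally
      handles SIGBUS and failure reporting; the names below are
      illustrative):

        #include <pthread.h>
        #include <stddef.h>
        #include <unistd.h>

        #define MAX_MEM_PREALLOC_THREADS 16  /* cap from the commit message */

        typedef struct {
            char *addr;        /* first page this thread touches */
            size_t numpages;
            size_t hpagesize;
        } TouchArgs;

        static void *do_touch_pages(void *opaque)
        {
            TouchArgs *a = opaque;
            for (size_t i = 0; i < a->numpages; i++) {
                /* Touching one byte per page forces the kernel to fault
                 * the page in without clobbering its contents. */
                volatile char *p = a->addr + i * a->hpagesize;
                *p = *p;
            }
            return NULL;
        }

        /* Fault in 'numpages' pages of 'area' using
         * min(online cpus, smp_cpus, 16) threads, per the v3 changelog. */
        static void touch_all_pages(char *area, size_t hpagesize,
                                    size_t numpages, int smp_cpus)
        {
            long online = sysconf(_SC_NPROCESSORS_ONLN);
            int threads = online < smp_cpus ? (int)online : smp_cpus;
            if (threads > MAX_MEM_PREALLOC_THREADS) {
                threads = MAX_MEM_PREALLOC_THREADS;
            }
            if (threads < 1) {
                threads = 1;
            }

            pthread_t tid[MAX_MEM_PREALLOC_THREADS];
            TouchArgs args[MAX_MEM_PREALLOC_THREADS];
            size_t per = numpages / threads;

            for (int i = 0; i < threads; i++) {
                args[i] = (TouchArgs){
                    .addr = area + (size_t)i * per * hpagesize,
                    .numpages = (i == threads - 1)
                                ? numpages - (size_t)i * per : per,
                    .hpagesize = hpagesize,
                };
                pthread_create(&tid[i], NULL, do_touch_pages, &args[i]);
            }
            for (int i = 0; i < threads; i++) {
                pthread_join(tid[i], NULL);
            }
        }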
  18. 03 Mar 2017, 1 commit
  19. 28 Feb 2017, 2 commits