1. 29 May 2016 (1 commit)
  2. 23 May 2016 (1 commit)
  3. 07 Mar 2016 (2 commits)
  4. 11 Feb 2016 (1 commit)
  5. 09 Feb 2016 (2 commits)
  6. 05 Feb 2016 (1 commit)
  7. 10 Jan 2016 (1 commit)
  8. 18 Dec 2015 (1 commit)
    • memory: try to inline constant-length reads · 3cc8f884
      Paolo Bonzini authored
      memcpy can take a large amount of time for small reads and writes.
      Handle the common case of reading s/g descriptors from memory (there
      is no corresponding "write" case that is as common, because writes
      often use address_space_st* functions) by inlining the relevant
      parts of address_space_read into the caller.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
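      The fast path this commit describes can be sketched in plain C (illustrative code, not QEMU's actual implementation; `read_fixed_len` is an invented name). Dispatching on a compile-time-constant length lets the compiler turn each `memcpy` into a single load instead of a library call:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <string.h>

      /* Sketch of the constant-length fast path: each memcpy here has a
       * literal length, so the compiler inlines it as one load rather
       * than calling into libc. */
      static inline uint64_t read_fixed_len(const uint8_t *buf, unsigned len)
      {
          uint64_t val = 0;
          switch (len) {
          case 1: memcpy(&val, buf, 1); break;
          case 2: memcpy(&val, buf, 2); break;
          case 4: memcpy(&val, buf, 4); break;
          case 8: memcpy(&val, buf, 8); break;
          default: assert(0);               /* slow path in real code */
          }
          return val;
      }

      int main(void)
      {
          uint8_t buf[8] = {0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0};
          uint64_t v = read_fixed_len(buf, 4);
          /* compare bytes, so the check is endianness-independent */
          assert(memcmp(&v, buf, 4) == 0);
          return 0;
      }
      ```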
  9. 17 Dec 2015 (1 commit)
    • exec: Eliminate qemu_ram_free_from_ptr() · a29ac166
      Eduardo Habkost authored
      Replace qemu_ram_free_from_ptr() with qemu_ram_free().
      
      The only difference between qemu_ram_free_from_ptr() and
      qemu_ram_free() is that g_free_rcu() is used instead of
      call_rcu(reclaim_ramblock). We can safely replace it because:
      
      * RAM blocks allocated by qemu_ram_alloc_from_ptr() always have
        RAM_PREALLOC set;
      * reclaim_ramblock(block) will do nothing except g_free(block)
        if RAM_PREALLOC is set in block->flags.
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      Message-Id: <1446844805-14492-2-git-send-email-ehabkost@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
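      The argument above can be sketched in a few lines of plain C (simplified, non-QEMU code): if RAM_PREALLOC is set, reclaiming a block only frees the block struct itself, which is exactly what g_free_rcu() did, so the two free paths are interchangeable for such blocks.

      ```c
      #include <assert.h>
      #include <stdlib.h>

      #define RAM_PREALLOC (1 << 0)

      typedef struct RAMBlock {
          unsigned flags;
          void *host;          /* backing memory; not owned if preallocated */
      } RAMBlock;

      static int unmapped;     /* counts backing-memory releases, for the demo */

      static void reclaim_ramblock(RAMBlock *block)
      {
          if (!(block->flags & RAM_PREALLOC)) {
              free(block->host);   /* stands in for munmap/anon-RAM release */
              unmapped++;
          }
          free(block);             /* with RAM_PREALLOC this is all that runs */
      }

      int main(void)
      {
          RAMBlock *b = malloc(sizeof *b);
          b->flags = RAM_PREALLOC;
          b->host = NULL;          /* caller-owned memory, not ours to free */
          reclaim_ramblock(b);
          assert(unmapped == 0);   /* struct freed, host memory untouched */
          return 0;
      }
      ```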
  10. 10 Nov 2015 (1 commit)
  11. 13 Oct 2015 (1 commit)
  12. 09 Sep 2015 (1 commit)
  13. 05 Jun 2015 (9 commits)
  14. 08 Jan 2015 (2 commits)
    • exec: qemu_ram_alloc_resizeable, qemu_ram_resize · 62be4e3a
      Michael S. Tsirkin authored
      Add an API to allocate "resizeable" RAM.
      This looks just like regular RAM, but has the special
      property that only a portion of it (used_length) is
      actually used, and migrated.
      
      This used_length size can change across reboots.
      
      Follow-up patches will change used_length for such blocks at migration,
      making it easier to extend devices using such RAM (notably ACPI,
      but conceivably other ROMs in the future) without breaking migration
      compatibility or wasting ROM (guest) memory.
      
      Device is notified on resize, so it can adjust if necessary.
      
      qemu_ram_alloc_resizeable allocates this memory, qemu_ram_resize resizes
      it.
      
      Note: nothing prevents making all RAM resizeable in this way.
      However, reviewers felt that only enabling this selectively will
      make some class of errors easier to detect.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
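      The used_length idea can be sketched as follows (illustrative struct and names, not QEMU's implementation): the block reserves max_length up front, only used_length is considered live and migrated, and a resize simply moves used_length within the fixed reservation.

      ```c
      #include <assert.h>
      #include <stdint.h>

      typedef struct {
          uint64_t used_length;   /* portion in use; may change across reboots */
          uint64_t max_length;    /* fixed reservation; never changes */
      } ResizeableRAM;

      /* Sketch of qemu_ram_resize: growing never moves memory, it only
       * extends the live portion inside the existing reservation. */
      static int ram_resize(ResizeableRAM *r, uint64_t newsize)
      {
          if (newsize > r->max_length) {
              return -1;          /* cannot grow past the reservation */
          }
          r->used_length = newsize;
          /* real code would notify the owning device here */
          return 0;
      }

      int main(void)
      {
          ResizeableRAM r = { .used_length = 4096, .max_length = 65536 };
          assert(ram_resize(&r, 8192) == 0 && r.used_length == 8192);
          assert(ram_resize(&r, 1 << 20) == -1);  /* beyond max_length */
          return 0;
      }
      ```

      Keeping max_length fixed is what preserves migration compatibility: only the used_length portion travels, and the destination can always accommodate it.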
    • exec: cpu_physical_memory_set/clear_dirty_range · c8d6f66a
      Michael S. Tsirkin authored
      Make cpu_physical_memory_set/clear_dirty_range
      behave symmetrically.
      
      To clear range for a given client type only, add
      cpu_physical_memory_clear_dirty_range_type.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  15. 18 Nov 2014 (1 commit)
    • exec: Handle multipage ranges in invalidate_and_set_dirty() · f874bf90
      Peter Maydell authored
      The code in invalidate_and_set_dirty() needs to handle addr/length
      combinations which cross guest physical page boundaries. This can happen,
      for example, when disk I/O reads large blocks into guest RAM which previously
      held code that we have cached translations for. Unfortunately we were only
      checking the clean/dirty status of the first page in the range, and then
      were calling a tb_invalidate function which only handles ranges that don't
      cross page boundaries. Fix the function to deal with multipage ranges.
      
      The symptoms of this bug were that guest code would misbehave (eg segfault),
      in particular after a guest reboot but potentially any time the guest
      reused a page of its physical RAM for new code.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-id: 1416167061-13203-1-git-send-email-peter.maydell@linaro.org
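      The shape of the fix can be sketched like this (illustrative code; PAGE_SIZE stands in for QEMU's TARGET_PAGE_SIZE and the callback stands in for the tb_invalidate and dirty-bit work): walk the addr/length range page by page instead of assuming it fits in one page.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define PAGE_SIZE 4096u
      #define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

      static unsigned pages_touched;

      static void invalidate_one_page(uint64_t page_addr)
      {
          (void)page_addr;
          pages_touched++;   /* stands in for per-page invalidate + dirty-set */
      }

      /* Sketch of the multipage-aware version: align down to the first
       * page boundary, then step one page at a time until past the end. */
      static void invalidate_range(uint64_t addr, uint64_t length)
      {
          uint64_t end = addr + length;
          for (uint64_t p = addr & PAGE_MASK; p < end; p += PAGE_SIZE) {
              invalidate_one_page(p);
          }
      }

      int main(void)
      {
          /* 6000 bytes starting 100 bytes into a page spans two pages */
          invalidate_range(100, 6000);
          assert(pages_touched == 2);
          return 0;
      }
      ```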
  16. 09 Sep 2014 (1 commit)
  17. 22 Jul 2014 (1 commit)
    • exec: fix migration with devices that use address_space_rw · 6886867e
      Paolo Bonzini authored
      Devices that use address_space_rw to write large areas to memory
      (as opposed to address_space_map/unmap) were broken with respect
      to migration since fe680d0d (exec: Limit translation limiting in
      address_space_translate to xen, 2014-05-07).  Such devices include
      IDE CD-ROMs.
      
      The reason is that invalidate_and_set_dirty (called by address_space_rw
      but not address_space_map/unmap) was only setting the dirty bit for
      the first page in the translation.
      
      To fix this, introduce cpu_physical_memory_set_dirty_range_nocode that
      is the same as cpu_physical_memory_set_dirty_range except it does not
      muck with the DIRTY_MEMORY_CODE bitmap.  This function can be used if
      the caller invalidates translations with tb_invalidate_phys_page_range.
      
      There is another difference between cpu_physical_memory_set_dirty_range
      and cpu_physical_memory_set_dirty_flag; the former includes a call
      to xen_modified_memory.  This is handled separately in
      invalidate_and_set_dirty, and is not needed in other callers of
      cpu_physical_memory_set_dirty_range_nocode, so leave it alone.
      
      Just one nit: now that invalidate_and_set_dirty takes care of handling
      multiple pages, there is no need for address_space_unmap to wrap it
      in a loop.  In fact that loop would now be O(n^2).
      Reported-by: Dave Gilbert <dgilbert@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Tested-by: Gerd Hoffmann <kraxel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
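      The _nocode split described above can be sketched as follows (simplified byte-per-page bitmaps, not QEMU's actual bitmap code): the nocode variant marks only the VGA and MIGRATION client bitmaps, leaving DIRTY_MEMORY_CODE to the caller's tb_invalidate_phys_page_range path.

      ```c
      #include <assert.h>

      enum {
          DIRTY_MEMORY_VGA,
          DIRTY_MEMORY_CODE,
          DIRTY_MEMORY_MIGRATION,
          DIRTY_MEMORY_NUM
      };

      /* one byte per page, per client; real code packs this into bitmaps */
      static unsigned char dirty[DIRTY_MEMORY_NUM][16];

      /* Sketch of the _nocode variant: identical to the full version
       * except it never touches the CODE bitmap. */
      static void set_dirty_range_nocode(unsigned start, unsigned npages)
      {
          for (unsigned p = start; p < start + npages; p++) {
              dirty[DIRTY_MEMORY_VGA][p] = 1;
              dirty[DIRTY_MEMORY_MIGRATION][p] = 1;
              /* DIRTY_MEMORY_CODE deliberately left alone */
          }
      }

      int main(void)
      {
          set_dirty_range_nocode(2, 3);
          assert(dirty[DIRTY_MEMORY_MIGRATION][3] == 1);
          assert(dirty[DIRTY_MEMORY_CODE][3] == 0);  /* left to tb_invalidate */
          return 0;
      }
      ```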
  18. 30 Jun 2014 (1 commit)
    • vhost-user: fix regions provided with VHOST_USER_SET_MEM_TABLE message · 3fd74b84
      Damjan Marion authored
      The old code was affected by memory gaps, which resulted in buffer
      pointers pointing to addresses outside of the mapped regions.
      
      Here we are introducing the following changes:
       - new function qemu_get_ram_block_host_ptr() returns the host pointer
         to the RAM block; it is needed to calculate the offset of a specific
         region in host memory
       - new field mmap_offset is added to VhostUserMemoryRegion. It
         contains the offset where the specific region starts in the mapped
         memory. As there is still no wider adoption of vhost-user, agreement
         was made that we will not bump the version number due to this change
       - other fields in the VhostUserMemoryRegion struct are not changed, as
         they are all needed for the usermode app implementation
       - region data is not taken from ram_list.blocks anymore; instead we
         use region data which is already calculated for use in vhost-net
       - now multiple regions can have the same FD, and the user application
         can call mmap() multiple times with the same FD but with different
         offsets (the user needs to take care of offset page alignment)
      Signed-off-by: Damjan Marion <damarion@cisco.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Damjan Marion <damarion@cisco.com>
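      The offset calculation the new field carries can be sketched with plain arithmetic (illustrative values and a simplified struct layout; field names match the commit's description but this is not the actual protocol code): the region's mmap_offset is its start relative to the host pointer of its RAM block, so the backend can mmap() the same fd at the right place.

      ```c
      #include <assert.h>
      #include <stdint.h>

      typedef struct {
          uint64_t guest_phys_addr;
          uint64_t memory_size;
          uint64_t userspace_addr;
          uint64_t mmap_offset;     /* new field: region start within the mapping */
      } VhostUserMemoryRegion;

      /* The offset is simply the distance from the block's host pointer
       * (what qemu_get_ram_block_host_ptr() returns in the commit) to the
       * region's own host address. */
      static uint64_t region_mmap_offset(uint64_t region_host_addr,
                                         uint64_t block_host_ptr)
      {
          return region_host_addr - block_host_ptr;
      }

      int main(void)
      {
          uint64_t block_host_ptr   = 0x7f0000000000ULL; /* block base, hypothetical */
          uint64_t region_host_addr = 0x7f0000200000ULL; /* region within the block */

          VhostUserMemoryRegion r = {
              .guest_phys_addr = 0x100000,
              .memory_size     = 0x200000,
              .userspace_addr  = region_host_addr,
              .mmap_offset     = region_mmap_offset(region_host_addr,
                                                    block_host_ptr),
          };
          assert(r.mmap_offset == 0x200000);  /* backend mmaps fd at this offset */
          return 0;
      }
      ```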
  19. 19 Jun 2014 (4 commits)
  20. 11 Jun 2014 (1 commit)
  21. 04 Feb 2014 (1 commit)
  22. 16 Jan 2014 (1 commit)
  23. 13 Jan 2014 (4 commits)