1. 25 Oct 2017, 1 commit
  2. 18 Oct 2017, 2 commits
  3. 12 Oct 2017, 2 commits
    • exec: simplify address_space_get_iotlb_entry · 076a93d7
      Authored by Peter Xu
      This patch lets address_space_get_iotlb_entry() use the newly
      introduced page_mask parameter of flatview_do_translate(). This
      guarantees the IOTLB entry is aligned to the page mask, and restores
      proper huge page support, which regressed when a764040c was introduced.
      
      Fixes: a764040c ("exec: abstract address_space_do_translate()")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Message-Id: <20171010094247.10173-3-maxime.coquelin@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • exec: add page_mask for flatview_do_translate · d5e5fafd
      Authored by Peter Xu
      The function was originally written for flatview_space_translate(),
      where the (xlat, plen) range is what we care about most. For IOTLB
      requests, however, we don't really care about "plen" but about the
      size of the page that "xlat" is located on, and plen cannot convey
      that information.
      
      A simple example to show why "plen" is not good for IOTLB translations:
      
      E.g., for huge pages, it is possible that guest mapped 1G huge page on
      device side that used this GPA range:
      
        0x100000000 - 0x13fffffff
      
      Then let's say we want to translate one IOVA that is finally mapped
      to GPA 0x13ffffe00 (which is located on this 1G huge page). Here
      we'll get:
      
        (xlat, plen) = (0x13ffffe00, 0x200)
      
      So the IOTLB entry would cover only a very small range: from "plen"
      (which is 0x200 bytes) we cannot tell the size of the page.
      
      We actually do know that this is a huge page; the information is
      simply thrown away in flatview_do_translate().
      
      This patch introduces an optional "page_mask" parameter to capture
      that page mask, makes "plen" optional as well, and adds some
      comments for the whole function.
      
      No functional change yet.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
      Message-Id: <20171010094247.10173-2-maxime.coquelin@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 11 Oct 2017, 1 commit
  5. 22 Sep 2017, 8 commits
  6. 20 Sep 2017, 1 commit
  7. 01 Aug 2017, 1 commit
    • exec: Add lock parameter to qemu_ram_ptr_length · f5aa69bd
      Authored by Anthony PERARD
      Commit 04bf2526 (exec: use
      qemu_ram_ptr_length to access guest ram) started using
      qemu_ram_ptr_length instead of qemu_map_ram_ptr, but when used with
      Xen the two functions behave differently. Both call xen_map_cache,
      but one passes "lock", meaning the mapping of guest memory is never
      released implicitly, while the other does not, so the mapping can
      be released later, when needed.
      
      In the context of address_space_{read,write}_continue, the pointer
      to those mappings should not be locked, because it is used
      immediately and never used again.
      
      The lock parameter makes it explicit in which context
      qemu_ram_ptr_length is called.
      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
      Message-Id: <20170726165326.10327-1-anthony.perard@citrix.com>
      Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 14 Jul 2017, 5 commits
  9. 04 Jul 2017, 2 commits
  10. 15 Jun 2017, 3 commits
  11. 07 Jun 2017, 1 commit
  12. 26 May 2017, 1 commit
  13. 19 May 2017, 2 commits
  14. 18 May 2017, 3 commits
  15. 17 May 2017, 1 commit
    • xen/mapcache: store dma information in revmapcache entries for debugging · 1ff7c598
      Authored by Stefano Stabellini
      The Xen mapcache is able to create long term mappings, they are called
      "locked" mappings. The third parameter of the xen_map_cache call
      specifies if a mapping is a "locked" mapping.
      
      From the QEMU point of view there are two kinds of long term mappings:
      
      [a] device memory mappings, such as option roms and video memory
      [b] dma mappings, created by dma_memory_map & friends
      
      After certain operations, in particular ballooning a VM, Xen kindly
      asks QEMU to destroy all mappings. However, [a] mappings are
      certainly present and cannot be removed. That's not a problem, as
      they are not affected by ballooning. The *real* problem is that if
      there are any mappings of type [b], any outstanding dma operations
      could fail. This is a known shortcoming. In other words, when Xen
      asks QEMU to destroy all mappings, it is an error if any [b]
      mappings exist.
      
      However today we have no way of distinguishing [a] from [b]. Because of
      that, we cannot even print a decent warning.
      
      This patch introduces a new "dma" bool field to MapCacheRev entries, to
      remember if a given mapping is for dma or is a long term device memory
      mapping. When xen_invalidate_map_cache is called, we print a warning if
      any [b] mappings exist. We ignore [a] mappings.
      
      Mappings created by qemu_map_ram_ptr are assumed to be [a], while
      mappings created by address_space_map->qemu_ram_ptr_length are assumed
      to be [b].
      
      The goal of the patch is to make debugging and system understanding
      easier.
      Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Anthony PERARD <anthony.perard@citrix.com>
  16. 24 Apr 2017, 1 commit
    • memory: add support getting and using a dirty bitmap copy. · 8deaf12c
      Authored by Gerd Hoffmann
      This patch adds support for getting and using a local copy of the dirty
      bitmap.
      
      memory_region_snapshot_and_clear_dirty() will create a snapshot of the
      dirty bitmap for the specified range, clear the dirty bitmap and return
      the copy.  The returned bitmap can be a bit larger than requested, the
      range is expanded so the code can copy unsigned longs from the bitmap
      and avoid atomic bit update operations.
      
      memory_region_snapshot_get_dirty() will return the dirty status of
      pages, pretty much like memory_region_get_dirty(), but using the copy
      returned by memory_region_snapshot_and_clear_dirty().
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Message-id: 20170421091632.30900-3-kraxel@redhat.com
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
  17. 21 Apr 2017, 3 commits
  18. 03 Apr 2017, 1 commit
    • exec: revert MemoryRegionCache · 90c4fe5f
      Authored by Paolo Bonzini
      MemoryRegionCache did not know about virtio support for IOMMUs (because the
      two features were developed at the same time).  Revert MemoryRegionCache
      to "normal" address_space_* operations for 2.9, as it is simpler than
      undoing the virtio patches.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  19. 16 Mar 2017, 1 commit