- 19 Jan 2018 (1 commit)

By Haozhong Zhang
When mmap(2)-ing the backend files, QEMU uses the host page size (getpagesize(2)) by default as the alignment of the mapping address. However, some backends may require alignments different from the page size. For example, mmap-ing a device DAX (e.g., /dev/dax0.0) on Linux kernel 4.13 to an address which is 4K-aligned but not 2M-aligned fails with a kernel message like:

    [617494.969768] dax dax0.0: qemu-system-x86: dax_mmap: fail, unaligned vma (0x7fa37c579000 - 0x7fa43c579000, 0x1fffff)

Because there is no common approach to get such an alignment requirement, we add the 'align' option to 'memory-backend-file', so that users or management utilities, which have enough knowledge about the backend, can specify a proper alignment via this option.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Message-Id: <20171211072806.2812-2-haozhong.zhang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[ehabkost: fixed typo, fixed error_setg() format string]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
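For illustration, such a backend could be created with a command line along the lines of `-object memory-backend-file,id=mem1,share=on,mem-path=/dev/dax0.0,size=4G,align=2M`; the id, path and sizes here are placeholders rather than values taken from the patch.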

- 18 Dec 2017 (1 commit)

By Marc-André Lureau
This was never used since its introduction in commit 196ea131 ("memory: Add global-locking property to memory regions").

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

- 22 Sep 2017 (6 commits)

By Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
Since FlatViews are now shared and ASes are not, this gets rid of address_space_init_shareable(). This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-17-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
This adds a new "-d" switch to "info mtree" to print dispatch tree internals. This changes the way "-f" is handled - it prints now flat views and associated address spaces. Signed-off-by: NAlexey Kardashevskiy <aik@ozlabs.ru> Message-Id: <20170921085110.25598-15-aik@ozlabs.ru> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
FlatViews will be shared between AddressSpaces, so subpage_t and MemoryRegionSection cannot store an AS anymore; hence this change. In particular:

    typedef struct subpage_t {
        MemoryRegion iomem;
    -   AddressSpace *as;
    +   FlatView *fv;
        hwaddr base;
        uint16_t sub_section[];
    } subpage_t;

    struct MemoryRegionSection {
        MemoryRegion *mr;
    -   AddressSpace *address_space;
    +   FlatView *fv;
        hwaddr offset_within_region;
        Int128 size;
        hwaddr offset_within_address_space;
        bool readonly;
    };

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-7-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
As we are going to share FlatViews between AddressSpaces, and AddressSpaceDispatch is a structure to perform quick lookup in a FlatView, this moves ASD to FlatView.

After the previously open-coded ASD rendering, we can also remove as->next_dispatch, as the new FlatView pointer is stored on a stack and set to an AS atomically. flatview_destroy() is now executed under RCU instead of address_space_dispatch_free().

This makes mem_begin/mem_commit work with ASD and mem_add with FV, as later on mem_add will take FV as an argument anyway.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-5-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
We are going to share FlatViews between AddressSpaces, and per-AS memory listeners won't suit that purpose anymore, so open-code the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only listener, this avoids address_space_update_topology_pass() if there are no registered listeners; this should improve start-up time.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-3-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 04 Sep 2017 (1 commit)

By Peter Maydell
Move the MemTxResult type to memattrs.h. We're going to want to use it in cpu/qom.h, which doesn't want to include all of memory.h. In practice MemTxResult and MemTxAttrs are pretty closely linked since both are used for the new-style read_with_attrs and write_with_attrs callbacks, so memattrs.h is a reasonable home for this rather than creating a whole new header file for it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
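For context, the type being moved is a small status bitmask. A sketch of its definition, roughly as it appears in include/exec/memattrs.h of that era:

```c
/* Sketch of MemTxResult, roughly as defined in include/exec/memattrs.h. */
typedef uint32_t MemTxResult;

#define MEMTX_OK            0
#define MEMTX_ERROR         (1U << 0)   /* device returned an error */
#define MEMTX_DECODE_ERROR  (1U << 1)   /* nothing at that address */
```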

- 15 Jul 2017 (4 commits)

By Peter Maydell
Add new utility functions which both initialize a RAM MemoryRegion and arrange for its contents to be migrated; we give these the memory_region_init_ram(), memory_region_init_rom() and memory_region_init_rom_device() names that we just freed up by renaming the old implementations to _nomigrate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-6-git-send-email-peter.maydell@linaro.org
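A minimal sketch of how a device model might use the migrating variant; the device type, field names and size below are hypothetical, not taken from the patch:

```c
/* Hypothetical device state; only the MemoryRegion field matters here. */
typedef struct MyDevState {
    DeviceState parent_obj;
    MemoryRegion ram;
} MyDevState;

static void mydev_realize(DeviceState *dev, Error **errp)
{
    MyDevState *s = (MyDevState *)dev;   /* QOM cast macro omitted for brevity */

    /* Initializes the RAM region *and* arranges for its contents to be
     * migrated.  The _nomigrate variant would leave migration to the
     * caller, e.g. via vmstate_register_ram(). */
    memory_region_init_ram(&s->ram, OBJECT(dev), "mydev.ram",
                           64 * 1024, errp);
}
```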

By Peter Maydell
Rename memory_region_init_rom() to memory_region_init_rom_nomigrate() and memory_region_init_rom_device() to memory_region_init_rom_device_nomigrate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-5-git-send-email-peter.maydell@linaro.org

By Peter Maydell
Rename memory_region_init_ram() to memory_region_init_ram_nomigrate(). This leaves the way clear for us to provide a memory_region_init_ram() which does handle migration.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-4-git-send-email-peter.maydell@linaro.org

By Peter Maydell
The various functions for initializing RAM MemoryRegions do not do anything to cause the data in the MemoryRegion to be migrated. Note in their documentation comments that this is the responsibility of the caller. (We will shortly add a new function that *does* do this for you.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-3-git-send-email-peter.maydell@linaro.org

- 14 Jul 2017 (2 commits)

By Alexey Kardashevskiy
This finishes QOM'fication of IOMMUMemoryRegion by introducing an IOMMUMemoryRegionClass. This also provides a fastpath analog for IOMMU_MEMORY_REGION_GET_CLASS(). This makes IOMMUMemoryRegion an abstract class.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170711035620.4232-3-aik@ozlabs.ru>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Alexey Kardashevskiy
This defines a new QOM object - IOMMUMemoryRegion - with MemoryRegion as a parent. This moves IOMMU-related fields from MR to IOMMU MR. However, to avoid dynamic QOM casting in the fast path (address_space_translate, etc.), this adds an @is_iommu boolean flag to MR and provides a new helper to do a simple cast to IOMMU MR - memory_region_get_iommu. The flag is set in the instance init callback. This defines memory_region_is_iommu as memory_region_get_iommu() != NULL.

This switches MemoryRegion to IOMMUMemoryRegion in most places except the ones where MemoryRegion may be an alias.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20170711035620.4232-2-aik@ozlabs.ru>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
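A sketch of the intended fast-path pattern; variable names are illustrative:

```c
/* Cheap check-and-cast: returns NULL unless mr is an IOMMU region,
 * avoiding a dynamic QOM cast on the hot path. */
IOMMUMemoryRegion *iommu_mr = memory_region_get_iommu(mr);

if (iommu_mr) {
    /* ... translate the access through the IOMMU region ... */
} else {
    /* ... ordinary MemoryRegion handling ... */
}
```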

- 27 Jun 2017 (1 commit)

By KONRAD Frederic
This introduces a special callback which allows running code from some MMIO devices. A SysBusDevice with a MemoryRegion which implements the request_ptr callback will be notified when the guest tries to execute code from its offset. It will then be able to, e.g., pre-load some code from an SPI device or ask for a pointer from an external simulator, etc. When the pointer or the data in it are no longer valid, the device has to invalidate it.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

- 15 Jun 2017 (2 commits)

By Marc-André Lureau
Now unnecessary since ivshmem uses memory_region_init_ram_from_fd.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170602141229.15326-7-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

By Marc-André Lureau
Add a new function to initialize a RAM memory region with a file descriptor to be mmap-ed.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170602141229.15326-5-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
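The signature at the time looked roughly like the call below; later QEMU versions reshuffled the parameters, so treat this as a sketch, with the fd, region name and size as placeholders:

```c
/* Back a RAM region with an existing file descriptor, e.g. one received
 * over a socket from another process. */
memory_region_init_ram_from_fd(&s->shmem, OBJECT(s), "shared.ram",
                               size, /* share */ true, fd, &error_fatal);
```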

- 04 Jun 2017 (1 commit)

By Juan Quintela
The whole file is already surrounded by #ifndef CONFIG_USER_ONLY.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

- 26 May 2017 (2 commits)

By Peter Xu
We were always passing that in as "false", assuming a read operation, and we also assumed that the IOMMU translation would always have read permission. A better permission would be IOMMU_NONE, since the replay is, after all, not a real read operation but just a page-table rebuilding process.

CC: David Gibson <david@gibson.dropbear.id.au>
CC: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>

By Peter Xu
This patch converts the old "is_write" bool into IOMMUAccessFlags. The difference is that "is_write" can only express either read or write, but sometimes what we really want here is "none" (neither read nor write). Replay is a good example: during replay, we should not check any RW permission bits, since that is not an actual IO at all.

CC: Paolo Bonzini <pbonzini@redhat.com>
CC: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
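The resulting flags form a small bitmask, roughly as defined in memory.h:

```c
/* Sketch of IOMMUAccessFlags; IOMMU_NONE is what a replay walk wants. */
typedef enum {
    IOMMU_NONE = 0,   /* no access */
    IOMMU_RO   = 1,   /* read only */
    IOMMU_WO   = 2,   /* write only */
    IOMMU_RW   = 3,   /* read/write */
} IOMMUAccessFlags;
```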

- 24 Apr 2017 (1 commit)

By Gerd Hoffmann
This patch adds support for getting and using a local copy of the dirty bitmap.

memory_region_snapshot_and_clear_dirty() will create a snapshot of the dirty bitmap for the specified range, clear the dirty bitmap and return the copy. The returned bitmap can be a bit larger than requested; the range is expanded so the code can copy unsigned longs from the bitmap and avoid atomic bit update operations.

memory_region_snapshot_get_dirty() will return the dirty status of pages, pretty much like memory_region_get_dirty(), but using the copy returned by memory_region_copy_and_clear_dirty().

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170421091632.30900-3-kraxel@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
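A sketch of the intended usage pattern, e.g. in display code; the region, offsets and page-granularity loop are illustrative:

```c
/* Snapshot the VGA dirty bits for a framebuffer range, then test pages
 * against the local copy while the live bitmap keeps accumulating. */
DirtyBitmapSnapshot *snap =
    memory_region_snapshot_and_clear_dirty(mr, fb_offset, fb_size,
                                            DIRTY_MEMORY_VGA);

for (hwaddr off = 0; off < fb_size; off += TARGET_PAGE_SIZE) {
    if (memory_region_snapshot_get_dirty(mr, snap, fb_offset + off,
                                         TARGET_PAGE_SIZE)) {
        /* ... redraw the scanlines covered by this page ... */
    }
}
g_free(snap);
```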

- 21 Apr 2017 (6 commits)

By Peter Xu
The default replay() doesn't work for VT-d, since VT-d will have a huge default memory region which covers the address range 0-(2^64-1). This will normally consume a lot of time (it looks like a dead loop).

The solution is simple: we don't walk over all the regions. Instead, we jump over regions when we find that the page directories are empty. This greatly reduces the time needed to walk the whole region.

To achieve this, we provide a page-walk helper that invokes the corresponding hook function when we find a page we are interested in. vtd_page_walk_level() is the core logic for the page walking. Its interface is designed to suit further use cases, e.g., invalidating a range of addresses.

Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1491562755-23867-8-git-send-email-peterx@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

By Peter Xu
Originally we had one memory_region_iommu_replay() function, which is the default behavior to replay the translations of the whole IOMMU region. However, on some platforms like x86, we may want our own replay logic for IOMMU regions. This patch adds one more hook to IOMMUOps for the callback; it overrides the default if set.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1491562755-23867-6-git-send-email-peterx@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

By Peter Xu
Generalize the notify logic in memory_region_notify_iommu() into a single function. This can be further used in customized replay() functions for IOMMUs.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1491562755-23867-5-git-send-email-peterx@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

By Peter Xu
This is an "global" version of existing memory_region_iommu_replay() - we announce the translations to all the registered notifiers, instead of a specific one. Reviewed-by: NDavid Gibson <david@gibson.dropbear.id.au> Reviewed-by: N\"Michael S. Tsirkin\" <mst@redhat.com> Signed-off-by: NPeter Xu <peterx@redhat.com> Message-Id: <1491562755-23867-4-git-send-email-peterx@redhat.com> Signed-off-by: NEduardo Habkost <ehabkost@redhat.com>

By Peter Xu
A new macro is provided to iterate over all the IOMMU notifiers hooked under a specific IOMMU memory region.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1491562755-23867-3-git-send-email-peterx@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

By Peter Xu
In this patch, IOMMUNotifier.{start|end} are introduced to store section information for a specific notifier. When notification occurs, we not only check the notification type (MAP|UNMAP), but also check whether the notified iova range overlaps with the range of the specific IOMMU notifier, and skip those notifiers that are not in the listened range.

When removing a region, we need to make sure we removed the correct VFIOGuestIOMMU by checking the IOMMUNotifier.start address as well.

This patch solves the problem that vfio-pci devices receive duplicated UNMAP notifications on the x86 platform when a vIOMMU is present. The issue is that the x86 IOMMU has a (0, 2^64-1) IOMMU region, which is split by the (0xfee00000, 0xfeefffff) IRQ region. AFAIK this (split IOMMU region) only happens on x86.

This patch also helps vhost leverage the new interface, so that vhost won't get duplicated cache flushes. In that sense, it's a slight performance improvement.

Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1491562755-23867-2-git-send-email-peterx@redhat.com>
[ehabkost: included extra vhost_iommu_region_del() change from Peter Xu]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
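A sketch of how a range-limited notifier is set up with the helper this series introduces; exact signatures have shifted in later QEMU versions, and the callback and range names are illustrative:

```c
/* Register a notifier that only listens to a sub-range of the IOMMU
 * address space; notifications outside [range_start, range_end] are
 * skipped for this notifier. */
IOMMUNotifier n;

iommu_notifier_init(&n, my_unmap_cb, IOMMU_NOTIFIER_UNMAP,
                    range_start, range_end);
memory_region_register_iommu_notifier(mr, &n);
```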

- 03 Apr 2017 (1 commit)

By Paolo Bonzini
MemoryRegionCache did not know about virtio support for IOMMUs (because the two features were developed at the same time). Revert MemoryRegionCache to "normal" address_space_* operations for 2.9, as it is simpler than undoing the virtio patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 14 Mar 2017 (1 commit)

By Dr. David Alan Gilbert
The 'name' parameter to memory_region_init_* had been marked as debug only; however, vmstate_register_ram uses it as a parameter to qemu_ram_set_idstr to set RAMBlock names, and these form part of the migration stream.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170309152708.30635-1-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 18 Feb 2017 (1 commit)

By Paolo Bonzini
For now, the cache is created on every virtqueue_pop. Later on, direct descriptors will be able to reuse it.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

- 28 Jan 2017 (1 commit)

By Peter Xu
Adding one more option "-f" for "info mtree" to dump the flat views of all the address spaces. This will be useful for debugging the memory rendering logic, and it also makes it much easier to know which memory region is handling which address range.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1484556005-29701-3-git-send-email-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 17 Jan 2017 (1 commit)

By Paolo Bonzini
This adds a notifier interface for RAM block additions and removals.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
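A sketch of how a consumer might hook in; the callback signatures are approximate for that era and gained parameters in later QEMU versions:

```c
static void my_ram_added(RAMBlockNotifier *n, void *host, size_t size)
{
    /* e.g. register [host, host + size) with a hypervisor or accelerator */
}

static void my_ram_removed(RAMBlockNotifier *n, void *host, size_t size)
{
    /* undo whatever my_ram_added did */
}

static RAMBlockNotifier my_ram_notifier = {
    .ram_block_added   = my_ram_added,
    .ram_block_removed = my_ram_removed,
};

/* during setup */
ram_block_notifier_add(&my_ram_notifier);
```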

- 10 Jan 2017 (2 commits)

By Jason Wang
Cc: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>

By Jason Wang
This patch introduces a helper to query the IOTLB entry for a possible iova. This will be used by the later device IOTLB API to enable the capability for a dataplane (e.g. vhost) to query the IOTLB.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
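A sketch of the helper in use; the three-argument form matches roughly what this series added, and later versions take extra arguments:

```c
/* Ask the IOMMU (if any) what a guest IOVA maps to, without doing I/O. */
IOMMUTLBEntry entry = address_space_get_iotlb_entry(as, iova,
                                                    /* is_write */ false);
if (entry.perm != IOMMU_NONE) {
    /* entry.translated_addr and entry.addr_mask describe the mapping;
     * a vhost-style dataplane can cache this as a device IOTLB entry. */
}
```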

- 22 Dec 2016 (2 commits)

By Paolo Bonzini
Device models often have to perform multiple accesses to a single memory region that is known in advance, but would like to use "DMA-style" functions instead of address_space_map/unmap. This can happen, for example, when the data has to undergo endianness conversion. Introduce a new data structure to cache the result of address_space_translate without forcing usage of a host address the way address_space_map does.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
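A minimal sketch of the resulting API for a device that repeatedly reads a small field out of a known guest buffer; the offset and error handling are illustrative:

```c
MemoryRegionCache cache;
uint16_t val;

/* Translate the buffer once... */
if (address_space_cache_init(&cache, as, buf_addr, buf_len,
                             /* is_write */ false) >= 0) {
    /* ...then issue cheap repeated accesses against the cache. */
    address_space_read_cached(&cache, 2 /* offset */, &val, sizeof(val));
    address_space_cache_destroy(&cache);
}
```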

By Paolo Bonzini
Templatize the address_space_* and *_phys functions, so that we can add similar functions in the next patch that work with a lightweight, cache-like version of address_space_map/unmap.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 31 Oct 2016 (2 commits)

By Alex Williamson
With a vfio assigned device we lay down a base MemoryRegion registered as an IO region, giving us read & write accessors. If the region supports mmap, we lay down a higher priority sub-region MemoryRegion on top of the base layer, initialized as a RAM device pointer to the mmap. Finally, if we have any quirks for the device (i.e. address ranges that need additional virtualization support), we put another IO sub-region on top of the mmap MemoryRegion.

When this is flattened, we now potentially have sub-page mmap MemoryRegions exposed which cannot be directly mapped through KVM. This is as expected, but a subtle detail of this is that we end up with two different access mechanisms through QEMU. If we disable the mmap MemoryRegion, we make use of the IO MemoryRegion and service accesses using pread and pwrite to the vfio device file descriptor. If the mmap MemoryRegion is enabled and results in one of these sub-page gaps, QEMU handles the access as RAM, using memcpy to the mmap.

Using either pread/pwrite or the mmap directly should be correct, but using memcpy causes us problems. I expect that not only does memcpy not necessarily honor the original width and alignment in performing a copy, but it potentially also uses processor instructions not intended for MMIO spaces. It turns out that this has been a problem for Realtek NIC assignment, which has such a quirk that creates a sub-page mmap MemoryRegion access.

To resolve this, we disable memory_access_is_direct() for ram_device regions, since QEMU assumes that it can use memcpy for those regions. Instead we access through MemoryRegionOps, which replaces the memcpy with simple dereferences of standard sizes to the host memory.

With this patch we attempt to provide unrestricted access to the RAM device, allowing byte through qword access as well as unaligned access. The assumption here is that accesses initiated by the VM are driven by a device-specific driver, which knows the device capabilities. If unaligned accesses are not supported by the device, we don't want them to work in a VM by performing multiple aligned accesses to compose the unaligned access. A down-side of this philosophy is that the xp command from the monitor attempts to use the largest available access width, unaware of the underlying device. Using memcpy had this same restriction, but at least now an operator can dump individual registers, even if blocks of device memory may result in access widths beyond the capabilities of a given device (RTL NICs only support up to dword).

Reported-by: Thorsten Kohfeldt <thorsten.kohfeldt@gmx.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>

By Alex Williamson
Setting skip_dump on a MemoryRegion allows us to modify one specific code path, but the restriction we're trying to address encompasses more than that. If we have a RAM MemoryRegion backed by a physical device, it not only restricts our ability to dump that region, but also affects how we should manipulate it. Here we recognize that MemoryRegions do not change to sometimes allow dumps and other times not, so we replace setting the skip_dump flag with a new initializer so that we know exactly the type of region to which we're applying this behavior.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
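A sketch of the new initializer; the pointer would typically come from an mmap of the vfio device, and the names and size are illustrative:

```c
/* Wrap an existing host mapping (e.g. a vfio mmap) as a "RAM device"
 * region rather than ordinary RAM, so the core memory code can treat
 * it specially (e.g. skip it in dumps). */
memory_region_init_ram_device_ptr(&s->mmap_mr, OBJECT(s),
                                  "vfio-mmap", mmap_size, mmap_ptr);
```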

- 24 Oct 2016 (1 commit)

By Paolo Bonzini
This speeds up MEMORY_LISTENER_CALL noticeably. Right now, with many PCI devices you have N regions added to M AddressSpaces (M = # PCI devices with bus-master enabled) and each call looks up the whole listener list, with at least M listeners in it. Because most of the regions in N are BARs, which are also roughly proportional to M, the whole thing is O(M^3). This changes it to O(M^2), which is the best we can do without rewriting the whole thing.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>