- 03 Sep 2019, 3 commits
-
-
Committed by Tony Nguyen
Now that MemOp has been pushed down into the memory API, and callers are encoding endianness, we can collapse byte swaps along the I/O path into the accelerator- and target-independent adjust_endianness.

Collapsing byte swaps along the I/O path enables additional endian inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with redundant byte swaps cancelling out.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <911ff31af11922a9afba9b7ce128af8b8b80f316.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
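As a rough illustration of why carrying the requested endianness with the access lets redundant swaps cancel, here is a minimal self-contained sketch (not QEMU's actual code; the types and names are simplified): a single adjust_endianness-style helper swaps only when the caller's byte order differs from the device's, so no second swap is needed elsewhere on the I/O path.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { OP_LE = 0, OP_BE = 1 } ByteOrder;

static uint32_t bswap32(uint32_t v)
{
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* One central adjustment point: swap only when the order requested by
 * the caller differs from the order the device models. */
static uint32_t adjust_endianness(uint32_t val, ByteOrder requested,
                                  ByteOrder device)
{
    return requested == device ? val : bswap32(val);
}

int main(void)
{
    uint32_t raw = 0x11223344u;

    /* little-endian caller, little-endian device: no swap */
    printf("%08x\n", adjust_endianness(raw, OP_LE, OP_LE));
    /* big-endian caller, little-endian device: exactly one swap */
    printf("%08x\n", adjust_endianness(raw, OP_BE, OP_LE));
    return 0;
}
```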
-
Committed by Tony Nguyen
Preparation for collapsing the two byte swaps, adjust_endianness and handle_bswap, into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into the "MemOp op" operand.

This patch does not change any behaviour, as memory_region_dispatch_{read|write} is yet to handle the endianness. Once it does handle endianness, callers with byte swaps can collapse them into adjust_endianness.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Committed by Tony Nguyen
Convert the memory_region_dispatch_{read|write} operand "unsigned size" into a "MemOp op".

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1dd82df5801866743f838f1d046475115a1d32da.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
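A toy model of what a MemOp-style operand buys over a plain size: the access size and the byte order travel together in one value. The constants below are purely illustrative and do not match QEMU's real MemOp encoding.

```c
#include <stdio.h>

enum {
    OP_SIZE_MASK = 0x3,  /* low bits: log2 of the access size in bytes */
    OP_BE        = 0x4,  /* set for big-endian, clear for little-endian */
};

static unsigned op_size_bytes(unsigned op)
{
    return 1u << (op & OP_SIZE_MASK);
}

int main(void)
{
    unsigned op = 0x2 | OP_BE;  /* a 4-byte big-endian access */

    printf("size=%u bytes, big-endian=%d\n",
           op_size_bytes(op), (op & OP_BE) != 0);
    return 0;
}
```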
-
- 21 Aug 2019, 4 commits
-
-
Committed by Peter Xu
The old memory_region_{add|clear}_coalescing() had some defects because they both changed mr->coalesced before updating the regions using memory_region_update_coalesced_range_as(). Then, when the regions were updated in memory_region_update_coalesced_range_as(), mr->coalesced would always be either one entry ahead or one behind. So:

- For memory_region_add_coalescing: it would always try to remove the newly added coalesced region when it shouldn't, and,
- For memory_region_clear_coalescing: when it calls the update there will be no coalesced ranges on mr->coalesced because they were all removed beforehand, so the update will probably do nothing for real.

Let's fix this. Now we've got flat_range_coalesced_io_notify() to notify a single CoalescedMemoryRange instance change, so use it in the existing memory_region_update_coalesced_range() logic by notifying only either an addition or a deletion. Then convert both of the memory_region_{add|clear}_coalescing() functions to use it.

Fixes: 3ac7d43a
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190820141328.10009-5-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
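The shape of the fixed flow can be shown with a small self-contained model (this is not QEMU's code; the point is only that the listener is told about exactly the one range being added, or about each range being removed, with the list mutation and the notification kept consistent):

```c
#include <stdio.h>

#define MAX_RANGES 8

typedef struct { unsigned long start, len; } Range;

static Range coalesced[MAX_RANGES];
static int n_coalesced;

static void notify(const char *what, Range r)
{
    printf("%s 0x%lx+0x%lx\n", what, r.start, r.len);
}

static void add_coalescing(Range r)
{
    coalesced[n_coalesced++] = r;
    notify("add", r);                 /* only the new range is announced */
}

static void clear_coalescing(void)
{
    for (int i = 0; i < n_coalesced; i++) {
        notify("del", coalesced[i]);  /* announce each removal... */
    }
    n_coalesced = 0;                  /* ...then drop the list */
}

int main(void)
{
    add_coalescing((Range){0x1000, 0x100});
    add_coalescing((Range){0x2000, 0x100});
    clear_coalescing();
    return 0;
}
```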
-
Committed by Peter Xu
The has_coalesced_range counter is potentially problematic in that it only works for additions of coalesced mmio ranges, not for deletions. The reason is that the has_coalesced_range information can be lost when the FlatView updates the topology again and the updated region no longer covers the coalesced regions. When that happens, because flatrange_equal() does not check has_coalesced_range, the new FlatRange will be seen as the same as the old one, and the new instance (whose has_coalesced_range will be zero) will replace the old instance (whose has_coalesced_range _could_ be non-zero).

The counter was originally used to make sure every FlatRange would notify only once for the coalesced_io_{add|del} memory listeners, because each FlatRange can be used by multiple address spaces, so logically speaking it could be called multiple times. However we should not need that limit, because memory listeners are only registered with a specific address space rather than multiple address spaces.

So let's fix this by simply removing the whole has_coalesced_range.

Fixes: 3ac7d43a
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190820141328.10009-3-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Peter Xu
This is a workaround for the current KVM KVM_UNREGISTER_COALESCED_MMIO interface. The kernel interface only allows unregistering an mmio device with exactly the zone size it was registered with, or any smaller zone that is included in the device's mmio zone. It does not allow userspace to specify a very large zone in order to remove all the small mmio devices covered by that zone.

Logically speaking it would be nicer to fix this on the KVM side, but in all cases we still need to cope with old kernels, so let's do this.

Fixes: 3ac7d43a
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190820141328.10009-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Peter Xu
Remove the update variable and return early if the memory region has no coalesced range. This prepares for the next patch.

Fixes: 3ac7d43a
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190820141328.10009-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 20 Aug 2019, 2 commits
-
-
Committed by Paolo Bonzini
There is a race between TCG and accesses to the dirty log:

      vCPU thread                  reader thread
      -----------------------      -----------------------
      TLB check -> slow path
        notdirty_mem_write
          write to RAM
          set dirty flag
                                   clear dirty flag
      TLB check -> fast path
                                   read memory
        write to RAM

Fortunately, in order to fix it, no change is required to the vCPU thread. However, the reader thread must delay the read until after the vCPU thread has finished the write. This can be approximated conservatively by run_on_cpu, which waits for the end of the current translation block.

A similar technique is used by KVM, which has to do a synchronous TLB flush after doing a test-and-clear of the dirty-page flags.

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
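A hedged sketch of the conservative fix described above, written in terms of QEMU's run_on_cpu(), CPU_FOREACH() and RUN_ON_CPU_NULL; the function name and the exact place this runs are an approximation of the approach, not the literal patch:

```c
/* Queue a no-op on every vCPU and wait for it to run.  run_on_cpu()
 * returns only after the vCPU has left the translation block it was
 * executing, so any in-flight notdirty write has completed by the time
 * the reader proceeds. */
static void do_nothing(CPUState *cpu, run_on_cpu_data data)
{
}

static void wait_for_vcpu_writes(void)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        run_on_cpu(cpu, do_nothing, RUN_ON_CPU_NULL);
    }
}
```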
-
Committed by Yan Zhao
It is wrong for an entry to have parts outside the scope of a notifier's range; assert this condition.

Out-of-scope mapping/unmapping would cause problems, as in the case below:

1. Initially there are two notifiers with ranges 0-0xfedfffff and 0xfef00000-0xffffffffffffffff, and IOVAs from 0x3c000000 - 0x3c1fffff are in the shadow page table.

2. In vfio, memory_region_register_iommu_notifier() is followed by memory_region_iommu_replay(), which will first call address space unmap, then walk and add back all entries in the vtd shadow page table. E.g.
   (1) for notifier 0-0xfedfffff, IOVAs from 0 - 0xffffffff get unmapped, and IOVAs from 0x3c000000 - 0x3c1fffff get mapped;
   (2) for notifier 0xfef00000-0xffffffffffffffff, IOVAs from 0 - 0x7fffffffff get unmapped, but IOVAs from 0x3c000000 - 0x3c1fffff cannot get mapped back.

Cc: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Message-Id: <1561432878-13754-1-git-send-email-yan.y.zhao@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 16 Aug 2019, 4 commits
-
-
Committed by Markus Armbruster
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related to the system emulator. Evidence:

* It's included widely: in my "build everything" tree, changing sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h, down from 5400 due to the previous two commits).

* It pulls in more than a dozen additional headers.

Split stuff related to run state management into its own header sysemu/runstate.h. Touching sysemu/sysemu.h now recompiles some 850 objects. qemu/uuid.h also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400 to 4200. Touching the new sysemu/runstate.h recompiles some 500 objects.

Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also add qemu/main-loop.h.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-30-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[Unbreak OS-X build]
-
Committed by Markus Armbruster
In my "build everything" tree, changing hw/qdev-properties.h triggers a recompile of some 2700 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h).

Many places including hw/qdev-properties.h (directly or via hw/qdev.h) actually need only hw/qdev-core.h. Include hw/qdev-core.h there instead.

hw/qdev.h is actually pointless: all it does is include hw/qdev-core.h and hw/qdev-properties.h, which in turn includes hw/qdev-core.h. Replace the remaining uses of hw/qdev.h by hw/qdev-properties.h.

While there, delete a few superfluous inclusions of hw/qdev-core.h.

Touching hw/qdev-properties.h now recompiles some 1200 objects.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190812052359.30071-22-armbru@redhat.com>
-
Committed by Markus Armbruster
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
-
Committed by Markus Armbruster
TYPE_IOMMU_MEMORY_REGION is a direct subtype of TYPE_MEMORY_REGION. Its instance struct is IOMMUMemoryRegion, and its first member is a MemoryRegion. Correct. Its class struct is IOMMUMemoryRegionClass, and its first member is a DeviceClass. Wrong. Messed up when commit 1221a474 introduced the QOM type. It even included hw/qdev-core.h just for that.

TYPE_MEMORY_REGION doesn't bother to define a class struct. This is fine; it simply defaults to its super-type TYPE_OBJECT's class struct ObjectClass. Changing IOMMUMemoryRegionClass's first member's type to ObjectClass would be a minimal fix, if a bit brittle: if TYPE_MEMORY_REGION ever acquired its own class struct, we'd have to update IOMMUMemoryRegionClass to use it.

Fix it the clean and robust way instead: give TYPE_MEMORY_REGION its own class struct MemoryRegionClass now, and use it for IOMMUMemoryRegionClass's first member. Revert the include of hw/qdev-core.h, and fix the few files that have come to rely on it.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20190812052359.30071-5-armbru@redhat.com>
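The resulting class hierarchy looks roughly like this; a sketch of the shape described above, with the comments mine and any callback members elided:

```c
/* TYPE_MEMORY_REGION now has its own class struct, chained to its
 * super-type's class struct... */
struct MemoryRegionClass {
    ObjectClass parent_class;        /* TYPE_OBJECT's class struct */
};

/* ...and TYPE_IOMMU_MEMORY_REGION's class struct chains to it,
 * mirroring how the instance structs already chain. */
struct IOMMUMemoryRegionClass {
    MemoryRegionClass parent_class;  /* previously (wrongly) DeviceClass */
    /* IOMMU callbacks such as translate() follow here */
};
```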
-
- 20 Jul 2019, 1 commit
-
-
Committed by Alexey Kardashevskiy
This adds an accelerator name to "info mtree -f" to tell the user whether a particular memory section is registered with the accelerator; the primary user for this is KVM, and such information is useful for debugging purposes.

This adds a has_memory() callback to the accelerator class, allowing any accelerator to have a label in that memory tree dump.

Since memory sections are passed to memory listeners and get registered in accelerators (rather than memory regions), this only prints new labels for flatviews attached to the system address space.

An example:

 Root memory region: system
  0000000000000000-0000002fffffffff (prio 0, ram): /objects/mem0 kvm
  0000003000000000-0000005fffffffff (prio 0, ram): /objects/mem1 kvm
  0000200000000020-000020000000003f (prio 1, i/o): virtio-pci
  0000200080000000-000020008000003f (prio 0, i/o): capabilities

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190614015237.82463-1-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 15 Jul 2019, 4 commits
-
-
Committed by Peter Xu
Introduce a new memory region listener hook log_clear() to allow the listeners to hook onto the points where the dirty bitmap is cleared by the bitmap users.

Previously log_sync() contained two operations:

  - dirty bitmap collection, and
  - dirty bitmap clear on the remote side.

Let's take KVM as an example: log_sync() for KVM will first copy the kernel dirty bitmap to userspace, and at the same time clear the dirty bitmap there, along with re-protecting all the guest pages again. We add this new log_clear() interface only to split the old log_sync() into two separate procedures:

  - use log_sync() to collect the dirty bitmap only, and
  - use log_clear() to clear the remote dirty bitmap.

With the new interface, memory listener users will still be able to decide how to implement the log synchronization procedure: e.g., they can still provide only a log_sync() method and put both procedures within log_sync() (that's how the old KVM works before KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is introduced). However, with this new interface the memory listener users start to have a chance to postpone the log clear operation explicitly if the module supports it. That can really benefit users like KVM, at least for host kernels that support KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2.

There are three places that can clear dirty bits in any one of the dirty bitmaps in the ram_list.dirty_memory[3] array:

        cpu_physical_memory_snapshot_and_clear_dirty
        cpu_physical_memory_test_and_clear_dirty
        cpu_physical_memory_sync_dirty_bitmap

Currently we hook directly into each of these functions to notify about the log_clear().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-7-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
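A minimal sketch of how a listener can take advantage of the split; the MemoryListener callback signatures shown here are abbreviated, so treat them as an assumption rather than a copy of the QEMU header:

```c
/* Collection and clearing become two independently timed steps. */
static void my_log_sync(MemoryListener *listener,
                        MemoryRegionSection *section)
{
    /* step 1: fetch the dirty bitmap from the remote side (e.g. KVM) */
}

static void my_log_clear(MemoryListener *listener,
                         MemoryRegionSection *section)
{
    /* step 2: clear and re-protect, possibly long after log_sync() */
}

static MemoryListener my_listener = {
    .log_sync  = my_log_sync,
    .log_clear = my_log_clear,  /* optional: omit to keep old behaviour */
};
```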
-
Committed by Peter Xu
Also, change its second parameter to be the relative offset within the memory region. This is to be used in follow-up patches.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190603065056.25211-6-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Peter Xu
Similar to 9460dee4 ("memory: do not touch code dirty bitmap unless TCG is enabled", 2015-06-05), but for the migration bitmap: we can skip the MIGRATION bitmap update if migration is not enabled.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-4-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by King Wang
The memory region reference count is increased when a range is inserted into the flatview range array, then decreased when the flatview is destroyed. If some flat ranges are merged by flatview_simplify(), the references taken for the merged-away ranges can no longer be dropped when the flatview is destroyed.

In this case, start a virtual machine with the command line:

  qemu-system-x86_64 -name guest=ubuntu,debug-threads=on \
    -machine pc,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 16384 \
    -realtime mlock=off -smp 8,sockets=2,cores=4,threads=1 \
    -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages,share=yes,size=8589934592 \
    -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 \
    -object memory-backend-file,id=ram-node1,prealloc=yes,mem-path=/dev/hugepages,share=yes,size=8589934592 \
    -numa node,nodeid=1,cpus=4-7,memdev=ram-node1 \
    -no-user-config -nodefaults -rtc base=utc -no-shutdown -boot strict=on \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
    -drive file=ubuntu.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none,aio=native \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
    -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 \
    -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x5 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

And run this script in the guest OS:

  while true
  do
    setpci -s 00:06.0 04.b=03
    setpci -s 00:06.0 04.b=07
  done

The reference count of the node0 HostMemoryBackendFile then grows very large:

  (gdb) p numa_info[0]->node_memdev->parent.ref
  $6 = 1636278
  (gdb)

Signed-off-by: King Wang <king.wang@huawei.com>
Message-Id: <20190712065241.11784-1-king.wang@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
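The mechanism of the leak can be shown with a tiny self-contained model (not QEMU's code): each flat range takes one reference on its backing region, but if simplification merges two ranges into one, teardown only releases one of the two references that were taken.

```c
#include <stdio.h>

typedef struct { int ref; } Region;

static void region_ref(Region *r)   { r->ref++; }
static void region_unref(Region *r) { r->ref--; }

int main(void)
{
    Region mem = { .ref = 1 };   /* one reference held by the owner */
    int nranges = 2;

    for (int i = 0; i < nranges; i++) {
        region_ref(&mem);        /* one reference per inserted range */
    }
    nranges = 1;                 /* the two ranges get merged ("simplified") */
    for (int i = 0; i < nranges; i++) {
        region_unref(&mem);      /* teardown only sees the merged range */
    }
    printf("leaked references: %d\n", mem.ref - 1);  /* prints 1 */
    return 0;
}
```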
-
- 02 Jul 2019, 1 commit
-
-
Committed by Greg Kurz
Hot-unplugging a PHB with a VFIO device connected to it crashes QEMU:

  -device spapr-pci-host-bridge,index=1,id=phb1 \
  -device vfio-pci,host=0034:01:00.3,id=vfio0

  (qemu) device_del phb1
  [ 357.207183] iommu: Removing device 0001:00:00.0 from group 1
  [ 360.375523] rpadlpar_io: slot PHB 1 removed
  qemu-system-ppc64: memory.c:2742: do_address_space_destroy: Assertion `QTAILQ_EMPTY(&as->listeners)' failed.

'as' is the IOMMU address space, which indeed has a listener registered to it by vfio_connect_container() when the VFIO device is realized. This listener is supposed to be unregistered by vfio_disconnect_container() when the VFIO device is finalized. Unfortunately, the VFIO device hasn't reached finalize yet at the time the PHB unrealize function is called, and address_space_destroy() gets called with the VFIO listener still registered.

All regions have just been unmapped from the address space; listeners aren't needed anymore at this point. Remove them before destroying the address space. The VFIO code will try to remove them _again_ at device finalize, but that is okay since memory_listener_unregister() is idempotent.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <156110925375.92514.11649846071216864570.stgit@bahia.lan>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[dwg: Correct spelling error pointed out by aik]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- 12 Jun 2019, 1 commit
-
-
Committed by Markus Armbruster
Other accelerators have their own headers: sysemu/hax.h, sysemu/hvf.h, sysemu/kvm.h, sysemu/whpx.h. Only tcg_enabled() & friends sit in qemu-common.h. This necessitates inclusion of qemu-common.h into headers, which is against the rules spelled out in qemu-common.h's file comment.

Move tcg_enabled() & friends into their own header sysemu/tcg.h, and adjust #include directives.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-2-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[Rebased with conflicts resolved automatically, except for accel/tcg/tcg-all.c]
-
- 03 Jun 2019, 1 commit
-
-
Committed by Peter Xu
It's never used anywhere.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190520030839.6795-5-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 17 May 2019, 1 commit
-
-
Committed by Wei Yang
The dirty bit is DIRTY_MEMORY_MIGRATION. Correct the comment.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190426020927.25470-1-richardw.yang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 19 Apr 2019, 1 commit
-
-
Committed by Markus Armbruster
mtree_info() takes an fprintf()-like callback and a FILE * to pass to it, and so do its helper functions. Passing around callback and argument is rather tiresome.

Its only caller hmp_info_mtree() passes monitor_printf() cast to fprintf_function and the current monitor cast to FILE *. The type-punning is technically undefined behaviour, but works in practice.

Clean up: drop the callback, and call qemu_printf() instead.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-9-armbru@redhat.com>
-
- 18 Mar 2019, 1 commit
-
-
Committed by Brijesh Singh
Currently, a callback registered through the RAMBlock notifier is not able to get the memory region type (i.e. the callback is not able to use the memory_region_is_ram_device() function). This is because the mr->ram assignment happens _after_ the memory is allocated, whereas the callback is executed during allocation.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1667249
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20190204222322.26766-2-brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 11 Mar 2019, 1 commit
-
-
Committed by Jagannathan Raman
Do not add/del coalesced I/O ranges in the case where the same FlatRanges are present in both the old and new FlatViews.

Fixes: 3ac7d43a ("memory: update coalesced_range on transaction_commit")
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Message-Id: <59572a7353830be4b7aa57d79ccb7ad6b72f0dda.1549406119.git.jag.raman@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 11 Jan 2019, 5 commits
-
-
Committed by Paolo Bonzini
The new definition of QTAILQ does not require passing the headname; remove it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Most list head structs need not be given a name. In most cases the name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV or reverse iteration, but this does not apply to lists of other kinds, and even for QTAILQ in practice this is only rarely needed. In addition, we will soon reimplement those macros completely so that they do not need a name for the head struct. So clean up everything, not giving a name except in the rare case where it is necessary.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
The e1000 driver calls memory_region_add_coalescing, but kvm_coalesce_mmio_region is never called for those regions. The bug dates back to the introduction of the memory region API; to fix it, delete and re-add coalesced MMIO ranges when building the FlatViews. Because coalesced MMIO regions apply to all address spaces, the has_coalesced_range flag has to be changed into an int.

Fixes: 093bc2cd ("Hierarchical memory region API")
Reported-by: Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Store whether the FlatRange has had any coalesced I/O ranges applied, and if not, avoid calling coalesced_io_del. This is useful in preparation for the next patch, which will call coalesced_io_del when rendering memory regions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Extract two new functions from memory_region_update_coalesced_range_as. To avoid duplication in the creation of the MemoryRegionSection, use MEMORY_LISTENER_UPDATE_REGION instead of MEMORY_LISTENER_CALL to invoke the listener callback.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 07 Nov 2018, 1 commit
-
-
Committed by Marc-André Lureau
Add a new flag to mark memory regions that are used as non-volatile, by NVDIMM for example. That bit is propagated down to the flat view and reflected in HMP "info mtree" with an "nv-" prefix on the memory type. This way, guest_phys_blocks_region_add() can skip the NV memory regions for dumps and TCG memory clear in a following patch.

Cc: dgilbert@redhat.com
Cc: imammedo@redhat.com
Cc: pbonzini@redhat.com
Cc: guangrong.xiao@linux.intel.com
Cc: mst@redhat.com
Cc: xiaoguangrong.eric@gmail.com
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20181003114454.5662-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
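A small self-contained model of why such a flag is useful once it reaches the flat view; the field and helper names here are illustrative only, not QEMU's:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool ram;
    bool nonvolatile;   /* set for NVDIMM-backed ranges */
} FlatRange;

/* Consumers such as dumps or memory clearing can skip NV ranges,
 * because their contents are expected to persist. */
static bool should_clear_on_reset(const FlatRange *fr)
{
    return fr->ram && !fr->nonvolatile;
}

int main(void)
{
    FlatRange ranges[] = {
        { "pc.ram",  true, false },
        { "nvdimm0", true, true  },
    };

    for (unsigned i = 0; i < 2; i++) {
        printf("%s: clear=%d\n", ranges[i].name,
               should_clear_on_reset(&ranges[i]));
    }
    return 0;
}
```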
-
- 19 Oct 2018, 1 commit
-
-
Committed by Peng Hao
The primary API implementation.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1539795177-21038-3-git-send-email-peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 Oct 2018, 6 commits
-
-
Committed by Peter Maydell
Now that all the users of old_mmio MemoryRegion accessors have been converted, we can remove the core code support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180824170422.5783-2-peter.maydell@linaro.org>
Based-on: <20180802174042.29234-1-peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Philippe Mathieu-Daudé
Memory regions configured as DEVICE_BIG_ENDIAN (or DEVICE_NATIVE_ENDIAN on a big-endian guest) behave incorrectly when the memory access 'size' is smaller than the implementation 'access_size'.

In the following code segment from access_with_adjusted_size():

    if (memory_region_big_endian(mr)) {
        for (i = 0; i < size; i += access_size) {
            r |= access_fn(mr, addr + i, value, access_size,
                           (size - access_size - i) * 8, access_mask, attrs);
        }

(size - access_size - i) * 8 is the number of bits by which the current value will be arithmetically shifted.

Currently we can only 'left' shift a read() access, and 'right' shift a write(). When the access 'size' is smaller than the implementation, we get a negative number of bits to shift. For the read() case, a negative 'left' shift is a 'right' shift :) However, since the 'shift' type is unsigned, there is currently no way to right shift.

Fix this by changing the access_fn() prototype to handle signed shift values, and modify the memory_region_shift_read|write_access() helpers to correctly arithmetic shift in the opposite direction when the 'shift' value is negative.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180927002416.1781-4-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
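The core of the fix is that the shift may now be negative. A minimal, self-contained version of the signed-shift behaviour described above looks like this (the real QEMU helpers also take the region and an access mask, omitted here to keep the sketch short):

```c
#include <inttypes.h>
#include <stdio.h>

/* A positive shift places a narrow sub-access within a wider value;
 * a negative shift handles accesses narrower than the sub-access. */
static uint64_t shift_read_access(uint64_t tmp, int shift)
{
    return shift >= 0 ? tmp << shift : tmp >> -shift;
}

/* The write direction mirrors the read direction. */
static uint64_t shift_write_access(uint64_t value, int shift)
{
    return shift >= 0 ? value >> shift : value << -shift;
}

int main(void)
{
    printf("%016" PRIx64 "\n", shift_read_access(0xaabb, 8));
    printf("%016" PRIx64 "\n", shift_read_access(0xaabb, -8));
    printf("%016" PRIx64 "\n", shift_write_access(0xaabb00, 8));
    return 0;
}
```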
-
Committed by Philippe Mathieu-Daudé
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180927002416.1781-3-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Philippe Mathieu-Daudé
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180927002416.1781-2-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Hikaru Nishida
Before this change, the memory-backend-file object was valid only for Linux hosts, because hostmem-file.c was compiled only on Linux hosts. However, other POSIX-based hosts (such as macOS) can support the memory-backend-file object in the same way as Linux hosts. This patch compiles hostmem-file.c and related functions on all POSIX-based hosts, making memory-backend-file available on them.

Signed-off-by: Hikaru Nishida <hikarupsp@gmail.com>
Message-Id: <20180924123205.29651-1-hikarupsp@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Igor Mammedov
If MemoryRegion initialization fails, it is left in a semi-initialized state, where its size is not 0 and it is attached as a child to the owner object. This leads to a crash in the following use case:

  (monitor) object_add memory-backend-file,id=mem1,size=99999G,mem-path=/tmp/foo,discard-data=yes
  memory.c:2083: memory_region_get_ram_ptr: Assertion `mr->ram_block' failed
  Aborted (core dumped)

It happens due to the assumption that a memory region is initialized when memory_region_size() != 0, and that it is therefore ok to access it in file_backend_unparent():

  if (memory_region_size() != 0)
      memory_region_get_ram_ptr()

which happens when object_add fails and unparents the failed backend, making file_backend_unparent() access an invalid memory region.

Fix it by making sure that the memory_region_init_foo() APIs clean up externally visible side effects on failure (like setting the size to 0 and unparenting the object).

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1536064777-42312-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
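The rule the fix enforces can be illustrated with a small self-contained sketch (not QEMU's code): when an init step fails, undo every externally visible side effect before returning, so that later teardown code never sees a half-initialized object.

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct Region {
    size_t size;
    struct Region *owner;   /* non-NULL once attached to a parent */
    void *ram;              /* non-NULL once backing memory exists */
} Region;

static bool region_init_ram(Region *r, Region *owner, size_t size)
{
    r->size = size;
    r->owner = owner;
    r->ram = malloc(size);
    if (!r->ram) {
        /* Failure path: roll back the visible state set above, so a
         * caller probing "size != 0" never mistakes this for a live,
         * fully initialized region. */
        r->size = 0;
        r->owner = NULL;
        return false;
    }
    return true;
}
```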
-
- 20 Aug 2018, 1 commit
-
-
Committed by Peter Maydell
Remove the obsolete MMIO request_ptr APIs; they have no users now.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Message-id: 20180817114619.22354-3-peter.maydell@linaro.org
-
- 15 Aug 2018, 1 commit
-
-
Committed by Peter Maydell
The io_readx() function needs to know whether the load it is doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it can pass the right value to the cpu_transaction_failed() function. Plumb this information through from the softmmu code.

This is currently not often going to give the wrong answer, because usually instruction fetches go via get_page_addr_code(). However, once we switch over to handling execution from non-RAM by creating single-insn TBs, the path for an insn fetch to generate a bus error will be through cpu_ld*_code() and io_readx(), so without this change we will generate a d-side fault when we should generate an i-side fault.

We also have to pass the access type via a CPU struct global down to unassigned_mem_read(), for the benefit of the targets which still use the cpu_unassigned_access() hook (m68k, mips, sparc, xtensa).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
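Schematically, the distinction being plumbed through looks like this. This is a hedged sketch rather than QEMU's actual io_readx(), which takes several more parameters; MMU_DATA_LOAD and MMU_INST_FETCH are MMUAccessType values in QEMU, redefined locally here only to keep the example self-contained:

```c
#include <stdio.h>

typedef enum {
    MMU_DATA_LOAD,
    MMU_DATA_STORE,
    MMU_INST_FETCH,
} MMUAccessType;

static void transaction_failed(MMUAccessType access_type)
{
    /* A bus error on an instruction fetch must be reported as an
     * i-side fault, not a d-side fault. */
    printf("%s fault\n",
           access_type == MMU_INST_FETCH ? "instruction" : "data");
}

int main(void)
{
    transaction_failed(MMU_DATA_LOAD);   /* data-load path */
    transaction_failed(MMU_INST_FETCH);  /* cpu_ld*_code path */
    return 0;
}
```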
-