- 06 Sep 2013, 3 commits
-
Submitted by Jan Kiszka
Accesses to unassigned I/O ports shall return -1 on read and be ignored on write. Ensure these properties via dedicated ops, decoupling us from the memory core's handling of unassigned accesses.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
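A minimal sketch of what such dedicated ops could look like; the function names and parameter types below are simplified stand-ins for illustration, not the exact MemoryRegionOps callbacks from the patch:

    #include <stdint.h>

    /* Reads from an unassigned port return all ones (-1). */
    static uint64_t unassigned_io_read(void *opaque, uint64_t addr, unsigned size)
    {
        return (uint64_t)-1;
    }

    /* Writes to an unassigned port are silently dropped. */
    static void unassigned_io_write(void *opaque, uint64_t addr,
                                    uint64_t val, unsigned size)
    {
        /* intentionally empty */
    }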
-
Submitted by Hu Tao
If offset_within_address_space falls inside a page, then we register a subpage. So check offset_within_address_space rather than offset_within_region.

Cc: qemu-stable@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: "Andreas Färber" <afaerber@suse.de>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
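An illustrative restatement of the corrected check; the struct, macro, and the size test below are simplified stand-ins (not the literal diff), assuming the usual rule that a subpage is also needed for sections smaller than a page:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE_SKETCH 4096ULL

    typedef struct {
        uint64_t offset_within_address_space;  /* start in guest-physical space */
        uint64_t offset_within_region;         /* start inside its MemoryRegion */
        uint64_t size;
    } SectionSketch;

    /* A subpage is needed when the section's start in the *address space* is
     * not page aligned (or the section is smaller than a page); testing
     * offset_within_region instead was the bug being fixed here. */
    static bool needs_subpage(const SectionSketch *s)
    {
        return (s->offset_within_address_space & (PAGE_SIZE_SKETCH - 1)) != 0
            || s->size < PAGE_SIZE_SKETCH;
    }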
-
Submitted by Paolo Bonzini
The problem was introduced by commit 23326164 (exec: Support 64-bit operations in address_space_rw, 2013-07-08). Before that commit, memory_access_size would only return 1/2/4. Since alignment is already handled above, reduce l to the largest power of two that is smaller than l.

Cc: qemu-stable@nongnu.org
Reported-by: Oleksii Shevchuk <alxchk@gmail.com>
Tested-by: Oleksii Shevchuk <alxchk@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
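A self-contained sketch of the reduction described above, using only standard C; the real patch uses QEMU's own helpers, so treat the function name and implementation as illustrative:

    #include <stdint.h>

    /* Round l down to the largest power of two that does not exceed it,
     * e.g. 7 -> 4, 6 -> 4, 4 -> 4, so the access width is always 1/2/4/8. */
    static uint64_t pow2floor_sketch(uint64_t l)
    {
        while (l & (l - 1)) {   /* while more than one bit is set... */
            l &= l - 1;         /* ...clear the lowest set bit */
        }
        return l;
    }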
-
- 03 Sep 2013, 2 commits
-
Submitted by Andreas Färber
It was introduced to loop over CPUs from target-independent code, but since commit 182735ef the target-independent CPUState is used there. An open-coded loop can be considered more efficient than function calls inside a loop, and CPU_FOREACH() hides implementation details just as well, so use that instead.

Suggested-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Introduce CPU_FOREACH(), CPU_FOREACH_SAFE() and CPU_NEXT() shorthand macros.

Signed-off-by: Andreas Färber <afaerber@suse.de>
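As far as I recall, the shorthands are thin wrappers around the QTAILQ iteration macros over the global CPU list; the exact expansions may differ slightly and the surrounding headers are omitted, so take the block below as a hedged sketch followed by a typical use (the per-CPU function name is hypothetical):

    /* Approximate shape of the new macros (sketch, not the verbatim patch): */
    #define CPU_NEXT(cpu)        QTAILQ_NEXT(cpu, node)
    #define CPU_FOREACH(cpu)     QTAILQ_FOREACH(cpu, &cpus, node)
    #define CPU_FOREACH_SAFE(cpu, next_cpu) \
            QTAILQ_FOREACH_SAFE(cpu, &cpus, node, next_cpu)

    /* Typical replacement for a qemu_for_each_cpu() callback: */
    CPUState *cpu;
    CPU_FOREACH(cpu) {
        do_per_cpu_work(cpu);   /* hypothetical per-CPU helper */
    }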
-
- 01 Aug 2013, 1 commit
-
Submitted by Andreas Färber
Commit 1a1562f5 prepared a VMSTATE_CPU() macro for device-style VMStateDescription registration, but neglected to adapt cpu_exec_init(), so the "cpu_common" VMStateDescription was still being registered for AlphaCPU (fe31e737) and OpenRISCCPU (da697214). Fix this.

Cc: Richard Henderson <rth@twiddle.net>
Tested-by: Jia Liu <proljc@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
- 27 Jul 2013, 1 commit
-
Submitted by Stefan Weil
Passing a CPUState pointer instead of a CPUArchState pointer eliminates the last target-dependent data type in sysemu/kvm.h. It also simplifies the code.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
- 23 Jul 2013, 5 commits
-
Submitted by Alexander Graf
When a new thread gets created, we need to reset non-arch-specific state to get the new CPU into a clean state. However, this reset should happen before the arch-specific CPU contents get copied over; otherwise we end up with only clean reset state in our newly created thread.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
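A simplified illustration of the ordering constraint, using stand-in types rather than the actual linux-user code:

    #include <string.h>

    /* Illustrative stand-ins, not QEMU's real CPU structures. */
    typedef struct { int interrupt_request; int halted; } CommonStateSketch;
    typedef struct {
        CommonStateSketch common;
        long regs[32];
    } ArchStateSketch;

    static void clone_cpu_sketch(ArchStateSketch *child,
                                 const ArchStateSketch *parent)
    {
        /* Reset the non-arch-specific state first... */
        memset(&child->common, 0, sizeof(child->common));

        /* ...and only then copy the arch-specific contents over, so the
         * reset cannot clobber what was just copied. */
        memcpy(child->regs, parent->regs, sizeof(child->regs));
    }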
-
Submitted by Andreas Färber
Propagate X86CPU in kvmvapic for simplicity.

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Change the breakpoint_invalidate() argument to CPUState alongside. Since all targets now assign a softmmu-only field, we can drop the helpers cpu_class_set_{do_unassigned_access,vmsd}() and device_class_set_vmsd(). Prepares for changing the cpu_memory_rw_debug() argument to CPUState.

Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Use CPUState::env_ptr for now. Needed for GdbState::c_cpu.

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Prepares for changing the cpu_single_step() argument to CPUState.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
- 18 Jul 2013, 2 commits
-
Submitted by Paolo Bonzini
access_size_min can be 1 because erroneous accesses must not crash QEMU; they should trigger exceptions in the guest or just return garbage (depending on the CPU). I am not sure I understand the comment: placing a 4-byte field at the last byte of a region makes no sense (unless impl.unaligned is true), and that is why memory.c:access_with_adjusted_size does not bother with minimums larger than the remaining length. access_size_max can be mr->ops->valid.max_access_size because memory.c can and will still break accesses bigger than mr->ops->impl.max_access_size.

Reported-by: Markus Armbruster <armbru@redhat.com>
Tested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Peter Maydell
Commit e3127ae0 introduced a problem where we're passing a hwaddr* to qemu_ram_ptr_length() but it wants a ram_addr_t*; this will cause problems on 32-bit hosts and in any case provokes a clang warning on MacOSX:

      CC    arm-softmmu/exec.o
    exec.c:2164:46: warning: incompatible pointer types passing 'hwaddr *'
      (aka 'unsigned long long *') to parameter of type 'ram_addr_t *'
      (aka 'unsigned long *') [-Wincompatible-pointer-types]
        return qemu_ram_ptr_length(raddr + base, plen);
                                                 ^~~~
    exec.c:1392:63: note: passing argument to parameter 'size' here
    static void *qemu_ram_ptr_length(ram_addr_t addr, ram_addr_t *size)
                                                                  ^

Since this function is only used in one place, change its prototype to pass a hwaddr* rather than a ram_addr_t*, rather than contorting the calling code to get the type right.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Riku Voipio <riku.voipio@linaro.org>
Tested-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 15 Jul 2013, 1 commit
-
Submitted by Richard Henderson
Honor the implementation maximum access size, and at least check the minimum access size.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
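A rough, self-contained sketch of honoring a device's maximum access size by splitting one guest access into several narrower callbacks; the callback type, the little-endian combination of the partial results, and all names are illustrative assumptions, not the patch itself:

    #include <stdint.h>

    typedef uint64_t (*read_fn)(uint64_t addr, unsigned size);

    static uint64_t read_honoring_max(read_fn dev_read, uint64_t addr,
                                      unsigned size, unsigned impl_max)
    {
        uint64_t result = 0;
        unsigned done = 0;

        while (done < size) {
            unsigned chunk = size - done;
            if (chunk > impl_max) {
                chunk = impl_max;        /* never exceed the device's limit */
            }
            /* combine the partial reads little-endian style (illustrative) */
            result |= dev_read(addr + done, chunk) << (done * 8);
            done += chunk;
        }
        return result;
    }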
-
- 10 Jul 2013, 5 commits
-
Submitted by Andreas Färber
Since commit 878096ee (cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no longer needed. Add documentation and make the functions available through qemu/log.h outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h was not yet possible due to convoluted include paths, so that some devices grow an implicit and unneeded dependency on qom/cpu.h for now.

Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Move next_cpu from CPU_COMMON to CPUState. Move the first_cpu variable to qom/cpu.h. gdbstub needs to use CPUState::env_ptr for now. cpu_copy() no longer needs to save and restore next_cpu.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Rebased, simplified cpu_copy()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
Move it to qom/cpu.h.

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Markus Armbruster
The previous two commits fixed bugs in -machine option queries. I can't find fault with the remaining queries, but let's use qemu_get_machine_opts() everywhere, for consistency, simplicity and robustness.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1372943363-24081-7-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
-
Submitted by Stefan Weil
It seems to have been unused for several years (since commit be995c27 in 2006).

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Message-id: 1373044036-14443-1-git-send-email-sw@weilnetz.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
-
- 04 Jul 2013, 17 commits
-
Submitted by Paolo Bonzini
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
cur_map is not used anymore; instead, each AddressSpaceDispatch has its own nodes/sections pair. The priorities of the MemoryListeners, and in the future RCU, guarantee that the nodes/sections are not freed while they are still in use. (In fact, next_map itself is not needed except to free the data on the next update.) To avoid incorrect use, replace cur_map with a temporary copy that is only valid while the topology is being updated. If you use it, the name prev_map makes it clear that you're doing something weird.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
After this patch, AddressSpaceDispatch holds a consistent tuple of (phys_map, nodes, sections). This will be important when updates of the topology run concurrently with reads. cur_map is not used anymore except for freeing it at the end of the topology update.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
The same treatment previously applied to phys_node_map and phys_sections is now applied to the dispatch field of AddressSpace. Topology updates use as->next_dispatch while accesses use as->dispatch.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
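A minimal model of the begin/commit pattern being introduced here; the types and function names are stand-ins, and the real code builds a radix tree and (later in the series) defers the free with RCU:

    #include <stdlib.h>

    typedef struct DispatchSketch {
        int dummy;   /* the radix tree and sections would live here */
    } DispatchSketch;

    typedef struct {
        DispatchSketch *dispatch;        /* what readers use */
        DispatchSketch *next_dispatch;   /* built during a topology update */
    } AddressSpaceSketch;

    static void mem_begin_sketch(AddressSpaceSketch *as)
    {
        as->next_dispatch = calloc(1, sizeof(*as->next_dispatch));
    }

    static void mem_commit_sketch(AddressSpaceSketch *as)
    {
        DispatchSketch *old = as->dispatch;

        as->dispatch = as->next_dispatch;   /* publish the new table */
        as->next_dispatch = NULL;
        free(old);                          /* RCU would defer this free */
    }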
-
Submitted by Paolo Bonzini
This will help keep two copies of AddressSpaceDispatch alive during the recreation of the radix tree (one being built, and one that is complete and will be protected by RCU). We do not want to have to unregister and re-register the listener.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
Currently, phys_node_map and phys_sections are shared by all of the AddressSpaceDispatch instances. When updating the memory topology, all AddressSpaceDispatch instances rebuild their dispatch tables sequentially on them. In order to prepare for RCU access, leave the old memory map alive while the next one is being built. When rebuilding, the new dispatch tables build and look up next_map; after all dispatch tables are rebuilt, we can switch to next_* and free the previous table. Based on a patch from Liu Ping Fan.

Signed-off-by: Liu Ping Fan <qemulist@gmail.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Liu Ping Fan
Sections like phys_section_unassigned always have a fixed address in phys_sections. Declare them as macros so that we can use them when there is more than one phys_sections array.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Signed-off-by: Liu Ping Fan <qemulist@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
The iothread mutex might be released between map and unmap, so the mapped region might disappear.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
First of all, rename "todo" to "done". Second, clearly separate the case of done == 0 from the case of done != 0. This will help handling reference counting in the next patch. Third, this test:

    if (memory_region_get_ram_addr(mr) + xlat != raddr + todo) {

does not guarantee that the memory region is the same across two iterations of the while loop. For example, you could have two blocks:

  A) size 640 K, mapped at physical address 0, ram_addr_t 0
  B) size 64 K, mapped at physical address 0xa0000, ram_addr_t 0xa0000

then mapping 1 M starting at physical address zero would erroneously treat B as the continuation of block A. qemu_ram_ptr_length ensures that no invalid memory is accessed, but it is still a pointless complication of the algorithm. The patch makes the logic clearer with an explicit test that the memory region is the same.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
It will be needed in the next patch.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
After the next patch it would not be used elsewhere anyway. Also, the _nofail and the standard versions of this function return different things, which is confusing. Removing the function from the public headers limits the confusion.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
This function is not used outside the iothread mutex, so it can use ram_list.mru_block.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
Add ref/unref calls at the following places:
- places where memory regions are stashed by a listener and used outside the BQL (including in Xen or KVM);
- memory_region_find callsites;
- creation of aliases and containers (only the aliased/contained region gets a reference, to avoid loops);
- around calls to del_subregion/add_subregion, where the region could disappear after the first call.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
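A toy model of the reference-counting discipline listed above; the type and function names are stand-ins (the real calls are memory_region_ref()/memory_region_unref() on a MemoryRegion):

    #include <stdlib.h>

    typedef struct { int refcount; } RegionSketch;

    static void region_ref(RegionSketch *r)
    {
        r->refcount++;
    }

    static void region_unref(RegionSketch *r)
    {
        if (--r->refcount == 0) {
            free(r);    /* last user gone: the region may be released */
        }
    }

    /* A listener that stashes a region for use outside the BQL pins it
     * first and releases it when done, so it cannot disappear in between. */
    static void stash_and_use(RegionSketch *r)
    {
        region_ref(r);
        /* ... access the region without holding the big QEMU lock ... */
        region_unref(r);
    }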
-
Submitted by Paolo Bonzini
Do not bother visiting the radix tree when an address space is destroyed. After the previous patch, this has become a pointless exercise. When called from address_space_destroy_dispatch, all you're doing is zeroing out a structure that will be freed as soon as you come back. When called from mem_begin, the radix tree's array will be zeroed again anyway once phys_page_set_level calls phys_map_node_alloc.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
phys_sections_clear is invoked after the dispatch tree has been destroyed. This leaves a window where phys_sections_nb > 0 but the subpages are not valid anymore, which is a recipe for use-after-free bugs. Move the destruction of subpages into phys_sections_clear. We will still destroy the subpages when an address space is cleaned up, because address_space_destroy will clear as->root and commit the change before it calls address_space_destroy_dispatch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Jan Kiszka
The current ioport dispatcher is a complex beast, mostly due to the need to deal with old portio interface users. But we can overcome it without converting all portio users by embedding the required base address of a MemoryRegionPortio access into that data structure. That removes the need to have the additional MemoryRegionIORange structure in the loop on every access. To handle old portio memory ops, we simply install dispatching handlers for portio memory regions when registering them with the memory core. This removes the need for the old_portio field. We can drop the additional aliasing of ioport regions and also the special address space listener. cpu_in and cpu_out now simply call address_space_read/write. And we can concentrate portio handling in a single source file.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
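Conceptually, the simplified port-I/O path looks roughly like the sketch below; the signatures are approximate (they have changed across QEMU versions), the includes are omitted, and the bodies are illustrative rather than the patch itself:

    /* Port output: a plain write into the I/O address space. */
    void cpu_outb(uint32_t addr, uint8_t val)
    {
        address_space_write(&address_space_io, addr, &val, 1);
    }

    /* Port input: a plain read from the I/O address space. */
    uint8_t cpu_inb(uint32_t addr)
    {
        uint8_t val;

        address_space_read(&address_space_io, addr, &val, 1);
        return val;
    }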
-
- 28 Jun 2013, 3 commits
-
Submitted by Andreas Färber
Make the cpustats monitor command available unconditionally. Prepares for changing the kvm_handle_internal_error() and kvm_cpu_exec() arguments to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
It no longer depends on CPUArchState, so move it to qom/cpu.c. Prepares for changing GDBState::c_cpu to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Andreas Färber
To be used to embed common CPU state into CPU subclasses.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-