- 23 March 2016, 1 commit
-
-
Submitted by Peter Maydell

Improve the TB execution logging so that it is easier to identify what is happening from trace logs:

* move the "Trace" logging of executed TBs into cpu_tb_exec() so that it is emitted if and only if we actually execute a TB, and for consistency with the CPU state logging
* log when we link two TBs together via tb_add_jump()
* log when cpu_tb_exec() returns early from a chain of TBs

The new style logging looks like this:

  Trace 0x7fb7cc822ca0 [ffffffc0000dce00]
  Linking TBs 0x7fb7cc822ca0 [ffffffc0000dce00] index 0 -> 0x7fb7cc823110 [ffffffc0000dce10]
  Trace 0x7fb7cc823110 [ffffffc0000dce10]
  Trace 0x7fb7cc823420 [ffffffc000302688]
  Trace 0x7fb7cc8234a0 [ffffffc000302698]
  Trace 0x7fb7cc823520 [ffffffc0003026a4]
  Trace 0x7fb7cc823560 [ffffffc0000dce44]
  Linking TBs 0x7fb7cc823560 [ffffffc0000dce44] index 1 -> 0x7fb7cc8235d0 [ffffffc0000dce70]
  Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
  Stopped execution of TB chain before 0x7fb7cc8235d0 [ffffffc0000dce70]
  Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
  Trace 0x7fb7cc822fd0 [ffffffc0000dd52c]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[AJB: reword patch title, Abandoned->Stopped]
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1458052224-9316-6-git-send-email-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
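For orientation, a minimal sketch of how such a trace line can be emitted with QEMU's standard logging facility (qemu_log_mask() with the CPU_LOG_EXEC mask). The format strings and the tc_ptr/pc field names follow the QEMU of this era and are an approximation, not the verbatim patch:

```c
/* Sketch only: emit the per-TB "Trace" line right before entering a TB,
 * so it appears if and only if the TB is actually executed.
 * Assumes QEMU's internal headers (qemu/log.h, exec/exec-all.h). */
static inline void log_tb_entry(const TranslationBlock *itb)
{
    /* CPU_LOG_EXEC corresponds to "-d exec" on the command line. */
    qemu_log_mask(CPU_LOG_EXEC, "Trace %p [" TARGET_FMT_lx "]\n",
                  itb->tc_ptr, itb->pc);
}

static inline void log_tb_link(const TranslationBlock *tb, int n,
                               const TranslationBlock *tb_next)
{
    /* Emitted when jump slot n of tb is patched to point at tb_next. */
    qemu_log_mask(CPU_LOG_EXEC,
                  "Linking TBs %p [" TARGET_FMT_lx "] index %d -> %p [" TARGET_FMT_lx "]\n",
                  tb->tc_ptr, tb->pc, n, tb_next->tc_ptr, tb_next->pc);
}
```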
-
- 21 January 2016, 6 commits
-
-
Submitted by Peter Maydell

Add a function to return the AddressSpace for a CPU based on its numerical index. (Callers outside exec.c don't have access to the CPUAddressSpace struct, so they can't just fish it out of the CPUState struct directly.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
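A rough sketch of the accessor this adds, based on the description above; the internal cpu_ases/num_ases field names and the sanity check are assumptions about the exec.c internals of this era:

```c
/* Sketch: return the AddressSpace registered for this CPU at index asidx.
 * CPUAddressSpace itself stays private to exec.c; callers only ever see
 * the AddressSpace pointer. */
AddressSpace *cpu_get_address_space(CPUState *cpu, int asidx)
{
    /* Only indexes that were actually registered may be asked for. */
    assert(asidx >= 0 && asidx < cpu->num_ases);
    return cpu->cpu_ases[asidx].as;
}
```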
-
Submitted by Peter Maydell

Pass the MemTxAttrs for the memory access to iotlb_to_region(); this allows it to determine the correct AddressSpace to use for the lookup.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
-
Submitted by Peter Maydell

When looking up the MemoryRegionSection for the new TLB entry in tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine the correct address space index for the lookup, and pass it into address_space_translate_for_iotlb().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
-
Submitted by Peter Maydell

Add documentation comments for tlb_set_page_with_attrs() and tlb_set_page().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
-
Submitted by Peter Maydell

Allow multiple calls to cpu_address_space_init(); each call adds an entry to the cpu->ases array at the specified index. It is up to the target-specific CPU code to actually use these extra address spaces. Since this multiple-AddressSpace support won't work with KVM, add an assertion to avoid confusing failures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
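A hedged usage sketch: a target CPU registering two address spaces during realize. The helper name and the choice of indexes are hypothetical, and the cpu_address_space_init() argument order shown (cpu, as, index) reflects the interface of this era; later QEMU releases changed the prototype:

```c
/* Sketch: register two address spaces for one CPU. Index 0 conventionally
 * remains the CPU's default view; extra indexes are target-defined. */
static void example_register_cpu_ases(CPUState *cs,
                                      AddressSpace *as_default,
                                      AddressSpace *as_secure)
{
    cpu_address_space_init(cs, as_default, 0);
    cpu_address_space_init(cs, as_secure, 1);
    /* The target's MMU code then selects an index per access,
     * e.g. via its cpu_asidx_from_attrs() hook. */
}
```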
-
Submitted by Peter Maydell

Rather than setting cpu->as unconditionally in cpu_exec_init (and then having target-i386 override this later), don't set it until the first call to cpu_address_space_init. This requires us to initialise the address space for both TCG and KVM (KVM doesn't need the AS listener but it does require cpu->as to be set). For target CPUs which don't set up any address spaces (currently everything except i386), add the default address_space_memory in qemu_init_vcpu().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
-
- 10 November 2015, 1 commit
-
-
Submitted by Dr. David Alan Gilbert

The HOST_PAGE_ALIGN macros don't work until the page size variables have been set up; later in postcopy I use those macros in the RAM code, and it can be triggered using -object. Fix this by calling page_size_init() earlier - it's currently called inside the accelerators, so move it up into vl.c.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 05 November 2015, 1 commit
-
-
Submitted by Pavel Dovgalyuk

This patch is required for deterministic replay to generate an exception by trying to execute an instruction without changing icount. It adds a new flag to the TB for disabling icount while translating it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162359.8676.77011.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 13 October 2015, 2 commits
-
-
Submitted by Paolo Bonzini

The header is included from basically everywhere, thanks to cpu.h. It should be moved to the (TCG only) files that actually need it. As a start, remove non-TCG stuff.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Peter Maydell

Gather up all the fields currently in CPUState which deal with the CPU's AddressSpace into a separate CPUAddressSpace struct. This paves the way for allowing the CPU to know about more than one AddressSpace. The rearrangement also allows us to make the MemoryListener a directly embedded object in the CPUAddressSpace (it could not be embedded in CPUState because 'struct MemoryListener' isn't defined for the user-only builds). This allows us to resolve the FIXME in tcg_commit() by going directly from the MemoryListener to the CPUAddressSpace.

This patch extracts the actual update of the cached dispatch pointer from cpu_reload_memory_map() (which is renamed accordingly to cpu_reloading_memory_map(), as it is only responsible for breaking cpu-exec.c's RCU critical section now). This lets us keep the definition of the CPUAddressSpace struct private to exec.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1443709790-25180-4-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
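From the description above, the exec.c-private struct looks roughly like this; the field names are inferred from the commit message and may not match the source exactly:

```c
/* Sketch of the CPUAddressSpace described above: one per (CPU, AddressSpace)
 * pair, with the MemoryListener embedded directly and the dispatch pointer
 * cached for fast lookups from the TCG fast path. */
struct CPUAddressSpace {
    CPUState *cpu;                                /* owning CPU                */
    AddressSpace *as;                             /* the address space itself  */
    struct AddressSpaceDispatch *memory_dispatch; /* cached dispatch pointer   */
    MemoryListener tcg_as_listener;               /* keeps the cache up to date */
};
```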
-
- 07 October 2015, 6 commits
-
-
Submitted by Richard Henderson

At present, the "average" guestimate of TB size is way too small, leading to many unused entries in the pre-allocated TB array. For a guest with 1GB ram, we're currently allocating 256MB for the array. Survey arm, alpha, aarch64, ppc, sparc, i686, x86_64 guests running on x86_64 and ppc64 hosts and select a new average. The size of the array drops to 81MB with no more flushing than before.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Submitted by Richard Henderson

We currently pre-compute a worst-case code size for any TB, which works out to be 122kB. Since the average TB size is near 1kB, this wastes quite a lot of storage. Instead, check for overflow in between generating code for each opcode. The overhead of the check isn't measurable and the wastage is minimized.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Submitted by Richard Henderson

It is no longer used, so tidy up everything reached by it. This includes the gen_opc_* arrays, the search_pc parameter and the inline gen_intermediate_code_internal functions.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Submitted by Richard Henderson

We can now restore state without retranslation.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Submitted by Richard Henderson

The gen_opc_* arrays are already redundant with the data stored in the insn_start arguments. Transition restore_state_to_opc to use data from the latter.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Submitted by Richard Henderson

As it is the only caller, this tidies things a bit.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 16 September 2015, 1 commit
-
-
Submitted by Peter Crosthwaite

Move the architecture-agnostic function prototypes for exec.c out of cputlb.h to exec-all.h. This allows hiding the arch-specific cputlb.h from exec.c, which should be getting close to having no architecture specifics. Prepares support for multi-arch, which will have a minimal cpu.h that services exec.c but not cputlb.h.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <b4fe754c58c860315e35d44430c26b1c967ce2c9.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 11 September 2015, 1 commit
-
-
Submitted by Pavel Dovgalyuk

This patch introduces a loop exit function, which also restores the guest CPU state according to the value of the host program counter.

Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150710095702.13280.97477.stgit@PASHA-ISP>
Signed-off-by: Richard Henderson <rth@twiddle.net>
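A minimal sketch of how a TCG helper would use such a function, assuming the (CPUState *, uintptr_t host_pc) shape described above. The helper name and exception number are placeholders, and ENV_GET_CPU() is the env-to-CPUState idiom of this era:

```c
/* Sketch: raise a guest exception from a helper and unwind to the main
 * loop, letting QEMU restore the guest CPU state that corresponds to the
 * host return address captured by GETPC(). */
void helper_example_trap(CPUArchState *env, uint32_t excp)
{
    CPUState *cs = ENV_GET_CPU(env);

    cs->exception_index = excp;
    cpu_loop_exit_restore(cs, GETPC());   /* does not return */
}
```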
-
- 09 September 2015, 4 commits
-
-
Submitted by Paolo Bonzini

There is some iffy lock hierarchy going on in translate-all.c. To fix it, we need to take the mmap_lock in cpu-exec.c. Make the functions globally available.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by KONRAD Frederic

spinlock is only used in two cases:

* cpu-exec.c: to protect TranslationBlock
* mem_helper.c: for the lock helper in target-i386 (which seems broken).

It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly, with an #ifdef. The #ifdef will be removed when multithreaded TCG will need the mutex as well.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[Merge Emilio G. Cota's patch to remove volatile. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
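A sketch of the substitution being described, assuming the lock protecting TranslationBlock state ends up as a plain QemuMutex; the wrapper names are hypothetical, not the patch's own:

```c
/* Sketch: the TB lock as a QemuMutex rather than the old spinlock type.
 * qemu_mutex_init() must run once at startup before the lock is used. */
#include "qemu/thread.h"

static QemuMutex tb_lock;

static void tb_lock_init(void)
{
    qemu_mutex_init(&tb_lock);
}

static void tb_lock_acquire(void)
{
    qemu_mutex_lock(&tb_lock);
}

static void tb_lock_release(void)
{
    qemu_mutex_unlock(&tb_lock);
}
```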
-
Submitted by Paolo Bonzini

Signals are slow and do not exist on Win32. The previous patches have done most of the legwork to introduce memory barriers (some of them were even there already for the sake of Windows!) and we can now set the flags directly in the iothread.

qemu_cpu_kick_thread is not used anymore on TCG, since the TCG thread is never outside usermode while the CPU is running (not halted). Instead, run the content of the signal handler (now in qemu_cpu_kick_no_halt) directly. qemu_cpu_kick_no_halt is also used in qemu_mutex_lock_iothread to avoid the overhead of qemu_cond_broadcast.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini

This is already useful on Windows in order to remove tls.h, because accesses to current_cpu are done from a different thread on that platform. It will be used on POSIX platforms as soon as TCG stops using signals to interrupt the execution of translated code.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 27 August 2015, 1 commit
-
-
Submitted by Peter Crosthwaite

This subtraction of return addresses applies directly to TCI as well as host-TCG. This fixes Linux boots for at least Microblaze, CRIS, ARM and SH4 when using TCI.

[sw: Removed indentation for preprocessor statement]
[sw: The patch also fixes Linux boot for x86_64]

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
-
- 25 August 2015, 1 commit
-
-
Submitted by Peter Maydell

Guest CPU TLB maintenance operations may be sufficiently specialized to only need to flush TLB entries corresponding to a particular MMU index. Implement cputlb functions for this, to avoid the inefficiency of flushing TLB entries which we don't need to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1439548879-1972-2-git-send-email-peter.maydell@linaro.org
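A hedged usage sketch of the per-MMU-index flush, as a target's TLB-maintenance emulation might call it. The bitmap-style prototype shown here matches later QEMU releases; the original 2015 interface used a different calling convention, so treat the exact signature as version-dependent:

```c
/* Sketch: flush only the TLB entries belonging to two MMU indexes,
 * leaving all other cached translations intact. */
static void example_flush_two_mmu_indexes(CPUState *cs)
{
    uint16_t idxmap = (1 << 0) | (1 << 1);   /* hypothetical MMU indexes */

    tlb_flush_by_mmuidx(cs, idxmap);
}
```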
-
- 15 August 2015, 1 commit
-
-
Submitted by Paolo Bonzini

After commit 626cf8f4 (icount: set can_do_io outside TB execution, 2014-12-08), can_do_io is set to 1 if not executing code. It is no longer necessary to make this assumption in cpu_can_do_io.

It is also possible to remove the use_icount test, simply by never setting cpu->can_do_io to 0 unless use_icount is true. With these changes cpu_can_do_io boils down to a read of cpu->can_do_io.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
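In other words, after this change the helper is essentially a field read; a sketch of the resulting shape (not the verbatim source):

```c
/* Sketch: with can_do_io maintained eagerly around TB execution, the
 * check needs no use_icount or "are we inside a TB?" reasoning any more. */
static inline bool cpu_can_do_io(CPUState *cpu)
{
    return cpu->can_do_io;
}
```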
-
- 06 August 2015, 1 commit
-
-
Submitted by Sergey Fedorov

Instead of invalidating an original TB in cpu_exec_nocache() prematurely, just save a link to it in the temporary generated TB. If cpu_io_recompile() is raised subsequently from the temporary TB, invalidate the original one as well. That allows reusing the original TB each time cpu_exec_nocache() is called to handle an expired instruction counter in icount mode.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-Id: <1435656909-29116-1-git-send-email-serge.fdrv@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 09 July 2015, 3 commits
-
-
Submitted by Peter Crosthwaite

The callers (most of them in target-foo/cpu.c) of this function all have the cpu pointer handy. Just pass it to avoid an ENV_GET_CPU() from core code (in exec.c).

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Peter Crosthwaite

All of the core-code usages of this API have the cpu pointer handy, so pass it in. There are only 3 architecture-specific usages (2 of which are commented out), which can just use ENV_GET_CPU() locally to get the cpu pointer. This reduces core-code usage of the CPU env, which brings us closer to common-obj'ing these core files.

Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Submitted by Bharata B Rao

Add an Error argument to cpu_exec_init() to let users collect the error. This is in preparation for changing the CPU enumeration logic in cpu_exec_init(). With the new enumeration logic, cpu_exec_init() can fail if cpu_index values corresponding to max_cpus have already been handed out. Since all current callers of cpu_exec_init() are from instance_init, use the error_abort Error argument to abort in case of an error.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
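A usage sketch from a hypothetical target's instance_init, assuming the post-series signature cpu_exec_init(CPUState *, Error **); as the commit notes, instance_init callers pass &error_abort:

```c
/* Sketch: instance_init of a hypothetical target CPU. Failure to obtain a
 * cpu_index is fatal at this point, hence &error_abort. */
static void example_cpu_initfn(Object *obj)
{
    CPUState *cs = CPU(obj);

    cpu_exec_init(cs, &error_abort);
    /* ... target-specific field initialisation follows ... */
}
```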
-
- 07 July 2015, 1 commit
-
-
Submitted by Li Zhijian

Previously, if we hotplug a device (e.g. device_add e1000) while migration is in progress on the source side, qemu will add a new ram block but migration_bitmap is not extended. In this case, migration_bitmap will overflow and cause qemu to abort unexpectedly.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 26 June 2015, 1 commit
-
-
Submitted by Peter Crosthwaite

This is one of very few things in exec-all with a genuine CPU architecture dependency. Move these hashing helpers to a new header to trim exec-all.h down to a near architecture-agnostic header. The defs are only used by cpu-exec and translate-all, which are both arch-objs, so the new tb-hash.h has no core code usage.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <9d048b96f7cfa64a4d9c0b88e0dd2877fac51d41.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 11 June 2015, 1 commit
-
-
Submitted by Yongbok Kim

Probe for whether the specified guest write access is permitted. If it is not permitted then an exception will be taken in the same way as if this were a real write access (and we will not return). Otherwise the function will return, and there will be a valid entry in the TLB for this access.

Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
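A hedged sketch of how a target helper uses such a probe. The exact probe_write() prototype varies between QEMU versions (notably whether a size argument is present), and the helper name is a placeholder:

```c
/* Sketch: validate a guest store before performing a multi-step operation.
 * Either the call returns (access permitted, TLB entry present) or it
 * raises the guest exception itself and never returns. */
void helper_example_check_store(CPUArchState *env, target_ulong addr,
                                uint32_t mmu_idx)
{
    probe_write(env, addr, mmu_idx, GETPC());
    /* Reaching this point means the subsequent store cannot fault. */
}
```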
-
- 05 June 2015, 1 commit
-
-
Submitted by Paolo Bonzini

Remove them from the sundry exec-all.h header, since they are only used by the TCG runtime in exec.c and user-exec.c.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 26 April 2015, 2 commits
-
-
Submitted by Peter Maydell

Add a MemTxAttrs field to the IOTLB, and allow target-specific code to set it via a new tlb_set_page_with_attrs() function; pass the attributes through to the device when making IO accesses.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
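A sketch of target code installing a TLB entry with transaction attributes attached, assuming the (cpu, vaddr, paddr, attrs, prot, mmu_idx, size) prototype of this era; the "secure" attribute and helper name are illustrative:

```c
/* Sketch: install a page mapping whose IO accesses will carry a "secure"
 * transaction attribute through to the device. */
static void example_install_tlb_entry(CPUState *cs, target_ulong vaddr,
                                      hwaddr paddr, int prot, int mmu_idx,
                                      bool secure)
{
    MemTxAttrs attrs = { .secure = secure };

    tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot,
                            mmu_idx, TARGET_PAGE_SIZE);
}
```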
-
Submitted by Peter Maydell

Rather than retaining io_mem_read/write as simple wrappers around the memory_region_dispatch_read/write functions, make the latter public and change all the callers to use them, since we need to touch all the callsites anyway to add MemTxAttrs and MemTxResult support. Delete io_mem_read and io_mem_write entirely. (All the callers currently pass MEMTXATTRS_UNSPECIFIED and convert the return value back to bool or ignore it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
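A hedged sketch of a converted call site. In this era the access size was a plain byte count and the result a MemTxResult; later QEMU changed the size parameter to a MemOp, so treat the prototype as version-dependent:

```c
/* Sketch: a 4-byte MMIO read through the now-public dispatch function,
 * mirroring how the old io_mem_read() wrapper behaved. */
static uint64_t example_io_read32(MemoryRegion *mr, hwaddr addr)
{
    uint64_t val = 0;

    if (memory_region_dispatch_read(mr, addr, &val, 4,
                                    MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
        /* Old callers folded this into a bool or ignored it entirely. */
    }
    return val;
}
```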
-
- 17 February 2015, 3 commits
-
-
Submitted by Paolo Bonzini

Note that even after this patch, most callers of address_space_* functions must still be under the big QEMU lock, otherwise the memory region returned by address_space_translate can disappear as soon as address_space_translate returns. This will be fixed in the next part of this series.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini

After the previous patch, TLBs will be flushed on every change to the memory mapping. This patch augments that with synchronization of the MemoryRegionSections referred to in the iotlb array. With this change, it is guaranteed that iotlb_to_region will access the correct memory map, even once the TLB is accessed outside the BQL.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Paolo Bonzini

This for now is a simple TLB flush. This can change later for two reasons:

1) an AddressSpaceDispatch will be cached in the CPUState object

2) it will not be possible to do tlb_flush once the TCG-generated code runs outside the BQL.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 23 December 2014, 1 commit
-
-
Submitted by Paolo Bonzini

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-