- 27 June 2018, 1 commit

Committed by Peter Maydell:
Add support for MMU protection regions that are smaller than TARGET_PAGE_SIZE. We do this by marking the TLB entry for those pages with a flag, TLB_RECHECK. This flag causes us to always take the slow path for accesses. In the slow path we can then special-case them to always call tlb_fill() again, so we have the correct information for the exact address being accessed.

This change allows us to handle reading and writing from small regions; we cannot deal with execution from the small region.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
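To make the mechanism concrete, here is a minimal standalone sketch of the idea, assuming an invented 4K page size and flag-bit position; it models how reusing low bits of the cached page address as flags forces flagged pages off the fast path (this is not the actual QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_MASK    (~(uint64_t)0xfff)   /* assumed 4K pages */
    #define TLB_RECHECK  (1u << 0)            /* flag bit chosen for this sketch */

    typedef struct {
        uint64_t addr_read;   /* page address of the cached mapping, plus flags */
    } TLBEntryModel;

    /* Any flag bit set in addr_read makes this comparison fail, so a
     * TLB_RECHECK-marked page always misses the fast path and drops into
     * the slow path, where tlb_fill() can be re-run for the exact
     * address inside the sub-page region. */
    static bool fast_path_hit(const TLBEntryModel *e, uint64_t addr)
    {
        return (addr & PAGE_MASK) == e->addr_read;
    }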
- 16 June 2018, 1 commit

Committed by Emilio G. Cota:
The acquisition of tb_lock was added when the async tlb_flush was introduced in e3b9ca81 ("cputlb: introduce tlb_flush_* async work."). tb_lock was there to allow us to do memset() on the tb_jmp_cache's. However, since f3ced3c5 ("tcg: consistently access cpu->tb_jmp_cache atomically") all accesses to tb_jmp_cache are atomic, so tb_lock is not needed here. Get rid of it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
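A standalone C11 model of the pattern this relies on: once every reader and writer goes through atomic operations, a lock-free clear is safe (the QEMU equivalent is cpu_tb_jmp_cache_clear; the size and names here are illustrative):

    #include <stdatomic.h>
    #include <stddef.h>

    #define TB_JMP_CACHE_SIZE 4096   /* assumed cache size for this sketch */

    struct TranslationBlock;
    static _Atomic(struct TranslationBlock *) tb_jmp_cache[TB_JMP_CACHE_SIZE];

    /* Clearing entry by entry with atomic stores replaces the old
     * tb_lock-protected memset(); concurrent atomic readers can no
     * longer observe a torn pointer. */
    static void tb_jmp_cache_clear(void)
    {
        for (size_t i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_store_explicit(&tb_jmp_cache[i], NULL,
                                  memory_order_relaxed);
        }
    }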
- 15 June 2018, 3 commits

Committed by Peter Maydell:
Currently we don't support board configurations that put an IOMMU in the path of the CPU's memory transactions, and instead just assert() if the memory region found in address_space_translate_for_iotlb() is an IOMMUMemoryRegion.

Remove this limitation by having the function handle IOMMUs. This is mostly straightforward, but we must make sure we have a notifier registered for every IOMMU that a transaction has passed through, so that we can flush the TLB appropriately when any of the IOMMUs change their mappings.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180604152941.20374-5-peter.maydell@linaro.org
Committed by Peter Maydell:
The API for cpu_transaction_failed() says that it takes the physical address for the failed transaction. However we were actually passing it the offset within the target MemoryRegion. We don't currently have any target CPU implementations of this hook that require the physical address; fix this bug so we don't get confused if we ever do add one.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-3-peter.maydell@linaro.org
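A one-line model of the arithmetic involved, with invented parameter names (in QEMU the values come from the MemoryRegionSection): the fix amounts to rebasing the region-relative offset onto the region's position in the address space before calling the hook:

    #include <stdint.h>

    typedef uint64_t hwaddr;

    /* Hypothetical helper: turn a MemoryRegion-relative offset into the
     * full physical address that cpu_transaction_failed() expects. */
    static hwaddr physaddr_of_failed_access(hwaddr mr_offset,
                                            hwaddr region_base_in_as)
    {
        return region_base_in_as + mr_offset;
    }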
Committed by Peter Maydell:
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious use; add a comment documenting it (reverse-engineered from what the code that sets it is doing).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
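For reference, a paraphrase of the kind of comment the commit adds (not the verbatim QEMU text; type names simplified):

    #include <stdint.h>

    typedef uint64_t hwaddr;

    typedef struct CPUIOTLBEntry {
        /*
         * @addr contains (roughly):
         *  - in the low TARGET_PAGE_BITS, a physical section number
         *  - with those bits masked off, an offset which, added to the
         *    virtual address, yields either the ram_addr_t of the
         *    target RAM (for the notdirty/ROM sections) or the offset
         *    within the target MemoryRegion (otherwise)
         */
        hwaddr addr;
    } CPUIOTLBEntry;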
- 25 January 2018, 1 commit

Committed by Laurent Vivier:
The MC68040 MMU provides the size of the access that triggers the page fault. This size is set in the Special Status Word, which is written in the stack frame of the access fault exception. So we need the size in m68k_cpu_unassigned_access() and m68k_cpu_handle_mmu_fault().

To be able to do that, this patch modifies the prototype of the handle_mmu_fault handler, tlb_fill() and probe_write(). do_unassigned_access() already includes a size parameter. This patch also updates the handle_mmu_fault handlers and tlb_fill() of all targets (parameter only, no code change).

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180118193846.24953-2-laurent@vivier.eu>
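A before/after signature sketch of the tlb_fill() part of the change, with the new parameter placed per the commit description (exact prototypes may vary slightly between QEMU versions):

    /* before: the handler cannot tell how wide the faulting access was */
    void tlb_fill(CPUState *cs, target_ulong addr,
                  MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);

    /* after: the access size is threaded through, so a target like m68k
     * can record it (e.g. in the MC68040 Special Status Word) */
    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);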
- 21 November 2017, 1 commit

Committed by Peter Maydell:
To do a write to memory that is marked as notdirty, we need to invalidate any TBs we have cached for that memory, and update the cpu physical memory dirty flags for VGA and migration. The slowpath code in notdirty_mem_write() does all this correctly, but the new atomic handling code in atomic_mmu_lookup() doesn't do anything at all, it just clears the dirty bit in the TLB.

The effect of this bug is that if the first write to a notdirty page for which we have cached TBs is by a guest atomic access, we fail to invalidate the TBs and subsequently will execute incorrect code. This can be seen by trying to run 'javac' on AArch64.

Use the new notdirty_call_before() and notdirty_call_after() functions to correctly handle the update to notdirty memory in the atomic codepath.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1511201308-23580-3-git-send-email-peter.maydell@linaro.org
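A simplified standalone model of the bracketing pattern: the function and type names follow the commit text, but the bodies are placeholders, not QEMU's implementation:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uintptr_t ram_addr;
        bool active;
    } NotDirtyInfo;   /* assumed helper-state type */

    /* Before the atomic write: invalidate TBs cached for the page, the
     * work the slow store path (notdirty_mem_write) already performs. */
    static void notdirty_call_before(NotDirtyInfo *ndi, uintptr_t ram_addr)
    {
        ndi->ram_addr = ram_addr;
        ndi->active = true;
        /* in QEMU, TB invalidation for the page would run here */
    }

    /* After the atomic write: update the VGA/migration dirty bitmaps. */
    static void notdirty_call_after(NotDirtyInfo *ndi)
    {
        if (ndi->active) {
            /* in QEMU, the dirty-bitmap update would run here */
            ndi->active = false;
        }
    }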
- 15 November 2017, 1 commit

Committed by Richard Henderson:
When we handle a signal from a fault within a user-only memory helper, we cannot cpu_restore_state with the PC found within the signal frame. Use a TLS variable, helper_retaddr, to record the unwind start point to find the faulting guest insn.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
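A minimal sketch of the mechanism, assuming simplified surroundings (helper_retaddr is the variable named in the commit; everything else is illustrative):

    #include <stdint.h>

    /* Thread-local: non-zero while a user-only memory helper is running. */
    static _Thread_local uintptr_t helper_retaddr;

    /* A helper records its host return address on entry; if a SIGSEGV
     * arrives mid-helper, the signal handler unwinds from this value
     * instead of from the PC in the signal frame, which points into the
     * helper rather than into generated code. */
    static void helper_template(uintptr_t ra)
    {
        helper_retaddr = ra;
        /* ... perform the guest memory access ... */
        helper_retaddr = 0;
    }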
- 20 October 2017, 1 commit

Committed by David Hildenbrand:
Background: s390x implements Low-Address Protection (LAP). If LAP is enabled, writing to effective addresses (before any translation) 0-511 and 4096-4607 triggers a protection exception. So we have subpage protection on the first two pages of every address space (where the lowcore, the CPU's private data, resides).

By immediately invalidating the write entry but allowing the caller to continue, we force every write access onto these first two pages into the slow path. We will get a TLB fault with the specific accessed address and can then evaluate whether protection applies or not. We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
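A toy model of the trick, with invented flag and page constants (not the s390x code): fill the TLB entry but leave it marked invalid whenever the page could be subject to LAP, so every write re-enters the slow path where the exact effective address can be checked:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_MASK    (~(uint64_t)0xfff)   /* assumed 4K pages */
    #define TLB_INVALID  (1u << 0)            /* illustrative flag bit */

    typedef struct { uint64_t addr_write; } TLBEntryModel;

    /* LAP covers effective addresses 0-511 and 4096-4607, i.e. parts of
     * the first two pages, so those two pages always take the slow path. */
    static bool lap_may_apply(uint64_t page_addr)
    {
        return page_addr < 0x2000;
    }

    static void tlb_set_write_entry(TLBEntryModel *e, uint64_t vaddr)
    {
        uint64_t page = vaddr & PAGE_MASK;
        e->addr_write = page | (lap_may_apply(page) ? TLB_INVALID : 0);
    }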
- 10 October 2017, 1 commit

Committed by Emilio G. Cota:
Commit f0aff0f1 ("cputlb: add assert_cpu_is_self checks") buried the increment of tlb_flush_count under TLB_DEBUG. This results in "info jit" always (mis)reporting 0 TLB flushes when !TLB_DEBUG.

Besides, under MTTCG tlb_flush_count is updated by several threads, so in order not to lose counts we'd either have to use atomic ops or distribute the counter, which is more scalable. This patch does the latter by embedding tlb_flush_count in CPUArchState. The global count is then easily obtained by iterating over the CPU list.

Note that this change also requires updating the accessors to tlb_flush_count to use atomic_read/set whenever there may be conflicting accesses (as defined in C11) to it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
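A standalone C11 model of the distributed counter: each vCPU is the sole writer of its own count, so relaxed atomics suffice, and the global figure is the sum over all vCPUs (the structure name is invented):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint64_t tlb_flush_count;   /* single writer: the owning vCPU */
    } PerCPUCounters;

    /* Called from the owning vCPU thread only. */
    static void count_tlb_flush(PerCPUCounters *c)
    {
        atomic_fetch_add_explicit(&c->tlb_flush_count, 1,
                                  memory_order_relaxed);
    }

    /* Called from any thread, e.g. to answer "info jit". */
    static uint64_t total_tlb_flushes(PerCPUCounters *cpus, size_t n)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++) {
            total += atomic_load_explicit(&cpus[i].tlb_flush_count,
                                          memory_order_relaxed);
        }
        return total;
    }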
- 26 September 2017, 1 commit

Committed by Alex Bennée:
The mmio path (see exec.c:prepare_mmio_access) already protects itself against recursive locking, and it makes sense to do the same for io_readx/writex. Otherwise any helper running in the BQL context will assert when it attempts to write to device memory, as in the case of the bug report.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
CC: Richard Jones <rjones@redhat.com>
CC: Paolo Bonzini <bonzini@gnu.org>
CC: qemu-stable@nongnu.org
Message-Id: <20170921110625.9500-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
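A standalone pthreads model of the guard, with a thread-local flag standing in for QEMU's qemu_mutex_iothread_locked() predicate (all names here are stand-ins):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static _Thread_local bool big_lock_held;   /* "do I hold it?" flag */

    /* Take the lock only if this thread does not already hold it, so a
     * helper already running under the lock can still touch device
     * memory without deadlocking or tripping an assertion. */
    static void device_access(void (*fn)(void *), void *opaque)
    {
        bool locked_here = false;

        if (!big_lock_held) {
            pthread_mutex_lock(&big_lock);
            big_lock_held = true;
            locked_here = true;
        }
        fn(opaque);   /* the actual MMIO read or write */
        if (locked_here) {
            big_lock_held = false;
            pthread_mutex_unlock(&big_lock);
        }
    }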
- 04 September 2017, 1 commit

Committed by Peter Maydell:
Call the new cpu_transaction_failed() hook at the places where CPU generated code interacts with the memory system: io_readx(), io_writex() and get_page_addr_code().

Any access from C code (eg via cpu_physical_memory_rw(), address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions via cpu_transaction_failed(). Handling for transaction failures for this kind of call should be done by using a function which returns a MemTxResult and treating the failure case appropriately in the calling code.

In an ideal world we would not generate CPU exceptions for instruction fetch failures in get_page_addr_code() but instead wait until the code translation process tried a load and it failed; however that change would require too great a restructuring and redesign to attempt at this point.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
- 01 July 2017, 1 commit

Committed by Emilio G. Cota:
Some code paths can lead to atomic accesses racing with memset() on cpu->tb_jmp_cache, which can result in torn reads/writes and is undefined behaviour in C11.

These torn accesses are unlikely to show up as bugs, but from code inspection they seem possible. For example, tb_phys_invalidate does:

    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }

Here atomic_set might race with a concurrent memset (such as the ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and therefore we might end up with a torn pointer (or who knows what, because we are under undefined behaviour).

This patch converts parallel accesses to cpu->tb_jmp_cache to use atomic primitives, thereby bringing these accesses back to defined behaviour. The price to pay is to potentially execute more instructions when clearing cpu->tb_jmp_cache, but given how infrequently they happen and the small size of the cache, the performance impact I have measured is within noise range when booting debian-arm.

Note that under "safe async" work (e.g. do_tb_flush) we could use memset because no other vcpus are running. However I'm keeping these accesses atomic as well to keep things simple and to avoid confusing analysis tools such as ThreadSanitizer.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1497486973-25845-1-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
- 27 June 2017, 4 commits

Committed by KONRAD Frederic:
This introduces a special callback which allows running code from some MMIO devices. A SysBusDevice with a MemoryRegion which implements the request_ptr callback will be notified when the guest tries to execute code from its offset. It will then be able to, e.g., pre-load some code from an SPI device or ask for a pointer from an external simulator, etc. When the pointer or the data in it are no longer valid, the device has to invalidate it.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Committed by KONRAD Frederic:
get_page_addr_code(..) does a cpu_ldub_code to fill the TLB: this can lead to side effects if a device is mapped at this address. So this patch replaces the cpu_memory_ld with a tlb_fill.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
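A before/after sketch of the substitution described above (argument lists simplified and illustrative; the tlb_fill of this era took no size parameter):

    /* before: filling the TLB via a real byte load can trigger MMIO
     * side effects on the device mapped at that address */
    cpu_ldub_code(env, addr);

    /* after: populate the TLB entry without performing a guest load */
    tlb_fill(cpu, addr, MMU_INST_FETCH, mmu_idx, 0);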
Committed by KONRAD Frederic:
This just moves the code before the VICTIM_TLB_HIT macro definition so we can use it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Committed by KONRAD Frederic:
This replaces the env1 and page_index variables with env and index so we can use the VICTIM_TLB_HIT macro later.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
- 15 June 2017, 1 commit

Committed by Yang Zhong:
Move cputlb.c, cpu-exec-common.c and cpu-exec.c, the TCG-execution-related files, into the accel/tcg/ subdirectory.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1496383606-18060-3-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- 11 May 2017, 1 commit

Committed by Nikunj A Dadhania:
In the case where the conditional write is the first write to the page, TLB_NOTDIRTY will be set and stop_the_world is triggered. Handle this as a special case and set the dirty bit. After that, fall through to the actual atomic instruction below.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
- 28 February 2017, 1 commit

Committed by Peter Maydell:
In get_page_addr_code(), if the guest PC doesn't correspond to RAM then we currently run the CPU's do_unassigned_access() hook if it has one, and otherwise we give up and exit QEMU with a more-or-less useful message.

This code assumes that the do_unassigned_access hook will never return, because if it does then we'll plough on attempting to use a non-RAM TLB entry to get a RAM address and will abort() in qemu_ram_addr_from_host_nofail(). Unfortunately some CPU implementations of this hook do return: Microblaze, SPARC and the ARM v7M.

Change the code to call report_bad_exec() if the hook returns, as well as if it didn't have one. This means we can tidy it up to use the cpu_unassigned_access() function which wraps the "get the CPU class and call the hook if it has one" work, since we aren't trying to distinguish "no hook" from "hook existed and returned" any more. This brings the handling of this hook into line with the handling used for data accesses, where "hook returned" is treated the same as "no hook existed" and gets you the default behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
- 24 February 2017, 8 commits

Committed by Alex Bennée:
This introduces support to the cputlb API for flushing all CPUs' TLBs with one call. This avoids the need for target helpers to iterate through the vCPUs themselves.

An additional variant of the API (_synced) will cause the source vCPU's work to be scheduled as "safe work". The result is that all the flush operations will be complete by the time the originating vCPU executes its safe work. The calling implementation can either end the TB straight away (which will then pick up the cpu->exit_request on entering the next block) or defer the exit until the architectural sync point (usually a barrier instruction).

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
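A signature sketch of the API shape this adds; tlb_flush_all_cpus and tlb_flush_all_cpus_synced are the names this series uses, but the prototypes here are illustrative:

    /* Flush the TLBs of every vCPU, queuing async work as needed. */
    void tlb_flush_all_cpus(CPUState *src_cpu);

    /* As above, but scheduled as "safe work": all flushes are complete
     * by the time src_cpu runs its own safe work, so the caller either
     * ends the TB immediately or waits for the architectural sync point
     * (e.g. a barrier instruction). */
    void tlb_flush_all_cpus_synced(CPUState *src_cpu);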
Committed by Alex Bennée:
The main use case for tlb_reset_dirty is to set the TLB_NOTDIRTY flags in TLB entries to force the slow path on writes. This is used to mark page ranges containing code which has been translated so it can be invalidated if written to. To do this safely we need to ensure the TLB entries in question for all vCPUs are updated before we attempt to run the code, otherwise a race could be introduced.

To achieve this we atomically set the flag in tlb_reset_dirty_range and take care when setting it when the TLB entry is filled. On 32-bit systems attempting to emulate 64-bit guests we don't even bother, as we might not have the atomic primitives available. MTTCG is disabled in this case and can't be forced on.

The copy_tlb_helper function helps keep the atomic semantics in one place to avoid confusion. The dirty helper function is made static as it isn't used outside of cputlb.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
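A standalone C11 fragment modelling the atomic flag set (flag value and entry layout invented for the sketch):

    #include <stdatomic.h>
    #include <stdint.h>

    #define TLB_NOTDIRTY (1u << 1)   /* illustrative bit position */

    /* OR the flag in atomically so a vCPU concurrently reading the
     * entry never observes a torn value; a plain read-modify-write
     * would race. */
    static void tlb_mark_notdirty(_Atomic uint64_t *addr_write)
    {
        atomic_fetch_or_explicit(addr_write, TLB_NOTDIRTY,
                                 memory_order_relaxed);
    }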
Committed by Alex Bennée:
This converts the remaining TLB flush routines to use async work when detecting a cross-vCPU flush. The only minor complication is having to serialise the var_list of MMU indexes into a form that can be punted to an asynchronous job. The pending_tlb_flush field on QOM's CPU structure also becomes a bitfield rather than a boolean.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Committed by Alex Bennée:
While the varargs approach was flexible, the original MTTCG ended up having to munge the bits into a bitmap so the data could be used in deferred work helpers. Instead of hiding that in cputlb, we push the change to the API to make it take a bitmap of MMU indexes instead.

For ARM, some of the resulting flushes end up being quite long, so to aid readability I've tended to move the index shifting to a new line so all the bits being or-ed together line up nicely, for example:

    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[AT: SPARC parts only]
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[PM: ARM parts only]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Committed by KONRAD Frederic:
Some architectures allow flushing the TLB of other vCPUs. This is not a problem when we have only one thread for all vCPUs, but it definitely needs to be asynchronous work when we are in true multithreaded operation. We take the tb_lock() when doing this to avoid racing with other threads which may be invalidating TBs at the same time. The alternative would be to use proper atomic primitives to clear the tlb entries en masse.

This patch doesn't do anything to protect other cputlb functions being called in MTTCG mode from making cross-vCPU changes.

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[AJB: remove need for g_malloc on defer, make check fixes, tb_lock]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Committed by Alex Bennée:
This moves the helper function closer to where it is called and updates the error message to report via error_report instead of the deprecated fprintf.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Committed by Alex Bennée:
For SoftMMU the TLB flushes are an example of a task that can be triggered on one vCPU by another. To deal with this properly we need to use safe work to ensure these changes are done safely. The new assert can be enabled while debugging to catch these cases.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Committed by Jan Kiszka:
This finally allows TCG to benefit from the iothread introduction: drop the global mutex while running pure TCG CPU code. Reacquire the lock when entering MMIO or PIO emulation, or when leaving the TCG loop.

We have to revert a few optimizations for the current TCG threading model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not kicking it in qemu_cpu_kick. We also need to disable RAM block reordering until we have a more efficient locking mechanism at hand.

Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here. These numbers demonstrate where we gain something:

    20338 jan  20  0  331m  75m  6904 R  99  0.9  0:50.95 qemu-system-arm
    20337 jan  20  0  331m  75m  6904 S  20  0.9  0:26.50 qemu-system-arm

The guest CPU was fully loaded, but the iothread could still run mostly independently on a second core. Without the patch we don't get beyond

    32206 jan  20  0  330m  73m  7036 R  82  0.9  1:06.00 qemu-system-arm
    32204 jan  20  0  330m  73m  7036 S  21  0.9  0:17.03 qemu-system-arm

We don't benefit significantly, though, when the guest is not fully loading a host CPU.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
[FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
[EGC: fixed iothread lock for cpu-exec IRQ handling]
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
[PM: target-arm changes]
Acked-by: Peter Maydell <peter.maydell@linaro.org>
- 13 January 2017, 1 commit

Committed by Alex Bennée:
We have never had the concept of global TLB entries which would avoid the flush, so we never actually use this flag. Drop it and make clear that tlb_flush is the sledgehammer it has always been.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
[DG: ppc portions]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
- 28 October 2016, 1 commit

Committed by Anand J:
Some files contain multiple #includes of the same header file. Removed most of those unnecessary duplicate entries using scripts/clean-includes.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Anand J <anand.indukala@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
- 26 October 2016, 7 commits

Committed by Richard Henderson:
Allow qemu to build on 32-bit hosts without 64-bit atomic ops. Even if we only allow 32-bit hosts to multi-thread emulate 32-bit guests, we still need some way to handle the 32-bit guest using a 64-bit atomic operation. Do so by dropping back to single-step.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
Force the use of cmpxchg16b on x86_64. Wikipedia suggests that only very old AMD64 (circa 2004) did not have this instruction. Further, it's required by Windows 8, so no new CPUs will ever omit it.

If we truly care about these, then we could check this at startup time and then avoid executing paths that use it.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
Add all of cmpxchg, op_fetch, fetch_op, and xchg. Handle both endiannesses, and sizes up to 8. Handle expanding non-atomically when emulating in serial.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
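A standalone C11 model of one such helper: a 32-bit cmpxchg that is truly atomic when vCPUs run in parallel, but uses plain sequential logic when emulation is serial and nothing can interleave. parallel_cpus mirrors the QEMU flag of that era; everything else is invented for the sketch:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool parallel_cpus;   /* true once vCPUs run concurrently */

    static uint32_t cmpxchg_helper_model(_Atomic uint32_t *haddr,
                                         uint32_t cmpv, uint32_t newv)
    {
        if (parallel_cpus) {
            /* On failure, cmpv is updated to the value found; on
             * success it already equals the old value, so returning it
             * covers both cases. */
            atomic_compare_exchange_strong(haddr, &cmpv, newv);
            return cmpv;
        }
        /* Serial emulation: no other vCPU can observe the intermediate
         * state, so the non-atomic expansion is safe. */
        uint32_t old = atomic_load_explicit(haddr, memory_order_relaxed);
        if (old == cmpv) {
            atomic_store_explicit(haddr, newv, memory_order_relaxed);
        }
        return old;
    }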
Committed by Richard Henderson:
TGT_LE and TGT_BE are not size dependent and do not need to be redefined. The others are no longer used at all.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
Saves 2k of code size off a cold path.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
- 16 September 2016, 1 commit

Committed by Richard Henderson:
The return address argument to the softmmu template helpers was confused. In the legacy case, we wanted to indicate that there is no return address, and so passed in NULL. However, we then immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero value, indicating the presence of an (invalid) return address.

Push the GETPC_ADJ subtraction down to the only point it's required: immediately before use within cpu_restore_state_from_tb, after all NULL pointer checks have been completed. This makes GETPC and GETRA identical. Remove GETRA as the lesser used macro, replacing all uses with GETPC.

Signed-off-by: Richard Henderson <rth@twiddle.net>
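For context, a sketch of why the two macros collapse into one once the adjustment moves into cpu_restore_state_from_tb (this definition is simplified; the real one also goes through __builtin_extract_return_addr):

    #include <stdint.h>

    /* Both "caller PC" macros now reduce to the raw host return
     * address; the -GETPC_ADJ bias is applied later, inside
     * cpu_restore_state_from_tb, after the NULL checks. */
    #define GETPC()  ((uintptr_t)__builtin_return_address(0))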
- 09 July 2016, 2 commits

Committed by Samuel Damashek:
[rth: Split out from the original patch.]

Signed-off-by: Samuel Damashek <samuel.damashek@invincea.com>
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Committed by Richard Henderson:
There are currently 22 invocations of this function, and we're about to increase that number.

Signed-off-by: Richard Henderson <rth@twiddle.net>