- 02 Jul 2018, 1 commit
-
-
Committed by Peter Maydell
The condition to check whether an address has hit against a particular TLB entry is not completely trivial. We do this in various places, and in fact in one place (get_page_addr_code()) we have got the condition wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline functions (one for a known-page-aligned address and one for an arbitrary address), and use them in all the places where we had the condition correct. This is a no-behaviour-change patch; we leave fixing the buggy code in get_page_addr_code() to a subsequent patch.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
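A minimal sketch of the two helpers, consistent with the description above (the exact flag masks live in cpu-all.h; treat the details here as assumptions):

```c
/* Hit test for a page-aligned @page against the cached TLB address
 * @tlb_addr; any flag bit set in the low bits defeats the match. */
static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
{
    return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
}

/* Same test for an arbitrary, not necessarily aligned, @addr. */
static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
{
    return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
}
```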
-
- 27 Jun 2018, 1 commit
-
-
Committed by Peter Maydell
Add support for MMU protection regions that are smaller than TARGET_PAGE_SIZE. We do this by marking the TLB entry for those pages with a flag TLB_RECHECK. This flag causes us to always take the slow path for accesses. In the slow path we can then special-case them to always call tlb_fill() again, so we have the correct information for the exact address being accessed. This change allows us to handle reading and writing from small regions; we cannot deal with execution from the small region.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
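The mechanism, sketched under assumed names (TLB_RECHECK is one of the low flag bits reserved in the cached TLB address, so a tagged entry always fails the tlb_hit() fast-path compare shown earlier; the condition name below is a placeholder):

```c
/* In tlb_set_page_with_attrs(): tag entries whose backing region is
 * smaller than a full target page (placeholder condition). */
if (section_covers_partial_page) {
    address |= TLB_RECHECK;
}

/* In the slow path: never trust a tagged entry; refill the TLB for
 * this exact address so protection is evaluated precisely. */
if (tlb_addr & TLB_RECHECK) {
    tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr);
}
```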
-
- 15 Jun 2018, 1 commit
-
-
Committed by Peter Maydell
There's a common pattern in QEMU where a function needs to perform a data load or store of an N byte integer in a particular endianness. At the moment this is handled by doing a switch() on the size and calling the appropriate ld*_p or st*_p function for each size. Provide a new family of functions ldn_*_p() and stn_*_p() which take the size as an argument and do the switch() themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
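For example, the little-endian load helper is just the old open-coded switch, centralized (a sketch matching the description; the ld*_p primitives are the existing ones from bswap.h):

```c
/* Load an @sz-byte little-endian integer from @ptr. */
static inline uint64_t ldn_le_p(const void *ptr, int sz)
{
    switch (sz) {
    case 1:
        return ldub_p(ptr);
    case 2:
        return lduw_le_p(ptr);
    case 4:
        return (uint32_t)ldl_le_p(ptr);
    case 8:
        return ldq_le_p(ptr);
    default:
        g_assert_not_reached();
    }
}
```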
-
- 09 May 2018, 2 commits
-
-
Committed by Paolo Bonzini
MemoryRegionCache was reverted to "normal" address_space_* operations for 2.9, due to lack of support for IOMMUs. Reinstate the optimizations, caching the IOMMU translation at address_space_cache_init time so that the IOMMU lookup and target AddressSpace translation do not have to be repeated on every access; now that MemoryRegionCache supports IOMMUs, it becomes more widely applicable too. The inlined fast path is defined in memory_ldst_cached.inc.h, while the slow path uses memory_ldst.inc.c as before. The smaller fast path causes a little code size reduction in MemoryRegionCache users:

hw/virtio/virtio.o text size before: 32373
hw/virtio/virtio.o text size after: 31941

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
For now, this reduces the text size very slightly due to the newly-added inlining:

text size before: 9301965
text size after: 9300645

Later, however, the declarations in include/exec/memory_ldst.inc.h will be reused for the MemoryRegionCache slow path functions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 14 Mar 2018, 1 commit
-
-
Committed by Max Filippov
In linux-user QEMU that runs for a target with TARGET_ABI_BITS bigger than L1_MAP_ADDR_SPACE_BITS, an assertion in page_set_flags fires when mmap, munmap, mprotect, mremap or shmat is called for an address outside the guest address space. mmap and mprotect should return ENOMEM in such a case. Change the definition of GUEST_ADDR_MAX to always be the last valid guest address. Account for this change in open_self_maps. Add a macro guest_addr_valid that verifies whether a guest address is valid, and a function guest_range_valid that verifies whether an address range is within the guest address space and does not wrap around. Use them in mmap/munmap/mprotect/mremap/shmat for error checking.

Cc: qemu-stable@nongnu.org
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
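A sketch of the two checks, following the description (compare linux-user/qemu.h for the real definitions; exact types are an assumption):

```c
/* A single address is valid if it does not exceed the last valid
 * guest address. */
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)

/* A nonzero-length range is valid if it fits entirely below
 * GUEST_ADDR_MAX and does not wrap; both comparisons are written to
 * be overflow-safe. */
static inline bool guest_range_valid(unsigned long start, unsigned long len)
{
    return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
}
```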
-
- 10 Mar 2018, 1 commit
-
-
Committed by Max Filippov
In linux-user QEMU that runs for a target with TARGET_ABI_BITS bigger than L1_MAP_ADDR_SPACE_BITS, an assertion in page_set_flags fires when mmap, munmap, mprotect, mremap or shmat is called for an address outside the guest address space. mmap and mprotect should return ENOMEM in such a case. Change the definition of GUEST_ADDR_MAX to always be the last valid guest address. Account for this change in open_self_maps. Add a macro guest_addr_valid that verifies whether a guest address is valid, and a function guest_range_valid that verifies whether an address range is within the guest address space and does not wrap around. Use them in mmap/munmap/mprotect/mremap/shmat for error checking.

Cc: qemu-stable@nongnu.org
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180307215010.30706-1-jcmvbkbc@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
-
- 20 Oct 2017, 1 commit
-
-
Committed by David Hildenbrand
Background: s390x implements Low-Address Protection (LAP). If LAP is enabled, writing to effective addresses (before any translation) 0-511 and 4096-4607 triggers a protection exception. So we have subpage protection on the first two pages of every address space (where the lowcore, the CPU's private data, resides). By immediately invalidating the write entry but allowing the caller to continue, we force every write access onto these first two pages into the slow path. We will then get a TLB fault with the exact address accessed and can evaluate whether protection applies or not. We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
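The enabling change on the core side, sketched from the description (a new PAGE_WRITE_INV protection bit that installs the write entry pre-invalidated; names follow the description, details are assumptions):

```c
/* In tlb_set_page_with_attrs(): install the write entry, but mark it
 * invalid right away so the very next write to this page refaults
 * and goes through tlb_fill() with the exact access address. */
if (prot & PAGE_WRITE) {
    te->addr_write = address;
    if (prot & PAGE_WRITE_INV) {
        te->addr_write |= TLB_INVALID_MASK;
    }
}
```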
-
- 11 Oct 2017, 1 commit
-
-
Committed by Emilio G. Cota
These only depend on the host and therefore belong in the common osdep, not in a target-dependent object. While at it, query the host during an init constructor, which guarantees the page size will be well-defined throughout the execution of the program.

Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
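Roughly, the constructor approach looks like this (a sketch; the variable names mirror the qemu_real_host_page_* pair discussed elsewhere in this log and are assumptions here):

```c
#include <stdint.h>
#include <unistd.h>

uintptr_t qemu_real_host_page_size;
intptr_t qemu_real_host_page_mask;

/* Runs before main(), so both values are valid for the whole run. */
static void __attribute__((constructor)) init_real_host_page_size(void)
{
    qemu_real_host_page_size = getpagesize();
    qemu_real_host_page_mask = -(intptr_t)qemu_real_host_page_size;
}
```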
-
- 22 Dec 2016, 1 commit
-
-
Committed by Paolo Bonzini
Device models often have to perform multiple accesses to a single memory region that is known in advance, but would like to use "DMA-style" functions instead of address_space_map/unmap. This can happen for example when the data has to undergo endianness conversion. Introduce a new data structure to cache the result of address_space_translate without forcing usage of a host address the way address_space_map does.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
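A hedged usage sketch, assuming an AddressSpace *as and a guest buffer at addr of size bytes (the init call's exact return convention is from memory; double-check include/exec/memory.h):

```c
/* Cache the translation for a fixed guest buffer once... */
MemoryRegionCache cache;
int64_t len = address_space_cache_init(&cache, as, addr, size,
                                       false /* is_write */);
if (len >= 0) {
    uint16_t flags;

    /* ...then do repeated DMA-style accesses against the cache. */
    address_space_read_cached(&cache, 0, &flags, sizeof(flags));
    address_space_cache_destroy(&cache);
}
```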
-
- 26 Oct 2016, 1 commit
-
-
Committed by Richard Henderson
When we cannot emulate an atomic operation within a parallel context, this exception allows us to stop the world and try again in a serial context.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
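A sketch of how a helper bails out to the serial path (cpu_loop_exit_atomic and the parallel_cpus flag are the pieces this era of the tree used; the surrounding helper shape is illustrative only):

```c
/* Called from generated code, so GETPC() must be resolved here. */
static void do_atomic_fallback(CPUArchState *env, target_ulong addr)
{
    if (parallel_cpus) {
        /* Raise EXCP_ATOMIC: cpu_exec stops all other CPUs, then
         * re-executes this one instruction with exclusive access. */
        cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC());
    }
    /* Serial context: safe to emulate the operation non-atomically. */
}
```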
-
- 24 Oct 2016, 1 commit
-
-
Committed by Peter Maydell
Support target CPUs having a page size which isn't known at compile time. To use this, the CPU implementation should:

* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize function to indicate the actual preferred target page size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue to define TARGET_PAGE_BITS appropriately for the guest OS page size. Machines which want to take advantage of having the page size something larger than TARGET_PAGE_BITS_MIN must set the MachineClass minimum_page_bits field to a value which they guarantee will be no greater than the preferred page size for any CPU they create. Note that changing the target page size by setting minimum_page_bits is a migration compatibility break for that machine. For debugging purposes, attempts to use TARGET_PAGE_SIZE before it has been finally confirmed will assert.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
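The resulting definition looks roughly like this (a sketch of the variable-page-size plumbing; the assert is the "attempts to use it early will assert" debugging aid mentioned above, and the names are assumptions):

```c
#ifdef TARGET_PAGE_BITS_VARY
extern bool target_page_bits_decided;
extern int target_page_bits;
/* Every read asserts that the final value has been decided. */
#define TARGET_PAGE_BITS ({ assert(target_page_bits_decided); \
                            target_page_bits; })
#else
#define TARGET_PAGE_BITS_MIN TARGET_PAGE_BITS
#endif
```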
-
- 06 Jul 2016, 1 commit
-
-
Committed by Sergey Sorokin
Some architectures (e.g. ARMv8) require an address to be aligned to a size larger than the size of the memory access itself. QEMU's current costless alignment check implementation is sufficient for such checks, but we need a way to specify the alignment size.

Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
[rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
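The alignment size ends up encoded in the memory-op flags; a sketch of the decoder, modeled on get_alignment_bits() with the field layout assumed:

```c
/* Returns log2 of the required alignment, or 0 for none. */
static inline unsigned get_alignment_bits(TCGMemOp memop)
{
    unsigned a = memop & MO_AMASK;

    if (a == MO_UNALN) {
        return 0;                  /* unaligned accesses allowed */
    } else if (a == MO_ALIGN) {
        return memop & MO_SIZE;    /* natural alignment of the access */
    } else {
        return a >> MO_ASHIFT;     /* explicitly requested alignment */
    }
}
```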
-
- 29 Jun 2016, 1 commit
-
-
Committed by Peter Crosthwaite
This function needs to be converted to a QOM hook and virtualised for multi-arch. The rename interferes, as cpu-qom will not have access to it, causing name divergence. The rename doesn't really do anything anyway, so just delete it.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <69bd25a8678b8b31b91cd9760c777bed1aafb44e.1437212383.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaitepeter@gmail.com>
-
- 07 Jun 2016, 1 commit
-
-
Committed by Peter Maydell
The WORDS_ALIGNED #define is not used anywhere, and hasn't been since 2013 when commit 612d590e rewrote the various ld<type>_<endian>_p functions to not use it. Remove the #define and the comment describing it. Also remove the line in the comment about TARGET_WORDS_ALIGNED, since it has never actually existed.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
-
- 19 May 2016, 1 commit
-
-
Committed by Paolo Bonzini
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are not defined for !NEED_CPU_H, so remove them from poison.h too. Only macros need poisoning.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
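For illustration, this is the poisoning mechanism in question; any later use of a poisoned identifier in a target-independent compilation unit becomes a compile-time error (an example of the technique, not the actual poison.h contents):

```c
/* In a header included only by target-independent files: */
#pragma GCC poison TARGET_PAGE_SIZE
#pragma GCC poison TARGET_PAGE_MASK
```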
-
- 23 Feb 2016, 1 commit
-
-
Committed by Peter Maydell
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.

NB: If this commit breaks compilation for your out-of-tree patch series or fork, then you need to make sure you add #include "qemu/osdep.h" to any new .c files that you have.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 02 Dec 2015, 1 commit
-
-
Committed by Paolo Bonzini
Anthony reported that >4GB guests on Xen with 32bit QEMU broke after commit 4ed023ce ("Round up RAMBlock sizes to host page sizes", 2015-11-05). In that patch sizes are masked against qemu_host_page_size/mask, which are uintptr_t and thus 32 bits wide on a 32bit QEMU, even though the ram space might be bigger than 4GB on Xen. Since ram_addr_t is not available on user-mode emulation targets, ensure that we get a sign extension when masking away the low bits of the address. Remove the ~10 year old scary comment that the type of these variables is probably wrong, with another equally scary comment. The new comment however does not have "???" in it, which is arguably an improvement. For completeness use the alignment macros in linux-user and bsd-user instead of manually doing an &. linux-user and bsd-user are not affected by the Xen issue, however.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reported-by: Anthony PERARD <anthony.perard@citrix.com>
Fixes: 4ed023ce
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
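The pitfall in miniature (a standalone illustration, not QEMU code): masking a 64-bit size with a mask whose type is only 32 bits wide silently truncates the high bits, because the unsigned mask is zero-extended rather than sign-extended:

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t size = 5ULL << 30;              /* a 5 GiB region */
    uint32_t narrow_mask = ~(uint32_t)4095;  /* like a 32-bit uintptr_t mask */

    uint64_t bad  = size & narrow_mask;      /* zero-extends: top bits lost */
    uint64_t good = size & ~(uint64_t)4095;  /* full-width mask */

    assert(bad == 0x40000000);               /* 1 GiB: wrong */
    assert(good == 5ULL << 30);              /* 5 GiB: right */
    return 0;
}
```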
-
- 09 Sep 2015, 1 commit
-
-
Committed by Dr. David Alan Gilbert
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439547914-18249-1-git-send-email-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 25 Aug 2015, 2 commits
-
-
Committed by Laurent Vivier
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Committed by Laurent Vivier
All TCG host architectures now support the guest base, and as there is no real performance loss it can always be enabled. In any case, use of the guest base can still be disabled by setting it to 0. CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it would normally be replaced by CONFIG_USER_ONLY in non-CONFIG_USER_ONLY parts, but as some other parts already use !CONFIG_SOFTMMU I have chosen !CONFIG_SOFTMMU instead.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 07 Jul 2015, 1 commit
-
-
Committed by Peter Crosthwaite
Currently the "host" page size alignment API is really aligning to both host and target page sizes. There is qemu_real_host_page_size, which can be used for the actual host page size, but it's missing the mask and ALIGN macros provided for qemu_host_page_size. Complete the API. This allows system-level code that cares about the host page size to use a consistent alignment interface without needlessly aligning to the target page size. This also reduces system-level code's dependency on the CPU-specific TARGET_PAGE_SIZE.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
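The completed surface, sketched by analogy with the existing qemu_host_page_* trio (names as I recall them; verify against the tree's common headers):

```c
extern uintptr_t qemu_real_host_page_size;
extern intptr_t qemu_real_host_page_mask;

/* Align @addr up to the real host page size. */
#define REAL_HOST_PAGE_ALIGN(addr) \
    ROUND_UP((addr), qemu_real_host_page_size)
```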
-
- 26 Jun 2015, 1 commit
-
-
Committed by Peter Crosthwaite
These exception indices are generic and don't have any reliance on the per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core code that does not have access to cpu-defs.h.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <dbebd3062c7cd4332240891a3564e73f374ddfcd.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
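The indices in question, as they appear in cpu-all.h:

```c
#define EXCP_INTERRUPT 0x10000 /* async interruption */
#define EXCP_HLT       0x10001 /* hlt instruction reached */
#define EXCP_DEBUG     0x10002 /* cpu stopped after a breakpoint or singlestep */
#define EXCP_HALTED    0x10003 /* cpu is halted (waiting for external event) */
#define EXCP_YIELD     0x10004 /* cpu wants to yield timeslice to another */
```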
-
- 17 Feb 2015, 4 commits
-
-
Committed by Mike Day
Allow "unlocked" reads of the ram_list by using an RCU-enabled QLIST. The ramlist mutex is kept. call_rcu callbacks are run with the iothread lock taken, but that may change in the future. Writers still take the ramlist mutex, but they no longer need to assume that the iothread lock is taken. Readers of the list, instead, no longer require either the iothread or ramlist mutex, but they need to use rcu_read_lock() and rcu_read_unlock(). One place in arch_init.c was downgrading from write side to read side like this:

    qemu_mutex_lock_iothread()
    qemu_mutex_lock_ramlist()
    ...
    qemu_mutex_unlock_iothread()
    ...
    qemu_mutex_unlock_ramlist()

and the equivalent idiom is:

    qemu_mutex_lock_ramlist()
    rcu_read_lock()
    ...
    qemu_mutex_unlock_ramlist()
    ...
    rcu_read_unlock()

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
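What a lock-free reader looks like after this change (a sketch; the RAMBlock field names are as of this era of the tree and should be treated as assumptions):

```c
/* Sum the RAM registered in the ram_list without taking any mutex:
 * the RCU critical section keeps every visited block alive. */
static uint64_t total_ram_bytes(void)
{
    RAMBlock *block;
    uint64_t total = 0;

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        total += block->used_length;
    }
    rcu_read_unlock();
    return total;
}
```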
-
Committed by Mike Day
QLIST has RCU-friendly primitives, so switch to it.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Mike Day
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Hence, freeing a RAMBlock has to be switched to call_rcu.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
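A sketch of the deferred-free pattern (compare the reclaim path in exec.c; the RCU head field name `rcu` is an assumption):

```c
static void reclaim_ramblock(RAMBlock *block)
{
    /* Release the host memory backing the block, then the block. */
    g_free(block);
}

/* Writer side: unlink under the ramlist mutex, then let RCU invoke
 * the reclaim function once the last reader is gone. */
QLIST_REMOVE_RCU(block, next);
call_rcu(block, reclaim_ramblock, rcu);
```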
-
- 20 Jan 2015, 1 commit
-
-
Committed by Peter Maydell
Add documentation of what the cpu_*_* accessors look like. Correct some minor errors in the existing documentation of the direct _p accessor family. Remove the near-duplicate comment on the _p accessors from cpu-all.h and replace it with a reference to the comment in bswap.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-16-git-send-email-peter.maydell@linaro.org
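Two examples of the direct _p accessor naming scheme being documented here (size letter, then endianness, then _p for "host pointer"):

```c
uint8_t buf[8] = { 0 };

stw_be_p(buf, 0x1234);        /* store a 16-bit ("w") big-endian value */
uint32_t v = ldl_le_p(buf);   /* load a 32-bit ("l") little-endian value */
```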
-
- 08 Jan 2015, 2 commits
-
-
Committed by Michael S. Tsirkin
Add an API to allocate "resizeable" RAM. This looks just like regular RAM generally, but has the special property that only a portion of it (used_length) is actually used, and migrated. This used_length size can change across reboots. Follow-up patches will change used_length for such blocks at migration time, making it easier to extend devices using such RAM (notably ACPI, but thinkably other ROMs in the future) without breaking migration compatibility or wasting ROM (guest) memory. The device is notified on resize, so it can adjust if necessary. qemu_ram_alloc_resizeable allocates this memory; qemu_ram_resize resizes it. Note: nothing prevents making all RAM resizeable in this way. However, reviewers felt that only enabling this selectively will make some classes of errors easier to detect.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
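The shape of the two new entry points, sketched from the description (argument order and types are my best recollection, not gospel):

```c
/* Allocate a block with a current size and a maximum size; @resized
 * is called back whenever the used length changes. */
ram_addr_t qemu_ram_alloc_resizeable(ram_addr_t size, ram_addr_t maxsz,
                                     void (*resized)(const char *name,
                                                     uint64_t length,
                                                     void *host),
                                     MemoryRegion *mr, Error **errp);

/* Grow or shrink the used portion, up to the allocated maximum. */
int qemu_ram_resize(ram_addr_t base, ram_addr_t newsize, Error **errp);
```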
-
Committed by Michael S. Tsirkin
This patch allows us to distinguish between two length values for each block:

max_length - length of the memory block that was allocated
used_length - length of the block used by QEMU/guest

Currently we set used_length = max_length, unconditionally. Follow-up patches allow used_length <= max_length.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 17 Dec 2014, 1 commit
-
-
Committed by Max Filippov
Currently 'info jit' outputs half of the information to the monitor and the rest to the qemu log. Dumping opcode counts to the monitor as part of the 'info jit' command doesn't sound useful. Add a new monitor command 'info opcount' that only dumps the opcode counters.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
-
- 16 Dec 2014, 3 commits
-
-
Committed by Michael S. Tsirkin
If it isn't, access at an offset will cause memory corruption.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
-
Committed by Michael S. Tsirkin
Make accesses safer in case we missed some check somewhere.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
-
Committed by Michael S. Tsirkin
Host pointer accesses force pointer math; let's add a wrapper to make them safer.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
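A sketch of such a wrapper (compare ramblock_ptr in the tree; the name of the block's length member at this point in history is an assumption):

```c
/* All host-pointer math for a RAMBlock goes through one checked
 * helper instead of being open-coded at every call site. */
static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
{
    assert(offset < block->length);
    return (char *)block->host + offset;
}
```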
-
- 07 Oct 2014, 1 commit
-
-
Committed by Mikhail Ilyin
The initial base address is miscalculated in walk_memory_regions(): it has to be shifted TARGET_PAGE_BITS more. Holder variables are extended to target_ulong size, otherwise they don't fit for MIPS N32 (a 32-bit ABI with a 64-bit address space) and QEMU won't compile. The issue led to incorrect debug output of memory maps and a malformed core dump file.

Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
-
- 22 Aug 2014, 1 commit
-
-
Committed by Mikhail Ilyin
Build /proc/self/maps by matching against the guest memory translation table. Output only those map records which are valid for the guest memory layout.

Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
-
- 19 Jun 2014, 3 commits
-
-
Committed by Paolo Bonzini
Prepare for adding more flags. The "_MASK" suffix is unique; kill it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
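Illustratively, the rename turns the lone flag into the first of a family (the second flag shown is where the series is headed and is an assumption on my part):

```c
#define RAM_PREALLOC (1 << 0)   /* was RAM_PREALLOC_MASK */
#define RAM_SHARED   (1 << 1)   /* room for more flags... */
```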
-
Committed by Paolo Bonzini
Split the internal interface in exec.c to a separate function, and push the check on mem_path up to memory_region_init_ram.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Committed by Wanlong Gao
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
MST: comment tweaks
-
- 05 Jun 2014, 1 commit
-
-
Committed by Paolo Bonzini
Unify pieces of cpu-all.h, exec-all.h, softmmu_exec.h and tcg/tcg.h into a single new header file with all helpers.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-