- 06 Nov 2015, 2 commits
-
-
Committed by Paolo Bonzini

Whenever the MRU cache hits for the list of RAM blocks, qemu_get_ram_block does an unnecessary write that causes a processor cache line to bounce from one core to another. This causes a performance hit.

Reported-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
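The fix's idea, as a minimal self-contained sketch (the list type and helper below are illustrative stand-ins, not QEMU's actual code): a hit on the most-recently-used block returns through a read-only path, and the list head is only rewritten on a miss.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct RamBlock {
    uint64_t offset, length;
    struct RamBlock *next;
} RamBlock;

/* Hypothetical simplified lookup: on an MRU hit, return without
 * writing, so the head pointer's cache line is never dirtied. */
RamBlock *lookup_ram_block(RamBlock **head, uint64_t addr)
{
    RamBlock *block = *head;

    if (block && addr - block->offset < block->length) {
        return block;                  /* hit: read-only fast path */
    }
    for (RamBlock **pp = head; (block = *pp) != NULL; pp = &block->next) {
        if (addr - block->offset < block->length) {
            *pp = block->next;         /* miss: unlink the block... */
            block->next = *head;       /* ...and promote it to MRU */
            *head = block;
            return block;
        }
    }
    return NULL;                       /* address not in any block */
}
```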
-
Committed by Pavel Dovgalyuk

This patch introduces the functions for enabling record/replay and for freeing its resources when the simulator closes.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162507.8676.90232.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
-
- 04 Nov 2015, 2 commits
-
-
Committed by Pavel Fedin

This allows the file name used by the backend to be specified explicitly. This is important when using it together with ivshmem, in order to back it with hugetlbfs. By default the filename is autogenerated using mkstemp(), and the file is unlink()ed after creation, effectively making it anonymous. This is not very useful with ivshmem, because it ends up in memory which cannot be accessed by anything else. The distinction between a directory and a file name is made by a stat() check: if an existing directory is given, the code keeps the old behavior; otherwise it creates or opens a file with the given pathname.

Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Tested-by: Igor Skalkin <i.skalkin@samsung.com>
Message-Id: <004301d11166$9672fe30$c358fa90$@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
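A hedged sketch of the dispatch described above; the helper name and template are illustrative, not QEMU's actual code. An existing directory keeps the anonymous mkstemp()+unlink() behavior, anything else is treated as an explicit file name.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper, not QEMU's actual code. */
int open_backing_file(const char *path)
{
    struct stat st;

    if (stat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
        /* Old behavior: anonymous temporary file inside the directory. */
        char tmpl[4096];
        snprintf(tmpl, sizeof(tmpl), "%s/qemu_back_mem.XXXXXX", path);
        int fd = mkstemp(tmpl);
        if (fd >= 0) {
            unlink(tmpl);   /* the file becomes anonymous */
        }
        return fd;
    }
    /* New behavior: a named file that e.g. ivshmem peers can also open. */
    return open(path, O_RDWR | O_CREAT, 0644);
}
```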
-
Committed by Paolo Bonzini

This ensures that cpu_reload_memory_map() is called as soon as tcg_cpu_address_space_init() is called, and before cpu->memory_dispatch is used. qemu-system-s390x never changes the address spaces after tcg_cpu_address_space_init() is called, and thus tcg_commit() is never called; this causes a SIGSEGV. Because memory_map_init() will now call mem_commit(), we have to initialize io_mem_* before address_space_memory and friends.

Reported-by: Philipp Kern <pkern@debian.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Fixes: 0a1c71ce
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 02 Nov 2015, 1 commit
-
-
Committed by Igor Mammedov

QEMU shouldn't exit from file_ram_alloc() if the -mem-prealloc option is specified and "object_add memory-backend-file,..." fails its allocation during memory hotplug. Propagate the error to the caller and let it decide what to do with the allocation failure. This leaves QEMU alive if it cannot create a backend at hotplug time, while still killing QEMU at startup time if backends or initial memory were misconfigured or too large.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1445274671-17704-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
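A hedged sketch of the propagate-instead-of-exit pattern described above, using a toy stand-in for QEMU's Error API (all types and names here are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for QEMU's Error object; illustrative only. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        *errp = malloc(sizeof(Error));
        (*errp)->msg = msg;
    }
}

/* The allocator reports failure to its caller instead of exiting. */
void *file_ram_alloc(size_t size, Error **errp)
{
    void *mem = NULL;            /* pretend mmap/preallocation failed */
    if (!mem) {
        error_setg(errp, "unable to preallocate RAM");
        return NULL;
    }
    return mem;
}

int main(void)
{
    Error *err = NULL;
    if (!file_ram_alloc(1 << 20, &err)) {
        /* Startup path: fatal. A hotplug path would only report. */
        fprintf(stderr, "qemu: %s\n", err->msg);
        return 1;
    }
    return 0;
}
```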
-
- 21 Oct 2015, 1 commit
-
-
Committed by Michael S. Tsirkin

Anonymous and file-backed RAM allocation are now almost exactly the same. Reduce code duplication by moving the RAM mmap code out of oslib-posix.c and exec.c.

Reported-by: Marc-André Lureau <mlureau@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
-
- 13 Oct 2015, 2 commits
-
-
Committed by Peter Maydell

Gather up all the fields currently in CPUState which deal with the CPU's AddressSpace into a separate CPUAddressSpace struct. This paves the way for allowing the CPU to know about more than one AddressSpace. The rearrangement also allows us to make the MemoryListener a directly embedded object in the CPUAddressSpace (it could not be embedded in CPUState because 'struct MemoryListener' isn't defined for the user-only builds). This allows us to resolve the FIXME in tcg_commit() by going directly from the MemoryListener to the CPUAddressSpace.

This patch extracts the actual update of the cached dispatch pointer from cpu_reload_memory_map() (which is renamed accordingly to cpu_reloading_memory_map(), as it is only responsible for breaking cpu-exec.c's RCU critical section now). This lets us keep the definition of the CPUAddressSpace struct private to exec.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1443709790-25180-4-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
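A sketch of the shape this gives; the field names are modelled on the commit's description but are illustrative, and the real struct is private to exec.c. Embedding the listener rather than pointing to it is what lets tcg_commit() recover its CPUAddressSpace with a container_of()-style computation:

```c
#include <stddef.h>

/* Illustrative stand-in types. */
typedef struct MemoryListener {
    void (*commit)(struct MemoryListener *listener);
} MemoryListener;
typedef struct AddressSpace AddressSpace;
typedef struct AddressSpaceDispatch AddressSpaceDispatch;
typedef struct CPUState CPUState;

typedef struct CPUAddressSpace {
    CPUState *cpu;                          /* back-pointer to the CPU */
    AddressSpace *as;
    AddressSpaceDispatch *memory_dispatch;  /* cached dispatch pointer */
    MemoryListener tcg_as_listener;         /* embedded, not a pointer */
} CPUAddressSpace;

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

void tcg_commit(MemoryListener *listener)
{
    /* Go directly from the listener to its CPUAddressSpace. */
    CPUAddressSpace *cpuas =
        container_of(listener, CPUAddressSpace, tcg_as_listener);
    (void)cpuas;  /* ...update cpuas->memory_dispatch here... */
}
```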
-
Committed by Peter Maydell

Currently we call cpu_reload_memory_map() from cpu_exec_init(), but this is not necessary:
* KVM doesn't use the data structures maintained by cpu_reload_memory_map() (the TLB and cpu->memory_dispatch)
* for TCG, we will call this function via tcg_commit(), either as soon as tcg_cpu_address_space_init() registers the listener, or when the first MemoryRegion is added to the AddressSpace if the AS is empty when we register the listener
The unnecessary call is awkward for adding support for multiple address spaces per CPU, so drop it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Message-Id: <1443709790-25180-2-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 01 Oct 2015, 1 commit
-
-
Committed by Michael S. Tsirkin

This inserts a read- and write-protected page between RAM and QEMU memory, for file-backed RAM. This makes it harder to exploit QEMU bugs resulting from buffer overflows in devices using variants of cpu_physical_memory_map, dma_memory_map, etc.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
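A self-contained sketch of the guard-page technique, simplified relative to what the commit does in QEMU (a fixed 4 KiB page size is assumed for brevity): reserve one extra PROT_NONE page past the usable mapping, so an overflow faults instead of corrupting adjacent memory.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t page = 4096, size = 16 * page;

    /* Reserve the whole range, including the trailing guard page. */
    void *ptr = mmap(NULL, size + page, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Make only the usable part accessible; the last page stays
     * PROT_NONE and acts as the guard. */
    if (mprotect(ptr, size, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    memset(ptr, 0, size);                /* in-bounds: fine */
    /* ((char *)ptr)[size] = 0;          // one past the end: SIGSEGV */
    return 0;
}
```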
-
- 16 Sep 2015, 4 commits
-
-
Committed by Peter Crosthwaite

Move the architecture-agnostic function prototypes for exec.c out of cputlb.h to exec-all.h. This allows hiding the arch-specific cputlb.h from exec.c, which should be getting close to having no architecture specifics. Prepares support for multi-arch, which will have a minimal cpu.h that services exec.c but not cputlb.h.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <b4fe754c58c860315e35d44430c26b1c967ce2c9.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Peter Crosthwaite

Change tlb_set_dirty() to accept a CPU instead of an env pointer. This allows for removal of another CPUArchState usage from prototypes that need to be QOMified.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <d2b1dcbe7945112989861d8ba7369449c11cc273.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Peter Crosthwaite

To prepare for multi-arch, cputlb.c should only have awareness of one single architecture. This means it should not have access to the full CPU lists, which may be heterogeneous. Instead, push the CPU_LOOP() up to the one and only caller in exec.c.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <db06dc6c49f8970caaf116d0385f00ee10a56f2f.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Andrey Smetanin

The CPUState::crash_occurred field marks that a guest crash has occurred. This value is added to the CPU common migration subsection.

Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Andreas Färber <afaerber@suse.de>
Message-Id: <1435924905-8926-12-git-send-email-den@openvz.org>
[Document the new field. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 09 Sep 2015, 1 commit
-
-
Committed by Paolo Bonzini

TLS is now required on all platforms, so DECLARE_TLS/DEFINE_TLS are not needed anymore. Removing them does not break Windows because of the previous patch.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 07 Sep 2015, 1 commit
-
-
Committed by Peter Maydell

Use pow2floor() to round down to the nearest power of 2, rather than an inline calculation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1437741192-20955-5-git-send-email-peter.maydell@linaro.org
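For reference, a stand-alone helper with the same behavior as a pow2floor()-style rounding; QEMU's own version lives in its utility headers, and this variant is purely illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Round down to the nearest power of two; illustrative stand-in. */
static uint64_t pow2floor_sketch(uint64_t value)
{
    uint64_t result = 1;

    if (value == 0) {
        return 0;
    }
    while (value >>= 1) {
        result <<= 1;
    }
    return result;
}

int main(void)
{
    assert(pow2floor_sketch(1000) == 512);
    assert(pow2floor_sketch(4096) == 4096);
    return 0;
}
```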
-
- 15 Aug 2015, 1 commit
-
-
Committed by Chen Hanxiao

Use ROUND_UP instead.

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Message-Id: <1437707523-4910-1-git-send-email-chenhanxiao@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
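QEMU defines an equivalent macro in its headers; the stand-alone form below (valid only when the alignment is a power of two) shows the idiom the commit switches to.

```c
#include <assert.h>

/* Round n up to the next multiple of d (d must be a power of two). */
#define ROUND_UP(n, d) (((n) + (d) - 1) & ~((d) - 1))

int main(void)
{
    assert(ROUND_UP(4097, 4096) == 8192);
    assert(ROUND_UP(4096, 4096) == 4096);
    assert(ROUND_UP(1, 16) == 16);
    return 0;
}
```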
-
- 23 Jul 2015, 1 commit
-
-
Committed by Peter Maydell

When accessing the dispatch pointer in an AddressSpace within an RCU critical section we should always use atomic_rcu_read(). Fix an access within memory_region_section_get_iotlb() which was incorrectly doing a direct pointer access.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1437391637-31576-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
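In C11 terms, atomic_rcu_read() amounts to a consume/acquire load of an atomically published pointer. A minimal sketch with illustrative stand-in types:

```c
#include <stdatomic.h>

typedef struct AddressSpaceDispatch AddressSpaceDispatch;

typedef struct AddressSpace {
    _Atomic(AddressSpaceDispatch *) dispatch;  /* RCU-published */
} AddressSpace;

/* Equivalent of atomic_rcu_read(&as->dispatch): a plain dereference
 * could be reordered by the compiler; the explicit load pairs with
 * the updater's release store. */
AddressSpaceDispatch *as_dispatch(AddressSpace *as)
{
    return atomic_load_explicit(&as->dispatch, memory_order_consume);
}
```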
-
- 09 Jul 2015, 7 commits
-
-
Committed by Peter Crosthwaite

The callers (most of them in target-foo/cpu.c) of this function all have the cpu pointer handy. Just pass it, to avoid an ENV_GET_CPU() from core code (in exec.c).

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Anthony Green <green@moxielogic.com>
Cc: Jia Liu <proljc@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Peter Crosthwaite

All of the core-code usages of this API have the cpu pointer handy, so pass it in. There are only 3 architecture-specific usages (2 of which are commented out), which can just use ENV_GET_CPU() locally to get the cpu pointer. This reduces core-code usage of the CPU env, which brings us closer to common-obj'ing these core files.

Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Bharata B Rao

Currently CPUState::cpu_index is monotonically increasing and a newly created CPU always gets the next higher index. The next available index is calculated by counting the existing number of CPUs. This is fine as long as we only add CPUs, but there are architectures which are starting to support CPU removal, too. For an architecture like PowerPC, which derives its CPU identifier (device tree ID) from cpu_index, the existing logic of generating cpu_index values causes problems. With the currently proposed method of handling vCPU removal by parking the vCPU fd in QEMU (ref: http://lists.gnu.org/archive/html/qemu-devel/2015-02/msg02604.html), generating cpu_index this way will not work for PowerPC.

This patch changes the way cpu_index is handed out by maintaining a bitmap of the CPUs that tracks both addition and removal of CPUs. The CPU bitmap allocation logic is part of cpu_exec_init(), which is called by the instance_init routines of various CPU targets. The newly added cpu_exec_exit() API handles the deallocation part, and this routine is called from the generic CPU instance_finalize.

Note: this new CPU enumeration is for !CONFIG_USER_ONLY only; CONFIG_USER_ONLY continues to use the old enumeration logic.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
[AF: max_cpus -> MAX_CPUMASK_BITS]
Signed-off-by: Andreas Färber <afaerber@suse.de>
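A sketch of the allocate/free discipline this enables, with illustrative names (QEMU uses its own bitmap helpers and MAX_CPUMASK_BITS): the lowest clear bit becomes the new cpu_index, and removal clears the bit so the index can be reused.

```c
#include <limits.h>

#define MAX_CPUS 256
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long cpu_index_map[(MAX_CPUS + BITS_PER_WORD - 1) / BITS_PER_WORD];

int cpu_index_alloc(void)
{
    for (int i = 0; i < MAX_CPUS; i++) {
        unsigned long *w = &cpu_index_map[i / BITS_PER_WORD];
        unsigned long bit = 1UL << (i % BITS_PER_WORD);
        if (!(*w & bit)) {
            *w |= bit;
            return i;      /* lowest free index */
        }
    }
    return -1;             /* all indices handed out: caller reports */
}

void cpu_index_free(int index)
{
    cpu_index_map[index / BITS_PER_WORD] &=
        ~(1UL << (index % BITS_PER_WORD));
}
```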
-
Committed by Bharata B Rao

Add an Error argument to cpu_exec_init() to let users collect the error. This is in preparation for changing the CPU enumeration logic in cpu_exec_init(). With the new enumeration logic, cpu_exec_init() can fail if cpu_index values corresponding to max_cpus have already been handed out. Since all current callers of cpu_exec_init() are from instance_init, use the error_abort Error argument to abort in case of an error.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Eduardo Habkost

Instead of initializing cpu->as, cpu->thread_id, and reloading the memory map while holding cpu_list_lock(), do it earlier, before locking the CPU list and initializing cpu_index. This allows the code handling cpu_index and the global CPU list to be isolated from the rest.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Eduardo Habkost

One small step in the simplification of cpu_exec_init().

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Committed by Eduardo Habkost

QOM objects are already zero-filled when instantiated, so there is no need to explicitly set numa_node to 0.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
- 07 Jul 2015, 1 commit
-
-
Committed by Li Zhijian

Previously, if we hotplugged a device (e.g. device_add e1000) on the source side while migration was in progress, QEMU would add a new RAM block but migration_bitmap was not extended. In this case, migration_bitmap would overflow and cause QEMU to abort unexpectedly.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
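A simplified sketch of the idea behind the fix (QEMU uses its own bitmap helpers; the function below is illustrative, not the patch's actual code): grow the dirty bitmap when a RAM block appears mid-migration, marking the new pages dirty so they get transferred.

```c
#include <stdlib.h>
#include <string.h>

unsigned char *migration_bitmap_extend(unsigned char *bitmap,
                                       size_t old_pages,
                                       size_t new_pages)
{
    size_t old_bytes = (old_pages + 7) / 8;
    size_t new_bytes = (new_pages + 7) / 8;
    unsigned char *grown = realloc(bitmap, new_bytes);

    if (grown) {
        /* Newly covered pages start out dirty so they are migrated. */
        memset(grown + old_bytes, 0xff, new_bytes - old_bytes);
    }
    return grown;
}
```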
-
- 06 Jul 2015, 1 commit
-
-
Committed by Paolo Bonzini

Loading the BIOS in the mac99 machine is interesting, because there is a PROM in the middle of the BIOS region (from 16K to 32K). Before memory region accesses were clamped, when QEMU was asked to load a BIOS from 0xfff00000 to 0xffffffff it would put even those 16K from the BIOS file into the region. This is weird because those 16K were not actually visible between 0xfff04000 and 0xfff07fff. However, it worked.

After clamping was added, this also worked. In this case, the cpu_physical_memory_write_rom_internal function split the write in three parts: the first 16K were copied, the PROM area (second 16K) was ignored, then the rest was copied.

Problems then started with commit 965eb2fc (exec: do not clamp accesses to MMIO regions, 2015-06-17). Clamping accesses is not done for MMIO regions because they can overlap wildly, and MMIO registers can be expected to perform full-width accesses based only on their address (with no respect for adjacent registers that could decode to completely different MemoryRegions). However, this lack of clamping also applied to the PROM area! cpu_physical_memory_write_rom_internal thus failed to copy the third range above, i.e. it only copied the first 16K of the BIOS.

In effect, address_space_translate is expecting _something else_ to do the clamping for MMIO regions if the incoming length is large. This "something else" is memory_access_size in the case of address_space_rw, so use the same logic in cpu_physical_memory_write_rom_internal.

Reported-by: Alexander Graf <agraf@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Fixes: 965eb2fc
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 01 Jul 2015, 2 commits
-
-
Committed by Jan Kiszka

The MMIO case is further broken up into two cases: if the caller does not hold the BQL on invocation, the unlocked one takes or avoids the BQL depending on the locking strategy of the target memory region and its coalesced-MMIO handling. In this case, the caller should not hold _any_ lock (a friendly suggestion which is disregarded by virtio-scsi-dataplane).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-6-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

As memory_region_read/write_accessor will now also run without the BQL held, we need to move coalesced MMIO flushing earlier in the dispatch process.

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-5-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 19 Jun 2015, 2 commits
-
-
Committed by Paolo Bonzini

Because the clamping was done against the MemoryRegion, address_space_rw was effectively broken if a write spanned multiple sections that are not linear in underlying memory (with the memory not being under an IOMMU). This is visible with the MIPS rc4030 IOMMU, which is implemented as a series of alias memory regions that point to the actual RAM.

Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

It is common for MMIO registers to overlap, for example a 4-byte register at 0xcf8 (totally random choice... :)) and a 1-byte register at 0xcf9. If these registers are implemented via separate MemoryRegions, it is wrong to clamp the accesses, as the value written would be truncated. Hence for these regions the effects of commit 23820dbf (exec: Respect as_translate_internal length clamp, 2015-03-16, previously applied as commit c3c1bb99) must be skipped.

Tested-by: Hervé Poussineau <hpoussin@reactos.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
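A toy illustration (not QEMU code) of why the clamp must be skipped here: in the flattened view the 4-byte register at 0xcf8 is split at the 0xcf9 boundary, so clamping the access to the section length would hand the device a 1-byte write and truncate the value. MMIO devices instead expect the full-width access, dispatched by address and size alone.

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t config_address;   /* 4-byte register at 0xcf8 */
static uint8_t  reset_control;    /* 1-byte register at 0xcf9 */

/* Full-width dispatch based only on address and access size. */
static void mmio_write(uint64_t addr, uint64_t val, unsigned size)
{
    if (addr == 0xcf8 && size == 4) {
        config_address = (uint32_t)val;
    } else if (addr == 0xcf9 && size == 1) {
        reset_control = (uint8_t)val;
    }
}

int main(void)
{
    /* Must reach the device as one 4-byte write, not clamped to 1. */
    mmio_write(0xcf8, 0x80001234, 4);
    printf("CONFIG_ADDRESS = %08x, RESET = %02x\n",
           config_address, reset_control);
    return 0;
}
```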
-
- 12 Jun 2015, 2 commits
-
-
Committed by Dr. David Alan Gilbert

Check the return value of the function it calls, and error out if it is non-zero. Fix up qemu_rdma_init_one_block, which is the only current caller, and rdma_add_block, the only function it calls using it. Also pass the name of the ramblock to the function; this helps in debugging.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Michael R. Hines <mrhines@us.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela

We create optional sections with this patch. But we already have optional subsections; instead of having two mechanisms that do the same thing, we can just generalize it. For subsections we just change:
- Add a needed function to VMStateDescription
- Remove VMStateSubsection (after removal of the needed function it is just a VMStateDescription)
- Adjust the whole tree, moving the needed function to the corresponding VMStateDescription

Signed-off-by: Juan Quintela <quintela@redhat.com>
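A sketch of the generalized shape, with simplified stand-in types (QEMU's real VMStateDescription has many more fields): the per-subsection needed callback becomes part of VMStateDescription itself, so one test covers sections and subsections alike.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct VMStateDescription VMStateDescription;

struct VMStateDescription {
    const char *name;
    bool (*needed)(void *opaque);            /* NULL means "always" */
    const VMStateDescription **subsections;  /* NULL-terminated     */
};

/* One optionality test shared by sections and subsections. */
bool vmstate_needed(const VMStateDescription *vmsd, void *opaque)
{
    return vmsd->needed == NULL || vmsd->needed(opaque);
}
```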
-
- 05 Jun 2015, 7 commits
-
-
Committed by Stefan Hajnoczi

The cpu_physical_memory_reset_dirty() function is sometimes used together with cpu_physical_memory_get_dirty(). This is not atomic, since two separate accesses to the dirty memory bitmap are made. Turn cpu_physical_memory_reset_dirty() and cpu_physical_memory_clear_dirty_range_type() into the atomic cpu_physical_memory_test_and_clear_dirty().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-6-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
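The single-bit version of the primitive, as a C11 sketch (QEMU's function operates on ranges of its dirty bitmaps): the read and the clear happen in one atomic step, so a concurrent setter cannot slip in between them.

```c
#include <stdatomic.h>
#include <stdbool.h>

bool test_and_clear_dirty_bit(_Atomic unsigned long *word, unsigned bit)
{
    unsigned long mask = 1UL << bit;

    /* Atomically clear the bit and report whether it was set. */
    return atomic_fetch_and(word, ~mask) & mask;
}
```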
-
Committed by Paolo Bonzini

Most of the time not all bitmaps have to be marked as dirty; do not do anything if the interesting ones are already dirty. Previously, any clean bitmap would have caused all the bitmaps to be marked dirty. In fact, unless running TCG, most of the time bitmap operations need not be done at all, because memory_region_is_logging returns zero. In this case, skip the call to cpu_physical_memory_range_includes_clean altogether as well. With this patch, cpu_physical_memory_set_dirty_range is called unconditionally, so there need not be a separate call to xen_modified_memory anymore.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

This cuts in half the cost of bitmap operations (which will become more expensive when made atomic) during migration on non-VRAM regions.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

The is_cpu_write_access argument is always 0; remove it.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

The memory API can now return the exact set of bitmaps that have to be tracked. Use it instead of the in_migration variable. In the next patches, we will also use it to set only DIRTY_MEMORY_VGA or DIRTY_MEMORY_MIGRATION if necessary. This can make a difference for dataplane, especially after the dirty bitmap is changed to use more expensive atomic operations.

Of some interest is the change to stl_phys_notdirty. When migration was introduced, stl_phys_notdirty was changed to effectively behave as stl_phys during migration. In fact, if one looks at the function as it was in the beginning (commit 8df1cd07, physical memory access functions, 2005-01-28), at the time the dirty bitmap was the equivalent of DIRTY_MEMORY_CODE nowadays; hence, the function simply should not touch the dirty code bits. This patch changes it to do the intended thing.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Invoke xen_modified_memory from cpu_physical_memory_set_dirty_range_nocode; it is akin to DIRTY_MEMORY_MIGRATION, so set it together with that bitmap. The remaining call from invalidate_and_set_dirty's "else" branch will go away soon.

Second, fix the second argument to the function in the cpu_physical_memory_set_dirty_lebitmap call site. That function is only used by KVM, but it is better to be clean anyway.

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

phys_page_set_level is writing zeroes to a struct that has just been filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc whether to fill in the page "as a leaf" or "as a non-leaf".

memcpy is faster than struct assignment, which copies each bitfield individually (a compiler bug, https://gcc.gnu.org/PR66391); small memcpys like this one are special-cased anyway and optimized to a register move, so just use the memcpy.

This cuts the cost of phys_page_set_level from 25% to 5% when booting qboot.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
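A toy illustration of the point; the bitfield layout below is modelled loosely on exec.c's PhysPageEntry and is not exact. A struct of bitfields may be assigned field by field, while a small fixed-size memcpy typically compiles to a single register move.

```c
#include <string.h>

typedef struct PhysPageEntry {
    unsigned ptr  : 26;   /* index into the nodes/sections arrays */
    unsigned skip : 6;    /* levels skipped; 0 for a leaf */
} PhysPageEntry;

void copy_by_assignment(PhysPageEntry *dst, const PhysPageEntry *src)
{
    *dst = *src;                     /* may copy bitfields one by one */
}

void copy_by_memcpy(PhysPageEntry *dst, const PhysPageEntry *src)
{
    memcpy(dst, src, sizeof(*dst));  /* small constant size: one move */
}
```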
-
- 30 Apr 2015, 1 commit
-
-
Committed by Paolo Bonzini

Once address_space_translate is called outside the BQL, the returned MemoryRegion might disappear as soon as the RCU read-side critical section ends. Avoid this by moving the critical section to the callers.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1426684909-95030-3-git-send-email-pbonzini@redhat.com>
-