- 06 June 2015, 1 commit
-
-
Committed by Gerd Hoffmann

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
[Fix compilation of the newly introduced test. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 05 June 2015, 39 commits
-
-
Committed by Gerd Hoffmann

Once the SMRAM.D_LCK bit has been set by the guest, several bits in SMRAM and ESMRAMC become read-only until the next machine reset. Implement this by updating the wmask accordingly when the guest sets the lock bit. As the lock bit itself is locked down too, we don't need to worry about the guest clearing it.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
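As a rough illustration of the wmask update (a minimal sketch with illustrative register offsets and mask names, not the actual q35 code):

    #include "hw/pci/pci.h"

    /* Illustrative values, not QEMU's real q35 defines. */
    #define MCH_SMRAM          0x9d
    #define MCH_ESMRAMC        0x9e
    #define SMRAM_D_LCK        0x10
    #define SMRAM_WMASK_LCK    0x00   /* nothing stays writable once locked */
    #define ESMRAMC_WMASK_LCK  0x00

    /* Hedged sketch: once the guest sets D_LCK, shrink the writable-bit
     * masks so the locked bits stay read-only until mch_reset() runs. */
    static void mch_update_smram_wmask(PCIDevice *pd)
    {
        /* D_LCK itself is covered by the mask, so the guest can never
         * clear it again before a machine reset. */
        if (pd->config[MCH_SMRAM] & SMRAM_D_LCK) {
            pd->wmask[MCH_SMRAM]   = SMRAM_WMASK_LCK;
            pd->wmask[MCH_ESMRAMC] = ESMRAMC_WMASK_LCK;
        }
    }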
-
Committed by Gerd Hoffmann

Not all bits in SMRAM and ESMRAMC can be changed by the guest. Add wmask defines accordingly and set them in mch_reset().

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Gerd Hoffmann

The cache bits in ESMRAMC are hardcoded to 1 (= disabled) according to the q35 mch specs. Add and use a define with this default. While at it, also update the SMRAM default to use the name (no code change, just makes things a bit more readable).

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

When H_SMRAME is 1, low memory at 0xa0000 is left alone by SMM, and the chipset instead maps the 0xa0000-0xbffff window at 0xfeda0000-0xfedbffff. This affects both the "non-SMM" view controlled by D_OPEN and the SMM view controlled by G_SMRAME, so add two new MemoryRegions and toggle the enabled/disabled state of all four in mch_update_smram.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
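A minimal sketch of how such a remapped window can be built with the memory API (the helper and the owner argument are illustrative, not the actual q35 state layout):

    #include "exec/memory.h"

    /* Hedged sketch: alias the 0xa0000-0xbffff RAM window at 0xfeda0000
     * and leave it disabled until the guest sets H_SMRAME. */
    static void mch_init_high_smram(Object *owner, MemoryRegion *sysmem,
                                    MemoryRegion *ram, MemoryRegion *high_smram)
    {
        memory_region_init_alias(high_smram, owner, "high-smram",
                                 ram, 0xa0000, 0x20000);
        memory_region_add_subregion_overlap(sysmem, 0xfeda0000, high_smram, 1);
        memory_region_set_enabled(high_smram, false);  /* until H_SMRAME=1 */
    }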
-
Committed by Paolo Bonzini

It's easier to inline it now that most of its work is done by the CPU (rather than the chipset) through /machine/smram and the memory API.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Remove cpu_smm_register and cpu_smm_update. Instead, each CPU address space gets an extra region which is an alias of /machine/smram. This extra region is enabled or disabled as the CPU enters/exits SMM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
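A minimal sketch of that per-CPU arrangement (helper and field names are illustrative):

    #include "exec/memory.h"

    /* Hedged sketch: every CPU gets an alias of /machine/smram in its own
     * address space, visible only while that CPU is in SMM. */
    static void cpu_init_smram_alias(Object *cpu, MemoryRegion *machine_smram,
                                     MemoryRegion *smram_alias)
    {
        memory_region_init_alias(smram_alias, cpu, "smram",
                                 machine_smram, 0, UINT64_MAX);
        memory_region_set_enabled(smram_alias, false);  /* not in SMM yet */
    }

    /* toggled on every SMM entry and exit */
    static void cpu_set_in_smm_sketch(MemoryRegion *smram_alias, bool in_smm)
    {
        memory_region_set_enabled(smram_alias, in_smm);
    }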
-
Committed by Paolo Bonzini

This region is exported at /machine/smram. It is "empty" if SMRAME=0 and points to SMRAM if SMRAME=1. The CPU will enable/disable it as it enters or exits SMM. While touching nearby code, fix a slight inconsistency in the existing memory region setup: the smram_region is *disabled* in order to open SMRAM (because the smram_region shows the low VRAM instead of the RAM at 0xa0000). Because SMRAM is closed at startup, the smram_region must be enabled when creating the i440fx or q35 devices.

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Different CPUs can be in SMM or not at the same time, so they will see different contents wherever the chipset places SMRAM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

If a machine_init_done notifier is added late, as part of a hot-plugged device, run it immediately.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
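A minimal sketch of that behavior, assuming a machine_init_done flag next to the existing notifier list (the flag name is illustrative):

    #include "qemu/notify.h"

    static NotifierList machine_init_done_notifiers =
        NOTIFIER_LIST_INITIALIZER(machine_init_done_notifiers);
    static bool machine_init_done;

    /* Hedged sketch: a notifier added after init has finished would
     * otherwise never fire, so run it right away. */
    void qemu_add_machine_init_done_notifier(Notifier *notify)
    {
        notifier_list_add(&machine_init_done_notifiers, notify);
        if (machine_init_done) {
            notify->notify(notify, NULL);
        }
    }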
-
Committed by Paolo Bonzini

Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

-global does not work for drivers that have a dot in their name, such as cfi.pflash01. This is just a parsing limitation, because such globals can easily be declared inside a -readconfig file. To allow this usage, support the full QemuOpts key/value syntax for -global too, for example "-global driver=cfi.pflash01,property=secure,value=on". The two formats do not conflict, because the key/value syntax does not have a period before the first equal sign.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

When this property is set, MMIO accesses are only allowed with the MEMTXATTRS_SECURE attribute. This is used for secure access to UEFI variables stored in flash.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

This is a required step to implement read_with_attrs and write_with_attrs.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Make this consistent with the secure property, added in the next patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

An SMI should definitely wake up a processor in halted state! This lets OVMF boot with SMM on multiprocessor systems, although it halts very soon after that with a "CpuIndex != BspIndex" assertion failure.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
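A minimal sketch of the wake-up condition (the surrounding interrupt checks are elided; the helper name is illustrative):

    #include "qom/cpu.h"

    /* Hedged sketch: a halted x86 CPU must still report pending work when
     * an SMI is latched, so the main loop wakes it up to enter SMM. */
    static bool x86_cpu_has_work_sketch(CPUState *cs)
    {
        if (cs->interrupt_request & CPU_INTERRUPT_SMI) {
            return true;
        }
        /* ... the existing INTR/NMI/etc. checks continue here ... */
        return false;
    }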
-
Committed by Paolo Bonzini

Because bits 31:20 of the limit field are 1, G should be 1 as well. VMX actually enforces this; let's do it in QEMU too, for completeness.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

QEMU is not blocking NMIs on entry to SMM. Implementing this has to cover a few corner cases, because:

- NMIs can then be enabled by an IRET instruction, and there is no mechanism to _set_ the "NMIs masked" flag on exit from SMM: "A special case can occur if an SMI handler nests inside an NMI handler and then another NMI occurs. [...] When the processor enters SMM while executing an NMI handler, the processor saves the SMRAM state save map but does not save the attribute to keep NMI interrupts disabled."

- However, there is some hidden state, because "If NMIs were blocked before the SMI occurred [and no IRET is executed while in SMM], they are blocked after execution of RSM." This is represented by the new HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_ on exit from RSM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
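A minimal sketch of the flag handling described above (the helper names are illustrative; HF2_SMM_INSIDE_NMI_MASK follows the commit's naming):

    #include "cpu.h" /* target-i386 */

    /* Hedged sketch of the NMI-masking bookkeeping on SMM entry. */
    static void smm_enter_nmi_sketch(CPUX86State *env)
    {
        if (env->hflags2 & HF2_NMI_MASK) {
            /* we interrupted an NMI handler: remember the hidden state */
            env->hflags2 |= HF2_SMM_INSIDE_NMI_MASK;
        } else {
            env->hflags2 |= HF2_NMI_MASK;  /* block NMIs while in SMM */
        }
    }

    static void rsm_nmi_sketch(CPUX86State *env)
    {
        /* unless we nested inside an NMI handler (and no IRET ran in
         * SMM), RSM leaves NMIs unblocked */
        if (!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK)) {
            env->hflags2 &= ~HF2_NMI_MASK;
        }
        env->hflags2 &= ~HF2_SMM_INSIDE_NMI_MASK;
    }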
-
Committed by Paolo Bonzini

In order to do this, stop using the cpu_in*/out* helpers and instead access address_space_io directly. cpu_in* and cpu_out* remain for use in the monitor, in qtest, and in Xen.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
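A minimal sketch of the direct form (attribute plumbing and error handling simplified; the helper names are illustrative):

    #include "exec/address-spaces.h"
    #include "exec/memory.h"

    /* Hedged sketch: issue port I/O through address_space_io with explicit
     * transaction attributes instead of the cpu_inb()/cpu_outb() helpers. */
    static void outb_sketch(uint16_t port, uint8_t val, MemTxAttrs attrs)
    {
        address_space_stb(&address_space_io, port, val, attrs, NULL);
    }

    static uint8_t inb_sketch(uint16_t port, MemTxAttrs attrs)
    {
        return address_space_ldub(&address_space_io, port, attrs, NULL);
    }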
-
Committed by Paolo Bonzini

These include page table walks, SVM accesses and SMM state save accesses. The bulk of the patch is obtained with

    sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
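A minimal sketch of one such wrapper (the commit's real helpers also thread x86 memory attributes through, which is elided here):

    #include "exec/memory.h"

    static inline uint32_t x86_ldl_phys_sketch(CPUState *cs, hwaddr addr)
    {
        /* go through the CPU's own address space, so SMM code sees SMRAM */
        return address_space_ldl(cs->as, addr, MEMTXATTRS_UNSPECIFIED, NULL);
    }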
-
Committed by Paolo Bonzini

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Victor CLEMENT

While QEMU is running in sleep=no mode, print a warning when no timer deadline is set. Since this mode is intended to provide deterministic virtual time, that determinism is broken if no timer is set on the virtual clock.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-4-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Victor CLEMENT

The 'sleep' parameter sets the icount_sleep mode, which is enabled by default. To disable it, add the 'sleep=no' parameter (or 'nosleep') to the qemu -icount option.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-3-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Victor CLEMENT

When the icount_sleep mode is disabled, the QEMU_VIRTUAL_CLOCK runs at the maximum possible speed by warping the sleep times of the virtual cpu to the soonest clock deadline. The virtual clock is then updated only according to the instruction counter.

Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr>
Message-Id: <1432912446-9811-2-git-send-email-victor.clement@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
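A rough sketch of the warp idea, not the actual icount code (the bias variable and the exact use of the clock helpers are assumptions):

    #include "qemu/timer.h"

    static int64_t icount_bias_sketch;

    /* Hedged sketch: with icount_sleep disabled, jump the virtual clock
     * straight to the next QEMU_CLOCK_VIRTUAL deadline instead of
     * sleeping for that long in real time. */
    static void icount_warp_no_sleep_sketch(void)
    {
        int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);

        if (deadline > 0) {
            icount_bias_sketch += deadline;        /* warp virtual time */
            qemu_clock_notify(QEMU_CLOCK_VIRTUAL); /* let timers fire now */
        }
    }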
-
Committed by Paolo Bonzini

mr->terminates alone doesn't guarantee that we are looking at a RAM region; mr->ram_addr also has to be checked in order to distinguish RAM from I/O regions. So, do the following: 1) add a new define RAM_ADDR_INVALID and test it in the assertions instead of mr->terminates; 2) since IOMMU regions were not setting mr->ram_addr to a bogus value, initialize it in the instance_init function so that the new assertions fire for IOMMU regions as well.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
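A minimal sketch of the sentinel and the tightened assertion (mr->ram_addr was a MemoryRegion field at the time; the helper is illustrative):

    #include <assert.h>
    #include "exec/memory.h"

    #define RAM_ADDR_INVALID (~(ram_addr_t)0)

    static void assert_mr_is_ram_sketch(MemoryRegion *mr)
    {
        /* mr->terminates is also true for I/O regions; only a valid
         * ram_addr identifies real RAM, and IOMMU regions now initialize
         * it to RAM_ADDR_INVALID so they trip this assertion as well. */
        assert(mr->ram_addr != RAM_ADDR_INVALID);
    }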
-
Committed by Stefan Hajnoczi

The fast path of cpu_physical_memory_sync_dirty_bitmap() directly manipulates the dirty bitmap. Use atomic_xchg() to make the test-and-clear atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-7-git-send-email-stefanha@redhat.com>
[Only do xchg on nonzero words. - Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
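A minimal sketch of the fast path at word granularity (names are illustrative):

    #include "qemu/atomic.h"

    /* Hedged sketch: atomically test-and-clear whole bitmap words, and
     * only pay for the xchg (and its implied barrier) on dirty words. */
    static void sync_dirty_words_sketch(unsigned long *dirty,
                                        unsigned long *dest, long words)
    {
        for (long k = 0; k < words; k++) {
            if (dirty[k]) {                    /* only xchg nonzero words */
                dest[k] |= atomic_xchg(&dirty[k], 0);
            }
        }
    }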
-
Committed by Stefan Hajnoczi

The cpu_physical_memory_reset_dirty() function is sometimes used together with cpu_physical_memory_get_dirty(). This is not atomic, since two separate accesses to the dirty memory bitmap are made. Turn cpu_physical_memory_reset_dirty() and cpu_physical_memory_clear_dirty_range_type() into the atomic cpu_physical_memory_test_and_clear_dirty().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-6-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Stefan Hajnoczi

The dirty memory bitmap is managed by ram_addr.h and copied to migration_bitmap[] periodically during live migration. Move the code that syncs the bitmap to ram_addr.h, where related code lives.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-5-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Stefan Hajnoczi

Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads can dirty memory without race conditions.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-4-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Stefan Hajnoczi

The new bitmap_test_and_clear_atomic() function clears a range and returns whether or not the bits were set.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-3-git-send-email-stefanha@redhat.com>
[Test before xchg; then a full barrier is needed at the end, just like in the previous patch. The barrier can be avoided if we did at least one xchg. - Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
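A minimal sketch of the test-before-xchg idea at word granularity (the real function also handles unaligned head and tail bits):

    #include "qemu/atomic.h"

    static bool test_and_clear_words_sketch(unsigned long *map, long words)
    {
        bool dirty = false;

        for (long k = 0; k < words; k++) {
            if (map[k]) {               /* test before the expensive xchg */
                dirty = true;
                atomic_xchg(&map[k], 0);
            }
        }
        if (!dirty) {
            smp_mb();  /* no xchg ran, so issue the full barrier ourselves */
        }
        return dirty;
    }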
-
Committed by Stefan Hajnoczi

Use atomic_or() for atomic bitmaps where several threads may set bits at the same time. This avoids the race condition between threads loading an element, bitwise ORing, and then storing the element. When setting all bits in a word, we can avoid atomic ops and instead just use an smp_mb() at the end. Most bitmap users don't need atomicity, so introduce new functions.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-2-git-send-email-stefanha@redhat.com>
[Avoid barrier in the single word case, use full barrier instead of write. - Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
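A hedged sketch modeled on the strategy this describes (the real bitmap_set_atomic in QEMU may differ in details): atomic_or() for the partial first and last words that other threads may also touch, plain stores for whole words, and one full barrier at the end.

    #include "qemu/atomic.h"
    #include "qemu/bitmap.h"

    static void bitmap_set_atomic_sketch(unsigned long *map, long start, long nr)
    {
        unsigned long *p = map + BIT_WORD(start);
        const long size = start + nr;
        int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
        unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);

        /* first (possibly partial) word: other threads may race here */
        if (nr - bits_to_set > 0) {
            atomic_or(p, mask_to_set);
            nr -= bits_to_set;
            bits_to_set = BITS_PER_LONG;
            mask_to_set = ~0UL;
            p++;
        }

        /* whole words: plain stores, ordered by the barrier below */
        if (bits_to_set == BITS_PER_LONG) {
            while (nr >= BITS_PER_LONG) {
                *p++ = ~0UL;
                nr -= BITS_PER_LONG;
            }
        }

        /* last word */
        if (nr) {
            mask_to_set &= BITMAP_LAST_WORD_MASK(size);
            atomic_or(p, mask_to_set);
        } else {
            /* no trailing atomic_or() ran, so pay for the barrier here */
            smp_mb();
        }
    }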
-
Committed by Paolo Bonzini

cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the DIRTY_MEMORY_CODE bitmap. This, however, is unused unless TCG is enabled.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Most of the time, not all bitmaps have to be marked as dirty; do not do anything if the interesting ones are already dirty. Previously, any clean bitmap would have caused all the bitmaps to be marked dirty. In fact, unless running TCG, most of the time bitmap operations need not be done at all, because memory_region_is_logging returns zero. In this case, skip the call to cpu_physical_memory_range_includes_clean altogether as well. With this patch, cpu_physical_memory_set_dirty_range is called unconditionally, so a separate call to xen_modified_memory is no longer needed.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

While it is obvious that cpu_physical_memory_get_dirty returns true even if a single page is dirty, the same is not true for cpu_physical_memory_get_clean; one would expect that it returns true only if all the pages are clean, but it actually looks for even one clean page. (By contrast, the caller of that function, cpu_physical_memory_range_includes_clean, has a good name.) To clarify, rename the function to cpu_physical_memory_all_dirty and return true if _all_ the pages are dirty. This is the opposite of the previous meaning, because "all are 1" is the same as "not (any is 0)", so we have to modify cpu_physical_memory_range_includes_clean as well.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
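A minimal sketch of the relationship after the rename (the signature here is simplified): "the range includes a clean page" is exactly "not all pages are dirty".

    static bool range_includes_clean_sketch(ram_addr_t start, ram_addr_t length,
                                            unsigned client)
    {
        return !cpu_physical_memory_all_dirty(start, length, client);
    }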
-
Committed by Paolo Bonzini

This cuts in half the cost of bitmap operations (which will become more expensive when made atomic) during migration on non-VRAM regions.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

is_cpu_write_access is only set if tb_invalidate_phys_page_range is called from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write. However:

- the code bitmap can be built directly in tb_invalidate_phys_page_fast (unconditionally, since is_cpu_write_access would always be passed as 1);

- the virtual address is not needed to mark the page as "not containing code" (dirty code bitmap = 1), so we can also remove that use of is_cpu_write_access.

For calls of tb_invalidate_phys_page_range that do not come from notdirty_mem_write, the next call to notdirty_mem_write will notice that the page does not contain code anymore and will fix up the TLB entry. The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

These days, modification of the TLB is done in notdirty_mem_write, so the virtual address and env pointer are unnecessary. The new name of the function, tlb_unprotect_code, is consistent with tlb_protect_code.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

The is_cpu_write_access argument is always 0; remove it.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Remove them from the sundry exec-all.h header, since they are only used by the TCG runtime in exec.c and user-exec.c.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

The memory API can now return the exact set of bitmaps that have to be tracked. Use it instead of the in_migration variable. In the next patches, we will also use it to set only DIRTY_MEMORY_VGA or DIRTY_MEMORY_MIGRATION if necessary. This can make a difference for dataplane, especially after the dirty bitmap is changed to use more expensive atomic operations.

Of some interest is the change to stl_phys_notdirty. When migration was introduced, stl_phys_notdirty was changed to effectively behave as stl_phys during migration. In fact, if one looks at the function as it was in the beginning (commit 8df1cd07, "physical memory access functions", 2005-01-28), at the time the dirty bitmap was the equivalent of today's DIRTY_MEMORY_CODE; hence, the function simply should not touch the dirty code bits. This patch changes it to do the intended thing.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-