- 07 Nov 2018, 1 commit

By Rudolf Marek

Fix the SYSCALL instruction in 64-bit (long) mode: the RF flag should be cleared in R11 as well as in RFLAGS. Intel and AMD CPUs behave the same here; AMD documents this in the APM vol. 3.

Signed-off-by: Roman Kapl <rka@sysgo.com>
Signed-off-by: Rudolf Marek <rudolf.marek@sysgo.com>
Message-Id: <20181019122449.26387-1-rka@sysgo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
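A minimal standalone sketch of the corrected flag handling (toy types and masks, not QEMU's actual structures):

```c
#include <stdint.h>

#define RF_MASK 0x00010000ULL   /* RFLAGS.RF, bit 16 */

typedef struct {
    uint64_t rflags;
    uint64_t r11;
    uint64_t fmask;  /* IA32_FMASK MSR: flags to clear on SYSCALL */
} ToyX86State;

/* On SYSCALL in long mode, RFLAGS is saved to R11 with RF cleared,
 * and RF is also cleared in the live RFLAGS along with the FMASK bits. */
static void toy_syscall_save_flags(ToyX86State *env)
{
    env->r11 = env->rflags & ~RF_MASK;     /* the fix: RF cleared in R11 too */
    env->rflags &= ~(env->fmask | RF_MASK);
}
```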
-
- 03 Oct 2018, 2 commits

By Paolo Bonzini

This flag will be used for KVM's nested VMX migration; the HF_GUEST_MASK name is already used in KVM, so adopt it in QEMU as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini

Interrupt handling depends on various flags in env->hflags or env->hflags2, and the exact details were not replicated identically between x86_cpu_has_work and x86_cpu_exec_interrupt. Create a new function that extracts the highest-priority non-masked interrupt, and use it in both functions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
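The shape of such a helper, with hypothetical names and masks (QEMU's real priority list is longer, covering SMI, NMI, hard interrupts, virtual interrupts, and more):

```c
#include <stdbool.h>

enum {
    TOY_INTERRUPT_NONE = 0,
    TOY_INTERRUPT_NMI  = 1,
    TOY_INTERRUPT_HARD = 2,
};

/* Return the highest-priority interrupt that is pending and not masked. */
static int toy_pending_interrupt(int request, bool if_set, bool nmi_blocked)
{
    if ((request & TOY_INTERRUPT_NMI) && !nmi_blocked) {
        return TOY_INTERRUPT_NMI;    /* NMI outranks maskable interrupts */
    }
    if ((request & TOY_INTERRUPT_HARD) && if_set) {
        return TOY_INTERRUPT_HARD;   /* maskable, gated on RFLAGS.IF */
    }
    return TOY_INTERRUPT_NONE;
}
```

Because both callers ask this one function, has_work and exec_interrupt can no longer disagree about which interrupt is deliverable.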
-
- 24 Aug 2018, 2 commits

By Andrew Oates

The current implementation has three bugs:

* segment limits are not enforced in protected mode if the L bit is set in the target segment descriptor
* segment limits are not enforced in compatibility mode (ljmp to a 32-bit code segment in long mode)
* #GP(new_cs) is generated rather than #GP(0)

Now segment limits are enforced if we're not in long mode OR the target code segment doesn't have the L bit set.

Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180816011903.39816-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
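In sketch form (hypothetical helper, simplified arguments), the corrected predicate:

```c
#include <stdbool.h>
#include <stdint.h>

#define DESC_L_MASK (1u << 21)   /* 'L' bit in the descriptor's second dword */

/* Enforce the 32-bit segment limit unless we are in long mode AND the
 * target is a 64-bit (L=1) code segment; on violation raise #GP(0). */
static bool toy_limit_violated(bool long_mode, uint32_t e2,
                               uint64_t new_eip, uint32_t limit)
{
    if (long_mode && (e2 & DESC_L_MASK)) {
        return false;            /* 64-bit code segment: no limit check */
    }
    return new_eip > limit;      /* caller raises #GP(0), not #GP(new_cs) */
}
```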
-
By Andrew Oates

Currently call gates are always treated as 32-bit gates. In IA-32e mode (either compatibility or 64-bit submode), system segment descriptors are always 64-bit. Treating them as 32-bit has the expected unfortunate effects: only the lower 32 bits of the offset are loaded, the stack pointer is truncated, a bad new stack pointer is loaded from the TSS (if switching privilege levels), etc. This change adds support for 64-bit call gates to the lcall and ljmp instructions. Additionally, there should be a check for non-canonical stack pointers, but that is omitted since there don't seem to be checks for non-canonical addresses elsewhere in this code. The raise_exception_err_ra lines are left unwrapped at 80 columns to match the style in the rest of the file.

Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180819181725.34098-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
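For reference, a 64-bit gate descriptor occupies 16 bytes, with the target offset split across three fields; a sketch of reassembling it (field naming follows the usual dword-pair convention, not necessarily QEMU's variables):

```c
#include <stdint.h>

/* e1/e2 are the low 8 bytes of the gate descriptor, e3 the third dword
 * of the high 8 bytes, following the usual two-dwords-at-a-time split. */
static uint64_t toy_gate_offset64(uint32_t e1, uint32_t e2, uint32_t e3)
{
    return (e1 & 0xffffu)          /* offset[15:0]  */
         | (e2 & 0xffff0000u)      /* offset[31:16] */
         | ((uint64_t)e3 << 32);   /* offset[63:32], only in IA-32e gates */
}
```

Dropping the `e3` term is exactly the truncation bug described above: only the lower 32 bits of the target survive.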
-
- 29 Jun 2018, 1 commit

By Jan Kiszka

Check for SVM interception prior to injecting an NMI. Tested via the Jailhouse hypervisor.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <c65877e9a011ee4962931287e59f502c482b8d0b.1522769774.git.jan.kiszka@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
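A toy model of the control flow (names and the intercept encoding are illustrative; the real code unwinds through QEMU's cpu_loop_exit machinery rather than setjmp/longjmp):

```c
#include <setjmp.h>

enum { SVM_EXIT_NMI = 0x61 };       /* real SVM exit code for NMI */

typedef struct {
    int intercepts;                 /* toy bitmask of intercepted events */
    jmp_buf vmexit;                 /* unwinds into #VMEXIT delivery */
} ToyEnv;

/* May not return: an intercepted event unwinds straight to #VMEXIT. */
static void toy_svm_check_intercept(ToyEnv *env, int exit_code)
{
    if (env->intercepts & (1 << 0)) {   /* toy "NMI intercepted" bit */
        longjmp(env->vmexit, exit_code);
    }
}

static void toy_inject_nmi(ToyEnv *env)
{
    toy_svm_check_intercept(env, SVM_EXIT_NMI);  /* check BEFORE injecting */
    /* ... only now commit NMI-blocking state and deliver the NMI ... */
}
```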
-
- 04 Jul 2017, 2 commits

By Paolo Bonzini

Move the handling of conforming code segments before the handling of the stack switch. Because dpl == cpl after the new "if", it is now unnecessary to check the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked slightly above the modified code, so the final "else" is unreachable and can be removed.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Wu Xiang

In do_interrupt64(), when the interrupt stack table (IST) is enabled and the target code segment is conforming (e2 & DESC_C_MASK), the old implementation always set the new CPL to 0 and SS.RPL to 0. This is incorrect: when CPL-3 code accesses a CPL-0 conforming code segment, the CPL should remain unchanged, otherwise higher-privileged code can be compromised. The patch fixes this by always setting dpl = cpl when the target code segment is conforming, and by passing the correct new CPL in the last parameter, `flags`, of cpu_x86_load_seg_cache().

Signed-off-by: Wu Xiang <willx8@gmail.com>
Message-Id: <20170621142152.GA18094@wxdeubuntu.ipads-lab.se.sjtu.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
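The crux of the fix in sketch form (simplified; the real code also feeds the result into the selector's RPL and the cached segment flags):

```c
#define DESC_C_MASK (1u << 10)   /* conforming bit in a code segment's e2 */

/* For a conforming target the privilege level does not change: the new
 * CPL (and SS.RPL) must stay at the old CPL, not drop to 0. */
static int toy_interrupt_target_cpl(unsigned int e2, int cpl, int dpl)
{
    if (e2 & DESC_C_MASK) {
        return cpl;              /* conforming: keep current CPL */
    }
    return dpl;                  /* non-conforming: switch to segment DPL */
}
```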
-
- 09 Mar 2017, 1 commit

By Paolo Bonzini

Paths through the softmmu code during code generation now need to be audited for double locking of tb_lock. In particular, VMEXIT can take tb_lock through cpu_vmexit -> cpu_x86_update_cr4 -> tlb_flush. To avoid this, split VMEXIT delivery in two parts, similar to what is done with exceptions: cpu_vmexit only records the VMEXIT exit code and information, and cc->do_interrupt can then deliver it when it is safe to take the lock.

Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
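A sketch of the two-phase pattern with hypothetical field names:

```c
#include <stdint.h>

enum { TOY_EXCP_VMEXIT = 0x100 };   /* internal index, beyond real vectors */

typedef struct {
    int      exception_index;
    uint32_t vmexit_code;
    uint64_t vmexit_info;
} ToyEnv;

/* Phase 1: record only -- safe to call from code-generation paths. */
static void toy_cpu_vmexit(ToyEnv *env, uint32_t code, uint64_t info)
{
    env->vmexit_code = code;
    env->vmexit_info = info;
    env->exception_index = TOY_EXCP_VMEXIT;   /* picked up by do_interrupt */
}

/* Phase 2: called from cc->do_interrupt, where taking tb_lock is safe. */
static void toy_do_vmexit(ToyEnv *env)
{
    /* ... write exit code/info to the VMCB, reload host state, flush ... */
    (void)env;
}
```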
-
- 17 Feb 2017, 1 commit

By Paolo Bonzini

Commit 2afbdf84 ("target-i386: exception handling for memory helpers", 2015-09-15) changed tlb_fill's cpu_restore_state+raise_exception_err to raise_exception_err_ra. After this change, cpu_restore_state and raise_exception_err's cpu_loop_exit are merged into raise_exception_err_ra's cpu_loop_exit_restore. This actually fixed some bugs, but when SVM is enabled there is a second path from raise_exception_err_ra to cpu_loop_exit: the VMEXIT path, where cpu_vmexit is now called without a preceding cpu_restore_state. The fix is to pass the retaddr to cpu_vmexit (via cpu_svm_check_intercept_param). All helpers can now use GETPC() to pass the correct retaddr, too.

Cc: qemu-stable@nongnu.org
Fixes: 2afbdf84
Reported-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Tested-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
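The pattern, sketched with toy names — GETPC() is QEMU's real macro for a helper's host return address, expanding to (uintptr_t)__builtin_return_address(0); the rest is illustrative:

```c
#include <stdint.h>

/* A helper captures its return address once, at the outermost level,
 * and threads it down so a #VMEXIT can restore guest state precisely. */
static void toy_svm_check_intercept(void *env, uint32_t type,
                                    uint64_t param, uintptr_t retaddr)
{
    /* if the event is intercepted: cpu_restore_state(..., retaddr),
     * then unwind into #VMEXIT delivery with consistent guest state */
    (void)env; (void)type; (void)param; (void)retaddr;
}

void toy_helper_invlpg(void *env, uint64_t addr)
{
    /* capture the return address here, not deeper in the call chain */
    toy_svm_check_intercept(env, 0x79 /* SVM_EXIT_INVLPG */, addr,
                            (uintptr_t)__builtin_return_address(0));
}
```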
-
- 28 Jan 2017, 1 commit

By Pavel Dovgalyuk

This patch improves interrupt handling in record/replay mode. Now the "interrupt" event is saved only when cc->cpu_exec_interrupt returns true. This patch also adds a missing return to the cpu_exec_interrupt function.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20170124071708.4572.64023.stgit@PASHA-ISP>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 21 Dec 2016, 1 commit

By Thomas Huth

We currently have 18 architectures in QEMU, and thus 18 target-xxx folders in the root folder of the QEMU source tree. More architectures (e.g. RISC-V, AVR) are likely to be included soon, too, so the main folder of the QEMU sources is slowly getting quite overcrowded with target-xxx folders. To unburden the main folder a little, move the target-xxx folders into a dedicated target/ folder, so that target-xxx/ simply becomes target/xxx/.

Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris & µblaze part]
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- 15 Sep 2016, 1 commit

By Stanislav Shmarov

In user-mode emulation, env->idt.base memory is allocated in linux-user/main.c with size 8*512 = 4096 (for 64-bit). When the fake interrupt EXCP_SYSCALL is raised, do_interrupt_user checks the destination privilege level for this fake exception and tries to read 4 bytes at address base + (256 * 2^4) = 4096, which causes a segfault. The privilege level was only checked for INT instructions, so read dpl from memory only in that case.

Signed-off-by: Stanislav Shmarov <snarpix@gmail.com>
Message-Id: <1473773008-2588376-1-git-send-email-snarpix@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
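The shape of the fix (hypothetical accessors; the real code reads the gate from env->idt.base):

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t toy_read_gate_e2(int intno);   /* hypothetical accessor */
extern void toy_raise_gp(int error_code);      /* hypothetical #GP raiser */

/* DPL lives in bits 13-14 of the gate's second dword. Reading it is
 * only valid for a real INT n; a fake exception like EXCP_SYSCALL has
 * no gate behind it and would index past the 4096-byte table. */
static void toy_check_dpl(bool is_int, int intno, int cpl)
{
    if (!is_int) {
        return;                            /* fake traps: nothing to read */
    }
    uint32_t e2 = toy_read_gate_e2(intno);
    int dpl = (e2 >> 13) & 3;
    if (dpl < cpl) {
        toy_raise_gp((intno << 3) | 2);    /* #GP(selector | IDT bit) */
    }
}
```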
-
- 09 Jun 2016, 1 commit

By Peter Maydell

Add a comment to do_interrupt_user() along the same lines as the existing one for do_interrupt_all(), noting that the next_eip argument is not used unless is_int is true or intno is EXCP_SYSCALL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Message-id: 1463494687-25947-6-git-send-email-peter.maydell@linaro.org
-
- 19 May 2016, 1 commit

By Paolo Bonzini

exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 09 Feb 2016, 1 commit

By Richard Henderson

Use gen_lea_v_seg for centralized knowledge of the segment base. Unify the code across 32- and 64-bit. Fix the note about "must save state" before using the out-of-line helpers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1450379966-28198-8-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 Feb 2016, 1 commit

By Paolo Bonzini

Split the bits that require it into exec/log.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 29 Jan 2016, 1 commit

By Peter Maydell

Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-11-git-send-email-peter.maydell@linaro.org
-
- 23 Oct 2015, 1 commit

By Richard Henderson

This moves the last of the iteration over breakpoints into the bpt_helper.c file, which also allows us to make several breakpoint functions static.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
- 25 Sep 2015, 1 commit

By Pavel Dovgalyuk

This patch updates the x86_cpu_exec_interrupt function. It could process two interrupt requests at a time (poll and another one), which made its execution non-deterministic. Determinism is required for recorded icount execution.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20150917162410.8676.13042.stgit@PASHA-ISP.def.inno>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 16 Sep 2015, 1 commit

By Pavel Dovgalyuk

This patch fixes exception handling for the seg_helper functions.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 05 Jun 2015, 1 commit

By Paolo Bonzini

These include page table walks, SVM accesses and SMM state save accesses. The bulk of the patch is obtained with

    sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 10 Mar 2015, 1 commit

By Bill Paul

According to my reading of the Intel documentation, the SYSRET instruction is supposed to force the RPL bits of the %ss register to 3 when returning to user mode. The actual sequence is:

    SS.Selector <-- (IA32_STAR[63:48]+8) OR 3; (* RPL forced to 3 *)

However, the code in helper_sysret() leaves them at 0 (in other words, the "OR 3" part of the above sequence is missing). It does set the privilege level bits of %cs correctly, though. This has caused me trouble with some of my VxWorks development: code that runs okay on real hardware will crash on QEMU unless I apply the patch below.

Signed-off-by: Bill Paul <wpaul@windriver.com>
Message-Id: <201503091548.01462.wpaul@windriver.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
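A one-line sketch of the restored computation (per the STAR layout in the AMD/Intel manuals; toy function, not QEMU's code):

```c
#include <stdint.h>

/* SYSRET to 64-bit user mode: SS.Selector = IA32_STAR[63:48] + 8,
 * with RPL forced to 3 (CS gets STAR[63:48] + 16, likewise RPL 3). */
static uint16_t toy_sysret_ss_selector(uint64_t star)
{
    return (uint16_t)(((star >> 48) + 8) | 3);   /* the missing "OR 3" */
}
```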
-
- 20 Jan 2015, 1 commit

By Peter Maydell

Use inline functions rather than macros for the cpu_ld/st accessors in the *-user configurations, as we already do for softmmu. This has two advantages:

* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere

Since the _kernel functions were only used by target-i386/seg_helper.c, put their definitions in that file too. (It already has the similar template include code to define them for the softmmu case, so it makes sense to have it deal with defining them for user-only.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-12-git-send-email-peter.maydell@linaro.org
-
- 12 Nov 2014, 1 commit

By Paolo Bonzini

ist != 0 is already checked in the first "if", so it cannot be true in the "else if" part. While at it, simplify the code and move the ESP alignment out of the conditionals. Reported by Coverity.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 26 Sep 2014, 1 commit

By Richard Henderson

Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-23-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 22 Aug 2014, 1 commit

By Jincheng Miao

Currently the syscall instruction is buggy in user-mode x86_64: EIP is updated after do_syscall(), which is too late for clone(). clone() creates a thread that starts at env->EIP (the address of the syscall instruction), so the child thread enters do_syscall() again, which is not expected and sometimes tragic. User-mode syscall emulation does not use the MSRs, so the behaviour should match INT 0x80, which updates EIP in do_interrupt(); do the same for syscall for consistency.

Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
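The ordering, sketched with toy names (the real change lives in the linux-user interrupt path):

```c
#include <stdint.h>

typedef struct { uint64_t eip; } ToyEnv;

/* SYSCALL is the 2-byte opcode 0F 05: advance EIP past it BEFORE
 * dispatching, so a child created by clone() resumes after the
 * instruction instead of re-entering the syscall at env->eip. */
static void toy_handle_syscall(ToyEnv *env)
{
    env->eip += 2;          /* matches the INT 0x80 path's timing */
    /* do_syscall(env, ...); a clone() child inherits the updated EIP */
}
```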
-
- 05 Jun 2014, 3 commits

By Paolo Bonzini

With SMAP, implicit kernel accesses from user mode always behave as if AC=0. To achieve this, kernel mode is no longer a separate MMU mode; instead, KERNEL_IDX is renamed to KSMAP_IDX and the kernel-mode accessors wrap KSMAP_IDX and KNOSMAP_IDX.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini

Prepare for adding _kernel accessors there in the next patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini

This will soon collect all load and store helpers. For now it is just a replacement for softmmu_exec.h, which this patch stops including directly; it is also included where it will be necessary, in order to simplify the next patch.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 29 May 2014, 1 commit

By Richard Henderson

Rather than include helper.h with N values of GEN_HELPER, include a secondary file that sets up the macros to include helper.h. This minimizes the files that must be rebuilt when changing the macros for file N.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 22 May 2014, 4 commits

By Paolo Bonzini

There is no reason to keep that out of the function. The comment refers to the disassembler's cc_op state rather than the CPUState field.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini

During a task switch, CS.DPL, CS.RPL and SS.DPL must all match (in addition to all the other requirements) and will become the new CPL. So far this worked by carefully setting the CS selector and flags before doing the task switch, but this will not work once we get the CPL from SS.DPL. Temporarily assume that the CPL comes from CS.RPL during a task switch to a protected-mode task, until the descriptor of SS is loaded.

Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini

With the next patch, these need to be correct or VM86 tasks get the wrong CPL. The flags are basically what the Intel VMX documentation says is mandatory for entry into a VM86 guest. For consistency, SMM ought to have the same flags except with CPL=0.

Tested-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Kevin O'Connor

Commit fd460606 moved the setting of eflags above the calls to cpu_x86_load_seg_cache() in seg_helper.c. Unfortunately, in do_interrupt_protected() this moved the clearing of VM_MASK above a test for it. Fix this regression by storing the value of VM_MASK at the start of do_interrupt_protected().

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 13 May 2014, 2 commits

By Kevin O'Connor

Instead of manually calling cpu_x86_set_cpl() when the CPL changes, check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS). Every location that called cpu_x86_set_cpl() also called cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer required. This fixes the SMM handler code, which was not setting/restoring the CPL level manually.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
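A sketch of deriving CPL inside the CS load path (the masks match the x86/QEMU definitions, but the structure is illustrative):

```c
#include <stdint.h>

#define HF_CPL_MASK 0x3u
#define VM_MASK     0x00020000u      /* EFLAGS.VM, bit 17 */

typedef struct {
    uint32_t hflags;
    uint32_t eflags;
} ToyEnv;

/* Whenever CS is (re)loaded, recompute CPL instead of relying on callers
 * to also invoke a separate cpu_x86_set_cpl(). */
static void toy_load_cs(ToyEnv *env, uint16_t selector)
{
    uint32_t cpl = (env->eflags & VM_MASK)
                 ? 3                 /* vm86 always runs at CPL 3 */
                 : (selector & 3);   /* otherwise CPL = CS.RPL */
    env->hflags = (env->hflags & ~HF_CPL_MASK) | cpl;
}
```

Centralizing the update is what makes the SMM path correct for free: any code that loads CS gets the right CPL without remembering a second call.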
-
By Kevin O'Connor

The cpu_x86_load_seg_cache() function inspects eflags, so make sure all changes to eflags are done prior to loading the segment caches.

Signed-off-by: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 14 Mar 2014, 3 commits

By Andreas Färber

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
By Andreas Färber

Signed-off-by: Andreas Färber <afaerber@suse.de>
-
By Andreas Färber

Signed-off-by: Andreas Färber <afaerber@suse.de>
-