- 18 May 2011, 4 commits
-
-
Committed by Fenghua Yu
Add support for the newly documented SMEP (Supervisor Mode Execution Protection) CPU feature in CR4.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305683069-25394-3-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
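For illustration only, here is a minimal sketch of what turning the feature on amounts to: SMEP is enabled by setting bit 20 of CR4 with a read-modify-write. The helper name is hypothetical and this is ring-0-only code, not the actual patch.

  #define X86_CR4_SMEP_BIT (1UL << 20)    /* CR4.SMEP is bit 20 */

  /* Hypothetical helper: read CR4, set the SMEP bit, write it back.
   * Kernel (ring 0) context only; user space cannot access CR4. */
  static inline void enable_smep(void)
  {
      unsigned long cr4;

      asm volatile("mov %%cr4, %0" : "=r" (cr4));
      cr4 |= X86_CR4_SMEP_BIT;
      asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
  }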
-
Committed by Fenghua Yu
Add support for the newly documented SMEP (Supervisor Mode Execution Protection) CPU feature flag. SMEP prevents the CPU, while in kernel mode, from jumping to an executable page that has the user flag set in the PTE. This stops the kernel from executing user-space code accidentally or maliciously, so it for example prevents kernel exploits from jumping to specially prepared user-mode shell code.

[ hpa: added better description by Ingo Molnar ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305683069-25394-2-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
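As a rough illustration of how the flag is detected, SMEP availability is reported in CPUID leaf 7, subleaf 0, EBX bit 7. The user-space sketch below (not the kernel's detection code) simply checks that bit using __get_cpuid_count from <cpuid.h>, which recent GCC and Clang provide.

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax, ebx, ecx, edx;

      /* CPUID.(EAX=7, ECX=0):EBX[7] is the SMEP feature bit. */
      if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 7)))
          puts("SMEP supported");
      else
          puts("SMEP not supported");
      return 0;
  }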
-
Committed by Fenghua Yu
Intel processors are adding enhancements to REP MOVSB/STOSB, and using REP MOVSB/STOSB for optimal memcpy/memset and similar functions is recommended. Availability of the enhancement is indicated by CPUID.7.0.EBX[9] (Enhanced REP MOVSB/STOSB).

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-2-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
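To make the recommendation concrete, the sketch below copies a buffer with REP MOVSB, the instruction that the ERMS enhancement speeds up. It is illustrative only: real code would first test CPUID.7.0.EBX[9] and fall back to another memcpy implementation when the bit is absent.

  #include <stddef.h>

  /* Illustrative copy routine using REP MOVSB (x86, GCC inline asm). */
  void *rep_movsb_copy(void *dst, const void *src, size_t n)
  {
      void *ret = dst;

      asm volatile("rep movsb"
                   : "+D" (dst), "+S" (src), "+c" (n)
                   : : "memory");
      return ret;
  }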
-
Committed by Fenghua Yu
CPUID leaf 7, subleaf 0 returns the maximum subleaf in EAX, not the number of subleaves. Since so far only subleaf 0 is defined (and only the EBX bitfield), we do not need to qualify the test.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305660806-17519-1-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org> 2.6.36..39
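To make the EAX semantics concrete, the hedged user-space snippet below reads leaf 7, subleaf 0 and treats EAX as the highest valid subleaf index rather than a count: the EBX feature bits are valid even when EAX is zero. This is only an illustration, not the kernel's detection code.

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int max_subleaf, ebx, ecx, edx;

      if (!__get_cpuid_count(7, 0, &max_subleaf, &ebx, &ecx, &edx))
          return 1;   /* CPUID leaf 7 not supported at all */

      /* EAX is the highest subleaf index; 0 still means subleaf 0 is valid. */
      printf("leaf 7: max subleaf index = %u, subleaf 0 EBX = 0x%08x\n",
             max_subleaf, ebx);
      return 0;
  }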
-
- 06 May 2011, 2 commits
-
-
Committed by Peter Zijlstra
The Intel Nehalem offcore bits implemented in:

  e994d7d2: perf: Fix LLC-* events on Intel Nehalem/Westmere

... are wrong: they implemented _ACCESS as _HIT and counted OTHER_CORE_HIT* as MISS even though it's clearly documented as an L3 hit. Fix them, and the Westmere definitions as well.

Cc: Andi Kleen <ak@linux.intel.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1299119690-13991-3-git-send-email-ming.m.lin@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Frederic Weisbecker
We make use of ptrace_get_breakpoints() / ptrace_put_breakpoints() to protect ptrace_set_debugreg() even if CONFIG_HAVE_HW_BREAKPOINT is off. However, in that case these APIs are not implemented. To fix this, push the protection down inside the relevant ifdef. The best solution would be to move the code inside CONFIG_HAVE_HW_BREAKPOINT into a standalone function to clean up the ifdefery there and call the breakpoint ref API inside, but as that is more invasive it should rather be done in an -rc1.

Fixes this build error:

  arch/powerpc/kernel/ptrace.c:1594: error: implicit declaration of function 'ptrace_get_breakpoints' make[2]: ***

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: LPPC <linuxppc-dev@lists.ozlabs.org>
Cc: Prasad <prasad@linux.vnet.ibm.com>
Cc: v2.6.33.. <stable@kernel.org>
Link: http://lkml.kernel.org/r/1304639598-4707-1-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
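The shape of the fix is roughly the following sketch (function body invented, surrounding powerpc ptrace code elided): the ref-taking API only exists when CONFIG_HAVE_HW_BREAKPOINT is set, so the calls move inside that ifdef instead of wrapping the whole function.

  /* Sketch only, not the actual powerpc diff. */
  long ptrace_set_debugreg_sketch(struct task_struct *child,
                                  unsigned long addr, unsigned long data)
  {
  #ifdef CONFIG_HAVE_HW_BREAKPOINT
      if (ptrace_get_breakpoints(child) < 0)
          return -ESRCH;

      /* ... install or update the hardware breakpoint for 'child' ... */

      ptrace_put_breakpoints(child);
  #else
      /* ... legacy debug-register handling, no breakpoint refs needed ... */
  #endif
      return 0;
  }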
-
- 03 May 2011, 3 commits
-
-
Committed by H. Peter Anvin
The use of base for %ebx in this file is arbitrary, *except* that we also use it to compute the real-mode segment. Therefore, make it so that r_base really is the true address to which %ebx points. This resolves kernel bugzilla 33302.

Reported-and-tested-by: Alexey Zaytsev <alexey.zaytsev@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Link: http://lkml.kernel.org/n/tip-08os5wi3yq1no0y4i5m4z7he@git.kernel.org
-
Committed by Stefano Stabellini
mask_rw_pte currently checks whether a pfn is a pagetable page by testing if it falls in the range pgt_buf_start - pgt_buf_end, but that is incorrect because pgt_buf_end is a moving target: pgt_buf_top is the real boundary.

Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
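The corrected test, sketched below with illustrative names (not the exact Xen code), compares against the fixed upper boundary rather than the moving allocation pointer.

  /* Hypothetical sketch: treat a pfn as a (future) pagetable page if it lies
   * anywhere in the allocator's full window, not just the part used so far. */
  static int pfn_is_pgt_buf_page(unsigned long pfn,
                                 unsigned long pgt_buf_start,
                                 unsigned long pgt_buf_top)
  {
      /* pgt_buf_end keeps moving as pages are handed out, so comparing
       * against it misses pages that will become pagetables later;
       * pgt_buf_top is the fixed upper boundary of the whole buffer. */
      return pfn >= pgt_buf_start && pfn < pgt_buf_top;
  }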
-
Committed by Konrad Rzeszutek Wilk
As a consequence of the commit:

  commit 4b239f45
  Author: Yinghai Lu <yinghai@kernel.org>
  Date:   Fri Dec 17 16:58:28 2010 -0800

      x86-64, mm: Put early page table high

the Linux kernel crashes under Xen:

  mapping kernel into physical memory
  Xen: setup ISA identity maps
  about to get started...
  (XEN) mm.c:2466:d0 Bad type (saw 7400000000000001 != exp 1000000000000000) for mfn b1d89 (pfn bacf7)
  (XEN) mm.c:3027:d0 Error while pinning mfn b1d89
  (XEN) traps.c:481:d0 Unhandled invalid opcode fault/trap [#6] on VCPU 0 [ec=0000]
  (XEN) domain_crash_sync called from entry.S
  (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
  ...

The reason is that at some point init_memory_mapping is going to reach the pagetable pages area and map those pages too (mapping them as normal memory that falls in the range of addresses passed to init_memory_mapping as argument). Some of those pages are already pagetable pages (they are in the range pgt_buf_start - pgt_buf_end), therefore they are going to be mapped RO and everything is fine. Some of these pages are not pagetable pages yet (they fall in the range pgt_buf_end - pgt_buf_top; for example the page at pgt_buf_end), so they are going to be mapped RW. When these pages later become pagetable pages and are hooked into the pagetable, Xen will find that the guest already has a RW mapping of them somewhere and fail the operation. The reason Xen requires pagetables to be RO is that the hypervisor needs to verify that the pagetables are valid before using them. The validation operations are called "pinning" (more details in arch/x86/xen/mmu.c).

In order to fix the issue we mark all the pages in the entire range pgt_buf_start - pgt_buf_top as RO. However, when the pagetable allocation is completed, only the range pgt_buf_start - pgt_buf_end is reserved by init_memory_mapping, hence the kernel is going to crash as soon as one of the pages in the range pgt_buf_end - pgt_buf_top is reused (because those ranges are RO).

For this reason, this function is introduced and is called _after_ init_memory_mapping has completed (in a perfect world we would call this function from init_memory_mapping, but let's ignore that). Because we are called _after_ init_memory_mapping, the pgt_buf_[start,end,top] have all changed to new values (because another init_memory_mapping is called). Hence, the first time we enter this function, we save away the pgt_buf_start value and update pgt_buf_[end,top]. When we detect that the "old" pgt_buf_start through pgt_buf_end PFNs have been reserved (so memblock_x86_reserve_range has been called), we immediately set out to make the "old" pgt_buf_end through pgt_buf_top range RW again. We then update those "old" pgt_buf_[end|top] with the new ones so that we can redo this on the next pagetable.

Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Reviewed-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
[v1: Updated with Jeremy's comments]
[v2: Added the crash output]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
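A very rough sketch of the bookkeeping described above follows; the helper and function names are invented for illustration, and the real code lives in the Xen mmu code and uses memblock to detect the reservation.

  /* Hypothetical helpers, declared only so the sketch is self-contained. */
  extern int  pfn_range_is_reserved(unsigned long start_pfn, unsigned long end_pfn);
  extern void make_pfn_range_rw(unsigned long start_pfn, unsigned long end_pfn);

  static unsigned long old_start, old_end, old_top;

  static void pagetable_window_update(unsigned long new_start,
                                      unsigned long new_end,
                                      unsigned long new_top)
  {
      /* Once the old start..end window has been reserved by
       * init_memory_mapping, the tail end..top never became pagetable
       * pages and can safely be made RW again. */
      if (old_start && pfn_range_is_reserved(old_start, old_end))
          make_pfn_range_rw(old_end, old_top);

      /* Remember the new window so this can be redone for the next
       * pagetable allocation. */
      old_start = new_start;
      old_end   = new_end;
      old_top   = new_top;
  }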
-
- 02 May 2011, 2 commits
-
-
Committed by Yinghai Lu
numa_cleanup_meminfo() trims each memblk between the low (0) and high (max_pfn) limits and discards empty ones. However, the emptiness detection incorrectly used an equality test. If the start of a memblk is higher than max_pfn, it is empty but fails the equality test and doesn't get discarded.

The condition triggers when max_pfn is lower than the start of a NUMA node and results in memory misconfiguration - leading to WARN_ON()s and other funnies. The bug was discovered in the devel branch, where 32-bit also uses this code path for NUMA init. If a node is above the addressing limit, max_pfn ends up lower than the node, triggering this problem.

The failure hasn't been observed on x86-64 but is still possible with broken hardware e820/NUMA info. As the fix is very low risk, it would be better to apply it even for 64-bit.

Fix it by using >= instead of ==.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
[ Extracted the actual fix from the original patch and rewrote the patch description. ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/20110501171204.GO29280@htj.dyndns.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
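The essence of the fix, as a small self-contained sketch (structure and function names are illustrative, not the actual numa_cleanup_meminfo() code):

  /* A memblk clamped to [low, high) is empty when nothing of it remains.
   * Using '>=' also catches blocks that start entirely above 'high',
   * which a plain '==' test would miss. */
  struct memblk_sketch { unsigned long start, end; };

  static int memblk_is_empty(const struct memblk_sketch *mb,
                             unsigned long low, unsigned long high)
  {
      unsigned long start = mb->start > low  ? mb->start : low;
      unsigned long end   = mb->end   < high ? mb->end   : high;

      return start >= end;    /* was: start == end */
  }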
-
Committed by Boris Ostrovsky
Older AMD K8 processors (revisions A-E) are affected by erratum 400 (APIC timer interrupts don't occur in C states greater than C1). This means, for example, that the X86_FEATURE_ARAT flag should not be set for these parts.

This addresses a regression introduced by commit b87cf80a ("x86, AMD: Set ARAT feature on AMD processors"), where the system may become unresponsive until an external interrupt (such as keyboard input) occurs. This results, for example, in time not being reported correctly, lack of progress on the system, and other lockups.

Reported-by: Joerg-Volker Peetz <jvpeetz@web.de>
Tested-by: Joerg-Volker Peetz <jvpeetz@web.de>
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Boris Ostrovsky <Boris.Ostrovsky@amd.com>
Cc: stable@kernel.org
Link: http://lkml.kernel.org/r/1304113663-6586-1-git-send-email-ostr@amd64.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
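The implied guard looks roughly like this sketch (the erratum-test helper and capability bit are hypothetical stand-ins, not the actual amd.c change): ARAT is only advertised when the part is not hit by erratum 400, because ARAT promises exactly what the erratum breaks, namely an APIC timer that keeps ticking in deep C-states.

  #define ARAT_CAP_BIT (1UL << 0)     /* stand-in for X86_FEATURE_ARAT */

  struct cpuinfo_sketch { unsigned int family, model, stepping; };

  /* Hypothetical helper: returns non-zero for affected K8 revisions A-E. */
  extern int cpu_has_amd_erratum_400(const struct cpuinfo_sketch *c);

  static void maybe_set_arat(const struct cpuinfo_sketch *c, unsigned long *caps)
  {
      /* Parts hit by erratum 400 must not advertise ARAT: their APIC
       * timer stops in C-states deeper than C1. */
      if (!cpu_has_amd_erratum_400(c))
          *caps |= ARAT_CAP_BIT;
  }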
-
- 29 April 2011, 29 commits
-
-
Committed by Dan Rosenberg
When CONFIG_OABI_COMPAT is set, the wrapper for semtimedop does not bound the nsops argument. A sufficiently large value will cause an integer overflow in the allocation size, followed by copying too much data into the allocated buffer. Fix this by restricting nsops to SEMOPM. Untested.

Cc: stable@kernel.org
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
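The fix boils down to a bound check before the size calculation. Roughly, as a sketch (the limit constant and function are stand-ins, not the exact sys_oabi-compat.c code):

  #include <errno.h>
  #include <stddef.h>

  #define SEMOPM_LIMIT 500   /* stand-in for the kernel's SEMOPM limit */

  /* Sketch: reject out-of-range nsops before the size calculation, so
   * nsops * sizeof(element) can no longer overflow. */
  static long semtimedop_wrapper_sketch(unsigned int nsops, size_t elem_size)
  {
      size_t alloc_size;

      if (nsops < 1 || nsops > SEMOPM_LIMIT)
          return -EINVAL;

      alloc_size = nsops * elem_size;   /* bounded, cannot overflow */
      /* ... allocate alloc_size bytes, copy in nsops entries, do the call ... */
      (void)alloc_size;
      return 0;
  }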
-
Committed by Heiko Carstens
pfault, dasd diag and virtio all use the same external interrupt number. The respective interrupt handlers decide by the subcode whether they are meant to handle the interrupt. Counting is currently done before looking at the subcode, which means each handler counts an interrupt even if it is not handling it. Fix this by moving the kstat code after the code which looks at the subcode.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
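Schematically the change looks like the sketch below; the subcode value, counter, and handler name are hypothetical, since the real handlers are s390-specific.

  #define OWNED_SUBCODE 0x0600u           /* hypothetical subcode for this handler */
  static unsigned long interrupt_count;   /* stand-in for the kstat counter */

  /* After the fix: check the subcode first, count only when this handler
   * actually owns the interrupt. */
  static void ext_int_handler_sketch(unsigned int subcode)
  {
      if ((subcode & 0xff00) != OWNED_SUBCODE)
          return;             /* someone else's interrupt: don't count it */

      interrupt_count++;      /* accounting happens only on the owned path */
      /* ... actual handling ... */
  }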
-
Committed by Jon Medhurst
- Remove coding standard violations reported by checkpatch.pl
- Delete the comment about handling of conditional branches, which is no longer true.
- Delete the comment at the end of the file which lists all ARM instructions. This duplicates data available in the ARM ARM and seems like an unnecessary maintenance burden to keep up to date and accurate.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Being able to probe NOP instructions is useful for hard-coding probeable locations and is used by the kprobes test code.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
These bit field manipulation instructions occur several thousand times in an ARMv7 kernel.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The MOVW and MOVT instructions account for approximately 7% of all instructions in an ARMv7 kernel, as GCC uses them instead of a literal pool.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The instruction decoding in space_cccc_000x needs to reject probing of instructions with undefined patterns, as these may become defined in the future and then be emulated faultily - as has already happened with the SMC instruction. This fix is achieved by testing for the instruction patterns we want to probe and making the default fall-through paths reject probes. This also allows us to remove some explicit tests for instructions that we wish to reject, as that is now the default action.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The tests to explicitly reject probing of CPS, RFE and SRS instructions are redundant, as the default case is now to reject undecoded patterns.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The PLD instruction wasn't being decoded correctly and the emulation code wasn't adjusting the PC correctly. As the PLD instruction is only a performance hint, we emulate it as a simple nop, and we can broaden the instruction decoding to take into account the newer PLI and PLDW instructions.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The emulation of SETEND was broken, as it changed the endianness for the running kprobes handling code. Rather than adding a new simulation routine to fix this, we simply reject probing of SETEND, as these instructions should be very rare in the kernel. Note: the function emulate_none is now unused, but it is left in the source code as future patches will use it.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Following the change to remove support for coprocessor instructions, we are left with three stub functions which can be consolidated.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The kernel doesn't currently support VFP or Neon code, and probing of code with CP15 operations is fraught with bad consequences. Therefore we don't need the ability to probe coprocessor instructions, and the code to support this can be removed.

The removed code also had at least two bugs:
- MRC into R15 should set the CPSR, not trash the PC
- LDC and STC which use PC as the base register needed the address offset by 8

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The USAD8 instruction wasn't being explicitly decoded, leading to the incorrect emulation routine being called. It can be correctly decoded in the same way as the signed multiply instructions, so we move the decoding there.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The signed multiply instructions were being decoded incorrectly.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
These sign extension instructions are encoded as extend-and-add instructions where the register to add is specified as r15. The decoding routines weren't checking for this and were using the incorrect emulation code, giving incorrect results.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The instruction space for media instructions contains some undefined patterns. We need to reject probing of these because they may become defined in the future and the kprobes code may then emulate them faultily.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The v6T2 RBIT instruction was only being emulated correctly by accident; this patch adds explicit, correct decoding for the instruction.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
These instructions are specified as UNPREDICTABLE.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The decoding of these instructions got the register-indexed and immediate-indexed forms the wrong way around, causing incorrect emulation. Instructions like "LDRD Rx, [Rx]" were corrupting Rx because the base register writeback was being performed unconditionally, overwriting the value just loaded from memory. The fix is to only write back the base register when that form of the instruction is used. Note: now that we reject probing of writeback with PC, the emulation code doesn't need the rn!=15 check.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Using PC as a base register with writeback is UNPREDICTABLE, as are non-word-sized loads or stores of PC. (We only really care about preventing loads to PC, but it keeps the code simpler if we also exclude stores.)

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The decoding of these instructions got the register-indexed and immediate-indexed forms the wrong way around, causing incorrect emulation.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The emulation code for the STREX and LDREX instructions is faulty; however, rather than attempting to fix this, we reject probes of these instructions. We do this because they can never succeed in gaining exclusive access, as the exception framework clears the exclusivity monitor when a probe's breakpoint is hit. (This is a general problem when probing any instruction executing between a LDREX and its corresponding STREX, and can lead to infinite retry loops.)

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The instruction space for 'Multiply and multiply-accumulate' instructions contains some undefined patterns. We need to reject probing of these because they may become defined in the future and the kprobes code may then emulate them faultily. This has already happened with the new MLS instruction, for which this patch also adds correct decoding, as well as tightening up other decoding tests. (Before this patch the wrong emulation routine was being called for MLS, though it still produced correct results.)

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
The MRS instruction should set the mode and interrupt bits in the read value, so it is simpler to use a new simulation routine (simulate_mrs) rather than some modified emulation. prep_emulate_rd12 is now unused and has been removed.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
We need to reject probing of instructions which read the SPSR, because we can't handle this: the value in the SPSR is lost when the exception handler for the probe breakpoint first runs. This patch also fixes the bitmask for MRS instruction decoding to include checking bits 5-7.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Emulation of instructions like "ADD rd, rn, #<const>" would result in a corrupted value for rd.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Jon Medhurst
Probing these instructions was corrupting R0 because the emulation code didn't account for the fact that they don't write a result to a register.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-