- 15 Jun 2009, 8 commits
-
-
Committed by Vegard Nossum
This patch hooks into the DMA API to suppress the false positives that would otherwise be reported when memory that is also used directly by devices is accessed.

[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Vegard Nossum
With kmemcheck enabled, the slab allocator needs to do two things:

1. Tell kmemcheck to allocate the shadow memory which stores the status of each byte in the allocation proper, e.g. whether it is initialized or uninitialized.
2. Tell kmemcheck which parts of memory should be marked uninitialized. There are actually a few more states, such as "not yet allocated" and "recently freed".

If a slab cache is set up with the SLAB_NOTRACK flag, it will never return memory that can take page faults because of kmemcheck. If a slab cache is NOT set up with SLAB_NOTRACK, callers can still request memory with the __GFP_NOTRACK flag. This does not prevent the page faults from occurring, but it marks the object in question as initialized so that no warnings will ever be produced for this object.

In addition to (and in contrast to) __GFP_NOTRACK, the __GFP_NOTRACK_FALSE_POSITIVE flag indicates that the allocation should not be tracked _because_ it would produce a false positive. Their values are currently identical, but they need not stay so in the future (for example, false positives could later be enabled/disabled with a config option).

Parts of this patch were contributed by Pekka Enberg but merged for atomicity.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
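As an illustration (not part of the patch), here is a minimal sketch of how a caller would use the two flags described above; the cache name, object size and init function are made up:

```c
#include <linux/slab.h>

static struct kmem_cache *example_cache;

static int __init example_init(void)
{
	void *obj;

	/* A cache whose objects kmemcheck never tracks: no shadow memory
	 * is allocated and no kmemcheck page faults are taken for it. */
	example_cache = kmem_cache_create("example-notrack", 128, 0,
					  SLAB_NOTRACK, NULL);
	if (!example_cache)
		return -ENOMEM;

	/* From a normal (tracked) cache, a caller can still opt out per
	 * allocation: faults still occur, but the object is marked
	 * initialized up front, so no warning is ever produced for it. */
	obj = kmalloc(64, GFP_KERNEL | __GFP_NOTRACK);
	kfree(obj);

	return 0;
}
```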
-
Committed by Vegard Nossum
The hooks that we modify are:

- Page fault handler (to handle kmemcheck faults)
- Debug exception handler (to hide pages after single-stepping the instruction that caused the page fault)

Also redefine memset() to use the optimized version if kmemcheck is enabled. (Thanks to Pekka Enberg for minimizing the impact on the page fault handler.)

As kmemcheck doesn't handle MMX/SSE instructions (yet), we also disable the optimized xor code and rely instead on the generic C implementation in order to avoid false-positive warnings.

Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
[whitespace fixlet]
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
-
Committed by Pekka Enberg
Let's use kmemcheck_pte_lookup() in kmemcheck_fault() instead of open-coding it there.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Pekka Enberg
This patch moves the CONFIG_X86_64 ifdef out of kmemcheck_opcode_decode() by introducing a version of the function that always returns false on CONFIG_X86_32.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Pekka Enberg
Multiple ifdef'd definitions of the same global variable are ugly and error-prone. Fix that up.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Pekka Enberg
The "Bugs, beware!" printout during kmemcheck initialization is cute, but it confuses users into thinking something bad happened, so change the text to the more boring "Initialized" message.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
Committed by Pekka Enberg
This patch reorders code in error.c so that we can get rid of the forward declarations.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
- 13 Jun 2009, 3 commits
-
-
Committed by Randy Dunlap
kmemcheck/shadow.c needs to include <linux/module.h> to prevent the following warnings:

linux-next-20080724/arch/x86/mm/kmemcheck/shadow.c:64: warning: data definition has no type or storage class
linux-next-20080724/arch/x86/mm/kmemcheck/shadow.c:64: warning: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
linux-next-20080724/arch/x86/mm/kmemcheck/shadow.c:64: warning: parameter names (without types) in function declaration

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: vegardno@ifi.uio.no
Cc: penberg@cs.helsinki.fi
Cc: akpm <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
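For context, the warnings arise because EXPORT_SYMBOL_GPL() is a macro declared in <linux/module.h>; without the include, gcc treats the line as a bare declaration of unknown type. A minimal illustration (the exported helper shown here is only an example, not necessarily the symbol at shadow.c:64):

```c
#include <linux/module.h>	/* declares EXPORT_SYMBOL_GPL() */

/* Without the include above, the EXPORT_SYMBOL_GPL() line below triggers
 * exactly the "data definition has no type or storage class" warnings. */
void kmemcheck_mark_initialized(void *address, unsigned int n)
{
	/* ... mark n bytes of shadow memory at address as initialized ... */
}
EXPORT_SYMBOL_GPL(kmemcheck_mark_initialized);
```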
-
Committed by Vegard Nossum
General description: kmemcheck is a patch to the Linux kernel that detects use of uninitialized memory. It does this by trapping every read and write to memory that was allocated dynamically (e.g. using kmalloc()). If a memory address is read that has not previously been written to, a message is printed to the kernel log.

Thanks to Andi Kleen for the set_memory_4k() solution. Andrew Morton suggested documenting the shadow member of struct page.

Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
[export kmemcheck_mark_initialized]
[build fix for setup_max_cpus]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
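To illustrate the class of bug kmemcheck reports (hypothetical driver code, not taken from the patch):

```c
#include <linux/printk.h>
#include <linux/slab.h>

struct foo {
	int ready;
	int value;
};

static int example(void)
{
	/* kmalloc() does not zero the allocation */
	struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return -ENOMEM;
	f->value = 42;

	/* f->ready was never written; with kmemcheck enabled this read
	 * traps and an "uninitialized memory" warning hits the log. */
	if (f->ready)
		pr_info("value %d\n", f->value);

	kfree(f);
	return 0;
}
```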
-
Committed by Vegard Nossum
This will help kmemcheck (and possibly other debugging tools) since we can now simply pass regs->bp to the stack tracer instead of specifying the number of stack frames to skip, which is unreliable if gcc decides to inline functions, etc.

Note that this makes the API incomplete for other architectures, but I expect that those can be updated lazily, e.g. when they need it.

Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
- 12 Jun 2009, 3 commits
-
-
Committed by Yinghai Lu
So we make sure MAXSMP gets a cleared cpumask.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Yinghai Lu
Don't hardcode node zero for early boot IRQ setup memory allocations.

[ penberg@cs.helsinki.fi: minor cleanups ]
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Committed by Yinghai Lu
Now that we set up the slab allocator earlier, we can get rid of some alloc_bootmem_cpumask_var() calls in boot code.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
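A hedged sketch of the kind of substitution this enables; the call site and the choice of GFP_NOWAIT are illustrative assumptions, not taken from the patch:

```c
#include <linux/cpumask.h>
#include <linux/kernel.h>

static cpumask_var_t setup_mask;

static void __init example_setup(void)
{
	/* Previously: alloc_bootmem_cpumask_var(&setup_mask);
	 * With slab up this early, a regular allocation works; sleeping
	 * is still not allowed, hence a non-blocking GFP flag. */
	if (!alloc_cpumask_var(&setup_mask, GFP_NOWAIT))
		panic("could not allocate setup_mask");
	cpumask_clear(setup_mask);
}
```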
-
- 11 Jun 2009, 9 commits
-
-
Committed by Peter Zijlstra
The top (fastest) and last level (biggest) caches are the most interesting ones, performance-wise.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
[ Fixed the Nehalem LL table to LLC Reference/Miss events ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Pure renames only, to PERF_COUNT_HW_* and PERF_COUNT_SW_*.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Arnd Bergmann
The legacy TCSETA{,W,F} ioctls failed to set the termio->c_line field on x86. This adds the missing get_user. The same ioctls also fail to report faulting user pointers, which we keep ignoring.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
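A sketch of the missing copy, assuming the legacy struct termio layout; the helper name is invented here, and the real fix lives in the generic termios conversion code:

```c
#include <linux/termios.h>
#include <linux/uaccess.h>

/* Convert a legacy user-space struct termio into a kernel ktermios.
 * The bug described above: the flag words and c_cc were copied, but
 * c_line was not, so the line discipline requested via TCSETA{,W,F}
 * was silently dropped on x86. */
static int termio_to_ktermios(struct ktermios *k,
			      const struct termio __user *u)
{
	unsigned short tmp;

	if (get_user(tmp, &u->c_iflag))
		return -EFAULT;
	k->c_iflag = (k->c_iflag & 0xffff0000) | tmp;

	/* the previously missing field */
	if (get_user(k->c_line, &u->c_line))
		return -EFAULT;

	return copy_from_user(k->c_cc, u->c_cc, NCC) ? -EFAULT : 0;
}
```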
-
Committed by Thomas Gleixner
Commit c9690998 ("x86: memtest: remove 64-bit division") introduced the following compile warnings:

arch/x86/mm/memtest.c: In function 'memtest':
arch/x86/mm/memtest.c:56: warning: comparison of distinct pointer types lacks a cast
arch/x86/mm/memtest.c:58: warning: comparison of distinct pointer types lacks a cast

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Peter Zijlstra
We currently log hw.sample_period for PERF_SAMPLE_PERIOD, but this is incorrect: when we adjust the period, it only takes effect on the next cycle, yet we report it for the current cycle. So when we adjust the period on every cycle, we are always wrong. Solve this by keeping track of the last_period.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
For easy extension of the sample data, put it in a structure.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
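A sketch of the idea; the exact field set in the patch may differ:

```c
#include <linux/types.h>

struct pt_regs;

/* Bundle everything a sample carries into one struct, so adding a new
 * sample field touches this definition and the code that fills it,
 * rather than every function signature along the overflow path. */
struct perf_sample_data {
	struct pt_regs	*regs;
	u64		addr;
	u64		period;
};
```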
-
Committed by Ingo Molnar
This reverts commit 7e0bfad2. A late objection to the ABI has arrived: http://lkml.org/lkml/2009/6/10/253

Keep the ABI disabled out of caution, to avoid creating premature user-space expectations. The hw-branch-tracing variant still uses and tests the BTS code.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Markus Metzger <markus.t.metzger@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Harald Welte
The e_powersaver driver for VIA's C7 CPUs needs to be marked as DANGEROUS, as it configures the CPU to power states that are out of specification. According to Centaur, all systems with C7 and Nano CPUs support the ACPI p-state method, so the acpi-cpufreq driver should be used instead.

Signed-off-by: Harald Welte <HaraldWelte@viatech.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Harald Welte
The VIA/Centaur C7, C7-M and Nano CPUs all support ACPI-based CPU p-states using an MSR interface. The Linux driver just never made use of it, since in addition to checking the EST flag it also checked whether the vendor is Intel.

Signed-off-by: Harald Welte <HaraldWelte@viatech.com>
[ Removed the vendor checks entirely - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 Jun 2009, 17 commits
-
-
Committed by Peter Zijlstra
Also employ the overflow handler to adjust the frequency; this results in a stable frequency in about 40-50 samples, instead of that many ticks. It also means we can start sampling at a sample period of 1 without running head-first into the throttle. This relies on sched_clock() to accurately measure the time difference between the overflow NMIs.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yong Wang
Fix the model numbers of Intel Core2 processors according to the documentation "Intel Processor Identification with the CPUID Instruction": http://www.intel.com/support/processors/sb/cs-009861.htm

Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Also-Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <20090610090612.GA26580@ywang-moblin2.bj.intel.com>
[ Added two more model numbers suggested by Arnd Bergmann ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Borislav Petkov
Provide for concurrent MSR writes on all the CPUs in the cpumask. Also, add a temporary workaround for smp_call_function_many, which skips the CPU we're executing on.

Bart: zero out the rv struct, which is allocated on the stack.

CC: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Committed by Borislav Petkov
Add a struct representing a 64-bit MSR pair consisting of a low and a high register part, and convert msr_info to use it. Also, rename msr-on-cpu.c to msr.c.

Side note: put the cpumask.h include in __KERNEL__ space, thus fixing an allmodconfig build failure in the headers_check target.

CC: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
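A sketch of such a 64-bit MSR pair type; the field names are illustrative rather than quoted from the patch:

```c
#include <linux/types.h>

struct msr {
	union {
		struct {
			u32 l;	/* low 32 bits, the EAX half of rdmsr */
			u32 h;	/* high 32 bits, the EDX half of rdmsr */
		};
		u64 q;		/* the same register viewed as one 64-bit value */
	};
};
```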
-
Committed by Andi Kleen
VT-x needs an explicit MC vector intercept to handle machine checks in the hypervisor. It also has a special option to catch machine checks that happen during VT entry. Do these interceptions and forward them to the Linux machine check handler. Make it always look like user space is interrupted, because the machine check handler treats kernel and user space differently.

Thanks to Jiang Yunhong for help and testing.

Cc: stable@kernel.org
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Nitin A Kamble
That way the interpretation of rmode.active becomes clearer with the unrestricted guest code.

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
This saves us one read of VM_EXIT_INTR_INFO.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
INTn will be re-executed after migration. If we wanted to migrate a pending software interrupt, we would need to migrate the interrupt type and instruction length too, but we do not have all the required info on SVM, so an SVM->VMX migration would need to re-execute INTn anyway. To keep it simple, never migrate a pending soft interrupt.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
If an NMI is received while another NMI is being handled, it should be injected immediately after the IRET from the previous NMI handler, but SVM intercepts IRET before instruction execution, so we can't inject the pending NMI at that point, and there is no way to request an exit when the NMI window opens. This patch fixes the SVM code to open the NMI window after IRET by single-stepping over the IRET instruction.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Currently they are not requested if there is a pending exception.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Re-inject the event instead. This is what Intel suggests. Also use the correct instruction length when re-injecting a soft fault/interrupt.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Only one interrupt vector can be injected from a userspace irqchip at any given time, so there is no need to store it in a bitmap. Put it into the interrupt queue directly.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
The exception will immediately close the interrupt window.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
This is already done for exceptions and interrupts.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Robert P. J. Day
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Cc: Avi Kivity <avi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
-