- 31 Jul 2016, 1 commit
-
-
Committed by Jiri Olsa

This fixes the same issue Steven already fixed for x86 in the following commit:

  237d28db ("ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing")

It fixes the crash that happens when function graph tracing and jprobes are used simultaneously. Please refer to the above commit for details.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
-
- 19 Jan 2016, 2 commits
-
-
Committed by Heiko Carstens

Yet another leftover from the 31 bit era. The usual operation "y = x & PSW_ADDR_INSN" with the PSW_ADDR_INSN mask is a nop for CONFIG_64BIT. Therefore remove all usages and hope the code is a bit less confusing.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
-
Committed by Heiko Carstens

This is a leftover from the 31 bit era. For CONFIG_64BIT the usual operation "y = x | PSW_ADDR_AMODE" is a nop. Therefore remove all usages of PSW_ADDR_AMODE and make the code a bit less confusing.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
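
Why both of these operations are nops on 64 bit follows from the mask values; a minimal sketch, assuming the pre-removal asm/ptrace.h definitions (the exact constants here are an assumption, kept only to illustrate the point):

```c
/*
 * Sketch only -- mask values assumed from the old asm/ptrace.h.
 * On CONFIG_64BIT the INSN mask is all ones and the AMODE bit is zero,
 * so both masking and or-ing leave the address unchanged.
 */
#ifdef CONFIG_64BIT
#define PSW_ADDR_INSN	0xffffffffffffffffUL	/* x & PSW_ADDR_INSN == x */
#define PSW_ADDR_AMODE	0x0000000000000000UL	/* x | PSW_ADDR_AMODE == x */
#else
#define PSW_ADDR_INSN	0x7fffffffUL		/* strip the 31-bit amode bit */
#define PSW_ADDR_AMODE	0x80000000UL		/* set the 31-bit amode bit */
#endif
```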
-
- 25 Mar 2015, 1 commit
-
-
Committed by Heiko Carstens

Remove the s390 architecture implementation of probe_kernel_write() and instead use a new function s390_kernel_write() to modify kernel text and data everywhere.

The s390 implementation of probe_kernel_write() was potentially broken, since it modified memory in a read-modify-write fashion: it read four bytes, modified the requested bytes within those four bytes, and wrote the result back. If two cpus modified the same four byte area at different locations within that area, this could lead to corruption.

Right now the only places which called probe_kernel_write() ran within stop_machine_run, so the scenario can't happen right now; however, that might change at any time. To fix this, rename probe_kernel_write() to s390_kernel_write(), which can have special semantics, like only being callable while running within stop_machine().

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
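
The corruption described above is the classic lost-update hazard; a hypothetical user-space sketch (not the kernel code) of why byte writes funneled through a shared four-byte word need serialization:

```c
#include <stdint.h>
#include <string.h>

static uint32_t kernel_word;	/* stands in for 4 bytes of kernel text */

/* Read-modify-write of a single byte inside the aligned word. */
static void rmw_write_byte(int offset, uint8_t val)
{
	uint32_t tmp = kernel_word;			/* read 4 bytes      */
	memcpy((uint8_t *)&tmp + offset, &val, 1);	/* modify 1 byte     */
	kernel_word = tmp;				/* write 4 bytes back */
}

/*
 * CPU A: rmw_write_byte(0, 0xAA);
 * CPU B: rmw_write_byte(3, 0xBB);
 * If both read the old word before either writes it back, one of the
 * byte updates is lost -- hence the stop_machine() requirement.
 */
```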
-
- 29 Jan 2015, 1 commit
-
-
Committed by Heiko Carstens

Make use of gcc's hotpatch support to generate better code for ftrace function tracing. The generated code now contains only a six byte nop in each function prologue instead of a 24 byte code block which would be runtime patched to support function tracing.

With the new code generation the runtime overhead for supporting function tracing is close to zero, while the original code did show a significant performance impact.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
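
For reference, gcc's s390 hotpatch support can be requested per function or globally; a sketch, assuming the attribute spelling from gcc's s390 backend and halfword counts matching the six byte nop (both are assumptions, not taken from the commit):

```c
/* Sketch: reserve 0 halfwords before and 3 halfwords (6 bytes) after the
 * function entry point; the command-line equivalent would be something
 * like -mhotpatch=0,3 (assumed values). */
__attribute__((hotpatch(0, 3)))
void some_traced_function(void)
{
	/* The prologue now starts with a 6-byte nop that ftrace can
	 * atomically replace with a branch to its trampoline. */
}
```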
-
- 01 Dec 2014, 1 commit
-
-
Committed by Heiko Carstens

When we generated the instruction for out of line execution, the length of the instruction to be copied was read from an uninitialized memory location. Therefore we ended up with a random (2, 4 or 6) number of bytes being copied instead of taking the real instruction length into account. This works surprisingly well most of the time, but still not always.

Reported-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
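
For context, the correct length is fully determined by the first opcode byte; a minimal sketch of the standard s390 instruction-length coding (not the kernel's exact helper):

```c
/* Sketch: on s390 the instruction length is encoded in the two most
 * significant bits of the first opcode byte -- this is what the copy
 * length must be derived from, not uninitialized memory. */
static int insn_length(unsigned char code)
{
	switch (code >> 6) {
	case 0:
		return 2;	/* 00xxxxxx: 2-byte instruction */
	case 1:
	case 2:
		return 4;	/* 01xxxxxx, 10xxxxxx: 4-byte instruction */
	default:
		return 6;	/* 11xxxxxx: 6-byte instruction */
	}
}
```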
-
- 27 Oct 2014, 2 commits
-
-
Committed by Heiko Carstens

Use NOKPROBE_SYMBOL() instead of the __kprobes annotation.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens

If the function tracer is enabled, allow setting kprobes on the first instruction of a function (which is the function trace caller):

If no kprobe is set, enabling and disabling function tracing of a function simply patches the first instruction. Either it is a nop (right now it's an unconditional branch, which skips the mcount block), or it's a branch to the ftrace_caller() function.

If a kprobe is being placed on a function tracer calling instruction, we encode whether we actually have a nop or a branch in the remaining bytes after the breakpoint instruction (illegal opcode). This is possible since the size of the instruction used for the nop and branch is six bytes, while the size of the breakpoint is only two bytes. Therefore the first two bytes contain the illegal opcode and the last four bytes contain either "0" for nop or "1" for branch. The kprobes code will then execute/simulate the correct instruction.

Instruction patching for kprobes and the function tracer is always done with stop_machine(). Therefore we don't have any races where an instruction is patched concurrently on a different cpu. Besides that, the program check handler which executes the function trace caller instruction won't be executed concurrently to any stop_machine() execution either.

This allows keeping the full fault based kprobes handling, which generates correct pt_regs contents automatically.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
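
A sketch of the byte layout described above (hypothetical struct, for illustration only; 0x0002 is the illegal opcode s390 kprobes uses as its breakpoint):

```c
/* Sketch: layout of a kprobed ftrace call site.  The original 6-byte
 * nop/branch is replaced by a 2-byte breakpoint plus a 4-byte marker. */
#define BREAKPOINT_INSTRUCTION	0x0002	/* illegal opcode, 2 bytes */

struct kprobed_ftrace_insn {
	unsigned short breakpoint;	/* bytes 0-1: 0x0002 */
	unsigned int ftrace_insn;	/* bytes 2-5: 0 = nop, 1 = branch */
};

/* On the breakpoint trap, kprobes inspects ftrace_insn and then
 * executes/simulates either the nop or the branch to ftrace_caller(). */
```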
-
- 25 Sep 2014, 1 commit
-
-
Committed by Jan Willeke

This patch moves common functions from kprobes.c to probes.c. Thus it's possible for uprobes to use them without enabling kprobes.

Signed-off-by: Jan Willeke <willeke@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 09 Sep 2014, 1 commit
-
-
Committed by Heiko Carstens

Even though it has a __used annotation, it is actually unused.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 Aug 2014, 1 commit
-
-
Committed by Christoph Lameter

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address of the instance of the percpu variable of the current processor based on an offset.

Other use cases are storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

  #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or a global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access, e.g. by using a global register that may be set to the per cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

     DEFINE_PER_CPU(int, y);
     int *x = &__get_cpu_var(y);

   Converts to:

     int *x = this_cpu_ptr(&y);

2. Same as #1, but this time an array structure is involved.

     DEFINE_PER_CPU(int, y[20]);
     int *x = __get_cpu_var(y);

   Converts to:

     int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu variable.

     DEFINE_PER_CPU(int, y);
     int x = __get_cpu_var(y);

   Converts to:

     int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

     DEFINE_PER_CPU(struct mystruct, y);
     struct mystruct x = __get_cpu_var(y);

   Converts to:

     memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable.

     DEFINE_PER_CPU(int, y);
     __get_cpu_var(y) = x;

   Converts to:

     this_cpu_write(y, x);

6. Increment/decrement etc. of a per cpu variable.

     DEFINE_PER_CPU(int, y);
     __get_cpu_var(y)++;

   Converts to:

     this_cpu_inc(y);

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux390@de.ibm.com
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 24 Oct 2013, 3 commits
-
-
Committed by Heiko Carstens

Since we have an in-kernel disassembler, we can make sure that there won't be any kprobes set on random data.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens

When checking whether the insn address is a kernel image or module address, it should be an if-else-if statement, not two independent if statements. This doesn't really fix a bug, but matches s390_free_insn_slot().

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 14 Oct 2013, 1 commit
-
-
Committed by Anoop Thomas Mathew

Signed-off-by: Anoop Thomas Mathew <atm@profoundis.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 30 Sep 2013, 1 commit
-
-
Committed by Heiko Carstens

"execute relative long" may have all sorts of side effects depending on the instruction it executes. Therefore prohibit setting a kprobe on exrl, just like we do for the regular execute instruction.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 12 Sep 2013, 1 commit
-
-
Committed by Heiko Carstens

With the general-instruction extension facility (z10) a couple of instructions with a pc-relative long displacement were introduced. The kprobes support for these instructions however was never implemented. As a result, if anybody ever put a probe on any of these instructions, the result would have been random behaviour after the instruction got executed within the insn slot.

So let's add the missing handling for these instructions. Since all of the new instructions have a 32 bit signed displacement, the easiest solution is to allocate an insn slot that is within the same 2GB area as the original instruction and patch the displacement field.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
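
A sketch of the displacement fixup under the stated 2GB constraint (hypothetical helper, not the kernel code; s390 relative displacements count halfwords, hence the factor of two):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: re-target the 32-bit halfword displacement of a pc-relative
 * instruction copied from 'orig' into 'slot'. */
static void fixup_rel_displacement(uint8_t *slot, const uint8_t *orig,
				   int disp_offset)
{
	int32_t disp;
	int64_t target, new_disp;

	memcpy(&disp, orig + disp_offset, sizeof(disp));
	target = (int64_t)(uintptr_t)orig + 2LL * disp;	/* absolute target */
	new_disp = (target - (int64_t)(uintptr_t)slot) / 2;
	disp = (int32_t)new_disp;	/* fits: slot is within +-2GB */
	memcpy(slot + disp_offset, &disp, sizeof(disp));
}
```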
-
- 22 Aug 2013, 1 commit
-
-
Committed by Heiko Carstens

The compare and branch instructions (not relative) all need special handling when kprobed:

- if a branch was taken, the instruction pointer should be left alone
- if a branch was not taken, the instruction pointer must be adjusted

The compare and branch instruction family was introduced with the general instruction extension facility (z10).

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
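
A sketch of that fixup rule (illustrative names; assumes the psw address is inspected after single stepping the copy in the insn slot):

```c
/* Sketch: fixup after single stepping a copied compare-and-branch insn. */
static void fixup_compare_and_branch(unsigned long *psw_addr,
				     unsigned long slot_addr, int insn_len,
				     unsigned long next_insn_addr)
{
	if (*psw_addr == slot_addr + insn_len) {
		/* Fell through: branch not taken, resume after the
		 * original instruction. */
		*psw_addr = next_insn_addr;
	}
	/* Otherwise the branch was taken and *psw_addr already holds
	 * the real branch target -- leave it alone. */
}
```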
-
- 28 Feb 2013, 1 commit
-
-
Committed by Sasha Levin

I'm not sure why, but the hlist for each entry iterators were conceived as

  list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

  hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:

- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small number of places were using the 'node' parameter; these were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

  @@
  iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
    hlist_for_each_entry_from, hlist_for_each_entry_rcu,
    hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
    for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
    inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
    sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
    hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
    nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
    nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
    for_each_host;
  type T;
  expression a,c,d,e;
  identifier b;
  statement S;
  @@
  -T b;
  <+... when != b
  (
  hlist_for_each_entry(a, - b, c, d) S
  |
  hlist_for_each_entry_continue(a, - b, c) S
  |
  hlist_for_each_entry_from(a, - b, c) S
  |
  hlist_for_each_entry_rcu(a, - b, c, d) S
  |
  hlist_for_each_entry_rcu_bh(a, - b, c, d) S
  |
  hlist_for_each_entry_continue_rcu_bh(a, - b, c) S
  |
  for_each_busy_worker(a, c, - b, d) S
  |
  ax25_uid_for_each(a, - b, c) S
  |
  ax25_for_each(a, - b, c) S
  |
  inet_bind_bucket_for_each(a, - b, c) S
  |
  sctp_for_each_hentry(a, - b, c) S
  |
  sk_for_each(a, - b, c) S
  |
  sk_for_each_rcu(a, - b, c) S
  |
  sk_for_each_from -(a, b) +(a) S
  + sk_for_each_from(a) S
  |
  sk_for_each_safe(a, - b, c, d) S
  |
  sk_for_each_bound(a, - b, c) S
  |
  hlist_for_each_entry_safe(a, - b, c, d, e) S
  |
  hlist_for_each_entry_continue_rcu(a, - b, c) S
  |
  nr_neigh_for_each(a, - b, c) S
  |
  nr_neigh_for_each_safe(a, - b, c, d) S
  |
  nr_node_for_each(a, - b, c) S
  |
  nr_node_for_each_safe(a, - b, c, d) S
  |
  - for_each_gfn_sp(a, c, d, b) S
  + for_each_gfn_sp(a, c, d) S
  |
  - for_each_gfn_indirect_valid_sp(a, c, d, b) S
  + for_each_gfn_indirect_valid_sp(a, c, d) S
  |
  for_each_host(a, - b, c) S
  |
  for_each_host_safe(a, - b, c, d) S
  |
  for_each_mesh_entry(a, - b, c, d) S
  )
  ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 Sep 2012, 1 commit
-
-
Committed by Heiko Carstens

This is the s390 port of commit 70627654 ("x86, extable: Switch to relative exception table entries"). It reduces the size of our exception tables by 50% on 64 bit builds.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
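
The underlying trick, sketched under the assumption that each entry stores two 32-bit self-relative offsets instead of two 64-bit absolute addresses (this mirrors the x86 commit referenced above):

```c
/* Sketch: a relative exception table entry is 8 bytes instead of 16. */
struct exception_table_entry {
	int insn;	/* offset of the faulting insn, relative to &insn  */
	int fixup;	/* offset of the fixup code, relative to &fixup    */
};

static inline unsigned long extable_insn(const struct exception_table_entry *x)
{
	return (unsigned long)&x->insn + x->insn;
}

static inline unsigned long extable_fixup(const struct exception_table_entry *x)
{
	return (unsigned long)&x->fixup + x->fixup;
}
```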
-
- 20 Jul 2012, 1 commit
-
-
Committed by Heiko Carstens

Remove the file name from the comment at the top of many files. In most cases the file name was wrong anyway, so it's rather pointless.

Also unify the IBM copyright statement. We did have a lot of slightly different statements and wanted to change them one after another whenever a file got touched. However, that never happened. Instead people started to take the old/"wrong" statements as a template for new files. So unify all of them in one go.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
- 30 Oct 2011, 1 commit
-
-
Committed by Martin Schwidefsky

Make functions and data static to avoid sparse warnings.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 05 Jan 2011, 10 commits
-
-
Committed by Martin Schwidefsky

Overhaul program event recording and the code dealing with the ptrace user space interface.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Correct some minor coding style issues.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Restructure the kprobe breakpoint handler function. Add comments to make it more comprehensible and add a sanity check for re-entering kprobes.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Registers %r14 and %r15 are already stored in jprobe_saved_regs; there is no need to store them a second time in jprobe_saved_r14 / jprobe_saved_r15.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

The s390 architecture can execute code in kmalloc/vmalloc memory. There is no need for the __ARCH_WANT_KPROBES_INSN_SLOT detour.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Replace set_current_kprobe/reset_current_kprobe/save_previous_kprobe/restore_previous_kprobe with a simpler scheme: push_kprobe/pop_kprobe. The mini kprobes stack can store up to two active kprobes.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
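
A minimal sketch of such a two-deep kprobe stack (field layout simplified for illustration; the real control block also keeps per-kprobe status):

```c
struct kprobe_ctlblk_sketch {
	struct kprobe *kprobe;		/* currently active kprobe */
	struct kprobe *prev_kprobe;	/* at most one nested kprobe */
};

static void push_kprobe(struct kprobe_ctlblk_sketch *kcb, struct kprobe *p)
{
	kcb->prev_kprobe = kcb->kprobe;
	kcb->kprobe = p;
}

static void pop_kprobe(struct kprobe_ctlblk_sketch *kcb)
{
	kcb->kprobe = kcb->prev_kprobe;
	kcb->prev_kprobe = NULL;
}
```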
-
Committed by Martin Schwidefsky

Determine the instruction fixup details in resume_execution; there is no need to do it beforehand. Remove fixup, ilen and reg from arch_specific_insn.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Move the definition of the helper structure ins_replace_args to the only place where it is used and drop the old member, as it is not needed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

The saved interrupt mask and the saved control registers are only relevant while single stepping is set up. A secondary kprobe while kprobe single stepping is active cannot occur. That makes it safe to remove the save and restore of kprobe_saved_imask / kprobe_saved_ctl from save_previous_kprobe and restore_previous_kprobe.

Move all single step related code to two functions, enable_singlestep and disable_singlestep.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
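
A sketch of the resulting split (type and field names are illustrative, not the kernel's; the real helpers also save and restore the PER control registers):

```c
struct psw_sketch {
	unsigned long mask;	/* interrupt mask bits live here */
	unsigned long addr;	/* next instruction address */
};

struct kprobe_ctlblk_sketch {
	unsigned long kprobe_saved_imask;	/* saved psw interrupt mask */
};

static void enable_singlestep(struct kprobe_ctlblk_sketch *kcb,
			      struct psw_sketch *psw, unsigned long ip)
{
	kcb->kprobe_saved_imask = psw->mask;
	/* ... program PER to trap after one instruction and disable
	 * interrupts in psw->mask ... */
	psw->addr = ip;		/* continue at the insn slot */
}

static void disable_singlestep(struct kprobe_ctlblk_sketch *kcb,
			       struct psw_sketch *psw, unsigned long ip)
{
	/* ... turn PER off again ... */
	psw->mask = kcb->kprobe_saved_imask;
	psw->addr = ip;		/* continue after the probed insn */
}
```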
-
Committed by Martin Schwidefsky

Remove the special case of a kprobe on a breakpoint while a relocated instruction is single stepped. The only instruction that may cause a fault while kprobe single stepping is active is the relocated instruction itself. There is no kprobe on the instruction slot retrieved with get_insn_slot().

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 10 Nov 2010, 2 commits
-
-
Committed by Martin Schwidefsky

Analogous to git commit 737480a0, fix the return address of subsequent kretprobes when multiple kretprobes are set on the same function.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Execute the kprobe exception and fault handler with interrupts disabled. Disabling the interrupts only while a single step is in progress is not good enough: a kprobe from interrupt context while another kprobe is being handled can confuse the internal housekeeping.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 29 Oct 2010, 1 commit
-
-
Committed by Martin Schwidefsky

Fix kprobes after git commit 1e54622e broke it. The kprobe_handler is now called with interrupts in the state at the time of the breakpoint. The single step of the replaced instruction is done with interrupts off, which makes it necessary to enable and disable the interrupts in the kprobes code.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 May 2010, 1 commit
-
-
Committed by Heiko Carstens

The probed instructions will be executed in a single stepped and irq disabled context. Therefore the results of stnsm, stosm and epsw would be wrong if probed. So let's just disallow probing of these instructions. If really needed, a fixup could be written for each of them, but I doubt it's worth it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
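
A sketch of such an opcode rejection check; the opcode values (stnsm = 0xac, stosm = 0xad, epsw = 0xb98d) come from the s390 Principles of Operation and should be treated as assumptions here:

```c
/* Sketch: refuse probes on instructions whose result depends on the
 * current PSW interrupt state. */
static int is_insn_prohibited(const unsigned short *insn)
{
	switch (*insn >> 8) {
	case 0xac:		/* stnsm */
	case 0xad:		/* stosm */
		return 1;
	}
	if (*insn == 0xb98d)	/* epsw */
		return 1;
	return 0;
}
```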
-
- 30 Mar 2010, 1 commit
-
-
Committed by Tejun Heo

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion:

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. The percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
-
- 22 Jun 2009, 1 commit
-
-
Committed by Heiko Carstens

get_kprobe_ctlblk() returns a per cpu kprobe control block which holds the state of the current cpu with respect to kprobes. When inserting/removing a kprobe, the state of the cpu which replaces the code is changed to KPROBE_SWAP_INST. This however is done while preemption is still enabled, so the state of the current cpu doesn't necessarily reflect the real state. To fix this, move the code that changes the state to non-preemptible context.

Reported-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
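
A sketch of the resulting shape, assuming the instruction patching already runs via stop_machine() so the callback executes on a fixed cpu (function body and caller line are illustrative):

```c
/* Sketch: flip the per-cpu kprobe state only inside the stop_machine()
 * callback, where preemption can no longer migrate us to another cpu. */
static int swap_instruction(void *data)
{
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
	unsigned long status = kcb->kprobe_status;

	kcb->kprobe_status = KPROBE_SWAP_INST;
	/* ... replace the probed instruction ... */
	kcb->kprobe_status = status;
	return 0;
}

/* Caller (sketch): stop_machine(swap_instruction, &args, NULL); */
```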
-
- 12 Jun 2009, 1 commit
-
-
Committed by Heiko Carstens

Use probe_kernel_write() to patch the kernel.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 Jan 2009, 1 commit
-
-
Committed by Masami Hiramatsu

Add kprobe_insn_mutex for protecting the kprobe_insn_pages hlist, and remove kprobe_mutex from architecture dependent code. This allows us to call arch_remove_kprobe() (and free_insn_slot) while holding kprobe_mutex.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-