- 03 November 2014, 1 commit
-
-
Authored by Christoph Lameter

This still has not been merged and now powerpc is the only arch that does not have this change. Sorry about missing linuxppc-dev before.

V2->V2 - Fix up to work against 3.18-rc1

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address of the instance of the percpu variable of the current processor based on an offset. Other use cases are storing and retrieving data from the current processor's percpu area.

__get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment. It is defined as:

    #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or a global register on other platforms) to avoid the address calculation. this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per-cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that take the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per-cpu access, e.g. by using a global register that may be set to the percpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

        DEFINE_PER_CPU(int, y);
        int *x = &__get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(&y);

2. Same as #1, but this time an array structure is involved.

        DEFINE_PER_CPU(int, y[20]);
        int *x = __get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        int x = __get_cpu_var(y);

    Converts to

        int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

        DEFINE_PER_CPU(struct mystruct, y);
        struct mystruct x = __get_cpu_var(y);

    Converts to

        memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y) = x;

    Converts to

        __this_cpu_write(y, x);

6. Increment/decrement etc. of a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y)++;

    Converts to

        __this_cpu_inc(y);

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
[mpe: Fix build errors caused by set/or_softirq_pending(), and rework assignment in __set_breakpoint() to use memcpy().]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
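A minimal kernel-style sketch consolidating the converted access patterns; the struct, field, and function names here are made up for illustration, and the caller is assumed to run with preemption disabled:

    #include <linux/percpu.h>

    struct cpu_stats {              /* hypothetical example type */
        unsigned long count;
        int last;
    };
    static DEFINE_PER_CPU(struct cpu_stats, stats);

    static void update_stats(int v)
    {
        /* pattern 1: take the address of this CPU's instance once */
        struct cpu_stats *s = this_cpu_ptr(&stats);

        s->last = v;

        /* pattern 6: offset-based op, no address calculation emitted */
        __this_cpu_inc(stats.count);
    }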
-
- 27 August 2014, 2 commits
-
-
Authored by Tejun Heo

This reverts commit 5828f666 due to a build failure after merging with pending powerpc changes.

Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Christoph Lameter

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address of the instance of the percpu variable of the current processor based on an offset. Other use cases are storing and retrieving data from the current processor's percpu area.

__get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment. It is defined as:

    #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or a global register on other platforms) to avoid the address calculation. this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per-cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that take the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per-cpu access, e.g. by using a global register that may be set to the percpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

        DEFINE_PER_CPU(int, y);
        int *x = &__get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(&y);

2. Same as #1, but this time an array structure is involved.

        DEFINE_PER_CPU(int, y[20]);
        int *x = __get_cpu_var(y);

    Converts to

        int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        int x = __get_cpu_var(y);

    Converts to

        int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

        DEFINE_PER_CPU(struct mystruct, y);
        struct mystruct x = __get_cpu_var(y);

    Converts to

        memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y) = x;

    Converts to

        __this_cpu_write(y, x);

6. Increment/decrement etc. of a per-cpu variable.

        DEFINE_PER_CPU(int, y);
        __get_cpu_var(y)++;

    Converts to

        __this_cpu_inc(y);

tj: Folded a fix patch. http://lkml.kernel.org/g/alpine.DEB.2.11.1408172143020.9652@gentwo.org

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 24 June 2014, 1 commit
-
-
Authored by Michael Ellerman

In commit 721aeaa9 "Build little endian ppc64 kernel with ABIv2", we missed some updates required in the kprobes code to make jprobes work when the kernel is built with ABI v2.

Firstly, update arch_deref_entry_point() to do the right thing. Now that we have added ppc_global_function_entry() we can just always use it; it does the right thing for 32- and 64-bit and for ABI v1 & v2.

Secondly, we need to update the code that sets up the register state before calling the jprobe handler. On ABI v1 we set up r2 to hold the TOC; on ABI v2 we need to populate r12 with the function entry point address instead.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
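A sketch of what that register setup looks like in the jprobe pre-handler; the register names follow the commit text, but treat the exact #ifdef spelling as an assumption:

    #ifdef CONFIG_PPC64
    #if defined(_CALL_ELF) && _CALL_ELF == 2
        /* ABI v2: the handler expects its own entry address in r12 */
        regs->gpr[12] = (unsigned long)jp->entry;
    #else
        /* ABI v1: load r2 with the TOC from the function descriptor */
        regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
    #endif
    #endif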
-
- 14 October 2013, 1 commit
-
-
Authored by Anoop Thomas Mathew

Signed-off-by: Anoop Thomas Mathew <atm@profoundis.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 20 June 2013, 2 commits
-
-
Authored by Suzuki K. Poulose

This patch moves the single-step enable code used by kprobes to a generic routine header so that it can be re-used by other code, in this case uprobes. No functional changes.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev@ozlabs.org
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Authored by Suzuki K. Poulose

External/decrementer exceptions have lower priority than the debug exception, so we don't have to disable external interrupts before a single step. However, on BookE the Critical Input Exception (CE) has higher priority than a debug exception; hence we mask it.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: linuxppc-dev@ozlabs.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
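A minimal sketch of the single-step setup this describes, using the powerpc register and bit names; the surrounding function is abbreviated and the exact upstream hunk may differ:

    static void enable_single_step(struct pt_regs *regs)
    {
    #ifdef CONFIG_PPC_ADV_DEBUG_REGS
        /* BookE: mask critical input exceptions and arm a debug
         * (instruction-complete) exception for the next instruction */
        regs->msr &= ~MSR_CE;
        regs->msr |= MSR_DE;
        mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
    #else
        /* classic: MSR[SE] single-step trace mode */
        regs->msr |= MSR_SE;
    #endif
    }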
-
- 28 February 2013, 1 commit
-
-
Authored by Sasha Levin

I'm not sure why, but the hlist for-each-entry iterators were conceived differently from the list ones, which look like this:

    list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:

- Fix up the actual hlist iterators in linux/list.h.
- Fix up the declaration of other iterators based on the hlist ones.
- A very small number of places were using the 'node' parameter; these were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
    hlist_for_each_entry_from, hlist_for_each_entry_rcu,
    hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
    for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
    inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
    sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
    hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
    nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
    nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
    for_each_host;
    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;
    <+... when != b
    (
    hlist_for_each_entry(a, - b, c, d) S
    |
    hlist_for_each_entry_continue(a, - b, c) S
    |
    hlist_for_each_entry_from(a, - b, c) S
    |
    hlist_for_each_entry_rcu(a, - b, c, d) S
    |
    hlist_for_each_entry_rcu_bh(a, - b, c, d) S
    |
    hlist_for_each_entry_continue_rcu_bh(a, - b, c) S
    |
    for_each_busy_worker(a, c, - b, d) S
    |
    ax25_uid_for_each(a, - b, c) S
    |
    ax25_for_each(a, - b, c) S
    |
    inet_bind_bucket_for_each(a, - b, c) S
    |
    sctp_for_each_hentry(a, - b, c) S
    |
    sk_for_each(a, - b, c) S
    |
    sk_for_each_rcu(a, - b, c) S
    |
    sk_for_each_from -(a, b) +(a) S
    + sk_for_each_from(a) S
    |
    sk_for_each_safe(a, - b, c, d) S
    |
    sk_for_each_bound(a, - b, c) S
    |
    hlist_for_each_entry_safe(a, - b, c, d, e) S
    |
    hlist_for_each_entry_continue_rcu(a, - b, c) S
    |
    nr_neigh_for_each(a, - b, c) S
    |
    nr_neigh_for_each_safe(a, - b, c, d) S
    |
    nr_node_for_each(a, - b, c) S
    |
    nr_node_for_each_safe(a, - b, c, d) S
    |
    - for_each_gfn_sp(a, c, d, b) S
    + for_each_gfn_sp(a, c, d) S
    |
    - for_each_gfn_indirect_valid_sp(a, c, d, b) S
    + for_each_gfn_indirect_valid_sp(a, c, d) S
    |
    for_each_host(a, - b, c) S
    |
    for_each_host_safe(a, - b, c, d) S
    |
    for_each_mesh_entry(a, - b, c, d) S
    )
    ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
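A before/after sketch of the iterator change in kernel C; the struct and function names are invented for illustration:

    #include <linux/list.h>

    struct item {
        int val;
        struct hlist_node hlist;
    };

    static int sum(struct hlist_head *head)
    {
        struct item *pos;
        int total = 0;

        /* the old form needed a spare struct hlist_node *node cursor:
         *     hlist_for_each_entry(pos, node, head, hlist)
         * the new form matches list_for_each_entry(): */
        hlist_for_each_entry(pos, head, hlist)
            total += pos->val;
        return total;
    }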
-
- 29 March 2012, 1 commit
-
-
Authored by David Howells

Disintegrate asm/system.h for PowerPC.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
cc: linuxppc-dev@lists.ozlabs.org
-
- 02 June 2010, 1 commit
-
-
Authored by Ananth N Mavinakayanahalli

emulate_step() in kprobe_handler() will already have determined whether the probed instruction can be emulated. We single-step in hardware only if the instruction couldn't be emulated. resume_execution() is therefore superfluous -- all we need is to fix up the instruction pointer after single-stepping.

Thanks to Paul Mackerras for catching this.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
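The post-single-step fixup then reduces to advancing the saved instruction pointer past the probed instruction. A fragment sketch, assuming the field names of the powerpc kprobes code and the fixed 4-byte instruction size:

    struct kprobe *cur = kprobe_running();

    /* We only reach here when the instruction could not be emulated
     * and was hardware single-stepped out of line: just continue at
     * the instruction following the probepoint. */
    regs->nip = (unsigned long)cur->addr + 4;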
-
- 05 May 2010, 1 commit
-
-
Authored by Dave Kleikamp

476 requires an isync after loading MMU- and debug-related SPRs. Some of these are in performance-critical paths and may need to be optimized, but initially we're playing it safe.

Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
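In the kprobes path this amounts to a context-synchronizing isync after the debug-SPR write, along these lines (a sketch, not the exact diff):

    /* program the debug SPR, then context-synchronize for 476 */
    mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
    isync();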
-
- 30 March 2010, 1 commit
-
-
Authored by Tejun Heo

include cleanup: Update gfp.h and slab.h includes to prepare for breaking the implicit slab.h inclusion from percpu.h.

percpu.h is included by sched.h and module.h, and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of the conversion:

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to place the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
-
- 17 February 2010, 1 commit
-
-
Authored by Dave Kleikamp

powerpc/booke: Introduce new CONFIG options for advanced debug registers

Introduce new config options to simplify the ifdefs pertaining to the advanced debug registers for booke and 40x processors:

    CONFIG_PPC_ADV_DEBUG_REGS      - boolean: true for dac-based processors
    CONFIG_PPC_ADV_DEBUG_IACS      - number of IAC registers
    CONFIG_PPC_ADV_DEBUG_DACS      - number of DAC registers
    CONFIG_PPC_ADV_DEBUG_DVCS      - number of DVC registers
    CONFIG_PPC_ADV_DEBUG_DAC_RANGE - DAC ranges supported

Beginning conservatively, since I only have the facilities to test 440 hardware. I believe all 40x and booke platforms support at least 2 IAC and 2 DAC registers. For 440, 4 IAC and 2 DVC registers are enabled, as well as the DAC ranges.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
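These options let per-core ifdef tangles collapse into counts, along the lines of this sketch (the helper is invented; SPRN_IAC1..4 are the real instruction address compare SPRs):

    #ifdef CONFIG_PPC_ADV_DEBUG_REGS
    static void clear_iacs(void)
    {
        mtspr(SPRN_IAC1, 0);
        mtspr(SPRN_IAC2, 0);
    #if CONFIG_PPC_ADV_DEBUG_IACS > 2
        mtspr(SPRN_IAC3, 0);
        mtspr(SPRN_IAC4, 0);
    #endif
    }
    #endif /* CONFIG_PPC_ADV_DEBUG_REGS */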
-
- 07 January 2009, 1 commit
-
-
Authored by Masami Hiramatsu

Add kprobe_insn_mutex for protecting the kprobe_insn_pages hlist, and remove kprobe_mutex from architecture-dependent code. This allows us to call arch_remove_kprobe() (and free_insn_slot) while holding kprobe_mutex.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
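The locking shape this describes, as a sketch; the wrapper name is hypothetical, and whether the lock is taken by the caller or inside the slot helpers is an implementation detail of the actual patch:

    static DEFINE_MUTEX(kprobe_insn_mutex); /* protects kprobe_insn_pages */

    static void remove_probe_insn_slot(struct kprobe *p)
    {
        mutex_lock(&kprobe_insn_mutex);
        free_insn_slot(p->ainsn.insn, 0);   /* walks kprobe_insn_pages */
        mutex_unlock(&kprobe_insn_mutex);
    }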
-
- 06 January 2009, 1 commit
-
-
Authored by Frederik Schwarzer

- (better, more, bigger ...) then -> (...) than

Signed-off-by: Frederik Schwarzer <schwarzerf@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 26 July 2008, 1 commit
-
-
Authored by Srinivasa D S

Currently the list of kretprobe instances is stored in the kretprobe object (as used_instances, free_instances) and in the kretprobe hash table. We have one global kretprobe lock to serialise the access to these lists. This causes only one kretprobe handler to execute at a time, and hence affects system performance, particularly on SMP systems and when return probes are set on a lot of functions (like on all system calls).

The solution proposed here provides fine-grained locks that perform better on SMP systems compared to the present kretprobe implementation.

Solution:

1) Instead of having one global lock to protect kretprobe instances present in the kretprobe object and the kretprobe hash table, we will have two locks: one lock for protecting the kretprobe hash table and another lock for the kretprobe object.

2) We hold the lock present in the kretprobe object while we modify a kretprobe instance in the kretprobe object, and we hold the per-hash-list lock while modifying kretprobe instances present in that hash list. To prevent deadlock, we never grab a per-hash-list lock while holding a kretprobe lock.

3) We can remove used_instances from struct kretprobe, as we can track used instances of kretprobe instances using the kretprobe hash table.

Time duration for kernel compilation ("make -j 8") on an 8-way ppc64 system with return probes set on all system calls:

                 Un-patched kernel   cacheline-aligned   non-cacheline-
                                     patch               aligned patch
    real         9m46.784s           9m54.412s           10m2.450s
    user         40m5.715s           40m7.142s           40m4.273s
    sys          2m57.754s           2m58.583s           3m17.430s

Time duration for kernel compilation ("make -j 8") on the same system, when the kernel is not probed:

    real  9m26.389s
    user  40m8.775s
    sys   2m7.283s

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
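A sketch of the per-hash-list locking this describes, hashing on the task as the kretprobe table does; table size and helper name are illustrative (the "cacheline aligned" variant in the measurements pads each lock to its own cacheline):

    #include <linux/hash.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    #define KPROBE_HASH_BITS   6
    #define KPROBE_TABLE_SIZE  (1 << KPROBE_HASH_BITS)

    static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
    static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];

    static void kretprobe_hash_lock(struct task_struct *tsk,
                                    struct hlist_head **head,
                                    unsigned long *flags)
    {
        unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);

        /* handlers for different tasks now contend on different locks */
        *head = &kretprobe_inst_table[hash];
        spin_lock_irqsave(&kretprobe_table_locks[hash], *flags);
    }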
-
- 26 June 2008, 2 commits
-
-
Authored by Kumar Gala

This patch is based on work done by Madhvesh. R. Sulibhavi back in March 2007. We refactor some of the single-step handling, since it differs between "classic" and "booke" powerpc cores.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Authored by Kumar Gala

* Mark __flush_icache_range as a function that can't be probed, since it's used by the kprobe code.

* Fix an issue with single stepping and async exceptions. We need to ensure that we don't get an async exception (external, decrementer, etc.) while we are attempting to single-step the probe point. Added a check to ensure we only handle a single step if it's really intended for the instruction in question.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
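A sketch of the "is this single step really ours?" check; the +4 assumes the 4-byte powerpc instruction size and a single out-of-line slot:

    struct kprobe *cur = kprobe_running();

    if (!cur || user_mode(regs))
        return 0;

    /* make sure we got here for the instruction we have a kprobe on,
     * not for some unrelated async event */
    if (((unsigned long)cur->ainsn.insn + 4) != regs->nip)
        return 0;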
-
- 23 May 2008, 1 commit
-
-
Authored by Michael Ellerman

func_descr_t->entry is already an unsigned long. Mea culpa.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 20 February 2008, 1 commit
-
-
Authored by Ananth N Mavinakayanahalli

Fix sparse warnings in powerpc kprobes:

    CHECK   arch/powerpc/kernel/kprobes.c
    arch/powerpc/kernel/kprobes.c:277:6: warning: symbol 'kretprobe_trampoline_holder' was not declared. Should it be static?
    arch/powerpc/kernel/kprobes.c:287:15: warning: symbol 'trampoline_probe_handler' was not declared. Should it be static?
    arch/powerpc/kernel/kprobes.c:525:16: warning: symbol 'jprobe_return_end' was not declared. Should it be static?

Fix along the same lines as http://lkml.org/lkml/2008/2/13/642

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 17 October 2007, 1 commit
-
-
Authored by Masami Hiramatsu

Introduce architecture-dependent kretprobe blacklists to prohibit users from inserting return probes on functions in which kprobes can be inserted but kretprobes cannot. This patch also removes the "__kprobes" mark from "__switch_to" on x86_64 and registers "__switch_to" in the blacklist on x86-64, because that mark was there only to prohibit users from inserting kretprobes.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 July 2007, 1 commit
-
-
Authored by Michael Ellerman

I realise jprobes are a razor-blades-included type of interface, but that doesn't mean we can't try and make them safer to use. This guy I know once wrote code like this:

    struct jprobe jp = { .kp.symbol_name = "foo", .entry = "jprobe_foo" };

And then his kernel exploded. Oops.

This patch adds an arch hook, arch_deref_entry_point() (I don't like it either), which takes the void * in a struct jprobe and gives back the text address that it represents. We can then use that in register_jprobe() to check that the entry point we're passed is actually in the kernel text, rather than just some random value.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
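On ppc64 with function descriptors, the hook plausibly dereferences the descriptor; a sketch of the arch side and the generic range test (kernel_text_address() is the existing helper for checking kernel text):

    /* ppc64 (ABI v1): jp->entry points at a function descriptor */
    void *arch_deref_entry_point(void *entry)
    {
        return (void *)((func_descr_t *)entry)->entry;
    }

    /* generic side, in register_jprobe(): reject bogus entry points */
    if (!kernel_text_address((unsigned long)
                             arch_deref_entry_point(jp->entry)))
        return -EINVAL;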
-
- 09 May 2007, 4 commits
-
-
Authored by Ananth N Mavinakayanahalli

This patch provides a debugfs knob to turn kprobes on/off:

o A new file /debug/kprobes/enabled indicates if kprobes is enabled or not (default enabled).
o Echoing 0 to this file will disarm all installed probes.
o Any new probe registration when disabled will register the probe but not arm it. A message will be printed out in such a case.
o When the value 1 is echoed to the file, all probes (including ones registered in the intervening period) will be enabled.
o Unregistration will happen irrespective of whether probes are globally enabled or not.
o Update Documentation/kprobes.txt to reflect these changes. While there, also update the doc to make it current.

We are also looking at providing sysrq key support to tie into the disabling feature provided by this patch.

[akpm@linux-foundation.org: use bool like a bool!]
[akpm@linux-foundation.org: add printk facility levels]
[cornelia.huck@de.ibm.com: add the missing arch_trampoline_kprobe() for s390]
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Authored by Christoph Hellwig

- Consolidate duplicate code in all arch_prepare_kretprobe instances into common code.
- Replace various odd helpers that use hlist_for_each_entry to get the first element of a list with either hlist_for_each_entry_safe or an open-coded access to the first element in the caller.
- Inline add_rp_inst into its only remaining caller.
- Use kretprobe_inst_table_head instead of open-coding it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Authored by Ananth N Mavinakayanahalli

In certain cases, such as when the real return address can't be found or when the number of tracked calls to a kretprobed function is less than the number of returns, we may not be able to find the correct return address after processing a kretprobe. Currently we just do a BUG_ON, but no information is provided about the actual failing kretprobe. Print out details of the kretprobe before calling BUG().

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Authored by Christoph Hellwig

This patch moves the die notifier handling to common code. Previously various architectures had exactly the same code for it. Note that the new code is compiled unconditionally; this should be understood as an appeal to the other architecture maintainers to implement support for it as well (aka sprinkling a notify_die or two in the proper place).

arm had a notify_die that did something totally different; I renamed it to arm_notify_die as part of the patch and made it static to the file it's declared and used in. avr32 used to pass slightly less information through this interface and I brought it into line with the other architectures.

[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
[bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <linux-arch@vger.kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 May 2007, 1 commit
-
-
Authored by Christoph Hellwig

Call the kprobes page-fault handler directly instead of going through the complex notifier chain.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
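The shape of the direct call in the powerpc fault path, sketched; the preempt dance is there because kprobe_running() uses smp_processor_id(), and 11 is the trap number passed on powerpc:

    static inline int notify_page_fault(struct pt_regs *regs)
    {
        int ret = 0;

        /* kprobe_running() needs smp_processor_id() */
        if (!user_mode(regs)) {
            preempt_disable();
            if (kprobe_running() && kprobe_fault_handler(regs, 11))
                ret = 1;
            preempt_enable();
        }
        return ret;
    }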
-
- 24 April 2007, 1 commit
-
-
Authored by Ananth N Mavinakayanahalli

For cases when probes are placed on instructions that can be emulated, don't take the single-step exception.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
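A sketch of the emulation fast path in kprobe_handler(); emulate_step() is powerpc's existing instruction emulator, and a positive return here is taken to mean the instruction was emulated with nip already advanced:

    ret = emulate_step(regs, *p->ainsn.insn);
    if (ret > 0) {
        /* emulated in software: skip the hardware single step and
         * run the post handler immediately */
        kcb->kprobe_status = KPROBE_HIT_SSDONE;
        if (p->post_handler)
            p->post_handler(p, regs, 0);
        reset_current_kprobe();
        preempt_enable_no_resched();
        return 1;
    }
    /* otherwise fall back to single-stepping the copy out of line */
    prepare_singlestep(p, regs);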
-
- 07 February 2007, 1 commit
-
-
Authored by Kumar Gala

Add kprobes support to ppc32 platforms that use single_step_exception. This excludes 4xx and anything Book-E, since their debug mechanisms for single stepping are completely different.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 08 December 2006, 1 commit
-
-
Authored by Masami Hiramatsu

When we are unregistering a kprobe-booster, we can't release its instruction buffer immediately on a preemptive kernel, because some processes might be preempted on the buffer. The freeze_processes() and thaw_processes() functions can clean most processes off the buffer, but there are still some non-frozen threads which have the PF_NOFREEZE flag. If those threads are sleeping (not preempted) at a known place outside the buffer, we can ensure the safety of freeing. However, this check routine takes a long time to run.

So, this patch introduces a garbage collection mechanism for insn_slot. It also introduces a "dirty" flag for free_insn_slot, for efficiency. "Clean" instruction slots (dirty flag cleared) are released immediately, but "dirty" slots, which are used by boosted kprobes, are marked as garbage. collect_garbage_slots() will be invoked to release the "dirty" slots if there are more than INSNS_PER_PAGE garbage slots or if there are no unused slots.

Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: "bibo,mao" <bibo.mao@intel.com>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Yumiko Sugita <yumiko.sugita.yf@hitachi.com>
Cc: Satoshi Oshima <soshima@redhat.com>
Cc: Hideo Aoki <haoki@redhat.com>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
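At the call site this amounts to passing the dirty flag when the slot may still be executed; a sketch, assuming the booster bookkeeping of arches that have one (e.g. x86):

    /* slot may still be live under a preempted task: mark it dirty and
     * let collect_garbage_slots() reap it once that is safe */
    free_insn_slot(p->ainsn.insn, p->ainsn.boostable == 1);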
-
- 02 October 2006, 2 commits
-
-
Authored by bibo, mao

kprobe_flush_task() can end up calling kfree while holding the kretprobe_lock spinlock; if kfree is itself probed by a kretprobe, that will incur a spinlock deadlock. This patch moves the kfree call out of the scope of kretprobe_lock.

Signed-off-by: bibo, mao <bibo.mao@intel.com>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
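The usual shape of such a fix, sketched along the lines of the empty_rp approach: collect instances onto a private list under the lock, free them after unlocking. Note the 5-argument hlist_for_each_entry_safe of that era:

    struct hlist_head empty_rp;
    struct hlist_node *node, *tmp;
    struct kretprobe_instance *ri;
    unsigned long flags;

    INIT_HLIST_HEAD(&empty_rp);
    spin_lock_irqsave(&kretprobe_lock, flags);
    hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
        if (ri->task == tk)
            recycle_rp_inst(ri, &empty_rp); /* moves ri onto empty_rp */
    }
    spin_unlock_irqrestore(&kretprobe_lock, flags);

    /* kfree() outside the lock: safe even if kfree is kretprobed */
    hlist_for_each_entry_safe(ri, node, tmp, &empty_rp, hlist) {
        hlist_del(&ri->hlist);
        kfree(ri);
    }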
-
Authored by bibo, mao

Whitespace was used for indentation; this patch cleans these lines up to follow the kernel coding style.

Signed-off-by: bibo, mao <bibo.mao@intel.com>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 17 August 2006, 1 commit
-
-
Authored by Ananth N Mavinakayanahalli

- On archs that have no-exec support, we vmalloc() an executable scratch area of PAGE_SIZE and divide it up into an array of slots of the maximum instruction size for that arch.
- On kprobe registration, the original instruction is copied to the first available free slot; so if multiple kprobes are registered, chances are they get contiguous slots.
- On POWER4, due to its not having coherent icaches, we could hit a situation where a probe that is registered on one processor is hit immediately on another. This second processor could have fetched the stream of text from the out-of-line single-stepping area *before* the probe registration completed, possibly due to an earlier (and different) kprobe hit, and hence would see stale data in the slot. Executing such an arbitrary instruction led to a problem as reported in LTC bugzilla 23555.

The correct solution is to call flush_icache_range() as soon as the instruction is copied for out-of-line single-stepping, so the correct instruction is seen on all processors.

Thanks to Will Schmidt who tracked this down.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
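A sketch of the fix in arch_prepare_kprobe(): flush the icache range covering the copied slot right after the copy, so no CPU can execute stale bytes from it.

    memcpy(p->ainsn.insn, p->addr,
           MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
    flush_icache_range((unsigned long)p->ainsn.insn,
                       (unsigned long)p->ainsn.insn +
                       MAX_INSN_SIZE * sizeof(kprobe_opcode_t));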
-
- 01 July 2006, 1 commit
-
-
Authored by Jörn Engel

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
-
- 03 May 2006, 1 commit
-
-
Authored by Ananth N Mavinakayanahalli

We currently single-step inline if the instruction on which a kprobe is inserted is a trap variant.

- Trap variants (such as tdnei, used by BUG()) typically evaluate a condition and cause a trap only if the condition is satisfied.
- kprobes uses the unconditional "trap" (0x7fe00008), and single-stepping again on this instruction, resulting in another trap without evaluating the condition, is obviously incorrect.

Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
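Detecting the trap variants comes down to opcode masks along these lines; a sketch with mask values derived from the Power ISA trap encodings, so verify against the tree's own helpers before relying on them:

    #define IS_TW(instr)   (((instr) & 0xfc0007fe) == 0x7c000008)
    #define IS_TD(instr)   (((instr) & 0xfc0007fe) == 0x7c000088)
    #define IS_TDI(instr)  (((instr) & 0xfc000000) == 0x08000000)
    #define IS_TWI(instr)  (((instr) & 0xfc000000) == 0x0c000000)

    /* any trap variant, conditional or not */
    static inline int is_trap(unsigned int instr)
    {
        return IS_TW(instr) || IS_TD(instr) ||
               IS_TWI(instr) || IS_TDI(instr);
    }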
-
- 20 April 2006, 1 commit
-
-
Authored by Prasanna S Panchamukhi

Andrew Morton pointed out that the compiler might not inline the functions marked for inlining in kprobes, thereby allowing the insertion of probes on these kprobes routines, which might cause recursion. This patch removes all such inlines and adds the routines to the kprobes section, thereby disallowing probes on them. Some of the routines may well still be inlined, since they are executed after kprobes has done the necessary setup for reentrancy.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 27 March 2006, 2 commits
-
-
Authored by Prasanna S Panchamukhi

Provide proper kprobes fault handling if user-specified pre/post handlers try to access user address space through copy_from_user(), get_user(), etc.

The user-specified fault handler gets called only if the fault occurs while executing user-specified handlers. In such a case the user-specified handler is allowed to fix it first; later, if the user-specified fault handler does not fix it, we try to fix it via the exception tables (fixup_exception()). The user-specified handler will not be called if the fault happens when single-stepping the original instruction; instead we reset the current probe and allow the system page fault handler to fix it up.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
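Sketched as a switch over the kprobe state machine, close to the pattern this commit spread across arches; on powerpc the fixup step goes through the exception tables, and the declarations are abbreviated here:

    const struct exception_table_entry *entry;

    switch (kcb->kprobe_status) {
    case KPROBE_HIT_SS:
    case KPROBE_REENTER:
        /* fault while single-stepping the copied instruction: back up
         * to the probepoint, reset, and let the page fault handler
         * sort it out */
        regs->nip = (unsigned long)cur->addr;
        reset_current_kprobe();
        preempt_enable_no_resched();
        break;
    case KPROBE_HIT_ACTIVE:
    case KPROBE_HIT_SSDONE:
        /* give the user-specified fault handler first shot */
        if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
            return 1;
        /* then try the kernel's exception-table fixup */
        entry = search_exception_tables(regs->nip);
        if (entry) {
            regs->nip = entry->fixup;
            return 1;
        }
        break;
    }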
-
Authored by bibo, mao

Currently kprobe handler traps can only happen in kernel space, so the function kprobe_exceptions_notify should skip traps which happen in user space. This patch modifies it accordingly; it is based on 2.6.16-rc4.

Signed-off-by: bibo mao <bibo.mao@intel.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
Cc: <hiramatu@sdl.hitachi.co.jp>
Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
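The guard is a short early-out in the notifier, as sketched here (matching the pattern used across arches):

    struct die_args *args = data;

    /* kprobe traps never originate from user space; ignore those */
    if (args->regs && user_mode(args->regs))
        return NOTIFY_DONE;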
-
- 23 March 2006, 1 commit
-
-
Authored by Ingo Molnar

Semaphore to mutex conversion. The conversion was generated via scripts, and the result was validated automatically via a script as well.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 10 February 2006, 1 commit
-
-
Authored by Jon Mason

This patch removes all self-references and fixes references to files in the now-defunct arch/ppc64 tree. I think this accomplishes everything wanted, though there might be a few references I missed.

Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-