1. 24 Apr 2014, 5 commits
    • kprobes: Show blacklist entries via debugfs · 63724740
      Committed by Masami Hiramatsu
      Show blacklist entries (function names with their address
      ranges) via /sys/kernel/debug/kprobes/blacklist.
      
      Note that at this point the blacklist covers only vmlinux,
      not modules, so the list is fixed and not updated at runtime.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20140417081849.26341.11609.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      63724740
    • kprobes: Use NOKPROBE_SYMBOL macro instead of __kprobes · 820aede0
      Committed by Masami Hiramatsu
      Use the NOKPROBE_SYMBOL macro to protect functions from
      kprobes instead of the __kprobes annotation.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20140417081821.26341.40362.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      820aede0
    • kprobes: Allow probe on some kprobe functions · 55479f64
      Committed by Masami Hiramatsu
      There is no need to prohibit probing of the functions
      used for preparation, registration, optimization,
      control, etc. These can safely be probed because they are
      not invoked from the breakpoint/fault/debug handlers,
      so there is no chance of causing recursive exceptions.
      
      Following functions are now removed from the kprobes blacklist:
      
      	add_new_kprobe
      	aggr_kprobe_disabled
      	alloc_aggr_kprobe
      	alloc_aggr_kprobe
      	arm_all_kprobes
      	__arm_kprobe
      	arm_kprobe
      	arm_kprobe_ftrace
      	check_kprobe_address_safe
      	collect_garbage_slots
      	collect_garbage_slots
      	collect_one_slot
      	debugfs_kprobe_init
      	__disable_kprobe
      	disable_kprobe
      	disarm_all_kprobes
      	__disarm_kprobe
      	disarm_kprobe
      	disarm_kprobe_ftrace
      	do_free_cleaned_kprobes
      	do_optimize_kprobes
      	do_unoptimize_kprobes
      	enable_kprobe
      	force_unoptimize_kprobe
      	free_aggr_kprobe
      	free_aggr_kprobe
      	__free_insn_slot
      	__get_insn_slot
      	get_optimized_kprobe
      	__get_valid_kprobe
      	init_aggr_kprobe
      	init_aggr_kprobe
      	in_nokprobe_functions
      	kick_kprobe_optimizer
      	kill_kprobe
      	kill_optimized_kprobe
      	kprobe_addr
      	kprobe_optimizer
      	kprobe_queued
      	kprobe_seq_next
      	kprobe_seq_start
      	kprobe_seq_stop
      	kprobes_module_callback
      	kprobes_open
      	optimize_all_kprobes
      	optimize_kprobe
      	prepare_kprobe
      	prepare_optimized_kprobe
      	register_aggr_kprobe
      	register_jprobe
      	register_jprobes
      	register_kprobe
      	register_kprobes
      	register_kretprobe
      	register_kretprobe
      	register_kretprobes
      	register_kretprobes
      	report_probe
      	show_kprobe_addr
      	try_to_optimize_kprobe
      	unoptimize_all_kprobes
      	unoptimize_kprobe
      	unregister_jprobe
      	unregister_jprobes
      	unregister_kprobe
      	__unregister_kprobe_bottom
      	unregister_kprobes
      	__unregister_kprobe_top
      	unregister_kretprobe
      	unregister_kretprobe
      	unregister_kretprobes
      	unregister_kretprobes
      	wait_for_kprobe_optimizer
      
      I tested those functions by putting kprobes on all
      instructions in the functions with the bash script
      I sent to LKML. See:
      
        https://lkml.org/lkml/2014/3/27/33
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Link: http://lkml.kernel.org/r/20140417081753.26341.57889.stgit@ltc230.yrl.intra.hitachi.co.jp
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: fche@redhat.com
      Cc: systemtap@sourceware.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      55479f64
    • kprobes: Introduce NOKPROBE_SYMBOL() macro to maintain kprobes blacklist · 376e2424
      Committed by Masami Hiramatsu
      Introduce NOKPROBE_SYMBOL() macro which builds a kprobes
      blacklist at kernel build time.
      
      The usage of this macro is similar to EXPORT_SYMBOL(),
      placed after the function definition:
      
        NOKPROBE_SYMBOL(function);
      
      Since this macro would inhibit inlining of static/inline
      functions, this patch also introduces a nokprobe_inline macro
      for them. In that case, we must use NOKPROBE_SYMBOL() on the
      caller of the inline function.
      
      When CONFIG_KPROBES=y, the macro stores the given function
      address in the "_kprobe_blacklist" section.
      
      Since the data structures cannot be fully initialized by the
      macro (because there is no "size" information), they are
      re-initialized at boot time using kallsyms.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Link: http://lkml.kernel.org/r/20140417081705.26341.96719.stgit@ltc230.yrl.intra.hitachi.co.jp
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christopher Li <sparse@chrisli.org>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Jan-Simon Möller <dl9pf@gmx.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-sparse@vger.kernel.org
      Cc: virtualization@lists.linux-foundation.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      376e2424
    • kprobes: Prohibit probing on .entry.text code · be8f2743
      Committed by Masami Hiramatsu
      .entry.text is a code area used for interrupt/syscall
      entries, and it includes much sensitive code.
      Thus, it is better to prohibit probing on all of that code
      rather than only part of it.
      Since some of its symbols are already registered on the kprobe
      blacklist, this patch also removes those from the blacklist.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Jonathan Lebon <jlebon@redhat.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20140417081658.26341.57354.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      be8f2743
  2. 13 Nov 2013, 1 commit
  3. 12 Sep 2013, 2 commits
    • kprobes: allow to specify custom allocator for insn caches · af96397d
      Committed by Heiko Carstens
      The current two insn slot caches both use module_alloc/module_free to
      allocate and free insn slot cache pages.
      
      For s390 this is not sufficient, since there is a need to allocate insn
      slots that are either within the vmalloc module area or within dma memory.
      
      Therefore add a mechanism which allows specifying a custom allocator
      for an architecture's own insn slot cache.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af96397d
    • kprobes: unify insn caches · c802d64a
      Committed by Heiko Carstens
      The current kprobes insn caches allocate memory areas for insn slots
      with module_alloc().  The assumption is that the kernel image and module
      area are both within the same +/- 2GB memory area.
      
      This however is not true for s390 where the kernel image resides within
      the first 2GB (DMA memory area), but the module area is far away in the
      vmalloc area, usually somewhere close below the 4TB area.
      
      For new pc relative instructions s390 needs insn slots that are within
      +/- 2GB of each area.  That way we can patch displacements of
      pc-relative instructions within the insn slots just like x86 and
      powerpc.
      
      The module area works already with the normal insn slot allocator,
      however there is currently no way to get insn slots that are within the
      first 2GB on s390 (aka DMA area).
      
      Therefore this patch set modifies the kprobes insn slot cache code in
      order to allow specifying a custom allocator for the insn slot cache
      pages.  In addition, architectures can now have private insn slot caches
      without the need to modify common code.
      
      Patch 1 unifies and simplifies the current insn and optinsn caches
              implementation. This is a preparation which allows to add more
              insn caches in a simple way.
      
      Patch 2 adds the possibility to specify a custom allocator.
      
      Patch 3 makes s390 use the new insn slot mechanisms and adds support for
              pc-relative instructions with long displacements.
      
      This patch (of 3):
      
      The two insn caches (insn, and optinsn) each have an own mutex and
      alloc/free functions (get_[opt]insn_slot() / free_[opt]insn_slot()).
      
      Since there is a need for yet another insn cache which satisfies dma
      allocations on s390, unify and simplify the current implementation:
      
      - Move the per insn cache mutex into struct kprobe_insn_cache.
      - Move the alloc/free functions to kprobe.h so they are simply
        wrappers for the generic __get_insn_slot/__free_insn_slot functions.
        The implementation is done with a DEFINE_INSN_CACHE_OPS() macro
        which provides the alloc/free functions for each cache if needed.
      - Move the struct kprobe_insn_cache to kprobe.h, which allows generating
        architecture specific insn slot caches outside of the core kprobes
        code.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c802d64a
  4. 23 Jul 2013, 1 commit
    • kprobes/x86: Call out into INT3 handler directly instead of using notifier · 17f41571
      Committed by Jiri Kosina
      In fd4363ff ("x86: Introduce int3 (breakpoint)-based
      instruction patching"), the mechanism introduced for
      notifying the alternatives code from the int3 exception handler
      that an exception occurred was the die notifier.
      
      This is however problematic, as early code might be using jump
      labels even before the notifier registration has been performed,
      which will then lead to an oops due to an unhandled exception. One
      such occurrence was encountered by Fengguang:
      
       int3: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       Modules linked in:
       CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.11.0-rc1-01429-g04bf576 #8
       task: ffff88000da1b040 ti: ffff88000da1c000 task.ti: ffff88000da1c000
       RIP: 0010:[<ffffffff811098cc>]  [<ffffffff811098cc>] ttwu_do_wakeup+0x28/0x225
       RSP: 0000:ffff88000dd03f10  EFLAGS: 00000006
       RAX: 0000000000000000 RBX: ffff88000dd12940 RCX: ffffffff81769c40
       RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000001
       RBP: ffff88000dd03f28 R08: ffffffff8176a8c0 R09: 0000000000000002
       R10: ffffffff810ff484 R11: ffff88000dd129e8 R12: ffff88000dbc90c0
       R13: ffff88000dbc90c0 R14: ffff88000da1dfd8 R15: ffff88000da1dfd8
       FS:  0000000000000000(0000) GS:ffff88000dd00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: 00000000ffffffff CR3: 0000000001c88000 CR4: 00000000000006e0
       Stack:
        ffff88000dd12940 ffff88000dbc90c0 ffff88000da1dfd8 ffff88000dd03f48
        ffffffff81109e2b ffff88000dd12940 0000000000000000 ffff88000dd03f68
        ffffffff81109e9e 0000000000000000 0000000000012940 ffff88000dd03f98
       Call Trace:
        <IRQ>
        [<ffffffff81109e2b>] ttwu_do_activate.constprop.56+0x6d/0x79
        [<ffffffff81109e9e>] sched_ttwu_pending+0x67/0x84
        [<ffffffff8110c845>] scheduler_ipi+0x15a/0x2b0
        [<ffffffff8104dfb4>] smp_reschedule_interrupt+0x38/0x41
        [<ffffffff8173bf5d>] reschedule_interrupt+0x6d/0x80
        <EOI>
        [<ffffffff810ff484>] ? __atomic_notifier_call_chain+0x5/0xc1
        [<ffffffff8105cc30>] ? native_safe_halt+0xd/0x16
        [<ffffffff81015f10>] default_idle+0x147/0x282
        [<ffffffff81017026>] arch_cpu_idle+0x3d/0x5d
        [<ffffffff81127d6a>] cpu_idle_loop+0x46d/0x5db
        [<ffffffff81127f5c>] cpu_startup_entry+0x84/0x84
        [<ffffffff8104f4f8>] start_secondary+0x3c8/0x3d5
        [...]
      
      Fix this by directly calling poke_int3_handler() from the int3
      exception handler (analogously to what ftrace has been doing
      already), instead of relying on the notifier, whose registration
      might not yet have been finalized by the time of the first trap.
      Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/alpine.LNX.2.00.1307231007490.14024@pobox.suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17f41571
  5. 17 Jul 2013, 1 commit
  6. 04 Jul 2013, 1 commit
  7. 28 May 2013, 1 commit
    • kprobes: Fix to free gone and unused optprobes · 7b959fc5
      Committed by Masami Hiramatsu
      Fix to free gone and unused optprobes. This bug will
      cause a kernel panic if the user reuses the killed and
      unused probe.
      
      Reported at:
      
        http://sourceware.org/ml/systemtap/2013-q2/msg00142.html
      
      In the normal path, an optprobe on an init function is
      unregistered when a module goes live.
      
      unregister_kprobe(kp)
       -> __unregister_kprobe_top
         ->__disable_kprobe
           ->disarm_kprobe(ap == op)
             ->__disarm_kprobe
              ->unoptimize_kprobe : the op is queued
                                    on unoptimizing_list
      and do nothing in __unregister_kprobe_bottom
      
      After a while (usually wait 5 jiffies), kprobe_optimizer
      runs to unoptimize and free optprobe.
      
      kprobe_optimizer
       ->do_unoptimize_kprobes
         ->arch_unoptimize_kprobes : moved to free_list
       ->do_free_cleaned_kprobes
         ->hlist_del: the op is removed
         ->free_aggr_kprobe
           ->arch_remove_optimized_kprobe
           ->arch_remove_kprobe
           ->kfree: the op is freed
      
      Here, if kprobes_module_callback is called and the delayed
      unoptimizing probe is picked BEFORE kprobe_optimizer runs,
      
      kprobes_module_callback
       ->kill_kprobe
         ->kill_optimized_kprobe : dequeued from unoptimizing_list <=!!!
           ->arch_remove_optimized_kprobe
         ->arch_remove_kprobe
         (but op is not freed, and on the kprobe hash table)
      
      This doesn't happen if the probe unregistration is done AFTER
      kprobes_module_callback is called (because at that time the op
      is gone), and kprobe-tracer does it.
      
      To fix this bug, this patch changes kprobes_module_callback to
      enqueue the op to freeing_list at kill_optimized_kprobe only
      if the op is unused. The unused probes on freeing_list will
      be freed in do_free_cleaned_kprobes.
      
      Note that this calls arch_remove_*kprobe twice on the
      same probe, so those functions have to guard against a double
      free. Fortunately, most arch code already checks for that,
      except for mips. That will be fixed in the next patch.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Timo Juhani Lindfors <timo.lindfors@iki.fi>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Frank Ch. Eigler <fche@redhat.com>
      Cc: systemtap@sourceware.org
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: David S. Miller <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20130522093409.9084.63554.stgit@mhiramat-M0-7522
      [ Minor edits. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7b959fc5
  8. 18 Apr 2013, 1 commit
  9. 28 Feb 2013, 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Committed by Sasha Levin
      I'm not sure why, but the hlist for each entry iterators were conceived
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small amount of places were using the 'node' parameter, this
       was modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
  10. 10 Feb 2013, 1 commit
    • kprobes: fix wait_for_kprobe_optimizer() · ad72b3be
      Committed by Tejun Heo
      wait_for_kprobe_optimizer() seems largely broken.  It uses
      optimizer_comp which is never re-initialized, so
      wait_for_kprobe_optimizer() will never wait for anything once
      kprobe_optimizer() finishes all pending jobs for the first time.
      
      Also, aside from completion, delayed_work_pending() is %false once
      kprobe_optimizer() starts execution and wait_for_kprobe_optimizer()
      won't wait for it.
      
      Reimplement it so that it flushes optimizing_work until
      [un]optimizing_lists are empty.  Note that this also makes
      optimizing_work execute immediately if someone's waiting for it, which
      is the nicer behavior.
      
      Only compile tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      ad72b3be
  11. 22 Jan 2013, 1 commit
  12. 14 Sep 2012, 1 commit
  13. 31 Jul 2012, 5 commits
  14. 06 Mar 2012, 1 commit
  15. 04 Feb 2012, 1 commit
  16. 24 Jan 2012, 1 commit
  17. 13 Jan 2012, 1 commit
  18. 31 Oct 2011, 1 commit
  19. 13 Sep 2011, 1 commit
  20. 16 Jul 2011, 1 commit
  21. 17 Dec 2010, 1 commit
  22. 07 Dec 2010, 7 commits
    • kprobes: Use text_poke_smp_batch for unoptimizing · f984ba4e
      Committed by Masami Hiramatsu
      Use text_poke_smp_batch() on the unoptimization path to reduce
      the number of stop_machine() invocations. If the number of
      probes to unoptimize exceeds MAX_OPTIMIZE_PROBES (=256),
      kprobes unoptimizes the first MAX_OPTIMIZE_PROBES probes and
      kicks the optimizer for the remaining probes.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20101203095434.2961.22657.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f984ba4e
    • kprobes: Use text_poke_smp_batch for optimizing · cd7ebe22
      Committed by Masami Hiramatsu
      Use text_poke_smp_batch() in the optimization path to reduce
      the number of stop_machine() invocations. If the number of
      probes to optimize exceeds MAX_OPTIMIZE_PROBES (=256), kprobes
      optimizes the first MAX_OPTIMIZE_PROBES probes and kicks the
      optimizer for the remaining probes.
      
      Changes in v5:
      - Use kick_kprobe_optimizer() instead of directly calling
        schedule_delayed_work().
      - Rescheduling optimizer outside of kprobe mutex lock.
      
      Changes in v2:
      - Allocate code buffer and parameters in arch_init_kprobes()
        instead of using static arrays.
      - Merge previous max optimization limit patch into this patch.
        So, this patch introduces upper limit of optimization at
        once.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20101203095428.2961.8994.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cd7ebe22
    • kprobes: Reuse unused kprobe · 0490cd1f
      Committed by Masami Hiramatsu
      Reuse an unused kprobe (one waiting for unoptimizing, with no
      user handler) on a given address instead of returning -EBUSY
      when registering a new kprobe there.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095416.2961.39080.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0490cd1f
    • kprobes: Support delayed unoptimizing · 6274de49
      Committed by Masami Hiramatsu
      Unoptimization occurs when a probe is unregistered or disabled,
      and is heavy because it recovers instructions by using
      stop_machine(). This patch delays unoptimization operations and
      unoptimize several probes at once by using
      text_poke_smp_batch(). This can avoid unexpected system slowdown
      coming from stop_machine().
      
      Changes in v5:
      - Split this patch into several cleanup patches and this patch.
      - Fix some text_mutex lock miss.
      - Use bool instead of int for behavior flags.
      - Add additional comment for (un)optimizing path.
      
      Changes in v2:
      - Use dynamic allocated buffers and params.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095409.2961.82733.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6274de49
    • kprobes: Separate kprobe optimizing code from optimizer · 61f4e13f
      Committed by Masami Hiramatsu
      Separate the kprobe optimizing code from the optimizer; this
      will make it easy to introduce the unoptimizing code into the
      optimizer later.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095403.2961.91201.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      61f4e13f
    • kprobes: Cleanup disabling and unregistering path · 6f0f1dd7
      Committed by Masami Hiramatsu
      Merge the kprobe disabling code into the kprobe unregistering
      function and add comments for the disabling/unregistering process.
      
      The current unregistering code disables (disarms) kprobes after
      checking the target kprobe's status. This patch changes it to
      disable the kprobe first and change the kprobe's state after
      that. This allows sharing the probe disabling code between
      disable_kprobe() and unregister_kprobe().
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095356.2961.30152.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6f0f1dd7
    • kprobes: Rename old_p to more appropriate name · 6d8e40a8
      Committed by Masami Hiramatsu
      Rename irrelevant uses of "old_p" to more appropriate names.
      Originally, "old_p" just meant "the old kprobe on the given
      address", but the current code uses that name for "just another
      kprobe" or something like that. This patch renames those
      pointers to more appropriate names for maintainability.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <20101203095350.2961.48110.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6d8e40a8
  23. 30 Oct 2010, 1 commit
  24. 28 Oct 2010, 1 commit
  25. 25 Oct 2010, 1 commit
    • kprobes: Remove redundant text_mutex lock in optimize · 43948f50
      Committed by Masami Hiramatsu
      Remove text_mutex locking in optimize_all_kprobes(), because
      this function doesn't modify text. It simply queues probes on the
      optimization list for the kprobe_optimizer worker thread.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20101025131801.19160.70939.stgit@ltc236.sdl.hitachi.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      43948f50