1. 16 Sep 2009, 1 commit
  2. 27 Aug 2009, 1 commit
    • tracing/kprobes: Dump the culprit kprobe in case of kprobe recursion · 24851d24
      Authored by Frederic Weisbecker
      Kprobes can enter a probing recursion, i.e. a kprobe loops endlessly
      because one of the core functions used during probing is itself probed.
      
      This patch helps pinpoint the kprobe that caused such a recursion by
      dumping it and raising a BUG instead of a warning (the kprobe is also
      disarmed to try to avoid recursing inside BUG itself). Raising a BUG
      instead of a warning stops the stacktrace in the right place and
      doesn't pollute the logs with hundreds of traces that eventually end
      up in a stack overflow.
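      
      A minimal sketch of what this amounts to, assuming the mainline kprobes
      helpers dump_kprobe(), arch_disarm_kprobe() and the KPROBE_REENTER state;
      the function name and its exact placement in the breakpoint path are
      illustrative:
      
          #include <linux/kprobes.h>
          #include <linux/bug.h>
          
          /* Called when a breakpoint fires while a kprobe is already active. */
          static int reenter_check(struct kprobe *p, struct kprobe_ctlblk *kcb)
          {
                  switch (kcb->kprobe_status) {
                  case KPROBE_REENTER:
                          /*
                           * The probed address is part of the probing machinery
                           * itself: disarm it so BUG() cannot re-trigger the
                           * probe, dump the culprit, and stop here instead of
                           * looping until the stack overflows.
                           */
                          arch_disarm_kprobe(p);
                          dump_kprobe(p);
                          BUG();
                  default:
                          return 0;
                  }
          }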
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      24851d24
  3. 07 Apr 2009, 3 commits
  4. 21 Feb 2009, 1 commit
    • x86, mm, kprobes: fault.c, simplify notify_page_fault() · b1801812
      Authored by Ingo Molnar
      Impact: cleanup
      
      Remove an #ifdef from notify_page_fault(). The function still
      compiles to nothing in the !CONFIG_KPROBES case.
      
      Introduce kprobes_built_in() and kprobe_fault_handler() helpers
      to allow this; they return 0 if !CONFIG_KPROBES.
      
      No code changed:
      
         text	   data	    bss	    dec	    hex	filename
         4618	     32	     24	   4674	   1242	fault.o.before
         4618	     32	     24	   4674	   1242	fault.o.after
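      
      A hedged sketch of the pattern (simplified x86 fault path; user_mode()
      stands in for the era's user_mode_vm(), and in the !CONFIG_KPROBES case
      kprobe_fault_handler() is assumed to be a stub returning 0, as the
      description says):
      
          static inline int kprobes_built_in(void)
          {
          #ifdef CONFIG_KPROBES
                  return 1;
          #else
                  return 0;
          #endif
          }
          
          /*
           * No #ifdef needed here: when !CONFIG_KPROBES the condition is
           * constant false and the compiler drops the whole body.
           */
          static inline int notify_page_fault(struct pt_regs *regs)
          {
                  int ret = 0;
          
                  /* kprobe_running() needs smp_processor_id(): no preemption */
                  if (kprobes_built_in() && !user_mode(regs)) {
                          preempt_disable();
                          if (kprobe_running() && kprobe_fault_handler(regs, 14))
                                  ret = 1;        /* 14 == x86 page-fault vector */
                          preempt_enable();
                  }
                  return ret;
          }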
      
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b1801812
  5. 30 Jan 2009, 1 commit
  6. 07 Jan 2009, 2 commits
  7. 14 Oct 2008, 1 commit
  8. 26 Jul 2008, 1 commit
    • kprobes: improve kretprobe scalability with hashed locking · ef53d9c5
      Authored by Srinivasa D S
      Currently, kretprobe instances are stored in the kretprobe object (as
      used_instances, free_instances) and in the kretprobe hash table, and a
      single global kretprobe lock serialises access to these lists.  This
      allows only one kretprobe handler to execute at a time, which hurts
      system performance, particularly on SMP systems and when return probes
      are set on a lot of functions (e.g. on all system calls).
      
      The solution proposed here uses fine-grained locks that perform better
      on SMP systems than the present kretprobe implementation (a sketch of
      the scheme follows the timing data below).
      
      Solution:
      
       1) Instead of one global lock protecting the kretprobe instances in
          the kretprobe object and in the kretprobe hash table, use two
          locks: one for the kretprobe hash table and another for the
          kretprobe object.
      
       2) Hold the lock in the kretprobe object while modifying kretprobe
          instances in the kretprobe object, and hold the per-hash-list lock
          while modifying kretprobe instances on that hash list.  To prevent
          deadlock, never grab a per-hash-list lock while holding a kretprobe
          lock.
      
       3) Remove used_instances from struct kretprobe, since used kretprobe
          instances can be tracked through the kretprobe hash table.
      
      Kernel compilation times ("make -j 8") on an 8-way ppc64 system with
      return probes set on all system calls:
      
                cacheline-aligned    non-cacheline-aligned    un-patched
                patch                patch                    kernel
      ======================================================================
      real      9m46.784s            9m54.412s                10m2.450s
      user      40m5.715s            40m7.142s                40m4.273s
      sys       2m57.754s            2m58.583s                3m17.430s
      ======================================================================
      
      Kernel compilation times ("make -j 8") on the same system when the
      kernel is not probed:
      =========================
      real    9m26.389s
      user    40m8.775s
      sys     2m7.283s
      =========================
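      
      A hedged sketch of the per-bucket locking described above; the
      identifiers follow the upstream flavour (kretprobe_inst_table,
      kretprobe_hash_lock) but this is an illustration of the scheme, not the
      exact patch:
      
          #include <linux/hash.h>
          #include <linux/sched.h>
          #include <linux/spinlock.h>
          
          #define KPROBE_HASH_BITS        6
          #define KPROBE_TABLE_SIZE       (1 << KPROBE_HASH_BITS)
          
          /* One instance list per bucket, keyed by the probed task... */
          static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
          /* ...and one lock per bucket, padded to avoid false sharing. */
          static struct {
                  spinlock_t lock ____cacheline_aligned_in_smp;
          } kretprobe_table_locks[KPROBE_TABLE_SIZE];
          
          static void kretprobe_hash_lock(struct task_struct *tsk,
                                          struct hlist_head **head,
                                          unsigned long *flags)
          {
                  unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
          
                  *head = &kretprobe_inst_table[hash];
                  spin_lock_irqsave(&kretprobe_table_locks[hash].lock, *flags);
          }
          
          static void kretprobe_hash_unlock(struct task_struct *tsk,
                                            unsigned long *flags)
          {
                  unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
          
                  spin_unlock_irqrestore(&kretprobe_table_locks[hash].lock,
                                         *flags);
          }
      
      Handlers on different CPUs now contend only when their tasks hash to
      the same bucket, instead of on one global kretprobe lock.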
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef53d9c5
  9. 24 Jun 2008, 1 commit
  10. 28 Apr 2008, 4 commits
  11. 05 Mar 2008, 1 commit
  12. 07 Feb 2008, 1 commit
  13. 30 Jan 2008, 1 commit
  14. 17 Oct 2007, 1 commit
  15. 20 Jul 2007, 3 commits
  16. 09 May 2007, 4 commits
  17. 08 Dec 2006, 1 commit
    • [PATCH] kprobes: enable booster on the preemptible kernel · b4c6c34a
      Authored by Masami Hiramatsu
      When unregistering a kprobe-booster, its instruction buffer cannot be
      released immediately on a preemptible kernel, because some processes
      might be preempted inside the buffer.  The freeze_processes() and
      thaw_processes() functions can clear most processes off the buffer,
      but some non-frozen threads carry the PF_NOFREEZE flag.  If those
      threads are sleeping (not preempted) at a known place outside the
      buffer, freeing is safe.
      
      However, this check routine takes a long time.  So this patch
      introduces a garbage collection mechanism for insn_slot, along with a
      "dirty" flag argument to free_insn_slot() for efficiency.
      
      "Clean" instruction slots (dirty flag cleared) are released
      immediately.  "Dirty" slots, which are used by boosted kprobes, are
      marked as garbage; collect_garbage_slots() is invoked to release them
      once there are more than INSNS_PER_PAGE garbage slots or no unused
      slots remain.
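      
      A hedged sketch of the free path described above (the SLOT_DIRTY
      marker, the find_insn_page() lookup helper and the field names are
      illustrative stand-ins, not the exact upstream code):
      
          static int kprobe_garbage_slots;  /* dirty slots awaiting collection */
          
          void free_insn_slot(kprobe_opcode_t *slot, int dirty)
          {
                  /* Hypothetical helper: find the insn page owning this slot. */
                  struct kprobe_insn_page *kip = find_insn_page(slot);
                  int idx = (slot - kip->insns) / MAX_INSN_SIZE;
          
                  if (dirty) {
                          /* A preempted task may still be in this slot: defer. */
                          kip->slot_used[idx] = SLOT_DIRTY;
                          kip->ngarbage++;
                          if (++kprobe_garbage_slots > INSNS_PER_PAGE)
                                  collect_garbage_slots();
                  } else {
                          /* A clean slot cannot be in use: release it now. */
                          collect_one_slot(kip, idx);
                  }
          }
      
      (The second trigger, "no unused slots left", naturally lives on the
      slot-allocation path rather than here.)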
      
      Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: "bibo,mao" <bibo.mao@intel.com>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Yumiko Sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi Oshima <soshima@redhat.com>
      Cc: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b4c6c34a
  18. 02 Oct 2006, 2 commits
  19. 26 Apr 2006, 1 commit
  20. 23 Mar 2006, 1 commit
  21. 12 Jan 2006, 1 commit
    • [PATCH] kprobes: fix unloading of self probed module · df019b1d
      Authored by Keshavamurthy Anil S
      When a kprobes module is written in such a way that probes are
      inserted on itself, that module could not be unloaded, due to
      reference counting on the same module.
      
      The patch below adds a check and increments the module refcount only
      if the probed module is not the module registering the probe.
      
      We need to allow modules to probe themselves for kprobes performance
      measurements.
      
      This patch has been tested on the x86_64, ppc64 and IA64 architectures.
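      
      A hedged sketch of the check (the helper name and the mod_refcounted
      bookkeeping are illustrative; module_text_address() and
      try_module_get() are the kernel APIs of that era):
      
          #include <linux/kprobes.h>
          #include <linux/module.h>
          
          /* called_from: return address inside the module registering the probe */
          static int maybe_ref_probed_module(struct kprobe *p,
                                             unsigned long called_from)
          {
                  struct module *probed_mod =
                          module_text_address((unsigned long)p->addr);
                  struct module *calling_mod = module_text_address(called_from);
          
                  /* Self-probing module: take no reference so it can unload. */
                  if (!probed_mod || calling_mod == probed_mod)
                          return 0;
          
                  if (!try_module_get(probed_mod))
                          return -EINVAL;
                  p->mod_refcounted = 1;  /* drop the reference on unregister */
                  return 0;
          }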
      
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      df019b1d
  22. 11 Jan 2006, 3 commits
  23. 13 Dec 2005, 2 commits
  24. 07 Nov 2005, 2 commits