1. 02 May 2007, 1 commit
  2. 24 April 2007, 1 commit
  3. 07 February 2007, 1 commit
  4. 08 December 2006, 1 commit
    • [PATCH] kprobes: enable booster on the preemptible kernel · b4c6c34a
      Committed by Masami Hiramatsu
      When we unregister a kprobe-booster, we can't release its instruction
      buffer immediately on a preemptible kernel, because some processes might
      be preempted while executing from the buffer.  The freeze_processes() and
      thaw_processes() functions can clear most processes off the buffer, but
      some non-frozen threads with the PF_NOFREEZE flag remain.  If those
      threads are sleeping (not preempted) at a known place outside the buffer,
      freeing is safe.
      
      However, this check routine takes a long time to run, so this patch
      introduces a garbage collection mechanism for instruction slots and, for
      efficiency, adds a "dirty" flag to free_insn_slot().
      
      "Clean" instruction slots (dirty flag cleared) are released immediately,
      while "dirty" slots that are still used by boosted kprobes are only
      marked as garbage.  collect_garbage_slots() is invoked to release the
      "dirty" slots once there are more than INSNS_PER_PAGE garbage slots or
      no unused slots remain.  A simplified sketch of this bookkeeping follows
      this entry.
      
      Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: "bibo,mao" <bibo.mao@intel.com>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Yumiko Sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi Oshima <soshima@redhat.com>
      Cc: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
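
      A minimal userspace sketch of the slot bookkeeping described above; the
      names (SLOTS_PER_PAGE, slot_state) and simplified signatures are
      illustrative stand-ins rather than the kernel's actual identifiers, and
      the real collector additionally verifies that no preempted task can
      still be executing inside the buffer before it frees anything.

      #include <stdio.h>

      #define SLOTS_PER_PAGE 16   /* stand-in for the kernel's INSNS_PER_PAGE */

      enum slot_state { SLOT_FREE, SLOT_USED, SLOT_GARBAGE };

      static enum slot_state slots[SLOTS_PER_PAGE];
      static int ngarbage;

      /* Release every slot previously marked as garbage. */
      static void collect_garbage_slots(void)
      {
          for (int i = 0; i < SLOTS_PER_PAGE; i++)
              if (slots[i] == SLOT_GARBAGE)
                  slots[i] = SLOT_FREE;
          ngarbage = 0;
      }

      /* Hand out the first free slot; reclaim garbage once if nothing is free. */
      static int get_insn_slot(void)
      {
          for (int pass = 0; pass < 2; pass++) {
              for (int i = 0; i < SLOTS_PER_PAGE; i++) {
                  if (slots[i] == SLOT_FREE) {
                      slots[i] = SLOT_USED;
                      return i;
                  }
              }
              collect_garbage_slots();   /* no free slot: collect and retry */
          }
          return -1;
      }

      /* "Clean" slots are freed immediately; "dirty" slots (a preempted task
       * may still be executing from them) are only marked, and reclaimed in
       * batch once enough garbage has piled up. */
      static void free_insn_slot(int idx, int dirty)
      {
          if (!dirty) {
              slots[idx] = SLOT_FREE;
              return;
          }
          slots[idx] = SLOT_GARBAGE;
          if (++ngarbage >= SLOTS_PER_PAGE)
              collect_garbage_slots();
      }

      int main(void)
      {
          int slot = get_insn_slot();
          free_insn_slot(slot, 1);   /* dirty: release is deferred */
          printf("slot %d marked as garbage, ngarbage=%d\n", slot, ngarbage);
          return 0;
      }

      Deferring only the dirty slots keeps the common unregister path cheap
      while still bounding how much garbage can accumulate before a collection
      is forced.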
  5. 02 October 2006, 2 commits
  6. 17 August 2006, 1 commit
    • [POWERPC] kprobes: Fix possible system crash during out-of-line single-stepping · 83db3dde
      Committed by Ananth N Mavinakayanahalli
      - On archs with no-exec support, we vmalloc() an executable scratch
      area of PAGE_SIZE and divide it into an array of slots, each of the
      maximum instruction size for that arch.
      - On kprobe registration, the original instruction is copied to the
      first available free slot, so if multiple kprobes are registered,
      chances are they get contiguous slots.
      - On POWER4, which does not have coherent icaches, we can hit a
      situation where a probe registered on one processor is hit immediately
      on another.  That second processor could have fetched the text from the
      out-of-line single-stepping area *before* the probe registration
      completed, possibly because of an earlier (and different) kprobe hit,
      and hence would see stale data in the slot.
      
      Executing such an arbitrary instruction led to the problem reported in
      LTC bugzilla 23555.
      
      The correct solution is to call flush_icache_range() as soon as the
      instruction is copied into the out-of-line single-stepping slot, so that
      all processors see the correct instruction (a minimal userspace analogue
      follows this entry).
      
      Thanks to Will Schmidt who tracked this down.
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Acked-by: Will Schmidt <will_schmidt@vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
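
      A userspace analogue of the fix, assuming a GCC/Clang toolchain: the
      kernel patch calls flush_icache_range(), for which the portable builtin
      __builtin___clear_cache() stands in here, and copy_insn_to_slot() is a
      hypothetical helper rather than a kernel function.

      #include <stddef.h>
      #include <string.h>

      /* Copy one probed instruction into its out-of-line slot, then flush the
       * instruction cache for that range so no CPU can fetch stale bytes from
       * the slot before executing it (illustrative helper, not kernel code). */
      void *copy_insn_to_slot(void *slot, const void *insn, size_t len)
      {
          memcpy(slot, insn, len);
          __builtin___clear_cache((char *)slot, (char *)slot + len);
          return slot;
      }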
  7. 01 July 2006, 1 commit
  8. 03 May 2006, 1 commit
  9. 20 April 2006, 1 commit
  10. 27 March 2006, 2 commits
  11. 23 March 2006, 1 commit
  12. 10 February 2006, 1 commit
  13. 12 January 2006, 1 commit
  14. 11 January 2006, 4 commits
  15. 13 December 2005, 1 commit
  16. 14 November 2005, 1 commit
  17. 07 November 2005, 4 commits
  18. 02 October 2005, 1 commit
  19. 27 September 2005, 1 commit
  20. 08 September 2005, 2 commits
  21. 06 July 2005, 1 commit
  22. 28 June 2005, 2 commits
    • [PATCH] Return probe redesign: ppc64 specific implementation · 97f7943d
      Committed by Rusty Lynch
      The following patch, provided by Ananth Mavinakayanahalli, implements
      the PPC64-specific parts of the new function return probe design.
      
      NOTE: Since getting Ananth's patch, I changed trampoline_probe_handler()
            to consume each of the outstanding return probe instances (feedback
            on my original RFC after Ananth cut a patch), and also added the
            arch_init() function for arch-specific initialization.  I have
            cross-compiled but have not tested this on a PPC64 machine.
      
      Changes include:
       * Addition of kretprobe_trampoline, a dummy function for instrumented
         functions to return to and for the return probe infrastructure to
         place a kprobe on, gaining control so that the return probe handler
         can be called and the instruction pointer can be moved back to the
         original return address (a simplified model of this flow follows
         this entry).
       * Addition of arch_init(), allowing a kprobe to be registered on
         kretprobe_trampoline.
       * Addition of trampoline_probe_handler(), used as the pre_handler for
         the kprobe inserted on kretprobe_trampoline.  This is the function
         that handles the details of calling the return probe handler and
         returning control to the original return address.
       * Addition of arch_prepare_kretprobe(), set up as the pre_handler for
         a kprobe registered at the beginning of the target function by
         kernel/kprobes.c, so that a return probe instance can be set up when
         a caller enters the target function.  (A return probe instance
         contains all the information trampoline_probe_handler needs to do
         its job.)
       * Hooks added to the exit path of a task so that we can clean up any
         left-over return probe instances (i.e. if a task dies inside a
         targeted function, the instance that was reserved at function entry
         is never consumed by a return, so we mark it as unused).
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
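
      A simplified, self-contained model of the flow listed above; the
      structures and names below (retprobe_instance, prepare_retprobe,
      trampoline_handler, TRAMPOLINE_ADDR) are illustrative counterparts of
      the kernel's kretprobe_instance, arch_prepare_kretprobe() and
      trampoline_probe_handler(), not the actual implementation, and the
      ppc64 link register is modelled as an ordinary variable.

      #include <stdint.h>
      #include <stdio.h>

      #define MAX_INSTANCES 4

      /* One "return probe instance": everything needed to get back home. */
      struct retprobe_instance {
          uintptr_t orig_ret_addr;   /* where the probed function really returns */
          int       in_use;
      };

      static struct retprobe_instance pool[MAX_INSTANCES];

      /* Stand-in for the address of kretprobe_trampoline, the dummy target. */
      #define TRAMPOLINE_ADDR ((uintptr_t)0xdead0000u)

      /* Model of arch_prepare_kretprobe(): save the real return address (the
       * link register on ppc64) and divert the return to the trampoline. */
      static struct retprobe_instance *prepare_retprobe(uintptr_t *ret_addr_slot)
      {
          for (int i = 0; i < MAX_INSTANCES; i++) {
              if (!pool[i].in_use) {
                  pool[i].in_use = 1;
                  pool[i].orig_ret_addr = *ret_addr_slot;
                  *ret_addr_slot = TRAMPOLINE_ADDR;
                  return &pool[i];
              }
          }
          return NULL;   /* no free instance: this return is not probed */
      }

      /* Model of trampoline_probe_handler(): consume the instance and report
       * where execution should resume (the saved original return address). */
      static uintptr_t trampoline_handler(struct retprobe_instance *ri)
      {
          uintptr_t resume = ri->orig_ret_addr;
          ri->in_use = 0;            /* recycle the instance */
          return resume;
      }

      int main(void)
      {
          uintptr_t lr = 0x1000;     /* pretend saved return address */
          struct retprobe_instance *ri = prepare_retprobe(&lr);

          /* The probed function now "returns" into the trampoline ... */
          printf("return diverted to %#lx, will resume at %#lx\n",
                 (unsigned long)lr, (unsigned long)trampoline_handler(ri));
          return 0;
      }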
    • [PATCH] kprobes: fix single-step out of line - take2 · 9ec4b1f3
      Committed by Ananth N Mavinakayanahalli
      Now that PPC64 has no-execute support, here is a second try at fixing
      single-stepping out of line during kprobe execution.  Kprobes on x86_64
      already solved this problem by allocating an executable page and using
      it as the scratch area for stepping out of line; reuse that approach (a
      userspace sketch of the idea follows this entry).
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
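
      A userspace sketch of the scratch-area idea being reused here, assuming
      mmap() with PROT_EXEC is available; the sizes and names (SLOT_SIZE,
      PAGE_BYTES, get_slot) are illustrative, whereas the kernel version sizes
      slots by MAX_INSN_SIZE and manages them via get_insn_slot()/free_insn_slot().

      #include <stdio.h>
      #include <sys/mman.h>

      #define SLOT_SIZE  16          /* max instruction size, illustrative */
      #define PAGE_BYTES 4096

      static unsigned char *scratch; /* the executable out-of-line area */
      static int next_slot;

      /* Allocate one executable page to hold the single-step slots. */
      static int init_scratch(void)
      {
          scratch = mmap(NULL, PAGE_BYTES, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          return scratch == MAP_FAILED ? -1 : 0;
      }

      /* Hand out the next unused slot inside the executable page. */
      static unsigned char *get_slot(void)
      {
          if (next_slot >= PAGE_BYTES / SLOT_SIZE)
              return NULL;           /* page is full */
          return scratch + SLOT_SIZE * next_slot++;
      }

      int main(void)
      {
          if (init_scratch())
              return 1;
          unsigned char *a = get_slot();
          unsigned char *b = get_slot();
          printf("slot 0 at %p, slot 1 at %p\n", (void *)a, (void *)b);
          return 0;
      }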
  23. 24 June 2005, 2 commits
    • [PATCH] kprobes: Temporary disarming of reentrant probe for ppc64 · 42cc2060
      Committed by Prasanna S Panchamukhi
      This patch includes the ppc64 architecture-specific changes to support
      temporary disarming of reentrant probes.
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Move kprobe [dis]arming into arch specific code · 7e1048b1
      Committed by Rusty Lynch
      The architecture-independent code of the current kprobes implementation
      arms and disarms kprobes at registration time.  The problem is that this
      code assumes arming and disarming are just a simple write of some magic
      value to an address.  This is problematic for ia64, where instructions
      look more like structures and we cannot insert breakpoints by just doing
      something like:
      
      *p->addr = BREAKPOINT_INSTRUCTION;
      
      The following patch to 2.6.12-rc4-mm2 adds two new architecture-dependent
      functions:
      
           * void arch_arm_kprobe(struct kprobe *p)
           * void arch_disarm_kprobe(struct kprobe *p)
      
      and then adds these functions for each of the architectures that already
      implement kprobes (sparc64/ppc64/i386/x86_64).  A minimal model of this
      split follows this entry.
      
      I thought arch_[dis]arm_kprobe was the most descriptive of what is really
      happening, but each of the architectures already had a disarm_kprobe()
      function that really meant "disarm and do whatever other clean-up is
      needed when you stumble across a recursive kprobe."  So I took the
      liberty of changing the code that was calling disarm_kprobe() to call
      arch_disarm_kprobe() instead, and doing the clean-up in the block of
      code that deals with the recursive kprobe case.
      
      So far this patch has been tested on i386, x86_64, and ppc64, but it
      still needs to be tested on sparc64.
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
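
      A minimal model of the split described above, with illustrative types
      and values (the one-byte kprobe_opcode_t and the 0xcc trap are
      i386-like choices, and this struct kprobe is a stand-in, not the
      kernel's): the generic registration path only ever calls
      arch_arm_kprobe()/arch_disarm_kprobe(), and each architecture decides
      what writing a breakpoint actually means.

      typedef unsigned char kprobe_opcode_t;   /* one byte on i386-like archs */
      #define BREAKPOINT_INSTRUCTION 0xcc      /* int3-style trap, illustrative */

      struct kprobe {
          kprobe_opcode_t *addr;               /* probed instruction address */
          kprobe_opcode_t  opcode;             /* saved original instruction */
      };

      /* On an i386-like arch, arming really is a single opcode store ... */
      static void arch_arm_kprobe(struct kprobe *p)
      {
          p->opcode = *p->addr;
          *p->addr  = BREAKPOINT_INSTRUCTION;
      }

      /* ... and disarming restores whatever the arch saved when arming. */
      static void arch_disarm_kprobe(struct kprobe *p)
      {
          *p->addr = p->opcode;
      }

      int main(void)
      {
          kprobe_opcode_t text = 0x90;         /* pretend "nop" at probe site */
          struct kprobe p = { .addr = &text };

          arch_arm_kprobe(&p);                 /* generic code just calls this */
          arch_disarm_kprobe(&p);              /* and this, on unregister */
          return text == 0x90 ? 0 : 1;
      }

      On ia64 the same two hooks would instead rewrite the relevant slot of an
      instruction bundle, which is exactly why the store cannot live in
      generic code.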
  24. 09 June 2005, 3 commits
  25. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!