1. 08 Dec 2006 (1 commit)
    • [PATCH] kprobes: enable booster on the preemptible kernel · b4c6c34a
      Masami Hiramatsu committed
      When we are unregistering a kprobe-booster, we can't release its
      instruction buffer immediately on a preemptible kernel, because some
      processes might be preempted on the buffer.  The freeze_processes() and
      thaw_processes() functions can flush most processes out of the buffer,
      but some non-frozen threads carrying the PF_NOFREEZE flag remain.  If
      those threads are sleeping (not preempted) at a known place outside the
      buffer, we can ensure that freeing is safe.
      
      However, this check routine takes a long time to run.  So, this
      patch introduces a garbage collection mechanism for insn_slots.  It also
      adds a "dirty" flag to free_insn_slot() for efficiency.
      
      The "clean" instruction slots (dirty flag is cleared) are released
      immediately.  But the "dirty" slots which are used by boosted kprobes, are
      marked as garbages.  collect_garbage_slots() will be invoked to release
      "dirty" slots if there are more than INSNS_PER_PAGE garbage slots or if
      there are no unused slots.
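
      A simplified sketch of the dirty-slot bookkeeping (structure layout and
      helper signatures are approximations, not the exact kernel code):

        #define INSNS_PER_PAGE 64   /* slots per executable page */

        struct kprobe_insn_page {
                unsigned char *insns;            /* one executable page of slots */
                char slot_used[INSNS_PER_PAGE];  /* 0 = free, 1 = used, 2 = dirty */
                int nused;
                int ngarbage;                    /* dirty slots awaiting collection */
        };

        static int kprobe_garbage_slots;
        static void collect_garbage_slots(void); /* walks the pages and frees dirty
                                                    slots once no task can be on them;
                                                    also called when no free slot is
                                                    left at allocation time */

        /* Called on unregistration; "dirty" means a boosted kprobe used the
           slot, so a preempted task may still be executing inside it. */
        static void free_insn_slot(struct kprobe_insn_page *kip, int idx, int dirty)
        {
                if (dirty) {
                        kip->slot_used[idx] = 2;          /* defer: mark as garbage */
                        kip->ngarbage++;
                        if (++kprobe_garbage_slots > INSNS_PER_PAGE)
                                collect_garbage_slots();  /* batch reclaim */
                } else {
                        kip->slot_used[idx] = 0;          /* clean: release at once */
                        kip->nused--;
                }
        }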
      
      Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: "bibo,mao" <bibo.mao@intel.com>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Yumiko Sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi Oshima <soshima@redhat.com>
      Cc: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 02 Oct 2006 (2 commits)
  3. 26 Apr 2006 (1 commit)
  4. 23 Mar 2006 (1 commit)
  5. 12 Jan 2006 (1 commit)
    • [PATCH] kprobes: fix unloading of self probed module · df019b1d
      Keshavamurthy Anil S committed
      When a kprobes module is written in such a way that probes are inserted on
      itself, unloading that module was not possible due to reference
      counting on the same module.
      
      The patch below adds a check and increments the module refcount only if
      the probed module is not the module registering the probe.
      
      We need to allow modules to probe themselves for kprobes performance
      measurements; a sketch of the check follows below.
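
      Roughly, the check in register_kprobe() looks like the sketch below
      (simplified; field and helper names approximate the actual patch):

        /* called_from is the return address of the caller of register_kprobe(),
           so module_text_address() can tell which module is registering. */
        struct module *probed_mod, *calling_mod;

        p->mod_refcounted = 0;
        probed_mod = module_text_address((unsigned long) p->addr);
        if (probed_mod) {
                calling_mod = module_text_address(called_from);
                /* Allow a module to probe itself without taking a reference,
                   so a self-probing module can still be unloaded. */
                if (calling_mod && calling_mod != probed_mod) {
                        if (unlikely(!try_module_get(probed_mod)))
                                return -EINVAL;
                        p->mod_refcounted = 1;
                }
        }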
      
      This patch has been tested on several x86_64, ppc64 and IA64 machines.
      
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 11 Jan 2006 (3 commits)
  7. 13 Dec 2005 (2 commits)
  8. 07 Nov 2005 (3 commits)
  9. 08 Sep 2005 (1 commit)
    • [PATCH] Kprobes: prevent possible race conditions generic · d0aaff97
      Prasanna S Panchamukhi committed
      There are possible race conditions if probes are placed on routines within the
      kprobes files and routines used by kprobes.  For example, if you put a probe
      on the get_kprobe() routine, the system can hang while inserting a probe on any
      routine such as do_fork().  While inserting the probe on do_fork(), the
      register_kprobe() routine grabs the kprobes spinlock and executes the
      get_kprobe() routine; to handle the probe on get_kprobe(), kprobe_handler()
      gets executed and tries to grab the kprobes spinlock, and spins forever.  This
      patch avoids such race conditions by preventing probes on routines
      within the kprobes files and routines used by kprobes.
      
      I have modified the patches, as per Andi Kleen's suggestion, to move kprobes
      routines and other routines used by kprobes to a separate section,
      .kprobes.text.
      
      Also moved the page fault and exception handlers, and the general protection
      fault handler, to the .kprobes.text section.
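
      A sketch of the mechanism, simplified from the mainline code (the
      __kprobes section tag plus a bounds check against the new section):

        /* Functions that kprobes itself relies on are tagged __kprobes, which
           places them in a dedicated linker section. */
        #define __kprobes __attribute__((__section__(".kprobes.text")))

        /* Section bounds exported by the linker script. */
        extern char __kprobes_text_start[], __kprobes_text_end[];

        static int in_kprobes_functions(unsigned long addr)
        {
                return addr >= (unsigned long) __kprobes_text_start &&
                       addr <  (unsigned long) __kprobes_text_end;
        }

        /* register_kprobe() then rejects such addresses, e.g.:
           if (in_kprobes_functions((unsigned long) p->addr))
                   return -EINVAL;
           Routines such as get_kprobe() are defined with the __kprobes tag so
           they can never be probed. */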
      
      These patches have been tested on the i386, x86_64 and ppc64 architectures,
      and also compiled on ia64 and sparc64.
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 06 Jul 2005 (1 commit)
  11. 28 Jun 2005 (2 commits)
    • [PATCH] Return probe redesign: architecture independent changes · 802eae7c
      Rusty Lynch committed
      The following is the second version of the function return probe patches
      I sent out earlier this week.  Changes since my last submission include:
      
      * Fix in ppc64 code removing an unneeded call to re-enable preemption
      * Fix a build problem in ia64 when kprobes was turned off
      * Added another BUG_ON check to each of the architecture trampoline
        handlers
      
      My initial patch description ==>
      
       From my experiences with adding return probes to x86_64 and ia64, and the
      feedback on LKML to those patches, I think we can simplify the design
      for return probes.
      
      The following patch tweaks the original design such that:
      
      * Instead of storing the stack address in the return probe instance, the
        task pointer is stored.  This gives us all we need in order to:
          - find the correct return probe instance when we enter the trampoline
            (even if we are recursing)
          - find all left-over return probe instances when the task is going away
      
        This has the side effect of simplifying the implementation, since more of
        the work can be done in kernel/kprobes.c now that architecture-specific
        knowledge of the stack layout is no longer required.  Specifically, we no longer have:
      	- arch_get_kprobe_task()
      	- arch_kprobe_flush_task()
      	- get_rp_inst_tsk()
      	- get_rp_inst()
      	- trampoline_post_handler() <see next bullet>
      
      * Instead of splitting the return probe handling and cleanup logic across
        the pre and post trampoline handlers, all the work is pushed into the
        pre handler (trampoline_probe_handler), and we then skip single-stepping
        the original function.  In this case the original instruction to be
        single-stepped was just a NOP, so we can do without the extra interruption.
      
      The new flow of events to having a return probe handler execute when a target
      function exits is:
      
      * At system initialization time, a kprobe is inserted at the beginning of
        kretprobe_trampoline.  kernel/kprobes.c used to handle this on its own,
        but ia64 needed to do this a little differently (i.e. a function pointer
        is really a pointer to a structure containing the instruction pointer and
        a global pointer), so I added the notion of arch_init(), so that
        kernel/kprobes.c:init_kprobes() now allows architecture specific
        initialization by calling arch_init() before exiting.  Each architecture
        now registers a kprobe on its own trampoline function.
      
      * register_kretprobe() will insert a kprobe at the beginning of the targeted
        function with the kprobe pre_handler set to arch_prepare_kretprobe
        (still no change)
      
      * When the target function is entered, the kprobe is fired, calling
        arch_prepare_kretprobe (still no change)
      
      * In arch_prepare_kretprobe() we try to get a free instance and if one is
        available then we fill out the instance with a pointer to the return probe,
        the original return address, and a pointer to the task structure (instead
        of the stack address.)  Just like before we change the return address
        to the trampoline function and mark the instance as used.
      
        If multiple return probes are registered for a given target function,
        then arch_prepare_kretprobe() will get called multiple times for the same
        task (since our kprobe implementation is able to handle multiple kprobes
        at the same address.)  Past the first call to arch_prepare_kretprobe,
        we end up with the original address stored in the return probe instance
        pointing to our trampoline function. (This is a significant difference
        from the original arch_prepare_kretprobe design.)
      
      * Target function executes like normal and then returns to kretprobe_trampoline.
      
      * The kprobe inserted on the first instruction of kretprobe_trampoline fires
        and calls trampoline_probe_handler() (no change here)
      
      * trampoline_probe_handler() consumes each of the instances associated with
        the current task by calling the registered handler function and marking
        the instance as unused, until it finds an instance whose return address is
        different from the trampoline function.
      
        (change similar to my previous ia64 RFC)
      
      * If the task is killed with some left-over return probe instances (meaning
        that a target function was entered but never returned), then we just
        free any instances associated with the task.  (Not much different, other
        than that we can handle this without calling architecture-specific functions.)
      
        There is a known problem that this patch does not yet solve: registering
        a return probe on flush_old_exec or flush_thread will put us in a bad
        state.  Most likely the best way to handle this is to not allow
        registering return probes on these two functions.
      
        (Significant change)
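
      For reference, a simplified sketch of the i386-style trampoline handler
      implementing the flow above (locking and bookkeeping omitted):

        static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
        {
                struct kretprobe_instance *ri;
                struct hlist_node *node, *tmp;
                struct hlist_head *head = kretprobe_inst_table_head(current);
                unsigned long orig_ret_address = 0;
                unsigned long trampoline_address =
                        (unsigned long) &kretprobe_trampoline;

                /* Walk the per-task list of instances, newest first. */
                hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
                        if (ri->task != current)
                                continue;               /* hash collision: not ours */

                        if (ri->rp && ri->rp->handler)
                                ri->rp->handler(ri, regs);

                        orig_ret_address = (unsigned long) ri->ret_addr;
                        recycle_rp_inst(ri);            /* mark instance unused */

                        if (orig_ret_address != trampoline_address)
                                break;                  /* found the real caller */
                }

                regs->eip = orig_ret_address;           /* return to the real caller */
                return 1;                               /* skip single-stepping */
        }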
      
      This patch series applies to the 2.6.12-rc6-mm1 kernel, and provides:
        * kernel/kprobes.c changes
        * i386 patch of existing return probes implementation
        * x86_64 patch of existing return probe implementation
        * ia64 implementation
        * ppc64 implementation (provided by Ananth)
      
      This patch implements the architecture independent changes for a reworking
      of the kprobes-based function return probe design. Changes include:
      
        * Removing functions for querying a return probe instance off a stack address
        * Removing the stack_addr field from the kretprobe_instance definition,
          and adding a task pointer
        * Adding architecture specific initialization via arch_init()
        * Removing extern definitions for the architecture trampoline functions
          (this isn't needed anymore since the architecture handles the
           initialization of the kprobe in the return probe trampoline function.)
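
      For reference, a sketch of the reworked instance and the new architecture
      hook (simplified; comments are mine):

        struct kretprobe_instance {
                struct hlist_node uflist;    /* on the kretprobe's free/used list */
                struct hlist_node hlist;     /* on the per-task hash bucket */
                struct kretprobe *rp;
                kprobe_opcode_t *ret_addr;   /* the caller's real return address */
                struct task_struct *task;    /* replaces the old stack_addr field */
        };

        /* Called from init_kprobes(); each architecture registers a kprobe on
           its own kretprobe_trampoline here. */
        void __init arch_init(void);
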
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: fix single-step out of line - take2 · 9ec4b1f3
      Ananth N Mavinakayanahalli committed
      Now that PPC64 has no-execute support, here is a second try at fixing
      single-stepping out of line during kprobe execution.  Kprobes on x86_64
      already solved this problem by allocating an executable page and using it
      as the scratch area for stepping out of line.  Reuse that.
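
      A simplified sketch of the idea on the slot-allocating architectures
      (helper names approximate the x86_64 code):

        int arch_prepare_kprobe(struct kprobe *p)
        {
                /* The slot lives in a page allocated executable, so stepping
                   out of line still works with no-execute protection on. */
                p->ainsn.insn = get_insn_slot();
                if (!p->ainsn.insn)
                        return -ENOMEM;

                memcpy(p->ainsn.insn, p->addr,
                       MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
                return 0;
        }
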
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 24 Jun 2005 (4 commits)
    • [PATCH] kprobes: Temporary disarming of reentrant probe · ea32c65c
      Prasanna S Panchamukhi committed
      In situations where a kprobes handler calls a routine which has a probe on it,
      kprobe_handler() disarms the new probe forever.  This patch removes that
      limitation by temporarily disarming the new probe.  When another
      probe hits while handling the old probe, kprobe_handler() saves the previous
      kprobes state and handles the new probe without calling the newly registered
      kprobes handlers.  kprobe_post_handler() then restores the previous kprobes
      state and normal execution continues.
      
      However, on the x86_64 architecture, re-entrancy is provided only through
      the pre_handler().  If a routine with a probe on it is referenced from a
      post_handler(), then the probes on that routine are disarmed forever, since
      the exception stack gets changed after the processor single-steps the
      instruction of the new probe.
      
      This patch includes generic changes to support temporary disarming on
      reentrancy of probes.
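
      A simplified sketch of the reentrancy path in the breakpoint handler
      (helper names approximate the i386 code of this era):

        if (kprobe_running()) {
                p = get_kprobe(addr);
                if (p) {
                        /* Reentered the handler: another probe was hit while
                           handling one.  Save the current kprobe state and
                           single-step the new probe's instruction without
                           calling its registered handlers; the post handler
                           restores the saved state afterwards. */
                        save_previous_kprobe();
                        set_current_kprobe(p);
                        p->nmissed++;
                        prepare_singlestep(p, regs);
                        kprobe_status = KPROBE_REENTER;
                        return 1;
                }
        }
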
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: moves lock-unlock to non-arch kprobe_flush_task · 0aa55e4d
      Hien Nguyen committed
      This patch moves the lock/unlock of the arch-specific kprobe_flush_task()
      to the non-arch-specific kprobe_flush_task().
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Move kprobe [dis]arming into arch specific code · 7e1048b1
      Rusty Lynch committed
      The architecture-independent code of the current kprobes implementation is
      arming and disarming kprobes at registration time.  The problem is that this
      code assumes that arming and disarming is just a simple write
      of some magic value to an address.  This is problematic for ia64, where
      instructions look more like structures, and we cannot insert break points
      by just doing something like:
      
      *p->addr = BREAKPOINT_INSTRUCTION;
      
      The following patch to 2.6.12-rc4-mm2 adds two new architecture dependent
      functions:
      
           * void arch_arm_kprobe(struct kprobe *p)
           * void arch_disarm_kprobe(struct kprobe *p)
      
      and then adds the new functions for each of the architectures that already
      implement kprobes (sparc64/ppc64/i386/x86_64).
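
      On architectures where arming really is a single write, the new hooks stay
      trivial; a simplified sketch of the i386 versions:

        void arch_arm_kprobe(struct kprobe *p)
        {
                /* p->opcode holds the original byte saved at registration. */
                *p->addr = BREAKPOINT_INSTRUCTION;      /* int3, 0xcc */
                flush_icache_range((unsigned long) p->addr,
                                   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
        }

        void arch_disarm_kprobe(struct kprobe *p)
        {
                *p->addr = p->opcode;                   /* restore original byte */
                flush_icache_range((unsigned long) p->addr,
                                   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
        }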
      
      I thought arch_[dis]arm_kprobe was the most descriptive of what was really
      happening, but each of the architectures already had a disarm_kprobe()
      function that was really a "disarm and do some other clean-up items as
      needed when you stumble across a recursive kprobe." So...  I took the
      liberty of changing the code that was calling disarm_kprobe() to call
      arch_disarm_kprobe(), and then do the cleanup in the block of code dealing
      with the recursive kprobe case.
      
      So far this patch has been tested on i386, x86_64, and ppc64, but it still
      needs to be tested on sparc64.
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] kprobes: function-return probes · b94cce92
      Hien Nguyen committed
      This patch adds function-return probes to kprobes for the i386
      architecture.  This enables you to establish a handler to be run when a
      function returns.
      
      1. API
      
      Two new functions are added to kprobes:
      
      	int register_kretprobe(struct kretprobe *rp);
      	void unregister_kretprobe(struct kretprobe *rp);
      
      2. Registration and unregistration
      
      2.1 Register
      
        To register a function-return probe, the user populates the following
        fields in a kretprobe object and calls register_kretprobe() with the
        kretprobe address as an argument:
      
        kp.addr - the function's address
      
        handler - this function is run after the ret instruction executes, but
        before control returns to the return address in the caller.
      
        maxactive - The maximum number of instances of the probed function that
        can be active concurrently.  For example, if the function is non-
        recursive and is called with a spinlock or mutex held, maxactive = 1
        should be enough.  If the function is non-recursive and can never
        relinquish the CPU (e.g., via a semaphore or preemption), NR_CPUS should
        be enough.  maxactive is used to determine how many kretprobe_instance
        objects to allocate for this particular probed function.  If maxactive <=
        0, it is set to a default value (if CONFIG_PREEMPT maxactive=max(10, 2 *
        NR_CPUS) else maxactive=NR_CPUS)
      
        For example:
      
          struct kretprobe rp;
          rp.kp.addr = /* entrypoint address */
          rp.handler = /*return probe handler */
          rp.maxactive = /* e.g., 1 or NR_CPUS or 0, see the above explanation */
          register_kretprobe(&rp);
      
        The following field may also be of interest:
      
        nmissed - Initialized to zero when the function-return probe is
        registered, and incremented every time the probed function is entered but
        there is no kretprobe_instance object available for establishing the
        function-return probe (i.e., because maxactive was set too low).
      
      2.2 Unregister
      
        To unregister a function-return probe, the user calls
        unregister_kretprobe() with the same kretprobe object as registered
        previously.  If a probed function is running when the return probe is
        unregistered, the function will return as expected, but the handler won't
        be run.
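
        A minimal, self-contained example module (hypothetical names throughout;
        it probes one of its own functions so the target address is known):

          #include <linux/module.h>
          #include <linux/kernel.h>
          #include <linux/kprobes.h>

          /* A function defined in this module, giving the example a safe,
             known target to probe. */
          static noinline int my_target(int x)
          {
                  return x * 2;
          }

          static int my_ret_handler(struct kretprobe_instance *ri,
                                    struct pt_regs *regs)
          {
                  printk(KERN_INFO "my_target returned to %p\n", ri->ret_addr);
                  return 0;
          }

          static struct kretprobe my_rp = {
                  .handler   = my_ret_handler,
                  .maxactive = 1,                 /* my_target is non-recursive */
          };

          static int __init kret_example_init(void)
          {
                  int ret;

                  my_rp.kp.addr = (kprobe_opcode_t *) my_target;
                  ret = register_kretprobe(&my_rp);
                  if (ret < 0)
                          return ret;

                  my_target(21);                  /* trigger the return probe once */
                  return 0;
          }

          static void __exit kret_example_exit(void)
          {
                  unregister_kretprobe(&my_rp);
          }

          module_init(kret_example_init);
          module_exit(kret_example_exit);
          MODULE_LICENSE("GPL");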
      
      3. Limitations
      
      3.1 This patch supports only the i386 architecture, but patches for
          x86_64 and ppc64 are anticipated soon.
      
      3.2 Return probes operate by replacing the return address on the stack
          (or in a known register, such as the lr register on ppc).  This may
          cause __builtin_return_address(0), when invoked from the return-probed
          function, to return the address of the return-probe trampoline.
      
      3.3 This implementation uses the "Multiprobes at an address" feature in
          2.6.12-rc3-mm3.
      
      3.4 Due to a limitation in multi-probes, you cannot currently establish
          a return probe and a jprobe on the same function.  A patch to remove
          this limitation is being tested.
      
      This feature is required by SystemTap (http://sourceware.org/systemtap),
      and reflects ideas contributed by several SystemTap developers, including
      Will Cohen and Ananth Mavinakayanahalli.
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Frederik Deweerdt <frederik.deweerdt@laposte.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  13. 06 May 2005 (1 commit)
  14. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds committed
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!