1. 23 March 2011, 1 commit
    • tracing: Fix set_ftrace_filter probe function display · 1106b699
      Authored by Jiri Olsa
      If one or more function probes (like traceon) are enabled,
      and there is no other function filter, the first probe
      function is skipped (which one depends on its position in the hash).
      
      $ echo sys_open:traceon sys_close:traceon > ./set_ftrace_filter
      $ cat set_ftrace_filter
      #### all functions enabled ####
      sys_close:traceon:unlimited
      $
      
      The reason was that, in the case of no other function filter,
      func_pos was not properly updated before calling t_hash_start().
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1297874134-7008-1-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 12 February 2011, 1 commit
    • ftrace: Fix memory leak with function graph and cpu hotplug · 868baf07
      Authored by Steven Rostedt
      When the function graph tracer starts, it needs to make a special
      stack for each task to save the real return values of the tasks.
      All running tasks have this stack created, as well as any new
      tasks.
      
      On CPU hotplug, the new idle task will allocate a stack as well
      when init_idle() is called. The problem is that CPU hotplug does
      not create a new idle_task. Instead it uses the idle task that
      existed when the CPU went down.
      
      ftrace_graph_init_task() will add a new ret_stack to the task
      that is given to it. Because a clone carries the stack of its
      parent, the function does not check whether the task's ret_stack
      is already allocated. When the CPU hotplug code starts a CPU up
      again, it will therefore allocate a new stack even though one
      already existed for it.
      
      The solution is to treat the idle task specially. In fact, the
      function_graph code already does, just not at init_idle().
      Instead of using ftrace_graph_init_task() for the idle task
      (that function expects the task to be a clone), have a separate
      ftrace_graph_init_idle_task(). Also, create a per-CPU ret_stack
      that is used by the idle task. When ftrace_graph_init_idle_task()
      is called, it checks whether the idle task's ret_stack is NULL
      and, if it is, assigns it the per-CPU ret_stack.
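
      A sketch of the shape of that fix (simplified; graph_init_task() here
      stands for whatever helper attaches a stack to a task without
      reallocating it):

      	/* one ret_stack per CPU, reused each time the idle task is
      	 * re-initialized on CPU up */
      	static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);

      	void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
      	{
      		t->curr_ret_stack = -1;
      		if (ftrace_graph_active) {
      			struct ftrace_ret_stack *ret_stack;

      			ret_stack = per_cpu(idle_ret_stack, cpu);
      			if (!ret_stack) {
      				ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH *
      					sizeof(struct ftrace_ret_stack),
      					GFP_KERNEL);
      				if (!ret_stack)
      					return;
      				per_cpu(idle_ret_stack, cpu) = ret_stack;
      			}
      			graph_init_task(t, ret_stack);
      		}
      	}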
      Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stable Tree <stable@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 18 October 2010, 1 commit
  4. 15 October 2010, 1 commit
    • llseek: automatically add .llseek fop · 6038f373
      Authored by Arnd Bergmann
      All file_operations should get a .llseek operation so we can make
      nonseekable_open the default for future file operations without a
      .llseek pointer.
      
      The three cases that we can automatically detect are no_llseek, seq_lseek
      and default_llseek. For cases where we can automatically prove that
      the file offset is always ignored, we use noop_llseek, which maintains
      the current behavior of not returning an error from a seek.
      
      New drivers should normally not use noop_llseek but instead use no_llseek
      and call nonseekable_open at open time.  Existing drivers can be converted
      to do the same when the maintainer knows for certain that no user code
      relies on calling seek on the device file.
      
      The generated code is often incorrectly indented and right now contains
      comments that clarify for each added line why a specific variant was
      chosen. In the version that gets submitted upstream, the comments will
      be gone and I will manually fix the indentation, because there does not
      seem to be a way to do that using coccinelle.
      
      Some amount of new code is currently sitting in linux-next that should get
      the same modifications, which I will do at the end of the merge window.
      
      Many thanks to Julia Lawall for helping me learn to write a semantic
      patch that does all this.
      
      ===== begin semantic patch =====
      // This adds an llseek= method to all file operations,
      // as a preparation for making no_llseek the default.
      //
      // The rules are
      // - use no_llseek explicitly if we do nonseekable_open
      // - use seq_lseek for sequential files
      // - use default_llseek if we know we access f_pos
      // - use noop_llseek if we know we don't access f_pos,
      //   but we still want to allow users to call lseek
      //
      @ open1 exists @
      identifier nested_open;
      @@
      nested_open(...)
      {
      <+...
      nonseekable_open(...)
      ...+>
      }
      
      @ open exists@
      identifier open_f;
      identifier i, f;
      identifier open1.nested_open;
      @@
      int open_f(struct inode *i, struct file *f)
      {
      <+...
      (
      nonseekable_open(...)
      |
      nested_open(...)
      )
      ...+>
      }
      
      @ read disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      <+...
      (
         *off = E
      |
         *off += E
      |
         func(..., off, ...)
      |
         E = *off
      )
      ...+>
      }
      
      @ read_no_fpos disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ write @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      <+...
      (
        *off = E
      |
        *off += E
      |
        func(..., off, ...)
      |
        E = *off
      )
      ...+>
      }
      
      @ write_no_fpos @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ fops0 @
      identifier fops;
      @@
      struct file_operations fops = {
       ...
      };
      
      @ has_llseek depends on fops0 @
      identifier fops0.fops;
      identifier llseek_f;
      @@
      struct file_operations fops = {
      ...
       .llseek = llseek_f,
      ...
      };
      
      @ has_read depends on fops0 @
      identifier fops0.fops;
      identifier read_f;
      @@
      struct file_operations fops = {
      ...
       .read = read_f,
      ...
      };
      
      @ has_write depends on fops0 @
      identifier fops0.fops;
      identifier write_f;
      @@
      struct file_operations fops = {
      ...
       .write = write_f,
      ...
      };
      
      @ has_open depends on fops0 @
      identifier fops0.fops;
      identifier open_f;
      @@
      struct file_operations fops = {
      ...
       .open = open_f,
      ...
      };
      
      // use no_llseek if we call nonseekable_open
      ////////////////////////////////////////////
      @ nonseekable1 depends on !has_llseek && has_open @
      identifier fops0.fops;
      identifier nso ~= "nonseekable_open";
      @@
      struct file_operations fops = {
      ...  .open = nso, ...
      +.llseek = no_llseek, /* nonseekable */
      };
      
      @ nonseekable2 depends on !has_llseek @
      identifier fops0.fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...  .open = open_f, ...
      +.llseek = no_llseek, /* open uses nonseekable */
      };
      
      // use seq_lseek for sequential files
      /////////////////////////////////////
      @ seq depends on !has_llseek @
      identifier fops0.fops;
      identifier sr ~= "seq_read";
      @@
      struct file_operations fops = {
      ...  .read = sr, ...
      +.llseek = seq_lseek, /* we have seq_read */
      };
      
      // use default_llseek if there is a readdir
      ///////////////////////////////////////////
      @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier readdir_e;
      @@
      // any other fop is used that changes pos
      struct file_operations fops = {
      ... .readdir = readdir_e, ...
      +.llseek = default_llseek, /* readdir is present */
      };
      
      // use default_llseek if at least one of read/write touches f_pos
      /////////////////////////////////////////////////////////////////
      @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read.read_f;
      @@
      // read fops use offset
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = default_llseek, /* read accesses f_pos */
      };
      
      @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ... .write = write_f, ...
      +	.llseek = default_llseek, /* write accesses f_pos */
      };
      
      // Use noop_llseek if neither read nor write accesses f_pos
      ///////////////////////////////////////////////////////////
      
      @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      identifier write_no_fpos.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ...
       .write = write_f,
       .read = read_f,
      ...
      +.llseek = noop_llseek, /* read and write both use no f_pos */
      };
      
      @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write_no_fpos.write_f;
      @@
      struct file_operations fops = {
      ... .write = write_f, ...
      +.llseek = noop_llseek, /* write uses no f_pos */
      };
      
      @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      @@
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = noop_llseek, /* read uses no f_pos */
      };
      
      @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      @@
      struct file_operations fops = {
      ...
      +.llseek = noop_llseek, /* no read or write fn */
      };
      ===== End semantic patch =====
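
      As an illustration of the result, a driver whose open method calls
      nonseekable_open() would be changed like this (hypothetical driver
      names):

      	static const struct file_operations foo_fops = {
      		.owner  = THIS_MODULE,
      		.open   = foo_open,	/* calls nonseekable_open() */
      		.read   = foo_read,
      	+	.llseek = no_llseek,	/* added by the semantic patch */
      	};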
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
  5. 15 September 2010, 3 commits
    • tracing: Remove leftover FTRACE_ENABLE/DISABLE_MCOUNT enums · 79e406d7
      Authored by Steven Rostedt
      The enums for FTRACE_ENABLE_MCOUNT and FTRACE_DISABLE_MCOUNT were
      used as commands to ftrace_run_update_code(). But these commands
      were used by the old nasty ftrace daemon that has long been slain.
      
      This is a clean up patch to remove the references to these enums
      and simplify the code a little.
      Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix reading of set_ftrace_filter across lists · 57c072c7
      Authored by Steven Rostedt
      If we do:
      
       # cd /sys/kernel/debug
       # echo 'do_IRQ:traceon schedule:traceon sys_write:traceon' > \
          set_ftrace_filter
       # cat set_ftrace_filter
      
      We get the following output:
      
       #### all functions enabled ####
       sys_write:traceon:unlimited
       schedule:traceon:unlimited
       do_IRQ:traceon:unlimited
      
      This outputs two lists. One shows that all functions are
      currently enabled for function tracing; the other lists three
      probed functions, which happen to have 'traceon' as their commands.
      
      Currently, when reading the first list (functions enabled), the
      seq_file code will receive a "NULL" from the t_next() function,
      causing it to exit early. This makes read() from userspace stop
      reading at this border. Although read is allowed to do this,
      some (broken) applications might consider this an end of file and
      stop early.
      
      This patch adds the start of the second list to t_next() when it
      finishes the first list. It is a simple change and gives the
      set_ftrace_filter file nicer reading ability.
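
      The shape of the fix in t_next() is roughly the following (a sketch,
      not the complete function):

      	/* end of the function records: hand off to the probe hash so
      	 * userspace sees one continuous stream */
      	if (!rec)
      		return t_hash_start(m, pos);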
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Keep track of set_ftrace_filter position and allow lseek again · 98c4fd04
      Authored by Steven Rostedt
      This patch keeps track of the index within the elements of
      set_ftrace_filter and if the position goes backwards, it nicely
      resets and starts from the beginning again.
      
      This allows for lseek and pread to work properly now.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 14 September 2010, 2 commits
    • tracing: Replace typecasted void pointer in set_ftrace_filter code · 4aeb6967
      Authored by Steven Rostedt
      The set_ftrace_filter uses seq_file and reads from two lists. The
      pointer returned by t_next() can either be of type struct dyn_ftrace
      or struct ftrace_func_probe. If there is a bug (there was one)
      the wrong pointer may be used and the reference can cause an oops.
      
      This patch makes t_next() and friends return only the iterator
      structure, which now holds pointers of type struct dyn_ftrace and
      struct ftrace_func_probe. t_show() can then test whether each pointer
      is NULL, and if a pointer exists, it is guaranteed to be of the correct type.
      
      Now if there's a bug, only wrong data will be shown but not an oops.
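
      A sketch of the resulting iterator (member names illustrative, reduced
      to the relevant fields):

      	struct ftrace_iterator {
      		loff_t				pos;
      		struct dyn_ftrace		*func;	/* function list entry */
      		struct ftrace_func_probe	*probe;	/* probe hash entry */
      	};

      t_show() can then test iter->func and iter->probe for NULL instead of
      guessing the type behind a void pointer.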
      
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Do not reset *pos in set_ftrace_filter · 2bccfffd
      Authored by Steven Rostedt
      After the filtered functions are read, the probed functions are read
      from the hash in set_ftrace_filter. When the hashed probed functions
      are read, the *pos passed in is reset. Instead of modifying the pos
      given to the read function, just record the pos where the filtered
      functions ended and subtract from that.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 10 September 2010, 1 commit
  8. 09 September 2010, 1 commit
    • tracing: Do not allow llseek to set_ftrace_filter · 9c55cb12
      Authored by Steven Rostedt
      Reading the file set_ftrace_filter does three things.
      
      1) shows whether or not filters are set for the function tracer
      2) shows what functions are set for the function tracer
      3) shows what triggers are set on any functions
      
      3 is independent from 1 and 2.
      
      The way this file currently works is that it is a state machine,
      and as you read it, it may change state. But this assumption breaks
      when you use lseek() on the file. The state machine gets out of sync
      and t_show() may use the wrong pointer and cause a kernel oops.
      
      Luckily, this will only kill the app that does the lseek, but the app
      dies while holding a mutex. This prevents anyone else from using the
      set_ftrace_filter file (or any other function tracing file, for that matter).

      A real fix would be to rewrite the code, but that is too much for
      a -rc release or stable. This patch simply disables llseek on the
      set_ftrace_filter file for now; the proper fix can be done in the
      next major release.
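
      In essence, the change swaps the llseek operation in the file's fops
      (a sketch; the surrounding member names are illustrative):

      	static const struct file_operations ftrace_filter_fops = {
      		.open    = ftrace_filter_open,
      		.read    = seq_read,
      		.write   = ftrace_filter_write,
      		.llseek  = no_llseek,	/* seeking disabled until the rewrite */
      		.release = ftrace_regex_release,
      	};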
      Reported-by: Robert Swiecki <swiecki@google.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Eugene Teo <eugene@redhat.com>
      Cc: vendor-sec@lst.de
      Cc: <stable@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 01 September 2010, 1 commit
    • tracing: Fix a race in function profile · 3aaba20f
      Authored by Li Zefan
      If someone disables function_profile while we are reading
      trace_stat/functionX, we can trigger this:
      
      	divide error: 0000 [#1] PREEMPT SMP
      	...
      	EIP is at function_stat_show+0x90/0x230
      	...
      
      This fix just takes the ftrace_profile_lock and checks if
      rec->counter is 0. If it's 0, we know the profile buffer
      has been reset.
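
      The guarded computation in function_stat_show() then looks roughly
      like this (a sketch):

      	mutex_lock(&ftrace_profile_lock);

      	/* a concurrent reset zeroes the record; bail out before dividing */
      	if (!rec->counter)
      		goto out;

      	avg = rec->time;
      	do_div(avg, rec->counter);	/* now safe: counter != 0 */
       out:
      	mutex_unlock(&ftrace_profile_lock);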
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: stable@kernel.org
      LKML-Reference: <4C723644.4040708@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. 04 June 2010, 1 commit
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Authored by Steven Rostedt
      The ftrace_preempt_disable/enable functions were added to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions, which makes it easily susceptible to recursion.
      One area was preempt_enable(). This would call the scheduler, and
      the scheduler would call the function tracer, and loop
      (or so it was thought).
      
      ftrace_preempt_disable/enable was made to protect against recursion
      inside the scheduler by storing the NEED_RESCHED flag. If it was
      set before ftrace_preempt_disable(), ftrace_preempt_enable() would
      not call schedule, on the reasoning that if the flag was set before,
      the task would have already scheduled unless it was already in the scheduler.
      
      This worked fine except on SMP, where another task could set the
      NEED_RESCHED flag for a task on another CPU, and then kick off an
      IPI to trigger it. This could cause the NEED_RESCHED flag to be
      saved at ftrace_preempt_disable() but the IPI to arrive inside the
      preempt-disabled section. ftrace_preempt_enable() would then not
      call the scheduler, because the flag was already set before entering
      the section.

      This bug would cause a missed preemption check, resulting in longer
      latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule is completely annotated with notrace, the recursion
      is no longer an issue.
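
      For reference, the removed helpers implemented the pattern described
      above roughly like this:

      	static inline int ftrace_preempt_disable(void)
      	{
      		int resched = need_resched();	/* remember NEED_RESCHED */

      		preempt_disable_notrace();
      		return resched;
      	}

      	static inline void ftrace_preempt_enable(int resched)
      	{
      		if (resched)	/* flag was set on entry: skip schedule */
      			preempt_enable_no_resched_notrace();
      		else
      			preempt_enable_notrace();
      	}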
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 14 May 2010, 1 commit
    • tracing: Let tracepoints have data passed to tracepoint callbacks · 38516ab5
      Authored by Steven Rostedt
      This patch adds data to be passed to tracepoint callbacks.
      
      The created functions from DECLARE_TRACE() now need a mandatory data
      parameter. For example:
      
      DECLARE_TRACE(mytracepoint, int value, value)
      
      Will create the register function:
      
      int register_trace_mytracepoint((void(*)(void *data, int value))probe,
                                      void *data);
      
      As the first argument, all callbacks (probes) must take a (void *data)
      parameter. So a callback for the above tracepoint will look like:
      
      void myprobe(void *data, int value)
      {
      }
      
      The callback may choose to ignore the data parameter.
      
      This change allows callbacks to register a private data pointer along
      with the function probe.
      
      	void mycallback(void *data, int value);
      
      	register_trace_mytracepoint(mycallback, mydata);
      
      Then the mycallback() will receive the "mydata" as the first parameter
      before the args.
      
      A more detailed example:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        /* In the C file */
      
        DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        [...]
      
             trace_mytracepoint(status);
      
        /* In a file registering this tracepoint */
      
        int my_callback(void *data, int status)
        {
      	struct my_struct *my_data = data;
      	[...]
        }
      
        [...]
      	my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
      	init_my_data(my_data);
      	register_trace_mytracepoint(my_callback, my_data);
      
      The same callback can also be registered to the same tracepoint as long
      as the data registered is different. Note, the data must also be used
      to unregister the callback:
      
      	unregister_trace_mytracepoint(my_callback, my_data);
      
      Because of the data parameter, tracepoints declared this way cannot
      be declared without arguments. That is:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
      
      will cause an error.
      
      If no arguments are needed, a new macro can be used instead:
      
        DECLARE_TRACE_NOARGS(mytracepoint);
      
      Since there are no arguments, the proto and args fields are left out.
      
      This is part of a series to make the tracepoint footprint smaller:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint
      
      Again, this patch also increases the size of the kernel, but
      lays the groundwork for decreasing it.
      
       v5: Fixed net/core/drop_monitor.c to handle these updates.
      
       v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
           #ifdef CONFIG_TRACE_POINTS, since the two are the same in both
           cases. The __DECLARE_TRACE() is what changes.
           Thanks to Frederic Weisbecker for pointing this out.
      
       v3: Made all register_* functions require data to be passed and
           all callbacks to take a void * parameter as its first argument.
           This makes the calling functions comply with C standards.
      
           Also added more comments to the modifications of DECLARE_TRACE().
      
       v2: Made the DECLARE_TRACE() have the ability to pass arguments
           and added a new DECLARE_TRACE_NOARGS() for tracepoints that
           do not need any arguments.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 07 May 2010, 1 commit
  13. 28 April 2010, 2 commits
    • tracing: Fix sleep time function profiling · 37e44bc5
      Authored by Steven Rostedt
      When sleep_time is off the function profiler ignores the time that a task
      is scheduled out. When the task is scheduled out a timestamp is taken.
      When the task is scheduled back in, the timestamp is compared to the
      current time and the saved calltimes are adjusted accordingly.
      
      But when stopping the function profiler, the sched switch hook that
      does this adjustment was stopped before shutting down the tracer.
      This allowed some tasks to not get their timestamps set when they
      scheduled out. When the function profiler started again, this would
      skew the times of the scheduler functions.
      
      This patch moves the stopping of the sched switch hook to after the
      function profiler is stopped. It also ignores calltimes of zero, which
      may happen on start up.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Show sample std dev in function profiling · e330b3bc
      Authored by Chase Douglas
      When combined with function graph tracing, the ftrace function profiler
      also prints the average run time of functions. While this gives us some
      good information, it doesn't tell us anything about the variance of the
      run times of the function. This change prints out s^2, the square of the
      sample standard deviation, alongside the average.
      
      This change adds one entry to the profile record structure. This
      increases the memory footprint of the function profiler by 1/3 on a
      32-bit system, and by 1/5 on a 64-bit system when function graphing is
      enabled, though the memory is only allocated when the profiler is turned
      on. During profiling, one extra line of code adds the squared
      calltime to the new record entry, so this should not adversely affect
      performance.
      
      Note that the square of the sample standard deviation is printed because
      there is no sqrt implementation for unsigned long long in the kernel.
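
      The integer arithmetic involved is straightforward; a short sketch
      with example numbers (variable names hypothetical):

      	/* s^2 = (sum(t^2) - n * mean^2) / (n - 1) */
      	unsigned long long n = 3, sum = 600, sum_sq = 140000; /* 100, 200, 300 ns */
      	unsigned long long mean = sum / n;                    /* 200 ns */
      	unsigned long long s2 = (sum_sq - n * mean * mean) / (n - 1);
      	/* s2 == 10000 ns^2, i.e. a standard deviation of 100 ns */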
      Signed-off-by: Chase Douglas <chase.douglas@canonical.com>
      LKML-Reference: <1272304925-2436-1-git-send-email-chase.douglas@canonical.com>
      
      [ fixed comment about ns^2 -> us^2 conversion ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Authored by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script was
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
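
      For example, a file that reached kmalloc() only through the implicit
      chain would typically gain one line (illustrative):

      	 #include <linux/module.h>
      	+#include <linux/slab.h>	/* was implicit via percpu.h -> slab.h */
      	 #include <linux/sched.h>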
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the tests in step
      7, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  15. 13 March 2010, 1 commit
    • function-graph: Init curr_ret_stack with ret_stack · ea14eb71
      Authored by Steven Rostedt
      If the graph tracer is active and a task is forked, but the allocation
      of the process's graph stack fails, it can cause a crash later on.
      
      This is due to the temporary stack being NULL, but the curr_ret_stack
      variable is copied from the parent. If it is not -1, then in
      ftrace_graph_probe_sched_switch() the following:
      
      	for (index = next->curr_ret_stack; index >= 0; index--)
      		next->ret_stack[index].calltime += timestamp;
      
      Will cause a kernel OOPS.
      
      Found with Li Zefan's ftrace_stress_test.
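
      The fix, in sketch form:

      	void ftrace_graph_init_task(struct task_struct *t)
      	{
      		/* make sure we do not use the parent ret_stack */
      		t->ret_stack = NULL;
      		t->curr_ret_stack = -1;	/* kept consistent with ret_stack,
      					 * even if the allocation below fails */

      		if (ftrace_graph_active) {
      			/* allocate and attach a fresh ret_stack here */
      		}
      	}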
      
      Cc: stable@kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  16. 11 March 2010, 1 commit
    • ftrace: Replace read_barrier_depends() with rcu_dereference_raw() · 3f379b03
      Authored by Paul E. McKenney
      Replace the calls to read_barrier_depends() in
      ftrace_list_func() with rcu_dereference_raw() to improve
      readability.  The reason that we use rcu_dereference_raw() here
      is that removed entries are never freed, instead they are simply
      leaked.  This is one of a very few cases where use of
      rcu_dereference_raw() is the long-term right answer.  And I
      don't yet know of any others.  ;-)
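
      After the change, the traversal has this shape (a sketch):

      	static void ftrace_list_func(unsigned long ip, unsigned long parent_ip)
      	{
      		struct ftrace_ops *op = rcu_dereference_raw(ftrace_list);

      		/* entries are never freed, only unlinked, so the raw
      		 * dereference is safe here */
      		while (op != &ftrace_list_end) {
      			op->func(ip, parent_ip);
      			op = rcu_dereference_raw(op->next);
      		}
      	}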
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267830207-9474-1-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 06 March 2010, 1 commit
  18. 12 February 2010, 1 commit
  19. 04 February 2010, 2 commits
    • ftrace: Remove record freezing · f24bb999
      Authored by Masami Hiramatsu
      Remove record freezing. Because kprobes never puts a probe on
      ftrace's mcount call anymore, ftrace no longer needs to check
      whether a kprobe is on it.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100202214925.4694.73469.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace/alternatives: Introducing *_text_reserved functions · 2cfa1978
      Authored by Masami Hiramatsu
      Introduce *_text_reserved functions for checking whether a text
      address range is partially reserved. This patch provides
      checking routines for x86 SMP alternatives and dynamic ftrace.
      Since both modify fixed pieces of kernel text, they
      should reserve and protect those pieces from other dynamic text
      modifiers, like kprobes.
      
      This will also be extended when other subsystems that modify fixed
      pieces of kernel text are introduced. Dynamic text modifiers
      should avoid those pieces.
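
      A caller such as kprobes would then check a candidate address roughly
      like this (a sketch):

      	/* refuse to probe text that ftrace or alternatives already own */
      	if (ftrace_text_reserved(p->addr, p->addr) ||
      	    alternatives_text_reserved(p->addr, p->addr))
      		return -EINVAL;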
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  20. 15 January 2010, 1 commit
  21. 14 December 2009, 3 commits
  22. 23 November 2009, 1 commit
    • tracing, function tracer: Clean up strstrip() usage · 457dc928
      Authored by Ingo Molnar
      Clean up strstrip() usage - which also addresses this build warning:
      
        kernel/trace/ftrace.c: In function 'ftrace_pid_write':
        kernel/trace/ftrace.c:3004: warning: ignoring return value of 'strstrip', declared with attribute warn_unused_result
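
      The cleanup amounts to consuming the return value, since strstrip()
      removes leading whitespace by returning an advanced pointer rather
      than editing in place (a sketch):

      	char *tmp;

      	tmp = strstrip(buf);	/* use the returned pointer from here on */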
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 18 November 2009, 1 commit
  24. 04 November 2009, 1 commit
  25. 24 October 2009, 1 commit
  26. 15 October 2009, 1 commit
  27. 14 October 2009, 2 commits
    • tracing: Enable records during the module load · 5cb084bb
      Authored by Jiri Olsa
      I was debugging a module using the "function" and "function_graph"
      tracers and noticed that if you load a module after enabling
      tracing, the module's hooks are converted only to NOP instructions.

      The attached patch enables a module's hooks if function tracing is
      already on, thus allowing module functions to be traced.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20091013203425.896285120@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing: Support multiple pids in set_ftrace_pid file · 756d17ee
      Authored by jolsa@redhat.com
      Add the possibility to set more than one PID in the set_ftrace_pid
      file, thus allowing more than one independent process to be traced.
      
      Usage:
      
       sh-4.0# echo 284 > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       284
       sh-4.0# echo 1 >> ./set_ftrace_pid
       sh-4.0# echo 0 >> ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       swapper tasks
       1
       284
       sh-4.0# echo 4 > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       4
       sh-4.0# echo > ./set_ftrace_pid
       sh-4.0# cat ./set_ftrace_pid
       no pid
       sh-4.0#
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091013203425.565454612@goodmis.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  28. 13 October 2009, 1 commit
  29. 08 October 2009, 2 commits
  30. 01 October 2009, 1 commit
    • tracing: Fix infinite recursion in ftrace_update_pid_func() · 33974093
      Authored by Matt Fleming
      When CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST is enabled
      __ftrace_trace_function contains the current trace function, not
      ftrace_trace_function.
      
      In ftrace_update_pid_func() we currently incorrectly assign the
      value of ftrace_trace_function to __ftrace_trace_function before
      returning.
      
      Without this patch it is possible to execute an infinite recursion
      whereby ftrace_test_stop_func() calls __ftrace_trace_function,
      which was assigned ftrace_test_stop_func() in
      ftrace_update_pid_func().
      Signed-off-by: Matt Fleming <matthew.fleming@imgtec.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1254152581-18347-1-git-send-email-matt@console-pimps.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  31. 25 September 2009, 1 commit
    • tracing/filters: Unify the regex parsing helpers · 3f6fe06d
      Authored by Frederic Weisbecker
      The filter code borrowed the regex parsing function from ftrace to
      get regex support.
      Since this duplicated the code, factor it out into the filter area and
      make it generally available, as the filter code is the most suited to
      host this feature.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>