1. 13 Jan 2014 (2 commits)
  2. 15 Mar 2013 (5 commits)
    • tracing: Add function-trace option to disable function tracing of latency tracers · 328df475
      Committed by Steven Rostedt (Red Hat)
      Currently, the only way to stop the latency tracers from doing function
      tracing is to fully disable the function tracer from the proc file
      system:
      
        echo 0 > /proc/sys/kernel/ftrace_enabled
      
      This is a big hammer approach as it disables function tracing for
      all users. This includes kprobes, perf, stack tracer, etc.
      
      Instead, create a function-trace option that the latency tracers can
      check to determine whether they should enable function tracing.
      This option can be set or cleared even while a tracer is active,
      and the tracer will enable or disable function tracing accordingly.
      
      Instead of using the proc file, disable latency function tracing with
      
        echo 0 > /debug/tracing/options/function-trace
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      328df475
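
      A minimal userspace sketch of the behavior the commit above describes: a
      tracer installs or removes its function hook when a runtime option bit
      flips. The option name, the hook functions and the callback are invented
      for illustration; this is not the kernel's tracer API.

        #include <stdbool.h>
        #include <stdio.h>

        static bool function_trace_opt = true;   /* options/function-trace */
        static bool hook_installed;

        static void register_function_hook(void)   { hook_installed = true;  }
        static void unregister_function_hook(void) { hook_installed = false; }

        /* Called when the option is written while the tracer is active. */
        static void function_trace_option_changed(bool set)
        {
                if (set == function_trace_opt)
                        return;
                function_trace_opt = set;
                if (set)
                        register_function_hook();
                else
                        unregister_function_hook();
        }

        int main(void)
        {
                register_function_hook();              /* tracer starts with the hook */
                function_trace_option_changed(false);  /* echo 0 > options/function-trace */
                printf("hook installed: %d\n", hook_installed);
                function_trace_option_changed(true);   /* echo 1 > options/function-trace */
                printf("hook installed: %d\n", hook_installed);
                return 0;
        }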
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Committed by Steven Rostedt (Red Hat)
      Currently, the latency tracers and the snapshot feature work by keeping
      a separate trace_array called "max_tr" that holds the snapshot buffer.
      For latency tracers, the running buffer is swapped with this snapshot
      buffer to save the current max latency.
      
      The only items the max_tr really needs are a copy of the buffer itself,
      the per_cpu data pointers, the time_start timestamp that records when
      the max latency was triggered, and the CPU the max latency was
      triggered on. All other fields in trace_array are unused by the
      max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely and adds a new structure
      called trace_buffer that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers, one for the normal trace and
      one for the max trace (snapshot). This not only removes the bloat from
      the max_tr, it also lets trace instances use their own snapshot feature
      and latency tracers, rather than reserving the snapshot feature for the
      top-level global_trace alone.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12883efb
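
      A small C sketch of the data-structure shape described above; the struct
      and field names are illustrative rather than the kernel's exact
      definitions. Saving a new max latency then amounts to swapping the two
      trace_buffers' ring buffer pointers and recording the bookkeeping fields.

        struct ring_buffer;           /* opaque here */
        struct trace_array_cpu;       /* per-CPU descriptors, opaque here */

        struct trace_buffer_sketch {
                struct ring_buffer     *buffer;      /* the ring buffer itself        */
                struct trace_array_cpu *data;        /* allocated per-CPU descriptors */
                unsigned long long      time_start;  /* when the current trace began  */
                int                     cpu;         /* CPU that hit the max latency  */
        };

        struct trace_array_sketch {
                struct trace_buffer_sketch trace_buffer;  /* normal trace          */
                struct trace_buffer_sketch max_buffer;    /* snapshot / max trace  */
        };

        static void save_max(struct trace_array_sketch *tr,
                             unsigned long long ts, int cpu)
        {
                struct ring_buffer *tmp = tr->max_buffer.buffer;

                tr->max_buffer.buffer     = tr->trace_buffer.buffer;
                tr->trace_buffer.buffer   = tmp;
                tr->max_buffer.time_start = ts;
                tr->max_buffer.cpu        = cpu;
        }

        int main(void)
        {
                struct trace_array_sketch tr = { 0 };

                save_max(&tr, 12345, 0);   /* swap buffers, record ts and cpu */
                return 0;
        }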
    • tracing: Replace the static global per_cpu arrays with allocated per_cpu · a7603ff4
      Committed by Steven Rostedt
      The global trace and the max-tr currently use static per_cpu arrays for
      the CPU data descriptors. But newly created trace_arrays need allocated
      per_cpu arrays. Instead of using the static arrays, switch the global
      trace and the max-tr to allocated data.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a7603ff4
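
      A userspace analogue of the change: the per-CPU descriptors come from an
      allocation sized to the number of CPUs instead of a fixed static array
      (the kernel uses its own per-cpu allocator; this only shows the idea).

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        struct cpu_data { unsigned long entries; };

        int main(void)
        {
                long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
                if (ncpus < 1)
                        ncpus = 1;

                /* Instead of: static struct cpu_data data[NR_CPUS]; */
                struct cpu_data *data = calloc((size_t)ncpus, sizeof(*data));
                if (!data)
                        return 1;

                data[0].entries = 42;
                printf("allocated CPU data for %ld CPUs\n", ncpus);
                free(data);
                return 0;
        }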
    • tracing: Encapsulate global_trace and remove dependencies on global vars · 2b6080f2
      Committed by Steven Rostedt
      The global_trace variable in kernel/trace/trace.c has been kept 'static' and
      local to that file so that it would not be used too much outside of that
      file. This has paid off, even though there were lots of changes to make
      the trace_array structure more generic (not depending on global_trace).
      
      A lot of the direct usages of global_trace need to be removed in order
      to create more trace_arrays, so that multiple buffers can be added.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2b6080f2
    • tracing: Prevent buffer overwrite disabled for latency tracers · 613f04a0
      Committed by Steven Rostedt (Red Hat)
      The latency tracers require the buffers to be in overwrite mode,
      otherwise they get screwed up. Force the buffers to stay in overwrite
      mode when latency tracers are enabled.
      
      Added a flag_changed() method to the tracer structure to allow
      tracers to see which flags are being changed, and also to be able
      to prevent the change from happening.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      613f04a0
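
      A small C sketch of the flag_changed() idea from the commit above: a
      hook that vetoes clearing an "overwrite" option while a latency tracer
      is active. The flag bit and function names are made up for the example.

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define OPT_OVERWRITE  (1u << 0)

        static bool latency_tracer_active = true;

        /* Called before an option bit is changed; non-zero return rejects it. */
        static int latency_flag_changed(unsigned int mask, int set)
        {
                /* Latency tracers need overwrite mode, so refuse to clear it. */
                if (latency_tracer_active && (mask & OPT_OVERWRITE) && !set)
                        return -EBUSY;
                return 0;
        }

        int main(void)
        {
                printf("clear overwrite -> %d\n", latency_flag_changed(OPT_OVERWRITE, 0));
                printf("set overwrite   -> %d\n", latency_flag_changed(OPT_OVERWRITE, 1));
                return 0;
        }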
  3. 08 Feb 2013 (1 commit)
  4. 06 Dec 2012 (1 commit)
  5. 01 Nov 2012 (2 commits)
  6. 31 Jul 2012 (1 commit)
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Committed by Steven Rostedt
      As more users of the function tracer utility are being added, they do
      not always include the necessary recursion protection. To protect
      against function recursion due to tracing, if a callback's ftrace_ops
      does not explicitly state that it protects against recursion (by setting
      the FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be
      called by the mcount trampoline, which adds recursion protection.
      
      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4740974a
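
      A minimal sketch of the default recursion protection described above: if
      a callback has not flagged itself recursion safe, the dispatcher guards
      it with a thread-local re-entry flag and drops nested calls. The flag
      name and dispatch function are illustrative, not the real ftrace_ops
      interface.

        #include <stdbool.h>
        #include <stdio.h>

        #define OPS_FL_RECURSION_SAFE  (1u << 0)

        struct ops {
                void (*func)(const char *tracee);
                unsigned int flags;
        };

        static _Thread_local bool in_callback;

        static void dispatch(struct ops *ops, const char *tracee)
        {
                if (ops->flags & OPS_FL_RECURSION_SAFE) {
                        ops->func(tracee);      /* callback handles recursion itself */
                        return;
                }
                if (in_callback)                /* would recurse: drop this event */
                        return;
                in_callback = true;
                ops->func(tracee);
                in_callback = false;
        }

        static void my_callback(const char *tracee);
        static struct ops my_ops = { .func = my_callback, .flags = 0 };

        static void my_callback(const char *tracee)
        {
                printf("traced %s\n", tracee);
                dispatch(&my_ops, "printf");    /* nested call is silently dropped */
        }

        int main(void)
        {
                dispatch(&my_ops, "main");
                return 0;
        }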
  7. 20 Jul 2012 (2 commits)
  8. 08 Nov 2011 (1 commit)
    • tracing/latency: Fix header output for latency tracers · 7e9a49ef
      Committed by Jiri Olsa
      In case the graph tracer (CONFIG_FUNCTION_GRAPH_TRACER) or even the
      function tracer (CONFIG_FUNCTION_TRACER) is not set, the latency tracers
      do not display a proper latency header.
      
      The involved/fixed latency tracers are:
              wakeup_rt
              wakeup
              preemptirqsoff
              preemptoff
              irqsoff
      
      The patch adds proper handling of the tracer configuration options for
      latency tracers, and displays the correct header info accordingly.
      
      * The current output (for wakeup tracer) with both graph and function
        tracers disabled is:
      
        # tracer: wakeup
        #
          <idle>-0       0d.h5    1us+:      0:120:R   + [000]     7:  0:R watchdog/0
          <idle>-0       0d.h5    3us+: ttwu_do_activate.clone.1 <-try_to_wake_up
          ...
      
      * The fixed output is:
      
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 55 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: migration/0-6 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
             cat-1129    0d..4    1us :   1129:120:R   + [000]     6:  0:R migration/0
             cat-1129    0d..4    2us+: ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The current output (for wakeup tracer) with only function
        tracer enabled is:
      
        # tracer: wakeup
        #
             cat-1140    0d..4    1us+:   1140:120:R   + [000]     6:  0:R migration/0
             cat-1140    0d..4    2us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The fixed output is:
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 207 us, #109/109, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: watchdog/1-12 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
          <idle>-0       1d.h5    1us+:      0:120:R   + [001]    12:  0:R watchdog/1
          <idle>-0       1d.h5    3us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      Link: http://lkml.kernel.org/r/20111107150849.GE1807@m.brq.redhat.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7e9a49ef
  9. 15 Jun 2011 (1 commit)
    • tracing, function_graph: Remove dependency of abstime and duration fields on latency · 321e68b0
      Committed by Jiri Olsa
      The display of absolute time and duration fields is based on the
      latency field. This was added during the irqsoff/wakeup tracers
      graph support changes.
      
      It's causing confusion about which fields will be displayed for the
      function_graph tracer itself. So I'm removing this dependency, and
      adding the absolute time and duration fields to the preemptirqsoff,
      preemptoff, irqsoff and wakeup tracers.
      
      With following commands:
      	# echo function_graph > ./current_tracer
      	# cat trace
      
      This is what it looked like before:
      # tracer: function_graph
      #
      #     TIME        CPU  DURATION                  FUNCTION CALLS
      #      |          |     |   |                     |   |   |   |
       0)   0.068 us    |          } /* page_add_file_rmap */
       0)               |          _raw_spin_unlock() {
      ...
      
      This is what it looks like now:
      # tracer: function_graph
      #
      # CPU  DURATION                  FUNCTION CALLS
      # |     |   |                     |   |   |   |
       0)   0.068 us    |                } /* add_preempt_count */
       0)   0.993 us    |              } /* vfsmount_lock_local_lock */
      ...
      
      For the preemptirqsoff, preemptoff, irqsoff and wakeup tracers,
      this is what it looked like before:
      SNIP
      #                       _-----=> irqs-off
      #                      / _----=> need-resched
      #                     | / _---=> hardirq/softirq
      #                     || / _--=> preempt-depth
      #                     ||| / _-=> lock-depth
      #                     |||| /
      # CPU  TASK/PID       |||||  DURATION                  FUNCTION CALLS
      # |     |    |        |||||   |   |                     |   |   |   |
       1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      
      This is what it looks like now:
      SNIP
      #
      #                                       _-----=> irqs-off
      #                                      / _----=> need-resched
      #                                     | / _---=> hardirq/softirq
      #                                     || / _--=> preempt-depth
      #                                     ||| /
      #     TIME        CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
      #      |          |     |    |        ||||   |   |                     |   |   |   |
         19.847735 |   1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      321e68b0
  10. 19 May 2011 (1 commit)
    • ftrace: Implement separate user function filtering · b848914c
      Committed by Steven Rostedt
      ftrace_ops that are registered to trace functions can now be
      agnostic to each other with respect to which functions they trace.
      Each ops has its own hash of the functions it wants to trace and a
      hash of the functions it does not want to trace. An empty hash for
      the functions to trace means that all functions not in the notrace
      hash should be traced.
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b848914c
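
      A plain-C sketch of the per-ops filtering rule described above, with
      simple arrays standing in for the kernel's hashes: an address is traced
      if the notrace set does not contain it and the filter set is either
      empty or contains it.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        struct func_set { const unsigned long *ips; size_t n; };  /* n == 0: empty */

        static bool set_contains(const struct func_set *s, unsigned long ip)
        {
                for (size_t i = 0; i < s->n; i++)
                        if (s->ips[i] == ip)
                                return true;
                return false;
        }

        static bool ops_wants(const struct func_set *filter,
                              const struct func_set *notrace, unsigned long ip)
        {
                if (set_contains(notrace, ip))
                        return false;
                return filter->n == 0 || set_contains(filter, ip);
        }

        int main(void)
        {
                unsigned long filt[] = { 0x1000 }, nt[] = { 0x2000 };
                struct func_set filter  = { filt, 1 };
                struct func_set notrace = { nt, 1 };
                struct func_set empty   = { NULL, 0 };

                printf("%d %d %d\n",
                       ops_wants(&filter, &notrace, 0x1000),   /* 1: in filter      */
                       ops_wants(&filter, &notrace, 0x3000),   /* 0: not in filter  */
                       ops_wants(&empty,  &notrace, 0x2000));  /* 0: in notrace     */
                return 0;
        }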
  11. 19 Oct 2010 (1 commit)
  12. 18 Oct 2010 (2 commits)
  13. 21 Jul 2010 (1 commit)
    • tracing: Shrink max latency ringbuffer if unnecessary · ef710e10
      Committed by KOSAKI Motohiro
      Documentation/trace/ftrace.txt says
      
        buffer_size_kb:
      
              This sets or displays the number of kilobytes each CPU
              buffer can hold. The tracer buffers are the same size
              for each CPU. The displayed number is the size of the
              CPU buffer and not total size of all buffers. The
              trace buffers are allocated in pages (blocks of memory
              that the kernel uses for allocation, usually 4 KB in size).
              If the last page allocated has room for more bytes
              than requested, the rest of the page will be used,
              making the actual allocation bigger than requested.
              ( Note, the size may not be a multiple of the page size
                due to buffer management overhead. )
      
              This can only be updated when the current_tracer
              is set to "nop".
      
      But that is incorrect: total memory consumption is currently
      'buffer_size_kb x CPUs x 2'.
      
      Why the factor of two? Because ftrace implicitly allocates a second
      buffer of the same size for the max latency snapshot.
      
      That gives a sad result when an admin wants to use a large buffer for
      full logging and detailed analysis. For example, on a machine with
      24 CPUs, writing 200MB to buffer_size_kb makes the system consume
      ~10GB of memory (200MB x 24 x 2). Wasting ~5GB of memory is usually
      unacceptable.
      
      Fortunately, almost all users don't use the max latency feature, and
      the max latency buffer can be disabled easily.
      
      This patch shrinks the buffer size of the max latency buffer when it
      is not needed.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ef710e10
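
      The arithmetic from the commit above, spelled out: the per-CPU buffer
      size is multiplied by the CPU count and then doubled by the implicit
      max-latency buffer, so shrinking the unused max buffer roughly halves
      the footprint.

        #include <stdio.h>

        int main(void)
        {
                const long buffer_size_kb = 200 * 1024;  /* 200 MB per CPU */
                const long cpus = 24;

                long with_max    = buffer_size_kb * cpus * 2;  /* main + max buffer */
                long without_max = buffer_size_kb * cpus;      /* max buffer shrunk */

                printf("with max buffer:    %ld MB\n", with_max / 1024);     /* 9600 */
                printf("without max buffer: %ld MB\n", without_max / 1024);  /* 4800 */
                return 0;
        }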
  14. 04 Jun 2010 (1 commit)
    • tracing: Remove ftrace_preempt_disable/enable · 5168ae50
      Committed by Steven Rostedt
      The ftrace_preempt_disable/enable functions were to address a
      recursive race caused by the function tracer. The function tracer
      traces all functions which makes it easily susceptible to recursion.
      One area was preempt_enable(). This would call the scheduler, and the
      scheduler would call the function tracer and loop.
      (Or so it was thought.)
      
      The ftrace_preempt_disable/enable was made to protect against recursion
      inside the scheduler by storing the NEED_RESCHED flag. If it was
      set before the ftrace_preempt_disable() it would not call schedule
      on ftrace_preempt_enable(), thinking that if it was set before then
      it would have already scheduled unless it was already in the scheduler.
      
      This worked fine except in the case of SMP, where another task would set
      the NEED_RESCHED flag for a task on another CPU, and then kick off an
      IPI to trigger it. This could cause the NEED_RESCHED to be saved at
      ftrace_preempt_disable() but the IPI to arrive in the preempt
      disabled section. The ftrace_preempt_enable() would not call the scheduler
      because the flag was already set before entering the section.
      
      This bug would cause a missed preemption check and cause lower latencies.
      
      Investigating further, I found that the recursion caused by the function
      tracer was not due to schedule(), but due to preempt_schedule(). Now
      that preempt_schedule is completely annotated with notrace, the recursion
      is no longer an issue.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5168ae50
  15. 14 May 2010 (1 commit)
    • tracing: Let tracepoints have data passed to tracepoint callbacks · 38516ab5
      Committed by Steven Rostedt
      This patch adds data to be passed to tracepoint callbacks.
      
      The created functions from DECLARE_TRACE() now need a mandatory data
      parameter. For example:
      
      DECLARE_TRACE(mytracepoint, int value, value)
      
      Will create the register function:
      
      int register_trace_mytracepoint((void(*)(void *data, int value))probe,
                                      void *data);
      
      As the first argument, all callbacks (probes) must take a (void *data)
      parameter. So a callback for the above tracepoint will look like:
      
      void myprobe(void *data, int value)
      {
      }
      
      The callback may choose to ignore the data parameter.
      
      This change allows callbacks to register a private data pointer along
      with the function probe.
      
      	void mycallback(void *data, int value);
      
      	register_trace_mytracepoint(mycallback, mydata);
      
      Then the mycallback() will receive the "mydata" as the first parameter
      before the args.
      
      A more detailed example:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        /* In the C file */
      
        DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        [...]
      
             trace_mytracepoint(status);
      
        /* In a file registering this tracepoint */
      
        int my_callback(void *data, int status)
        {
      	struct my_struct *my_data = data;
      	[...]
        }
      
        [...]
      	my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
      	init_my_data(my_data);
      	register_trace_mytracepoint(my_callback, my_data);
      
      The same callback can also be registered to the same tracepoint as long
      as the data registered is different. Note, the data must also be used
      to unregister the callback:
      
      	unregister_trace_mytracepoint(my_callback, my_data);
      
      Because of the data parameter, tracepoints declared this way cannot
      have zero arguments. That is:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
      
      will cause an error.
      
      If no arguments are needed, a new macro can be used instead:
      
        DECLARE_TRACE_NOARGS(mytracepoint);
      
      Since there are no arguments, the proto and args fields are left out.
      
      This is part of a series to make the tracepoint footprint smaller:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint
      
      Again, this patch also increases the size of the kernel, but
      lays the ground work for decreasing it.
      
       v5: Fixed net/core/drop_monitor.c to handle these updates.
      
       v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
           #ifdef CONFIG_TRACE_POINTS, since the two are the same in both
           cases. The __DECLARE_TRACE() is what changes.
           Thanks to Frederic Weisbecker for pointing this out.
      
       v3: Made all register_* functions require data to be passed and
           all callbacks to take a void * parameter as their first argument.
           This makes the calling functions comply with C standards.
      
           Also added more comments to the modifications of DECLARE_TRACE().
      
       v2: Made the DECLARE_TRACE() have the ability to pass arguments
           and added a new DECLARE_TRACE_NOARGS() for tracepoints that
           do not need any arguments.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      38516ab5
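
      A self-contained userspace illustration of the calling convention
      adopted above: every probe takes a void *data first argument, the same
      callback can be registered more than once with different private data,
      and the data pointer identifies the registration. This is not the
      kernel's tracepoint machinery, only the pattern.

        #include <stdio.h>

        typedef void (*probe_fn)(void *data, int value);

        struct probe { probe_fn fn; void *data; };
        static struct probe probes[8];
        static int nr_probes;

        static int register_probe(probe_fn fn, void *data)
        {
                if (nr_probes >= 8)
                        return -1;
                probes[nr_probes].fn = fn;
                probes[nr_probes].data = data;
                nr_probes++;
                return 0;
        }

        static void fire_tracepoint(int value)
        {
                for (int i = 0; i < nr_probes; i++)
                        probes[i].fn(probes[i].data, value);
        }

        struct my_priv { const char *tag; };

        static void my_probe(void *data, int value)
        {
                struct my_priv *priv = data;    /* private data arrives first */
                printf("[%s] value=%d\n", priv->tag, value);
        }

        int main(void)
        {
                struct my_priv a = { "first" }, b = { "second" };

                /* Same callback registered twice with different data. */
                register_probe(my_probe, &a);
                register_probe(my_probe, &b);
                fire_tracepoint(42);
                return 0;
        }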
  16. 07 May 2010 (1 commit)
  17. 15 Dec 2009 (3 commits)
  18. 13 Sep 2009 (2 commits)
  19. 10 Sep 2009 (1 commit)
    • tracing: do not grab lock in wakeup latency function tracing · 478142c3
      Committed by Steven Rostedt
      The wakeup tracer, when enabled, has its own function tracer.
      It only traces functions on the CPU that the task it is following
      is on. If a task is woken on one CPU but then migrates to another CPU
      before it actually runs, the latency tracer will start tracing
      functions on the other CPU.
      
      To find which CPU the task is on, the wakeup function tracer performs
      a task_cpu(wakeup_task). But to make sure the task does not disappear
      it grabs the wakeup_lock, which is also taken when the task wakes up.
      By taking this lock, the function tracer does not need to worry about
      the task being freed as it checks its cpu.
      
      Jan Blunck found a problem with this approach on his 32 CPU box. When
      a task is being traced by the wakeup tracer, all functions take this
      lock. That means that on all 32 CPUs, each function call is taking
      this one lock to see if the task is on that CPU. This lock has just
      serialized all functions on all 32 CPUs. Needless to say, this caused
      major issues on that box. It would even lock up.
      
      This patch changes the wakeup latency tracer to insert a probe on the
      migrate task tracepoint. When the task changes the CPU it will run on,
      the probe takes note. Now the wakeup function tracer no longer needs
      to take the lock. It only compares the current CPU with a variable that
      holds the current CPU the task is on. We don't worry about races since
      it is OK to add or miss a function trace.
      Reported-by: Jan Blunck <jblunck@suse.de>
      Tested-by: Jan Blunck <jblunck@suse.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      478142c3
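
      A sketch of the lock-free check the commit above describes: a probe on
      the migration event publishes the CPU the watched task is on, and the
      per-function hook only compares that value against the current CPU. The
      names are invented; a stale read merely adds or drops one function
      entry, which the commit explicitly tolerates.

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_int wakeup_cpu = -1;    /* CPU the traced task is on */

        /* Probe attached to the task-migration event. */
        static void probe_task_migrate(int task_id, int dest_cpu, int traced_task_id)
        {
                if (task_id == traced_task_id)
                        atomic_store_explicit(&wakeup_cpu, dest_cpu,
                                              memory_order_relaxed);
        }

        /* Called for every traced function; must be cheap and lock-free. */
        static bool should_trace(int this_cpu)
        {
                return this_cpu == atomic_load_explicit(&wakeup_cpu,
                                                        memory_order_relaxed);
        }

        int main(void)
        {
                probe_task_migrate(7, 3, 7);   /* task 7 migrates to CPU 3 */
                printf("trace on CPU 3: %d\n", should_trace(3));
                printf("trace on CPU 0: %d\n", should_trace(0));
                return 0;
        }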
  20. 05 Sep 2009 (1 commit)
    • tracing: use timestamp to determine start of latency traces · 2f26ebd5
      Committed by Steven Rostedt
      Currently the latency tracers reset the ring buffer. Unfortunately
      if a commit is in process (due to a trace event), this can corrupt
      the ring buffer. When this happens, the ring buffer will detect
      the corruption and then permanently disable the ring buffer.
      
      The bug does not crash the system, but it does prevent further tracing
      after the bug is hit.
      
      Instead of resetting the trace buffers, the timestamp of the start of
      the trace is used. The buffers will still contain the previous
      data, but the output will not include any data recorded before that
      timestamp.
      
      Note, this only affects the static trace output (trace) and not the
      runtime trace output (trace_pipe). The runtime trace output does not
      make sense for the latency tracers anyway.
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2f26ebd5
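
      A tiny sketch of the approach: rather than resetting the ring, the
      reader skips every entry whose timestamp predates time_start, so
      leftover data from an earlier trace never reaches the output. Structure
      and names are illustrative.

        #include <stdio.h>

        struct entry { unsigned long long ts; const char *what; };

        static void print_since(const struct entry *e, int n,
                                unsigned long long time_start)
        {
                for (int i = 0; i < n; i++)
                        if (e[i].ts >= time_start)   /* ignore pre-trace leftovers */
                                printf("%llu: %s\n", e[i].ts, e[i].what);
        }

        int main(void)
        {
                struct entry buf[] = {
                        { 100, "stale entry from a previous trace" },
                        { 205, "irqs off" },
                        { 260, "irqs on" },
                };

                print_since(buf, 3, 200);   /* this trace started at timestamp 200 */
                return 0;
        }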
  21. 24 Apr 2009 (1 commit)
    • tracing/wakeup: move access to wakeup_cpu into spinlock · 9be24414
      Committed by Steven Rostedt
      The code had the following outside the lock:
      
              if (next != wakeup_task)
                      return;
      
              pc = preempt_count();
      
              /* The task we are waiting for is waking up */
              data = wakeup_trace->data[wakeup_cpu];
      
      On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
      is not under a lock. If wakeup_task is set on another CPU while that
      task is waking up, we can see the new wakeup_task before wakeup_cpu is
      set. If we read wakeup_cpu while it is still -1, we will end up with
      a bad data pointer.
      
      This patch moves the reading of wakeup_cpu within the protection of
      the spinlock used to protect the writing of wakeup_cpu and wakeup_task.
      
      [ Impact: remove possible race causing invalid pointer dereference ]
      Reported-by: Maneesh Soni <maneesh@in.ibm.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      9be24414
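
      A userspace sketch of the fix: the task check and the wakeup_cpu read
      happen under the same lock the writer holds, so a reader can never pair
      the new wakeup_task with a stale wakeup_cpu of -1. A pthread mutex
      stands in for the kernel spinlock.

        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t wakeup_lock = PTHREAD_MUTEX_INITIALIZER;
        static void *wakeup_task;      /* NULL: nothing is being traced */
        static int   wakeup_cpu = -1;

        static int get_wakeup_cpu(const void *next)
        {
                int cpu = -1;

                pthread_mutex_lock(&wakeup_lock);
                if (next == wakeup_task)       /* check and read together */
                        cpu = wakeup_cpu;
                pthread_mutex_unlock(&wakeup_lock);

                return cpu;                    /* -1: not our task, bail out */
        }

        int main(void)
        {
                static int task;

                pthread_mutex_lock(&wakeup_lock);
                wakeup_task = &task;           /* writer sets both fields together */
                wakeup_cpu  = 2;
                pthread_mutex_unlock(&wakeup_lock);

                printf("cpu = %d\n", get_wakeup_cpu(&task));
                return 0;
        }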
  22. 15 Apr 2009 (1 commit)
    • tracing/events: move trace point headers into include/trace/events · ad8d75ff
      Committed by Steven Rostedt
      Impact: clean up
      
      Create a subdirectory in include/trace called events to keep the
      tracepoint headers in their own separate directory. Only headers that
      declare tracepoints should be defined in this directory.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ad8d75ff
  23. 07 Apr 2009 (1 commit)
    • tracing: remove CALLER_ADDR2 from wakeup tracer · 301fd748
      Committed by Steven Rostedt
      Maneesh Soni was getting a crash when running the wakeup tracer.
      We debugged it down to the recording of the function with the
      CALLER_ADDR2 macro.  This is used to get the location of the caller
      to schedule.
      
      But the problem comes when schedule is called from assembly. In the case
      that Maneesh had, retint_careful would call schedule. But retint_careful
      does not set up a proper frame pointer. CALLER_ADDR2 is defined as
      __builtin_return_address(2). This produces the following assembly in
      the wakeup tracer code.
      
         mov    0x0(%rbp),%rcx  <--- get the frame pointer of the caller
         mov    %r14d,%r8d
         mov    0xf2de8e(%rip),%rdi
      
         mov    0x8(%rcx),%rsi  <-- this is __builtin_return_address(1)
         mov    0x28(%rdi,%rax,8),%rbx
      
         mov    (%rcx),%rax  <-- get the frame pointer of the caller's caller
         mov    %r12,%rcx
         mov    0x8(%rax),%rdx <-- this is __builtin_return_address(2)
      
      At the reading of 0x8(%rax) Maneesh's machine would take a fault.
      The reason is that retint_careful did not set up the return address
      and the content of %rax here was zero.
      
      To verify this, I sent Maneesh a patch to create a frame pointer
      in retint_careful. He ran the test again but this time he would take
      the same type of fault from sysret_careful. The retint_careful was no
      longer an issue, but there are other callers that still have issues.
      
      Instead of adding frame pointers for all callers to schedule (in possibly
      all archs), it is much safer to simply not use CALLER_ADDR2. This
      loses out on knowing what called schedule, but the function tracer
      will help there if needed.
      Reported-by: Maneesh Soni <maneesh@in.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      301fd748
  24. 05 Mar 2009 (1 commit)
  25. 18 Feb 2009 (2 commits)
  26. 05 Feb 2009 (1 commit)
  27. 29 Jan 2009 (1 commit)
  28. 22 Jan 2009 (1 commit)