1. 02 July 2013, 1 commit
    • ftrace: Do not run selftest if command line parameter is set · f1ed7c74
      Steven Rostedt (Red Hat) committed
      If the kernel command line ftrace filter parameters are set
      (ftrace_filter or ftrace_notrace), force the function self test to
      pass, with a warning why it was forced.
      
      If the user adds a filter to the kernel command line, it is assumed
      that they know what they are doing, and the self test should just not
      run instead of failing (which disables function tracing) or clearing
      the filter, as that will probably annoy the user.
      
      If the user wants the selftest to run, the printed message will tell
      them why it did not run.
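
      A minimal sketch of that check, assuming a hypothetical flag name (the
      real patch has its own variable and wording):

      /* hypothetical: set while parsing ftrace_filter= / ftrace_notrace= */
      static bool ftrace_filter_set_on_cmdline;

      int trace_selftest_startup_function(struct tracer *trace,
                                          struct trace_array *tr)
      {
              if (ftrace_filter_set_on_cmdline) {
                      printk(KERN_CONT " (skipping, filter set on command line)");
                      return 0;  /* force a pass instead of failing or clearing the filter */
              }

              /* ... otherwise run the normal function tracer selftest ... */
              return 0;
      }
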
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f1ed7c74
  2. 30 May 2013, 1 commit
  3. 16 March 2013, 1 commit
    • tracing: Fix ftrace_dump() · 7fe70b57
      Steven Rostedt (Red Hat) committed
      ftrace_dump() had a lot of issues. When ftrace_dump_on_oops is set (via
      a kernel parameter or sysctl), ftrace_dump() dumps the ftrace buffers to
      the console when an oops, panic, or sysrq-z occurs.
      
      This was written a long time ago when ftrace was fragile to recursion.
      But it wasn't written well even for that.
      
      There is a possible deadlock if an ftrace_dump() is in progress and an
      NMI triggers another dump, because a lock is grabbed before checking
      whether a dump has already run.
      
      It also completely disables ftrace and tracing, for no good reason.
      
      The ring buffer now checks whether it is being read from an oops or an
      NMI, where there is a chance the buffer gets corrupted, and disables
      itself in that case. There is no need for ftrace_dump() to do the same.
      
      ftrace_dump() is now cleaned up to use an atomic counter to make sure
      only one dump happens at a time. A simple atomic_inc_return() is all
      that is needed to handle both other CPUs and NMIs. No spinlock is
      required: if one CPU is already running the dump, no other CPU needs to
      do it too.
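
      A minimal sketch of that guard, with hypothetical names (not the actual
      ftrace_dump() code):

      #include <linux/atomic.h>

      static atomic_t dump_running;

      static void my_dump(void)
      {
              /* only the first caller (CPU or NMI) sees 1; everyone else bails */
              if (atomic_inc_return(&dump_running) != 1)
                      goto out;

              /* ... dump the ring buffers to the console ... */
      out:
              atomic_dec(&dump_running);
      }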
      
      The tracing_on variable is turned off and not turned back on. The
      original code did this too, but it wasn't pretty. Simply disabling this
      variable gives the desired result of not seeing traces that happen
      between crashes.
      
      For sysrq-z, tracing is not turned back on either, but the user can
      always write a '1' to the tracing_on file. Anyone using sysrq-z should
      know about tracing_on.
      
      The new code is much easier to read and less error prone. No more
      deadlock possibility when an NMI triggers here.
      Reported-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
      Cc: stable@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7fe70b57
  4. 15 March 2013, 1 commit
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Steven Rostedt (Red Hat) committed
      Currently, the latency tracers and the snapshot feature work by having a
      separate trace_array called "max_tr" that holds the snapshot buffer. The
      latency tracers swap the running buffer with this snapshot buffer to
      save the current max latency.
      
      The only items the max_tr really needs are a copy of the buffer itself,
      the per_cpu data pointers, the time_start timestamp that records when
      the max latency was triggered, and the cpu that the max latency was
      triggered on. All other fields in trace_array are unused by the max_tr,
      making the max_tr mostly bloat.
      
      This change removes the max_tr completely and adds a new structure
      called trace_buffer that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency
      occurred.
      
      The trace_array now has two trace_buffers: one for the normal trace and
      one for the max trace or snapshot. This not only removes the bloat from
      the max_tr, it also lets instances of traces use their own snapshot
      feature, instead of only the top-level global_trace having the snapshot
      feature and latency tracers for itself.
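
      A rough sketch of the shape this describes (field names approximate the
      description above, not necessarily the exact kernel layout):

      struct trace_buffer {
              struct ring_buffer              *buffer;     /* the actual ring buffer */
              struct trace_array_cpu __percpu *data;       /* per-cpu trace data */
              u64                              time_start; /* when the max latency was hit */
              int                              cpu;        /* cpu that hit the max latency */
      };

      struct trace_array {
              /* ... */
              struct trace_buffer              trace_buffer; /* normal trace */
              struct trace_buffer              max_buffer;   /* max trace / snapshot */
              /* ... */
      };
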
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12883efb
  5. 23 January 2013, 2 commits
  6. 22 January 2013, 1 commit
  7. 02 November 2012, 2 commits
    • tracing: Use irq_work for wake ups and remove *_nowake_*() functions · 0d5c6e1c
      Steven Rostedt committed
      Have the ring buffer commit function use the irq_work infrastructure to
      wake up any waiters waiting on the ring buffer for new data. The irq_work
      was created for such a purpose, where doing the actual wake up at the
      time of adding data is too dangerous, as an event or function trace may
      be in the midst of the work queue locks and cause deadlocks. The irq_work
      will either delay the action to the next timer interrupt, or trigger an IPI
      to itself forcing an interrupt to do the work (in a safe location).
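
      A minimal sketch of the pattern, assuming hypothetical names for the
      wait queue and work item:

      #include <linux/init.h>
      #include <linux/irq_work.h>
      #include <linux/wait.h>

      static DECLARE_WAIT_QUEUE_HEAD(my_readers);
      static struct irq_work my_wakeup_work;

      static void my_wakeup(struct irq_work *work)
      {
              wake_up_all(&my_readers);        /* runs later, from a safe context */
      }

      static void my_commit(void)
      {
              /* ... write the event into the ring buffer ... */
              irq_work_queue(&my_wakeup_work); /* safe even while holding wait-queue locks */
      }

      static int __init my_init(void)
      {
              init_irq_work(&my_wakeup_work, my_wakeup);
              return 0;
      }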
      
      With irq_work, all ring buffer commits can safely do wakeups, removing
      the need for the ring buffer commit "nowake" variants, which were used
      by events and function tracing. All commits can now safely use the
      normal commit, and the "nowake" variants can be removed.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0d5c6e1c
    • tracing: Make tracing_enabled be equal to tracing_on · 0fb9656d
      Steven Rostedt committed
      The tracing_enabled file has been deprecated, as it was never able to
      serve its purpose well; the tracing_on file has taken over. Instead of
      keeping separate code for tracing_enabled, have the tracing_enabled file
      just set tracing_on, and remove the tracing_enabled variable.
      
      This allows the tracing_enabled file to be removed. The removal is done
      in a separate change set rather than here, in case we find some lonely
      userspace tool that requires the file to exist. In that case the removal
      patch will get reverted, but this one will not.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0fb9656d
  8. 07 August 2012, 1 commit
  9. 31 July 2012, 3 commits
    • ftrace: Add selftest to test function save-regs support · ad97772a
      Steven Rostedt committed
      Add selftests to test the save-regs functionality of ftrace.
      
      If the arch supports saving regs, the test makes sure that regs is at
      least not NULL in the callback.
      
      If the arch does not support saving regs, the test makes sure that
      registering an ftrace_ops that requests saved regs fails. It then checks
      that registering the ftrace_ops succeeds when the 'IF_SUPPORTED' flag is
      set, and that the regs passed to the function is NULL.
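
      A hedged sketch of the two registrations the test exercises; the flags
      and callback signature follow the ftrace API of that era:

      #include <linux/ftrace.h>

      static void my_callback(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *op, struct pt_regs *regs)
      {
              /* with SAVE_REGS honored, regs should be non-NULL here */
      }

      static struct ftrace_ops ops_want_regs = {
              .func  = my_callback,
              .flags = FTRACE_OPS_FL_SAVE_REGS, /* registration must fail if unsupported */
      };

      static struct ftrace_ops ops_regs_if_supported = {
              .func  = my_callback,
              .flags = FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED, /* registers either way */
      };

      /* each would then be passed to register_ftrace_function() by the test */
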
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ad97772a
    • ftrace: Add selftest to test function trace recursion protection · ea701f11
      Steven Rostedt committed
      Add selftests to verify that the function tracing recursion protection
      actually works. They also test the case where an ftrace_ops states that
      it performs its own protection. Even if the ftrace_ops states it will
      protect itself, the ftrace infrastructure may still provide protection
      if the arch does not support all features or another ftrace_ops is
      registered.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ea701f11
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Steven Rostedt committed
      As more users of the function tracer utility are added, they do not
      always include the necessary recursion protection. To protect against
      function recursion due to tracing, if the callback's ftrace_ops does not
      explicitly state that it protects against recursion (by setting the
      FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be called by
      the mcount trampoline, which adds recursion protection.
      
      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
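
      A hedged sketch of a callback that does its own protection and therefore
      sets the flag (the per-cpu guard shown here is illustrative):

      #include <linux/ftrace.h>
      #include <linux/percpu.h>

      static DEFINE_PER_CPU(int, in_my_tracer);

      static void my_tracer_func(unsigned long ip, unsigned long parent_ip,
                                 struct ftrace_ops *op, struct pt_regs *regs)
      {
              /* our own re-entry guard, which is what "recursion safe" promises */
              if (this_cpu_inc_return(in_my_tracer) != 1)
                      goto out;

              /* ... do the actual tracing work ... */
      out:
              this_cpu_dec(in_my_tracer);
      }

      static struct ftrace_ops my_ops = {
              .func  = my_tracer_func,
              .flags = FTRACE_OPS_FL_RECURSION_SAFE, /* skip the protecting list op */
      };
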
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4740974a
  10. 20 July 2012, 2 commits
  11. 19 May 2011, 2 commits
  12. 07 January 2011, 1 commit
  13. 23 October 2010, 1 commit
  14. 20 July 2010, 2 commits
  15. 16 July 2010, 1 commit
    • tracing: Remove ksym tracer · 5d550467
      Frederic Weisbecker committed
      The ksym (breakpoint) ftrace plugin has been superseded by perf tools,
      which are much more powerful for using the cpu breakpoints. This tracer
      doesn't bring any additional features and has been deprecated for a
      while now, so let's remove it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      5d550467
  16. 22 April 2010, 1 commit
    • tracing: Dump either the oops's cpu source or all cpus buffers · cecbca96
      Frederic Weisbecker committed
      The ftrace_dump_on_oops kernel parameter, sysctl and sysrq let one dump
      every cpu's buffer when an oops or panic happens.
      
      That is nice when you have few cpus, but it may take ages if you have
      many, and you can miss the real origin of the problem among all the cpu
      traces.
      
      Sometimes all you need is to dump the buffer of the cpu that triggered
      the oops; most of the time that is the main interest.
      
      This patch modifies ftrace_dump_on_oops to handle this choice.
      
      The ftrace_dump_on_oops kernel parameter, when it comes alone, has the
      same behaviour as before. But ftrace_dump_on_oops=orig_cpu will only
      dump the buffer of the cpu that oops'ed.
      
      Similarly, sysctl kernel.ftrace_dump_on_oops=1 and
      echo 1 > /proc/sys/kernel/ftrace_dump_on_oops keep their previous
      behaviour, but setting the value to 2 switches to the originating-cpu
      dump mode, as summarized below.
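
      In short (directly from the description above):

      # dump all cpu buffers on an oops (previous behaviour)
      echo 1 > /proc/sys/kernel/ftrace_dump_on_oops

      # dump only the buffer of the cpu that oopsed
      echo 2 > /proc/sys/kernel/ftrace_dump_on_oops

      # equivalent kernel command line options
      ftrace_dump_on_oops
      ftrace_dump_on_oops=orig_cpu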
      
      v2: Fix double setup
      v3: Fix spelling issues reported by Randy Dunlap
      v4: Also update __ftrace_dump in the selftests
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      cecbca96
  17. 01 April 2010, 1 commit
    • ring-buffer: Add place holder recording of dropped events · 66a8cb95
      Steven Rostedt committed
      Currently, when the ring buffer drops events, it does not record
      the fact that it did so. It does inform the writer that the event
      was dropped by returning a NULL event, but it does not put in any
      place holder where the event was dropped.
      
      This is not a trivial thing to add because the ring buffer mostly
      runs in overwrite (flight recorder) mode. That is, when the ring
      buffer is full, new data will overwrite old data.
      
      In a producer/consumer mode, where new data is simply dropped when the
      ring buffer is full, it is trivial to add a placeholder for dropped
      events: when there is more room to write new data, a special event can
      be added to notify the reader about the dropped events.
      
      But in overwrite mode, any new write can overwrite events. A placeholder
      cannot be inserted into the ring buffer since there may never be room.
      A reader could also come in at any time and miss the placeholder.
      
      Luckily, the way the ring buffer works, the read side can find out
      whether events were lost and how many. Every time a write takes place,
      if it overwrites the header page (the next read) it updates an "overrun"
      variable that keeps track of the number of lost events. When a reader
      swaps out a page from the ring buffer, it can record this number,
      perform the swap, and then check whether the number changed, taking the
      diff if it has, which is the number of events dropped. This can be
      stored by the reader and returned to callers of the reader.
      
      Since the reader page swap will fail if the writer moved the head page
      after the reader set up the swap, the overruns can be recorded without
      worrying about races. If the reader sets up the pages, records the
      overrun, then performs the swap, and the swap succeeds, then the overrun
      variable has not been updated since the setup before the swap.
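
      A hedged sketch of that race-free accounting, with hypothetical names:

      #include <linux/errno.h>
      #include <linux/types.h>

      struct my_ring {
              unsigned long overrun;   /* lost-event count, bumped by the writer */
              /* ... */
      };

      bool my_swap_reader_page(struct my_ring *ring); /* fails if the head page moved */

      struct my_reader {
              unsigned long last_overrun;  /* overrun seen at the previous read */
              unsigned long lost_events;   /* reported to callers of the reader */
      };

      static int my_read_page(struct my_ring *ring, struct my_reader *rd)
      {
              unsigned long overrun = ring->overrun;  /* record before the swap */

              if (!my_swap_reader_page(ring))
                      return -EAGAIN;                 /* writer moved the head page */

              /* the swap succeeded, so 'overrun' was stable across the swap */
              rd->lost_events  = overrun - rd->last_overrun;
              rd->last_overrun = overrun;
              return 0;
      }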
      
      For binary readers of the ring buffer, a flag is set in the header of
      each sub page (sub buffer) of the ring buffer. This flag is embedded in
      the size field of the data on the sub buffer, in the 31st bit (the size
      can be 32 or 64 bits depending on the architecture), but only 27 bits
      need to be used for the actual size (less, actually).
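
      For illustration, a sketch of packing such a flag into the high bit of
      the size word (the masks here are illustrative, not the exact ring
      buffer layout):

      #define SUBBUF_LOST_EVENTS  (1UL << 31)        /* "events were dropped" flag */
      #define SUBBUF_SIZE_MASK    ((1UL << 27) - 1)  /* 27 bits suffice for the size */

      static inline unsigned long subbuf_size(unsigned long commit)
      {
              return commit & SUBBUF_SIZE_MASK;
      }

      static inline int subbuf_lost_events(unsigned long commit)
      {
              return !!(commit & SUBBUF_LOST_EVENTS);
      }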
      
      We could add a new field in the sub buffer header to also record the
      number of events dropped since the last read, but this will change the
      format of the binary ring buffer a bit too much. Perhaps this change can
      be made if the information on the number of events dropped is considered
      important enough.
      
      Note, the notification of dropped events is only used by consuming reads
      or peeking at the ring buffer. Iterating over the ring buffer does not
      keep this information because the necessary data is only available when
      a page swap is made, and the iterator does not swap out pages.
      
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      66a8cb95
  18. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo committed
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
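
      The kind of change the sweep makes is simply to spell the dependency out
      in each file, e.g.:

      #include <linux/slab.h>  /* kmalloc()/kfree(); no longer relies on percpu.h pulling it in */

      static void *example_alloc(void)
      {
              return kmalloc(128, GFP_KERNEL);
      }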
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that only
        the necessary includes are there, i.e. if only gfp is used, gfp.h; if
        slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include blocks
        and tries to put the new include such that its order conforms to its
        surroundings.  It's put in the include block which contains core
        kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly because
        the file doesn't have a fitting include block), it prints out an error
        message indicating which .h file needs to be added to the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build test were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the tests in step 6,
      I'm fairly confident about the coverage of this conversion patch.  If
      there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  19. 26 March 2010, 1 commit
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Peter Zijlstra committed
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS,
      as Linus noticed it not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its users are ptrace-block-step and ptrace-bts, since ptrace-bts
      was never used and ptrace-block-step can be implemented using a
      much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      faa4602e
  20. 15 December 2009, 1 commit
  21. 08 November 2009, 2 commits
    • ksym_tracer: Remove KSYM_SELFTEST_ENTRY · 30ff21e3
      Li Zefan committed
      The macro used to be used in both trace_selftest.c and trace_ksym.c,
      but no longer, so remove it from the header file.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      30ff21e3
    • hw-breakpoints: Rewrite the hw-breakpoints layer on top of perf events · 24f1e32c
      Frederic Weisbecker committed
      This patch rebases the implementation of the breakpoints API on top of
      perf event instances.
      
      Each breakpoint is now a perf event that handles the register
      scheduling, thread/cpu attachment, etc.
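
      For example, a hedged sketch of a breakpoint expressed as a perf event
      attribute (the fields follow the perf ABI, the values are illustrative):

      #include <linux/hw_breakpoint.h>
      #include <linux/perf_event.h>
      #include <linux/string.h>

      static unsigned long watched;

      static void setup_bp_attr(struct perf_event_attr *attr)
      {
              memset(attr, 0, sizeof(*attr));
              attr->type    = PERF_TYPE_BREAKPOINT;
              attr->size    = sizeof(*attr);
              attr->bp_addr = (unsigned long)&watched;
              attr->bp_type = HW_BREAKPOINT_W;       /* trigger on writes */
              attr->bp_len  = HW_BREAKPOINT_LEN_4;
      }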
      
      The new layering is now made as follows:
      
             ptrace       kgdb      ftrace   perf syscall
                \          |          /         /
                 \         |         /         /
                  \        |        /         /
                   Core breakpoint API       /
                           |                /
                           |               /
                   Breakpoints perf events
                           |
                           |
                    Breakpoints PMU ---- Debug Register constraints handling
                                         (Part of core breakpoint API)
                           |
                           |
                   Hardware debug registers
      
      Reasons for this rewrite:
      
      - Use the centralized/optimized pmu registers scheduling,
        implying an easier arch integration
      - More powerful register handling: perf attributes (pinned/flexible
        events, exclusive/non-exclusive, tunable period, etc...)
      
      Impact:
      
      - New perf ABI: the hardware breakpoints counters
      - Ptrace breakpoints setting remains tricky and still needs some per
        thread breakpoints references.
      
      Todo (in order):
      
      - Support breakpoints perf counter events for perf tools (ie: implement
        perf_bpcounter_event())
      - Support from perf tools
      
      Changes in v2:
      
      - Follow the perf "event" rename
      - The ptrace regression has been fixed (ptrace breakpoint perf events
        weren't released when a task ended)
      - Drop the struct hw_breakpoint and store generic fields in
        perf_event_attr.
      - Separate core and arch specific headers, drop
        asm-generic/hw_breakpoint.h and create linux/hw_breakpoint.h
      - Use new generic len/type for breakpoint
      - Handle off case: when breakpoints api is not supported by an arch
      
      Changes in v3:
      
      - Fix broken CONFIG_KVM, we need to propagate the breakpoint api
        changes to kvm when we exit the guest and restore the bp registers
        to the host.
      
      Changes in v4:
      
      - Drop the hw_breakpoint_restore() stub as it is only used by KVM
      - EXPORT_SYMBOL_GPL hw_breakpoint_restore() as KVM can be built as a
        module
      - Restore the breakpoints unconditionally on kvm guest exit:
        TIF_DEBUG_THREAD no longer covers every case of running breakpoints,
        and vcpu->arch.switch_db_regs might not always be set when the guest
        used debug registers.
        (Waiting for a reliable optimization)
      
      Changes in v5:
      
      - Split-up the asm-generic/hw-breakpoint.h moving to
        linux/hw_breakpoint.h into a separate patch
      - Optimize the breakpoints restoring while switching from kvm guest
        to host. We only want to restore the state if we have active
        breakpoints to the host, otherwise we don't care about messed-up
        address registers.
      - Add asm/hw_breakpoint.h to Kbuild
      - Fix bad breakpoint type in trace_selftest.c
      
      Changes in v6:
      
      - Fix wrong header inclusion in trace.h (triggered a build error
        with CONFIG_FTRACE_SELFTEST)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jan Kiszka <jan.kiszka@web.de>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      24f1e32c
  22. 06 August 2009, 1 commit
  23. 03 June 2009, 1 commit
  24. 07 April 2009, 1 commit
  25. 22 March 2009, 2 commits
  26. 18 March 2009, 1 commit
    • tracing/ftrace: stop {irqs, preempt}soff tracers when tracing is stopped · 49036200
      Frederic Weisbecker committed
      Impact: fix a selftest warning
      
      In some cases, it's possible to see the following warning during the
      irqsoff tracer selftest:
      
      [    4.640003] Testing tracer irqsoff: <4>------------[ cut here ]------------
      [    4.653562] WARNING: at kernel/trace/trace.c:458 update_max_tr_single+0x9a/0xc4()
      [    4.660000] Hardware name: System Product Name
      [    4.660000] Modules linked in:
      [    4.660000] Pid: 301, comm: kstop/1 Not tainted 2.6.29-rc8-tip #35837
      [    4.660000] Call Trace:
      [    4.660000]  [<4014b588>] warn_slowpath+0x79/0x8f
      [    4.660000]  [<402d6949>] ? put_dec+0x64/0x6b
      [    4.660000]  [<40162b56>] ? getnstimeofday+0x58/0xdd
      [    4.660000]  [<40162210>] ? clocksource_read+0x3/0xf
      [    4.660000]  [<4015eb44>] ? ktime_set+0x8/0x34
      [    4.660000]  [<4014101a>] ? balance_runtime+0x8/0x56
      [    4.660000]  [<405f6f11>] ? _spin_lock+0x3/0x10
      [    4.660000]  [<4011f643>] ? ftrace_call+0x5/0x8
      [    4.660000]  [<4015d0f1>] ? task_cputime_zero+0x3/0x27
      [    4.660000]  [<40190ee7>] ? cpupri_set+0x90/0xcb
      [    4.660000]  [<405f7208>] ? _spin_lock_irqsave+0x22/0x34
      [    4.660000]  [<40190f12>] ? cpupri_set+0xbb/0xcb
      [    4.660000]  [<405f7151>] ? _spin_unlock_irqrestore+0x23/0x35
      [    4.660000]  [<4018493f>] ? ring_buffer_reset_cpu+0x27/0x51
      [    4.660000]  [<405f7208>] ? _spin_lock_irqsave+0x22/0x34
      [    4.660000]  [<40184962>] ? ring_buffer_reset_cpu+0x4a/0x51
      [    4.660000]  [<405f7151>] ? _spin_unlock_irqrestore+0x23/0x35
      [    4.660000]  [<4018cc29>] ? trace_hardirqs_off+0x1a/0x1c
      [    4.660000]  [<405f7151>] ? _spin_unlock_irqrestore+0x23/0x35
      [    4.660000]  [<40184962>] ? ring_buffer_reset_cpu+0x4a/0x51
      [    4.660000]  [<401850f3>] ? cpumask_next+0x15/0x18
      [    4.660000]  [<4018a41f>] update_max_tr_single+0x9a/0xc4
      [    4.660000]  [<4014e5fe>] ? exit_notify+0x16/0xf2
      [    4.660000]  [<4018cd13>] check_critical_timing+0xcc/0x11e
      [    4.660000]  [<4014e5fe>] ? exit_notify+0x16/0xf2
      [    4.660000]  [<4014e5fe>] ? exit_notify+0x16/0xf2
      [    4.660000]  [<4018cdf1>] stop_critical_timing+0x8c/0x9f
      [    4.660000]  [<4014e5c4>] ? forget_original_parent+0xac/0xd0
      [    4.660000]  [<4018ce3a>] trace_hardirqs_on+0x1a/0x1c
      [    4.660000]  [<4014e5c4>] forget_original_parent+0xac/0xd0
      [    4.660000]  [<4014e5fe>] exit_notify+0x16/0xf2
      [    4.660000]  [<4014e8a5>] do_exit+0x1cb/0x225
      [    4.660000]  [<4015c72b>] ? kthread+0x0/0x69
      [    4.660000]  [<4011f61d>] kernel_thread_helper+0xd/0x10
      [    4.660000] ---[ end trace a7919e7f17c0a725 ]---
      [    4.660164] .. no entries found ..FAILED!
      
      During the irqsoff tracer selftest, we do this:
      
      	/* disable interrupts for a bit */
      	local_irq_disable();
      	udelay(100);
      	local_irq_enable();
      	/* stop the tracing. */
      	tracing_stop();
      	/* check both trace buffers */
      	ret = trace_test_buffer(tr, NULL);
      
      If a callsite performs a new max delay with irqs off just after
      tracing_stop(), update_max_tr_single() -> ring_buffer_swap_cpu() will be
      called with the buffers disabled by tracing_stop(), hence the warning;
      ring_buffer_swap_cpu() then returns -EAGAIN and update_max_tr_single()
      complains.
      
      Fix it by also stopping the tracer itself before stopping tracing
      globally, as sketched below. A similar situation can happen with the
      preemptoff and preemptirqsoff tracers, where we apply the same fix.
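
      A sketch of the ordering after the fix (illustrative; the exact call is
      in the real patch):

      	/* disable interrupts for a bit */
      	local_irq_disable();
      	udelay(100);
      	local_irq_enable();

      	/* stop the tracer itself first, so a late max-latency update
      	   cannot race with the buffers disabled below */
      	trace->stop(tr);

      	/* stop the tracing. */
      	tracing_stop();

      	/* check both trace buffers */
      	ret = trace_test_buffer(tr, NULL);
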
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1237325938-5240-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      49036200
  27. 16 March 2009, 1 commit
    • tracing/ftrace: fix double calls to tracing_start() · ac1d52d0
      Frederic Weisbecker committed
      Impact: fix a warning during preemptirqsoff selftests
      
      When the preemptirqsoff selftest fails, we see the following
      warning:
      
      [    6.050000] Testing tracer preemptirqsoff: .. no entries found ..
      ------------[ cut here ]------------
      [    6.060000] WARNING: at kernel/trace/trace.c:688 tracing_start+0x67/0xd3()
      [    6.060000] Modules linked in:
      [    6.060000] Pid: 1, comm: swapper Tainted: G
      [    6.060000] Call Trace:
      [    6.060000]  [<ffffffff802460ff>] warn_slowpath+0xb1/0x100
      [    6.060000]  [<ffffffff802a8f5b>] ? trace_preempt_on+0x35/0x4b
      [    6.060000]  [<ffffffff802a37fb>] ? tracing_start+0x31/0xd3
      [    6.060000]  [<ffffffff802a37fb>] ? tracing_start+0x31/0xd3
      [    6.060000]  [<ffffffff80271e0b>] ? __lock_acquired+0xe6/0x1f2
      [    6.060000]  [<ffffffff802a37fb>] ? tracing_start+0x31/0xd3
      [    6.060000]  [<ffffffff802a3831>] tracing_start+0x67/0xd3
      [    6.060000]  [<ffffffff802a8ace>] ? irqsoff_tracer_reset+0x2d/0x57
      [    6.060000]  [<ffffffff802a4d1c>] trace_selftest_startup_preemptirqsoff+0x1c8/0x1f1
      [    6.060000]  [<ffffffff802a4798>] register_tracer+0x12f/0x241
      [    6.060000]  [<ffffffff810250d0>] ? init_irqsoff_tracer+0x0/0x53
      [    6.060000]  [<ffffffff8102510b>] init_irqsoff_tracer+0x3b/0x53
      
      This is because, in the failure case, the preemptirqsoff tracer selftest
      calls the tracing_start() function twice:
      
      int
      trace_selftest_startup_preemptirqsoff(struct tracer *trace, struct trace_array *tr)
      {
              if (!ret && !count) {
                      printk(KERN_CONT ".. no entries found ..");
                      ret = -1;
                      tracing_start(); <-----
                      goto out;
              }
              [...]
      out:
              trace->reset(tr);
              tracing_start(); <------
              tracing_max_latency = save_max;
      
              return ret;
      }
      
      Since it is already handled in the out path, we don't need the
      conditional call.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1237159961-7447-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ac1d52d0
  28. 13 March 2009, 2 commits
  29. 10 March 2009, 1 commit
  30. 19 February 2009, 1 commit