1. 14 Oct 2008, 17 commits
    • ftrace: add stack tracer · e5a81b62
      Steven Rostedt committed
      This is another tracer using the ftrace infrastructure, one that
      examines the size of the stack at each function call. If the stack
      use is greater than the previous max, it is recorded.
      
      You can always see (and set) the max stack size seen. Setting it
      to zero will start the recording again. The backtrace is also available.
      
      For example:
      
      # cat /debug/tracing/stack_max_size
      1856
      
      # cat /debug/tracing/stack_trace
      [<c027764d>] stack_trace_call+0x8f/0x101
      [<c021b966>] ftrace_call+0x5/0x8
      [<c02553cc>] clocksource_get_next+0x12/0x48
      [<c02542a5>] update_wall_time+0x538/0x6d1
      [<c0245913>] do_timer+0x23/0xb0
      [<c0257657>] tick_do_update_jiffies64+0xd9/0xf1
      [<c02576b9>] tick_sched_timer+0x4a/0xad
      [<c0250fe6>] __run_hrtimer+0x3e/0x75
      [<c02518ed>] hrtimer_interrupt+0xf1/0x154
      [<c022c870>] smp_apic_timer_interrupt+0x71/0x84
      [<c021b7e9>] apic_timer_interrupt+0x2d/0x34
      [<c0238597>] finish_task_switch+0x29/0xa0
      [<c05abd13>] schedule+0x765/0x7be
      [<c05abfca>] schedule_timeout+0x1b/0x90
      [<c05ab4d4>] wait_for_common+0xab/0x101
      [<c05ab5ac>] wait_for_completion+0x12/0x14
      [<c033cfc3>] blk_execute_rq+0x84/0x99
      [<c0402470>] scsi_execute+0xc2/0x105
      [<c040250a>] scsi_execute_req+0x57/0x7f
      [<c043afe0>] sr_test_unit_ready+0x3e/0x97
      [<c043bbd6>] sr_media_change+0x43/0x205
      [<c046b59f>] media_changed+0x48/0x77
      [<c046b5ff>] cdrom_media_changed+0x31/0x37
      [<c043b091>] sr_block_media_changed+0x16/0x18
      [<c02b9e69>] check_disk_change+0x1b/0x63
      [<c046f4c3>] cdrom_open+0x7a1/0x806
      [<c043b148>] sr_block_open+0x78/0x8d
      [<c02ba4c0>] do_open+0x90/0x257
      [<c02ba869>] blkdev_open+0x2d/0x56
      [<c0296a1f>] __dentry_open+0x14d/0x23c
      [<c0296b32>] nameidata_to_filp+0x24/0x38
      [<c02a1c68>] do_filp_open+0x347/0x626
      [<c02967ef>] do_sys_open+0x47/0xbc
      [<c02968b0>] sys_open+0x23/0x2b
      [<c021aadd>] sysenter_do_call+0x12/0x26
      
      I've tested this on both x86_64 and i386.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e5a81b62
    • ftrace: clean up macro usage · ac8825ec
      Ingo Molnar committed
      Enclose the argument in parentheses (especially since we cast it,
      and a cast is a high-precedence operation).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ac8825ec
    • ftrace: fix build failure · 2d7da80f
      Stephen Rothwell committed
      After disabling FTRACE_MCOUNT_RECORD via a patch, a dormant build
      failure surfaced:
      
       kernel/trace/ftrace.c: In function 'ftrace_record_ip':
       kernel/trace/ftrace.c:416: error: incompatible type for argument 1 of '_spin_lock_irqsave'
       kernel/trace/ftrace.c:433: error: incompatible type for argument 1 of '_spin_lock_irqsave'
      
      Introduced by commit 6dad8e07f4c10b17b038e84d29f3ca41c2e55cd0 ("ftrace:
      add necessary locking for ftrace records").
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2d7da80f
    • ftrace: add necessary locking for ftrace records · 99ecdc43
      Steven Rostedt committed
      The new design of pre-recorded mcounts and updating the code outside of
      kstop_machine has changed the way the records themselves are protected.
      
      This patch uses the ftrace_lock to protect the records. Note, the lock
      still does not need to be taken within calls that are only made via
      kstop_machine, since that code cannot run while the spin lock is held.
      
      Also removed the hash_lock needed for the daemon when MCOUNT_RECORD is
      configured. Also did a slight cleanup of an unused variable.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99ecdc43
    • ftrace: do not init module on ftrace disabled · 00fd61ae
      Steven Rostedt committed
      If one of the self tests of ftrace has disabled the function tracer,
      do not run the code to convert the mcount calls in modules.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      00fd61ae
    • ftrace: fix some mistakes in error messages · 98a983aa
      Frédéric Weisbecker committed
      This patch fixes some mistakes in the tracer's warning messages, shown
      when debugfs fails to create tracing files.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: srostedt@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98a983aa
    • ftrace: dump out ftrace buffers to console on panic · 3f5a54e3
      Steven Rostedt committed
      At OLS there was a lot of interest in being able to have the ftrace
      buffers dumped on panic.  Usually one would expect to use kexec and
      examine the buffers after a new kernel is loaded. But sometimes the
      resources do not permit kdump and kexec, so having an option to still
      see the sequence of events up to the crash is very advantageous.
      
      This patch adds the option to have the ftrace buffers dumped to the
      console in the latency_trace format on a panic. When the option is set,
      the default entries per CPU buffer are lowered to 16384, since the writing
      to the serial (if that is the console) may take an awful long time
      otherwise.
      
      [
       Changes since -v1:
        Got alpine to send correctly (as well as spell check working).
        Removed config option.
        Moved the static variables into ftrace_dump itself.
        Gave printk a log level.
      ]
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3f5a54e3
    • ftrace: ftrace_printk doc moved · 2f2c99db
      Steven Rostedt committed
      Based on Randy Dunlap's suggestion, the ftrace_printk kernel-doc belongs
      with the ftrace_printk macro that should be used, not with the
      __ftrace_printk internal function.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2f2c99db
    • ftrace: printk formatting infrastructure · dd0e545f
      Steven Rostedt committed
      This patch adds a feature that can help kernel developers debug their
      code using ftrace.
      
        int ftrace_printk(const char *fmt, ...);
      
      This records into the ftrace buffer using printf formatting. The entry
      size in the buffer is still a fixed length. A new type has been added
      that allows for more entries to be used for a single recording.
      
      The start of the print is still the same as the other entries.
      
      It returns the number of characters written to the ftrace buffer.
      
      For example:
      
      Having a module with the following code:
      
      static int __init ftrace_print_test(void)
      {
              ftrace_printk("jiffies are %ld\n", jiffies);
              return 0;
      }
      
      Gives me:
      
        insmod-5441  3...1 7569us : ftrace_print_test: jiffies are 4296626666
      
      for the latency_trace file and:
      
                insmod-5441  [03]  1959.370498: ftrace_print_test jiffies are 4296626666
      
      for the trace file.
      
      Note: Only the infrastructure should go into the kernel. It is to help
      facilitate debugging for other kernel developers. Calls to ftrace_printk
      are not intended to be left in the kernel, and should be frowned upon just
      like scattering printks around in the code.
      
      But having this easily at your fingertips helps the debugging go faster
      and bugs be solved quicker.
      
      Maybe later on, we can hook this with markers and have their printf format
      be sucked into ftrace output.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dd0e545f
    • ftrace: new continue entry - separate out from trace_entry · 2e2ca155
      Steven Rostedt committed
      Some tracers will need to work with more than one entry. In order to do this
      the trace_entry structure was split into two fields. One for the start of
      all entries, and one to continue an existing entry.
      
      The trace_entry structure now has a "field" entry that consists of the previous
      content of the trace_entry, and a "cont" entry that is just a string buffer
      the size of the "field" entry.
      
      Thanks to Andrew Morton for suggesting this idea.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2e2ca155
    • ftrace: remove old pointers to mcount · fed1939c
      Steven Rostedt committed
      When an mcount pointer is recorded into a table, it is used to add or
      remove calls to mcount (replacing them with nops). If the code is removed
      via removing a module, the pointers still exist.  When modifying the code,
      a check is always made to make sure the code being replaced is the code
      expected. In other words, the code being replaced is compared to what
      it is expected to be before being replaced.
      
      There is a very small chance that the code being replaced just happens
      to look like code that calls mcount (very small since the call to mcount
      is relative). To remove this chance, this patch adds ftrace_release to
      allow module unloading to remove the pointers to mcount within the module.
      
      Another change for init calls is made to not trace calls marked with
      __init. The tracing can not be started until after init is done anyway.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fed1939c
    • ftrace: do not show freed records in available_filter_functions · a9fdda33
      Steven Rostedt committed
      It seems that freed records can appear in the available_filter_functions
      list. This patch fixes that.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a9fdda33
    • ftrace: enable mcount recording for modules · 90d595fe
      Steven Rostedt committed
      This patch enables the loading of the __mcount_loc section of modules and
      changing all the callers of mcount into nops.
      
      The modification is done before the init_module function is called, so
      again, we do not need to use kstop_machine to make these changes.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      90d595fe
    • ftrace: mcount call site on boot nops core · 68bf21aa
      Steven Rostedt committed
      This is the infrastructure for converting the mcount call sites
      recorded in the __mcount_loc section into nops on boot. It also allows
      for using these sites to enable tracing as normal. When the __mcount_loc
      section is used, the "ftraced" kernel thread is disabled.
      
      This uses the current infrastructure to record the mcount call sites
      as well as convert them to nops. The mcount function is kept as a stub
      on boot up and not converted to the ftrace_record_ip function. We use the
      ftrace_record_ip to only record from the table.
      
      This patch does not handle modules. That comes with a later patch.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      68bf21aa
    • ftrace: create __mcount_loc section · 8da3821b
      Steven Rostedt committed
      This patch creates a section in the kernel called "__mcount_loc".
      This will hold a list of pointers to the mcount relocation for
      each call site of mcount.
      
      For example:
      
      objdump -dr init/main.o
      [...]
      Disassembly of section .text:
      
      0000000000000000 <do_one_initcall>:
         0:   55                      push   %rbp
      [...]
      000000000000017b <init_post>:
       17b:   55                      push   %rbp
       17c:   48 89 e5                mov    %rsp,%rbp
       17f:   53                      push   %rbx
       180:   48 83 ec 08             sub    $0x8,%rsp
       184:   e8 00 00 00 00          callq  189 <init_post+0xe>
                              185: R_X86_64_PC32      mcount+0xfffffffffffffffc
      [...]
      
      We will add a section to point to each function call.
      
         .section __mcount_loc,"a",@progbits
      [...]
         .quad .text + 0x185
      [...]
      
      The offset of the mcount call site in init_post is an offset from
      the start of the section, not from the start of the function init_post.
      The mcount relocation is at call site 0x185 from the start of the
      .text section.
      
        .text + 0x185  == init_post + 0xa
      
      We need a way to add this __mcount_loc section such that we do not
      lose the relocations after the final link.  The .text section here will
      be merged with all other .text sections after the final link and the
      offsets will be meaningless.  We need to keep track of where these
      .text sections are.
      
      To do this, we use the start of the first function in the section,
      do_one_initcall.  We can make a tmp.s file with this function as a
      reference to the start of the .text section.
      
         .section __mcount_loc,"a",@progbits
      [...]
         .quad do_one_initcall + 0x185
      [...]
      
      Then we can compile the tmp.s into a tmp.o
      
        gcc -c tmp.s -o tmp.o
      
      And link it back into main.o.
      
        ld -r main.o tmp.o -o tmp_main.o
        mv tmp_main.o main.o
      
      But we have a problem.  What happens if the first function in a section
      is not exported, and is a static function?  The linker will not let
      tmp.o use it.  This case exists in main.o as well.
      
      Disassembly of section .init.text:
      
      0000000000000000 <set_reset_devices>:
         0:   55                      push   %rbp
         1:   48 89 e5                mov    %rsp,%rbp
         4:   e8 00 00 00 00          callq  9 <set_reset_devices+0x9>
                              5: R_X86_64_PC32        mcount+0xfffffffffffffffc
      
      The first function in .init.text is a static function.
      
      00000000000000a8 t __setup_set_reset_devices
      000000000000105f t __setup_str_set_reset_devices
      0000000000000000 t set_reset_devices
      
      The lowercase 't' means that set_reset_devices is local and is not
      exported. If we simply try to link tmp.o with its reference to
      set_reset_devices, we end up with two symbols: one local and one
      undefined.
      
       .section __mcount_loc,"a",@progbits
       .quad set_reset_devices + 0x10
      
      00000000000000a8 t __setup_set_reset_devices
      000000000000105f t __setup_str_set_reset_devices
      0000000000000000 t set_reset_devices
                       U set_reset_devices
      
      We still have an undefined reference to set_reset_devices, and if we try
      to compile the kernel, we will end up with an undefined reference to
      set_reset_devices, or even worse, it could be exported someplace else,
      and then we would have a reference to the wrong location.
      
      To handle this case, we make an intermediate step using objcopy.
      We convert set_reset_devices into a global exported symbol before linking
      it with tmp.o and set it back afterwards.
      
      00000000000000a8 t __setup_set_reset_devices
      000000000000105f t __setup_str_set_reset_devices
      0000000000000000 T set_reset_devices
      
      00000000000000a8 t __setup_set_reset_devices
      000000000000105f t __setup_str_set_reset_devices
      0000000000000000 T set_reset_devices
      
      00000000000000a8 t __setup_set_reset_devices
      000000000000105f t __setup_str_set_reset_devices
      0000000000000000 t set_reset_devices
      
      Now we have a section in main.o called __mcount_loc that we can place
      somewhere in the kernel using vmlinux.ld.S and access it to convert
      all these locations that call mcount into nops before starting SMP
      and thus, eliminating the need to do this with kstop_machine.
      
      Note: a well-documented Perl script (scripts/recordmcount.pl) is used
      to do all of this in one location.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8da3821b
    • tracing: clean up tracepoints kconfig structure · 5f87f112
      Ingo Molnar committed
      Do not expose users to CONFIG_TRACEPOINTS - tracers can select it
      just fine.
      
      Update ftrace to select CONFIG_TRACEPOINTS.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5f87f112
    • ftrace: port to tracepoints · b07c3f19
      Mathieu Desnoyers committed
      Porting the trace_mark() used by ftrace to tracepoints. (cleanup)
      
      Changelog :
      - Change error messages : marker -> tracepoint
      
      [ mingo@elte.hu: conflict resolutions ]
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b07c3f19
  2. 29 Sep 2008, 1 commit
    • hrtimer: prevent migration of per CPU hrtimers · ccc7dadf
      Thomas Gleixner committed
      Impact: per CPU hrtimers can be migrated from a dead CPU
      
      The hrtimer code has no knowledge about per CPU timers, but we need to
      prevent the migration of such timers and warn when such a timer is
      active at migration time.
      
      Explicitly mark the timers as per CPU and use a more understandable
      mode descriptor for the interrupt-safe unlocked callback mode, which
      is used by hrtimer_sleeper and the scheduler code.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ccc7dadf
  3. 28 Jul 2008, 2 commits
  4. 26 Jul 2008, 3 commits
  5. 24 Jul 2008, 1 commit
  6. 19 Jul 2008, 2 commits
    • cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr · 65c01184
      Mike Travis committed
        * This patch replaces the dangerous lvalue version of cpumask_of_cpu
          with new cpumask_of_cpu_ptr macros.  These are patterned after the
          node_to_cpumask_ptr macros.
      
          In general terms, if there is a cpumask_of_cpu_map[] then a pointer to
          the cpumask_of_cpu_map[cpu] entry is used.  The cpumask_of_cpu_map
          is provided when there is a large NR_CPUS count, reducing
          greatly the amount of code generated and stack space used for
          cpumask_of_cpu().  The pointer to the cpumask_t value is needed for
          calling set_cpus_allowed_ptr() to reduce the amount of stack space
          needed to pass the cpumask_t value.
      
          If there isn't a cpumask_of_cpu_map[], then a temporary variable is
          declared and filled in with value from cpumask_of_cpu(cpu) as well as
          a pointer variable pointing to this temporary variable.  Afterwards,
          the pointer is used to reference the cpumask value.  The compiler
          will optimize out the extra dereference through the pointer as well
          as the stack space used for the pointer, resulting in identical code.
      
          A good example of the orthogonal usages is in net/sunrpc/svc.c:
      
      	case SVC_POOL_PERCPU:
      	{
      		unsigned int cpu = m->pool_to[pidx];
      		cpumask_of_cpu_ptr(cpumask, cpu);
      
      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, cpumask);
      		return 1;
      	}
      	case SVC_POOL_PERNODE:
      	{
      		unsigned int node = m->pool_to[pidx];
      		node_to_cpumask_ptr(nodecpumask, node);
      
      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, nodecpumask);
      		return 1;
      	}
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      65c01184
    • ftrace: only trace preempt off with preempt tracer · 1e01cb0c
      Steven Rostedt committed
      When PREEMPT_TRACER and IRQSOFF_TRACER are both configured and the
      irqsoff tracer is running, the preempt_off sections might also be traced.
      
      Thanks to Andrew Morton for pointing out my mistake of spin_lock
      disabling interrupts while he was reviewing ftrace.txt. It seems that
      the example I used actually hit this bug.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1e01cb0c
  7. 18 Jul 2008, 1 commit
    • ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 check_flags (ftrace) · e59494f4
      Steven Rostedt committed
      On Wed, 16 Jul 2008, Vegard Nossum wrote:
      
      > When booting 4d3702b6, I got this huge thing:
      >
      > Testing tracer wakeup: <4>------------[ cut here ]------------
      > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
      > Modules linked in:
      > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
      >  [<c015c349>] warn_on_slowpath+0x59/0xb0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c012d800>] ? native_read_tsc+0x0/0x20
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c0182fbc>] ? __lock_acquire+0x2cc/0x1120
      >  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
      >  [<c01276af>] ? mcount_call+0x5/0xa
      >  [<c017ff53>] check_flags+0x123/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01a8d45>] ? ftrace_record_ip+0xf5/0x220
      >  [<c02d5413>] ? debug_locks_off+0x3/0x50
      >  [<c01a8d45>] ftrace_record_ip+0xf5/0x220
      >  [<c01276af>] mcount_call+0x5/0xa
      >  [<c02d5418>] ? debug_locks_off+0x8/0x50
      >  [<c017ff27>] check_flags+0xf7/0x160
      >  [<c0183e61>] lock_acquire+0x51/0xd0
      >  [<c01276c6>] ? ftrace_call+0x5/0x8
      >  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
      >  [<c01affcd>] ? wakeup_tracer_call+0x6d/0xf0
      >  [<c01625e2>] ? _local_bh_enable+0x62/0xb0
      >  [<c0158ddd>] ? sub_preempt_count+0xd/0xf0
      >  [<c01affcd>] wakeup_tracer_call+0x6d/0xf0
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c01afff1>] ? wakeup_tracer_call+0x91/0xf0
      >  [<c01276c6>] ftrace_call+0x5/0x8
      >  [<c0162724>] ? __do_softirq+0xf4/0x110
      >  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
      >  [<c01625e2>] _local_bh_enable+0x62/0xb0
      >  [<c0162724>] __do_softirq+0xf4/0x110
      >  [<c01627ed>] do_softirq+0xad/0xb0
      >  [<c0162a15>] irq_exit+0xa5/0xb0
      >  [<c013a506>] smp_apic_timer_interrupt+0x66/0xa0
      >  [<c02d3fac>] ? trace_hardirqs_off_thunk+0xc/0x10
      >  [<c0127449>] apic_timer_interrupt+0x2d/0x34
      >  [<c018007b>] ? find_usage_backwards+0xb/0xf0
      >  [<c0613a09>] ? _spin_unlock_irqrestore+0x69/0x80
      >  [<c014ef32>] tg_shares_up+0x132/0x1d0
      >  [<c014d2a2>] walk_tg_tree+0x62/0xa0
      >  [<c014ee00>] ? tg_shares_up+0x0/0x1d0
      >  [<c014a860>] ? tg_nop+0x0/0x10
      >  [<c015499d>] update_shares+0x5d/0x80
      >  [<c0154a2f>] try_to_wake_up+0x6f/0x280
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c0154c94>] wake_up_process+0x14/0x20
      >  [<c01725f6>] kthread_create+0x66/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c0195320>] ? __stop_machine_run+0x30/0xb0
      >  [<c0195340>] __stop_machine_run+0x50/0xb0
      >  [<c0195400>] ? do_stop+0x0/0x200
      >  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
      >  [<c061242d>] ? mutex_unlock+0xd/0x10
      >  [<c01953cc>] stop_machine_run+0x2c/0x60
      >  [<c01a94d3>] unregister_ftrace_function+0x103/0x180
      >  [<c01b0517>] stop_wakeup_tracer+0x17/0x60
      >  [<c01b056f>] wakeup_tracer_ctrl_update+0xf/0x30
      >  [<c01ab8d5>] trace_selftest_startup_wakeup+0xb5/0x130
      >  [<c01ab950>] ? trace_wakeup_test_thread+0x0/0x70
      >  [<c01aadf5>] register_tracer+0x135/0x1b0
      >  [<c0877d02>] init_wakeup_tracer+0xd/0xf
      >  [<c085d437>] kernel_init+0x1a9/0x2ce
      >  [<c061397b>] ? _spin_unlock_irq+0x3b/0x60
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c0877cf5>] ? init_wakeup_tracer+0x0/0xf
      >  [<c0182646>] ? trace_hardirqs_on_caller+0x126/0x180
      >  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
      >  [<c01269c8>] ? restore_nocheck_notrace+0x0/0xe
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c085d28e>] ? kernel_init+0x0/0x2ce
      >  [<c01275fb>] kernel_thread_helper+0x7/0x10
      >  =======================
      > ---[ end trace a7919e7f17c0a725 ]---
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > irq event stamp: 579530
      > hardirqs last  enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
      > hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
      > softirqs last  enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
      > softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
      > PASSED
      >
      > Incidentally, the kernel also hung while I was typing in this report.
      
      Things get weird between lockdep and ftrace because ftrace can be called
      within lockdep internal code (via the mcount pointer) and lockdep can be
      called within ftrace (via spin_locks).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e59494f4
  8. 11 Jul 2008, 8 commits
    • ftrace: build fix for ftraced_suspend · b2613e37
      Ingo Molnar committed
      fix:
      
       kernel/trace/ftrace.c:1615: error: 'ftraced_suspend' undeclared (first use in this function)
       kernel/trace/ftrace.c:1615: error: (Each undeclared identifier is reported only once
       kernel/trace/ftrace.c:1615: error: for each function it appears in.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b2613e37
    • ftrace: separate out the function enabled variable · 60bc0800
      Steven Rostedt committed
      Currently the function tracer uses the global tracer_enabled variable
      that is used to keep track of whether the tracer is enabled or not. The
      function tracing startup needs to be separated out, otherwise the
      internal happenings of the tracer startup are also recorded.
      
      This patch creates a ftrace_function_enabled variable to allow the
      starting of the function traces to happen after everything has been
      started.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      60bc0800
    • ftrace: add ftrace_kill_atomic · a2bb6a3d
      Steven Rostedt committed
      It has been suggested that I add a way to disable the function tracer
      on an oops. This code adds a ftrace_kill_atomic. It is not meant to be
      used in normal situations. It will disable the ftrace tracer, but will
      not perform the nice shutdown that requires scheduling.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a2bb6a3d
    • ftrace: use current CPU for function startup · 26bc83f4
      Steven Rostedt committed
      This is more of a clean up. Currently the function tracer initializes the
      tracer with whichever CPU was last used for tracing. This value isn't
      really useful for function tracing, but at least it should be something
      other than a random number.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      26bc83f4
    • ftrace: start wakeup tracing after setting function tracer · ad591240
      Steven Rostedt committed
      Enabling the wakeup tracer before enabling the function tracing causes
      some strange results due to the dynamic enabling of the functions.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ad591240
    • ftrace: check proper config for preempt type · b5c21b45
      Steven Rostedt committed
      There is no CONFIG_PREEMPT_DESKTOP. Use the proper entry CONFIG_PREEMPT.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b5c21b45
    • ftrace: define function trace nop · 001b6767
      Steven Rostedt committed
      When CONFIG_FTRACE is not enabled, tracing_start_function_trace
      and tracing_stop_function_trace should be nops.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      001b6767
    • ftrace: move sched_switch enable after markers · 007c05d4
      Steven Rostedt committed
      We have two markers now that are enabled on sched_switch. One that records
      the context switching and the other that records task wake ups. Currently
      we enable the tracing first and then set the markers. This causes some
      confusing traces:
      
      # tracer: sched_switch
      #
      #           TASK-PID   CPU#    TIMESTAMP  FUNCTION
      #              | |      |          |         |
             trace-cmd-3973  [00]   115.834817:   3973:120:R   +     3:  0:S
             trace-cmd-3973  [01]   115.834910:   3973:120:R   +     6:  0:S
             trace-cmd-3973  [02]   115.834910:   3973:120:R   +     9:  0:S
             trace-cmd-3973  [03]   115.834910:   3973:120:R   +    12:  0:S
             trace-cmd-3973  [02]   115.834910:   3973:120:R   +     9:  0:S
                <idle>-0     [02]   115.834910:      0:140:R ==>  3973:120:R
      
      Here we see that trace-cmd with PID 3973 wakes up task 9 but the next line
      shows the idle task doing a context switch to task 3973.
      
      Enabling the tracing _after_ the markers are set creates a much saner
      output:
      
      # tracer: sched_switch
      #
      #           TASK-PID   CPU#    TIMESTAMP  FUNCTION
      #              | |      |          |         |
                <idle>-0     [02]  7922.634225:      0:140:R ==>  4790:120:R
             trace-cmd-4789  [03]  7922.634225:      0:140:R   +  4790:120:R
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      007c05d4
  9. 03 Jul 2008, 1 commit
  10. 24 Jun 2008, 3 commits
    • ftrace: avoid modifying kprobe'd records · f22f9a89
      Abhishek Sagar committed
      Avoid modifying the mcount call-site if there is a kprobe installed on
      it. These records are not marked as failed, however; this allows the
      filter rules on them to remain up-to-date. Whenever the kprobe on the
      corresponding record is removed, the record gets updated as normal.
      Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f22f9a89
    • ftrace: freeze kprobe'd records · ecea656d
      Abhishek Sagar committed
      Let records identified as being kprobe'd be marked as "frozen". The
      trouble with records which have a kprobe installed on their mcount
      call-site is that they don't get updated. So if such a function, which
      is currently being traced, gets its tracing disabled due to a new
      filter rule (or because it was added to the notrace list), it won't be
      updated and will continue being traced. This patch allows scanning of
      all frozen records during tracing to check if they should be traced.
      Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ecea656d
    • ftrace: store mcount address in rec->ip · 395a59d0
      Abhishek Sagar committed
      Record the address of the mcount call-site. Currently all archs except
      sparc64 record the address of the instruction following the mcount
      call-site. Some general cleanups are entailed. Storing mcount addresses
      in rec->ip enables looking them up in the kprobe hash table later on to
      check if they're kprobe'd.
      Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
      Cc: davem@davemloft.net
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      395a59d0
  11. 16 Jun 2008, 1 commit