1. 27 May 2008, 1 commit
    • ftrace: powerpc clean ups · ccbfac29
      Committed by Steven Rostedt
      This patch cleans up the ftrace code in PowerPC based on the comments from
      Michael Ellerman.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: proski@gnu.org
      Cc: a.p.zijlstra@chello.nl
      Cc: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: linuxppc-dev@ozlabs.org
      Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
      Cc: paulus@samba.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ccbfac29
  2. 24 May 2008, 19 commits
    • ftrace: add have dynamic ftrace config for archs · 677aa9f7
      Committed by Steven Rostedt
      Now that ftrace is being ported to other architectures, it has become
      apparent that DYNAMIC_FTRACE is dependent on whether or not that
      architecture implements dynamic ftrace. FTRACE itself may be ported to
      an architecture without porting dynamic ftrace.
      
      This patch adds HAVE_DYNAMIC_FTRACE so that an architecture can port
      ftrace without also having to port the dynamic aspect (a sketch of the
      resulting split follows below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      677aa9f7
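      A minimal sketch of the split this config gives the generic code,
      assuming the usual Kconfig-driven preprocessor guards; the helper name
      ftrace_arch_patch_sites() is hypothetical, not this commit's API:

        /* An arch that only provides an mcount entry point gets plain
         * CONFIG_FTRACE; the code-patching paths are built only when the
         * arch also advertises HAVE_DYNAMIC_FTRACE and the user enables
         * CONFIG_DYNAMIC_FTRACE. */
        #ifdef CONFIG_DYNAMIC_FTRACE
        int ftrace_arch_patch_sites(void);          /* needs arch patching code */
        #else
        static inline int ftrace_arch_patch_sites(void) { return 0; }  /* no-op */
        #endif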
    • ftrace: use the new kbuild CFLAGS_REMOVE for x86/kernel directory · 7fa09f24
      Committed by Steven Rostedt
      This patch removes the ad-hoc Makefile workaround and instead uses the
      CFLAGS_REMOVE macro in the x86/kernel directory.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      7fa09f24
    • ftrace: support for PowerPC · 4e491d14
      Committed by Steven Rostedt
      This patch adds full support for ftrace for PowerPC (both 64 and 32 bit).
      This includes dynamic tracing and function filtering.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4e491d14
    • ftrace: fix mcount export bug · 37135677
      Committed by Ingo Molnar
      David S. Miller noticed the following bug: the -pg instrumentation
      function callback is named differently on each platform. On x86 it
      is mcount, on sparc it is _mcount, so the export does not belong in the
      generic kernel/trace/ftrace.c; move it to the x86 code (see the sketch
      below).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      37135677
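      A minimal sketch of what an arch-local export looks like; the
      surrounding file is omitted, and on sparc the equivalent export would
      use the _mcount name instead:

        #include <linux/module.h>

        /* mcount itself is implemented in assembly; declare it here only so
         * that EXPORT_SYMBOL() has a symbol to attach to. */
        extern void mcount(void);
        EXPORT_SYMBOL(mcount);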
    • sparc64: add ftrace support. · d05f5f99
      Committed by David Miller
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      d05f5f99
    • x86: fix SMP alternatives: use mutex instead of spinlock, text_poke is sleepable · 2f1dafe5
      Committed by Pekka Paalanen
      text_poke() can sleep, so the SMP alternatives code must serialize with
      a mutex rather than a spinlock (see the sketch below). The original fix
      is by Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      2f1dafe5
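      The core of the change is swapping a spinning lock for a sleeping one
      around the text_poke() calls. A minimal sketch, with an illustrative
      lock and function name rather than the actual ones:

        #include <linux/mutex.h>
        #include <asm/alternative.h>    /* text_poke() */

        /* text_poke() may sleep, so the lock serializing SMP alternative
         * patching must be a mutex, not a spinlock held with IRQs off. */
        static DEFINE_MUTEX(smp_alt_mutex);     /* was a spinlock */

        static void patch_one_site(void *addr, const void *opcode, size_t len)
        {
                mutex_lock(&smp_alt_mutex);     /* sleeping lock is fine here */
                text_poke(addr, opcode, len);
                mutex_unlock(&smp_alt_mutex);
        }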
    • x86_64: fix kernel rodata NX setting · 72b59d67
      Committed by Pekka Paalanen
      Without CONFIG_DYNAMIC_FTRACE, mark_rodata_ro() would mark a wrong
      number of pages as no-execute. The bug was introduced in the patch
      "ftrace: dont write protect kernel text". The symptom was machine reboot
      after a CPU hotplug.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      72b59d67
    • x86: add a list for custom page fault handlers. · 86069782
      Committed by Pekka Paalanen
      Provides kernel modules a way to register custom page fault handlers.
      On every page fault this will call a list of registered functions. The
      functions may handle the fault and force do_page_fault() to return
      immediately.
      
      This functionality is similar to the now removed page fault notifiers.
      Custom page fault handlers are used by debugging and reverse engineering
      tools. Mmiotrace is one such tool and a patch to add it into the tree
      will follow.
      
      The custom page fault handlers are called earlier in do_page_fault()
      than the page fault notifiers were (a registration sketch follows below).
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      86069782
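      A minimal sketch of the registration idea, with illustrative names and
      with locking in the fault path simplified away (the real interface has
      its own struct and accessors):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* A handler returns nonzero if it handled the fault, which makes
         * do_page_fault() return immediately. */
        struct custom_pf_handler {
                struct list_head list;
                int (*handler)(unsigned long address, unsigned long error_code);
        };

        static LIST_HEAD(pf_handlers);
        static DEFINE_SPINLOCK(pf_handlers_lock);

        void register_custom_pf_handler(struct custom_pf_handler *h)
        {
                spin_lock(&pf_handlers_lock);
                list_add(&h->list, &pf_handlers);
                spin_unlock(&pf_handlers_lock);
        }

        /* Called early in do_page_fault(), before the normal VMA handling. */
        static int call_custom_pf_handlers(unsigned long address,
                                           unsigned long error_code)
        {
                struct custom_pf_handler *h;

                list_for_each_entry(h, &pf_handlers, list)
                        if (h->handler(address, error_code))
                                return 1;
                return 0;
        }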
    • ftrace: dont write protect kernel text · 8f0f996e
      Committed by Steven Rostedt
      Dynamic ftrace cannot work when the kernel text is write protected.
      This patch keeps the kernel text from being write protected when
      dynamic ftrace is in place (see the sketch below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8f0f996e
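      A minimal sketch of the intent; the function name is illustrative and
      the real change lives in the x86 mark_rodata_ro() paths:

        /* With DYNAMIC_FTRACE the mcount call sites in the kernel text must
         * stay writable so they can be patched later, so skip the text range
         * when applying write protection. */
        static void protect_kernel_text(unsigned long start, unsigned long end)
        {
        #ifndef CONFIG_DYNAMIC_FTRACE
                set_memory_ro(start, (end - start) >> PAGE_SHIFT);
        #endif
        }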
    • ftrace: fix the fault label in updating code · a56be3fe
      Committed by Steven Rostedt
      The fault label that the code-update path jumps to on a fault was
      misplaced, preventing the fault from being recorded.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a56be3fe
    • ftrace: fix kexec · f43fdad8
      Committed by Ingo Molnar
      Disable the tracer while kexec pulls the rug out from under the old
      kernel (a sketch of the pattern follows below).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f43fdad8
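      A minimal sketch of the pattern, purely illustrative: the helper names
      below are hypothetical and the real hook point sits in the kexec path:

        /* Make sure no tracer callback fires while kexec tears down the old
         * kernel; save the state so a failed kexec could restore it. */
        static int saved_tracing_state;

        static void tracing_quiesce_for_kexec(void)
        {
                saved_tracing_state = tracer_is_enabled();   /* hypothetical */
                tracer_disable();                            /* hypothetical */
        }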
    • ftrace: use dynamic patching for updating mcount calls · d61f82d0
      Committed by Steven Rostedt
      This patch replaces the indirect call to the mcount function
      pointer with a direct call that will be patched by the
      dynamic ftrace routines.
      
      On boot up, the mcount function calls the ftrace_stub function.
      When the dynamic ftrace code is initialized, ftrace_stub is replaced
      with a call to ftrace_record_ip, which records the instruction
      pointers of the locations that call it.
      
      Later, the ftraced daemon will call kstop_machine and patch all
      the locations to nops.
      
      When ftrace is enabled, the original calls to mcount are patched to
      call ftrace_caller, which does a direct call to the registered ftrace
      function. This direct call is also patched when the function that
      should be called is updated.

      All patching is performed by a kstop_machine routine to prevent the
      race conditions associated with modifying code on the fly (a sketch
      of the patch cycle follows below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      d61f82d0
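      A minimal sketch of the per-call-site patch cycle described above; the
      record layout and helpers are simplified stand-ins for the real
      dyn_ftrace machinery:

        #include <linux/string.h>

        #define CALL_INSN_SIZE 5        /* x86: e8 + rel32 */

        struct call_site {
                unsigned long ip;       /* address of the call instruction */
                int enabled;            /* currently calling ftrace_caller? */
        };

        /* Runs under kstop_machine, so no other CPU can execute the site
         * while its five bytes are rewritten. */
        static void patch_site(struct call_site *rec, const unsigned char *insn)
        {
                memcpy((void *)rec->ip, insn, CALL_INSN_SIZE);
        }

        static void update_site(struct call_site *rec, int enable,
                                const unsigned char *nop,
                                const unsigned char *call_ftrace_caller)
        {
                if (enable && !rec->enabled) {
                        patch_site(rec, call_ftrace_caller);  /* tracing on */
                        rec->enabled = 1;
                } else if (!enable && rec->enabled) {
                        patch_site(rec, nop);                 /* back to a nop */
                        rec->enabled = 0;
                }
        }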
    • ftrace: move memory management out of arch code · 3c1720f0
      Committed by Steven Rostedt
      This patch moves the memory management of the ftrace records out of
      the arch code and into the generic code, making the arch code simpler.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3c1720f0
    • ftrace: use nops instead of jmp · dfa60aba
      Committed by Steven Rostedt
      This patch overwrites the call to mcount with nops instead of a jmp
      over the mcount call (the byte sequences are sketched below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      dfa60aba
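      A sketch of the byte sequences involved on x86, where the 5-byte call
      is overwritten in place; the nop encoding below is just one valid
      choice, not necessarily the one the real code selects at run time:

        /* call <rel32>      : e8 xx xx xx xx   (5 bytes)
         * previous approach : eb 03 ...        (2-byte jmp over the call,
         *                                       3 stale bytes left behind)
         * this patch        : one 5-byte nop, nothing left half-patched    */
        static const unsigned char ftrace_jmp[2]  = { 0xeb, 0x03 };
        static const unsigned char ftrace_nop5[5] =
                { 0x0f, 0x1f, 0x44, 0x00, 0x00 };   /* nopl 0x0(%rax,%rax,1) */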
    • ftrace: dynamic enabling/disabling of function calls · 3d083395
      Committed by Steven Rostedt
      This patch adds a feature that dynamically replaces the ftrace call
      sites with jmps, allowing a kernel with ftrace configured to run as
      fast as it would without it configured.

      The way this works is that on bootup (if ftrace is enabled), an
      ftrace function is registered to record the instruction pointer of
      all places that call the function.
      
      Later, if there's still any code to patch, a kthread is awoken
      (rate limited to at most once a second) that performs a stop_machine,
      and replaces all the code that was called with a jmp over the call
      to ftrace. It only replaces what was found the previous time. Typically
      the system reaches equilibrium quickly after bootup and there's no code
      patching needed at all.
      
      e.g.
      
        call ftrace  /* 5 bytes */
      
      is replaced with
      
        jmp 3f  /* jmp is 2 bytes and we jump 3 forward */
      3:
      
      When we want to enable ftrace for function tracing, the IP recording
      is removed, and stop_machine is called again to replace all the
      locations that were recorded back to the call to ftrace.  When it is
      disabled, we replace the code back to the jmp.
      
      Allocation is done by the kthread. If the ftrace recording function is
      called, and we don't have any record slots available, then we simply
      skip that call. Once a second a new page (if needed) is allocated for
      recording new ftrace function calls.  A large batch is allocated at
      boot up to get most of the calls there.
      
      Because we do this via stop_machine, we don't have to worry about
      another CPU executing an ftrace call site as we modify it. But we do
      need to worry about NMIs, so all functions that might be called from
      NMI context must be annotated with notrace_nmi. When this code is
      configured in, the NMI code will not call into the tracer (a sketch
      of the recording side follows below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3d083395
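      A minimal sketch of the recording side described above, with locking
      omitted and illustrative names standing in for the real dyn_ftrace
      structures:

        #include <linux/kthread.h>
        #include <linux/wait.h>
        #include <linux/sched.h>

        #define RECORDS_PER_BATCH 512           /* illustrative pool size */

        struct ip_record { unsigned long ip; };

        static struct ip_record pool[RECORDS_PER_BATCH];
        static int pool_used;
        static DECLARE_WAIT_QUEUE_HEAD(ftraced_waitq);

        /* Called (indirectly) from mcount for not-yet-patched call sites. */
        static void record_ip(unsigned long ip)
        {
                if (pool_used >= RECORDS_PER_BATCH)
                        return;                 /* no slot free: just skip it */
                pool[pool_used++].ip = ip;
                wake_up(&ftraced_waitq);
        }

        /* The daemon does the allocation and, under stop_machine, the
         * patching; it runs at most about once per second. */
        static int ftraced(void *unused)
        {
                while (!kthread_should_stop()) {
                        wait_event_interruptible(ftraced_waitq, pool_used > 0);
                        /* ... stop_machine(): patch recorded sites to jmps,
                         *     refill the record pool if it ran dry ... */
                        schedule_timeout_interruptible(HZ);     /* rate limit */
                }
                return 0;
        }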
    • ftrace: trace preempt off critical timings · 6cd8a4bb
      Committed by Steven Rostedt
      Add preempt off timings. A lot of kernel core code is taken from the RT patch
      latency trace that was written by Ingo Molnar.
      
      This adds "preemptoff" and "preemptirqsoff" to /debugfs/tracing/available_tracers
      
      Now instead of just tracing irqs off, preemption off can be selected
      to be recorded.
      
      When this is selected, it shares the same files as the irqs off
      timings. One can trace preemption off, irqs off, or the periods when
      either of the two is off.
      
      By echoing "preemptoff" into /debugfs/tracing/current_tracer, only
      preempt off is recorded. "irqsoff" will only record the time irqs are
      disabled, while "preemptirqsoff" will take the total time irqs or
      preemption are disabled. Runtime switching of these options is now
      supported by simply echoing the appropriate tracer name into
      /debugfs/tracing/current_tracer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6cd8a4bb
    • ftrace: trace irq disabled critical timings · 81d68a96
      Committed by Steven Rostedt
      This patch adds latency tracing for critical timings
      (how long interrupts are disabled for).
      
       "irqsoff" is added to /debugfs/tracing/available_tracers
      
      Note:
        tracing_max_latency
          also holds the max latency for irqsoff (in usecs).
         (defaults to a large number, so one must reset it to start latency tracing)
      
        tracing_thresh
          threshold (in usecs): always print out if irqs are detected to be
          off longer than stated here.
          If tracing_thresh is non-zero, then tracing_max_latency
          is ignored (a sketch of this check follows below).
      
      Here's an example of a trace with ftrace_enabled = 0
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 100 us, #3/3, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1d.s3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1d.s3  100us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1d.s3  100us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      
      And this is a trace with ftrace_enabled == 1
      
      =======
      preemption latency trace v1.1.5 on 2.6.24-rc7
      --------------------------------------------------------------------
       latency: 102 us, #12/12, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
          -----------------
          | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
          -----------------
       => started at: _spin_lock_irqsave+0x2a/0xb7
       => ended at:   _spin_unlock_irqrestore+0x32/0x5f
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
       swapper-0     1dNs3    0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_read_phy_reg+0x16/0x225 [e1000] (e1000_update_stats+0x5e2/0x64c [e1000])
       swapper-0     1dNs3   46us : e1000_swfw_sync_acquire+0x10/0x99 [e1000] (e1000_read_phy_reg+0x49/0x225 [e1000])
       swapper-0     1dNs3   46us : e1000_get_hw_eeprom_semaphore+0x12/0xa6 [e1000] (e1000_swfw_sync_acquire+0x36/0x99 [e1000])
       swapper-0     1dNs3   47us : __const_udelay+0x9/0x47 (e1000_read_phy_reg+0x116/0x225 [e1000])
       swapper-0     1dNs3   47us+: __delay+0x9/0x50 (__const_udelay+0x45/0x47)
       swapper-0     1dNs3   97us : preempt_schedule+0xc/0x84 (__delay+0x4e/0x50)
       swapper-0     1dNs3   98us : e1000_swfw_sync_release+0xc/0x55 [e1000] (e1000_read_phy_reg+0x211/0x225 [e1000])
       swapper-0     1dNs3   99us+: e1000_put_hw_eeprom_semaphore+0x9/0x35 [e1000] (e1000_swfw_sync_release+0x50/0x55 [e1000])
       swapper-0     1dNs3  101us : _spin_unlock_irqrestore+0xe/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
       swapper-0     1dNs3  102us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)
      
      vim:ft=help
      =======
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      81d68a96
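      A minimal sketch of the decision made when interrupts are re-enabled,
      using the two knobs described above; bookkeeping of the actual trace
      buffer is omitted:

        /* delta, tracing_thresh and tracing_max_latency are in usecs. */
        static unsigned long tracing_max_latency = ~0UL;  /* "large" default */
        static unsigned long tracing_thresh;              /* 0: threshold off */

        static int irqsoff_should_report(unsigned long delta)
        {
                if (tracing_thresh)                /* threshold set: max ignored */
                        return delta > tracing_thresh;
                if (delta > tracing_max_latency) { /* new maximum: keep trace */
                        tracing_max_latency = delta;
                        return 1;
                }
                return 0;
        }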
    • ftrace: add basic support for gcc profiler instrumentation · 16444a8a
      Committed by Arnaldo Carvalho de Melo
      If CONFIG_FTRACE is selected and /proc/sys/kernel/ftrace_enabled is
      set to a non-zero value, the ftrace routine will be called every time
      we enter a kernel function that is not marked with the "notrace"
      attribute.
      
      The ftrace routine will then call a registered function if a function
      happens to be registered.
      
      [ This code has been highly hacked by Steven Rostedt and Ingo Molnar,
        so don't blame Arnaldo for all of this ;-) ]
      
      Update:
        It is now possible to register more than one ftrace function.
        If only one ftrace function is registered, that will be the
        function that ftrace calls directly. If more than one function
        is registered, then ftrace will call a function that loops
        through all of the registered functions (see the sketch below).
      Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      16444a8a
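      A minimal sketch of the single-vs-multiple registration logic from the
      update note; the names loosely mirror the public ones and should be
      read as illustrative:

        typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);

        struct ftrace_ops {
                ftrace_func_t func;
                struct ftrace_ops *next;
        };

        static void ftrace_stub(unsigned long ip, unsigned long parent_ip) { }

        /* What mcount ends up calling. */
        static ftrace_func_t ftrace_trace_function = ftrace_stub;
        static struct ftrace_ops *ftrace_list;

        /* With more than one callback registered, mcount goes through this. */
        static void ftrace_list_func(unsigned long ip, unsigned long parent_ip)
        {
                struct ftrace_ops *op;

                for (op = ftrace_list; op; op = op->next)
                        op->func(ip, parent_ip);
        }

        static void register_one(struct ftrace_ops *ops)
        {
                ops->next = ftrace_list;
                ftrace_list = ops;
                /* one callback: call it directly; several: use the walker */
                ftrace_trace_function = ftrace_list->next ? ftrace_list_func
                                                          : ftrace_list->func;
        }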
    • x86: add notrace annotations to vsyscall. · 23adec55
      Committed by Steven Rostedt
      Add the notrace annotations to the vsyscall functions - there we are
      not in kernel context yet, so the tracer function cannot (and must not)
      be called (see the notrace sketch below).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      23adec55
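      A minimal sketch of what the annotation is and how it is applied; the
      macro matches the kernel's usual definition, and the function below is
      just an illustration:

        /* Functions carrying this attribute are excluded from the compiler's
         * profiling instrumentation, so no mcount call is emitted for them. */
        #define notrace __attribute__((no_instrument_function))

        /* vsyscall code runs in a user-visible mapping without the usual
         * kernel context, so the tracer must never be entered from here. */
        static notrace unsigned long example_vsyscall_read(void)
        {
                return 0;
        }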
  3. 22 May 2008, 10 commits
  4. 20 May 2008, 9 commits
  5. 19 May 2008, 1 commit
    • [POWERPC] 4xx: Workaround for CHIP_11 Errata · 13c501e6
      Committed by Josh Boyer
      The PowerPC 440EP, 440GR, 440EPx, and 440GRx chips have an issue that
      causes the PLB3-to-PLB4 bridge to wait indefinitely for transaction
      requests that cross the end-of-memory-range boundary.  Since the DDR
      controller only returns the valid portion of a read request, the bridge
      will prevent other PLB masters from completing their transactions.
      
      This implements the recommended workaround for this erratum on chips
      that use older firmware versions that do not already handle it.  The
      last 4KiB of memory are hidden from the kernel to prevent the problem
      transactions from occurring (a sketch of the trim follows below).
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      Acked-by: Stefan Roese <sr@denx.de>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      13c501e6
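      A minimal sketch of the trimming arithmetic; the helper is hypothetical
      and only illustrates hiding the last 4KiB from the usable memory size
      during early platform setup:

        #define PLB_ERRATA_GUARD (4 * 1024)   /* hide the last 4 KiB of DRAM */

        /* Clamp the memory size the kernel will use so that no PLB
         * transaction can cross the end-of-memory boundary. */
        static unsigned long long chip11_trim_memory(unsigned long long total)
        {
                if (total >= PLB_ERRATA_GUARD)
                        total -= PLB_ERRATA_GUARD;
                return total;
        }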