1. 03 December 2008, 4 commits
    • ftrace: print real return in dumpstack for function graph · 7ee991fb
      Committed by Steven Rostedt
      Impact: better dumpstack output
      
      I noticed in my crash dumps, and even in the stack tracer, that a lot
      of functions listed in the stack trace are simply return_to_handler,
      which is the function graph tracer's way of inserting its own call
      into the return of a function.
      
      But we lose track of where the function was actually called from.
      
      This patch adds hooks to the dumpstack mechanism that detect this case
      and find the real return address to print. Both addresses are printed,
      to let the user know that a hook is still in place.
      
      This does give a funny side effect in the stack tracer output:
      
              Depth   Size      Location    (80 entries)
              -----   ----      --------
        0)     4144      48   save_stack_trace+0x2f/0x4d
        1)     4096     128   ftrace_call+0x5/0x2b
        2)     3968      16   mempool_alloc_slab+0x16/0x18
        3)     3952     384   return_to_handler+0x0/0x73
        4)     3568    -240   stack_trace_call+0x11d/0x209
        5)     3808     144   return_to_handler+0x0/0x73
        6)     3664    -128   mempool_alloc+0x4d/0xfe
        7)     3792     128   return_to_handler+0x0/0x73
        8)     3664     -32   scsi_sg_alloc+0x48/0x4a [scsi_mod]
      
      As you can see, the sizes for the real functions are now negative. This
      is because their entries are not found on the stack itself (a hedged
      sketch follows this entry).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7ee991fb
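      A hedged sketch of the idea above (not the actual patch): when a stack
      walker finds return_to_handler on the stack, the original return address
      can be recovered from the task's function-graph return stack and printed
      alongside the trampoline. The bookkeeping names used here (ret_stack,
      curr_ret_stack, .ret) follow the ftrace code of that era and should be
      treated as assumptions, as should the helper name real_return_address.

        #include <linux/sched.h>
        #include <linux/ftrace.h>

        /* the trampoline that the graph tracer patches into return addresses */
        extern void return_to_handler(void);

        static unsigned long
        real_return_address(struct task_struct *task, unsigned long addr,
                            int *graph_idx)
        {
                if (addr != (unsigned long)return_to_handler)
                        return addr;            /* nothing hooked here */

                if (!task->ret_stack || *graph_idx > task->curr_ret_stack)
                        return addr;            /* no bookkeeping to consult */

                /* consume one saved entry per trampoline hit (indexing is illustrative) */
                return task->ret_stack[task->curr_ret_stack - (*graph_idx)++].ret;
        }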
    • ftrace: add ftrace_graph_stop() · 14a866c5
      Committed by Steven Rostedt
      Impact: new ftrace_graph_stop function
      
      While developing more features of the function graph tracer, I hit a bug
      that caused the WARN_ON in the prepare_ftrace_return function to trigger.
      It was hard to find out what was happening because the warning would not
      print; the bug would just cause a hard lockup or reboot. The reason is
      that it is not safe to call printk from this function.
      
      Looking further, I also found that it calls unregister_ftrace_graph,
      which grabs a mutex and calls kstop_machine. This would definitely
      lock the box up if it were to trigger.
      
      This patch adds a fast and safe ftrace_graph_stop() which will
      stop the function tracer. Then it is safe to call WARN_ON (a hedged
      sketch follows this entry).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      14a866c5
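      A hedged sketch of the idea (not the actual patch): the "fast and safe"
      stop can be a plain flag that the entry path checks before doing any
      graph bookkeeping; setting it takes no locks and never sleeps, so it can
      be done right before the WARN_ON. The names ftrace_graph_dead and the
      local variable faulted are illustrative assumptions.

        #include <linux/kernel.h>

        static int ftrace_graph_dead;

        /* fast and safe: a plain store, no locks, no printk, never sleeps */
        void ftrace_graph_stop(void)
        {
                ftrace_graph_dead = 1;
        }

        /* roughly how the entry hook could use it */
        void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
        {
                int faulted = 0;

                if (unlikely(ftrace_graph_dead))
                        return;                 /* tracer already shut off */

                /* ... try to hook the return address, set 'faulted' on error ... */

                if (unlikely(faulted)) {
                        ftrace_graph_stop();    /* stop tracing first ...       */
                        WARN_ON(1);             /* ... then the warning is safe */
                }
        }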
    • ftrace: have function graph use mcount caller address · bb4304c7
      Committed by Steven Rostedt
      Impact: consistency change for function graph
      
      This patch makes the function graph tracer record the mcount caller
      address the same way the function tracer does (a hedged sketch follows
      this entry).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb4304c7
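      A hedged sketch of the consistency change described above: the address
      mcount passes along is the return address of the mcount call, i.e. the
      instruction after the call, and the function tracer records the call
      site by backing up over the call instruction. The helper name below is
      hypothetical; MCOUNT_INSN_SIZE is the x86 constant for the call
      instruction's length.

        #include <asm/ftrace.h>         /* MCOUNT_INSN_SIZE */

        static unsigned long mcount_call_site(unsigned long mcount_return_addr)
        {
                /* back up over the mcount call so the call site is recorded */
                return mcount_return_addr - MCOUNT_INSN_SIZE;
        }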
    • ftrace: clean up function graph asm · 347fdd9d
      Committed by Steven Rostedt
      Impact: clean up
      
      Macros already exist in the x86 asm code to handle the differences
      between x86_64 and i386. This patch updates the function graph asm
      to use them.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      347fdd9d
  2. 02 December 2008, 3 commits
  3. 01 December 2008, 23 commits
  4. 28 November 2008, 5 commits
    • powerpc/ppc32: static ftrace fixes for PPC32 · f1eecf0e
      Committed by Steven Rostedt
      Impact: fix for PowerPC 32 code
      
      There was some early init code that was not safe for static ftrace when
      booting on my PowerBook. This code must use only relative addressing,
      but the static mcount compares against the ftrace_trace_function
      pointer, which it loads through an absolute address. In the early init
      boot-up code, this will cause a fault.
      
      This patch removes tracing from the files containing the offending
      functions.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f1eecf0e
    • powerpc: ftrace, use create_branch · 0029ff87
      Committed by Steven Rostedt
      Impact: clean up
      
      Paul Mackerras pointed out that the code to determine whether the branch
      can reach the destination is incorrect. Michael Ellerman suggested
      pulling the code out of create_branch and using that.
      
      Simply using create_branch is probably the best approach (a hedged
      sketch follows this entry).
      Reported-by: Michael Ellerman <michael@ellerman.id.au>
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0029ff87
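      A hedged sketch of the approach (not the actual patch): build the branch
      with create_branch() and treat a zero result as "destination not
      reachable", instead of open-coding the range check. The signature of
      create_branch() shown here matches the powerpc code-patching helpers of
      that era as best recalled, and the helper name ftrace_make_branch is an
      illustrative assumption.

        #include <linux/errno.h>
        #include <asm/code-patching.h>

        static int ftrace_make_branch(unsigned int *ip, unsigned long target)
        {
                unsigned int op;

                /* create_branch() returns 0 if target is out of range from ip */
                op = create_branch(ip, target, 0);
                if (!op)
                        return -EINVAL;

                *ip = op;               /* patch in the new branch instruction */
                return 0;
        }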
    • powerpc: ftrace, added missing icache flush · ec682cef
      Committed by Steven Rostedt
      Impact: fix to PowerPC code modification
      
      After modifying code it is essential to flush the instruction cache so
      the CPU does not keep executing the stale instructions. This patch adds
      the missing flush (a hedged one-liner follows this entry).
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ec682cef
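      A hedged one-liner illustrating the fix (not the actual patch): after
      the instruction at the call site has been rewritten, flush the icache
      for that range so the CPU does not keep executing the stale copy. The
      helper name is hypothetical; flush_icache_range() is the standard
      kernel interface.

        #include <asm/cacheflush.h>

        static void ftrace_flush_patched(unsigned long ip)
        {
                /* one 4-byte PowerPC instruction was modified at ip */
                flush_icache_range(ip, ip + 4);
        }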
    • powerpc: ftrace, fix cast aliasing and add code verification · d9af12b7
      Committed by Steven Rostedt
      Impact: clean up and robustness addition
      
      This patch addresses the comments made by Paul Mackerras.
      It removes the type casting between unsigned int and unsigned char
      pointers and uses unsigned int throughout.
      
      Verification that the jump is indeed made to a trampoline has also
      been added (a hedged sketch follows this entry).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d9af12b7
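      A hedged sketch of the two points above (not the actual patch): operate
      on whole unsigned int instruction words instead of casting through char
      pointers, and verify that the existing call really branches to the
      expected trampoline before patching anything. The helper name and the
      exact opcode/offset math are illustrative assumptions.

        #include <linux/errno.h>

        static int ftrace_check_call_site(unsigned int *ip, unsigned long tramp)
        {
                unsigned int op = *ip;          /* read the whole instruction word */
                long offset;

                if ((op & 0xfc000003) != 0x48000001)    /* expect a "bl" */
                        return -EINVAL;

                offset = op & 0x03fffffc;               /* 26-bit signed byte offset */
                if (offset & 0x02000000)
                        offset -= 0x04000000;

                /* verify the branch really lands on the trampoline */
                if ((unsigned long)ip + offset != tramp)
                        return -EINVAL;

                return 0;
        }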
    • powerpc: ftrace, do nothing in mcount call for dyn ftrace · c7b0d173
      Committed by Steven Rostedt
      Impact: speed up mcount calls that are not replaced by dynamic ftrace
      
      Dynamic ftrace no longer does on-the-fly recording of mcount locations;
      the mcount locations are now found at compile time. The mcount function
      therefore no longer needs to store registers and call a stub function.
      It can now simply return.
      
      Since there are some functions that do not get converted to a nop
      (.init sections and other code that may disappear), this patch should
      help speed up that code.
      
      Also, the stub for mcount on PowerPC 32 cannot be a simple branch to the
      link register as it is on PowerPC 64. According to the ABI specification:
      
      "The _mcount routine is required to restore the link register from
       the stack so that the profiling code can be inserted transparently,
       whether or not the profiled function saves the link register itself."
      
      This means that we must restore the link register that was used
      to make the call to mcount.  The minimal mcount function for PPC32
      ends up being:
      
       mcount:
              mflr    r0
              mtctr   r0
              lwz     r0, 4(r1)
              mtlr    r0
              bctr
      
      Here we move the link register that was used to call mcount into the
      ctr register, and then restore the original link register from the
      stack. Then we use the ctr register to jump back to the mcount caller.
      The r0 register is free for us to use.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c7b0d173
  5. 27 November 2008, 5 commits
    • a730b327
    • x86: always define DECLARE_PCI_UNMAP* macros · b627c8b1
      Committed by Joerg Roedel
      Impact: fix boot crash on AMD IOMMU if CONFIG_GART_IOMMU is off
      
      Currently these macros evaluate to a no-op unless the kernel is compiled
      with GART or Calgary support. But we also need these macros when we have
      SWIOTLB, VT-d or AMD IOMMU in the kernel. Since we always compile with
      at least SWIOTLB, we can always define these macros (a hedged sketch of
      the two variants follows this entry).
      
      This patch is also a candidate for stable backport, for the same reason
      as the SWIOTLB default selection patch.
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b627c8b1
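      A hedged sketch of the two variants being unified (recalled from the x86
      asm/pci.h of that era; treat the exact definitions as assumptions). With
      an IOMMU or SWIOTLB the driver must remember the DMA address and length
      so it can unmap later; in the no-op variant the fields compile away.

        #ifdef NEED_REAL_UNMAP_STATE    /* stand-in for the real config test */
        #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)       dma_addr_t ADDR_NAME;
        #define DECLARE_PCI_UNMAP_LEN(LEN_NAME)         __u32 LEN_NAME;
        #define pci_unmap_addr(PTR, ADDR_NAME)          ((PTR)->ADDR_NAME)
        #define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) (((PTR)->ADDR_NAME) = (VAL))
        #define pci_unmap_len(PTR, LEN_NAME)            ((PTR)->LEN_NAME)
        #define pci_unmap_len_set(PTR, LEN_NAME, VAL)   (((PTR)->LEN_NAME) = (VAL))
        #else                           /* no-op variant: the fields vanish */
        #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)
        #define DECLARE_PCI_UNMAP_LEN(LEN_NAME)
        #define pci_unmap_addr(PTR, ADDR_NAME)          (0)
        #define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)
        #define pci_unmap_len(PTR, LEN_NAME)            (0)
        #define pci_unmap_len_set(PTR, LEN_NAME, VAL)   do { } while (0)
        #endif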
    • abd94219
    • [S390] Fix alignment of initial kernel stack. · 0778dc3a
      Committed by Heiko Carstens
      We need an alignment of 16384 bytes for the initial kernel stack if the
      kernel is configured for 16384-byte stacks, but the linker script
      currently guarantees only an alignment of 8192 bytes.
      
      So fix this and simply use THREAD_SIZE as the alignment value, which
      will always do the right thing.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      0778dc3a
    • [S390] pgtable.h: Fix oops in unmap_vmas for KVM processes · 2944a5c9
      Committed by Christian Borntraeger
      When running several kvm processes with lots of memory overcommitment,
      we have seen an oops during process shutdown:
      ------------[ cut here ]------------
      Kernel BUG at 0000000000193434 [verbose debug info unavailable]
      addressing exception: 0005 [#1] PREEMPT SMP
      Modules linked in: kvm sunrpc qeth_l2 dm_mod qeth ccwgroup
      CPU: 10 Not tainted 2.6.28-rc4-kvm-bigiron-00521-g0ccca08-dirty #8
      Process kuli (pid: 14460, task: 0000000149822338, ksp: 0000000024f57650)
      Krnl PSW : 0704e00180000000 0000000000193434 (unmap_vmas+0x884/0xf10)
      R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 EA:3
      Krnl GPRS: 0000000000000002 0000000000000000 000000051008d000 000003e05e6034e0
                 00000000001933f6 00000000000001e9 0000000407259e0a 00000002be88c400
                 00000200001c1000 0000000407259608 0000000407259e08 0000000024f577f0
                 0000000407259e09 0000000000445fa8 00000000001933f6 0000000024f577f0
      Krnl Code: 0000000000193426: eb22000c000d sllg %r2,%r2,12
                 000000000019342c: a7180000 lhi %r1,0
                 0000000000193430: b2290012 iske %r1,%r2
                >0000000000193434: a7110002 tmll %r1,2
                 0000000000193438: a7840006 brc 8,193444
                 000000000019343c: 9602c000 oi 0(%r12),2
                 0000000000193440: 96806000 oi 0(%r6),128
                 0000000000193444: a7110004 tmll %r1,4
      Call Trace:
      ([<00000000001933f6>] unmap_vmas+0x846/0xf10)
      [<0000000000199680>] exit_mmap+0x210/0x458
      [<000000000012a8f8>] mmput+0x54/0xfc
      [<000000000012f714>] exit_mm+0x134/0x144
      [<000000000013120c>] do_exit+0x240/0x878
      [<00000000001318dc>] do_group_exit+0x98/0xc8
      [<000000000013e6b0>] get_signal_to_deliver+0x30c/0x358
      [<000000000010bee0>] do_signal+0xec/0x860
      [<0000000000112e30>] sysc_sigpending+0xe/0x22
      [<000002000013198a>] 0x2000013198a
      INFO: lockdep is turned off.
      Last Breaking-Event-Address:
      [<00000000001a68d0>] free_swap_and_cache+0x1a0/0x1a4
      <4>---[ end trace bc19f1d51ac9db7c ]---
      
      The faulting instruction is the storage key operation (iske) in
      ptep_rcp_copy (called by pte_clear, called by unmap_vmas). iske
      reads the dirty and referenced bit information for a physical page and
      requires a valid physical address. Since we are in pte_clear, we
      cannot rely on the pte containing a valid address. Fortunately we
      don't need this information in pte_clear; after all, there is no
      mapping anymore. The best fix is to remove the needless call to
      ptep_rcp_copy that contains the iske (a hedged sketch follows this entry).
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      2944a5c9
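      A hedged sketch of the fix described above (not the actual s390 patch):
      pte_clear() only needs to invalidate the entry, so it must not go
      through ptep_rcp_copy(), whose iske instruction dereferences the
      possibly invalid physical address held in the pte. _PAGE_TYPE_EMPTY is
      recalled as the s390 "empty pte" pattern of that era; treat the constant
      and surrounding details as assumptions.

        static inline void pte_clear(struct mm_struct *mm,
                                     unsigned long addr, pte_t *ptep)
        {
                /* no ptep_rcp_copy()/iske: the old pte may not hold a valid address */
                pte_val(*ptep) = _PAGE_TYPE_EMPTY;
        }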