1. 25 Feb 2010 (2 commits)
    • tracing: Fix ftrace_event_call alignment for use with gcc 4.5 · 86c38a31
      Authored by Jeff Mahoney
      GCC 4.5 introduces behavior that forces the alignment of structures to
      use the largest possible value. The default value is 32 bytes, so if
      some structures are defined with a 4-byte alignment and others aren't
      declared with an alignment constraint at all, it will align at 32 bytes.

      For things like the ftrace events, this results in a non-standard array.
      When initializing the ftrace subsystem, we traverse the _ftrace_events
      section and call the initialization callback for each event. When the
      structures are misaligned, we could be treating another part of the
      structure (or the zeroed-out space between them) as a function pointer.

      This patch forces the alignment of all the ftrace_event_call structures
      to 4 bytes (see the sketch after this entry).

      Without this patch, the kernel fails to boot very early when built with
      gcc 4.5.

      It's trivial to check the alignment of the members of the array, so it
      might be worthwhile to add something to the build system to do that
      automatically. Unfortunately, that only covers this case. I've asked one
      of the gcc developers about adding a warning when this condition is seen.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      LKML-Reference: <4B85770B.6010901@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
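      A hedged sketch of the technique described above, using only standard
      GCC attributes; the struct, macro, and section names here are
      illustrative, not the kernel's exact definitions:

           struct event_entry {
                   const char *name;
                   int (*init)(void);
           };

           /* Without an explicit alignment, gcc 4.5 may align each definition
            * in the section to 32 bytes, leaving padding that a section walk
            * would misread as data. Pinning the alignment keeps the stride
            * equal to sizeof(struct event_entry). */
           #define DEFINE_EVENT_ENTRY(n, fn)                            \
                   static struct event_entry __entry_##n                \
                   __attribute__((used, aligned(4),                     \
                                  section("_my_events"))) = { #n, fn }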
    • ftrace: Remove memory barriers from NMI code when not needed · 0c54dd34
      Authored by Steven Rostedt
      The code in stop_machine that modifies the kernel text has a bit
      of logic to handle the case of NMIs. stop_machine does not prevent
      NMIs from executing, and if an NMI were to trigger on another CPU
      as the modifying CPU is changing the NMI text, a GPF could result.
      
      To prevent the GPF, the NMI calls ftrace_nmi_enter() which may
      modify the code first, then any other NMIs will just change the
      text to the same content which will do no harm. The code that
      stop_machine called must wait for NMIs to finish while it changes
      each location in the kernel. That code may also change the text
      to what the NMI changed it to. The key is that the text will never
      change content while another CPU is executing it.
      
      To make the above work, the call to ftrace_nmi_enter() must also
      do an smp_mb() as well as an atomic_inc(). But for applications like
      perf that require a high number of NMIs for profiling, this can have
      a dramatic effect on the system. Not only does it execute a full
      memory barrier on both nmi_enter() and nmi_exit(), it also modifies
      a global variable with an atomic operation. This kills performance
      on large SMP machines.

      Since the memory barriers are only needed when ftrace is in the
      process of modifying the text (which is seldom), this patch
      adds a "modifying_code" variable that gets set before stop_machine
      is executed and cleared afterwards.

      The NMIs will check this variable and store it in a per-CPU
      "save_modifying_code" variable, which they then use to decide whether
      to do the memory barriers and the atomic_dec() on NMI exit (see the
      sketch after this entry).

      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
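      A hedged sketch of the scheme described above, in kernel-flavored C;
      the variable names mirror the description, but the function bodies are
      illustrative, not the exact upstream implementation:

           #include <linux/atomic.h>
           #include <linux/percpu.h>

           static atomic_t nmi_running;
           static int modifying_code;           /* set around stop_machine() */
           static DEFINE_PER_CPU(int, save_modifying_code);

           static void sketch_ftrace_nmi_enter(void)
           {
                   /* Snapshot the flag once, so the exit path pairs with
                    * exactly what the entry path observed. */
                   __this_cpu_write(save_modifying_code, modifying_code);
                   if (!__this_cpu_read(save_modifying_code))
                           return;              /* fast path: no barriers */

                   atomic_inc(&nmi_running);
                   smp_mb();                    /* order against the modifier */
           }

           static void sketch_ftrace_nmi_exit(void)
           {
                   if (!__this_cpu_read(save_modifying_code))
                           return;

                   smp_mb();
                   atomic_dec(&nmi_running);
           }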
  2. 16 Feb 2010 (1 commit)
    • tracing: Add notrace to TRACE_EVENT implementation functions · 83f0d539
      Authored by Steven Rostedt
      The functions used to implement the TRACE_EVENT macro show up in
      function tracing. This is considered a distraction, and these should
      not be displayed. For example:
      
           <idle>-0     [000]    57.202149: task_of <-update_stats_wait_end
           <idle>-0     [000]    57.202149: ftrace_raw_event_sched_stat_wait <-update_stats_wait_end
           <idle>-0     [000]    57.202150: ftrace_raw_event_id_sched_stat_template <-ftrace_raw_event_sched_stat_wait
           <idle>-0     [000]    57.202150: sched_stat_wait: comm=sshd pid=2735 delay=19207 [ns]
      
      The "ftrace_raw_event_*" traces are just the utility functions used
      by TRACE_EVENT tracepoints.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Requested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
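      A hedged illustration of the annotation; the notrace definition below
      matches how the kernel has historically defined it, while the event
      function is a stand-in, not the real generated code:

           /* notrace compiles a function without mcount instrumentation,
            * so the function tracer never records it. */
           #define notrace __attribute__((no_instrument_function))

           static notrace void ftrace_raw_event_example(void *data)
           {
                   /* record the event; body elided, only the
                    * annotation matters here */
           }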
  3. 12 Feb 2010 (1 commit)
  4. 10 Feb 2010 (1 commit)
    • tracing: Add correct/incorrect to sort keys for branch annotation output · ede55c9d
      Authored by Steven Rostedt
      The branch annotation output makes it difficult to see the worst
      offenders because it only sorts by percentage:
      
       correct incorrect  %        Function                  File              Line
       ------- ---------  -        --------                  ----              ----
             0      163 100 qdisc_restart                  sch_generic.c        179
             0      163 100 pfifo_fast_dequeue             sch_generic.c        447
             0        4 100 pskb_trim_rcsum                skbuff.h             1689
             0        4 100 llc_rcv                        llc_input.c          170
             0       18 100 psmouse_interrupt              psmouse-base.c       304
             0        3 100 atkbd_interrupt                atkbd.c              389
             0        5 100 usb_alloc_dev                  usb.c                437
             0       11 100 vsscanf                        vsprintf.c           1897
             0        2 100 IS_ERR                         err.h                34
             0       23 100 __rmqueue_fallback             page_alloc.c         865
             0        4 100 probe_wakeup_sched_switch      trace_sched_wakeup.c 142
             0        3 100 move_masked_irq                migration.c          11
      
      Adding the incorrect and correct values as sort keys makes this file a
      bit more informative (a comparator sketch follows this entry):
      
       correct incorrect  %        Function                  File              Line
       ------- ---------  -        --------                  ----              ----
             0   366541 100 audit_syscall_entry            auditsc.c            1637
             0   366538 100 audit_syscall_exit             auditsc.c            1685
             0   115839 100 sched_info_switch              sched_stats.h        269
             0    74567 100 sched_info_queued              sched_stats.h        222
             0    66578 100 sched_info_dequeued            sched_stats.h        177
             0    15113 100 trace_workqueue_insertion      workqueue.h          38
             0    15107 100 trace_workqueue_execution      workqueue.h          45
             0     3622 100 syscall_trace_leave            ptrace.c             1772
             0     2750 100 sched_move_task                sched.c              10100
             0     2750 100 sched_move_task                sched.c              10110
             0     1815 100 pre_schedule_rt                sched_rt.c           1462
             0      837 100 audit_alloc                    auditsc.c            879
             0      814 100 tcp_mss_split_point            tcp_output.c         1302
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
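      A hedged sketch of a multi-key comparison consistent with the behavior
      described above; the struct and function names are illustrative, not
      the tracer's actual stat interface:

           struct branch_stat {
                   unsigned long correct;
                   unsigned long incorrect;
           };

           /* Sort by miss percentage first, then by the raw incorrect and
            * correct counts, so 100%-wrong entries with the most hits rise
            * to the top of the report. */
           static int branch_stat_cmp(const struct branch_stat *a,
                                      const struct branch_stat *b)
           {
                   unsigned long hits_a = a->correct + a->incorrect;
                   unsigned long hits_b = b->correct + b->incorrect;
                   unsigned long pct_a = hits_a ? a->incorrect * 100 / hits_a : 0;
                   unsigned long pct_b = hits_b ? b->incorrect * 100 / hits_b : 0;

                   if (pct_a != pct_b)
                           return pct_a > pct_b ? -1 : 1;
                   if (a->incorrect != b->incorrect)
                           return a->incorrect > b->incorrect ? -1 : 1;
                   if (a->correct != b->correct)
                           return a->correct > b->correct ? -1 : 1;
                   return 0;
           }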
  5. 29 Jan 2010 (1 commit)
    • tracing: Simplify test for function_graph tracing start point · ea2c68a0
      Authored by Lai Jiangshan
      In the function graph tracer, a calling function is to be traced
      only when it is enabled through the set_graph_function file,
      or when it is nested in an enabled function.
      
      The current code uses TSK_TRACE_FL_GRAPH to test whether it is nested
      or not. Looking at the code, we can derive this equivalence:
      (trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)

      trace->depth states more explicitly that the call is nested, so we
      use trace->depth directly and simplify the code (see the sketch
      after this entry).

      No functionality is changed.
      TSK_TRACE_FL_GRAPH is not removed yet; it is left for future usage.

      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <4B4DB0B6.7040607@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
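      A hedged before/after sketch of the start-point test; the helper names
      are paraphrased from the description, not copied from the actual diff:

           /* before: nesting inferred from a per-task flag */
           if (test_tsk_trace_graph(current))
                   return 1;       /* nested inside an enabled function */

           /* after: nesting read directly from the call depth */
           if (trace->depth > 0)
                   return 1;       /* already inside a traced call */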
  6. 17 Jan 2010 (1 commit)
  7. 07 Jan 2010 (11 commits)
  8. 06 Jan 2010 (5 commits)
  9. 05 Jan 2010 (16 commits)
  10. 04 Jan 2010 (1 commit)
    • resource: add helpers for fetching rlimits · 3e10e716
      Authored by Jiri Slaby
      We want to be sure that the compiler fetches the limit variable only
      once, so add helpers for fetching the current and maximal resource
      limits which do that (see the sketch after this entry).

      Add them to sched.h (instead of resource.h) due to the circular
      dependency sched.h -> resource.h -> task_struct.
      An alternative would be to create a separate res_access.h or similar.

      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: James Morris <jmorris@namei.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
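      A hedged sketch of such helpers, assuming the signal_struct rlimit
      layout of that era; ACCESS_ONCE() forces the single fetch the
      description calls for (exact upstream names may differ):

           static inline unsigned long rlimit(unsigned int limit)
           {
                   /* one volatile read: the limit cannot be fetched twice
                    * and raced against a concurrent setrlimit() */
                   return ACCESS_ONCE(current->signal->rlim[limit].rlim_cur);
           }

           static inline unsigned long rlimit_max(unsigned int limit)
           {
                   return ACCESS_ONCE(current->signal->rlim[limit].rlim_max);
           }

      A caller would then write, say, rlimit(RLIMIT_NOFILE) instead of
      open-coding current->signal->rlim[RLIMIT_NOFILE].rlim_cur.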