1. 21 April 2009, 3 commits
    • tracing: use recursive counter over irq level · aa18efb2
      Committed by Steven Rostedt
      Although using the irq level (hardirq_count, softirq_count and in_nmi)
      was nice for detecting bad recursion right away, the counters are not
      atomically updated with respect to the interrupts, so the function tracer
      might trigger the test from an interrupt handler before hardirq_count
      is updated. This triggers a false warning.
      
      This patch converts the recursion detection to a simple counter.
      If the depth is greater than 16, the recursion detection triggers.
      A depth of 16 is more than enough for any nesting of interrupts.
      
      [ Impact: fix false positive trace recursion detection ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      aa18efb2
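      A minimal user-space sketch of the counter idea described in the entry
      above: a per-task depth that refuses writes past 16 nested levels. The
      variable and function names are illustrative, not the kernel's actual code.

        #include <stdio.h>

        /* stand-in for the per-task recursion counter */
        static unsigned int trace_recursion;

        #define TRACE_RECURSIVE_DEPTH 16   /* "more than enough for any nested interrupts" */

        /* returns 0 if the trace may proceed, -1 if the depth limit is exceeded */
        static int trace_recursive_lock(void)
        {
                trace_recursion++;
                if (trace_recursion <= TRACE_RECURSIVE_DEPTH)
                        return 0;
                trace_recursion--;              /* keep the counter balanced on failure */
                return -1;
        }

        static void trace_recursive_unlock(void)
        {
                trace_recursion--;
        }

        int main(void)
        {
                for (int i = 0; i < 20; i++)
                        if (trace_recursive_lock())
                                printf("write refused at nesting attempt %d\n", i + 1);
                return 0;
        }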
    • tracing: remove recursive test from ring_buffer_event_discard · e395898e
      Committed by Steven Rostedt
      ring_buffer_event_discard() is not tied to ring_buffer_lock_reserve().
      It can be called inside or outside a reserve/commit pair, and even when
      it is called inside one, the commit part must still be performed.
      
      Only ring_buffer_discard_commit can be used as a replacement for
      ring_buffer_unlock_commit.
      
      This patch removes the trace_recursive_unlock() call from
      ring_buffer_event_discard(), since that is the wrong place for it.
      
      [ Impact: prevent breakage in trace recursion testing ]
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e395898e
    • tracing: fix recursive test level calculation · 17487bfe
      Committed by Steven Rostedt
      The recursion tests that detect same-level recursion in the ring buffers
      did not account for the hard/softirq counts needing to be shifted. Thus
      the numbers could be larger than the mask being tested.
      
      This patch includes the shift for the calculation of the irq depth.
      
      [ Impact: stop false positives in trace recursion detection ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      17487bfe
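      A hedged sketch of the corrected depth calculation: the hardirq and
      softirq counts live as bitfields inside the preempt count, so each must
      be shifted down before it can be treated as a small integer and compared
      against the level mask. The macro names come from <linux/hardirq.h>; the
      exact helper in the kernel may differ.

        #include <linux/hardirq.h>

        static inline int trace_irq_level(void)
        {
                return (hardirq_count() >> HARDIRQ_SHIFT) +
                       (softirq_count() >> SOFTIRQ_SHIFT) +
                       !!in_nmi();
        }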
  2. 20 April 2009, 2 commits
    • tracing/ring-buffer: Add unlock recursion protection on discard · f3b9aae1
      Committed by Frederic Weisbecker
      The pair of helpers trace_recursive_lock() and trace_recursive_unlock()
      have been introduced recently to provide generic tracing recursion
      protection.
      
      They are used symmetrically:
      
       - trace_recursive_lock() on buffer reserve
       - trace_recursive_unlock() on buffer commit
      
      Sometimes, however, we do not commit but instead discard the entry on
      its way into the buffer, e.g. when a filter check rejects it.
      
      In that case we must also release the recursion protection at discard
      time, otherwise tracing is permanently deactivated and a spurious
      warning is raised, such as:
      
      [  111.119821] ------------[ cut here ]------------
      [  111.119829] WARNING: at kernel/trace/ring_buffer.c:1498 ring_buffer_lock_reserve+0x1b7/0x1d0()
      [  111.119835] Hardware name: AMILO Li 2727
      [  111.119839] Modules linked in:
      [  111.119846] Pid: 5731, comm: Xorg Tainted: G        W  2.6.30-rc1 #69
      [  111.119851] Call Trace:
      [  111.119863]  [<ffffffff8025ce68>] warn_slowpath+0xd8/0x130
      [  111.119873]  [<ffffffff8028a30f>] ? __lock_acquire+0x19f/0x1ae0
      [  111.119882]  [<ffffffff8028a30f>] ? __lock_acquire+0x19f/0x1ae0
      [  111.119891]  [<ffffffff802199b0>] ? native_sched_clock+0x20/0x70
      [  111.119899]  [<ffffffff80286dee>] ? put_lock_stats+0xe/0x30
      [  111.119906]  [<ffffffff80286eb8>] ? lock_release_holdtime+0xa8/0x150
      [  111.119913]  [<ffffffff802c8ae7>] ring_buffer_lock_reserve+0x1b7/0x1d0
      [  111.119921]  [<ffffffff802cd110>] trace_buffer_lock_reserve+0x30/0x70
      [  111.119930]  [<ffffffff802ce000>] trace_current_buffer_lock_reserve+0x20/0x30
      [  111.119939]  [<ffffffff802474e8>] ftrace_raw_event_sched_switch+0x58/0x100
      [  111.119948]  [<ffffffff808103b7>] __schedule+0x3a7/0x4cd
      [  111.119957]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.119964]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.119971]  [<ffffffff80810c08>] schedule+0x18/0x40
      [  111.119977]  [<ffffffff80810e09>] preempt_schedule+0x39/0x60
      [  111.119985]  [<ffffffff80813bd3>] _read_unlock+0x53/0x60
      [  111.119993]  [<ffffffff807259d2>] sock_def_readable+0x72/0x80
      [  111.120002]  [<ffffffff807ad5ed>] unix_stream_sendmsg+0x24d/0x3d0
      [  111.120011]  [<ffffffff807219a3>] sock_aio_write+0x143/0x160
      [  111.120019]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.120026]  [<ffffffff80721860>] ? sock_aio_write+0x0/0x160
      [  111.120033]  [<ffffffff80721860>] ? sock_aio_write+0x0/0x160
      [  111.120042]  [<ffffffff8031c283>] do_sync_readv_writev+0xf3/0x140
      [  111.120049]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.120057]  [<ffffffff80276ff0>] ? autoremove_wake_function+0x0/0x40
      [  111.120067]  [<ffffffff8045d489>] ? cap_file_permission+0x9/0x10
      [  111.120074]  [<ffffffff8045c1e6>] ? security_file_permission+0x16/0x20
      [  111.120082]  [<ffffffff8031cab4>] do_readv_writev+0xd4/0x1f0
      [  111.120089]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.120097]  [<ffffffff80211b56>] ? ftrace_call+0x5/0x2b
      [  111.120105]  [<ffffffff8031cc18>] vfs_writev+0x48/0x70
      [  111.120111]  [<ffffffff8031cd65>] sys_writev+0x55/0xc0
      [  111.120119]  [<ffffffff80211e32>] system_call_fastpath+0x16/0x1b
      [  111.120125] ---[ end trace 15605f4e98d5ccb5 ]---
      
      [ Impact: fix spurious warning triggering tracing shutdown ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      f3b9aae1
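      A sketch of what the patched discard helper plausibly looks like after
      this change; rb_event_discard() is assumed to be the internal helper that
      marks the event as padding. Note that the 21 April entry above (e395898e)
      later moves this unlock back out of ring_buffer_event_discard().

        void ring_buffer_event_discard(struct ring_buffer_event *event)
        {
                rb_event_discard(event);        /* mark the event as discarded/padding (assumed helper) */
                trace_recursive_unlock();       /* balance the lock taken at reserve time */
        }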
    • tracing/core: Add current context on tracing recursion warning · e057a5e5
      Committed by Frederic Weisbecker
      When tracing recursion is detected, we only get the stacktrace, but the
      current context may be very useful for debugging the issue.
      
      This patch prints the softirq/hardirq/nmi context together with the
      warning, using the lockdep-style context display for a familiar output.
      
      v2: Use printk_once()
      v3: drop {hardirq,softirq}_context which depend on lockdep,
          only keep what is part of current->trace_recursion,
          sufficient to debug the warning source.
      
      [ Impact: print context necessary to debug recursion ]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      e057a5e5
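      A hedged sketch of the reporting idea: print the context once, derived
      only from what current->trace_recursion tracks, as the v3 note says. The
      helper name and bit layout below are purely illustrative, not the kernel's.

        /* illustrative only: the helper name and bit positions are made up */
        static void trace_recursion_report(unsigned long recursion)
        {
                printk_once(KERN_WARNING
                            "tracing recursion detected: nmi=%d hardirq=%d softirq=%d\n",
                            !!(recursion & (1UL << 3)),
                            !!(recursion & (1UL << 2)),
                            !!(recursion & (1UL << 1)));
        }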
  3. 18 April 2009, 1 commit
    • tracing: add same level recursion detection · 261842b7
      Committed by Steven Rostedt
      The tracing infrastructure allows for recursion. That is, an interrupt
      may interrupt the act of tracing an event, and that interrupt may very well
      perform its own trace. This is a recursive trace, and is fine to do.
      
      The problem arises when there is a bug, and the utility doing the trace
      calls something that recurses back into the tracer. This recursion is not
      caused by an external event like an interrupt, but by code that is not
      expected to recurse. The result could be a lockup.
      
      This patch adds a bitmask to the task structure that keeps track
      of the trace recursion. To find the interrupt depth, the following
      algorithm is used:
      
        level = hardirq_count() + softirq_count() + in_nmi;
      
      Here, level will be the depth of interrupts and softirqs, and it also
      accounts for NMIs. Then the corresponding bit is set in the recursion bitmask.
      If the bit was already set, we know we had a recursion at the same level,
      so we warn about it and fail the write to the buffer.
      
      After the data has been committed to the buffer, we clear the bit.
      No atomics are needed. The only races are with interrupts, and they reset
      the bitmask before returning anyway.
      
      [ Impact: detect same irq level trace recursion ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      261842b7
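      A runnable user-space sketch of the bit-per-level scheme described above:
      compute a level, set its bit before writing, warn and refuse the write if
      the bit is already set, and clear it after the commit. The level mapping
      here is a simplification of level = hardirq_count() + softirq_count() + in_nmi.

        #include <stdio.h>

        static unsigned long trace_recursion;   /* stand-in for current->trace_recursion */

        /* illustrative mapping: 0 = normal, 1 = softirq, 2 = hardirq, 3 = NMI */
        static int trace_recursive_lock(int level)
        {
                if (trace_recursion & (1UL << level)) {
                        fprintf(stderr, "same-level trace recursion at level %d\n", level);
                        return -1;               /* fail the write to the buffer */
                }
                trace_recursion |= 1UL << level;
                return 0;
        }

        static void trace_recursive_unlock(int level)
        {
                trace_recursion &= ~(1UL << level);
        }

        int main(void)
        {
                trace_recursive_lock(0);         /* normal-context trace */
                trace_recursive_lock(2);         /* interrupted by a hardirq-level trace: fine */
                trace_recursive_unlock(2);

                if (trace_recursive_lock(0))     /* buggy recursion at the same level */
                        printf("recursion detected, write refused\n");

                trace_recursive_unlock(0);
                return 0;
        }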
  4. 17 April 2009, 1 commit
    • tracing/events/ring-buffer: expose format of ring buffer headers to users · d1b182a8
      Committed by Steven Rostedt
      Currently, everything needed to read the binary output from the
      ring buffers is available, except a description of how the ring
      buffer lays itself out internally.
      
      This patch creates two special files in the debugfs/tracing/events
      directory:
      
       # cat /debug/tracing/events/header_page
              field: u64 timestamp;   offset:0;       size:8;
              field: local_t commit;  offset:8;       size:8;
              field: char data;       offset:16;      size:4080;
      
       # cat /debug/tracing/events/header_event
              type        :    2 bits
              len         :    3 bits
              time_delta  :   27 bits
              array       :   32 bits
      
              padding     : type == 0
              time_extend : type == 1
              data        : type == 3
      
      This allows a userspace app to detect whether the ring buffer format
      has changed.
      
      [ Impact: allow userspace apps to know of ringbuffer format changes ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d1b182a8
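      A small runnable example of the intended consumer side: dump the two
      format files so a userspace tool can compare them against the layout it
      was built for. The paths follow the commit; on many systems debugfs is
      mounted at /sys/kernel/debug/tracing instead.

        #include <stdio.h>

        static int dump_format(const char *path)
        {
                char line[256];
                FILE *f = fopen(path, "r");

                if (!f) {
                        perror(path);
                        return -1;
                }
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);
                fclose(f);
                return 0;
        }

        int main(void)
        {
                dump_format("/debug/tracing/events/header_page");
                dump_format("/debug/tracing/events/header_event");
                return 0;
        }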
  5. 14 April 2009, 1 commit
    • ring-buffer: add ring_buffer_discard_commit · fa1b47dd
      Committed by Steven Rostedt
      ring_buffer_discard_commit() is similar to ring_buffer_event_discard(),
      but it may only be used on an event that has not yet been committed.
      Otherwise the results are unpredictable.
      
      The main difference between ring_buffer_discard_commit and
      ring_buffer_event_discard is that ring_buffer_discard_commit will try
      to free the data in the ring buffer if nothing has added data
      after the reserved event. If something did, then it acts almost the
      same as ring_buffer_event_discard followed by a
      ring_buffer_unlock_commit.
      
      Note: either ring_buffer_discard_commit() or ring_buffer_unlock_commit()
      may be called on an event, but not both (see the usage sketch after this entry).
      
      This commit also exports both discard functions to be usable by
      GPL modules.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fa1b47dd
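      A hedged kernel-style sketch of the intended usage: reserve an event,
      fill it, then call exactly one of ring_buffer_unlock_commit() or
      ring_buffer_discard_commit(). It assumes the 2.6.30-era, flags-free
      reserve prototype shown in the 06 February entry below; write_or_drop()
      itself is a made-up example function.

        #include <linux/ring_buffer.h>
        #include <linux/string.h>
        #include <linux/errno.h>

        static int write_or_drop(struct ring_buffer *buffer,
                                 const void *payload, unsigned long len, int keep)
        {
                struct ring_buffer_event *event;

                event = ring_buffer_lock_reserve(buffer, len);
                if (!event)
                        return -ENOMEM;

                memcpy(ring_buffer_event_data(event), payload, len);

                if (keep)
                        return ring_buffer_unlock_commit(buffer, event);

                /* filtered out: give the space back if nothing was written after us */
                ring_buffer_discard_commit(buffer, event);
                return 0;
        }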
  6. 07 April 2009, 1 commit
  7. 01 April 2009, 1 commit
    • ring-buffer: do not remove reader page from list on ring buffer free · 2e572895
      Committed by Steven Rostedt
      Impact: prevent possible memory leak
      
      The reader page of the ring buffer is special. Although it points
      into the ring buffer, it is not part of the actual buffer. It is
      a page used by the reader to swap with a page in the ring buffer.
      Once the swap is made, the new reader page is again outside the
      buffer.
      
      Even though the reader page points into the buffer, it is really
      pointing to residual data. Note, this data is used by the reader.
      
                    reader page
                        |
                        v
             (prev)   +---+    (next)
           +----------|   |----------+
           |          +---+          |
           v                         v
         +---+        +---+        +---+
      -->|   |------->|   |------->|   |--->
      <--|   |<-------|   |<-------|   |<---
         +---+        +---+        +---+
      
           ^            ^            ^
            \           |            /
             ------- Buffer---------
      
      If we perform a list_del_init() on the reader page, we actually remove
      the last page the reader swapped with, not the reader page itself.
      That page is then never freed, which is a memory leak.
      
      Luckily, the only user of the ring buffer so far is ftrace, and ftrace
      never frees its ring buffer after allocating it, so no memory can
      currently leak. But once there are other users, or if ftrace ever
      creates and frees its ring buffer dynamically, this would become a
      real memory leak.
      
      This patch fixes the leak for future cases.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2e572895
  8. 23 March 2009, 1 commit
  9. 21 March 2009, 1 commit
    • tracing/ring-buffer: don't annotate rb_cpu_notify with __cpuinit · 09c9e84d
      Committed by Frederic Weisbecker
      Impact: remove a section warning
      
      CONFIG_DEBUG_SECTION_MISMATCH raises the following warning on -tip:
      
        WARNING: kernel/trace/built-in.o(.text+0x5bc5): Section mismatch in
        reference from the function ring_buffer_alloc() to the function
        .cpuinit.text:rb_cpu_notify()
        The function ring_buffer_alloc() references
        the function __cpuinit rb_cpu_notify().
      
      This is actually harmless. The ring buffer code does not build
      rb_cpu_notify and the other cpu hotplug code when !CONFIG_HOTPLUG_CPU,
      so there is no risk of referencing freed memory here (it would even
      be harmless if it were built unconditionally, because register_cpu_notifier()
      does nothing when !CONFIG_HOTPLUG_CPU).
      
      But since ring_buffer_alloc() can be called at any time, we cannot
      annotate it with __cpuinit, so we drop the __cpuinit annotation from
      rb_cpu_notify() instead.
      
      This does not waste memory, because rb_cpu_notify() is only defined and
      used when CONFIG_HOTPLUG_CPU is set.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1237606416-22268-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      09c9e84d
  10. 19 March 2009, 1 commit
    • tracing/ring-buffer: fix non cpu hotplug case · 3bf832ce
      Committed by Frederic Weisbecker
      Impact: fix warning with irqsoff tracer
      
      The ring buffer allocates its buffers at pre-SMP time (early_initcall).
      This means that, at first, only the boot cpu buffer is allocated and
      the ring-buffer cpumask has only the boot cpu set (cpu_online_mask).
      
      Later, the secondary cpus show up and the ring buffer is notified
      about this event: the appropriate buffers are allocated and the cpumask
      is updated.
      
      Unfortunately, if !CONFIG_HOTPLUG_CPU, the ring buffer is never
      notified about the secondary cpus, meaning that the cpumask keeps
      only the boot cpu set, and only one cpu buffer is allocated.
      
      We fix that by using cpu_possible_mask if !CONFIG_HOTPLUG_CPU (see the
      sketch after this entry).
      
      This patch fixes the following warning with irqsoff tracer running:
      
      [  169.317794] WARNING: at kernel/trace/trace.c:466 update_max_tr_single+0xcc/0xf3()
      [  169.318002] Hardware name: AMILO Li 2727
      [  169.318002] Modules linked in:
      [  169.318002] Pid: 5624, comm: bash Not tainted 2.6.29-rc8-tip-02636-g6aafa6c #11
      [  169.318002] Call Trace:
      [  169.318002]  [<ffffffff81036182>] warn_slowpath+0xea/0x13d
      [  169.318002]  [<ffffffff8100b9d6>] ? ftrace_call+0x5/0x2b
      [  169.318002]  [<ffffffff8100b9d6>] ? ftrace_call+0x5/0x2b
      [  169.318002]  [<ffffffff8100b9d1>] ? ftrace_call+0x0/0x2b
      [  169.318002]  [<ffffffff8101ef10>] ? ftrace_modify_code+0xa9/0x108
      [  169.318002]  [<ffffffff8106e27f>] ? trace_hardirqs_off+0x25/0x27
      [  169.318002]  [<ffffffff8149afe7>] ? _spin_unlock_irqrestore+0x1f/0x2d
      [  169.318002]  [<ffffffff81064f52>] ? ring_buffer_reset_cpu+0xf6/0xfb
      [  169.318002]  [<ffffffff8106637c>] ? ring_buffer_reset+0x36/0x48
      [  169.318002]  [<ffffffff8106aeda>] update_max_tr_single+0xcc/0xf3
      [  169.318002]  [<ffffffff8100bc17>] ? sysret_check+0x22/0x5d
      [  169.318002]  [<ffffffff8106e3ea>] stop_critical_timing+0x142/0x204
      [  169.318002]  [<ffffffff8106e4cf>] trace_hardirqs_on_caller+0x23/0x25
      [  169.318002]  [<ffffffff8149ac28>] trace_hardirqs_on_thunk+0x3a/0x3c
      [  169.318002]  [<ffffffff8100bc17>] ? sysret_check+0x22/0x5d
      [  169.318002] ---[ end trace db76cbf775a750cf ]---
      
      The warning occurs because this tracer may try to swap the per-cpu ring
      buffers of a cpu that was never registered with the ring buffer.
      
      This patch might also fix a fair loss of traces due to unallocated buffers
      for secondary cpus.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1237470453-5427-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3bf832ce
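      A hedged fragment of the allocation-time choice described above (not the
      exact kernel source): with CPU hotplug the cpumask starts with the online
      cpus and is grown by the notifier; without hotplug no notification ever
      arrives, so start from every possible cpu.

        /* inside ring_buffer_alloc(), roughly: */
        #ifdef CONFIG_HOTPLUG_CPU
                cpumask_copy(buffer->cpumask, cpu_online_mask);    /* grown later by the notifier */
        #else
                cpumask_copy(buffer->cpumask, cpu_possible_mask);  /* no notifier will ever run */
        #endif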
  11. 18 March 2009, 1 commit
  12. 13 March 2009, 3 commits
  13. 12 March 2009, 1 commit
    • ring-buffer: only allocate buffers for online cpus · 554f786e
      Committed by Steven Rostedt
      Impact: save on memory
      
      Currently, a ring buffer is allocated for each possible cpu. On
      some systems, this is the same as NR_CPUS. Thus, if a system defines
      NR_CPUS = 64 but only has 1 CPU, we could have 63 useless
      ring buffers taking up space. With a default buffer of 3 megabytes,
      this can be quite drastic.
      
      This patch changes the ring buffer code to only allocate ring buffers
      for online CPUs.  If a CPU goes off line, we do not free the buffer.
      This is because the user may still have trace data in that buffer
      that they would like to look at.
      
      Perhaps in the future we could add code to delete a ring buffer if
      the CPU is offline and the ring buffer becomes empty.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      554f786e
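      A hedged sketch of the allocation policy described above (simplified;
      rb_allocate_cpu_buffer() is assumed to be the internal per-cpu allocator):
      only cpus online at allocation time get a buffer, later hotplug events
      add buffers for new cpus, and a cpu going offline keeps its buffer so its
      trace data stays readable.

        cpumask_copy(buffer->cpumask, cpu_online_mask);

        for_each_online_cpu(cpu) {
                buffer->buffers[cpu] = rb_allocate_cpu_buffer(buffer, cpu);
                if (!buffer->buffers[cpu])
                        goto fail_free_buffers;
        }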
  14. 06 March 2009, 1 commit
  15. 05 March 2009, 1 commit
    • ring-buffer: fix timestamp in partial ring_buffer_page_read · 4f3640f8
      Committed by Steven Rostedt
      If a partial ring_buffer_page_read happens, some of the
      incremental timestamps may be lost. This patch writes the most
      recent timestamp into the page that is passed back to the caller.
      
      A partial ring_buffer_page_read is one where the full page is not
      handed back to the user; instead, only part of the page is copied
      to the user. A full-page read swaps a page with the ring buffer,
      and in that case the timestamps are correct.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      4f3640f8
  16. 04 March 2009, 4 commits
  17. 27 February 2009, 1 commit
  18. 17 February 2009, 1 commit
  19. 13 February 2009, 1 commit
    • ring-buffer: rename label out_unlock to out_reset · 45141d46
      Committed by Steven Rostedt
      Impact: clean up
      
      While reviewing the ring buffer code, I thought I saw a bug in
      
      	if (!__raw_spin_trylock(&cpu_buffer->lock))
      		goto out_unlock;
      
      But I had forgotten that we use a variable "lock_taken", which is set
      when the spinlock is actually taken, and we only unlock if that variable
      is set (see the sketch after this entry).
      
      To avoid further confusion for other reviewers, this patch
      renames the label out_unlock to out_reset, which is the more
      appropriate name.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      45141d46
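      A hedged, simplified fragment of the pattern that caused the confusion:
      on the NMI path the lock is only try-acquired, lock_taken records whether
      it is really held, and the exit path (now labelled out_reset) unlocks
      only in that case.

        bool lock_taken = false;

        if (in_nmi()) {
                /* an NMI must never spin on a lock the interrupted code may hold */
                if (!__raw_spin_trylock(&cpu_buffer->lock))
                        goto out_reset;
        } else {
                __raw_spin_lock(&cpu_buffer->lock);
        }
        lock_taken = true;
        /* ... move to the next page ... */

         out_reset:
                /* undo the partial write, then unlock only if we really hold the lock */
                if (lock_taken)
                        __raw_spin_unlock(&cpu_buffer->lock);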
  20. 11 February 2009, 3 commits
    • ring_buffer: pahole struct ring_buffer · 00f62f61
      Committed by Arnaldo Carvalho de Melo
      While fixing some bugs in pahole (built-in.o files were not being
      processed due to relocation problems) I found out about these packable
      structures:
      
      $ pahole --packable kernel/trace/ring_buffer.o  | grep ring
      ring_buffer	72	64	8
      ring_buffer_per_cpu	112	104	8
      
      If we take a look at the current layout of struct ring_buffer, we can see
      that we have two 4-byte holes.
      
      $ pahole -C ring_buffer kernel/trace/ring_buffer.o
      struct ring_buffer {
      	unsigned int               pages;           /*     0     4 */
      	unsigned int               flags;           /*     4     4 */
      	int                        cpus;            /*     8     4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	cpumask_var_t              cpumask;         /*    16     8 */
      	atomic_t                   record_disabled; /*    24     4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	struct mutex               mutex;           /*    32    32 */
      	/* --- cacheline 1 boundary (64 bytes) --- */
      	struct ring_buffer_per_cpu * * buffers;     /*    64     8 */
      
      	/* size: 72, cachelines: 2, members: 7 */
      	/* sum members: 64, holes: 2, sum holes: 8 */
      	/* last cacheline: 8 bytes */
      };
      
      So, if I ask pahole to reorganize it:
      
      $ pahole -C ring_buffer --reorganize kernel/trace/ring_buffer.o
      
      struct ring_buffer {
      	unsigned int               pages;           /*     0     4 */
      	unsigned int               flags;           /*     4     4 */
      	int                        cpus;            /*     8     4 */
      	atomic_t                   record_disabled; /*    12     4 */
      	cpumask_var_t              cpumask;         /*    16     8 */
      	struct mutex               mutex;           /*    24    32 */
      	struct ring_buffer_per_cpu * * buffers;     /*    56     8 */
      	/* --- cacheline 1 boundary (64 bytes) --- */
      
      	/* size: 64, cachelines: 1, members: 7 */
      };   /* saved 8 bytes and 1 cacheline! */
      
      We now get it into a single 64-byte cacheline.
      
      To see what it did:
      
      $ pahole -C ring_buffer --reorganize --show_reorg_steps \
      	kernel/trace/ring_buffer.o | grep \/
      /* Moving 'record_disabled' from after 'cpumask' to after 'cpus' */
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      00f62f61
    • tracing: fix sparse warnings: fix (un-)signedness · 5e39841c
      Committed by Hannes Eder
      Fix these sparse warnings:
      
        kernel/trace/ring_buffer.c:70:37: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:84:39: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:96:43: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2475:13: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2475:13: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2478:42: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2478:42: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2500:40: warning: incorrect type in argument 3 (different signedness)
        kernel/trace/ring_buffer.c:2505:44: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/ring_buffer.c:2507:46: warning: incorrect type in argument 2 (different signedness)
        kernel/trace/trace.c:2130:40: warning: incorrect type in argument 3 (different signedness)
        kernel/trace/trace.c:2280:40: warning: incorrect type in argument 3 (different signedness)
      Signed-off-by: Hannes Eder <hannes@hanneseder.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5e39841c
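      A tiny runnable illustration of the kind of mismatch sparse flags here:
      a pointer to a signed integer handed to a function that expects an
      unsigned one. Matching the variable's type to the parameter silences the
      warning; the example is hypothetical, not taken from the files above.

        #include <stdio.h>

        static void read_counter(unsigned int *out)
        {
                *out = 42;
        }

        int main(void)
        {
                /* int value; read_counter(&value);   <-- "different signedness" */
                unsigned int value;                 /* use the type the API expects */

                read_counter(&value);
                printf("%u\n", value);
                return 0;
        }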
    • tracing: fix typos in comments · c3706f00
      Committed by Wenji Huang
      Impact: clean up.
      
      Fix typos in the comments.
      Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      c3706f00
  21. 10 February 2009, 2 commits
    • ring_buffer: fix ring_buffer_read_page() · 667d2412
      Committed by Lai Jiangshan
      Impact: change the API and initialize bpage when copying
      
      ring_buffer_read_page()/rb_remove_entries() may be called for
      a partially consumed page.
      
      Add a parameter for rb_remove_entries() and make it update
      cpu_buffer->entries correctly for partially consumed pages.
      
      ring_buffer_read_page() now returns the offset to the next event.
      
      Initialize the bpage's time_stamp when the return value is 0.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      667d2412
    • ring_buffer: fix typing mistake · b85fa01e
      Committed by Lai Jiangshan
      Impact: Fix bug
      
      I found several very curious lines. They are so curious that they may
      have been caused by a typing mistake.
      
      When (cpu_buffer->reader_page == cpu_buffer->commit_page):
      
      1) Nothing is actually copied out, because bpage has been reassigned to
         the reader page itself:
         bpage = cpu_buffer->reader_page->page;
         memcpy(bpage->data, cpu_buffer->reader_page->page->data + read ... )
      2) We need to update cpu_buffer->reader_page->read, but
         "cpu_buffer->reader_page += read;" is not the right way to do it.
      
      [
        This bug was a typo. The cpu_buffer->reader_page is a page pointer
        and not an index into the page. The line should have been
        cpu_buffer->reader_page->read += read.  The other changes
        by Lai are nice cleanups to the code.  - SDR
      ]
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      b85fa01e
  22. 08 February 2009, 2 commits
    • ring-buffer: use generic version of in_nmi · a81bd80a
      Committed by Steven Rostedt
      Impact: clean up
      
      Now that a generic in_nmi() is available, this patch removes the
      special code in the ring buffer and uses the generic version
      instead.
      
      With this change, I was also able to rename the "arch_ftrace_nmi_enter"
      back to "ftrace_nmi_enter" and remove the code from the ring buffer.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      a81bd80a
    • ring-buffer: add NMI protection for spinlocks · 78d904b4
      Committed by Steven Rostedt
      Impact: prevent deadlock in NMI
      
      Writes to the ring buffer are not yet totally lockless. When a writer
      crosses a page, it grabs a per-cpu spinlock to protect against a reader.
      The spinlocks taken by a writer do not protect against other writers,
      since a writer can only write to its own per-cpu buffer; they protect
      against readers, which can touch any cpu buffer. The writers are made
      reentrant by having the spinlocks disable interrupts.
      
      The problem arises when an NMI writes to the buffer, and that write
      crosses a page boundary. If it grabs a spinlock, it can be racing
      with another writer (since disabling interrupts does not protect
      against NMIs) or with a reader on the same CPU. Luckily, most of the
      users are not reentrant, which protects against this issue. But if a
      user of the ring buffer becomes reentrant (which is what the ring
      buffers do allow) and the NMI also writes to the ring buffer, then
      we risk a deadlock.
      
      This patch moves the ftrace_nmi_enter() called by nmi_enter() into the
      ring buffer code. It renames the current ftrace_nmi_enter(), which is
      used by arch-specific code, to arch_ftrace_nmi_enter() and updates
      the Kconfig to handle it.
      
      When an NMI fires, it sets a per-cpu variable in the ring buffer
      code and clears it when the NMI exits. If a write to the ring buffer
      crosses a page boundary inside an NMI, a trylock is used on the spin
      lock instead. If the spinlock cannot be acquired, the entry
      is discarded (see the sketch after this entry).
      
      This bug appeared in the ftrace work in the RT tree, where event tracing
      is reentrant. This workaround solved the deadlocks that appeared there.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      78d904b4
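      A hedged sketch of the entry/exit side of the mechanism described above
      (names assumed; this is not the exact kernel source): nmi_enter() and
      nmi_exit() call these hooks, which flag the per-cpu "in NMI" state that
      makes the page-crossing path fall back to a trylock.

        #include <linux/percpu.h>

        static DEFINE_PER_CPU(int, rb_in_nmi);      /* assumed name for the per-cpu flag */

        void ftrace_nmi_enter(void)
        {
                __get_cpu_var(rb_in_nmi)++;
        }

        void ftrace_nmi_exit(void)
        {
                __get_cpu_var(rb_in_nmi)--;
        }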
  23. 06 February 2009, 1 commit
    • ring_buffer: remove unused flags parameter · 0a987751
      Committed by Arnaldo Carvalho de Melo
      Impact: API change, cleanup
      
      From ring_buffer_{lock_reserve,unlock_commit}.
      
      $ codiff /tmp/vmlinux.before /tmp/vmlinux.after
      linux-2.6-tip/kernel/trace/trace.c:
        trace_vprintk              |  -14
        trace_graph_return         |  -14
        trace_graph_entry          |  -10
        trace_function             |   -8
        __ftrace_trace_stack       |   -8
        ftrace_trace_userstack     |   -8
        tracing_sched_switch_trace |   -8
        ftrace_trace_special       |  -12
        tracing_sched_wakeup_trace |   -8
       9 functions changed, 90 bytes removed, diff: -90
      
      linux-2.6-tip/block/blktrace.c:
        __blk_add_trace |   -1
       1 function changed, 1 bytes removed, diff: -1
      
      /tmp/vmlinux.after:
       10 functions changed, 91 bytes removed, diff: -91
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Frédéric Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0a987751
  24. 22 January 2009, 3 commits
  25. 21 January 2009, 1 commit
    • ring_buffer: reset write when reserve buffer fail · 551b4048
      Committed by Lai Jiangshan
      Impact: reset struct buffer_page.write when an interrupt storm occurs
      
      If struct buffer_page.write is not reset, any subsequent commit will
      corrupt the ring_buffer:
      
      static inline void
      rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
      {
      	......
      		cpu_buffer->commit_page->commit =
      			cpu_buffer->commit_page->write;
      	......
      }
      
      when "if (RB_WARN_ON(cpu_buffer, next_page == reader_page))", ring_buffer
      is disabled, but some reserved buffers may haven't been committed.
      we need reset struct buffer_page.write.
      
      when "if (unlikely(next_page == cpu_buffer->commit_page))", ring_buffer
      is still available, we should not corrupt it.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      551b4048
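      A hedged fragment of the failure path described above (tail, tail_page
      and BUF_PAGE_SIZE are assumed internal names; the surrounding code is
      elided): when the reserve is abandoned, wind buffer_page.write back so
      rb_set_commit_to_write() cannot later publish the unwritten bytes.

         out_reset:
                /* undo the speculative advance of the write index */
                if (tail <= BUF_PAGE_SIZE)
                        local_set(&tail_page->write, tail);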
  26. 20 January 2009, 1 commit
    • ring-buffer: fix alignment problem · 082605de
      Committed by Steven Rostedt
      Impact: fix to allow some archs to use the ring buffer
      
      Commits in the ring buffer are checked by pointer arithmetic.
      If the calculation is incorrect, then the commits will never take
      place and the buffer will simply fill up and report an error.
      
      Each page in the ring buffer has a small header:
      
      struct buffer_data_page {
      	u64		time_stamp;
      	local_t		commit;
      	unsigned char	data[];
      };
      
      Unfortunately, some of the calculations used sizeof(struct buffer_data_page)
      to know the size of the header. But this is incorrect on some archs,
      where sizeof(struct buffer_data_page) does not equal
      offsetof(struct buffer_data_page, data), and on those archs, the commits
      are never processed.
      
      This patch replaces the sizeof with offsetof.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      082605de
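      A runnable user-space illustration of why the two can disagree: on
      architectures where the struct's alignment forces trailing padding,
      sizeof() counts that padding while offsetof(..., data) does not, and the
      data[] payload really begins at the offsetof() value. The kernel types
      are replaced with plain C equivalents so this compiles anywhere.

        #include <stdio.h>
        #include <stddef.h>

        struct buffer_data_page {
                unsigned long long time_stamp;   /* stands in for u64 */
                long               commit;       /* stands in for local_t */
                unsigned char      data[];
        };

        int main(void)
        {
                printf("sizeof   = %zu\n", sizeof(struct buffer_data_page));
                printf("offsetof = %zu\n", offsetof(struct buffer_data_page, data));
                return 0;
        }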