1. 11 Nov 2008 (13 commits)
    • tracing: add a tracer to catch execution time of kernel functions · 15e6cb36
      Frederic Weisbecker authored
      Impact: add new tracing plugin which can trace full (entry+exit) function calls
      
      This tracer uses the low level function return ftrace plugin to
      measure the execution time of the kernel functions.
      
      The first field is the caller of the function, the second is the
      measured function, and the last one is the execution time in
      nanoseconds.
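      
      As a rough userspace analogue of what the tracer records per call
      (an illustration only, not the tracer itself; work() and the clock
      choice are hypothetical stand-ins), the entry+exit measurement
      amounts to:
      
        #include <stdio.h>
        #include <time.h>
        
        /* Hypothetical stand-in for a traced kernel function. */
        static void work(void)
        {
            volatile unsigned long n = 0;
            for (unsigned long i = 0; i < 1000000; i++)
                n += i;
        }
        
        int main(void)
        {
            struct timespec t0, t1;
        
            clock_gettime(CLOCK_MONOTONIC, &t0);    /* function entry */
            work();
            clock_gettime(CLOCK_MONOTONIC, &t1);    /* function return */
        
            long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                         + (t1.tv_nsec - t0.tv_nsec);
            printf("work() took %lld ns\n", ns);    /* execution-time field */
            return 0;
        }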
      
      - v3:
      
      - HAVE_FUNCTION_RET_TRACER has been added. Each arch that supports
        ftrace return tracing should enable it.
      - ftrace_return_stub becomes ftrace_stub.
      - CONFIG_FUNCTION_RET_TRACER now depends on CONFIG_FUNCTION_TRACER
      - Return-trace printing can be used by other tracers in trace.c
      - Adapt to the new tracing API (no more ctrl_update callback)
      - Correct the check of "disabled" during insertion.
      - Minor changes...
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      15e6cb36
    • tracing, x86: add low level support for ftrace return tracing · caf4b323
      Frederic Weisbecker authored
      Impact: add infrastructure for function-return tracing
      
      Add low level support for ftrace return tracing.
      
      This plug-in stores return addresses on the thread_info structure of
      the current task.
      
      The index of the current return address is initialized when the task
      is the first one (init) and when a process forks (for the child). It
      does not need to be reinitialized on sys_execve, because after this
      syscall the task still has to return through the kernel functions it
      called.
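      
      A minimal userspace sketch of such a per-task return-address stack
      (the depth, field names, and addresses are hypothetical, not the
      actual thread_info layout):
      
        #include <stdio.h>
        
        #define RET_STACK_DEPTH 50                 /* hypothetical depth */
        
        /* Per-task in the kernel; one instance suffices for this sketch. */
        struct task_ret_stack {
            int curr_idx;                          /* index of current entry */
            unsigned long ret_addrs[RET_STACK_DEPTH];
        };
        
        /* Push the real return address at function entry. */
        static int push_return(struct task_ret_stack *s, unsigned long addr)
        {
            if (s->curr_idx >= RET_STACK_DEPTH - 1)
                return -1;                         /* full: skip tracing */
            s->ret_addrs[++s->curr_idx] = addr;
            return 0;
        }
        
        /* Pop it back when the traced function returns. */
        static unsigned long pop_return(struct task_ret_stack *s)
        {
            return s->ret_addrs[s->curr_idx--];
        }
        
        int main(void)
        {
            struct task_ret_stack s = { .curr_idx = -1 };  /* init/fork state */
        
            push_return(&s, 0xc0100000UL);                 /* at entry */
            printf("return to %#lx\n", pop_return(&s));    /* at exit */
            return 0;
        }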
      
      Note that the code of return_to_handler was suggested by Steven
      Rostedt, as were almost all of the improvement ideas in this v3.
      
      As a safety measure, arch/x86/kernel/process_32.c is not traced,
      because __switch_to() changes the current task during its execution.
      That could cause inconsistency in the stored return address for this
      function, even though no crash occurred in testing with tracing
      enabled on this function.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      caf4b323
    • ring-buffer: replace most bug ons with warn on and disable buffer · f536aafc
      Steven Rostedt authored
      This patch replaces most of the BUG_ONs in the ring_buffer code with
      RB_WARN_ON variants. It adds some more variants as needed for the
      replacement. This lets the buffer die nicely and still warn the user.
      
      One BUG_ON remains in the code, because it detects a bad pointer
      passed in by the calling function, not a bug in the ring buffer
      code itself.
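      
      A self-contained sketch of the pattern (the macro body and the
      counter are stand-ins for the ring buffer's actual internals):
      warn and disable the buffer instead of taking the machine down.
      
        #include <stdio.h>
        #include <stdatomic.h>
        
        /* Hypothetical stand-in for the buffer's record-disable counter. */
        static atomic_int record_disabled;
        
        /* On an "impossible" condition, warn and disable further recording
         * rather than panicking the way BUG_ON would. */
        #define RB_WARN_ON_SKETCH(cond)                                 \
            do {                                                        \
                if (cond) {                                             \
                    atomic_fetch_add(&record_disabled, 1);              \
                    fprintf(stderr, "ring-buffer warn %s:%d: %s\n",     \
                            __FILE__, __LINE__, #cond);                 \
                }                                                       \
            } while (0)
        
        int main(void)
        {
            RB_WARN_ON_SKETCH(1 == 1);   /* trips: buffer dies nicely */
            printf("record_disabled = %d, system still alive\n",
                   atomic_load(&record_disabled));
            return 0;
        }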
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f536aafc
    • ftrace: prevent ftrace_special from recursion · 5aa1ba6a
      Steven Rostedt authored
      Impact: stop ftrace_special from recursion
      
      ftrace_special is used to help debug areas of the kernel. Because of
      this, if it is placed in certain locations, the fact that it allows
      recursion can become a problem if the kernel developer using it does
      not realize that.
      
      This patch changes ftrace_special to not allow recursion into itself
      to make it more robust.
      
      It also switches from disabling preemption to disabling interrupts,
      to prevent any loss of trace entries.
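      
      A minimal sketch of the recursion guard (a single flag here, where
      the kernel would use per-CPU state with interrupts disabled):
      
        #include <stdio.h>
        
        static int in_ftrace_special;   /* per-CPU in the real code */
        
        static void ftrace_special_sketch(int depth)
        {
            if (in_ftrace_special)      /* already inside: refuse to recurse */
                return;
            in_ftrace_special = 1;
        
            printf("traced at depth %d\n", depth);
            if (depth < 3)
                ftrace_special_sketch(depth + 1);  /* inner call bails out */
        
            in_ftrace_special = 0;
        }
        
        int main(void)
        {
            ftrace_special_sketch(0);   /* prints once, not four times */
            return 0;
        }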
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5aa1ba6a
    • Merge branch 'tracing/urgent' into tracing/ftrace · e0cb4ebc
      Ingo Molnar authored
      Conflicts:
      	kernel/trace/trace.c
      e0cb4ebc
    • Merge branch 'devel' of... · 45b86a96
      Ingo Molnar authored
      Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/urgent
      45b86a96
    • ring-buffer: prevent infinite looping on time stamping · 4143c5cb
      Steven Rostedt authored
      Impact: removal of unnecessary looping
      
      The lockless part of the ring buffer allows for reentry into the code
      from interrupts. A timestamp is taken, a test is performed, and if it
      detects that an interrupt occurred that did tracing, it tries again.
      
      The problem arises if the timestamp code itself causes a trace.
      The detection will see this and loop again. The difference between
      this and an interrupt doing tracing is that this will fail every time
      and cause an infinite loop.
      
      Currently, we test whether the loop repeats 1000 times; if so, we
      produce a warning and disable the ring buffer.
      
      The problem with this approach is that it makes it difficult to perform
      some types of tracing (tracing the timestamp code itself).
      
      Each trace entry has a delta timestamp from the previous entry.
      If a trace entry is reserved but an interrupt occurs and traces before
      the previous entry is committed, the delta timestamp for that entry
      will be zero. This actually makes sense in terms of tracing, because
      the interrupt entry happened before the preempted entry was committed,
      so one may consider the two as happening at the same time. The order
      is still preserved in the buffer.
      
      With this idea, instead of trying to get a new timestamp if an interrupt
      made it in between the timestamp and the test, the entry could simply
      make the delta zero and continue. This will prevent interrupts or
      tracers in the timer code from causing the above loop.
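      
      A sketch of the reserve-path change under these assumptions (the
      "interrupted" flag stands in for the real reentrancy detection, and
      the names are illustrative):
      
        #include <stdio.h>
        
        static unsigned long long last_commit_ts = 1000;  /* fake clock state */
        
        /* Old behavior: retry for a clean timestamp, which loops forever
         * if the timestamp code itself is traced. New behavior: keep the
         * entry and record a zero delta, treating it as simultaneous with
         * the entry that interrupted it. */
        static unsigned long long reserve_delta(unsigned long long now,
                                                int interrupted)
        {
            if (interrupted)
                return 0;            /* same instant; order still preserved */
            unsigned long long delta = now - last_commit_ts;
            last_commit_ts = now;
            return delta;
        }
        
        int main(void)
        {
            printf("clean reserve:       delta=%llu\n", reserve_delta(1500, 0));
            printf("interrupted reserve: delta=%llu\n", reserve_delta(1600, 1));
            return 0;
        }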
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      4143c5cb
    • ftrace: disable tracing on resize · bf5e6519
      Steven Rostedt authored
      Impact: fix for bug on resize
      
      This patch addresses the bug found here:
      
       http://bugzilla.kernel.org/show_bug.cgi?id=11996
      
      When ftrace was converted to the new unified trace buffer, resizing of
      the buffer was not protected as well as it was originally. If tracing
      is performed while a resize occurs, the buffer can be corrupted.
      
      This patch disables all ftrace buffer modifications before a resize
      takes place.
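      
      The shape of the fix, as a hedged sketch (the function names and the
      disable counter are illustrative, not the actual ftrace API):
      
        #include <stdio.h>
        #include <stdatomic.h>
        
        static atomic_int record_disabled;   /* hypothetical disable counter */
        
        static void tracing_off(void) { atomic_fetch_add(&record_disabled, 1); }
        static void tracing_on(void)  { atomic_fetch_sub(&record_disabled, 1); }
        
        /* Refuse all buffer writes while the resize is in flight, so a
         * concurrent trace cannot corrupt the buffer. */
        static void resize_buffer(unsigned long new_size)
        {
            tracing_off();
            /* ... reallocate buffer pages here, free of writers ... */
            printf("resized to %lu bytes with tracing off\n", new_size);
            tracing_on();
        }
        
        int main(void)
        {
            resize_buffer(1UL << 20);
            return 0;
        }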
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      bf5e6519
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound-2.6 · 3ad4f597
      Linus Torvalds authored
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound-2.6:
        ALSA: hda - Make the HP EliteBook 8530p use AD1884A model laptop
        ALSA: gusextreme: Fix build errors
        ALSA: hdsp: check for iobox and upload firmware during ioctl
        ALSA: HDSP: check for io box before uploading firmware
        ALSA: hda - Add another HP model (6730s) for AD1884A
        alsa: fix snd_BUG_on() and friends
        ALSA: hda - Add a quirk for MEDION MD96630
        ALSA: hda - Limit the number of GPIOs show in proc
      3ad4f597
    • 6b425660
    • ALSA: hda - Make the HP EliteBook 8530p use AD1884A model laptop · 25424831
      Travis Place authored
      Added a QUIRK to patch_analog.c for the HP Elitebook 8530p
      (IDs 0x103c:0x30e7) to use AD1884A model 'laptop' by default.
      Playback and Capture confirmed working.
      Signed-off-by: Travis Place <wishie@wishie.net>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      25424831
    • libata: revert convert-to-block-tagging patches · 8a8bc223
      Tejun Heo authored
      This patch reverts the following three commits which convert libata to
      use block layer tagging.
      
       43a49cbd
       e013e13b
       2fca5ccf
      
      Although using block layer tagging is the right direction, due to the
      tight coupling among tag number, data structure allocation and
      hardware command slot allocation, libata doesn't work correctly with
      the current conversion.
      
      The biggest problem is guaranteeing that tag 0 is always used for
      non-NCQ commands.  Due to the way blk-tag is implemented and how SCSI
      starts and finishes requests, such a guarantee can't be made.  I'm not
      sure whether this would actually break any low level driver, but it
      doesn't look like a good idea to break such an assumption given the
      frailty of ATA controllers.
      
      So, for the time being, keep using the old dumb in-libata qc
      allocation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a8bc223
  2. 10 Nov 2008 (13 commits)
  3. 09 Nov 2008 (11 commits)
  4. 08 Nov 2008 (3 commits)
    • sched: improve sched_clock() performance · 0d12cdd5
      Ingo Molnar authored
      In scheduler-intensive workloads, native_read_tsc() overhead accounts
      for 20% of total system overhead:
      
         659567  system_call        41222.9375
         686796  schedule             435.7843
         718382  __switch_to          665.1685
         823875  switch_mm           4526.7857
        1883122  native_read_tsc    55385.9412
        9761990  total                  2.8468
      
      This is in large part due to the rdtsc_barrier() that is done before
      and after reading the TSC.
      
      But sched_clock() is not a precise clock in the GTOD sense, so using
      such barriers is completely pointless. Remove the barriers and use
      them only in vget_cycles().
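      
      For illustration, a userspace, x86-only sketch of an unordered TSC
      read (it mirrors the idea, not the kernel's exact native_read_tsc()):
      
        #include <stdio.h>
        #include <stdint.h>
        
        /* No barrier around rdtsc: the CPU may reorder the read, which is
         * fine for a fast, roughly-monotonic scheduler clock that need
         * not be GTOD-precise. */
        static inline uint64_t rdtsc_unordered(void)
        {
            uint32_t lo, hi;
        
            __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }
        
        int main(void)
        {
            uint64_t t0 = rdtsc_unordered();
            uint64_t t1 = rdtsc_unordered();
        
            printf("back-to-back rdtsc delta: %llu cycles\n",
                   (unsigned long long)(t1 - t0));
            return 0;
        }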
      
      This improves lat_ctx performance by about 5%.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d12cdd5
    • ftrace: display start of CPU buffer in trace output · a309720c
      Steven Rostedt authored
      Impact: change in trace output
      
      Because the trace buffers are per-CPU ring buffers, the start of
      the trace can be confusing. If one CPU is very active at the
      end of the trace, its history will not go as far back as the
      other CPU traces.  This means that output for a particular CPU
      may not appear in the first part of a trace.
      
      To help annotate what is happening, and to prevent any more
      confusion, this patch adds a line that annotates the start of
      a CPU buffer output.
      
      For example:
      
             automount-3495  [001]   184.596443: dnotify_parent <-vfs_write
      [...]
             automount-3495  [001]   184.596449: dput <-path_put
             automount-3496  [002]   184.596450: down_read_trylock <-do_page_fault
      [...]
                  sshd-3497  [001]   184.597069: up_read <-do_page_fault
                <idle>-0     [000]   184.597074: __exit_idle <-exit_idle
      [...]
             automount-3496  [002]   184.597257: filemap_fault <-__do_fault
                <idle>-0     [003]   184.597261: exit_idle <-smp_apic_timer_interrupt
      
      Note, parsers of a trace output should always ignore any lines that
      start with a '#'.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a309720c
    • ftrace: force pass of preemptoff selftest · 769c48eb
      Steven Rostedt authored
      Impact: preemptoff not tested in selftest
      
      Due to the BKL no longer being preemptible, the selftest of the
      preemptoff code cannot be run. It requires being called with
      preemption enabled, but since the BKL is held, that is no longer
      the case.
      
      This patch simply skips those tests if it detects that the context
      is not preemptible. The following will now show up in the test output:
      
      Testing tracer preemptoff: can not test ... force PASSED
      Testing tracer preemptirqsoff: can not test ... force PASSED
      
      When the BKL is removed, or becomes preemptible once again, the
      tests will be performed.
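      
      A sketch of the skip logic under these assumptions
      (preempt_count_sketch stands in for the kernel's preempt_count(),
      and the function name is illustrative):
      
        #include <stdio.h>
        
        static int preempt_count_sketch = 1;  /* nonzero: not preemptible */
        
        static int trace_selftest_preemptoff(void)
        {
            if (preempt_count_sketch) {
                printf("can not test ... force PASSED\n");
                return 0;                     /* forced pass, test skipped */
            }
            /* ... run the real preempt-off latency test here ... */
            return 0;
        }
        
        int main(void)
        {
            return trace_selftest_preemptoff();
        }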
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      769c48eb