1. 22 January 2013, 11 commits
    • ftrace: Move ARCH_SUPPORTS_FTRACE_SAVE_REGS in Kconfig · 06aeaaea
      Committed by Masami Hiramatsu
      Move the SAVE_REGS support flag into Kconfig and rename it to
      CONFIG_DYNAMIC_FTRACE_WITH_REGS. This also introduces
      CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS, which indicates that the
      architecture-dependent part of ftrace has code that saves the full
      register set. CONFIG_DYNAMIC_FTRACE_WITH_REGS, on the other hand,
      indicates that this code is enabled (see the sketch after this
      entry).
      
      Link: http://lkml.kernel.org/r/20120928081516.3560.72534.stgit@ltc138.sdl.hitachi.co.jp
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
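
      A minimal sketch of what the new options gate, assuming
      <linux/ftrace.h> is included; the callback name is hypothetical and
      the signature is the full-register ftrace callback form of this era:

       #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
       /* Built only when the architecture advertises
        * HAVE_DYNAMIC_FTRACE_WITH_REGS and the user enabled it. */
       static void my_regs_callback(unsigned long ip, unsigned long parent_ip,
       			      struct ftrace_ops *op, struct pt_regs *regs)
       {
       	/* regs points at the full register snapshot saved by the arch code */
       }
       #endif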
    • tracing/fgraph: Add max_graph_depth to limit function_graph depth · 8741db53
      Committed by Steven Rostedt
      Add the file max_graph_depth to the debug tracing directory that lets
      the user define the depth of the function graph.
      
      A very useful operation is to set the depth to 1. Then it traces only
      the first function that is called when entering the kernel. This can
      be used to determine what system operations interrupt a process.
      
      For example, to work on NOHZ processes (single tasks running without
      a timer tick), if any interrupt goes off and preempts that task, this
      code will show it happening.
      
        # cd /sys/kernel/debug/tracing
        # echo 1 > max_graph_depth
        # echo function_graph > current_tracer
        # cat per_cpu/cpu<cpu-of-process>/trace
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/lockdep: Disable lockdep first in entering NMI · 0f1ac8fd
      Committed by Steven Rostedt
      When function tracing runs with either debug locks enabled or
      preempt-disable tracing, add_preempt_count() is traced. This is an
      issue for lockdep combined with function tracing: as function
      tracing can disable interrupts, and lockdep records that change,
      lockdep may not be able to handle this recursion if it happens from
      an NMI context.
      
      The first thing that an NMI does is:
      
       #define nmi_enter()					\
      	do {							\
      		ftrace_nmi_enter();				\
      		BUG_ON(in_nmi());				\
      		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
      		lockdep_off();					\
      		rcu_nmi_enter();				\
      		trace_hardirq_enter();				\
      	} while (0)
      
      When add_preempt_count() is traced and the tracing callback disables
      interrupts, it jumps into the lockdep code. There are some places in
      lockdep that can't handle this re-entrance, which causes lockdep to
      fail.
      
      As lockdep_off() (and lockdep_on()) is simply:
      
      void lockdep_off(void)
      {
      	current->lockdep_recursion++;
      }
      
      and is never traced, it can be called first in nmi_enter(), with
      lockdep_on() called last in nmi_exit() (see the sketch after this
      entry).
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
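
      A sketch of the reordered nmi_enter() implied by the description
      above (nmi_exit() is not quoted here, so it is only summarized in a
      comment):

       #define nmi_enter()					\
      	do {							\
      		lockdep_off();	/* moved: now runs first */	\
      		ftrace_nmi_enter();				\
      		BUG_ON(in_nmi());				\
      		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
      		rcu_nmi_enter();				\
      		trace_hardirq_enter();				\
      	} while (0)

       /* Symmetrically, lockdep_on() becomes the last call made in nmi_exit(). */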
    • tracing: Remove unneeded check of max_tr->buffer before tracing_reset · 84c6cf0d
      Committed by Steven Rostedt
      tracing_reset_online_cpus() now checks whether the buffer is
      allocated or NULL, so there is no need to check before calling it
      with max_tr.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add checks if tr->buffer is NULL in tracing_reset{_online_cpus} · a5416411
      Committed by Hiraku Toyooka
      max_tr->buffer could be NULL in the tracing_reset{_online_cpus}. In this
      case, a NULL pointer dereference happens, so we should return immediately
      from these functions.
      
      Note, the current code does not call tracing_reset*() with max_tr
      when its buffer is NULL, but future code will. This patch is needed
      to prevent that future code from crashing (see the sketch after this
      entry).
      
      Link: http://lkml.kernel.org/r/20121219070234.31200.93863.stgit@liselsia
      Signed-off-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
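
      A minimal sketch of the guard described above (the rest of the
      function body is abridged):

       void tracing_reset_online_cpus(struct trace_array *tr)
       {
       	struct ring_buffer *buffer = tr->buffer;

       	if (!buffer)	/* e.g. max_tr before its buffer is allocated */
       		return;

       	/* ... reset the per-CPU buffers as before ... */
       }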
    • tracing/syscalls: Make local functions static · 6aea49cb
      Committed by Fengguang Wu
      Some functions in the syscall tracing code are used only locally in
      the file, but they are declared global. Convert them to static
      functions.
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Verify target file before registering a uprobe event · d24d7dbf
      Committed by Jovi Zhang
      Without this patch, a uprobe event can be registered for a
      directory, even though enabling such an event would fail anyway.
      
      Example:
      $ echo 'p /bin:0x4245c0' > /sys/kernel/debug/tracing/uprobe_events
      
      However, directories are not valid targets for uprobes, so verify
      that the target is a regular file during probe registration (see the
      sketch after this entry).
      
      Link: http://lkml.kernel.org/r/20130103004212.690763002@goodmis.org
      
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Jovi Zhang <bookjovi@gmail.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      [ cleaned up whitespace and removed redundant IS_DIR() check ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
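
      A minimal sketch of the check described above, roughly as it could
      sit in the uprobe_events parser (the surrounding path lookup and the
      fail_address_parse label are assumed from context):

       	inode = igrab(path.dentry->d_inode);
       	path_put(&path);

       	/* only regular files are valid uprobe targets; reject directories */
       	if (!inode || !S_ISREG(inode->i_mode)) {
       		ret = -EINVAL;
       		goto fail_address_parse;
       	}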
    • tracing: Use this_cpu_ptr per-cpu helper · d8a0349c
      Committed by Shan Wei
      typeof(&buffer) is a pointer to an array of 1024 char, i.e.
      char (*)[1024]. But typeof(&buffer[0]) is a pointer to char, which
      matches the return type of get_trace_buf(). As is well known, the
      value of &buffer is equal to &buffer[0], so returning
      this_cpu_ptr(&percpu_buffer->buffer[0]) avoids the type cast (see
      the illustration after this entry).
      
      Link: http://lkml.kernel.org/r/50A1A800.3020102@gmail.com
      Reviewed-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Shan Wei <davidshan@tencent.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
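
      A small standalone illustration of the type distinction above (pa
      and pc are made-up names, not kernel code):

       	char buffer[1024];

       	char (*pa)[1024] = &buffer;	/* pointer to the whole array     */
       	char *pc = &buffer[0];		/* pointer to the first character */

       	/*
       	 * pa and pc hold the same address, but only pc has the plain
       	 * "char *" type that get_trace_buf() returns, so using
       	 * &percpu_buffer->buffer[0] needs no cast.
       	 */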
    • ring-buffer: Remove unnecessary recursive call in rb_advance_iter() · 771e0384
      Committed by Steven Rostedt
      The original ring-buffer code had special checks at the start
      of rb_advance_iter() and instead of repeating them again at the
      end of the function if a certain condition existed, I just did
      a recursive call to rb_advance_iter() because the special condition
      would cause rb_advance_iter() to return early (after the checks).
      
      But as things have changed, the special checks no longer exist and
      the only thing done for the special condition is to call
      rb_inc_iter() and return. Instead of making a confusing recursive
      call, just call rb_inc_iter() directly.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix sparse warning with is_signed_type() macro · 418c59e4
      Committed by Steven Rostedt
      Sparse complains when is_signed_type() is used on a pointer.
      This macro is needed for the format output used for ftrace
      and perf, to know if a binary field is a signed type or not.
      The is_signed_type() macro is used against all fields that are
      recorded by events to automate the operation.
      
      The problem sparse has is with the current way is_signed_type()
      works:
      
        ((type)-1 < 0)
      
      If "type" is a poiner, than sparse does not like it being compared
      to an integer (zero). The simple fix is to just give zero the
      same type. The runtime result stays the same.
      Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
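
      A sketch of the fixed macro as the description implies (the zero is
      cast to the same type, so for pointers the comparison is against a
      null pointer rather than a plain integer):

       	#define is_signed_type(type)	(((type)(-1)) < (type)0)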
    • ftrace: Be first to run code modification on modules · c1bf08ac
      Committed by Steven Rostedt
      If some other kernel subsystem has a module notifier and adds a
      kprobe to an ftrace mcount point (now that kprobes work on ftrace
      points), then when the ftrace notifier runs it will fail and disable
      ftrace, as well as the kprobes that are attached to ftrace points.
      
      Here's the error:
      
       WARNING: at kernel/trace/ftrace.c:1618 ftrace_bug+0x239/0x280()
       Hardware name: Bochs
       Modules linked in: fat(+) stap_56d28a51b3fe546293ca0700b10bcb29__8059(F) nfsv4 auth_rpcgss nfs dns_resolver fscache xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack lockd sunrpc ppdev parport_pc parport microcode virtio_net i2c_piix4 drm_kms_helper ttm drm i2c_core [last unloaded: bid_shared]
       Pid: 8068, comm: modprobe Tainted: GF            3.7.0-0.rc8.git0.1.fc19.x86_64 #1
       Call Trace:
        [<ffffffff8105e70f>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff81134106>] ? __probe_kernel_read+0x46/0x70
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffff8105e76a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff810fd189>] ftrace_bug+0x239/0x280
        [<ffffffff810fd626>] ftrace_process_locs+0x376/0x520
        [<ffffffff810fefb7>] ftrace_module_notify+0x47/0x50
        [<ffffffff8163912d>] notifier_call_chain+0x4d/0x70
        [<ffffffff810882f8>] __blocking_notifier_call_chain+0x58/0x80
        [<ffffffff81088336>] blocking_notifier_call_chain+0x16/0x20
        [<ffffffff810c2a23>] sys_init_module+0x73/0x220
        [<ffffffff8163d719>] system_call_fastpath+0x16/0x1b
       ---[ end trace 9ef46351e53bbf80 ]---
       ftrace failed to modify [<ffffffffa0180000>] init_once+0x0/0x20 [fat]
        actual: cc:bb:d2:4b:e1
      
      A kprobe was added to the init_once() function in the fat module on load.
      But this happened before ftrace could have touched the code. As ftrace
      hadn't run yet, the kprobe system had no idea it was an ftrace point
      and simply added a breakpoint to the code (the 0xcc in
      cc:bb:d2:4b:e1).
      
      Then when ftrace went to modify the location from a call to mcount/fentry
      into a nop, it didn't see a call op, but instead it saw the breakpoint op
      and not knowing what to do with it, ftrace shut itself down.
      
      The solution is to simply give the ftrace module notifier the
      maximum priority (see the sketch after this entry). This should have
      been done regardless, as the core-code ftrace modification also
      happens very early in boot. This brings the module modification
      closer to the core modification.
      
      Link: http://lkml.kernel.org/r/20130107140333.593683061@goodmis.org
      
      Cc: stable@vger.kernel.org
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reported-by: Frank Ch. Eigler <fche@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
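
      A minimal sketch of the priority bump described above (the struct
      name is an assumption; INT_MAX stands in for "max priority"):

       	static struct notifier_block ftrace_module_nb = {
       		.notifier_call	= ftrace_module_notify,
       		.priority	= INT_MAX,	/* run before other module notifiers */
       	};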
  2. 18 January 2013, 2 commits
  3. 17 January 2013, 22 commits
  4. 16 January 2013, 4 commits
  5. 15 January 2013, 1 commit
    • ALSA: hda/hdmi - Work around "alsactl restore" errors · 6f54c361
      Committed by Takashi Iwai
      When "alsactl restore" is performed on HDMI codecs, it tries to
      restore the channel map value since the channel map controls are
      writable.  But hdmi_chmap_ctl_put() returns -EBADFD when no PCM stream
      is assigned yet, and this results in an error message from alsactl.
      Although the error is harmless, it's certainly ugly and can be
      regarded as a regression.
      
      As a workaround, this patch changes the return code in such a case
      to zero for making others happy (see the sketch after this entry).
      (A slight excuse is: when the chmap is changed through the proper
      alsa-lib API, the PCM status is checked there anyway, so we don't
      have to be too strict on the kernel side.)
      
      Cc: <stable@vger.kernel.org> [v3.7+]
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
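
      A fragment-level sketch of the workaround described above (the exact
      condition checked inside hdmi_chmap_ctl_put() is assumed here):

       	if (!substream || !substream->runtime)
       		return 0;	/* no PCM stream assigned yet: report success
       				 * so "alsactl restore" sees no error */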