1. 15 Sep 2010, 2 commits
    • tracing: Do not trace in irq when funcgraph-irq option is zero · b304d044
      Committed by Steven Rostedt
      When the function graph tracer's funcgraph-irq option is zero, disable
      tracing in IRQs. The option now has two effects:
      
      1) When reading the trace file, do not display the functions that
         happen in interrupt context (when detected)
      
      2) [*new*] When recording a trace, skip those that are detected
         to be in interrupt by the 'in_irq()' function
      
      Note, in_irq() is updated at irq_enter() and irq_exit(). There are
      still functions recorded by the function graph tracer that are in
      interrupt context but outside the irq_enter/exit() routines.
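
      A minimal sketch of the record-time check, assuming the helper and
      flag names used here (the actual patch may name them differently);
      the idea is simply to bail out of the graph entry callback when the
      option is off and in_irq() reports hard-irq context:

      /* sketch only: ftrace_graph_skip_irqs stands in for whatever state
       * the funcgraph-irq option actually toggles */
      static inline int ftrace_graph_ignore_irqs(void)
      {
              if (!ftrace_graph_skip_irqs)    /* funcgraph-irq set: record irq context too */
                      return 0;

              return in_irq();                /* option off: skip hard-irq context */
      }

      int trace_graph_entry(struct ftrace_graph_ent *trace)
      {
              if (ftrace_graph_ignore_irqs())
                      return 0;               /* do not record this function */
              /* ... normal recording path ... */
              return 1;
      }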
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add funcgraph-irq option for function graph tracer. · 2bd16212
      Committed by Jiri Olsa
      It's handy to be able to disable the irq-related output and not have
      to jump over each irq-related block when you have no interest in it.
      
      The option is by default enabled, so there's no change to
      current behaviour. It affects only the final output, so all
      the irq related data stay in the ring buffer.
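
      A rough sketch of how such an option plugs into the function graph
      tracer (the option bit and helper names below are illustrative, not
      the exact upstream diff):

      /* one extra entry in the function_graph tracer's option table */
      static struct tracer_opt trace_opts[] = {
              /* ... existing funcgraph-* options ... */
              { TRACER_OPT(funcgraph-irq, TRACE_GRAPH_PRINT_IRQS) },
              { } /* empty entry terminates the table */
      };

      /* output-time filter (sketch): entry_in_irq stands in for however
       * the tracer detects that an entry was recorded inside an irq */
      static enum print_line_t maybe_skip_irq_entry(u32 flags, int entry_in_irq)
      {
              if (!(flags & TRACE_GRAPH_PRINT_IRQS) && entry_in_irq)
                      return TRACE_TYPE_HANDLED;      /* swallow this line */
              return TRACE_TYPE_UNHANDLED;            /* print as usual */
      }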
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <20100907145344.GC1912@jolsa.brq.redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 13 Sep 2010, 4 commits
  3. 10 Sep 2010, 25 commits
    • perf: Fix perf_init_event() · e5f4d339
      Committed by Peter Zijlstra
      We ought to return -ENOENT when none of the registered PMUs
      recognise the requested event.
      
      This fixes a boot crash that occurs if no PMU is available
      but the NMI watchdog tries to register an event.
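
      A sketch of the intended behaviour, assuming the pmu list/SRCU
      naming used elsewhere in this series; the point is that falling off
      the end of the loop must yield ERR_PTR(-ENOENT) rather than a stale
      pointer:

      struct pmu *perf_init_event(struct perf_event *event)
      {
              struct pmu *pmu = NULL;
              int idx, ret;

              idx = srcu_read_lock(&pmus_srcu);
              list_for_each_entry_rcu(pmu, &pmus, entry) {
                      ret = pmu->event_init(event);
                      if (!ret)                       /* this pmu took the event */
                              goto unlock;
                      if (ret != -ENOENT) {           /* real error: propagate */
                              pmu = ERR_PTR(ret);
                              goto unlock;
                      }
              }
              pmu = ERR_PTR(-ENOENT);                 /* nobody recognised it */
      unlock:
              srcu_read_unlock(&pmus_srcu, idx);
              return pmu;
      }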
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Ensure we call add_event_to_ctx() with the right locks held · cee010ec
      Committed by Peter Zijlstra
      Even though we call it from the inherit path, where the child is
      not yet accessible, we need to hold ctx->lock; add_event_to_ctx()
      assumes IRQs are disabled.
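
      Roughly, the inherit path ends up doing something like this (a
      sketch, not the literal diff):

      /* in the inherit path, after setting up child_event/child_ctx: */
      unsigned long flags;

      raw_spin_lock_irqsave(&child_ctx->lock, flags);
      add_event_to_ctx(child_event, child_ctx);  /* wants IRQs off, ctx->lock held */
      raw_spin_unlock_irqrestore(&child_ctx->lock, flags);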
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • irq: Fix circular headers dependency · 3b8fad3e
      Committed by Frederic Weisbecker
      asm-generic/hardirq.h needs asm/irq.h, which might include
      linux/interrupt.h, as in the sparc32 case. At that point we need
      the generic irq_cpustat definitions, but those are included later
      in asm-generic/hardirq.h.

      So delay the inclusion of irq.h from asm-generic/hardirq.h a bit;
      it doesn't need to be included early.
      
      This fixes:
      
       include/linux/interrupt.h: In function '__raise_softirq_irqoff':
       include/linux/interrupt.h:414: error: implicit declaration of function 'local_softirq_pending'
       include/linux/interrupt.h:414: error: lvalue required as left operand of assignment
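
      Conceptually the generic header ends up in this shape after the
      reorder (a sketch of the resulting include order, not the exact
      file contents):

      #ifndef __ASM_GENERIC_HARDIRQ_H
      #define __ASM_GENERIC_HARDIRQ_H

      #include <linux/cache.h>
      #include <linux/threads.h>

      typedef struct {
              unsigned int __softirq_pending;
      } ____cacheline_aligned irq_cpustat_t;

      #include <linux/irq_cpustat.h>  /* generic irq_cpustat accessors */
      #include <linux/irq.h>          /* pulled in only after the definitions above */

      #endif /* __ASM_GENERIC_HARDIRQ_H */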
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
      Cc: mathieu.desnoyers@efficios.com
      Cc: rostedt@goodmis.org
      Cc: nhorman@tuxdriver.com
      Cc: scott.a.mcmillan@intel.com
      Cc: eric.dumazet@gmail.com
      Cc: kaneshige.kenji@jp.fujitsu.com
      Cc: davem@davemloft.net
      Cc: izumi.taku@jp.fujitsu.com
      Cc: kosaki.motohiro@jp.fujitsu.com
      LKML-Reference: <20100908122557.GA5310@nowhere>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix up delayed_put_task_struct() · 4e231c79
      Committed by Peter Zijlstra
      I missed a perf_event_ctxp user when converting it to an array. Pull this
      last user into perf_event.c as well and fix it up.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Optimize context ops · 1b9a644f
      Committed by Peter Zijlstra
      Assuming we don't mix events of different pmus onto a single context
      (with the exception of software events inside a hardware group), we
      can now assume that all events on a particular context belong to the
      same pmu, hence we can disable that pmu for entire context operations.
      
      This reduces the amount of hardware writes.
      
      The exception for swevents comes from the fact that the sw pmu disable
      is a nop.
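
      The shape of the change, sketched under the assumption that the
      context carries a ctx->pmu pointer (as set up elsewhere in this
      series):

      static void ctx_sched_out(struct perf_event_context *ctx,
                                struct perf_cpu_context *cpuctx,
                                enum event_type_t event_type)
      {
              /* every event in ctx shares ctx->pmu, so one pmu-level
               * disable brackets the whole operation instead of one
               * hardware write per event */
              perf_pmu_disable(ctx->pmu);
              /* ... walk the pinned/flexible lists and deschedule events ... */
              perf_pmu_enable(ctx->pmu);
      }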
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Provide a separate task context for swevents · 89a1e187
      Committed by Peter Zijlstra
      Since software events are always schedulable, mixing them up with
      hardware events (which are not) can lead to funny scheduling oddities.
      
      Giving them their own context solves this.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Multiple task contexts · 8dc85d54
      Committed by Peter Zijlstra
      Provide the infrastructure for multiple task contexts.
      
      A more flexible approach would have resulted in more pointer chases
      in the scheduling hot-paths. This approach has the limitation of a
      static number of task contexts.
      
      Since I expect most external PMUs to be system-wide, or at least
      node-wide (as per the Intel uncore unit), they won't actually need
      a task context.
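
      A sketch of the static-array approach (field and enum names as
      recalled from this series; treat them as illustrative):

      /* each pmu declares which per-task context slot it uses */
      enum perf_event_task_context {
              perf_invalid_context = -1,
              perf_hw_context = 0,
              perf_sw_context,
              perf_nr_task_contexts,
      };

      struct task_struct {
              /* ... */
              /* a fixed number of slots: no extra pointer chasing in the
               * scheduler hot-paths, at the cost of a static limit */
              struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
              /* ... */
      };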
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Clean up perf_event_context allocation · eb184479
      Committed by Peter Zijlstra
      Unify the two perf_event_context allocation sites.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Move some code around · 97dee4f3
      Committed by Peter Zijlstra
      Move all inherit code near each other.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per-pmu-per-cpu contexts · 108b02cf
      Committed by Peter Zijlstra
      Allocate per-cpu contexts per pmu.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per cpu-context rotation timer · b5ab4cd5
      Committed by Peter Zijlstra
      Give each cpu-context its own timer so that it is a self-contained
      entity; this eases the way for per-pmu-per-cpu contexts and also
      provides the basic infrastructure to allow different rotation
      times per pmu.
      
      Things to look at:
       - folding the tick and these TICK_NSEC timers
       - separate task context rotation
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Remove the swevent hash-table from the cpu context · b28ab83c
      Committed by Peter Zijlstra
      Separate the swevent hash-table from the cpu_context bits in
      preparation for per pmu cpu contexts.
      
      This keeps the swevent hash a global entity.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Separate find_get_context() from event initialization · c3f00c70
      Committed by Peter Zijlstra
      Separate find_get_context() from the event allocation and
      initialization so that we may make find_get_context() depend
      on the event pmu in a later patch.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Remove the sysfs bits · 15ac9a39
      Committed by Peter Zijlstra
      Neither the overcommit nor the reservation sysfs parameter was
      actually working; remove them, as they'll only get in the way.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Rework the PMU methods · a4eaf7f1
      Committed by Peter Zijlstra
      Replace pmu::{enable,disable,start,stop,unthrottle} with
      pmu::{add,del,start,stop}, all of which take a flags argument.
      
      The new interface extends the capability to stop a counter while
      keeping it scheduled on the PMU. We replace the throttled state with
      the generic stopped state.
      
      This also allows us to efficiently stop/start counters over certain
      code paths (like IRQ handlers).
      
      It also allows scheduling a counter without it starting, allowing for
      a generic frozen state (useful for rotating stopped counters).
      
      The stopped state is implemented in two different ways, depending on
      how the architecture implemented the throttled state:
      
       1) We disable the counter:
          a) the pmu has per-counter enable bits, we flip that
          b) we program a NOP event, preserving the counter state
      
       2) We store the counter state and ignore all read/overflow events
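
      The reworked method set looks roughly like this (a sketch; flag
      names follow the series, exact values omitted):

      struct pmu {
              /* ... */
              /*
               * add/del: attach/detach the event to/from the PMU while its
               * context is scheduled in; PERF_EF_START asks add() to also
               * start counting.
               * start/stop: start or stop a counter that stays scheduled;
               * PERF_EF_RELOAD / PERF_EF_UPDATE control whether the counter
               * value is reprogrammed or saved.
               */
              int  (*add)   (struct perf_event *event, int flags);
              void (*del)   (struct perf_event *event, int flags);
              void (*start) (struct perf_event *event, int flags);
              void (*stop)  (struct perf_event *event, int flags);
              /* ... */
      };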
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Shrink hw_perf_event · fa407f35
      Committed by Peter Zijlstra
      Use hw_perf_event::period_left instead of hw_perf_event::remaining
      and win back 8 bytes.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Default PMU ops · ad5133b7
      Committed by Peter Zijlstra
      Provide default implementations for the pmu txn methods; this
      allows us to remove some conditional code.
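
      A minimal sketch of the idea: install no-op defaults at registration
      time so callers never check for NULL transaction methods (helper
      names are illustrative):

      static void perf_pmu_nop_void(struct pmu *pmu) { }
      static int  perf_pmu_nop_int(struct pmu *pmu)  { return 0; }

      /* in perf_pmu_register(), before publishing the pmu: */
      if (!pmu->start_txn) {
              pmu->start_txn  = perf_pmu_nop_void;
              pmu->commit_txn = perf_pmu_nop_int;     /* nothing to commit: success */
              pmu->cancel_txn = perf_pmu_nop_void;
      }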
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per PMU disable · 33696fc0
      Committed by Peter Zijlstra
      Changes perf_disable() into perf_pmu_disable().
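
      A sketch of what the per-pmu variant amounts to, assuming a per-cpu
      disable count hanging off each pmu:

      void perf_pmu_disable(struct pmu *pmu)
      {
              int *count = this_cpu_ptr(pmu->pmu_disable_count);

              if (!(*count)++)                /* first disable on this cpu */
                      pmu->pmu_disable(pmu);
      }

      void perf_pmu_enable(struct pmu *pmu)
      {
              int *count = this_cpu_ptr(pmu->pmu_disable_count);

              if (!--(*count))                /* last enable on this cpu */
                      pmu->pmu_enable(pmu);
      }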
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Reduce perf_disable() usage · 24cd7f54
      Committed by Peter Zijlstra
      Since the current perf_disable() usage is only an optimization,
      remove it for now. This eases the removal of the __weak
      hw_perf_enable() interface.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Unindent labels · 9ed6060d
      Committed by Peter Zijlstra
      Fixup random annoying style bits.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Register PMU implementations · b0a873eb
      Committed by Peter Zijlstra
      A simple registration interface for struct pmu; this provides the
      infrastructure for removing all the weak functions.
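
      The registration interface is roughly of this shape (a sketch; the
      upstream signature later grew extra arguments):

      static LIST_HEAD(pmus);
      static DEFINE_MUTEX(pmus_lock);
      static struct srcu_struct pmus_srcu;    /* initialised at core init */

      int perf_pmu_register(struct pmu *pmu)
      {
              mutex_lock(&pmus_lock);
              /* ... allocate per-cpu context state for this pmu ... */
              list_add_rcu(&pmu->entry, &pmus);
              mutex_unlock(&pmus_lock);
              return 0;
      }

      void perf_pmu_unregister(struct pmu *pmu)
      {
              mutex_lock(&pmus_lock);
              list_del_rcu(&pmu->entry);
              mutex_unlock(&pmus_lock);

              synchronize_srcu(&pmus_srcu);   /* wait out in-flight users */
              /* ... free per-cpu context state ... */
      }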
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Deconstify struct pmu · 51b0fe39
      Committed by Peter Zijlstra
      sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Merge branch 'perf/urgent' into perf/core · 2aa61274
      Committed by Ingo Molnar
      Merge reason: Pick up pending fixes before applying dependent new changes.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix CPU hotplug · 5e11637e
      Committed by Peter Zijlstra
      Since we have UP_PREPARE, we should also have UP_CANCELED.
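
      In the perf CPU-hotplug notifier this is just one more case label
      falling through to the teardown path (sketch):

      switch (action & ~CPU_TASKS_FROZEN) {
      case CPU_UP_PREPARE:
              perf_event_init_cpu(cpu);       /* set up per-cpu state */
              break;

      case CPU_UP_CANCELED:                   /* bring-up failed: undo UP_PREPARE */
      case CPU_DOWN_PREPARE:
              perf_event_exit_cpu(cpu);
              break;
      }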
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf, trace: Fix module leak · 9cb627d5
      Committed by Li Zefan
      Commit 1c024eca (perf, trace: Optimize tracepoints by using
      per-tracepoint-per-cpu hlist to track events) caused a module
      refcount leak.
      Reported-And-Tested-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4C7E1F12.8030304@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 08 Sep 2010, 5 commits
  5. 09 Sep 2010, 1 commit
    • tracing: Do not allow llseek to set_ftrace_filter · 9c55cb12
      Committed by Steven Rostedt
      Reading the file set_ftrace_filter does three things.
      
      1) shows whether or not filters are set for the function tracer
      2) shows what functions are set for the function tracer
      3) shows what triggers are set on any functions
      
      Item 3 is independent of items 1 and 2.
      
      The way this file currently works is that it is a state machine,
      and as you read it, it may change state. But this assumption breaks
      when you use lseek() on the file. The state machine gets out of sync
      and t_show() may use the wrong pointer and cause a kernel oops.
      
      Luckily, this will only kill the app that does the lseek, but the app
      dies while holding a mutex. This prevents anyone else from using the
      set_ftrace_filter file (or any other function tracing file for that matter).
      
      A real fix for this is to rewrite the code, but that is too much for
      a -rc release or stable. This patch simply disables llseek on the
      set_ftrace_filter() file for now, and we can do the proper fix for the
      next major release.
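
      A minimal sketch of the stopgap, assuming it boils down to pointing
      the file's llseek hook at the kernel's no_llseek helper (the
      surrounding file_operations entries are reproduced from memory and
      may not match the file exactly):

      static const struct file_operations ftrace_filter_fops = {
              .open    = ftrace_filter_open,
              .read    = seq_read,
              .write   = ftrace_filter_write,
              .llseek  = no_llseek,   /* lseek() now fails instead of
                                         desyncing the seq state machine */
              .release = ftrace_regex_release,
      };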
      Reported-by: Robert Swiecki <swiecki@google.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Eugene Teo <eugene@redhat.com>
      Cc: vendor-sec@lst.de
      Cc: <stable@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 08 Sep 2010, 1 commit
    • perf: Add a script to show packets processing · 359d5106
      Committed by Koki Sanagi
      Add a perf script which shows packet processing and the processing
      time. It helps us to investigate networking or network devices.

      If you want to use it, install perf and record perf.data with the
      script.

      If you give it a script to run, perf gathers records until the
      script ends. If not, you must press Ctrl-C to stop recording.

      A report can then be generated from the recorded data.

      A few options can limit the output:

      tx: show only tx packet processing
      rx: show only rx packet processing
      dev=: show processing on this device
      debug: work in debug mode; it shows buffer status.

      For example, to show received-packet processing associated with
      eth4:
      
      106133.171439sec cpu=0
        irq_entry(+0.000msec irq=24:eth4)
               |
        softirq_entry(+0.006msec)
               |
               |---netif_receive_skb(+0.010msec skb=f2d15900 len=100)
               |            |
               |      skb_copy_datagram_iovec(+0.039msec 10291::10291)
               |
        napi_poll_exit(+0.022msec eth4)
      
      This perf script helps us to analyze the processing time of a
      transmit/receive sequence.
      Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Kaneshige Kenji <kaneshige.kenji@jp.fujitsu.com>
      Cc: Izumo Taku <izumi.taku@jp.fujitsu.com>
      Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Scott Mcmillan <scott.a.mcmillan@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      LKML-Reference: <4C72439D.3040001@jp.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  7. 07 Sep 2010, 2 commits
    • skb: Add tracepoints to freeing skb · 07dc22e7
      Committed by Koki Sanagi
      This patch adds a tracepoint to consume_skb and adds trace_kfree_skb
      before __kfree_skb in skb_free_datagram_locked and net_tx_action.
      Combined with the tracepoint on dev_hard_start_xmit, we can check
      how long it takes to free transmitted packets, and from that we can
      calculate how many packets the driver was holding at that time. This
      is useful when dropped transmitted packets are a problem.
      
                  sshd-6828  [000] 112689.258154: consume_skb: skbaddr=f2d99bb8
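
      A sketch of where the hooks land (simplified; the second argument of
      trace_kfree_skb records the call site, following the existing
      kfree_skb tracepoint):

      void consume_skb(struct sk_buff *skb)
      {
              /* ... reference counting as before ... */
              trace_consume_skb(skb);         /* skb freed on the normal path */
              __kfree_skb(skb);
      }

      /* and in net_tx_action() / skb_free_datagram_locked(), just before
       * the existing __kfree_skb() call: */
      trace_kfree_skb(skb, net_tx_action);
      __kfree_skb(skb);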
      Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Kaneshige Kenji <kaneshige.kenji@jp.fujitsu.com>
      Cc: Izumo Taku <izumi.taku@jp.fujitsu.com>
      Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Scott Mcmillan <scott.a.mcmillan@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <4C724364.50903@jp.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • netdev: Add tracepoints to netdev layer · cf66ba58
      Committed by Koki Sanagi
      This patch adds tracepoints to dev_queue_xmit, dev_hard_start_xmit,
      netif_rx and netif_receive_skb. These tracepoints help you monitor a
      network driver's input and output.
      
                <idle>-0     [001] 112447.902030: netif_rx: dev=eth1 skbaddr=f3ef0900 len=84
                <idle>-0     [001] 112447.902039: netif_receive_skb: dev=eth1 skbaddr=f3ef0900 len=84
                  sshd-6828  [000] 112447.903257: net_dev_queue: dev=eth4 skbaddr=f3fca538 len=226
                  sshd-6828  [000] 112447.903260: net_dev_xmit: dev=eth4 skbaddr=f3fca538 len=226 rc=0
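
      A sketch of the hook placement matching the sample output above
      (argument lists simplified; treat them as illustrative):

      /* rx side: netif_rx() and netif_receive_skb() gain a hook */
      trace_netif_rx(skb);                    /* resp. trace_netif_receive_skb(skb) */

      /* queueing side: dev_queue_xmit() reports the skb as it is queued */
      trace_net_dev_queue(skb);

      /* driver hand-off: dev_hard_start_xmit() reports the return code */
      trace_net_dev_xmit(skb, rc);            /* rc is the driver's xmit return value */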
      Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Kaneshige Kenji <kaneshige.kenji@jp.fujitsu.com>
      Cc: Izumo Taku <izumi.taku@jp.fujitsu.com>
      Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Scott Mcmillan <scott.a.mcmillan@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <4C72431E.3000901@jp.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>