1. 15 May 2010 (8 commits)
    • tracing: Comment the use of event_mutex with trace event flags · 1eaa4787
      Committed by Steven Rostedt
      The flags variable is protected by the event_mutex when modifying,
      but the event_mutex is not held when reading the variable.
      
      This is due to the fact that the reads occur in critical sections where
      taking a mutex (or even a spinlock) is not wanted.
      
      But for the two flags that exist (enable and filter_active), the code
      is written so that reads of them do not need a lock.
      
      The enable flag is used just to know if the event is enabled or not
      and its use is always under the event_mutex. Whether or not the event
      is actually enabled is really determined by the tracepoint being
      registered. The flag is just a way to let the code know if the tracepoint
      is registered.
      
      The filter_active is different. It is read without the lock. If it
      is set, then the event probes jump to the filter code. There can be a
      slight mismatch between filters available and filter_active. If the flag is
      set but no filters are available, the code safely jumps to a filter nop.
      If the flag is not set and the filters are available, then the filters
      are skipped. This is acceptable since filters are usually set before
      tracing starts, or they are set by humans, who would not notice the
      slight delay that this causes.
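
      For illustration, a minimal sketch of the lockless read described
      above (not the kernel's actual code; the flag and helper names are
      only indicative):

        /* The probe checks the flag without taking event_mutex. A stale
         * value is harmless: one event either runs a filter nop or skips
         * a brand-new filter, as explained above. */
        if (call->flags & TRACE_EVENT_FL_FILTERED)
                run_filter(call, entry);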
      
      v2: Fixed typo: "cacheing" -> "caching"
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Combine event filter_active and enable into single flags field · 553552ce
      Committed by Steven Rostedt
      The filter_active and enable fields each use an int (4 bytes) to
      store a single flag. We can save 4 bytes per event by combining the
      two into a single integer.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
      4894871	1012292	 861512	6768675	 674823	vmlinux.flags
      
      This gives us another 5K in savings.
      
      The modification of both the enable and filter fields is done
      under the event_mutex, so it is still safe to combine the two.
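
      A minimal sketch of the idea (the bit names are assumptions in the
      spirit of the patch, not necessarily the exact ones used):

        enum {
                TRACE_EVENT_FL_ENABLED_BIT,
                TRACE_EVENT_FL_FILTERED_BIT,
        };
        #define TRACE_EVENT_FL_ENABLED   (1 << TRACE_EVENT_FL_ENABLED_BIT)
        #define TRACE_EVENT_FL_FILTERED  (1 << TRACE_EVENT_FL_FILTERED_BIT)

        struct ftrace_event_call {
                /* ... */
                unsigned int    flags;  /* replaces two separate ints */
        };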
      
      Note: Although Mathieu gave his Acked-by, he would like it documented
       that the reads of flags are not protected by the mutex. The way the
       code works, these reads will not break anything, but will have a
       residual effect. Since this behavior is the same even before this
       patch, describing this situation is left to another patch, as this
       patch does not change the behavior, but just brought it to Mathieu's
       attention.
      
      v2: Updated the event trace self test to reflect this change.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove duplicate id information in event structure · 32c0edae
      Committed by Steven Rostedt
      Now that the trace_event structure is embedded in the ftrace_event_call
      structure, there is no need for the ftrace_event_call id field.
      The id field is the same as the trace_event type field.
      
      Removing the id and re-arranging the structure brings down the tracepoint
      footprint by another 5K.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4895024	1023812	 861512	6780348	 6775bc	vmlinux.print
      4894944	1018052	 861512	6774508	 675eec	vmlinux.id
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Move print functions into event class · 80decc70
      Committed by Steven Rostedt
      Currently, every event has its own trace_event structure. This is
      fine since the structure is needed anyway. But the print function
      structure (trace_event_functions) is now separate. Since the output
      of the trace event is done by the class (with the exception of events
      defined by DEFINE_EVENT_PRINT), it makes sense to have the class
      define the print functions that all events in the class can use.
      
      This matters even more for the syscall events, since all syscall events
      use the same class. The savings here is another 30K.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900382	1048964	 861512	6810858	 67ecea	vmlinux.init
      4900446	1049028	 861512	6810986	 67ed6a	vmlinux.preprint
      4895024	1023812	 861512	6780348	 6775bc	vmlinux.print
      
      To accomplish this, and to let the class know what event is being
      printed, the event structure is embedded in the ftrace_event_call
      structure. This should not be an issue since the event structure
      was created for each event anyway.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Allow events to share their print functions · a9a57763
      Committed by Steven Rostedt
      Multiple events may use the same method to print their data.
      Instead of having every event carry a pointer to its print functions,
      the trace_event structure now points to a trace_event_functions structure
      that holds the ways to print out the event.
      
      The event itself is now passed to the print function to let the print
      function know what kind of event it should print.
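
      A sketch of what such a structure can look like (the exact field set
      is assumed here; treat the names as indicative):

        struct trace_event_functions {
                trace_print_func        trace;
                trace_print_func        raw;
                trace_print_func        hex;
                trace_print_func        binary;
        };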
      
      This opens the door to consolidating the way several events print
      their output.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900382	1048964	 861512	6810858	 67ecea	vmlinux.init
      4900446	1049028	 861512	6810986	 67ed6a	vmlinux.preprint
      
      This change slightly increases the size but is needed for the next change.
      
      v3: Fix the branch tracer events to handle this change.
      
      v2: Fix the new function graph tracer event calls to handle this change.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Move raw_init from events to class · 0405ab80
      Committed by Steven Rostedt
      The raw_init function pointer in the event is used to initialize
      various kinds of events. The type of initialization needed usually
      depends on the kind of event it is.
      
      Two events with the same class will always have the same initialization
      function, so it makes sense to move this to the class structure.
      
      Perhaps even making a special system structure would work since
      the initialization is the same for all events within a system.
      But since there's no system structure (yet), this will just move it
      to the class.
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900375	1053380	 861512	6815267	 67fe23	vmlinux.fields
      4900382	1048964	 861512	6810858	 67ecea	vmlinux.init
      
      The text grew very slightly, but this is a one-time, constant growth
      caused by the changes to the C files that call the init code.
      The bigger savings is in data, which shrinks further the more events
      share a class.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Move fields from event to class structure · 2e33af02
      Committed by Steven Rostedt
      Move the defined fields from the event to the class structure.
      Since the fields of the event are defined by the class they belong
      to, it makes sense to have the class hold the information instead
      of the individual events. The events of the same class would just
      hold duplicate information.
      
      After this change the size of the kernel dropped another 3K:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4900252	1057412	 861512	6819176	 680d68	vmlinux.regs
      4900375	1053380	 861512	6815267	 67fe23	vmlinux.fields
      
      Although the text increased, this was mainly due to the C files
      having to adapt to the change. This is a constant increase; new
      tracepoints will not increase the text further. But the big drop is
      in the data size (as well as the allocations needed to hold the fields).
      This will give even more savings as more tracepoints are created.
      
      Note, if just TRACE_EVENT()s are used and not DECLARE_EVENT_CLASS()
      with several DEFINE_EVENT()s, then the savings will be lost. But
      we are pushing developers to consolidate events with DEFINE_EVENT()
      so this should not be an issue.
      
      The kprobes define a unique class for every new event, but they are
      dynamic, so this should not be an issue.
      
      The syscalls, however, have a single class, but the fields for the
      individual events are different. The syscalls use metadata to define
      the fields. I moved the fields list from the event to the metadata and
      added a "get_fields()" function to the class. This function is used
      to find the fields. For normal events and kprobes, get_fields() just
      returns a pointer to the fields list_head in the class. For syscall
      events, it returns the fields list_head in the metadata for the event.
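
      A minimal sketch of such a lookup helper, assuming names close to the
      ones described above (simplified, not the verbatim kernel code):

        struct list_head *trace_get_fields(struct ftrace_event_call *event_call)
        {
                if (!event_call->class->get_fields)
                        return &event_call->class->fields;
                return event_call->class->get_fields(event_call);
        }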
      
      v2:  Fixed the syscall fields. The syscall metadata needs a list
           of fields for both enter and exit.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove per event trace registering · 2239291a
      Committed by Steven Rostedt
      This patch removes the register functions of TRACE_EVENT() that enable
      and disable tracepoints. The registering of an event is now done
      directly in the trace_events.c file. The tracepoint_probe_register()
      is now called directly.
      
      The prototypes are no longer type checked, but this should not be
      an issue since the tracepoints are created automatically by the
      macros. If a prototype is incorrect in the TRACE_EVENT() macro, then
      other macros will catch it.
      
      The trace_event_class structure now holds the probes to be called
      by the callbacks. This removes the need for each event to have
      a separate pointer for the probe.
      
      To handle kprobes and syscalls, since they register probes in a
      different manner, a "reg" field is added to the ftrace_event_class
      structure. If the "reg" field is assigned, then it will be called for
      enabling and disabling of the probe for either ftrace or perf. To let
      the reg function know what is happening, a new enum (trace_reg) is
      created that has the type of control that is needed.
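
      A sketch of what a "reg" callback can look like, assuming the enum
      values below match the spirit of the patch (the register_my_probe()
      helpers are purely illustrative):

        enum trace_reg {
                TRACE_REG_REGISTER,
                TRACE_REG_UNREGISTER,
                TRACE_REG_PERF_REGISTER,
                TRACE_REG_PERF_UNREGISTER,
        };

        static int my_event_reg(struct ftrace_event_call *call, enum trace_reg type)
        {
                switch (type) {
                case TRACE_REG_REGISTER:
                        return register_my_probe(call);
                case TRACE_REG_UNREGISTER:
                        unregister_my_probe(call);
                        return 0;
                default:        /* perf variants handled the same way */
                        return 0;
                }
        }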
      
      With this new rework, the 82 kernel events and 618 syscall events
      have their footprint dramatically lowered:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint
      4900252	1057412	 861512	6819176	 680d68	vmlinux.regs
      
      The size went from 6863829 to 6819176 bytes, a total saving of about
      44K. With tracepoints being continuously added, it is critical that
      the footprint be kept minimal.
      
      v5: Added #ifdef CONFIG_PERF_EVENTS around a reference to perf
          specific structure in trace_events.c.
      
      v4: Fixed trace self tests to check probe because regfunc no longer
          exists.
      
      v3: Updated to handle void *data in beginning of probe parameters.
          Also added the tracepoint: check_trace_callback_type_##call().
      
      v2: Changed the callback probes to pass void * and typecast the
          value within the function.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 14 May 2010 (3 commits)
    • tracing: Let tracepoints have data passed to tracepoint callbacks · 38516ab5
      Committed by Steven Rostedt
      This patch adds data to be passed to tracepoint callbacks.
      
      The created functions from DECLARE_TRACE() now need a mandatory data
      parameter. For example:
      
      DECLARE_TRACE(mytracepoint, int value, value)
      
      Will create the register function:
      
      int register_trace_mytracepoint((void(*)(void *data, int value))probe,
                                      void *data);
      
      As the first argument, all callbacks (probes) must take a (void *data)
      parameter. So a callback for the above tracepoint will look like:
      
      void myprobe(void *data, int value)
      {
      }
      
      The callback may choose to ignore the data parameter.
      
      This change allows callbacks to register a private data pointer along
      with the function probe.
      
      	void mycallback(void *data, int value);
      
      	register_trace_mytracepoint(mycallback, mydata);
      
      Then the mycallback() will receive the "mydata" as the first parameter
      before the args.
      
      A more detailed example:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        /* In the C file */
      
        DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));
      
        [...]
      
             trace_mytracepoint(status);
      
        /* In a file registering this tracepoint */
      
        int my_callback(void *data, int status)
        {
      	struct my_struct *my_data = data;
      	[...]
        }
      
        [...]
      	my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
      	init_my_data(my_data);
      	register_trace_mytracepoint(my_callback, my_data);
      
      The same callback can also be registered to the same tracepoint as long
      as the data registered is different. Note, the data must also be used
      to unregister the callback:
      
      	unregister_trace_mytracepoint(my_callback, my_data);
      
      Because of the data parameter, tracepoints declared this way can not
      be declared without arguments. That is:
      
        DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());
      
      will cause an error.
      
      If no arguments are needed, a new macro can be used instead:
      
        DECLARE_TRACE_NOARGS(mytracepoint);
      
      Since there are no arguments, the proto and args fields are left out.
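
      For example, a small sketch of the no-argument form (names are
      illustrative; the probe still receives only its private data pointer):

        DECLARE_TRACE_NOARGS(mytracepoint);

        void my_callback(void *data)
        {
                struct my_struct *my_data = data;
                /* ... */
        }

        register_trace_mytracepoint(my_callback, my_data);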
      
      This is part of a series to make the tracepoint footprint smaller:
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint
      
      Again, this patch also increases the size of the kernel, but
      lays the ground work for decreasing it.
      
       v5: Fixed net/core/drop_monitor.c to handle these updates.
      
       v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
           #ifdef CONFIG_TRACE_POINTS, since the two are the same in both
           cases. The __DECLARE_TRACE() is what changes.
           Thanks to Frederic Weisbecker for pointing this out.
      
       v3: Made all register_* functions require data to be passed and
           all callbacks to take a void * parameter as its first argument.
           This makes the calling functions comply with C standards.
      
           Also added more comments to the modifications of DECLARE_TRACE().
      
       v2: Made the DECLARE_TRACE() have the ability to pass arguments
           and added a new DECLARE_TRACE_NOARGS() for tracepoints that
           do not need any arguments.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracepoints: Add check trace callback type · 53da59aa
      Committed by Mathieu Desnoyers
      This check is meant to be used by tracepoint users who do a direct cast of
      callbacks to (void *) for direct registration, thus bypassing the
      register_trace_##name and unregister_trace_##name checks.
      
      This makes it possible to ensure that the callback type matches the function
      type at the call site, but without generating any code.
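
      A sketch of the intended use, assuming the tracepoint from the previous
      commit's example (the probe name is illustrative; the macro expands to a
      static inline that the compiler type-checks and then discards):

        static void my_probe(void *data, int status);

        static void __init my_probe_check(void)
        {
                check_trace_callback_type_mytracepoint(my_probe);
        }
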
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      LKML-Reference: <20100430165959.GA25605@Krystal>
      CC: Ingo Molnar <mingo@elte.hu>
      CC: Andrew Morton <akpm@linux-foundation.org>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Arnaldo Carvalho de Melo <acme@redhat.com>
      CC: Lai Jiangshan <laijs@cn.fujitsu.com>
      CC: Li Zefan <lizf@cn.fujitsu.com>
      CC: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Create class struct for events · 8f082018
      Committed by Steven Rostedt
      This patch creates a ftrace_event_class struct that event structs point to.
      This class struct will be made to hold information used to modify the
      events. Currently the class struct only holds the event's system name.
      
      This patch slightly increases the size, but this change lays the ground work
      of other changes to make the footprint of tracepoints smaller.
      
      With 82 standard tracepoints, and 618 system call tracepoints
      (two tracepoints per syscall: enter and exit):
      
         text	   data	    bss	    dec	    hex	filename
      4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
      4914025	1088868	 861512	6864405	 68be15	vmlinux.class
      
      This patch also cleans up some stale comments in ftrace.h.
      
      v2: Fixed missing semi-colon in macro.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 11 May 2010 (2 commits)
    • sched, wait: Use wrapper functions · a93d2f17
      Committed by Changli Gao
      epoll should not touch flags in wait_queue_t. This patch introduces a new
      function, __add_wait_queue_exclusive(), for users who use the wait queue as
      a LIFO queue.
      
      __add_wait_queue_tail_exclusive() is introduced too, replacing
      add_wait_queue_exclusive_locked(). remove_wait_queue_locked() is removed, as
      it is a duplicate of __remove_wait_queue(), disliked by users, and with fewer
      users.
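
      A minimal sketch of the intended usage pattern (q is assumed to be a
      pointer to a wait_queue_head_t owned by the caller; not epoll's code):

        DEFINE_WAIT(wait);

        spin_lock_irq(&q->lock);
        __add_wait_queue_exclusive(q, &wait);   /* LIFO, exclusive wakeup */
        spin_unlock_irq(&q->lock);
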
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: <containers@lists.linux-foundation.org>
      LKML-Reference: <1273214006-2979-1-git-send-email-xiaosuo@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Revert "perf: Fix exit() vs PERF_FORMAT_GROUP" · e3174cfd
      Committed by Ingo Molnar
      This reverts commit 4fd38e45.
      
      It causes various crashes and hangs when events are activated.
      
      The cause is not fully understood yet but we need to revert it
      because the effects are severe.
      Reported-by: Stephane Eranian <eranian@google.com>
      Reported-by: Lin Ming <ming.m.lin@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 10 May 2010 (2 commits)
  5. 09 May 2010 (3 commits)
    • tracing: Factorize lock events in a lock class · 2c193c73
      Committed by Frederic Weisbecker
      lock_acquired, lock_contended and lock_release now share the
      same prototype and format. Let's factorize them into a lock
      event class.
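
      A condensed sketch of the factorization (field list trimmed; the exact
      TP_STRUCT__entry contents are assumed for illustration):

        DECLARE_EVENT_CLASS(lock,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip),
                TP_STRUCT__entry(__string(name, lock->name)),
                TP_fast_assign(__assign_str(name, lock->name);),
                TP_printk("%s", __get_str(name))
        );

        DEFINE_EVENT(lock, lock_release,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip)
        );
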
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Drop the nested field from lock_release event · 93135439
      Committed by Frederic Weisbecker
      Drop the nested field as we don't use it. Every nested state can
      already be computed from a state machine in post-processing.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Drop lock_acquired waittime field · 883a2a31
      Committed by Frederic Weisbecker
      Drop the waittime field from the lock_acquired event; we can
      calculate it by subtracting the matching lock_acquire event
      timestamp from the lock_acquired one.
      
      It is not needed and only wastes space in the traces.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
      Cc: Steven Rostedt <rostedt@goodmis.org>
  6. 08 May 2010 (1 commit)
    • cpu_stop: add dummy implementation for UP · bbf1bb3e
      Committed by Tejun Heo
      When !CONFIG_SMP, cpu_stop functions weren't defined at all which
      could lead to build failures if UP code uses cpu_stop facility.  Add
      dummy cpu_stop implementation for UP.  The waiting variants execute
      the work function directly with preempt disabled and
      stop_one_cpu_nowait() schedules a workqueue work.
      
      Makefile and ifdefs around the stop_machine implementation are updated to
      accommodate the CONFIG_SMP && !CONFIG_STOP_MACHINE case.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Ingo Molnar <mingo@elte.hu>
  7. 07 May 2010 (7 commits)
    • perf: Add group scheduling transactional APIs · 6bde9b6c
      Committed by Lin Ming
      Add group scheduling transactional APIs to struct pmu.
      These APIs will be implemented in arch code, based on Peter's idea as
      below.
      
      > the idea behind hw_perf_group_sched_in() is to not perform
      > schedulability tests on each event in the group, but to add the group
      > as a whole and then perform one test.
      >
      > Of course, when that test fails, you'll have to roll-back the whole
      > group again.
      >
      > So start_txn (or a better name) would simply toggle a flag in the pmu
      > implementation that will make pmu::enable() not perform the
      > schedulability test.
      >
      > Then commit_txn() will perform the schedulability test (so note the
      > method has to have a !void return value).
      >
      > This will allow us to use the regular
      > kernel/perf_event.c::group_sched_in() and all the rollback code.
      > Currently each hw_perf_group_sched_in() implementation duplicates all
      > the rollback code (with various bugs).
      
      ->start_txn:
      Start group events scheduling transaction; set a flag to make
      pmu::enable() not perform the schedulability test, which will be
      performed at commit time.
      
      ->commit_txn:
      Commit group events scheduling transaction, perform the group
      schedulability test as a whole.
      
      ->cancel_txn:
      Stop group events scheduling transaction, clear the flag so
      pmu::enable() will perform the schedulability test.
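
      A simplified sketch of how the core group scheduling path can use the
      transaction (locking omitted; event_sched_in() here stands in for the
      per-event enable step and is not the exact kernel helper):

        static int group_sched_in_sketch(struct perf_event *group, struct pmu *pmu)
        {
                struct perf_event *event;

                pmu->start_txn(pmu);

                if (event_sched_in(group))
                        goto fail;
                list_for_each_entry(event, &group->sibling_list, group_entry)
                        if (event_sched_in(event))
                                goto fail;

                if (!pmu->commit_txn(pmu))
                        return 0;       /* group passed the schedulability test */
        fail:
                pmu->cancel_txn(pmu);   /* roll the whole group back */
                return -EAGAIN;
        }
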
      Reviewed-by: Stephane Eranian <eranian@google.com>
      Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1272002160.5707.60.camel@minggr.sh.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf, x86: Improve the PEBS ABI · ab608344
      Committed by Peter Zijlstra
      Rename perf_event_attr::precise to perf_event_attr::precise_ip and
      widen it to 2 bits. This new field describes the required precision of
      the PERF_SAMPLE_IP field:
      
        0 - SAMPLE_IP can have arbitrary skid
        1 - SAMPLE_IP must have constant skid
        2 - SAMPLE_IP requested to have 0 skid
        3 - SAMPLE_IP must have 0 skid
      
      And modify the Intel PEBS code accordingly. The PEBS implementation
      now supports up to precise_ip == 2, where we perform the IP fixup.
      
      Also s/PERF_RECORD_MISC_EXACT/&_IP/ to clarify its meaning; this bit
      should be set for each PERF_SAMPLE_IP field known to match the actual
      instruction triggering the event.
      
      This new scheme allows for a PEBS mode that uses the buffer for more
      than a single event.
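
      A sketch of a user-space consumer of the widened field (a minimal
      example, assuming the values listed above; the kernel may refuse a
      precision level the hardware cannot provide):

        #include <linux/perf_event.h>

        struct perf_event_attr attr = {
                .type        = PERF_TYPE_HARDWARE,
                .config      = PERF_COUNT_HW_CPU_CYCLES,
                .sample_type = PERF_SAMPLE_IP,
                .precise_ip  = 2,       /* request 0 skid */
        };
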
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Fix exit() vs PERF_FORMAT_GROUP · 4fd38e45
      Committed by Peter Zijlstra
      Both Stephane and Corey reported that PERF_FORMAT_GROUP didn't work
      as expected if the task the counters were attached to quit before
      the read() call.
      
      The cause is that we unconditionally destroy the grouping when we
      remove counters from their context. Fix this by only doing this when
      we free the counter itself.
      Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Reported-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1273160566.5605.404.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Remove rq argument to the tracepoints · 27a9da65
      Committed by Peter Zijlstra
      struct rq isn't visible outside of sched.o, so it's nearly useless to
      expose the pointer; also, there are no users of it, so remove it.
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1272997616.1642.207.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: replace migration_thread with cpu_stop · 969c7921
      Committed by Tejun Heo
      Currently migration_thread is serving three purposes - migration
      pusher, context to execute active_load_balance() and forced context
      switcher for expedited RCU synchronize_sched.  All three roles are
      hardcoded into migration_thread() and determining which job is
      scheduled is slightly messy.
      
      This patch kills migration_thread and replaces all three uses with
      cpu_stop.  The three different roles of migration_thread() are
      split into three separate cpu_stop callbacks -
      migration_cpu_stop(), active_load_balance_cpu_stop() and
      synchronize_sched_expedited_cpu_stop() - and each use case now simply
      asks cpu_stop to execute the callback as necessary.
      
      synchronize_sched_expedited() was implemented with private
      preallocated resources and custom multi-cpu queueing and waiting
      logic, both of which are provided by cpu_stop.
      synchronize_sched_expedited_count is made atomic and all other shared
      resources along with the mutex are dropped.
      
      synchronize_sched_expedited() also implemented a check to detect cases
      where not all the callbacks got executed on their assigned cpus and
      fall back to synchronize_sched().  If called with cpu hotplug blocked,
      cpu_stop already guarantees that and the condition cannot happen;
      otherwise, stop_machine() would break.  However, this patch preserves
      the paranoid check using a cpumask to record on which cpus the stopper
      ran so that it can serve as a bisection point if something actually
      goes wrong there.
      
      Because the internal execution state is no longer visible,
      rcu_expedited_torture_stats() is removed.
      
      This patch also renames the cpu_stop threads from "stopper/%d" to
      "migration/%d".  The names of these threads ultimately don't matter
      and there's no reason to make unnecessary userland visible changes.
      
      With this patch applied, stop_machine() and sched now share the same
      resources.  stop_machine() is faster without wasting any resources and
      sched migration users are much cleaner.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Josh Triplett <josh@freedesktop.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
    • stop_machine: reimplement using cpu_stop · 3fc1f1e2
      Committed by Tejun Heo
      Reimplement stop_machine using cpu_stop.  As cpu stoppers are
      guaranteed to be available for all online cpus,
      stop_machine_create/destroy() are no longer necessary and removed.
      
      With resource management and synchronization handled by cpu_stop, the
      new implementation is much simpler.  Asking the cpu_stop to execute
      the stop_cpu() state machine on all online cpus with cpu hotplug
      disabled is enough.
      
      stop_machine itself doesn't need to manage any global resources
      anymore, so all per-instance information is rolled into struct
      stop_machine_data and the mutex and all static data variables are
      removed.
      
      The previous implementation created and destroyed RT workqueues as
      necessary, which made stop_machine() calls highly expensive on very
      large machines.  According to Dimitri Sivanich, preventing the dynamic
      creation/destruction makes booting more than twice as fast on very
      large machines.  cpu_stop resources are preallocated for all online
      cpus and should have the same effect.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
    • cpu_stop: implement stop_cpu[s]() · 1142d810
      Committed by Tejun Heo
      Implement a simplistic per-cpu maximum priority cpu monopolization
      mechanism.  A non-sleeping callback can be scheduled to run on one or
      multiple cpus with maximum priority, monopolizing those cpus.  This is
      primarily to replace and unify RT workqueue usage in stop_machine and
      the scheduler's migration_thread, which currently serves multiple
      purposes.
      
      Four functions are provided - stop_one_cpu(), stop_one_cpu_nowait(),
      stop_cpus() and try_stop_cpus().
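
      A small sketch of the intended usage (the callback, its argument and
      target_cpu are purely illustrative):

        /* Runs on the target cpu with maximum priority, preemption disabled. */
        static int drain_my_percpu_state(void *arg)
        {
                /* ... non-sleeping work ... */
                return 0;
        }

        /* ... caller context ... */
        int ret = stop_one_cpu(target_cpu, drain_my_percpu_state, NULL);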
      
      This is to allow clean sharing of resources among stop_cpu and all the
      migration thread users.  One stopper thread per cpu is created which
      is currently named "stopper/CPU".  This will eventually replace the
      migration thread and take on its name.
      
      * This facility was originally named cpuhog and lived in separate
        files, but Peter Zijlstra nacked the name, so it was renamed to
        cpu_stop and moved into stop_machine.c.
      
      * Better reporting of preemption leak as per Peter's suggestion.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
  8. 05 May 2010 (1 commit)
    • tracing: Fix tracepoint.h DECLARE_TRACE() to allow more than one header · 2e26ca71
      Committed by Steven Rostedt
      When more than one header is included under CREATE_TRACE_POINTS
      the DECLARE_TRACE() macro is not defined back to its original meaning
      and the second include will fail to initialize the TRACE_EVENT()
      and DECLARE_TRACE() correctly.
      
      To fix this the tracepoint.h file moves the define of DECLARE_TRACE()
      out of the #ifdef _LINUX_TRACEPOINT_H protection (just like the
      define of the TRACE_EVENT()). This way the define_trace.h will undef
      the DECLARE_TRACE() at the end and allow new headers to start
      from scratch.
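
      A sketch of what this fix allows (the header names are illustrative):

        #define CREATE_TRACE_POINTS
        #include <trace/events/foo.h>
        #include <trace/events/bar.h>   /* a second header now initializes correctly */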
      
      This patch also requires fixing the include/events/napi.h
      
      It currently uses DECLARE_TRACE() and should be converted to the TRACE_EVENT()
      format. But I'll leave that change to the authors of that file.
      Since the napi.h file depends on using CREATE_TRACE_POINTS
      and does not define its own DEFINE_TRACE(), it must use the define_trace.h
      method instead.
      
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 04 May 2010 (2 commits)
  10. 03 May 2010 (1 commit)
  11. 01 May 2010 (4 commits)
    • hw-breakpoints: Get the number of available registers on boot dynamically · feef47d0
      Committed by Frederic Weisbecker
      The breakpoint generic layer assumes that archs always know in advance
      the static number of address registers available to host breakpoints
      through the HBP_NUM macro.
      
      However this is not true for every arch. For example, ARM needs to get
      this information dynamically to handle the compatibility between
      different versions.
      
      To solve this, this patch proposes to drop the static HBP_NUM macro
      and let the arch provide the number of available slots through a
      new hw_breakpoint_slots() function. For archs that have
      CONFIG_HAVE_MIXED_BREAKPOINTS_REGS selected, it will be called once,
      since the same set of registers serves instruction and data breakpoints
      together.
      For the others it will be called first to get the number of
      instruction breakpoint registers and another time to get the
      data breakpoint registers; the targeted type is given as a
      parameter to hw_breakpoint_slots().
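
      A sketch of what an arch-side implementation can look like (the type
      constants and the helper functions here are assumptions, not the real
      ARM code):

        int hw_breakpoint_slots(int type)
        {
                if (type == TYPE_INST)
                        return read_num_inst_bp_regs();  /* from the debug architecture */
                return read_num_data_bp_regs();
        }
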
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Ingo Molnar <mingo@elte.hu>
    • hw-breakpoints: Separate constraint space for data and instruction breakpoints · 0102752e
      Committed by Frederic Weisbecker
      There are two prevailing ways for archs to implement hardware
      breakpoints.
      
      The first is to separate breakpoint address pattern definition
      space between data and instruction breakpoints. We then typically
      have distinct instruction address breakpoint registers
      and data address breakpoint registers, delivered with
      separate control registers for data and instruction breakpoints
      as well. This is the case of PowerPC and ARM, for example.
      
      The second consists of a merged breakpoint address space for data
      and instruction breakpoints. Address registers can host either an
      instruction or a data address, and the access mode for the breakpoint
      is defined in a control register. This is the case of x86 and SuperH.
      
      This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS config
      that archs can select if they belong to the second case. Those
      will have their slot allocation merged for instructions and
      data breakpoints.
      
      The others will have a separate slot tracking between data and
      instruction breakpoints.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
    • hw-breakpoints: Tag ptrace breakpoint as exclude_kernel · 73266fc1
      Committed by Frederic Weisbecker
      Tag ptrace breakpoints with the exclude_kernel attribute set. This
      will make it easier to set generic policies on breakpoints, when it
      comes to ensuring that nobody unprivileged tries to breakpoint on the kernel.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
    • USB: rename usb_buffer_alloc() and usb_buffer_free() · 073900a2
      Committed by Daniel Mack
      To make clearer what the functions actually do,
      
        usb_buffer_alloc() is renamed to usb_alloc_coherent()
        usb_buffer_free()  is renamed to usb_free_coherent()
      
      They should only be used in code which really needs DMA coherency.
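
      A small sketch of the renamed API in use (the device and length names
      are assumed; error handling trimmed):

        dma_addr_t dma;
        void *buf = usb_alloc_coherent(udev, len, GFP_KERNEL, &dma);

        if (!buf)
                return -ENOMEM;
        /* ... submit URBs that use buf/dma ... */
        usb_free_coherent(udev, len, buf, dma);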
      
      [added compatibility macros so we can convert things easier - gregkh]
      Signed-off-by: Daniel Mack <daniel@caiaq.de>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Pedro Ribeiro <pedrib@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
  12. 29 April 2010 (3 commits)
    • sctp: Fix skb_over_panic resulting from multiple invalid parameter errors (CVE-2010-1173) (v4) · 5fa782c2
      Committed by Neil Horman
      Ok, version 4
      
      Change Notes:
      1) Minor cleanups, from Vlad's notes
      
      Summary:
      
      Hey-
      	Recently, it was reported to me that the kernel could oops in the
      following way:
      
      <5> kernel BUG at net/core/skbuff.c:91!
      <5> invalid operand: 0000 [#1]
      <5> Modules linked in: sctp netconsole nls_utf8 autofs4 sunrpc iptable_filter
      ip_tables cpufreq_powersave parport_pc lp parport vmblock(U) vsock(U) vmci(U)
      vmxnet(U) vmmemctl(U) vmhgfs(U) acpiphp dm_mirror dm_mod button battery ac md5
      ipv6 uhci_hcd ehci_hcd snd_ens1371 snd_rawmidi snd_seq_device snd_pcm_oss
      snd_mixer_oss snd_pcm snd_timer snd_page_alloc snd_ac97_codec snd soundcore
      pcnet32 mii floppy ext3 jbd ata_piix libata mptscsih mptsas mptspi mptscsi
      mptbase sd_mod scsi_mod
      <5> CPU:    0
      <5> EIP:    0060:[<c02bff27>]    Not tainted VLI
      <5> EFLAGS: 00010216   (2.6.9-89.0.25.EL)
      <5> EIP is at skb_over_panic+0x1f/0x2d
      <5> eax: 0000002c   ebx: c033f461   ecx: c0357d96   edx: c040fd44
      <5> esi: c033f461   edi: df653280   ebp: 00000000   esp: c040fd40
      <5> ds: 007b   es: 007b   ss: 0068
      <5> Process swapper (pid: 0, threadinfo=c040f000 task=c0370be0)
      <5> Stack: c0357d96 e0c29478 00000084 00000004 c033f461 df653280 d7883180
      e0c2947d
      <5>        00000000 00000080 df653490 00000004 de4f1ac0 de4f1ac0 00000004
      df653490
      <5>        00000001 e0c2877a 08000800 de4f1ac0 df653490 00000000 e0c29d2e
      00000004
      <5> Call Trace:
      <5>  [<e0c29478>] sctp_addto_chunk+0xb0/0x128 [sctp]
      <5>  [<e0c2947d>] sctp_addto_chunk+0xb5/0x128 [sctp]
      <5>  [<e0c2877a>] sctp_init_cause+0x3f/0x47 [sctp]
      <5>  [<e0c29d2e>] sctp_process_unk_param+0xac/0xb8 [sctp]
      <5>  [<e0c29e90>] sctp_verify_init+0xcc/0x134 [sctp]
      <5>  [<e0c20322>] sctp_sf_do_5_1B_init+0x83/0x28e [sctp]
      <5>  [<e0c25333>] sctp_do_sm+0x41/0x77 [sctp]
      <5>  [<c01555a4>] cache_grow+0x140/0x233
      <5>  [<e0c26ba1>] sctp_endpoint_bh_rcv+0xc5/0x108 [sctp]
      <5>  [<e0c2b863>] sctp_inq_push+0xe/0x10 [sctp]
      <5>  [<e0c34600>] sctp_rcv+0x454/0x509 [sctp]
      <5>  [<e084e017>] ipt_hook+0x17/0x1c [iptable_filter]
      <5>  [<c02d005e>] nf_iterate+0x40/0x81
      <5>  [<c02e0bb9>] ip_local_deliver_finish+0x0/0x151
      <5>  [<c02e0c7f>] ip_local_deliver_finish+0xc6/0x151
      <5>  [<c02d0362>] nf_hook_slow+0x83/0xb5
      <5>  [<c02e0bb2>] ip_local_deliver+0x1a2/0x1a9
      <5>  [<c02e0bb9>] ip_local_deliver_finish+0x0/0x151
      <5>  [<c02e103e>] ip_rcv+0x334/0x3b4
      <5>  [<c02c66fd>] netif_receive_skb+0x320/0x35b
      <5>  [<e0a0928b>] init_stall_timer+0x67/0x6a [uhci_hcd]
      <5>  [<c02c67a4>] process_backlog+0x6c/0xd9
      <5>  [<c02c690f>] net_rx_action+0xfe/0x1f8
      <5>  [<c012a7b1>] __do_softirq+0x35/0x79
      <5>  [<c0107efb>] handle_IRQ_event+0x0/0x4f
      <5>  [<c01094de>] do_softirq+0x46/0x4d
      
      It's an skb_over_panic BUG halt that results from processing an init chunk in
      which too many of its variable length parameters are in some way malformed.
      
      The problem is in sctp_process_unk_param:
      if (NULL == *errp)
      	*errp = sctp_make_op_error_space(asoc, chunk,
      					 ntohs(chunk->chunk_hdr->length));
      
      	if (*errp) {
      		sctp_init_cause(*errp, SCTP_ERROR_UNKNOWN_PARAM,
      				 WORD_ROUND(ntohs(param.p->length)));
      		sctp_addto_chunk(*errp,
      			WORD_ROUND(ntohs(param.p->length)),
      				  param.v);
      
      When we allocate an error chunk, we assume that the worst case scenario requires
      that we have chunk_hdr->length bytes of data allocated, which would be correct nominally,
      given that we call sctp_addto_chunk for the violating parameter.  Unfortunately,
      we also, in sctp_init_cause, insert a sctp_errhdr_t structure into the error
      chunk, so the worst case situation in which all parameters are in violation
      requires chunk_hdr->length+(sizeof(sctp_errhdr_t)*param_count) bytes of data.
      
      The result of this error is that a deliberately malformed packet sent to a
      listening host can cause a remote DOS, described in CVE-2010-1173:
      http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-1173
      
      I've tested the fix below successfully and confirmed that it fixes the issue.
      We move to a strategy whereby we allocate a fixed size error chunk and ignore
      errors we don't have space to report.
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Vlad Yasevich <vladislav.yasevich@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: Fix oops when sending queued ASCONF chunks · c0786693
      Committed by Vlad Yasevich
      When we finish processing an ASCONF_ACK chunk, we try to send
      the next queued ASCONF.  This action runs the sctp state
      machine recursively, and it's not prepared to do so.
      
      kernel BUG at kernel/timer.c:790!
      invalid opcode: 0000 [#1] SMP
      last sysfs file: /sys/module/ipv6/initstate
      Modules linked in: sha256_generic sctp libcrc32c ipv6 dm_multipath
      uinput 8139too i2c_piix4 8139cp mii i2c_core pcspkr virtio_net joydev
      floppy virtio_blk virtio_pci [last unloaded: scsi_wait_scan]
      
      Pid: 0, comm: swapper Not tainted 2.6.34-rc4 #15 /Bochs
      EIP: 0060:[<c044a2ef>] EFLAGS: 00010286 CPU: 0
      EIP is at add_timer+0xd/0x1b
      EAX: cecbab14 EBX: 000000f0 ECX: c0957b1c EDX: 03595cf4
      ESI: cecba800 EDI: cf276f00 EBP: c0957aa0 ESP: c0957aa0
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      Process swapper (pid: 0, ti=c0956000 task=c0988ba0 task.ti=c0956000)
      Stack:
       c0957ae0 d1851214 c0ab62e4 c0ab5f26 0500ffff 00000004 00000005 00000004
      <0> 00000000 d18694fd 00000004 1666b892 cecba800 cecba800 c0957b14
      00000004
      <0> c0957b94 d1851b11 ceda8b00 cecba800 cf276f00 00000001 c0957b14
      000000d0
      Call Trace:
       [<d1851214>] ? sctp_side_effects+0x607/0xdfc [sctp]
       [<d1851b11>] ? sctp_do_sm+0x108/0x159 [sctp]
       [<d1863386>] ? sctp_pname+0x0/0x1d [sctp]
       [<d1861a56>] ? sctp_primitive_ASCONF+0x36/0x3b [sctp]
       [<d185657c>] ? sctp_process_asconf_ack+0x2a4/0x2d3 [sctp]
       [<d184e35c>] ? sctp_sf_do_asconf_ack+0x1dd/0x2b4 [sctp]
       [<d1851ac1>] ? sctp_do_sm+0xb8/0x159 [sctp]
       [<d1863334>] ? sctp_cname+0x0/0x52 [sctp]
       [<d1854377>] ? sctp_assoc_bh_rcv+0xac/0xe1 [sctp]
       [<d1858f0f>] ? sctp_inq_push+0x2d/0x30 [sctp]
       [<d186329d>] ? sctp_rcv+0x797/0x82e [sctp]
      Tested-by: Wei Yongjun <yjwei@cn.fujitsu.com>
      Signed-off-by: Yuansong Qiao <ysqiao@research.ait.ie>
      Signed-off-by: Shuaijun Zhang <szhang@research.ait.ie>
      Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: avoid irq lock inversion while call sk->sk_data_ready() · 561b1733
      Committed by Wei Yongjun
      sk->sk_data_ready() of an sctp socket can be called from both BH and non-BH
      contexts, but the default sk->sk_data_ready(), sock_def_readable(), cannot
      be used in this case. Therefore, we have to make a new function
      sctp_data_ready() to grab sk->sk_data_ready() with BH disabled.
      
      =========================================================
      [ INFO: possible irq lock inversion dependency detected ]
      2.6.33-rc6 #129
      ---------------------------------------------------------
      sctp_darn/1517 just changed the state of lock:
       (clock-AF_INET){++.?..}, at: [<c06aab60>] sock_def_readable+0x20/0x80
      but this lock took another, SOFTIRQ-unsafe lock in the past:
       (slock-AF_INET){+.-...}
      
      and interrupts could create inverse lock ordering between them.
      
      other info that might help us debug this:
      1 lock held by sctp_darn/1517:
       #0:  (sk_lock-AF_INET){+.+.+.}, at: [<cdfe363d>] sctp_sendmsg+0x23d/0xc00 [sctp]
      Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
      Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 28 April 2010 (3 commits)
    • coda: move backing-dev.h kernel include inside __KERNEL__ · 33f60e96
      Committed by Jens Axboe
      Otherwise we must export backing-dev.h as well, which doesn't make
      any sense.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • ring-buffer: Make non-consuming read less expensive with lots of cpus. · 72c9ddfd
      Committed by David Miller
      When performing a non-consuming read, a synchronize_sched() is
      performed once for every cpu which is actively tracing.
      
      This is very expensive, and can make it take several seconds to open
      up the 'trace' file with lots of cpus.
      
      Only one synchronize_sched() call is actually necessary.  What is
      desired is for all cpus to see the disabling state change.  So we
      transform the existing sequence:
      
      	for_each_cpu() {
      		ring_buffer_read_start();
      	}
      
      where each ring_buffer_read_start() call performs a synchronize_sched(),
      into the following:
      
      	for_each_cpu() {
      		ring_buffer_read_prepare();
      	}
      	ring_buffer_read_prepare_sync();
      	for_each_cpu() {
      		ring_buffer_read_start();
      	}
      
      wherein only the single ring_buffer_read_prepare_sync() call needs to
      do the synchronize_sched().
      
      The first phase, via ring_buffer_read_prepare(), allocates the 'iter'
      memory and increments ->record_disabled.
      
      In the second phase, ring_buffer_read_prepare_sync() makes sure this
      ->record_disabled state is visible fully to all cpus.
      
      And in the final third phase, the ring_buffer_read_start() calls reset
      the 'iter' objects allocated in the first phase since we now know that
      none of the cpus are adding trace entries any more.
      
      This makes opening the 'trace' file nearly instantaneous on a
      sparc64 Niagara2 box with 128 cpus tracing.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      LKML-Reference: <20100420.154711.11246950.davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add graph output support for irqsoff tracer · 62b915f1
      Committed by Jiri Olsa
      Add function graph output to irqsoff tracer.
      
      The graph output is enabled by setting the new 'display-graph' trace option.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1270227683-14631-4-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>