1. 07 Dec 2009, 1 commit
    • sched: Fix balance vs hotplug race · 6ad4c188
      Committed by Peter Zijlstra
      Since (e761b772: cpu hotplug, sched: Introduce cpu_active_map and redo
      sched domain managment) we have cpu_active_mask, which is supposed to
      rule scheduler migration and load-balancing, except it never (fully) did.
      
      The particular problem being solved here is a crash in try_to_wake_up()
      where select_task_rq() ends up selecting an offline cpu because
      select_task_rq_fair() trusts the sched_domain tree to reflect the
      current state of affairs, similarly select_task_rq_rt() trusts the
      root_domain.
      
      However, the sched_domains are updated from CPU_DEAD, which is after the
      cpu is taken offline and after stop_machine is done. Therefore it can
      race perfectly well with code assuming the domains are right.
      
      Cure this by building the domains from cpu_active_mask on
      CPU_DOWN_PREPARE.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
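      A minimal sketch of the resulting scheme, using the hotplug
      notifier API of this era (the callback name and exact call
      sequence here are illustrative, not the literal patch):

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        static int sched_cpu_inactive(struct notifier_block *nfb,
                                      unsigned long action, void *hcpu)
        {
                switch (action & ~CPU_TASKS_FROZEN) {
                case CPU_DOWN_PREPARE:
                        /* Drop the dying cpu from cpu_active_mask and
                         * rebuild the domains *before* stop_machine runs,
                         * so the sched_domain tree never points at an
                         * offline cpu. */
                        set_cpu_active((long)hcpu, false);
                        partition_sched_domains(1, NULL, NULL);
                        return NOTIFY_OK;
                case CPU_DOWN_FAILED:
                        /* The cpu stays up after all: reactivate it. */
                        set_cpu_active((long)hcpu, true);
                        partition_sched_domains(1, NULL, NULL);
                        return NOTIFY_OK;
                default:
                        return NOTIFY_DONE;
                }
        }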
  2. 06 Dec 2009, 1 commit
  3. 03 Dec 2009, 6 commits
    • GFS2: Tag all metadata with jid · 0ab7d13f
      Committed by Steven Whitehouse
      There are two spare fields in the header common to all GFS2
      metadata. One is just the right size to fit a journal id
      in it, and this patch updates the journal code so that each
      time a metadata block is modified, we tag it with the journal
      id of the node which is performing the modification.
      
      The reason for this is that it should make it much easier to
      debug issues which arise if we can tell which node was the
      last to modify a particular metadata block.
      
      Since the field is updated before the block is written into
      the journal, each journal should only contain metadata which
      is tagged with its own journal id. The one exception to this
      is the journal header block, which might have a different node's
      id in it, if that journal was recovered by another node in the
      cluster.
      
      Thus each journal will contain a record of which nodes recovered
      it, via the journal header.
      
      The other field in the metadata header could potentially be
      used to hold information about what kind of operation was
      performed, but for the time being we just zero it on each
      transaction so that if we use it for that in future, we'll
      know that the information (where it exists) is reliable.
      
      I did consider using the other field to hold the journal
      sequence number. However, since in GFS2's journaling we write
      the modified data into the journal and not the original
      data, this gives no information as to what action caused the
      modification, so I think we can probably come up with a better
      use for those 64 bits in the future.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
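      For reference, a sketch of the header fields this puts to use,
      following the gfs2_ondisk.h layout of this era (the union keeps
      the on-disk format and the userspace view unchanged):

        struct gfs2_meta_header {
                __be32 mh_magic;
                __be32 mh_type;
                __be64 __pad0;          /* 64-bit spare: zeroed on each
                                         * transaction, reserved for a
                                         * better use later */
                __be32 mh_format;
                union {
                        __be32 mh_jid;  /* journal id of the node that
                                         * last modified this block */
                        __be32 __pad1;  /* old name, kept for userspace */
                };
        };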
    • VFS: Export dquot_send_warning · 86e931a3
      Committed by Steven Whitehouse
      Sending a message to userspace in a generic format to warn
      of events (e.g. quota exceeded) in the quota subsystem is
      a generically useful feature. This patch makes some minor
      changes to the send_message function from dquot.c, renaming
      it quota_send_message, moving it to quota.c, and exporting it
      for use by filesystems which do not use the dquot code.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • VFS: Add forget_all_cached_acls() · 796bd952
      Committed by Steven Whitehouse
      This is required for cluster filesystems which want to use
      cached ACLs so that they can invalidate the cache when
      required.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Alexander Viro <aviro@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
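      A sketch of what such an invalidation helper looks like, assuming
      the generic ACL cache fields of this era (i_acl, i_default_acl and
      the ACL_NOT_CACHED sentinel from posix_acl.h); the real function
      may differ in detail:

        #include <linux/fs.h>
        #include <linux/posix_acl.h>
        #include <linux/spinlock.h>

        void forget_all_cached_acls(struct inode *inode)
        {
                struct posix_acl *old_access, *old_default;

                spin_lock(&inode->i_lock);
                old_access = inode->i_acl;
                old_default = inode->i_default_acl;
                inode->i_acl = inode->i_default_acl = ACL_NOT_CACHED;
                spin_unlock(&inode->i_lock);

                /* drop the references the cache held, if any */
                if (old_access != ACL_NOT_CACHED)
                        posix_acl_release(old_access);
                if (old_default != ACL_NOT_CACHED)
                        posix_acl_release(old_default);
        }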
    • include/linux/compiler-gcc4.h: Fix build bug - gcc-4.0.2 doesn't understand __builtin_object_size · 7cff7ce9
      Committed by Andrew Morton
      Maybe 4.1.0 doesn't either, but this fixed it for me.
      
      Caused by:
      
       4a312769: x86: Turn the copy_from_user check into an (optional) compile time warning
       9f0cf4ad: x86: Use __builtin_object_size() to validate the buffer size for copy_from_user()
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      LKML-Reference: <200910090724.n997OQl6013538@imap1.linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
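      A sketch of the guard, assuming the fix keys the
      __builtin_object_size() user in include/linux/compiler-gcc4.h off
      the gcc minor version:

        /* GCC 4.1.0 and later understand __builtin_object_size() */
        #if __GNUC_MINOR__ >= 1
        #define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
        #endif
        /* older gcc-4.x falls back to the -1 default in compiler.h */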
    • sched, cputime: Introduce thread_group_times() · 0cf55e1e
      Committed by Hidetoshi Seto
      This is a real fix for the problem of utime/stime values decreasing
      described in the thread:
      
         http://lkml.org/lkml/2009/11/3/522
      
      Now cputime is accounted in the following way:
      
       - {u,s}time in task_struct are increased every time the thread
         is interrupted by a tick (timer interrupt).

       - When a thread exits, its {u,s}time are added to signal->{u,s}time,
         after being adjusted by task_times().

       - When all threads in a thread_group exit, the accumulated {u,s}time
         (and also c{u,s}time) in the signal struct are added to c{u,s}time
         in the signal struct of the group's parent.
      
      So {u,s}time in the task struct are "raw" tick counts, while
      {u,s}time and c{u,s}time in the signal struct are "adjusted" values.
      
      And accounted values are used by:
      
       - task_times(), to get cputime of a thread:
         This function returns adjusted values that originate from the raw
         {u,s}time, scaled by the sum_exec_runtime accounted by CFS.

       - thread_group_cputime(), to get cputime of a thread group:
         This function returns the sum of all {u,s}time of living threads in
         the group, plus the {u,s}time in the signal struct, which is the
         sum of the adjusted cputimes of all exited threads that belonged
         to the group.
      
      The problem is the return value of thread_group_cputime(),
      because it is a mixed sum of "raw" and "adjusted" values:
      
        group's {u,s}time = foreach(thread){{u,s}time} + exited({u,s}time)
      
      This misbehavior can break {u,s}time monotonicity: if there is a
      thread whose raw values are greater than its adjusted values (e.g.
      interrupted by 1000Hz ticks 50 times but running for only 45ms)
      and it exits, the group's cputime will decrease (e.g. by 5ms).
      
      To fix this, we could do:
      
        group's {u,s}time = foreach(t){task_times(t)} + exited({u,s}time)
      
      But task_times() contains heavy divisions, so applying it to every
      thread should be avoided.
      
      This patch fixes the above problem in the following way:
      
       - Modify thread exit (= __exit_signal()) not to use task_times().
         This means {u,s}time in the signal struct accumulates raw values
         instead of adjusted values.  As a result, thread_group_cputime()
         returns a pure sum of "raw" values.
      
       - Introduce a new function thread_group_times(*task, *utime, *stime)
         that converts the "raw" values of thread_group_cputime() to
         "adjusted" values, using the same calculation procedure as
         task_times().
      
       - Modify group exit (= wait_task_zombie()) to use the newly
         introduced thread_group_times().  This makes c{u,s}time in the
         signal struct hold adjusted values, as before this patch.
      
       - Replace some thread_group_cputime() calls with thread_group_times().
         These replacements are applied only where the "adjusted" cputime is
         conveyed to users, and where task_times() is already used nearby
         (i.e. sys_times(), getrusage(), and /proc/<PID>/stat).
      
      This patch has a positive side effect:
      
       - Before this patch, if a group contained many short-lived threads
         (e.g. each running 0.9ms and never interrupted by a tick), the
         group's cputime could be invisible, since each thread's cputime
         was accumulated after being adjusted: imagine the adjustment
         function as adj(ticks, runtime), then
           {adj(0, 0.9) + adj(0, 0.9) + ....} = {0 + 0 + ....} = 0.
         After this patch this cannot happen, because the adjustment is
         applied after accumulation.
      
      v2:
       - remove if()s, put new variables into signal_struct.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Spencer Candland <spencer@bluehost.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      LKML-Reference: <4B162517.8040909@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
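      A sketch of the introduced helper: it applies the same
      runtime-based scaling as task_times(), but once per group, on the
      now-raw sums (names follow the 2.6.32 scheduler; nsecs_to_cputime
      and the signal_struct fields are as used there):

        void thread_group_times(struct task_struct *p,
                                cputime_t *ut, cputime_t *st)
        {
                struct signal_struct *sig = p->signal;
                struct task_cputime cputime;
                cputime_t rtime, utime, total;

                thread_group_cputime(p, &cputime); /* raw sums now */

                total = cputime_add(cputime.utime, cputime.stime);
                rtime = nsecs_to_cputime(cputime.sum_exec_runtime);

                if (total) {
                        u64 temp = rtime;

                        temp *= cputime.utime;
                        do_div(temp, total); /* scale by runtime share */
                        utime = (cputime_t)temp;
                } else
                        utime = rtime;

                /* keep the reported values monotonic */
                sig->prev_utime = max(sig->prev_utime, utime);
                sig->prev_stime = max(sig->prev_stime,
                                      cputime_sub(rtime, sig->prev_utime));

                *ut = sig->prev_utime;
                *st = sig->prev_stime;
        }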
    • sched, cputime: Cleanups related to task_times() · d99ca3b9
      Committed by Hidetoshi Seto
      - Remove the if({u,s}t)s because no one calls it with NULL now.
      - Use cputime_{add,sub}().
      - Add ifndef-endif for prev_{u,s}time since they are used
        only when !VIRT_CPU_ACCOUNTING.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Spencer Candland <spencer@bluehost.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      LKML-Reference: <4B1624C7.7040302@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 02 Dec 2009, 10 commits
  5. 01 Dec 2009, 1 commit
  6. 27 Nov 2009, 2 commits
  7. 26 Nov 2009, 4 commits
    • sched, time: Define nsecs_to_jiffies() · b7b20df9
      Committed by Hidetoshi Seto
      Use of msecs_to_jiffies() for nsecs_to_cputime() has some
      problems:
      
       - The type of msecs_to_jiffies()'s argument is unsigned int, so
         it cannot convert msecs greater than UINT_MAX = about 49.7 days.
      
       - msecs_to_jiffies() returns MAX_JIFFY_OFFSET if the MSB of the
         argument is set, assuming the input was a negative value.  So it
         cannot convert msecs greater than INT_MAX = about 24.8 days either.
      
      This patch defines a new function nsecs_to_jiffies() that can
      handle greater values, and that treats all incoming values as
      unsigned.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Spencer Candland <spencer@bluehost.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <johnstul@linux.vnet.ibm.com>
      LKML-Reference: <4B0E16E7.5070307@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
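      A minimal sketch of the common case, assuming NSEC_PER_SEC is an
      exact multiple of HZ (the real function also covers the other HZ
      configurations); full-width unsigned math avoids both limitations
      above:

        #include <linux/math64.h>

        unsigned long nsecs_to_jiffies(u64 n)
        {
                /* u64 throughout: no UINT_MAX cap, no negative-MSB trap */
                return div_u64(n, NSEC_PER_SEC / HZ);
        }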
    • sched: Remove task_{u,s,g}time() · d5b7c78e
      Committed by Hidetoshi Seto
      Now all task_{u,s}time() pairs are replaced by task_times().
      And task_gtime() is too simple to be an inline function.
      
      Clean them all up.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Spencer Candland <spencer@bluehost.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      LKML-Reference: <4B0E16D1.70902@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Introduce task_times() to replace task_{u,s}time() pair · d180c5bc
      Committed by Hidetoshi Seto
      The functions task_{u,s}time() are called in pairs in almost all
      cases.  However, task_stime() is implemented to call task_utime()
      internally, so such paired calls run task_utime() twice.

      That means we do the heavy divisions (div_u64 + do_div) twice to
      get utime and stime, which could be obtained at the same time by
      one set of divisions.
      
      This patch introduces a function task_times(*tsk, *utime,
      *stime) to retrieve utime and stime at once, in a better,
      optimized way.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Spencer Candland <spencer@bluehost.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      LKML-Reference: <4B0E16AE.906@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
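      The shape of the combined helper, sketched: one pass computes the
      runtime-scaled utime and derives stime from it, so the heavy
      division pair runs once (same procedure as the thread_group_times()
      sketch earlier in this log):

        void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
        {
                cputime_t rtime, utime = p->utime;
                cputime_t total = cputime_add(utime, p->stime);

                rtime = nsecs_to_cputime(p->se.sum_exec_runtime);

                if (total) {
                        u64 temp = rtime;

                        temp *= utime;
                        do_div(temp, total); /* the one heavy division */
                        utime = (cputime_t)temp;
                } else
                        utime = rtime;

                p->prev_utime = max(p->prev_utime, utime);
                p->prev_stime = max(p->prev_stime,
                                    cputime_sub(rtime, p->prev_utime));

                *ut = p->prev_utime;
                *st = p->prev_stime;
        }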
    • events: Rename TRACE_EVENT_TEMPLATE() to DECLARE_EVENT_CLASS() · 091ad365
      Committed by Ingo Molnar
      It is not quite obvious at first sight what TRACE_EVENT_TEMPLATE
      does: does it define an event as well beyond defining a template?
      
      To clarify this, rename it to DECLARE_EVENT_CLASS, which follows
      the various 'DECLARE_*()' idioms we already have in the kernel:
      
        DECLARE_EVENT_CLASS(class)
      
          DEFINE_EVENT(class, event1)
          DEFINE_EVENT(class, event2)
          DEFINE_EVENT(class, event3)
      
      To complete this logic we should also rename TRACE_EVENT() to:
      
        DEFINE_SINGLE_EVENT(single_event)
      
      ... but in a more quiet moment of the kernel cycle.
      
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <4B0E286A.2000405@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 25 Nov 2009, 2 commits
    • tracing: Create new DEFINE_EVENT_PRINT · e5bc9721
      Committed by Steven Rostedt
      After creating the TRACE_EVENT_TEMPLATE I started to look at other
      trace points to see what duplication there was. I noticed that
      several trace points are almost identical except for the name and
      the output format. Since TRACE_EVENT_TEMPLATE was successful in
      bringing down the size of trace events, I added DEFINE_EVENT_PRINT.
      
      DEFINE_EVENT_PRINT is used just like DEFINE_EVENT is. That is, the
      DEFINE_EVENT_PRINT also uses a TRACE_EVENT_TEMPLATE, but it allows the
      developer to overwrite the print format. If there are two or more
      TRACE_EVENTS that are identical except for the name and print, then
      they can be converted to use a TRACE_EVENT_TEMPLATE. Since the
      TRACE_EVENT_TEMPLATE already does the print output, the first trace event
      would have its print format held in the TRACE_EVENT_TEMPLATE and
      be defined with a DEFINE_EVENT. The rest will use the DEFINE_EVENT_PRINT
      and override the print format.
      
      I converted the sched trace points to both DEFINE_EVENT and
      DEFINE_EVENT_PRINT: five were converted to DEFINE_EVENT and two
      were converted to DEFINE_EVENT_PRINT.
      
      I was able to get the following:
      
      $ size kernel/sched.o-*
         text	   data	    bss	    dec	    hex	filename
        79299	   6776	   2520	  88595	  15a13	kernel/sched.o-notrace
       101941	  11896	   2584	 116421	  1c6c5	kernel/sched.o-templ
       104779	  11896	   2584	 119259	  1d1db	kernel/sched.o-trace
      
      sched.o-notrace is the scheduler compiled with no trace points.
      sched.o-templ uses the new DEFINE_EVENT and DEFINE_EVENT_PRINT.
      sched.o-trace uses the current TRACE_EVENT macros.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
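      A sketch of the pattern, assuming a wakeup-style template named
      sched_wakeup_template (defined in the TRACE_EVENT_TEMPLATE entry
      below; the event names here are illustrative): the second event
      reuses the template's fields but overrides TP_printk:

        /* first event: the template supplies the print format */
        DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
                TP_PROTO(struct rq *rq, struct task_struct *p, int success),
                TP_ARGS(rq, p, success));

        /* second event: same fields, different output */
        DEFINE_EVENT_PRINT(sched_wakeup_template, sched_wakeup_new,
                TP_PROTO(struct rq *rq, struct task_struct *p, int success),
                TP_ARGS(rq, p, success),
                TP_printk("new task comm=%s pid=%d",
                          __entry->comm, __entry->pid));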
    • tracing: Create new TRACE_EVENT_TEMPLATE · ff038f5c
      Committed by Steven Rostedt
      There are some places in the kernel that define several tracepoints and
      they are all identical besides the name. The code to enable, disable and
      record is created for every trace point even if most of the code is
      identical.
      
      This patch adds TRACE_EVENT_TEMPLATE, which lets the developer create
      a template TRACE_EVENT and then create trace points with DEFINE_EVENT,
      based on a given template. Each trace point defined this way shares
      most of the code, bringing down the size of the kernel when there are
      several duplicate events.
      
      Usage is:
      
      TRACE_EVENT_TEMPLATE(name, proto, args, tstruct, assign, print);
      
      This is the same as defining a normal TRACE_EVENT.
      
      To create the trace events that the trace points will use, one writes:

      DEFINE_EVENT(template, name, proto, args);

      The template is the name of the TRACE_EVENT_TEMPLATE to use. The name
      is the name of the trace point. The parameters proto and args must be
      the same as the proto and args of the template; if they are not, a
      compile error will result. I tried hard to remove this duplication,
      but the C preprocessor is not powerful enough (or my CPP magic
      experience points are not at a high enough level) to avoid needing
      them.
      
      A lot of trace events are coming in with new XFS development. Most of
      the trace points are identical except for the name. The following shows
      the advantage of having TRACE_EVENT_TEMPLATE:
      
      $ size fs/xfs/xfs.o.*
          text          data     bss     dec     hex filename
        452114          2788    3520  458422   6feb6 fs/xfs/xfs.o.old
        638482         38116    3744  680342   a6196 fs/xfs/xfs.o.template
        996954         38116    4480 1039550   fdcbe fs/xfs/xfs.o.trace
      
      xfs.o.old is without any tracepoints.
      xfs.o.template uses the new TRACE_EVENT_TEMPLATE.
      xfs.o.trace uses the current TRACE_EVENT macros.
      Requested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
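      A compact sketch of a template and one event derived from it
      (field set abbreviated; proto/args modeled on the sched wakeup
      events of this era; the macro was later renamed
      DECLARE_EVENT_CLASS, as an entry above notes):

        TRACE_EVENT_TEMPLATE(sched_wakeup_template,
                TP_PROTO(struct rq *rq, struct task_struct *p, int success),
                TP_ARGS(rq, p, success),
                TP_STRUCT__entry(
                        __array(char,  comm, TASK_COMM_LEN)
                        __field(pid_t, pid)
                ),
                TP_fast_assign(
                        memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
                        __entry->pid = p->pid;
                ),
                TP_printk("comm=%s pid=%d", __entry->comm, __entry->pid));

        /* proto and args must match the template exactly */
        DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
                TP_PROTO(struct rq *rq, struct task_struct *p, int success),
                TP_ARGS(rq, p, success));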
  9. 24 Nov 2009, 5 commits
    • locking: Use __[SPIN|RW]_LOCK_UNLOCKED in [spin|rw]_lock_init() · a49ed0bf
      Committed by Thomas Gleixner
      SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated. Replace them
      with the __*_LOCK_UNLOCKED variants.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
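      A sketch of the non-debug variants after the change, so that static
      and dynamic initialization share one initializer:

        # define spin_lock_init(lock)                                   \
                do { *(lock) = __SPIN_LOCK_UNLOCKED(lock); } while (0)

        # define rwlock_init(lock)                                      \
                do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0)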
    • locking: Remove unused prototype · c9286b7e
      Committed by Thomas Gleixner
      commit 910067d1 (remove generic__raw_read_trylock()) removed the
      implementation but left the prototype around. Remove it.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • remove CONFIG_SECURITY_FILE_CAPABILITIES compile option · b3a222e5
      Committed by Serge E. Hallyn
      As far as I know, all distros currently ship kernels with
      CONFIG_SECURITY_FILE_CAPABILITIES=y by default.  Since having the
      option on still leaves a 'no_file_caps' boot option to run without
      file capabilities, the main reason to keep the option is that
      turning it off saves you (on my s390x partition) 5k.  In
      particular, vmlinux sizes came to:
      
      without patch fscaps=n:		 	53598392
      without patch fscaps=y:		 	53603406
      with this patch applied:		53603342
      
      with the security-next tree.
      
      Against this we must weigh the fact that there is no simple way for
      userspace to figure out whether file capabilities are supported,
      while things like per-process securebits, capability bounding
      sets, and adding bits to pI if CAP_SETPCAP is in pE are not supported
      with SECURITY_FILE_CAPABILITIES=n, leaving a bit of a problem for
      applications wanting to know whether they can use them and/or why
      something failed.
      
      It also adds another subtly different set of semantics which we must
      maintain at the risk of severe security regressions.
      
      So this patch removes the SECURITY_FILE_CAPABILITIES compile
      option.  It drops the kernel size by about 50k over the stock
      SECURITY_FILE_CAPABILITIES=y kernel, by removing the
      cap_limit_ptraced_target() function.
      
      Changelog:
      	Nov 20: remove cap_limit_ptraced_target() as its logic
      		was ifndef'ed.
      Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
      Acked-by: Andrew G. Morgan <morgan@kernel.org>
      Signed-off-by: James Morris <jmorris@namei.org>
    • hw-breakpoints: Fix misordered ifdef · fa7c27ee
      Committed by Frederic Weisbecker
      Fix a misplaced ifdef. We need the perf event headers also in the
      off-case to avoid the following build error:
      
       include/linux/hw_breakpoint.h:94: error: expected declaration specifiers or '...' before 'perf_callback_t'
       include/linux/hw_breakpoint.h:102: error: expected declaration specifiers or '...' before 'perf_callback_t'
       include/linux/hw_breakpoint.h:109: error: expected declaration specifiers or '...' before 'perf_callback_t'
       include/linux/hw_breakpoint.h:116: error: expected declaration specifiers or '...' before 'perf_callback_t'
      Reported-by: Kisskb-bot by Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      LKML-Reference: <1259011812-8093-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
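      The shape of the fix, sketched: the perf_event include moves above
      the config guard, so perf_callback_t is visible whether or not
      CONFIG_HAVE_HW_BREAKPOINT is set:

        #ifdef __KERNEL__

        #include <linux/perf_event.h>   /* needed in both cases: the
                                         * prototypes and the !CONFIG
                                         * stubs below take a
                                         * perf_callback_t either way */

        #ifdef CONFIG_HAVE_HW_BREAKPOINT
        /* ... real prototypes taking perf_callback_t ... */
        #else
        /* ... inline stubs, still referencing perf_callback_t ... */
        #endif

        #endif /* __KERNEL__ */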
    • hw-breakpoints: Include only linux/perf_event.h from kernel part of bp headers · e6db4876
      Committed by Frederic Weisbecker
      Userspace only needs the breakpoints enum types from the
      breakpoints headers, so the kernel part needs only linux/perf_event.h.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      LKML-Reference: <1258987355-8751-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 23 Nov 2009, 3 commits
    • perf_events: Undo some recursion damage · 4ed7c92d
      Committed by Peter Zijlstra
      Make perf_swevent_get_recursion_context return a context number
      and disable preemption.

      This could be used to remove the IRQ disable from the trace bit
      and to index the per-cpu buffer with it.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20091123103819.993226816@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
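      A usage sketch of the reworked API from a software-event path (the
      event-processing step in the middle stands in for whatever the
      caller protects):

        int rctx;

        rctx = perf_swevent_get_recursion_context();
        if (rctx < 0)
                return;         /* already in this context: drop the event */

        /* ... process the event, e.g. write into the per-cpu buffer,
         * which could now be indexed by rctx ... */

        perf_swevent_put_recursion_context(rctx);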
    • rcu: Re-arrange code to reduce #ifdef pain · 6ebb237b
      Committed by Paul E. McKenney
      Remove #ifdefs from kernel/rcupdate.c and
      include/linux/rcupdate.h by moving code to
      include/linux/rcutiny.h, include/linux/rcutree.h, and
      kernel/rcutree.c.
      
      Also remove some definitions that are no longer used.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1258908830885-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • rcu: Eliminate unneeded function wrapping · 9f680ab4
      Committed by Paul E. McKenney
      The function rcu_init() is a wrapper for __rcu_init(), and it also
      sets up the CPU-hotplug notifier for rcu_barrier_cpu_hotplug().
      But TINY_RCU doesn't need CPU-hotplug notification, and
      rcu_barrier_cpu_hotplug() is a simple wrapper for
      rcu_cpu_notify().
      
      So push rcu_init() out to kernel/rcutree.c and kernel/rcutiny.c
      and get rid of the wrapper function rcu_barrier_cpu_hotplug().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12589088302320-git-send-email->
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 22 Nov 2009, 2 commits
    • hw-breakpoints: Separate the kernel part from breakpoint headers · 5093ebad
      Committed by Frederic Weisbecker
      This lets us include this header from userspace tools, like the
      perf tools, to get the breakpoint type and len definitions.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      LKML-Reference: <1258863695-10464-4-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
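      A sketch of the resulting layout: the enums userspace (e.g. the
      perf tools) needs stay above the __KERNEL__ guard, and everything
      else moves below it:

        /* visible to userspace */
        enum {
                HW_BREAKPOINT_LEN_1 = 1,
                HW_BREAKPOINT_LEN_2 = 2,
                HW_BREAKPOINT_LEN_4 = 4,
                HW_BREAKPOINT_LEN_8 = 8,
        };

        enum {
                HW_BREAKPOINT_R = 1,
                HW_BREAKPOINT_W = 2,
                HW_BREAKPOINT_X = 4,
        };

        #ifdef __KERNEL__
        /* kernel-only structs, prototypes and inline helpers live here */
        #endif /* __KERNEL__ */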
    • tracing: Use the perf recursion protection from trace event · ce71b9df
      Committed by Frederic Weisbecker
      When we commit a trace to perf, we first check whether we are
      recursing in the same buffer, so that we don't mess up the buffer
      with a recursing trace. But later on, we do the same check from
      perf to avoid commit recursion. The recursion check is desired
      early, before we touch the buffer, but we want to do this check
      only once.
      
      Then export the recursion protection from perf and use it from
      the trace events before submitting a trace.
      
      v2: Put appropriate Reported-by tag
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <1258864015-10579-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 21 Nov 2009, 2 commits
  13. 20 Nov 2009, 1 commit