1. 13 Aug 2013 (1 commit)
    • context_tracking: Remove full dynticks' hacky dependency on wide context tracking · d84d27a4
      Committed by Frederic Weisbecker
      Now that the full dynticks subsystem only enables the context tracking
      on full dynticks CPUs, let's remove the dependency on CONTEXT_TRACKING_FORCE.
      
      This dependency was a hack to enable the context tracking widely for the
      full dynticks subsystem until the latter became able to enable it in a
      more fine-grained, per-CPU fashion.
      
      Now CONTEXT_TRACKING_FORCE only stands for testing on architectures that
      are working on context tracking support while full dynticks can't be
      used yet due to unmet dependencies. It simulates a system where all CPUs
      are full dynticks so that RCU user extended quiescent states and dynticks
      cputime accounting can be tested on the given arch.
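      As an illustration, the per-CPU enabling that replaces this hammer can be
      sketched as follows (a minimal sketch only; context_tracking_cpu_set() and
      tick_nohz_full_mask are assumed from the surrounding patch series, and the
      helper name is made up):
      
      #include <linux/context_tracking.h>
      #include <linux/cpumask.h>
      
      extern cpumask_var_t tick_nohz_full_mask;       /* assumed to be exported */
      
      static void enable_context_tracking_for_nohz_full(void)
      {
              int cpu;
      
              /* Mark only the full dynticks CPUs as context-tracked. */
              for_each_cpu(cpu, tick_nohz_full_mask)
                      context_tracking_cpu_set(cpu);
      }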
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      d84d27a4
  2. 15 Jul 2013 (1 commit)
    • kernel: delete __cpuinit usage from all core kernel files · 0db0628d
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      illustrates the nasty type of bugs that can be created with
      improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the uses of the __cpuinit macros from C files in
      the core kernel directories (kernel, init, lib, mm, and include)
      that don't really have a specific maintainer.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      0db0628d
  3. 10 Jul 2013 (1 commit)
  4. 08 Jul 2013 (1 commit)
  5. 04 Jul 2013 (3 commits)
  6. 25 Jun 2013 (1 commit)
    • build some drivers only when compile-testing · 4bb16672
      Committed by Jiri Slaby
      Some drivers can be built on more platforms than they run on. This is
      a burden for users and distributors who package a kernel. They have to
      manually deselect some (for them useless) drivers when updating their
      configs via oldconfig. And yet, sometimes it is even impossible to
      disable the drivers without patching the kernel.
      
      Introduce a new config option COMPILE_TEST and make all those drivers
      depend on the platform they run on, or on the COMPILE_TEST option.
      Now, when users/distributors choose COMPILE_TEST=n they will not have
      the drivers in their allmodconfig setups, but developers can still
      compile-test them with COMPILE_TEST=y.
      
      The drivers where we now use this new option:
      * PTP_1588_CLOCK_PCH: The PCH EG20T is only compatible with Intel Atom
        processors so it should depend on x86.
      * FB_GEODE: Geode is 32-bit only so only enable it for X86_32.
      * USB_CHIPIDEA_IMX: The OF_DEVICE dependency will be met on powerpc
        systems -- which do not actually support the hardware via that
        method.
      * INTEL_MID_PTI: It is specific to the Penwell type of Intel Atom
        device.
      
      [v2]
      * remove EXPERT dependency
      
      [gregkh - remove chipidea portion, as it's incorrect, and also doesn't
       apply to my driver-core tree]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: linux-usb@vger.kernel.org
      Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
      Cc: linux-geode@lists.infradead.org
      Cc: linux-fbdev@vger.kernel.org
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: netdev@vger.kernel.org
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: "Keller, Jacob E" <jacob.e.keller@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4bb16672
  7. 13 Jun 2013 (1 commit)
  8. 11 Jun 2013 (4 commits)
    • rcu: Remove TINY_PREEMPT_RCU · 127781d1
      Committed by Paul E. McKenney
      TINY_PREEMPT_RCU adds significant code and complexity, but does not
      offer commensurate benefits.  People currently using TINY_PREEMPT_RCU
      can get much better memory footprint with TINY_RCU, or, if they really
      need preemptible RCU, they can use TREE_PREEMPT_RCU with a relatively
      minor degradation in memory footprint.  Please note that this move
      has been widely publicized on LKML (https://lkml.org/lkml/2012/11/12/545)
      and on LWN (http://lwn.net/Articles/541037/).
      
      This commit therefore removes TINY_PREEMPT_RCU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Updated to eliminate #else in rcutiny.h as suggested by Josh ]
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      127781d1
    • rcu: Apply Dave Jones's NOCB Kconfig help feedback · 676c3dc2
      Committed by Paul E. McKenney
      The Kconfig help text for the RCU_NOCB_CPU_NONE, RCU_NOCB_CPU_ZERO,
      and RCU_NOCB_CPU_ALL Kconfig options was unclear, so this commit
      adds a bit more detail.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      676c3dc2
    • rcu: Remove "Experimental" flags · 9a5739d7
      Committed by Paul E. McKenney
      After a release or two, features are no longer experimental.  Therefore,
      this commit removes the "Experimental" tag from them.
      Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      9a5739d7
    • rcu: Don't call wakeup() with rcu_node structure ->lock held · 016a8d5b
      Committed by Steven Rostedt
      This commit fixes a lockdep-detected deadlock by moving a wake_up()
      call out of an rnp->lock critical section.  Please see below for
      the long version of this story.
      
      On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:
      
      > [12572.705832] ======================================================
      > [12572.750317] [ INFO: possible circular locking dependency detected ]
      > [12572.796978] 3.10.0-rc3+ #39 Not tainted
      > [12572.833381] -------------------------------------------------------
      > [12572.862233] trinity-child17/31341 is trying to acquire lock:
      > [12572.870390]  (rcu_node_0){..-.-.}, at: [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12572.878859]
      > but task is already holding lock:
      > [12572.894894]  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12572.903381]
      > which lock already depends on the new lock.
      >
      > [12572.927541]
      > the existing dependency chain (in reverse order) is:
      > [12572.943736]
      > -> #4 (&ctx->lock){-.-...}:
      > [12572.960032]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12572.968337]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12572.976633]        [<ffffffff8113c987>] __perf_event_task_sched_out+0x2e7/0x5e0
      > [12572.984969]        [<ffffffff81088953>] perf_event_task_sched_out+0x93/0xa0
      > [12572.993326]        [<ffffffff816ea0bf>] __schedule+0x2cf/0x9c0
      > [12573.001652]        [<ffffffff816eacfe>] schedule_user+0x2e/0x70
      > [12573.009998]        [<ffffffff816ecd64>] retint_careful+0x12/0x2e
      > [12573.018321]
      > -> #3 (&rq->lock){-.-.-.}:
      > [12573.034628]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.042930]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.051248]        [<ffffffff8108e6a7>] wake_up_new_task+0xb7/0x260
      > [12573.059579]        [<ffffffff810492f5>] do_fork+0x105/0x470
      > [12573.067880]        [<ffffffff81049686>] kernel_thread+0x26/0x30
      > [12573.076202]        [<ffffffff816cee63>] rest_init+0x23/0x140
      > [12573.084508]        [<ffffffff81ed8e1f>] start_kernel+0x3f1/0x3fe
      > [12573.092852]        [<ffffffff81ed856f>] x86_64_start_reservations+0x2a/0x2c
      > [12573.101233]        [<ffffffff81ed863d>] x86_64_start_kernel+0xcc/0xcf
      > [12573.109528]
      > -> #2 (&p->pi_lock){-.-.-.}:
      > [12573.125675]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.133829]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.141964]        [<ffffffff8108e881>] try_to_wake_up+0x31/0x320
      > [12573.150065]        [<ffffffff8108ebe2>] default_wake_function+0x12/0x20
      > [12573.158151]        [<ffffffff8107bbf8>] autoremove_wake_function+0x18/0x40
      > [12573.166195]        [<ffffffff81085398>] __wake_up_common+0x58/0x90
      > [12573.174215]        [<ffffffff81086909>] __wake_up+0x39/0x50
      > [12573.182146]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.190119]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.198023]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.205860]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.213656]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      > [12573.221379]
      > -> #1 (&rsp->gp_wq){..-.-.}:
      > [12573.236329]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.243783]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.251178]        [<ffffffff810868f3>] __wake_up+0x23/0x50
      > [12573.258505]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.265891]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.273248]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.280564]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.287807]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      
      Notice the above call chain.
      
      rcu_start_future_gp() is called with the rnp->lock held. Then it calls
      rcu_start_gp_advanced(), which does a wakeup.
      
      You can't do wakeups while holding the rnp->lock, as that would mean
      that you could not do a rcu_read_unlock() while holding the rq lock, or
      any lock that was taken while holding the rq lock. This is because...
      (See below).
      
      > [12573.295067]
      > -> #0 (rcu_node_0){..-.-.}:
      > [12573.309293]        [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.316568]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.323825]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.331081]        [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.338377]        [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.345648]        [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.352942]        [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.360211]        [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.367514]        [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.374816]        [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      Notice the above trace.
      
      perf took its own ctx->lock, which can be taken while holding the rq
      lock. While holding this lock, it did a rcu_read_unlock(). The
      perf_lock_task_context() basically looks like:
      
      rcu_read_lock();
      raw_spin_lock(&ctx->lock);
      rcu_read_unlock();
      
      Now, what looks to have happened is that we scheduled after taking that
      first rcu_read_lock() but before taking the spin lock. When we scheduled
      back in and took the ctx->lock, the following rcu_read_unlock()
      triggered the "special" code.
      
      The rcu_read_unlock_special() takes the rnp->lock, which gives us a
      possible deadlock scenario.
      
      	CPU0		CPU1		CPU2
      	----		----		----
      
      				     rcu_nocb_kthread()
          lock(rq->lock);
      		    lock(ctx->lock);
      				     lock(rnp->lock);
      
      				     wake_up();
      
      				     lock(rq->lock);
      
      		    rcu_read_unlock();
      
      		    rcu_read_unlock_special();
      
      		    lock(rnp->lock);
          lock(ctx->lock);
      
      **** DEADLOCK ****
      
      > [12573.382068]
      > other info that might help us debug this:
      >
      > [12573.403229] Chain exists of:
      >   rcu_node_0 --> &rq->lock --> &ctx->lock
      >
      > [12573.424471]  Possible unsafe locking scenario:
      >
      > [12573.438499]        CPU0                    CPU1
      > [12573.445599]        ----                    ----
      > [12573.452691]   lock(&ctx->lock);
      > [12573.459799]                                lock(&rq->lock);
      > [12573.467010]                                lock(&ctx->lock);
      > [12573.474192]   lock(rcu_node_0);
      > [12573.481262]
      >  *** DEADLOCK ***
      >
      > [12573.501931] 1 lock held by trinity-child17/31341:
      > [12573.508990]  #0:  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12573.516475]
      > stack backtrace:
      > [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
      > [12573.545357]  ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
      > [12573.552868]  ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
      > [12573.560353]  0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
      > [12573.567856] Call Trace:
      > [12573.575011]  [<ffffffff816e375b>] dump_stack+0x19/0x1b
      > [12573.582284]  [<ffffffff816dfa5d>] print_circular_bug+0x200/0x20f
      > [12573.589637]  [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.596982]  [<ffffffff810918f5>] ? sched_clock_cpu+0xb5/0x100
      > [12573.604344]  [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.611652]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.619030]  [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.626331]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.633671]  [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.640992]  [<ffffffff811390ed>] ? perf_lock_task_context+0x7d/0x2d0
      > [12573.648330]  [<ffffffff810b429e>] ? put_lock_stats.isra.29+0xe/0x40
      > [12573.655662]  [<ffffffff813095a0>] ? delay_tsc+0x90/0xe0
      > [12573.662964]  [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.670276]  [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.677622]  [<ffffffff81139070>] ? __perf_event_enable+0x370/0x370
      > [12573.684981]  [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.692358]  [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.699753]  [<ffffffff8108cd9d>] ? get_parent_ip+0xd/0x50
      > [12573.707135]  [<ffffffff810b71fd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
      > [12573.714599]  [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.721996]  [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      This commit delays the wakeup via irq_work(), which is what
      perf and ftrace use to perform wakeups in critical sections.
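      
      As a rough illustration of the technique (a minimal sketch, not the hunk
      from this patch; the wait queue, handler, and helper names are made up):
      
      #include <linux/irq_work.h>
      #include <linux/wait.h>
      
      static DECLARE_WAIT_QUEUE_HEAD(gp_wq);          /* stand-in for rsp->gp_wq */
      static struct irq_work gp_wakeup_work;
      
      static void gp_wakeup_func(struct irq_work *work)
      {
              /* Runs later from irq_work context, with no rnp->lock held. */
              wake_up(&gp_wq);
      }
      
      static void gp_wakeup_init(void)
      {
              init_irq_work(&gp_wakeup_work, gp_wakeup_func);
      }
      
      /* May be called with rnp->lock held: it only queues the wakeup and
       * never takes rq->lock itself, which breaks the chain shown above. */
      static void request_gp_wakeup(void)
      {
              irq_work_queue(&gp_wakeup_work);
      }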
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      016a8d5b
  9. 04 Jun 2013 (1 commit)
  10. 28 May 2013 (1 commit)
    • perf: Use hrtimers for event multiplexing · 9e630205
      Committed by Stephane Eranian
      The current scheme of using the timer tick was fine for per-thread
      events. However, it was causing bias issues in system-wide mode
      (including for uncore PMUs). Event groups would not get their fair
      share of runtime on the PMU. With tickless kernels, if a core is idle
      there is no timer tick, and thus no event rotation (multiplexing).
      However, there are events (especially uncore events) which do count
      even though cores are asleep.
      
      This patch changes the timer source for multiplexing.  It introduces a
      per-PMU per-cpu hrtimer. The advantage is that even when a core goes
      idle, it will come back to service the hrtimer, thus multiplexing on
      system-wide events works much better.
      
      The per-PMU implementation (suggested by PeterZ) enables adjusting the
      multiplexing interval per PMU. The preferred interval is stashed into
      the struct pmu. If not set, it will be forced to the default interval
      value.
      
      In order to minimize the impact of the hrtimer, it is turned on and
      off on demand. When the PMU on a CPU is overcommitted, the hrtimer is
      activated.  It is stopped when the PMU is not overcommitted.
      
      In order for this to work properly, we had to change the order of
      initialization in start_kernel() such that hrtimers_init() is run
      before perf_event_init().
      
      The default interval in milliseconds is set to a timer tick just like
      with the old code. We will provide a sysctl to tune this in another
      patch.
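      
      The mechanism can be pictured roughly like this (a minimal sketch under
      made-up names; the real timer lives in the per-PMU per-CPU context):
      
      #include <linux/hrtimer.h>
      #include <linux/kernel.h>
      #include <linux/ktime.h>
      
      struct mux_timer {
              struct hrtimer  timer;
              ktime_t         interval;       /* per-PMU multiplexing interval */
      };
      
      static enum hrtimer_restart mux_rotate(struct hrtimer *t)
      {
              struct mux_timer *mt = container_of(t, struct mux_timer, timer);
      
              /* rotate the event groups here, then re-arm the timer */
              hrtimer_forward_now(t, mt->interval);
              return HRTIMER_RESTART;
      }
      
      static void mux_timer_start(struct mux_timer *mt, unsigned int interval_ms)
      {
              hrtimer_init(&mt->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
              mt->timer.function = mux_rotate;
              mt->interval = ns_to_ktime((u64)interval_ms * NSEC_PER_MSEC);
              hrtimer_start(&mt->timer, mt->interval, HRTIMER_MODE_REL);
      }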
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Link: http://lkml.kernel.org/r/1364991694-5876-2-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9e630205
  11. 04 May 2013 (1 commit)
    • rcu: Fix full dynticks' dependency on wide RCU nocb mode · 73c30828
      Committed by Frederic Weisbecker
      Commit 0637e029
      ("nohz: Select wide RCU nocb for full dynticks") intended
      to force CONFIG_RCU_NOCB_CPU_ALL=y when full dynticks is
      enabled.
      
      However this option is part of a choice menu and Kconfig's
      "select" instruction has no effect on such targets.
      
      Fix this by using reverse dependencies on the targets we
      don't want instead.
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      73c30828
  12. 02 May 2013 (2 commits)
  13. 01 May 2013 (2 commits)
  14. 30 Apr 2013 (3 commits)
    • init/main.c: convert to pr_foo() · ea676e84
      Committed by Andrew Morton
      Also enables cleanup of some 80-col trickery.
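      
      The conversion itself is mechanical; an illustrative (not literal) before
      and after:
      
      /* Before: printk() with an explicit log level prefix. */
      printk(KERN_NOTICE "Kernel command line: %s\n", boot_command_line);
      
      /* After: the equivalent pr_*() helper, shorter and easier to keep
       * within 80 columns. */
      pr_notice("Kernel command line: %s\n", boot_command_line);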
      
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea676e84
    • init: raise log level · c2409b00
      Committed by Richard Weinberger
      If the kernel was booted with the "quiet" boot option, we currently have
      no chance to see why an initrd fails.  Change KERN_WARNING to KERN_ERR so
      we can see what is going on.
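      
      The reason is the console loglevel: "quiet" raises it so that only KERN_ERR
      and more severe messages still reach the console. Illustrative (hypothetical
      message text):
      
      /* Under "quiet" (console_loglevel == 4) only levels below 4 are shown. */
      printk(KERN_WARNING "initrd: mount failed\n");  /* level 4: suppressed  */
      printk(KERN_ERR     "initrd: mount failed\n");  /* level 3: still shown */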
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jim Cromie <jim.cromie@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c2409b00
    • init: scream bloody murder if interrupts are enabled too early · f91eb62f
      Committed by Steven Rostedt
      As I was testing a lot of my code recently, and having several
      "successes", I accidentally noticed in the dmesg this little line:
      
        start_kernel(): bug: interrupts were enabled *very* early, fixing it
      
      Sure enough, one of my patches two commits ago enabled interrupts early.
      The sad part here is that I never noticed it, and I ran several tests with
      ktest too, and ktest did not notice this line.
      
      What ktest looks for (and so does many other automated testing scripts) is
      a back trace produced by a WARN_ON() or BUG().  As a back trace was never
      produced, my buggy patch could have slipped into linux-next, or even
      worse, mainline.
      
      Adding a WARN(!irqs_disabled()) makes this bug a little more obvious:
      
        PID hash table entries: 4096 (order: 3, 32768 bytes)
        __ex_table already sorted, skipping sort
        Checking aperture...
        No AGP bridge found
        Calgary: detecting Calgary via BIOS EBDA area
        Calgary: Unable to locate Rio Grande table in EBDA - bailing!
        Memory: 2003252k/2054848k available (4857k kernel code, 460k absent, 51136k reserved, 6210k data, 1096k init)
        ------------[ cut here ]------------
        WARNING: at /home/rostedt/work/git/linux-trace.git/init/main.c:543 start_kernel+0x21e/0x415()
        Hardware name: To Be Filled By O.E.M.
        Interrupts were enabled *very* early, fixing it
        Modules linked in:
        Pid: 0, comm: swapper/0 Not tainted 3.8.0-test+ #286
        Call Trace:
          warn_slowpath_common+0x83/0x9b
          warn_slowpath_fmt+0x46/0x48
          start_kernel+0x21e/0x415
          x86_64_start_reservations+0x10e/0x112
          x86_64_start_kernel+0x102/0x111
        ---[ end trace 007d8b0491b4f5d8 ]---
        Preemptible hierarchical RCU implementation.
         RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=4.
        NR_IRQS:4352 nr_irqs:712 16
        Console: colour VGA+ 80x25
        console [ttyS0] enabled, bootconsole disabled
      
      Do you see it?
      
      The original version of this patch just slapped a WARN_ON() in there and
      kept the printk().  Ard van Breemen suggested using the WARN() interface,
      which makes the code a bit cleaner.
      
      Also, while examining other warnings in init/main.c, I found two other
      locations that deserve a bloody murder scream if their conditions are hit,
      and updated them accordingly.
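      
      The resulting check is essentially the following (a minimal sketch of the
      pattern used in start_kernel(); the wrapper function name is made up):
      
      #include <linux/bug.h>
      #include <linux/irqflags.h>
      
      static void check_irqs_still_disabled(void)
      {
              /* WARN() returns true when the condition fired, so we can fix up
               * and keep booting after screaming (with a full backtrace). */
              if (WARN(!irqs_disabled(),
                       "Interrupts were enabled *very* early, fixing it\n"))
                      local_irq_disable();
      }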
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ard van Breemen <ard@telegraafnet.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f91eb62f
  15. 27 Apr 2013 (1 commit)
    • nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config · c58b0df1
      Committed by Frederic Weisbecker
      Turn the full dynticks passive dependency on VIRT_CPU_ACCOUNTING_GEN
      into an active one.
      
      The full dynticks Kconfig is currently hidden behind the full dynticks
      cputime accounting, which is an awkward and counter-intuitive layout:
      the user first has to select the dynticks cputime accounting in order
      to make the full dynticks feature visible.
      
      We definitely want it the other way around. The usual way to perform
      this kind of active dependency is to use "select" on the depended-on
      target. But we can't use the Kconfig "select" instruction when the
      target is part of a "choice".
      
      So this patch takes inspiration from how the RCU subsystem Kconfig
      interacts with its dependencies on SMP and PREEMPT: we make sure that
      cputime accounting can't propose any option other than
      VIRT_CPU_ACCOUNTING_GEN when NO_HZ_FULL is selected, by using the
      right "depends on" instruction for each cputime accounting choice.
      
      v2: Keep full dynticks cputime accounting available even without
      full dynticks, as per Paul McKenney's suggestion.
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      c58b0df1
  16. 19 Apr 2013 (1 commit)
    • nohz: Ensure full dynticks CPUs are RCU nocbs · d1e43fa5
      Committed by Frederic Weisbecker
      We need full dynticks CPUs to also be RCU nocb CPUs so
      that we don't have to keep the tick around to handle RCU
      callbacks.
      
      Make sure the range passed to the nohz_full= boot
      parameter is a subset of rcu_nocbs=.
      
      The CPUs that fail to meet this requirement will be
      excluded from the nohz_full range. This is checked
      early at boot time, before any CPU has the opportunity
      to stop its tick.
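      
      The boot-time check can be sketched like this (assumed cpumask names; the
      actual masks in the tree are spelled slightly differently):
      
      #include <linux/cpumask.h>
      #include <linux/printk.h>
      
      /* Hypothetical helper, called early, before any CPU can stop its tick. */
      static void nohz_full_check_rcu_nocbs(struct cpumask *nohz_full_mask,
                                            const struct cpumask *rcu_nocb_mask)
      {
              if (!cpumask_subset(nohz_full_mask, rcu_nocb_mask)) {
                      pr_warn("NO_HZ: full dynticks CPUs must also be rcu_nocbs=\n");
                      /* Drop the offending CPUs from the nohz_full range. */
                      cpumask_and(nohz_full_mask, nohz_full_mask, rcu_nocb_mask);
              }
      }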
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      d1e43fa5
  17. 08 Apr 2013 (1 commit)
  18. 03 Apr 2013 (1 commit)
    • nohz: Rename CONFIG_NO_HZ to CONFIG_NO_HZ_COMMON · 3451d024
      Committed by Frederic Weisbecker
      We are planning to convert the dynticks Kconfig options layout
      into a choice menu. The user must be able to easily pick
      any of the following implementations: constant periodic tick,
      idle dynticks, full dynticks.
      
      As this implies a mutual exclusion, the two dynticks implementations
      need to converge on the selection of a common Kconfig option in order
      to ease the sharing of a common infrastructure.
      
      It would thus seem pretty natural to reuse CONFIG_NO_HZ to
      that end. It already implements all the idle dynticks code
      and the full dynticks depends on all that code for now.
      So ideally the choice menu would propose CONFIG_NO_HZ_IDLE and
      CONFIG_NO_HZ_EXTENDED then both would select CONFIG_NO_HZ.
      
      On the other hand we want to stay backward compatible: if
      CONFIG_NO_HZ is set in an older config file, we want to
      enable CONFIG_NO_HZ_IDLE by default.
      
      But we can't afford both at the same time or we run into
      a circular dependency:
      
      1) CONFIG_NO_HZ_IDLE and CONFIG_NO_HZ_EXTENDED both select
         CONFIG_NO_HZ
      2) If CONFIG_NO_HZ is set, we default to CONFIG_NO_HZ_IDLE
      
      We might be able to support that from Kconfig/Kbuild but it
      may not be wise to introduce such a confusing behaviour.
      
      So to solve this, create a new CONFIG_NO_HZ_COMMON option
      which gathers the common code between idle and full dynticks
      (that common code for now is simply the idle dynticks code)
      and have both select it from their respective Kconfig entries.
      
      Then we'll later create CONFIG_NO_HZ_IDLE and map CONFIG_NO_HZ
      to it for backward compatibility.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      3451d024
  19. 26 Mar 2013 (3 commits)
  20. 13 Mar 2013 (2 commits)
    • final removal of CONFIG_EXPERIMENTAL · 3d374d09
      Committed by Kees Cook
      Remove "config EXPERIMENTAL" itself, now that every "depends on" it has
      been removed from the tree.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      3d374d09
    • rcu: Remove restrictions on no-CBs CPUs · 34ed6246
      Committed by Paul E. McKenney
      Currently, CPU 0 is constrained to not be a no-CBs CPU, and furthermore
      at least one no-CBs CPU must remain online at any given time.  These
      restrictions are problematic in some situations, such as cases where
      all CPUs must run a real-time workload that needs to be insulated from
      OS jitter and latencies due to RCU callback invocation.  This commit
      therefore provides no-CBs CPUs a (very crude and energy-inefficient)
      way to start and to wait for grace periods independently of the normal
      RCU callback mechanisms.  This approach allows any or all of the CPUs to
      be designated as no-CBs CPUs, and allows any proper subset of the CPUs
      (whether no-CBs CPUs or not) to be offlined.
      
      This commit also provides a fix for a locking bug spotted by Xie
      ChanglongX <changlongx.xie@intel.com>.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      34ed6246
  21. 08 Mar 2013 (1 commit)
    • context_tracking: Enable probes by default for selftesting · 8b438766
      Committed by Frederic Weisbecker
      Until we provide the nohz_mask boot parameter, keeping
      the context tracking probes disabled by default is pointless
      since what we want is to runtime-test this code anyway.
      
      It's furthermore confusing for users, who don't expect
      the probes to be off when they select RCU user mode or full
      dynticks cputime accounting.
      
      Let's enable these probe selftests by default for now.
      
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Mats Liljegren <mats.liljegren@enea.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      8b438766
  22. 07 Mar 2013 (1 commit)
  23. 16 Feb 2013 (1 commit)
  24. 13 Feb 2013 (5 commits)