1. 20 March 2015, 1 commit
    • P
      rcu: Yet another fix for preemption and CPU hotplug · a77da14c
Committed by Paul E. McKenney
      As noted earlier, the following sequence of events can occur when
      running PREEMPT_RCU and HOTPLUG_CPU on a system with a multi-level
      rcu_node combining tree:
      
      1.	A group of tasks block on CPUs corresponding to a given leaf
      	rcu_node structure while within RCU read-side critical sections.
2.	All CPUs corresponding to that rcu_node structure go offline.
      3.	The next grace period starts, but because there are still tasks
      	blocked, the upper-level bits corresponding to this leaf rcu_node
      	structure remain set.
      4.	All the tasks exit their RCU read-side critical sections and
      	remove themselves from the leaf rcu_node structure's list,
      	leaving it empty.
      5.	But because there now is code to check for this condition at
      	force-quiescent-state time, the upper bits are cleared and the
      	grace period completes.
      
      However, there is another complication that can occur following step 4 above:
      
      4a.	The grace period starts, and the leaf rcu_node structure's
      	gp_tasks pointer is set to NULL because there are no tasks
      	blocked on this structure.
      4b.	One of the CPUs corresponding to the leaf rcu_node structure
      	comes back online.
4c.	An endless stream of tasks is preempted within RCU read-side
      	critical sections on this CPU, such that the ->blkd_tasks
      	list is always non-empty.
      
      The grace period will never end.
      
      This commit therefore makes the force-quiescent-state processing check only
      for absence of tasks blocking the current grace period rather than absence
      of tasks altogether.  This will cause a quiescent state to be reported if
      the current leaf rcu_node structure is not blocking the current grace period
      and its parent thinks that it is, regardless of how RCU managed to get
      itself into this state.
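The change in the check can be sketched in a few lines of C. This is an illustrative model only, with made-up names; the real check lives in RCU's force-quiescent-state path, and the fields below drastically simplify the kernel's rcu_node structure:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, much-simplified model of a leaf rcu_node. */
struct leaf_node {
	void *blkd_tasks; /* tasks preempted in RCU read-side sections */
	void *gp_tasks;   /* first task blocking the *current* GP, or NULL */
};

/* Before the fix: report a quiescent state only if no tasks are queued
 * at all -- an endless stream of newly preempted tasks can keep this
 * false forever, as in step 4c above. */
static bool leaf_quiescent_old(const struct leaf_node *rnp)
{
	return rnp->blkd_tasks == NULL;
}

/* After the fix: it suffices that no queued task blocks the current
 * grace period, regardless of how many newly preempted tasks exist. */
static bool leaf_quiescent_new(const struct leaf_node *rnp)
{
	return rnp->gp_tasks == NULL;
}
```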
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org> # 4.0.x
Tested-by: Sasha Levin <sasha.levin@oracle.com>
      a77da14c
2. 13 March 2015, 7 commits
    • P
      rcu: Add diagnostics to grace-period cleanup · 5c60d25f
Committed by Paul E. McKenney
      At grace-period initialization time, RCU checks that all quiescent
      states were really reported for the previous grace period.  Now that
      grace-period cleanup has been split out of grace-period initialization,
      this commit also performs those checks at grace-period cleanup time.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      5c60d25f
    • P
      rcu: Handle outgoing CPUs on exit from idle loop · 88428cc5
Committed by Paul E. McKenney
      This commit informs RCU of an outgoing CPU just before that CPU invokes
      arch_cpu_idle_dead() during its last pass through the idle loop (via a
      new CPU_DYING_IDLE notifier value).  This change means that RCU need not
      deal with outgoing CPUs passing through the scheduler after informing
      RCU that they are no longer online.  Note that removing the CPU from
      the rcu_node ->qsmaskinit bit masks is done at CPU_DYING_IDLE time,
      and orphaning callbacks is still done at CPU_DEAD time, the reason being
      that at CPU_DEAD time we have another CPU that can adopt them.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      88428cc5
    • P
      cpu: Make CPU-offline idle-loop transition point more precise · 528a25b0
Committed by Paul E. McKenney
      This commit uses a per-CPU variable to make the CPU-offline code path
      through the idle loop more precise, so that the outgoing CPU is
      guaranteed to make it into the idle loop before it is powered off.
      This commit is in preparation for putting the RCU offline-handling
      code on this code path, which will eliminate the magic one-jiffy
      wait that RCU uses as the maximum time for an outgoing CPU to get
      all the way through the scheduler.
      
      The magic one-jiffy wait for incoming CPUs remains a separate issue.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      528a25b0
    • P
      rcu: Eliminate ->onoff_mutex from rcu_node structure · c1990689
Committed by Paul E. McKenney
Because RCU grace-period initialization need no longer exclude
      CPU-hotplug operations, this commit eliminates the ->onoff_mutex and
      its uses.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      c1990689
    • P
      rcu: Process offlining and onlining only at grace-period start · 0aa04b05
Committed by Paul E. McKenney
      Races between CPU hotplug and grace periods can be difficult to resolve,
      so the ->onoff_mutex is used to exclude the two events.  Unfortunately,
      this means that it is impossible for an outgoing CPU to perform the
      last bits of its offlining from its last pass through the idle loop,
      because sleeplocks cannot be acquired in that context.
      
      This commit avoids these problems by buffering online and offline events
      in a new ->qsmaskinitnext field in the leaf rcu_node structures.  When a
      grace period starts, the events accumulated in this mask are applied to
      the ->qsmaskinit field, and, if needed, up the rcu_node tree.  The special
      case of all CPUs corresponding to a given leaf rcu_node structure being
      offline while there are still elements in that structure's ->blkd_tasks
      list is handled using a new ->wait_blkd_tasks field.  In this case,
      propagating the offline bits up the tree is deferred until the beginning
      of the grace period after all of the tasks have exited their RCU read-side
      critical sections and removed themselves from the list, at which point
      the ->wait_blkd_tasks flag is cleared.  If one of that leaf rcu_node
      structure's CPUs comes back online before the list empties, then the
      ->wait_blkd_tasks flag is simply cleared.
      
      This of course means that RCU's notion of which CPUs are offline can be
      out of date.  This is OK because RCU need only wait on CPUs that were
      online at the time that the grace period started.  In addition, RCU's
      force-quiescent-state actions will handle the case where a CPU goes
      offline after the grace period starts.
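The buffering scheme described above can be modeled in a few lines of C. This is an illustrative sketch only: the field and function names mirror the commit message but greatly simplify the kernel's code, which operates on the full rcu_node tree under the appropriate locks:

```c
#include <stdint.h>
#include <stdbool.h>

/* Much-simplified model of one leaf rcu_node. */
struct leaf {
	uint64_t qsmaskinit;       /* CPUs the grace period must wait for */
	uint64_t qsmaskinitnext;   /* hotplug events buffered since last GP */
	bool     blkd_tasks_empty; /* is the ->blkd_tasks list empty? */
	bool     wait_blkd_tasks;  /* defer clearing upper-level bits? */
};

/* Hotplug events only touch the buffer, so they need no sleeplock
 * excluding grace-period initialization. */
static void cpu_online_event(struct leaf *rnp, int cpu)
{
	rnp->qsmaskinitnext |= 1ULL << cpu;
}

static void cpu_offline_event(struct leaf *rnp, int cpu)
{
	rnp->qsmaskinitnext &= ~(1ULL << cpu);
}

/* At grace-period start the buffered events become authoritative.  If
 * every CPU went offline while blocked tasks remain queued, propagating
 * the offline bits up the tree is deferred via ->wait_blkd_tasks. */
static void gp_start_apply(struct leaf *rnp)
{
	if (rnp->qsmaskinit && !rnp->qsmaskinitnext && !rnp->blkd_tasks_empty)
		rnp->wait_blkd_tasks = true;
	rnp->qsmaskinit = rnp->qsmaskinitnext;
}
```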
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      0aa04b05
    • P
      rcu: Move rcu_report_unblock_qs_rnp() to common code · cc99a310
Committed by Paul E. McKenney
      The rcu_report_unblock_qs_rnp() function is invoked when the
      last task blocking the current grace period exits its outermost
      RCU read-side critical section.  Previously, this was called only
      from rcu_read_unlock_special(), and was therefore defined only when
      CONFIG_RCU_PREEMPT=y.  However, this function will be invoked even when
      CONFIG_RCU_PREEMPT=n once CPU-hotplug operations are processed only at
      the beginnings of RCU grace periods.  The reason for this change is that
      the last task on a given leaf rcu_node structure's ->blkd_tasks list
      might well exit its RCU read-side critical section between the time that
      recent CPU-hotplug operations were applied and when the new grace period
      was initialized.  This situation could result in RCU waiting forever on
      that leaf rcu_node structure, because if all that structure's CPUs were
      already offline, there would be no quiescent-state events to drive that
      structure's part of the grace period.
      
      This commit therefore moves rcu_report_unblock_qs_rnp() to common code
      that is built unconditionally so that the quiescent-state-forcing code
      can clean up after this situation, avoiding the grace-period stall.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      cc99a310
    • P
      rcu: Rework preemptible expedited bitmask handling · 8eb74b2b
Committed by Paul E. McKenney
      Currently, the rcu_node tree ->expmask bitmasks are initially set to
      reflect the online CPUs.  This is pointless, because only the CPUs
      preempted within RCU read-side critical sections by the preceding
      synchronize_sched_expedited() need to be tracked.  This commit therefore
      instead sets up these bitmasks based on the state of the ->blkd_tasks
      lists.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      8eb74b2b
3. 12 March 2015, 8 commits
    • P
      rcu: Remove event tracing from rcu_cpu_notify(), used by offline CPUs · 999c2863
Committed by Paul E. McKenney
      Offline CPUs cannot safely invoke trace events, but such CPUs do execute
      within rcu_cpu_notify().  Therefore, this commit removes the trace events
      from rcu_cpu_notify().  These trace events are for utilization, against
      which rcu_cpu_notify() execution time should be negligible.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      999c2863
    • P
      rcu: Provide diagnostic option to slow down grace-period initialization · 37745d28
Committed by Paul E. McKenney
      Grace-period initialization normally proceeds quite quickly, so
      that it is very difficult to reproduce races against grace-period
      initialization.  This commit therefore allows grace-period
      initialization to be artificially slowed down, increasing
      race-reproduction probability.  A pair of new Kconfig parameters are
      provided, CONFIG_RCU_TORTURE_TEST_SLOW_INIT to enable the slowdowns, and
      CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY to specify the number of jiffies
      of slowdown to apply.  A boot-time parameter named rcutree.gp_init_delay
      allows boot-time delay to be specified.  By default, no delay will be
      applied even if CONFIG_RCU_TORTURE_TEST_SLOW_INIT is set.
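For reference, the knobs introduced here could be exercised as follows; the values are illustrative, not recommendations:

```shell
# Build-time (.config fragment) -- enable the slowdown and pick a delay:
#   CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y
#   CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY=3
#
# Boot-time -- request a 3-jiffy slowdown on the kernel command line:
#   rcutree.gp_init_delay=3
```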
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      37745d28
    • P
      rcu: Detect stalls caused by failure to propagate up rcu_node tree · 237a0f21
Committed by Paul E. McKenney
      If all CPUs have passed through quiescent states, then stalls might be
      due to starvation of the grace-period kthread or to failure to propagate
      the quiescent states up the rcu_node combining tree.  The current stall
      warning messages do not differentiate, so this commit adds a printout
      of the root rcu_node structure's ->qsmask field.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      237a0f21
    • P
      18c629ea
    • P
      rcu: Simplify sync_rcu_preempt_exp_init() · c8aead6a
Committed by Paul E. McKenney
      This commit eliminates a boolean and associated "if" statement by
      rearranging the code.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      c8aead6a
    • P
      rcu: Consolidate offline-CPU callback initialization · b33078b6
Committed by Paul E. McKenney
      Currently, both rcu_cleanup_dead_cpu() and rcu_send_cbs_to_orphanage()
      initialize the outgoing CPU's callback list.  However, only
      rcu_cleanup_dead_cpu() invokes rcu_send_cbs_to_orphanage(), and
      it does so unconditionally, which means that only one of these
      initializations is required.  This commit therefore consolidates the
      callback-list initialization with the rest of the callback handling in
      rcu_send_cbs_to_orphanage().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      b33078b6
    • P
      smpboot: Add common code for notification from dying CPU · 8038dad7
Committed by Paul E. McKenney
      RCU ignores offlined CPUs, so they cannot safely run RCU read-side code.
(They -can- use SRCU, but not RCU.)  This means that any use of RCU
during or after the call to arch_cpu_idle_dead() is unsafe.  Unfortunately,
      commit 2ed53c0d added a complete() call, which will contain RCU
      read-side critical sections if there is a task waiting to be awakened.
      
      Which, as it turns out, there almost never is.  In my qemu/KVM testing,
      the to-be-awakened task is not yet asleep more than 99.5% of the time.
      In current mainline, failure is even harder to reproduce, requiring a
      virtualized environment that delays the outgoing CPU by at least three
      jiffies between the time it exits its stop_machine() task at CPU_DYING
      time and the time it calls arch_cpu_idle_dead() from the idle loop.
However, this problem really can occur, especially in virtualized
environments, and therefore really does need to be fixed.
      
      This suggests moving back to the polling loop, but using a much shorter
      wait, with gentle exponential backoff instead of the old 100-millisecond
      wait.  Most of the time, the loop will exit without waiting at all,
      and almost all of the remaining uses will wait only five microseconds.
      If the outgoing CPU is preempted, a loop will wait one jiffy, then
      increase the wait by a factor of 11/10ths, rounding up.  As before, there
      is a five-second timeout.
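The backoff arithmetic described above can be modeled as a tiny helper. This is a sketch of the schedule only, under the assumption that each step simply grows the previous delay; the real loop, with its one-jiffy wait on preemption and five-second timeout, lives in the kernel's smpboot code:

```c
/* Illustrative backoff schedule: the first wait is five microseconds,
 * and each subsequent wait grows by a factor of 11/10ths, rounding up. */
static long backoff_delay_us(int attempt)
{
	long d = 5;                  /* first wait: five microseconds */
	for (int i = 0; i < attempt; i++)
		d = (d * 11 + 9) / 10;   /* grow by 11/10ths, rounded up */
	return d;
}
```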
      
      This commit therefore provides common-code infrastructure to do the
      dying-to-surviving CPU handoff in a safe manner.  This code also
      provides an indication at CPU-online of whether the CPU to be onlined
      previously timed out on offline.  The new cpu_check_up_prepare() function
      returns -EBUSY if this CPU previously took more than five seconds to
      go offline, or -EAGAIN if it has not yet managed to go offline.  The
      rationale for -EAGAIN is that it might still be preempted, so an additional
      wait might well find it correctly offlined.  Architecture-specific code
      can decide how to handle these conditions.  Systems in which CPUs take
themselves completely offline might respond to an -EBUSY return as if
it were a zero (success) return.  Systems in which the surviving CPU must
      take some action might take it at this time, or might simply mark the
      other CPU as unusable.
      
      Note that architectures that take the easy way out and simply pass the
      -EBUSY and -EAGAIN upwards will change the sysfs API.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: <linux-api@vger.kernel.org>
      Cc: <linux-arch@vger.kernel.org>
      [ paulmck: Fixed state machine for architectures that don't check earlier
        CPU-hotplug results as suggested by James Hogan. ]
      8038dad7
4. 20 February 2015, 8 commits
    • C
      debug: prevent entering debug mode on panic/exception. · 5516fd7b
Committed by Colin Cross
      On non-developer devices, kgdb prevents the device from rebooting
      after a panic.
      
In case of panics and exceptions, prevent entering debug mode so that
the device can reboot rather than getting stuck waiting for the user to
interact with the debugger.
      
To avoid entering the debugger on panic/exception without any extra
configuration, panic_timeout is used; it can be set via
/proc/sys/kernel/panic at run time, and CONFIG_PANIC_TIMEOUT sets the
default value.
      
Setting panic_timeout indicates that the user requested the machine to
perform an unattended reboot after a panic. We don't want to get stuck
waiting for user input in that case.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: linux-kernel@vger.kernel.org
      Cc: Android Kernel Team <kernel-team@android.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Colin Cross <ccross@android.com>
      [Kiran: Added context to commit message.
      panic_timeout is used instead of break_on_panic and
      break_on_exception to honor CONFIG_PANIC_TIMEOUT
      Modified the commit as per community feedback]
Signed-off-by: Kiran Raparthy <kiran.kumar@linaro.org>
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      5516fd7b
    • D
      kdb: Const qualifier for kdb_getstr's prompt argument · 32d375f6
Committed by Daniel Thompson
      All current callers of kdb_getstr() can pass constant pointers via the
      prompt argument. This patch adds a const qualification to make explicit
      the fact that this is safe.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      32d375f6
    • D
      kdb: Provide forward search at more prompt · fb6daa75
Committed by Daniel Thompson
Currently kdb allows the output of commands to be filtered using the
      | grep feature. This is useful but does not permit the output emitted
      shortly after a string match to be examined without wading through the
      entire unfiltered output of the command. Such a feature is particularly
      useful to navigate function traces because these traces often have a
      useful trigger string *before* the point of interest.
      
      This patch reuses the existing filtering logic to introduce a simple
      forward search to kdb that can be triggered from the more prompt.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      fb6daa75
    • D
      kdb: Fix a prompt management bug when using | grep · ab08e464
Committed by Daniel Thompson
Currently, when the "| grep" feature is used to filter the output of a
command, the prompt is not displayed for the subsequent command.
      Likewise any characters typed by the user are also not echoed to the
      display. This rather disconcerting problem eventually corrects itself
      when the user presses Enter and the kdb_grepping_flag is cleared as
      kdb_parse() tries to make sense of whatever they typed.
      
      This patch resolves the problem by moving the clearing of this flag
      from the middle of command processing to the beginning.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      ab08e464
    • D
      kdb: Remove stack dump when entering kgdb due to NMI · 54543881
Committed by Daniel Thompson
      Issuing a stack dump feels ergonomically wrong when entering due to NMI.
      
      Entering due to NMI is normally a reaction to a user request, either the
      NMI button on a server or a "magic knock" on a UART. Therefore the
      backtrace behaviour on entry due to NMI should be like SysRq-g (no stack
      dump) rather than like oops.
      
Note also that the stack dump does not offer any information that
cannot be trivially retrieved using the 'bt' command.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      54543881
    • D
      kdb: Avoid printing KERN_ levels to consoles · f7d4ca8b
Committed by Daniel Thompson
Currently, when kdb traps printk messages, the raw log level prefix
(consisting of '\001' followed by a numeral) does not get stripped off
before the message is issued to the various I/O handlers supported by
kdb. This causes annoying visual noise, as well as problems when
grepping for ^. It is also a change of behaviour compared to normal
printk() usage. For example, <SysRq>-h ends up with different output
from that of kdb's "sr h".

This patch addresses the problem by stripping log levels from messages
before they are issued to the I/O handlers. printk(), which can also
act as an I/O handler in some cases, is special-cased: if the caller
provided a log level, then the prefix will be preserved when sent to
printk().
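The stripping itself amounts to skipping a 2-byte marker. The sketch below is illustrative only: the function name is made up, and the kernel has its own helpers for parsing printk level prefixes:

```c
#include <string.h>

/* Skip a printk log-level prefix ('\001' followed by one level
 * character) if present; otherwise pass the message through. */
static const char *strip_log_level(const char *msg)
{
	if (msg[0] == '\001' && msg[1] != '\0')
		return msg + 2;  /* skip the KERN_<LEVEL> marker */
	return msg;          /* no prefix: unchanged */
}
```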
      
The addition of non-printable characters to the output of kdb commands is a
regression, albeit an extremely elderly one, introduced by commit
04d2c8c8 ("printk: convert the format for KERN_<LEVEL> to a 2 byte
      pattern"). Note also that this patch does *not* restore the original
      behaviour from v3.5. Instead it makes printk() from within a kdb command
      display the message without any prefix (i.e. like printk() normally does).
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Joe Perches <joe@perches.com>
      Cc: stable@vger.kernel.org
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      f7d4ca8b
    • J
      kdb: Fix off by one error in kdb_cpu() · df0036d1
Committed by Jason Wessel
There was a follow-on replacement patch against the prior
"kgdb: Timeout if secondary CPUs ignore the roundup".
      
      See: https://lkml.org/lkml/2015/1/7/442
      
      This patch is the delta vs the patch that was committed upstream:
        * Fix an off-by-one error in kdb_cpu().
        * Replace NR_CPUS with CONFIG_NR_CPUS to tell checkpatch that we
          really want a static limit.
* Removed the "KGDB: " prefix from the pr_crit() in debug_core.c
  (kgdb-next contains a patch which introduced pr_fmt() to this file,
  so the tag will now be applied automatically).
      
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      df0036d1
    • J
      kdb: fix incorrect counts in KDB summary command output · 14675592
Committed by Jay Lan
The output of the KDB 'summary' command should report MemTotal, MemFree
and Buffers in kB. The current code reports them in units of pages.
      
A conversion macro, K(x), is defined in the code but not used.
      
This patch applies that macro to convert the values to kB.
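The conversion itself is one shift. The sketch below fixes PAGE_SHIFT at 12 (4 KiB pages) purely for illustration; in the kernel it is architecture-dependent:

```c
/* Pages to kB: a 4 KiB page is 4 kB, so shift left by PAGE_SHIFT - 10.
 * PAGE_SHIFT is assumed to be 12 here for illustration only. */
#define PAGE_SHIFT 12
#define K(x) ((x) << (PAGE_SHIFT - 10))
```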
      Please include me on Cc on replies. I do not subscribe to linux-kernel.
Signed-off-by: Jay Lan <jlan@sgi.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      14675592
5. 18 February 2015, 16 commits