1. 19 Aug 2013 (2 commits)
    • nohz_full: Add per-CPU idle-state tracking · eb348b89
      Committed by Paul E. McKenney
      This commit adds the code that updates the rcu_dyntick structure's
      new fields to track the per-CPU idle state based on interrupts and
      transitions into and out of the idle loop (NMIs are ignored because NMI
      handlers cannot cleanly read out the time anyway).  This code is similar
      to the code that maintains RCU's idea of per-CPU idleness, but
      differs in that RCU treats CPUs running in user mode as idle,
      whereas this new code does not.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      eb348b89
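The transition tracking described above can be modeled as a small userspace sketch: a nesting counter driven by idle-loop and interrupt transitions, plus an even/odd sequence counter summarizing the state (even means idle, odd means non-idle). All names here, such as struct idle_track, are illustrative stand-ins, not the kernel's rcu_dyntick fields.

```c
#include <assert.h>

/* Hypothetical per-CPU idle-state tracker.  A nesting counter follows
 * interrupt/idle-loop transitions; the sequence counter is even while
 * the CPU is idle and odd while it is busy. */
struct idle_track {
	long nesting;		/* irq/idle-loop nesting depth */
	unsigned long seq;	/* even = idle, odd = non-idle */
};

/* Called when the CPU enters the idle loop, or when an interrupt
 * taken from idle returns. */
static void track_idle_enter(struct idle_track *t)
{
	if (--t->nesting == 0)
		t->seq++;	/* odd -> even: CPU is now idle */
}

/* Called when the CPU leaves idle, e.g. on interrupt entry. */
static void track_idle_exit(struct idle_track *t)
{
	if (t->nesting++ == 0)
		t->seq++;	/* even -> odd: CPU is now busy */
}

static int track_is_idle(const struct idle_track *t)
{
	return (t->seq & 1) == 0;
}
```

A CPU that boots busy, enters idle, takes an interrupt, and returns to idle walks the sequence counter 1 → 2 → 3 → 4, ending in the idle (even) state.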
    • nohz_full: Add rcu_dyntick data for scalable detection of all-idle state · 2333210b
      Committed by Paul E. McKenney
      This commit adds fields to the rcu_dyntick structure that are used to
      detect idle CPUs.  These new fields differ from the existing ones
      in that the existing ones consider a CPU executing in user mode to
      be idle, whereas the new ones consider CPUs executing in user mode
      to be busy.  The handling of these new fields is otherwise quite
      similar to that for the existing fields.  This commit also adds
      the initialization required for these fields.
      
      So, why is usermode execution treated differently, with RCU considering
      it a quiescent state equivalent to idle, while in contrast the new
      full-system idle state detection considers usermode execution to be
      non-idle?
      
      It turns out that although one of RCU's quiescent states is usermode
      execution, it is not a full-system idle state.  This is because the
      purpose of the full-system idle state is not RCU, but rather determining
      when accurate timekeeping can safely be disabled.  Whenever accurate
      timekeeping is required in a CONFIG_NO_HZ_FULL kernel, at least one
      CPU must keep the scheduling-clock tick going.  If even one CPU is
      executing in user mode, accurate timekeeping is required, particularly for
      architectures where gettimeofday() and friends do not enter the kernel.
      Only when all CPUs are really and truly idle can accurate timekeeping be
      disabled, allowing all CPUs to turn off the scheduling clock interrupt,
      thus greatly improving energy efficiency.
      
      This naturally raises the question "Why is this code in RCU rather than in
      timekeeping?", and the answer is that RCU has the data and infrastructure
      to efficiently make this determination.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      2333210b
  2. 30 Jul 2013 (2 commits)
    • rcu: Have the RCU tracepoints use the tracepoint_string infrastructure · f7f7bac9
      Committed by Steven Rostedt (Red Hat)
      Currently, RCU tracepoints save only a pointer to strings in the
      ring buffer. When displayed via the /sys/kernel/debug/tracing/trace
      file, they are rendered like a printf "%s": the address stored in
      the ring buffer is followed and the string it points to is printed.
      This requires that the strings be constant and persistent in the
      kernel.
      
      The problem with this is for tools like trace-cmd and perf that read the
      binary data from the buffers but have no access to the kernel memory to
      find out what string is represented by the address in the buffer.
      
      By using the tracepoint_string infrastructure, the RCU tracepoint strings
      can be exported such that userspace tools can map the addresses to
      the strings.
      
       # cat /sys/kernel/debug/tracing/printk_formats
      0xffffffff81a4a0e8 : "rcu_preempt"
      0xffffffff81a4a0f4 : "rcu_bh"
      0xffffffff81a4a100 : "rcu_sched"
      0xffffffff818437a0 : "cpuqs"
      0xffffffff818437a6 : "rcu_sched"
      0xffffffff818437a0 : "cpuqs"
      0xffffffff818437b0 : "rcu_bh"
      0xffffffff818437b7 : "Start context switch"
      0xffffffff818437cc : "End context switch"
      0xffffffff818437a0 : "cpuqs"
      [...]
      
      Now userspace tools can display:
      
       rcu_utilization:      Start context switch
       rcu_dyntick:          Start 1 0
       rcu_utilization:      End context switch
       rcu_batch_start:      rcu_preempt CBs=0/5 bl=10
       rcu_dyntick:          End 0 140000000000000
       rcu_invoke_callback:  rcu_preempt rhp=0xffff880071c0d600 func=proc_i_callback
       rcu_invoke_callback:  rcu_preempt rhp=0xffff880077b5b230 func=__d_free
       rcu_dyntick:          Start 140000000000000 0
       rcu_invoke_callback:  rcu_preempt rhp=0xffff880077563980 func=file_free_rcu
       rcu_batch_end:        rcu_preempt CBs-invoked=3 idle=>c<>c<>c<>c<
       rcu_utilization:      End RCU core
       rcu_grace_period:     rcu_preempt 9741 start
       rcu_dyntick:          Start 1 0
       rcu_dyntick:          End 0 140000000000000
       rcu_dyntick:          Start 140000000000000 0
      
      Instead of:
      
       rcu_utilization:      ffffffff81843110
       rcu_future_grace_period: ffffffff81842f1d 9939 9939 9940 0 0 3 ffffffff81842f32
       rcu_batch_start:      ffffffff81842f1d CBs=0/4 bl=10
       rcu_future_grace_period: ffffffff81842f1d 9939 9939 9940 0 0 3 ffffffff81842f3c
       rcu_grace_period:     ffffffff81842f1d 9939 ffffffff81842f80
       rcu_invoke_callback:  ffffffff81842f1d rhp=0xffff88007888aac0 func=file_free_rcu
       rcu_grace_period:     ffffffff81842f1d 9939 ffffffff81842f95
       rcu_invoke_callback:  ffffffff81842f1d rhp=0xffff88006aeb4600 func=proc_i_callback
       rcu_future_grace_period: ffffffff81842f1d 9939 9939 9940 0 0 3 ffffffff81842f32
       rcu_future_grace_period: ffffffff81842f1d 9939 9939 9940 0 0 3 ffffffff81842f3c
       rcu_invoke_callback:  ffffffff81842f1d rhp=0xffff880071cb9fc0 func=__d_free
       rcu_grace_period:     ffffffff81842f1d 9939 ffffffff81842f80
       rcu_invoke_callback:  ffffffff81842f1d rhp=0xffff88007888ae80 func=file_free_rcu
       rcu_batch_end:        ffffffff81842f1d CBs-invoked=4 idle=>c<>c<>c<>c<
       rcu_utilization:      ffffffff8184311f
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f7f7bac9
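The mechanism can be modeled in a few lines of userspace C: the trace buffer records only string addresses, and a separate exported table (playing the role of printk_formats above) lets a tool map each address back to its text. Names such as fmt_table and resolve() are invented for this sketch and are not the kernel's.

```c
#include <stddef.h>
#include <string.h>

/* "Exported" constant strings, standing in for the address/string
 * pairs a tool reads from printk_formats. */
static const char *const fmt_table[] = {
	"Start context switch",
	"End context switch",
	"cpuqs",
};

/* Resolve a recorded pointer the way a userspace tool would: by
 * matching it against the exported address/string table, never by
 * dereferencing kernel memory. */
static const char *resolve(const void *addr)
{
	size_t i;

	for (i = 0; i < sizeof(fmt_table) / sizeof(fmt_table[0]); i++)
		if ((const void *)fmt_table[i] == addr)
			return fmt_table[i];
	return "<unknown address>";
}
```

This is why the "Instead of:" listing below shows raw hex addresses: without the exported table, a binary-trace consumer has nothing to resolve the pointers against.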
    • rcu: Simplify RCU_STATE_INITIALIZER() macro · a41bfeb2
      Committed by Steven Rostedt (Red Hat)
      The RCU_STATE_INITIALIZER() macro is used only in the rcutree.c and
      rcutree_plugin.h files. It is passed as an rvalue to a variable of
      a similar name, and a per_cpu variable is also created with a
      similar name.
      
      The uses of RCU_STATE_INITIALIZER() can be simplified to remove
      some duplicated code. Currently, each of the macro's three users
      has this format:
      
      struct rcu_state rcu_sched_state =
      	RCU_STATE_INITIALIZER(rcu_sched, call_rcu_sched);
      DEFINE_PER_CPU(struct rcu_data, rcu_sched_data);
      
      Notice that "rcu_sched" appears three times. The same holds for
      the other two users. This can be condensed to just:
      
      RCU_STATE_INITIALIZER(rcu_sched, call_rcu_sched);
      
      by moving the rest into the macro itself.
      
      This also opens the door to allow the RCU tracepoint strings and
      their addresses to be exported so that userspace tracing tools can
      translate the contents of the pointers of the RCU tracepoints.
      The change will allow for helper code to be placed in the
      RCU_STATE_INITIALIZER() macro to export the name that is used.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a41bfeb2
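The condensation relies on ordinary ## token pasting and # stringification. A minimal userspace stand-in (simplified struct types, and a plain variable standing in for DEFINE_PER_CPU()) might look like:

```c
#include <string.h>

/* Simplified stand-ins for the kernel types. */
struct rcu_state { const char *name; void (*call)(void); };
struct rcu_data  { int dummy; };

/* One invocation emits the state structure (initialized with its own
 * name as a string) and the companion data variable, so "rcu_sched"
 * is written only once. */
#define RCU_STATE_INITIALIZER(sname, cr)		\
	struct rcu_state sname##_state = {		\
		.name = #sname,				\
		.call = cr,				\
	};						\
	struct rcu_data sname##_data	/* stands in for DEFINE_PER_CPU() */

static void call_rcu_sched(void) { }

RCU_STATE_INITIALIZER(rcu_sched, call_rcu_sched);
```

Because the macro now owns the name string, it is also the natural place to hang the tracepoint_string export mentioned below.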
  3. 15 Jul 2013 (1 commit)
    • rcu: delete __cpuinit usage from all rcu files · 49fb4c62
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the drivers/rcu uses of the __cpuinit macros
      from all C files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@freedesktop.org>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      49fb4c62
  4. 11 Jun 2013 (4 commits)
  5. 16 May 2013 (1 commit)
  6. 15 May 2013 (1 commit)
  7. 19 Apr 2013 (1 commit)
    • nohz: Ensure full dynticks CPUs are RCU nocbs · d1e43fa5
      Committed by Frederic Weisbecker
      We need full dynticks CPUs to also be RCU no-CBs CPUs so that
      we don't have to keep the tick around to handle RCU callbacks.
      
      Make sure the range passed to the nohz_full= boot parameter is a
      subset of rcu_nocbs=.
      
      The CPUs that fail to meet this requirement will be excluded from
      the nohz_full range. This is checked early at boot time, before
      any CPU has the opportunity to stop its tick.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      d1e43fa5
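Treating cpumasks as plain bitmasks, the boot-time enforcement reduces to a mask intersection. This is a hedged sketch with invented names, not the kernel's cpumask API:

```c
/* CPUs requested for nohz_full= that are not also in rcu_nocbs= get
 * dropped from the nohz_full set; the intersection keeps exactly the
 * CPUs that satisfy the no-CBs requirement.  Bit n represents CPU n. */
static unsigned long enforce_nocb_subset(unsigned long nohz_full_mask,
					 unsigned long nocb_mask)
{
	return nohz_full_mask & nocb_mask;
}
```

For example, requesting nohz_full= for CPUs 1-3 when only CPUs 1-2 are rcu_nocbs= leaves CPU 3 with its tick, rather than failing the boot.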
  8. 16 Apr 2013 (1 commit)
    • rcu: Kick adaptive-ticks CPUs that are holding up RCU grace periods · 65d798f0
      Committed by Paul E. McKenney
      Adaptive-ticks CPUs inform RCU when they enter kernel mode, but they do
      not necessarily turn the scheduler-clock tick back on.  This state of
      affairs could result in RCU waiting on an adaptive-ticks CPU running
      for an extended period in kernel mode.  Such a CPU will never run the
      RCU state machine, and could therefore indefinitely extend the RCU state
      machine, sooner or later resulting in an OOM condition.
      
      This patch, inspired by an earlier patch by Frederic Weisbecker, therefore
      causes RCU's force-quiescent-state processing to check for this condition
      and to send an IPI to CPUs that remain in that state for too long.
      "Too long" currently means about three jiffies by default, which is
      quite some time for a CPU to remain in the kernel without blocking.
      The rcutree.jiffies_till_first_fqs and rcutree.jiffies_till_next_fqs
      sysfs variables may be used to tune "too long" if needed.
      Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      65d798f0
  9. 26 Mar 2013 (11 commits)
  10. 14 Mar 2013 (1 commit)
  11. 13 Mar 2013 (1 commit)
    • rcu: Remove restrictions on no-CBs CPUs · 34ed6246
      Committed by Paul E. McKenney
      Currently, CPU 0 is constrained to not be a no-CBs CPU, and furthermore
      at least one no-CBs CPU must remain online at any given time.  These
      restrictions are problematic in some situations, such as cases where
      all CPUs must run a real-time workload that needs to be insulated from
      OS jitter and latencies due to RCU callback invocation.  This commit
      therefore provides no-CBs CPUs a (very crude and energy-inefficient)
      way to start and to wait for grace periods independently of the normal
      RCU callback mechanisms.  This approach allows any or all of the CPUs to
      be designated as no-CBs CPUs, and allows any proper subset of the CPUs
      (whether no-CBs CPUs or not) to be offlined.
      
      This commit also provides a fix for a locking bug spotted by Xie
      ChanglongX <changlongx.xie@intel.com>.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      34ed6246
  12. 09 Jan 2013 (2 commits)
    • rcu: Make rcu_nocb_poll an early_param instead of module_param · 1b0048a4
      Committed by Paul Gortmaker
      The as-documented rcu_nocb_poll will fail to enable this feature
      for two reasons: (1) there is an extra "s" in the documented name
      that is not in the code, and (2) since it uses module_param, it
      really expects a prefix, akin to "rcutree.fanout_leaf", and the
      prefix isn't documented.
      
      However, there are several reasons why we might not want to
      simply fix the typo and add the prefix:
      
      1) we'd end up with rcutree.rcu_nocb_poll, and would then probably
      want to change it to rcutree.nocb_poll
      
      2) if we did #1, then the prefix wouldn't be consistent with the
      rcu_nocbs=<cpumap> parameter (i.e. one with, one without prefix)
      
      3) the use of module_param in a header file is less than desired,
      since it isn't immediately obvious that it will get processed
      via rcutree.c and get the prefix from that (although use of
      module_param_named() could clarify that.)
      
      4) the implied export of /sys/module/rcutree/parameters/rcu_nocb_poll
      data to userspace via module_param() doesn't really buy us anything,
      as it is read-only and we can tell if it is enabled already without
      it, since there is a printk at early boot telling us so.
      
      In light of all that, just change it from a module_param() to an
      early_param() call, and worry about adding it to /sys later on if
      we decide to allow a dynamic setting of it.
      
      Also tag the variable as __read_mostly, since it will only ever be
      written, at most, once at boot.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      1b0048a4
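The key property of early_param() in this context is that the flag is matched against the raw boot command line, with no "rcutree." module prefix involved. A toy parser (hypothetical names, nothing like the kernel's actual parsing code) illustrates why the bare name then just works:

```c
#include <string.h>

/* The flag the commit converts; the kernel tags it __read_mostly
 * since it is written at most once, at boot. */
static int rcu_nocb_poll;

/* Scan a boot command line for the bare flag name, the way an
 * early_param() handler is matched: no module prefix required.
 * (A real parser tokenizes; strstr is enough for the sketch, and
 * "rcu_nocb_poll" does not occur inside other parameter names here.) */
static void parse_early_params(const char *cmdline)
{
	if (strstr(cmdline, "rcu_nocb_poll"))
		rcu_nocb_poll = 1;
}
```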
    • rcu: Prevent soft-lockup complaints about no-CBs CPUs · 353af9c9
      Committed by Paul Gortmaker
      The wait_event() at the head of the rcu_nocb_kthread() can result in
      soft-lockup complaints if the CPU in question does not register RCU
      callbacks for an extended period.  This commit therefore changes
      the wait_event() to a wait_event_interruptible().
      Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      353af9c9
  13. 17 Nov 2012 (2 commits)
    • rcu: Separate accounting of callbacks from callback-free CPUs · c635a4e1
      Committed by Paul E. McKenney
      Currently, callback invocations from callback-free CPUs are accounted to
      the CPU that registered the callback, but using the same field that is
      used for normal callbacks.  This makes it impossible to determine from
      debugfs output whether callbacks are in fact being diverted.  This commit
      therefore adds a separate ->n_nocbs_invoked field in the rcu_data structure
      in which diverted callback invocations are counted.  RCU's debugfs tracing
      still displays normal callback invocations using ci=, but displays
      diverted callbacks with nci=.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      c635a4e1
    • rcu: Add callback-free CPUs · 3fbfbf7a
      Committed by Paul E. McKenney
      RCU callback execution can add significant OS jitter and also can
      degrade both scheduling latency and, in asymmetric multiprocessors,
      energy efficiency.  This commit therefore adds the ability for selected
      CPUs ("rcu_nocbs=" boot parameter) to have their callbacks offloaded
      to kthreads.  If the "rcu_nocb_poll" boot parameter is also specified,
      these kthreads will do polling, removing the need for the offloaded
      CPUs to do wakeups.  At least one CPU must be doing normal callback
      processing: currently CPU 0 cannot be selected as a no-CBs CPU.
      In addition, attempts to offline the last normal-CBs CPU will fail.
      
      This feature was inspired by Jim Houston's and Joe Korty's JRCU, and
      this commit includes fixes to problems located by Fengguang Wu's
      kbuild test robot.
      
      [ paulmck: Added gfp.h include file as suggested by Fengguang Wu. ]
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      3fbfbf7a
  14. 14 Nov 2012 (1 commit)
  15. 09 Nov 2012 (1 commit)
  16. 24 Oct 2012 (1 commit)
  17. 26 Sep 2012 (1 commit)
  18. 23 Sep 2012 (6 commits)
    • rcu: Fix CONFIG_RCU_FAST_NO_HZ stall warning message · 86f343b5
      Committed by Paul E. McKenney
      The print_cpu_stall_fast_no_hz() function attempts to print -1 when
      the ->idle_gp_timer is not pending, but unsigned arithmetic causes it
      to instead print ULONG_MAX, which is 4294967295 on 32-bit systems and
      18446744073709551615 on 64-bit systems.  Neither of these are the most
      reader-friendly values, so this commit instead causes "timer not pending"
      to be printed when ->idle_gp_timer is not pending.
      Reported-by: Paul Walmsley <paul@pwsan.com>
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      86f343b5
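The bug class is easy to reproduce in userspace: assigning -1 to an unsigned long and printing it with %lu yields ULONG_MAX. The sketch below shows the buggy shape next to the fixed one; function and parameter names are illustrative, not the kernel's.

```c
#include <stdio.h>
#include <stddef.h>

/* Fixed shape: when the timer is not pending, print an explicit
 * message rather than a sentinel value. */
static void format_stall_info(char *buf, size_t len,
			      int timer_pending, unsigned long expires)
{
	if (timer_pending)
		snprintf(buf, len, "timer expires in %lu jiffies", expires);
	else
		snprintf(buf, len, "timer not pending");
}

/* Buggy shape: -1 stored in an unsigned long wraps to ULONG_MAX
 * (4294967295 on 32-bit, 18446744073709551615 on 64-bit). */
static void buggy_format(char *buf, size_t len, int timer_pending)
{
	unsigned long remaining = timer_pending ? 42 : -1;	/* wraps! */

	snprintf(buf, len, "%lu", remaining);
}
```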
    • rcu: Avoid rcu_print_detail_task_stall_rnp() segfault · 5fd4dc06
      Committed by Paul E. McKenney
      The rcu_print_detail_task_stall_rnp() function invokes
      rcu_preempt_blocked_readers_cgp() to verify that there are some preempted
      RCU readers blocking the current grace period outside of the protection
      of the rcu_node structure's ->lock.  This means that the last blocked
      reader might exit its RCU read-side critical section and remove itself
      from the ->blkd_tasks list before the ->lock is acquired, resulting in
      a segmentation fault when the subsequent code attempts to dereference
      the now-NULL gp_tasks pointer.
      
      This commit therefore moves the test under the lock.  This will not
      have measurable effect on lock contention because this code is invoked
      only when printing RCU CPU stall warnings, in other words, in the common
      case, never.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      5fd4dc06
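The fix pattern, testing the pointer only while holding the lock that protects removal, can be sketched single-threaded with a stand-in lock. All names here are illustrative; the real code uses the rcu_node structure's ->lock and the ->blkd_tasks list.

```c
#include <stddef.h>

/* A pointer that another path can NULL out while removing itself from
 * a list must be tested under the same lock that protects removal.
 * The "lock" is a flag so the sketch stays single-threaded. */
struct rnp { int locked; int *gp_tasks; };

static void rnp_lock(struct rnp *n)   { n->locked = 1; }
static void rnp_unlock(struct rnp *n) { n->locked = 0; }

/* Fixed shape: the NULL test happens only while the lock is held, so
 * the dereference cannot race with list removal.  Returns -1 when no
 * task is blocking (no segfault). */
static int read_first_blocked(struct rnp *n)
{
	int val = -1;

	rnp_lock(n);
	if (n->gp_tasks)	/* the test, now under the lock */
		val = *n->gp_tasks;
	rnp_unlock(n);
	return val;
}
```

The buggy shape performed the equivalent of the `if (n->gp_tasks)` test before taking the lock, leaving a window in which gp_tasks could become NULL before the dereference.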
    • rcu: Apply for_each_rcu_flavor() to increment_cpu_stall_ticks() · 115f7a7c
      Committed by Paul E. McKenney
      The increment_cpu_stall_ticks() function listed each RCU flavor
      explicitly, with an ifdef to handle preemptible RCU.  This commit
      therefore applies for_each_rcu_flavor() to save a line of code.
      
      Because this commit switches from a code-based enumeration of the
      flavors of RCU to an rcu_state-list-based enumeration, it is no longer
      possible to apply __get_cpu_var() to the per-CPU rcu_data structures.
      We instead use __this_cpu_var() on the rcu_state structure's ->rda field
      that references the corresponding rcu_data structures.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      115f7a7c
    • rcu: Fix obsolete rcu_initiate_boost() header comment · b065a853
      Committed by Paul E. McKenney
      Commit 1217ed1b (rcu: permit rcu_read_unlock() to be called while holding
      runqueue locks) made rcu_initiate_boost() restore irq state when releasing
      the rcu_node structure's ->lock, but failed to update the header comment
      accordingly.  This commit therefore brings the header comment up to date.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      b065a853
    • rcu: Improve boost selection when moving tasks to root rcu_node · 5cc900cf
      Committed by Paul E. McKenney
      The rcu_preempt_offline_tasks() function moves all tasks queued on
      a given leaf rcu_node structure to the root rcu_node, which is done
      when the last CPU corresponding to the leaf rcu_node structure goes
      offline.  Now that
      RCU-preempt's synchronize_rcu_expedited() implementation blocks CPU-hotplug
      operations during the initialization of each rcu_node structure's
      ->boost_tasks pointer, rcu_preempt_offline_tasks() can do a better job
      of setting the root rcu_node's ->boost_tasks pointer.
      
      The key point is that rcu_preempt_offline_tasks() runs as part of the
      CPU-hotplug process, so that a concurrent synchronize_rcu_expedited()
      is guaranteed to either have not started on the one hand (in which case
      there is no boosting on behalf of the expedited grace period) or to be
      completely initialized on the other (in which case, in the absence of
      other priority boosting, all ->boost_tasks pointers will be initialized).
      Therefore, if rcu_preempt_offline_tasks() finds that the ->boost_tasks
      pointer is equal to the ->exp_tasks pointer, it can be sure that it is
      correctly placed.
      
      In the case where there was boosting ongoing at the time that the
      synchronize_rcu_expedited() function started, different nodes might start
      boosting the tasks blocking the expedited grace period at different times.
      In this mixed case, the root node will either be boosting tasks for
      the expedited grace period already, or it will start as soon as it gets
      done boosting for the normal grace period -- but in this latter case,
      the root node's tasks needed to be boosted in any case.
      
      This commit therefore adds a check of the ->boost_tasks pointer against
      the ->exp_tasks pointer to the list that prevents updating ->boost_tasks.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      5cc900cf
    • rcu: Properly initialize ->boost_tasks on CPU offline · 1e3fd2b3
      Committed by Paul E. McKenney
      When rcu_preempt_offline_tasks() clears tasks from a leaf rcu_node
      structure, it does not NULL out the structure's ->boost_tasks field.
      This commit therefore fixes this issue.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      1e3fd2b3