1. 28 Jan, 2016 · 1 commit
  2. 03 Sep, 2015 · 1 commit
  3. 28 Aug, 2015 · 2 commits
  4. 05 Mar, 2015 · 1 commit
  5. 18 Apr, 2014 · 1 commit
  6. 25 Feb, 2014 · 2 commits
    • smp: Rename __smp_call_function_single() to smp_call_function_single_async() · c46fff2a
      Frederic Weisbecker committed
      The name __smp_call_function_single() doesn't tell much about the
      properties of this function, especially when compared to
      smp_call_function_single().
      
      The comments above the implementation are also misleading. The main
      point of this function is actually not to be able to embed the csd
      in an object; that is a requirement that results from the purpose of
      this function, which is to raise an IPI asynchronously.
      
      As such, it can be called with interrupts disabled. This feature
      comes at the cost of the caller, who then needs to serialize the
      IPIs on this csd.
      
      Let's rename the function and enhance the comments so that they
      reflect these properties.
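
      For illustration, a minimal usage sketch of the renamed API, assuming
      a hypothetical my_work object that embeds its csd (all names other
      than smp_call_function_single_async() are illustrative, not from this
      patch):

      	#include <linux/smp.h>
      	#include <linux/printk.h>

      	struct my_work {
      		struct call_single_data csd;	/* assumed zero-initialized */
      		int payload;
      	};

      	static void my_ipi_handler(void *info)
      	{
      		struct my_work *work = info;

      		pr_info("payload %d on cpu %d\n",
      			work->payload, smp_processor_id());
      	}

      	static void kick_cpu(int cpu, struct my_work *work)
      	{
      		work->csd.func = my_ipi_handler;
      		work->csd.info = work;
      		/*
      		 * Asynchronous: legal with interrupts disabled, but the
      		 * caller must not reuse work->csd until the previous IPI
      		 * on it has completed.
      		 */
      		smp_call_function_single_async(cpu, &work->csd);
      	}
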
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • smp: Remove wait argument from __smp_call_function_single() · fce8ad15
      Frederic Weisbecker committed
      The main point of calling __smp_call_function_single() is to send
      an IPI in a purely asynchronous way. By embedding a csd in an object,
      a caller can send the IPI without waiting for a previous one to
      complete, as is required by smp_call_function_single() for example.
      As such, sending this kind of IPI can be safe even when irqs are
      disabled.
      
      This flexibility comes at the expense of the caller, who then needs
      to synchronize the csd lifecycle on its own and make sure that IPIs
      on a single csd are serialized.
      
      This is how __smp_call_function_single() works when wait = 0, and
      this use case is a relevant one.
      
      Now there doesn't seem to be any use case with wait = 1 that can't be
      covered by smp_call_function_single() instead, which is safer. Let's
      look at the two possible scenarios:
      
      1) The user calls __smp_call_function_single(wait = 1) on a csd embedded
         in an object. It looks like a nice and convenient pattern at first
         sight because we can then retrieve the object from the IPI handler easily.
      
         But actually it is a waste of memory space in the object, since the csd
         can be allocated on the stack by smp_call_function_single(wait = 1)
         and the object can be passed as the IPI argument.
      
         Besides that, embedding the csd in an object is more error-prone
         because the caller must take care of the serialization of the IPIs
         for this csd.
      
      2) The user calls __smp_call_function_single(wait = 1) on a csd that
         is allocated on the stack. That's OK, but smp_call_function_single()
         can do it as well, and it already takes care of the allocation on
         the stack. Again, it's simpler and less error-prone.
      
      Therefore, using the underscore-prefixed API version with wait = 1
      is a bad pattern and a sign that the caller can do something safer
      and simpler.
      
      There was a single user of that option, which has just been converted.
      So let's remove this option to discourage further users.
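
      As a sketch of the preferred pattern described above (the handler and
      data names here are hypothetical), the synchronous case just passes
      the object as the IPI argument and lets smp_call_function_single()
      manage an on-stack csd:

      	#include <linux/smp.h>
      	#include <linux/printk.h>

      	struct my_data {
      		int value;
      	};

      	static void my_handler(void *info)
      	{
      		struct my_data *data = info;	/* the object, retrieved directly */

      		pr_info("value %d on cpu %d\n",
      			data->value, smp_processor_id());
      	}

      	static int run_on(int cpu, struct my_data *data)
      	{
      		/*
      		 * wait = 1: the csd lives on smp_call_function_single()'s
      		 * stack and the call returns only after the handler has
      		 * run, so no csd lifecycle management is needed here.
      		 */
      		return smp_call_function_single(cpu, my_handler, data, 1);
      	}
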
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  7. 30 Oct, 2013 · 1 commit
  8. 30 Aug, 2013 · 3 commits
  9. 03 Jan, 2013 · 1 commit
  10. 18 Aug, 2012 · 2 commits
  11. 02 Jun, 2012 · 2 commits
    • cpuidle: coupled: add parallel barrier function · 20ff51a3
      Colin Cross committed
      Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
      cpuidle state enter functions to handle resynchronization after
      determining if any cpu needs to abort.  The normal use case will
      be:
      
      static bool abort_flag;
      static atomic_t abort_barrier;

      int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
      {
      	if (arch_turn_off_irq_controller()) {
      		/* returns an error if an irq is pending and would be
      		   lost if idle continued and turned off power */
      		abort_flag = true;
      	}

      	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

      	if (abort_flag) {
      		/* one of the cpus didn't turn off its irq controller */
      		arch_turn_on_irq_controller();
      		return -EINTR;
      	}

      	/* continue with idle */
      	...
      }
      
      This will cause all cpus to abort idle together if one of them needs
      to abort.
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • cpuidle: add support for states that affect multiple cpus · 4126c019
      Colin Cross committed
      On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
      cpus cannot be independently powered down, either due to
      sequencing restrictions (on Tegra 2, cpu 0 must be the last to
      power down), or due to HW bugs (on OMAP4460, a cpu powering up
      will corrupt the gic state unless the other cpu runs a
      workaround).  Each cpu has a power state that it can enter without
      coordinating with the other cpu (usually Wait For Interrupt, or
      WFI), and one or more "coupled" power states that affect blocks
      shared between the cpus (L2 cache, interrupt controller, and
      sometimes the whole SoC).  Entering a coupled power state must
      be tightly controlled on both cpus.
      
      The easiest way to implement coupled cpu power states is
      to hotplug all but one cpu whenever possible, usually using a
      cpufreq governor that looks at cpu load to determine when to
      enable the secondary cpus.  This causes problems, as hotplug is an
      expensive operation, so the number of hotplug transitions must be
      minimized, leading to very slow response to loads, often on the
      order of seconds.
      
      This file implements an alternative solution, where each cpu will
      wait in the WFI state until all cpus are ready to enter a coupled
      state, at which point the coupled state function will be called
      on all cpus at approximately the same time.
      
      Once all cpus are ready to enter idle, they are woken by an smp
      cross call.  At this point, there is a chance that one of the
      cpus will find work to do, and choose not to enter idle.  A
      final pass is needed to guarantee that all cpus will call the
      power state enter function at the same time.  During this pass,
      each cpu will increment the ready counter, and continue once the
      ready counter matches the number of online coupled cpus.  If any
      cpu exits idle, the other cpus will decrement their counter and
      retry.
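
      A rough sketch of that final pass, assuming a bare atomic counter and
      a hypothetical abort check (the real implementation also tracks a
      separate waiting count):

      	#include <linux/atomic.h>

      	static atomic_t ready_count = ATOMIC_INIT(0);

      	/* Returns true once every online coupled cpu has checked in. */
      	static bool coupled_final_pass(int online_coupled_cpus)
      	{
      		atomic_inc(&ready_count);

      		while (atomic_read(&ready_count) < online_coupled_cpus) {
      			if (cpu_found_work()) {	/* hypothetical abort check */
      				atomic_dec(&ready_count);
      				return false;	/* exit idle and retry later */
      			}
      			cpu_relax();
      		}
      		return true;	/* all cpus enter the coupled state together */
      	}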
      
      To use coupled cpuidle states, a cpuidle driver must:
      
         Set struct cpuidle_device.coupled_cpus to the mask of all
         coupled cpus, usually the same as cpu_possible_mask if all cpus
         are part of the same cluster.  The coupled_cpus mask must be
         set in the struct cpuidle_device for each cpu.
      
         Set struct cpuidle_device.safe_state to a state that is not a
         coupled state.  This is usually WFI.
      
         Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
         state that affects multiple cpus.
      
         Provide a struct cpuidle_state.enter function for each state
         that affects multiple cpus.  This function is guaranteed to be
         called on all cpus at approximately the same time.  The driver
         should ensure that the cpus all abort together if any cpu tries
         to abort once the function is called.
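
      Putting those requirements together, a minimal sketch of a driver
      setup (driver name, state table, and enter callback bodies are
      hypothetical; the fields follow the description above):

      	#include <linux/cpuidle.h>
      	#include <linux/cpumask.h>
      	#include <linux/percpu.h>

      	static int my_wfi_enter(struct cpuidle_device *dev,
      				struct cpuidle_driver *drv, int index)
      	{
      		cpu_do_idle();		/* plain per-cpu WFI */
      		return index;
      	}

      	/* Called on all coupled cpus at approximately the same time. */
      	static int my_coupled_enter(struct cpuidle_device *dev,
      				    struct cpuidle_driver *drv, int index)
      	{
      		/* arch-specific power down of shared blocks goes here */
      		return index;
      	}

      	static struct cpuidle_driver my_driver = {
      		.name		= "my_coupled_idle",
      		.states = {
      			[0] = {	/* safe state: not coupled */
      				.name	= "WFI",
      				.enter	= my_wfi_enter,
      			},
      			[1] = {	/* affects blocks shared between cpus */
      				.name	= "C2",
      				.flags	= CPUIDLE_FLAG_COUPLED,
      				.enter	= my_coupled_enter,
      			},
      		},
      		.state_count	= 2,
      	};

      	static DEFINE_PER_CPU(struct cpuidle_device, my_dev);

      	static int __init my_idle_init(void)
      	{
      		int cpu, ret;

      		ret = cpuidle_register_driver(&my_driver);
      		if (ret)
      			return ret;

      		for_each_possible_cpu(cpu) {
      			struct cpuidle_device *dev = &per_cpu(my_dev, cpu);

      			dev->cpu = cpu;
      			/* all cpus in the cluster are coupled */
      			cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
      			dev->safe_state_index = 0;	/* WFI is not coupled */
      			ret = cpuidle_register_device(dev);
      			if (ret)
      				return ret;
      		}
      		return 0;
      	}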
      
      update1:
      
      cpuidle: coupled: fix count of online cpus
      
      online_count was never incremented on boot, and was also counting
      cpus that were not part of the coupled set.  Fix both issues by
      introducing a new function that counts online coupled cpus, and
      call it from register as well as the hotplug notifier.
      
      update2:
      
      cpuidle: coupled: fix decrementing ready count
      
      cpuidle_coupled_set_not_ready sometimes refuses to decrement the
      ready count in order to prevent a race condition.  This makes it
      unsuitable for use when finished with idle.  Add a new function
      cpuidle_coupled_set_done that decrements both the ready count and
      waiting count, and call it after idle is complete.
      
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Trinabh Gupta <g.trinabh@gmail.com>
      Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Signed-off-by: Colin Cross <ccross@android.com>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Len Brown <len.brown@intel.com>