1. 19 May 2015, 1 commit
  2. 15 May 2015, 2 commits
  3. 03 Apr 2015, 1 commit
  4. 06 Mar 2015, 1 commit
  5. 16 Feb 2015, 1 commit
    • PM / sleep: Make it possible to quiesce timers during suspend-to-idle · 124cf911
      Authored by Rafael J. Wysocki
      The efficiency of suspend-to-idle depends on being able to keep CPUs
      in the deepest available idle states for as much time as possible.
      Ideally, they should only be brought out of idle by system wakeup
      interrupts.
      
      However, timer interrupts occurring periodically prevent that from
      happening and it is not practical to chase all of the "misbehaving"
      timers in a whack-a-mole fashion.  A much more effective approach is
      to suspend the local ticks for all CPUs and the entire timekeeping
      along the lines of what is done during full suspend, which also
      helps to keep suspend-to-idle and full suspend reasonably similar.
      
      The idea is to suspend the local tick on each CPU executing
      cpuidle_enter_freeze() and to make the last of them suspend the
      entire timekeeping.  That should prevent timer interrupts from
      triggering until an IO interrupt wakes up one of the CPUs.  It
      needs to be done with interrupts disabled on all of the CPUs,
      though, because otherwise the suspended clocksource might be
      accessed by an interrupt handler which might lead to fatal
      consequences.
      
      Unfortunately, the existing ->enter callbacks provided by cpuidle
      drivers generally cannot be used for implementing that, because some
      of them re-enable interrupts temporarily and some idle entry methods
      cause interrupts to be re-enabled automatically on exit.  Also, some
      of these callbacks manipulate the local clock event devices of the
      CPUs, which really shouldn't be done after their ticks have been
      suspended.
      
      To overcome that difficulty, introduce a new cpuidle state callback,
      ->enter_freeze, that will be guaranteed (1) to keep interrupts
      disabled all the time (and return with interrupts disabled) and (2)
      not to touch the CPU timer devices.  Modify cpuidle_enter_freeze() to
      look for the deepest available idle state with ->enter_freeze present
      and to make the CPU execute that callback with suspended tick (and the
      last of the online CPUs to execute it with suspended timekeeping).
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
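
      A minimal sketch of how a driver might wire up the new callback
      (driver and function names are hypothetical; the callback shape
      follows the contract described above):

        static void foo_enter_freeze(struct cpuidle_device *dev,
                                     struct cpuidle_driver *drv, int index)
        {
                /* ->enter_freeze contract: interrupts stay disabled for the
                 * whole call and no CPU timer device is touched. */
                foo_hw_enter_deepest_state();
        }

        static struct cpuidle_driver foo_idle_driver = {
                .name = "foo_idle",
                .states = {
                        [0] = { .name = "WFI",  .enter = foo_enter_wfi },
                        [1] = { .name = "DEEP", .enter = foo_enter_deep,
                                .enter_freeze = foo_enter_freeze },
                },
                .state_count = 2,
        };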
  6. 14 Feb 2015, 1 commit
    • PM / sleep: Re-implement suspend-to-idle handling · 38106313
      Authored by Rafael J. Wysocki
      In preparation for adding support for quiescing timers in the final
      stage of suspend-to-idle transitions, rework freeze_enter(), which
      makes the system wait for a wakeup event; freeze_wake(), which
      terminates the suspend-to-idle loop; and the mechanism by which deep
      idle states are entered during suspend-to-idle.
      
      First of all, introduce a simple state machine for suspend-to-idle
      and make the code in question use it.
      
      Second, prevent freeze_enter() from losing wakeup events due to race
      conditions and ensure that the number of online CPUs won't change
      while it is being executed.  In addition to that, make it force
      all of the CPUs to re-enter the idle loop in case they are in idle
      states already (so they can enter deeper idle states if possible).
      
      Next, drop cpuidle_use_deepest_state() and replace use_deepest_state
      checks in cpuidle_select() and cpuidle_reflect() with a single
      suspend-to-idle state check in cpuidle_idle_call().
      
      Finally, introduce cpuidle_enter_freeze() that will simply find the
      deepest idle state available to the given CPU and enter it using
      cpuidle_enter().
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
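
      A rough sketch of the simple state machine described above (the enum
      and the wake helper are assumptions reconstructed from this
      description, not verified source):

        enum freeze_state {
                FREEZE_STATE_NONE,   /* not in suspend-to-idle */
                FREEZE_STATE_ENTER,  /* looping, waiting for a wakeup event */
                FREEZE_STATE_WAKE,   /* wakeup event seen, leave the loop */
        };

        static enum freeze_state suspend_freeze_state;

        /* Called by wakeup sources to terminate the suspend-to-idle loop. */
        static void freeze_wake_sketch(void)
        {
                if (suspend_freeze_state > FREEZE_STATE_NONE)
                        suspend_freeze_state = FREEZE_STATE_WAKE;
        }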
  7. 17 Dec 2014, 1 commit
    • cpuidle / ACPI: remove unused CPUIDLE_FLAG_TIME_INVALID · 62c4cf97
      Authored by Len Brown
      CPUIDLE_FLAG_TIME_INVALID is no longer checked
      by menu or ladder cpuidle governors, so don't
      bother setting or defining it.
      
      It was originally invented to account for the fact that
      acpi_safe_halt() enables interrupts to invoke HLT.
      That would allow interrupt service routines to be included
      in the last_idle duration measurements made in cpuidle_enter_state(),
      potentially returning a duration much larger than reality.
      
      But menu and ladder can gracefully handle erroneously large duration
      intervals without checking for CPUIDLE_FLAG_TIME_INVALID.
      Further, if they don't check CPUIDLE_FLAG_TIME_INVALID, they
      can also benefit from the instances when the duration interval
      is not erroneously large.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
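
      For context, a simplified sketch of the measurement in question
      (condensed from what cpuidle_enter_state() does; variables
      abbreviated):

        /* If ->enter() runs with interrupts enabled, as acpi_safe_halt()
         * does, any interrupt handled before the second timestamp gets
         * counted as idle time, inflating the measured residency. */
        time_start = ktime_get();
        entered_state = target_state->enter(dev, drv, index);
        time_end = ktime_get();
        dev->last_residency = (int)ktime_to_us(ktime_sub(time_end, time_start));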
  8. 13 Nov 2014, 1 commit
    • cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic · b82b6cca
      Authored by Daniel Lezcano
      The only place where the time is invalid is when the ACPI_CSTATE_FFH
      entry method is not set.  Otherwise, for all the drivers, the time
      can be correctly measured.
      
      Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the
      drivers for all the states, just invert the logic by replacing it
      with CPUIDLE_FLAG_TIME_INVALID.  That flag only needs to be set for
      the ACPI idle driver; the former flag can be removed from all the
      other drivers, and the check in the governors is inverted accordingly.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
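
      A sketch of the inverted governor-side check (the helper name is
      hypothetical; it stands in for the governor's fallback path):

        /* Before: every driver had to assert that a state's time is valid. */
        if (!(state->flags & CPUIDLE_FLAG_TIME_VALID))
                distrust_measured_residency();

        /* After: only ACPI idle marks the one state that cannot be timed. */
        if (state->flags & CPUIDLE_FLAG_TIME_INVALID)
                distrust_measured_residency();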
  9. 28 May 2014, 1 commit
  10. 07 May 2014, 1 commit
  11. 01 May 2014, 1 commit
  12. 11 Mar 2014, 3 commits
  13. 30 Oct 2013, 2 commits
  14. 15 Jul 2013, 2 commits
  15. 11 Jun 2013, 1 commit
    • cpuidle: simplify multiple driver support · 82467a5a
      Authored by Daniel Lezcano
      Commit bf4d1b5d (cpuidle: support multiple drivers) introduced support
      for using multiple cpuidle drivers at the same time.  It added a
      couple of new APIs to register the driver per CPU, but that led to
      some unnecessary code complexity related to the kernel config options
      deciding whether or not the multiple driver support is enabled.  The
      code has to work as it did before when the multiple driver support is
      not enabled and the multiple driver support has to be compatible with
      the previously existing API.
      
      Remove the new API, not used by any driver in the tree yet (but
      needed for the HMP cpuidle drivers that will be submitted soon), and
      add a new cpumask pointer to the cpuidle driver structure that will
      point to the mask of CPUs handled by the given driver.  That will
      allow the cpuidle_[un]register_driver() API to be used for the
      multiple driver support along with the cpuidle_[un]register()
      functions added recently.
      
      [rjw: Changelog]
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
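
      A sketch of what the per-driver cpumask might look like in use
      (platform and mask names are hypothetical):

        static struct cpuidle_driver little_idle_driver = {
                .name    = "little_idle",
                .cpumask = &little_cpu_mask,  /* CPUs handled by this driver */
                /* ... states ... */
        };

        static struct cpuidle_driver big_idle_driver = {
                .name    = "big_idle",
                .cpumask = &big_cpu_mask,
                /* ... states ... */
        };

        /* Both drivers go through the same pre-existing API: */
        cpuidle_register_driver(&little_idle_driver);
        cpuidle_register_driver(&big_idle_driver);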
  16. 23 Apr 2013, 2 commits
    • cpuidle: make a single register function for all · 4c637b21
      Authored by Daniel Lezcano
      The usual scheme to initialize a cpuidle driver on an SMP system is:
      
      	cpuidle_register_driver(drv);
      	for_each_possible_cpu(cpu) {
      		device = &per_cpu(cpuidle_dev, cpu);
      		cpuidle_register_device(device);
      	}
      
      This code is duplicated in each cpuidle driver.
      
      On UP systems, it is done this way:
      
      	cpuidle_register_driver(drv);
      	device = &per_cpu(cpuidle_dev, cpu);
      	cpuidle_register_device(device);
      
      On UP, the macro 'for_each_cpu' does one iteration:
      
      #define for_each_cpu(cpu, mask)                 \
              for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
      
      Hence, the initialization loop is the same for UP as for SMP.
      
      Besides, we saw various bugs, mis-initializations and unchecked return
      codes across the different drivers; the code is duplicated, bugs
      included.  After fixing all of these, it turns out the initialization
      pattern is the same for everyone.
      
      Please note, some drivers do dev->state_count = drv->state_count.  This
      is not necessary because it is done by the cpuidle_enable_device()
      function in the cpuidle framework, which holds as long as all the
      devices have the same states.  Otherwise, the 'low level' API should be
      used instead, with driver-specific initialization.
      
      Let's add a wrapper function doing this initialization with a cpumask parameter
      for the coupled idle states and use it for all the drivers.
      
      That will save a lot of LOC, consolidate the code, and allow future
      modifications to be made in a single place.  Another benefit is the
      consolidation of the cpuidle_device variable, which now lives in the
      cpuidle framework and is no longer spread across the different
      arch-specific drivers.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
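
      A sketch of the resulting driver-side code (assuming the wrapper is
      named cpuidle_register() and that a NULL cpumask means "no coupled
      idle states"):

        static int __init foo_idle_init(void)
        {
                /* One call replaces the register-driver-then-loop pattern;
                 * NULL: this platform has no coupled idle states. */
                return cpuidle_register(&foo_idle_driver, NULL);
        }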
    • cpuidle: remove en_core_tk_irqen flag · 554c06ba
      Authored by Daniel Lezcano
      The en_core_tk_irqen flag is set in all the cpuidle drivers, which
      means it is not necessary to specify this flag.
      
      Remove the flag and the code related to it.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Kevin Hilman <khilman@linaro.org>  # for mach-omap2/*
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  17. 18 Apr 2013, 1 commit
  18. 01 Apr 2013, 2 commits
  19. 12 Feb 2013, 1 commit
  20. 15 Jan 2013, 1 commit
  21. 15 Nov 2012, 3 commits
  22. 23 Aug 2012, 1 commit
    • ARM: omap: allow building omap44xx without SMP · c7a9b09b
      Authored by Arnd Bergmann
      The new omap4 cpuidle implementation currently requires
      ARCH_NEEDS_CPU_IDLE_COUPLED, which only works on SMP.
      
      This patch makes it possible to build a non-SMP kernel
      for that platform. This is not normally desired for
      end-users but can be useful for testing.
      
      Without this patch, building rand-0y2jSKT results in:
      
      drivers/cpuidle/coupled.c: In function 'cpuidle_coupled_poke':
      drivers/cpuidle/coupled.c:317:3: error: implicit declaration of function '__smp_call_function_single' [-Werror=implicit-function-declaration]
      
      It's not clear if this patch is the best solution for
      the problem at hand. I have made sure that we can now
      build the kernel in all configurations, but that does
      not mean it will actually work on an OMAP44xx.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Kevin Hilman <khilman@ti.com>
      Cc: Tony Lindgren <tony@atomide.com>
  23. 11 Jul 2012, 1 commit
    • PM / cpuidle: System resume hang fix with cpuidle · 8651f97b
      Authored by Preeti U Murthy
      On certain BIOSes, resume hangs if CPUs are allowed to enter idle
      states during suspend [1].
      
      This was fixed in the ACPI idle driver [2], but the intel_idle driver
      does not have this fix.  Thus, instead of replicating the fix in both
      idle drivers, or in more platform-specific idle drivers if needed, the
      more general cpuidle infrastructure should handle this.
      
      A suspend callback in cpuidle_driver could handle this fix.  But a
      cpuidle_driver provides only basic functionality: platform idle state
      detection and the mechanisms to enter and exit CPU idle states.  All
      other cpuidle functions live in the generic cpuidle infrastructure,
      for the good reason that all cpuidle drivers, irrespective of their
      platforms, support these functions.
      
      One option therefore would be to register a suspend callback in cpuidle
      which handles this fix.  This could be called through a PM_SUSPEND_PREPARE
      notifier.  But that is too generic a notifier for a driver to handle.
      
      Also, ideally it is not cpuidle's job to handle side effects of
      suspend.  It should expose interfaces which "handle cpuidle 'during'
      suspend" or any other operation, which the subsystems call during the
      respective operation.
      
      The fix demands that during suspend no CPUs be allowed to enter deep
      C-states.  The interface cpuidle_uninstall_idle_handler() in cpuidle
      ensures that.  Not just that: it also kicks all the CPUs that are
      already in idle out of their idle states, which used to be done during
      CPU hotplug through the CPU_DYING_FROZEN callbacks.
      
      Now the question arises of when during suspend
      cpuidle_uninstall_idle_handler() should be called.  Since we are
      dealing with drivers, it seems best to call this function during
      dpm_suspend().  Delaying the call till dpm_suspend_noirq() does no
      harm, as long as it is before cpu_hotplug_begin(), to avoid race
      conditions with CPU hotplug operations.  In dpm_suspend_noirq(), it
      would be wise to place this call before suspend_device_irqs() to
      avoid ugly interactions with the latter.
      
      Analogously, during resume.
      
      References:
      [1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/674075
      [2] http://marc.info/?l=linux-pm&m=133958534231884&w=2
      Reported-and-tested-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
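
      A sketch of the call placement argued for above (assuming
      cpuidle_pause()/cpuidle_resume() as thin wrappers around
      cpuidle_uninstall_idle_handler()/cpuidle_install_idle_handler()):

        /* In dpm_suspend_noirq(), before device IRQs are suspended: */
        cpuidle_pause();        /* uninstall the handler, kick idle CPUs */
        suspend_device_irqs();

        /* ...and the mirror image in dpm_resume_noirq(): */
        resume_device_irqs();
        cpuidle_resume();       /* reinstall the idle handler */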
  24. 06 Jul 2012, 1 commit
  25. 04 Jul 2012, 3 commits
    • PM / Domains: Add preliminary support for cpuidle, v2 · cbc9ef02
      Authored by Rafael J. Wysocki
      On some systems there are CPU cores located in the same power
      domains as I/O devices.  Then, power can only be removed from the
      domain if all I/O devices in it are not in use and the CPU core
      is idle.  Add preliminary support for that to the generic PM domains
      framework.
      
      First, the platform is expected to provide a cpuidle driver with one
      extra state designated for use with the generic PM domains code.
      This state should be initially disabled and its exit_latency value
      should be set to whatever time is needed to bring up the CPU core
      itself after restoring power to it, not including the domain's
      power on latency.  Its .enter() callback should point to a procedure
      that will remove power from the domain containing the CPU core at
      the end of the CPU power transition.
      
      The remaining characteristics of the extra cpuidle state, referred to
      as the "domain" cpuidle state below, (e.g. power usage, target
      residency) should be populated in accordance with the properties of
      the hardware.
      
      Next, the platform should execute genpd_attach_cpuidle() on the PM
      domain containing the CPU core.  That will cause the generic PM
      domains framework to treat that domain in a special way such that:
      
       * When all devices in the domain have been suspended and it is about
         to be turned off, the states of the devices will be saved, but
         power will not be removed from the domain.  Instead, the "domain"
         cpuidle state will be enabled so that power can be removed from
         the domain when the CPU core is idle and the state has been chosen
         as the target by the cpuidle governor.
      
       * When the first I/O device in the domain is resumed and
         __pm_genpd_poweron() is called for the first time after
         power has been removed from the domain, the "domain" cpuidle
         state will be disabled to avoid subsequent surprise power removals
         via cpuidle.
      
      The effective exit_latency value of the "domain" cpuidle state
      depends on the time needed to bring up the CPU core itself after
      restoring power to it as well as on the power on latency of the
      domain containing the CPU core.  Thus the "domain" cpuidle state's
      exit_latency has to be recomputed every time the domain's power on
      latency is updated, which may happen every time power is restored
      to the domain, if the measured power on latency is greater than
      the latency stored in the corresponding generic_pm_domain structure.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
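
      A sketch of the platform-side setup (names are hypothetical;
      genpd_attach_cpuidle() is assumed to take the domain and the index of
      the "domain" state, and the per-state disable knob is assumed to be a
      field of the state):

        /* State 2 is the "domain" state: registered disabled, with
         * exit_latency covering only the CPU core's own power-up time. */
        foo_idle_driver.states[2].disabled = true;
        foo_idle_driver.states[2].exit_latency = FOO_CORE_POWERUP_US;
        foo_idle_driver.states[2].enter = foo_enter_cpu_domain_off;
        cpuidle_register_driver(&foo_idle_driver);

        /* Tell the generic PM domains framework about it: */
        genpd_attach_cpuidle(&foo_cpu_pm_domain, 2);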
    • PM / cpuidle: Add driver reference counter · 6e797a07
      Authored by Rafael J. Wysocki
      Add a reference counter for the cpuidle driver, so that it can't
      be unregistered when it is in use.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
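
      A sketch of the intended usage (cpuidle_driver_ref() and
      cpuidle_driver_unref() are the assumed accessor names):

        /* Pin the current driver while using it; unregistering a driver
         * with a non-zero reference count is refused. */
        struct cpuidle_driver *drv = cpuidle_driver_ref();
        if (drv) {
                /* ... use drv ... */
                cpuidle_driver_unref();
        }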
    • cpuidle: move field disable from per-driver to per-cpu · dc7fd275
      Authored by ShuoX Liu
      Andrew J. Schorr raised a question: when he changes the disable setting
      on a single CPU, it affects all the other CPUs.  Basically, the disable
      field is currently per-driver instead of per-CPU, so all the C-states
      of the same driver are shared by all the CPUs in the same machine.
      
      The patch changes the `disable' field to per-CPU, so we can set it
      separately for each CPU.
      Signed-off-by: ShuoX Liu <shuox.liu@intel.com>
      Reported-by: Andrew J. Schorr <aschorr@telemetry-investments.com>
      Reviewed-by: Yanmin Zhang <yanmin_zhang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
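
      A sketch of the consumer side after the move (the per-device
      states_usage array is assumed from the description):

        /* Governor selection loop: skip states that this particular
         * CPU's device has disabled. */
        for (i = 0; i < drv->state_count; i++) {
                if (dev->states_usage[i].disable)  /* per-cpu, not per-driver */
                        continue;
                /* ... consider drv->states[i] as a candidate ... */
        }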
  26. 02 Jun 2012, 2 commits
    • cpuidle: coupled: add parallel barrier function · 20ff51a3
      Authored by Colin Cross
      Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
      cpuidle state enter functions to handle resynchronization after
      determining if any cpu needs to abort.  The normal use case will
      be:
      
      static bool abort_flag;
      static atomic_t abort_barrier;
      
      int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
      {
              if (arch_turn_off_irq_controller()) {
                      /* returns an error if an irq is pending and would be
                         lost if idle continued and turned off power */
                      abort_flag = true;
              }
      
              cpuidle_coupled_parallel_barrier(dev, &abort_barrier);
      
              if (abort_flag) {
                      /* One of the cpus didn't turn off its irq controller */
                      arch_turn_on_irq_controller();
                      return -EINTR;
              }
      
              /* continue with idle */
              ...
      }
      
      This will cause all cpus to abort idle together if one of them needs
      to abort.
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • cpuidle: add support for states that affect multiple cpus · 4126c019
      Authored by Colin Cross
      On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
      cpus cannot be independently powered down, either due to
      sequencing restrictions (on Tegra 2, cpu 0 must be the last to
      power down), or due to HW bugs (on OMAP4460, a cpu powering up
      will corrupt the gic state unless the other cpu runs a work
      around).  Each cpu has a power state that it can enter without
      coordinating with the other cpu (usually Wait For Interrupt, or
      WFI), and one or more "coupled" power states that affect blocks
      shared between the cpus (L2 cache, interrupt controller, and
      sometimes the whole SoC).  Entering a coupled power state must
      be tightly controlled on both cpus.
      
      The easiest solution to implementing coupled cpu power states is
      to hotplug all but one cpu whenever possible, usually using a
      cpufreq governor that looks at cpu load to determine when to
      enable the secondary cpus.  This causes problems, as hotplug is an
      expensive operation, so the number of hotplug transitions must be
      minimized, leading to very slow response to loads, often on the
      order of seconds.
      
      This file implements an alternative solution, where each cpu will
      wait in the WFI state until all cpus are ready to enter a coupled
      state, at which point the coupled state function will be called
      on all cpus at approximately the same time.
      
      Once all cpus are ready to enter idle, they are woken by an smp
      cross call.  At this point, there is a chance that one of the
      cpus will find work to do, and choose not to enter idle.  A
      final pass is needed to guarantee that all cpus will call the
      power state enter function at the same time.  During this pass,
      each cpu will increment the ready counter, and continue once the
      ready counter matches the number of online coupled cpus.  If any
      cpu exits idle, the other cpus will decrement their counter and
      retry.
      
      To use coupled cpuidle states, a cpuidle driver must do the following
      (see the sketch at the end of this entry):
      
         Set struct cpuidle_device.coupled_cpus to the mask of all
         coupled cpus, usually the same as cpu_possible_mask if all cpus
         are part of the same cluster.  The coupled_cpus mask must be
         set in the struct cpuidle_device for each cpu.
      
         Set struct cpuidle_device.safe_state to a state that is not a
         coupled state.  This is usually WFI.
      
         Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
         state that affects multiple cpus.
      
         Provide a struct cpuidle_state.enter function for each state
         that affects multiple cpus.  This function is guaranteed to be
         called on all cpus at approximately the same time.  The driver
         should ensure that the cpus all abort together if any cpu tries
         to abort once the function is called.
      
      update1:
      
      cpuidle: coupled: fix count of online cpus
      
      online_count was never incremented on boot, and was also counting
      cpus that were not part of the coupled set.  Fix both issues by
      introducing a new function that counts online coupled cpus, and
      call it from register as well as the hotplug notifier.
      
      update2:
      
      cpuidle: coupled: fix decrementing ready count
      
      cpuidle_coupled_set_not_ready sometimes refuses to decrement the
      ready count in order to prevent a race condition.  This makes it
      unsuitable for use when finished with idle.  Add a new function
      cpuidle_coupled_set_done that decrements both the ready count and
      waiting count, and call it after idle is complete.
      
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Trinabh Gupta <g.trinabh@gmail.com>
      Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Signed-off-by: Colin Cross <ccross@android.com>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Len Brown <len.brown@intel.com>
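
      A sketch of the four setup steps listed above (platform names are
      hypothetical; the changelog names a per-device safe_state, shown here
      as an index field):

        static int __init foo_coupled_idle_init(void)
        {
                int cpu;

                /* Step 3: mark the deep state as coupled; its ->enter is
                 * the coupled entry function (step 4, cf. the
                 * arch_cpuidle_enter example in the previous entry). */
                foo_idle_driver.states[1].flags |= CPUIDLE_FLAG_COUPLED;
                cpuidle_register_driver(&foo_idle_driver);

                for_each_possible_cpu(cpu) {
                        struct cpuidle_device *dev = &per_cpu(foo_idle_dev, cpu);

                        dev->cpu = cpu;
                        /* Step 1: all CPUs of the cluster are coupled. */
                        cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
                        /* Step 2: WFI (state 0) is the safe, non-coupled
                         * fallback state. */
                        dev->safe_state_index = 0;
                        cpuidle_register_device(dev);
                }
                return 0;
        }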
  27. 30 Mar 2012, 2 commits