1. 18 Mar 2020, 3 commits
  2. 24 Nov 2019, 1 commit
    • cpuidle: menu: Fix wakeup statistics updates for polling state · 27ab8f16
      By Rafael J. Wysocki
      [ Upstream commit 5f26bdceb9c0a5e6c696aa2899d077cd3ae93413 ]
      
      If the CPU exits the "polling" state due to the time limit in the
      loop in poll_idle(), this is not a real wakeup and it just means
      that the "polling" state selection was not adequate.  The governor
      mispredicted short idle duration, but had a more suitable state been
      selected, the CPU might have spent more time in it.  In fact, there
      is no reason to expect that there would have been a wakeup event
      earlier than the next timer in that case.
      
      Handling such cases as regular wakeups in menu_update() may cause the
      menu governor to make suboptimal decisions going forward, but ignoring
      them altogether would not be correct either, because every time
      menu_select() is invoked, it makes a separate new attempt to predict
      the idle duration taking distinct time to the closest timer event as
      input and the outcomes of all those attempts should be recorded.
      
      For this reason, make menu_update() always assume that if the
      "polling" state was exited due to the time limit, the next proper
      wakeup event for the CPU would be the next timer event (not
      including the tick).
      
      Fixes: a37b969a "cpuidle: poll_state: Add time limit to poll_idle()"
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
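      The accounting rule described above can be sketched as a tiny userspace model (plain C; the function name and microsecond units are illustrative, not the kernel's menu_update() internals):

      ```c
      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative model, not kernel code: if poll_idle() gave up because
       * of its time limit, the measured interval is not a real wakeup, so
       * account the idle period as lasting until the next timer event
       * (tick excluded) instead of the measured interval. */
      static uint64_t accounted_idle_us(bool poll_time_limit,
                                        uint64_t measured_us,
                                        uint64_t next_timer_us)
      {
          return poll_time_limit ? next_timer_us : measured_us;
      }
      ```

      With this, a poll-loop timeout after 120 us with the next timer 5000 us away is recorded as a 5000 us sleep, so the governor's statistics are not skewed toward short idle predictions.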
  3. 31 May 2018, 1 commit
  4. 06 Apr 2018, 1 commit
    • cpuidle: Return nohz hint from cpuidle_select() · 45f1ff59
      By Rafael J. Wysocki
      Add a new pointer argument to cpuidle_select() and to the ->select
      cpuidle governor callback to allow a boolean value indicating
      whether or not the tick should be stopped before entering the
      selected state to be returned from there.
      
      Make the ladder governor ignore that pointer (to preserve its
      current behavior) and make the menu governor return 'false' through
      it if:
       (1) the idle exit latency is constrained at 0, or
       (2) the selected state is a polling one, or
       (3) the expected idle period duration is within the tick period
           range.
      
      In addition to that, the correction factor computations in the menu
      governor need to take the possibility that the tick may not be
      stopped into account to avoid artificially small correction factor
      values.  To that end, add a mechanism to record tick wakeups, as
      suggested by Peter Zijlstra, and use it to modify the menu_update()
      behavior when tick wakeup occurs.  Namely, if the CPU is woken up by
      the tick and the return value of tick_nohz_get_sleep_length() is not
      within the tick boundary, the predicted idle duration is likely too
      short, so make menu_update() try to compensate for that by updating
      the governor statistics as though the CPU was idle for a long time.
      
      Since the value returned through the new argument pointer of
      cpuidle_select() is not used by its caller yet, this change by
      itself is not expected to alter the functionality of the code.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
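      The three-condition decision described above can be modeled in a few lines (a hedged sketch; the function and parameter names are illustrative, not the kernel's):

      ```c
      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative model of the menu governor's tick-stop hint:
       * return false (keep the tick running) in the three cases listed
       * in the commit message, true otherwise. */
      static bool menu_stop_tick(uint64_t latency_req_ns, bool state_is_polling,
                                 uint64_t predicted_us, uint64_t tick_period_us)
      {
          if (latency_req_ns == 0)           /* (1) exit latency constrained at 0 */
              return false;
          if (state_is_polling)              /* (2) a polling state was selected */
              return false;
          if (predicted_us < tick_period_us) /* (3) idle shorter than a tick */
              return false;
          return true;
      }
      ```

      The hint travels back through the new pointer argument of cpuidle_select(); as the commit notes, the caller does not consume it yet at this point.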
  5. 29 Mar 2018, 1 commit
    • PM: cpuidle/suspend: Add s2idle usage and time state attributes · 64bdff69
      By Rafael J. Wysocki
      Add a new attribute group called "s2idle" under the sysfs directory
      of each cpuidle state that supports the ->enter_s2idle callback
      and put two new attributes, "usage" and "time", into that group to
      represent the number of times the given state was requested for
      suspend-to-idle and the total time spent in suspend-to-idle after
      requesting that state, respectively.
      
      That will allow diagnostic information related to suspend-to-idle
      to be collected without enabling advanced debug features and
      analyzing dmesg output.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  6. 12 Feb 2018, 1 commit
  7. 02 Jan 2018, 1 commit
    • cpuidle: Add new macro to enter a retention idle state · db50a74d
      By Prashanth Prakash
      If a CPU is entering a low power idle state where it doesn't lose any
      context, then there is no need to call cpu_pm_enter()/cpu_pm_exit().
      Add a new macro(CPU_PM_CPU_IDLE_ENTER_RETENTION) to be used by cpuidle
      drivers when they are entering retention state. By not calling
      cpu_pm_enter and cpu_pm_exit we reduce the latency involved in
      entering and exiting the retention idle states.
      
      CPU_PM_CPU_IDLE_ENTER_RETENTION assumes that no state is lost and
      hence CPU PM notifiers will not be called. We may need a broader
      change if we need to support partial retention states efficiently.
      
      On an ARM64-based Qualcomm server platform we measured the following
      overhead for calling cpu_pm_enter and cpu_pm_exit for retention states.
      
      workload: stress --hdd #CPUs --hdd-bytes 32M  -t 30
              Average overhead of cpu_pm_enter - 1.2us
              Average overhead of cpu_pm_exit  - 3.1us
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
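      The difference between the two entry paths can be pictured with a small userspace stand-in (the *_model names are hypothetical; the real macros wrap a driver's low-level idle function):

      ```c
      /* Illustrative model, not the kernel macros: the regular idle entry
       * path brackets the low-level entry with CPU PM notifier calls, while
       * the retention path introduced here skips them, since no CPU context
       * is lost. */
      static int pm_notifier_calls;

      static int low_level_idle_model(int idx)
      {
          return idx;           /* stand-in for the arch/firmware idle entry */
      }

      /* Models CPU_PM_CPU_IDLE_ENTER: pays the notifier cost on both edges. */
      static int idle_enter_model(int idx)
      {
          pm_notifier_calls++;  /* cpu_pm_enter(): ~1.2us measured above */
          int ret = low_level_idle_model(idx);
          pm_notifier_calls++;  /* cpu_pm_exit():  ~3.1us measured above */
          return ret;
      }

      /* Models CPU_PM_CPU_IDLE_ENTER_RETENTION: no notifiers at all. */
      static int idle_enter_retention_model(int idx)
      {
          return low_level_idle_model(idx);
      }
      ```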
  8. 30 Aug 2017, 3 commits
  9. 11 Aug 2017, 1 commit
  10. 31 Jan 2017, 1 commit
  11. 29 Nov 2016, 1 commit
  12. 21 Oct 2016, 1 commit
  13. 22 Jul 2016, 1 commit
  14. 03 Jun 2016, 1 commit
    • cpuidle: Do not access cpuidle_devices when !CONFIG_CPU_IDLE · 9bd616e3
      By Catalin Marinas
      The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is
      enabled. Commit c8cc7d4d ("sched/idle: Reorganize the idle loop")
      removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call() with the
      compiler optimising away __this_cpu_read(cpuidle_devices). However, with
      CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens
      and the kernel fails to link since cpuidle_devices is not defined.
      
      This patch introduces an accessor function for the current CPU cpuidle
      device (returning NULL when !CONFIG_CPU_IDLE) and uses it in
      cpuidle_idle_call().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
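      The accessor pattern the patch introduces looks roughly like the following simplified stand-in (the struct and the CONFIG_CPU_IDLE guard mirror the description above; the per-CPU machinery is elided):

      ```c
      #include <stddef.h>

      /* Sketch of the fix: callers no longer read the per-CPU variable
       * directly; they go through an accessor that compiles to NULL when
       * CPU_IDLE is not configured, so the undefined symbol is never
       * referenced and the link succeeds even with UBSAN enabled. */
      struct cpuidle_device { int cpu; };

      #ifdef CONFIG_CPU_IDLE
      static struct cpuidle_device this_cpu_dev;

      static struct cpuidle_device *cpuidle_get_device(void)
      {
          return &this_cpu_dev;  /* models __this_cpu_read(cpuidle_devices) */
      }
      #else
      static struct cpuidle_device *cpuidle_get_device(void)
      {
          return NULL;           /* !CONFIG_CPU_IDLE: nothing to dereference */
      }
      #endif
      ```

      cpuidle_idle_call() then checks the returned pointer for NULL instead of touching cpuidle_devices itself.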
  15. 28 Aug 2015, 1 commit
  16. 19 May 2015, 1 commit
  17. 15 May 2015, 2 commits
  18. 03 Apr 2015, 1 commit
  19. 06 Mar 2015, 1 commit
  20. 16 Feb 2015, 1 commit
    • PM / sleep: Make it possible to quiesce timers during suspend-to-idle · 124cf911
      By Rafael J. Wysocki
      The efficiency of suspend-to-idle depends on being able to keep CPUs
      in the deepest available idle states for as much time as possible.
      Ideally, they should only be brought out of idle by system wakeup
      interrupts.
      
      However, timer interrupts occurring periodically prevent that from
      happening and it is not practical to chase all of the "misbehaving"
      timers in a whack-a-mole fashion.  A much more effective approach is
      to suspend the local ticks for all CPUs and the entire timekeeping
      along the lines of what is done during full suspend, which also
      helps to keep suspend-to-idle and full suspend reasonably similar.
      
      The idea is to suspend the local tick on each CPU executing
      cpuidle_enter_freeze() and to make the last of them suspend the
      entire timekeeping.  That should prevent timer interrupts from
      triggering until an IO interrupt wakes up one of the CPUs.  It
      needs to be done with interrupts disabled on all of the CPUs,
      though, because otherwise the suspended clocksource might be
      accessed by an interrupt handler which might lead to fatal
      consequences.
      
      Unfortunately, the existing ->enter callbacks provided by cpuidle
      drivers generally cannot be used for implementing that, because some
      of them re-enable interrupts temporarily and some idle entry methods
      cause interrupts to be re-enabled automatically on exit.  Also some
      of these callbacks manipulate local clock event devices of the CPUs
      which really shouldn't be done after suspending their ticks.
      
      To overcome that difficulty, introduce a new cpuidle state callback,
      ->enter_freeze, that will be guaranteed (1) to keep interrupts
      disabled all the time (and return with interrupts disabled) and (2)
      not to touch the CPU timer devices.  Modify cpuidle_enter_freeze() to
      look for the deepest available idle state with ->enter_freeze present
      and to make the CPU execute that callback with suspended tick (and the
      last of the online CPUs to execute it with suspended timekeeping).
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
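      The "last CPU suspends timekeeping" rule can be sketched with a trivial counter model (illustrative only; the real code uses atomics and runs with interrupts disabled on every CPU):

      ```c
      /* Minimal model of the entry rule described above: each CPU suspends
       * its local tick on the way into cpuidle_enter_freeze(), and only the
       * last CPU to arrive also suspends timekeeping. */
      static int ticks_suspended;
      static int timekeeping_suspended;

      static void enter_freeze_model(int online_cpus)
      {
          /* Interrupts are assumed disabled here, so no interrupt handler
           * can touch the suspended clocksource. */
          ticks_suspended++;
          if (ticks_suspended == online_cpus)
              timekeeping_suspended = 1;  /* last one in stops the clocks */
      }
      ```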
  21. 14 Feb 2015, 1 commit
    • PM / sleep: Re-implement suspend-to-idle handling · 38106313
      By Rafael J. Wysocki
      In preparation for adding support for quiescing timers in the final
      stage of suspend-to-idle transitions, rework the freeze_enter()
      function making the system wait on a wakeup event, the freeze_wake()
      function terminating the suspend-to-idle loop and the mechanism by
      which deep idle states are entered during suspend-to-idle.
      
      First of all, introduce a simple state machine for suspend-to-idle
      and make the code in question use it.
      
      Second, prevent freeze_enter() from losing wakeup events due to race
      conditions and ensure that the number of online CPUs won't change
      while it is being executed.  In addition to that, make it force
      all of the CPUs re-enter the idle loop in case they are in idle
      states already (so they can enter deeper idle states if possible).
      
      Next, drop cpuidle_use_deepest_state() and replace use_deepest_state
      checks in cpuidle_select() and cpuidle_reflect() with a single
      suspend-to-idle state check in cpuidle_idle_call().
      
      Finally, introduce cpuidle_enter_freeze() that will simply find the
      deepest idle state available to the given CPU and enter it using
      cpuidle_enter().
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
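      The simple state machine mentioned above can be sketched as follows (state names modeled on the kernel's FREEZE_STATE_* enum; the wait/wake interplay is greatly simplified and the helpers are hypothetical):

      ```c
      /* Rough sketch of the suspend-to-idle state machine the commit
       * introduces. */
      enum freeze_state {
          FREEZE_STATE_NONE,   /* not in suspend-to-idle */
          FREEZE_STATE_ENTER,  /* CPUs idling, waiting for a wakeup event */
          FREEZE_STATE_WAKE,   /* wakeup event seen, leave the loop */
      };

      static enum freeze_state suspend_state = FREEZE_STATE_NONE;

      static void freeze_enter_model(void)
      {
          suspend_state = FREEZE_STATE_ENTER;
      }

      /* freeze_wake() only advances the machine if we are actually inside
       * the loop, which is what closes the wakeup-loss race described
       * above: a wakeup arriving before entry is simply not consumed. */
      static void freeze_wake_model(void)
      {
          if (suspend_state == FREEZE_STATE_ENTER)
              suspend_state = FREEZE_STATE_WAKE;
      }
      ```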
  22. 17 Dec 2014, 1 commit
    • cpuidle / ACPI: remove unused CPUIDLE_FLAG_TIME_INVALID · 62c4cf97
      By Len Brown
      CPUIDLE_FLAG_TIME_INVALID is no longer checked
      by menu or ladder cpuidle governors, so don't
      bother setting or defining it.
      
      It was originally invented to account for the fact that
      acpi_safe_halt() enables interrupts to invoke HLT.
      That would allow interrupt service routines to be included
      in the last_idle duration measurements made in cpuidle_enter_state(),
      potentially returning a duration much larger than reality.
      
      But menu and ladder can gracefully handle erroneously large duration
      intervals without checking for CPUIDLE_FLAG_TIME_INVALID.
      Further, if they don't check CPUIDLE_FLAG_TIME_INVALID, they
      can also benefit from the instances when the duration interval
      is not erroneously large.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  23. 13 Nov 2014, 1 commit
    • cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic · b82b6cca
      By Daniel Lezcano
      The only place where the time is invalid is when the ACPI_CSTATE_FFH entry
      method is not set. Otherwise for all the drivers, the time can be correctly
      measured.
      
      Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the drivers
      for all the states, invert the logic by replacing it with the flag
      CPUIDLE_FLAG_TIME_INVALID. That way the flag needs to be set only for the
      acpi idle driver; remove the former flag from all the drivers and invert
      the logic around this flag in the different governors.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  24. 28 May 2014, 1 commit
  25. 07 May 2014, 1 commit
  26. 01 May 2014, 1 commit
  27. 11 Mar 2014, 3 commits
  28. 30 Oct 2013, 2 commits
  29. 15 Jul 2013, 2 commits
  30. 11 Jun 2013, 1 commit
    • cpuidle: simplify multiple driver support · 82467a5a
      By Daniel Lezcano
      Commit bf4d1b5d (cpuidle: support multiple drivers) introduced support
      for using multiple cpuidle drivers at the same time.  It added a
      couple of new APIs to register the driver per CPU, but that led to
      some unnecessary code complexity related to the kernel config options
      deciding whether or not the multiple driver support is enabled.  The
      code has to work as it did before when the multiple driver support is
      not enabled and the multiple driver support has to be compatible with
      the previously existing API.
      
      Remove the new API, not used by any driver in the tree yet (but
      needed for the HMP cpuidle drivers that will be submitted soon), and
      add a new cpumask pointer to the cpuidle driver structure that will
      point to the mask of CPUs handled by the given driver.  That will
      allow the cpuidle_[un]register_driver() API to be used for the
      multiple driver support along with the cpuidle_[un]register()
      functions added recently.
      
      [rjw: Changelog]
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  31. 23 Apr 2013, 1 commit
    • cpuidle: make a single register function for all · 4c637b21
      By Daniel Lezcano
      The usual scheme to initialize a cpuidle driver on an SMP system is:
      
      	cpuidle_register_driver(drv);
      	for_each_possible_cpu(cpu) {
      		device = &per_cpu(cpuidle_dev, cpu);
      		cpuidle_register_device(device);
      	}
      
      This code is duplicated in each cpuidle driver.
      
      On UP systems, it is done this way:
      
      	cpuidle_register_driver(drv);
      	device = &per_cpu(cpuidle_dev, cpu);
      	cpuidle_register_device(device);
      
      On UP, the macro 'for_each_cpu' does one iteration:
      
      #define for_each_cpu(cpu, mask)                 \
              for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
      
      Hence, the initialization loop is the same for UP as for SMP.
      
      Besides, we saw various bugs, mis-initializations and unchecked return
      codes in the different drivers; the code is duplicated, bugs included.
      After fixing all of those, it appears that the initialization pattern is
      the same for everyone.
      
      Please note, some drivers are doing dev->state_count = drv->state_count. This is
      not necessary because it is done by the cpuidle_enable_device function in the
      cpuidle framework. This holds as long as all the devices have the same states.
      Otherwise, the 'low level' API should be used instead, with driver-specific
      initialization.
      
      Let's add a wrapper function doing this initialization with a cpumask parameter
      for the coupled idle states and use it for all the drivers.
      
      That will save a lot of lines of code, consolidate the code, and allow
      future modifications to be made in a single place. Another benefit is the
      consolidation of the cpuidle_device variable, which now lives in the
      cpuidle framework and is no longer spread across the different
      arch-specific drivers.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
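      The consolidated wrapper can be pictured with a small userspace stand-in (the *_model names and fixed NR_CPUS loop are simplifications of the real cpuidle_register() this commit adds; the per-CPU array replaces per_cpu()):

      ```c
      #define NR_CPUS 4   /* stand-in for for_each_possible_cpu() */

      struct cpuidle_driver_model { const char *name; };
      struct cpuidle_device_model { int cpu; int registered; };

      static struct cpuidle_device_model devices[NR_CPUS];

      static int register_driver_model(struct cpuidle_driver_model *drv)
      {
          return (drv && drv->name) ? 0 : -1;
      }

      static int register_device_model(struct cpuidle_device_model *dev)
      {
          dev->registered = 1;
          return 0;
      }

      /* The wrapper: one call replaces the driver + per-CPU-device loop
       * that used to be copy-pasted (bugs included) into every driver,
       * and checks every return code along the way. */
      static int cpuidle_register_model(struct cpuidle_driver_model *drv)
      {
          int ret = register_driver_model(drv);
          if (ret)
              return ret;
          for (int cpu = 0; cpu < NR_CPUS; cpu++) {
              devices[cpu].cpu = cpu;
              ret = register_device_model(&devices[cpu]);
              if (ret)
                  return ret;
          }
          return 0;
      }
      ```

      A driver then calls the single wrapper instead of open-coding the registration loop, and the device storage stays inside the framework.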