1. 06 Apr 2018, 1 commit
    • cpuidle: Return nohz hint from cpuidle_select() · 45f1ff59
      Authored by Rafael J. Wysocki
      Add a new pointer argument to cpuidle_select() and to the ->select
      cpuidle governor callback so that a boolean value, indicating
      whether or not the tick should be stopped before entering the
      selected state, can be returned from there.
      
      Make the ladder governor ignore that pointer (to preserve its
      current behavior) and make the menu governor return 'false'
      through it (see the sketch after this list) if:
       (1) the idle exit latency is constrained at 0, or
       (2) the selected state is a polling one, or
       (3) the expected idle period duration is within the tick period
           range.
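
      The conditions above can be pictured with a minimal userspace
      sketch; the function name, parameter list and microsecond units
      below are illustrative assumptions, not the actual menu governor
      code:

       #include <stdbool.h>
       #include <stdio.h>

       static bool menu_stop_tick(unsigned int latency_req_us,
                                  bool state_is_polling,
                                  unsigned int predicted_us,
                                  unsigned int tick_period_us)
       {
           if (latency_req_us == 0)            /* (1) exit latency constrained at 0 */
               return false;
           if (state_is_polling)               /* (2) a polling state was selected */
               return false;
           if (predicted_us < tick_period_us)  /* (3) idle fits within the tick range */
               return false;
           return true;                        /* otherwise stopping the tick pays off */
       }

       int main(void)
       {
           printf("%d\n", menu_stop_tick(100, false, 5000, 4000)); /* 1: stop the tick */
           printf("%d\n", menu_stop_tick(100, false, 1000, 4000)); /* 0: keep the tick */
           return 0;
       }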
      
      In addition to that, the correction factor computations in the
      menu governor need to take into account the possibility that the
      tick may not be stopped, to avoid artificially small correction
      factor values.  To that end, add a mechanism to record tick
      wakeups, as suggested by Peter Zijlstra, and use it to modify the
      menu_update() behavior when a tick wakeup occurs.  Namely, if the
      CPU is woken up by the tick and the return value of
      tick_nohz_get_sleep_length() is not within the tick boundary, the
      predicted idle duration was likely too short, so make
      menu_update() try to compensate for that by updating the governor
      statistics as though the CPU was idle for a long time (a sketch
      of that compensation follows below).
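
      A hypothetical sketch of that compensation; the variable names
      and the choice of sleep_length_us as the stand-in for "a long
      time" are assumptions, not the kernel implementation:

       #include <stdbool.h>
       #include <stdio.h>

       unsigned int corrected_measured_us(bool woken_by_tick,
                                          unsigned int sleep_length_us,
                                          unsigned int tick_period_us,
                                          unsigned int measured_us)
       {
           /* A tick wakeup with a sleep length beyond the tick boundary
            * means the prediction was too short, so record a long idle
            * period instead of the short measured one. */
           if (woken_by_tick && sleep_length_us > tick_period_us)
               return sleep_length_us;
           return measured_us;
       }

       int main(void)
       {
           printf("%u\n", corrected_measured_us(true, 50000, 4000, 900)); /* 50000 */
           return 0;
       }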
      
      Since the value returned through the new argument pointer of
      cpuidle_select() is not used by its caller yet, this change by
      itself is not expected to alter the functionality of the code.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  2. 29 Mar 2018, 1 commit
    • PM: cpuidle/suspend: Add s2idle usage and time state attributes · 64bdff69
      Authored by Rafael J. Wysocki
      Add a new attribute group called "s2idle" under the sysfs directory
      of each cpuidle state that supports the ->enter_s2idle callback
      and put two new attributes, "usage" and "time", into that group to
      represent the number of times the given state was requested for
      suspend-to-idle and the total time spent in suspend-to-idle after
      requesting that state, respectively.
      
      That will allow diagnostic information related to suspend-to-idle
      to be collected without enabling advanced debug features and
      analyzing dmesg output.
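
      As a usage example, a small program can read the two new
      attributes from userspace; the path below assumes the standard
      cpuidle sysfs layout (it is not stated in the text above), and
      state0 of cpu0 is only an example:

       #include <stdio.h>

       static void print_attr(const char *path)
       {
           char buf[64];
           FILE *f = fopen(path, "r");

           if (!f)
               return;
           if (fgets(buf, sizeof(buf), f))
               printf("%s: %s", path, buf);
           fclose(f);
       }

       int main(void)
       {
           print_attr("/sys/devices/system/cpu/cpu0/cpuidle/state0/s2idle/usage");
           print_attr("/sys/devices/system/cpu/cpu0/cpuidle/state0/s2idle/time");
           return 0;
       }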
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  3. 09 Nov 2017, 2 commits
  4. 28 Sep 2017, 1 commit
  5. 11 Aug 2017, 1 commit
  6. 15 May 2017, 1 commit
  7. 02 May 2017, 1 commit
  8. 02 Mar 2017, 1 commit
  9. 06 Dec 2016, 1 commit
  10. 29 Nov 2016, 1 commit
  11. 04 Jul 2016, 1 commit
    • cpuidle: Fix last_residency division · dbd1b8ea
      Authored by Shreyas B. Prabhu
      Snooze is a poll idle state on powernv and pseries platforms.
      Snooze has a timeout so that, if a CPU stays in snooze for longer
      than the target residency of the next available idle state, it
      exits, giving the cpuidle governor a chance to re-evaluate and
      promote the CPU to a deeper idle state.  Therefore, whenever
      snooze exits due to this timeout, its last_residency will be the
      target_residency of the next deeper state.
      
      Commit e93e59ce "cpuidle: Replace ktime_get() with local_clock()"
      changed the math around the last_residency calculation.
      Specifically, when converting the last_residency value from nano-
      to microseconds, it performs a right shift by 10 bits instead of
      dividing by 1000.  Because of that, in snooze timeout exit
      scenarios the calculated last_residency is roughly 2.3% less than
      the target_residency of the next available state.  This pattern
      is picked up by get_typical_interval() in the menu governor, so
      expected_interval in menu_select() is frequently less than the
      target_residency of any state other than snooze.
      
      Due to this, snooze is entered at a higher rate, which hurts
      single-thread performance.
      
      Fix this by using more precise division via ktime_us_delta().
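
      A standalone illustration of that ~2.3% error: converting a
      nanosecond residency to microseconds with a 10-bit shift
      underestimates the exact division that ktime_us_delta()
      effectively performs.  The 100 us target residency below is only
      an example value:

       #include <stdio.h>

       int main(void)
       {
           long long residency_ns = 100 * 1000;       /* 100 us in nanoseconds */
           long long shifted_us = residency_ns >> 10; /* old: divide by 1024 -> 97 us */
           long long exact_us = residency_ns / 1000;  /* fix: true division -> 100 us */

           printf("shift: %lld us, exact: %lld us\n", shifted_us, exact_us);
           return 0;
       }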
      
      Fixes: e93e59ce "cpuidle: Replace ktime_get() with local_clock()"
      Reported-by: Anton Blanchard <anton@samba.org>
      Bisected-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  12. 18 May 2016, 1 commit
    • cpuidle: Fix cpuidle_state_is_coupled() argument in cpuidle_enter() · e7387da5
      Authored by Daniel Lezcano
      Commit 0b89e9aa (cpuidle: delay enabling interrupts until all
      coupled CPUs leave idle) rightfully fixed a regression by letting
      the coupled idle state framework handle local interrupt enabling
      when the CPU is exiting an idle state.
      
      The current code checks whether the idle state is coupled and, if
      so, lets the coupled code enable interrupts, so that it can
      decrement the ready-count before handling the interrupt.  This
      mechanism prevents the other CPUs from waiting for a CPU that is
      handling interrupts.
      
      But the check is done against the state index returned by the
      back-end driver's ->enter function, which can be different from
      the initial index passed as a parameter to the
      cpuidle_enter_state() function.
      
       entered_state = target_state->enter(dev, drv, index);
      
       [ ... ]
      
       if (!cpuidle_state_is_coupled(drv, entered_state))
      	local_irq_enable();
      
       [ ... ]
      
      If 'index' refers to a coupled idle state but 'entered_state' is
      *not* coupled, then the interrupts are enabled again.  All CPUs
      blocked on the sync barrier may busy-loop longer if the CPU has
      interrupts to handle before decrementing the ready-count.  That
      consumes more energy than it saves.
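
      A self-contained sketch of the corrected check, as implied by the
      subject and changelog: the coupled test should use the requested
      'index', not the 'entered_state' returned by ->enter().  The stub
      below only mimics the shape of the kernel code:

       #include <stdbool.h>
       #include <stdio.h>

       static bool state_is_coupled(int state)
       {
           return state >= 2;          /* pretend states 2 and up are coupled */
       }

       int main(void)
       {
           int index = 2;              /* coupled state requested */
           int entered_state = 1;      /* driver fell back to a shallower state */

           /* Old check: looks at entered_state and re-enables IRQs too early. */
           printf("old: IRQs %s\n",
                  state_is_coupled(entered_state) ? "kept off" : "enabled");
           /* New check: looks at index and leaves IRQ enabling to the
            * coupled code, which decrements the ready-count first. */
           printf("new: IRQs %s\n",
                  state_is_coupled(index) ? "kept off" : "enabled");
           return 0;
       }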
      
      Fixes: 0b89e9aa (cpuidle: delay enabling interrupts until all coupled CPUs leave idle)
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: 3.15+ <stable@vger.kernel.org> # 3.15+
      [ rjw: Subject & changelog ]
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  13. 26 Apr 2016, 1 commit
    • cpuidle: Replace ktime_get() with local_clock() · e93e59ce
      Authored by Daniel Lezcano
      ktime_get() can have a non-negligible overhead; use local_clock()
      instead.
      
      In order to compare ktime_get() and local_clock(), a quick hack
      was added to trigger, via debugfs, 10000 calls to ktime_get() and
      local_clock() and to measure the elapsed time.

      Then the average, minimum and maximum values were computed for
      each call.

      From userspace, the test above was run 100 times every 2 seconds.

      So, ktime_get() and local_clock() were called 1000000 times in
      total.
      
      The results are:
      
      ktime_get():
      ============
       * average: 101 ns (stddev: 27.4)
       * maximum: 38313 ns
       * minimum: 65 ns
      
      local_clock():
      ==============
       * average: 60 ns (stddev: 9.8)
       * maximum: 13487 ns
       * minimum: 46 ns
      
      The local_clock() is faster and more stable.
      
      Even if it is a drop in the ocean, replacing ktime_get() with
      local_clock() saves 80 ns per idle cycle (entry + exit).  And in
      some circumstances, especially when several CPUs are racing for
      the clock access, the savings can reach tens of microseconds.
      
      The idle duration resulting from the time difference is converted
      from nanoseconds to microseconds.  This could be done with an
      integer division (div 1000), which is an expensive operation, or
      with a 10-bit shift (div 1024), which is fast but imprecise.
      
      The following table gives some results at the limits.
      
       ------------------------------------------
      |   nsec   |   div(1000)   |   div(1024)   |
       ------------------------------------------
      |   1e3    |        1 usec |      976 nsec |
       ------------------------------------------
      |   1e6    |     1000 usec |      976 usec |
       ------------------------------------------
      |   1e9    |  1000000 usec |   976562 usec |
       ------------------------------------------
      
      There is a linear deviation of 2.34%.  This loss of precision is
      acceptable in the context of the resulting difference, which is
      used for statistics.  Those statistics are processed to estimate
      the duration of the next idle period, which ends up in the idle
      state selection.  The selection criteria take the next duration
      into account based on large intervals, represented by the idle
      states' target residencies.

      The 2^10 division is enough because the deviation from the exact
      1e3 division is lost in all the approximations made for the next
      idle duration computation.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      [ rjw: Subject ]
      Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
  14. 09 Apr 2016, 1 commit
  15. 22 Jan 2016, 1 commit
  16. 19 Jan 2016, 1 commit
  17. 28 Aug 2015, 1 commit
  18. 21 Jul 2015, 1 commit
  19. 10 Jul 2015, 1 commit
  20. 30 May 2015, 1 commit
    • cpuidle: Do not use CPUIDLE_DRIVER_STATE_START in cpuidle.c · 7d51d979
      Authored by Rafael J. Wysocki
      The CPUIDLE_DRIVER_STATE_START symbol is defined as 1 only if
      CONFIG_ARCH_HAS_CPU_RELAX is set, otherwise it is defined as 0.
      However, if CONFIG_ARCH_HAS_CPU_RELAX is set, the first (index 0)
      entry in the cpuidle driver's table of states is overwritten with
      the default "poll" entry by the core.  The "state" defined by the
      "poll" entry doesn't provide ->enter_dead and ->enter_freeze
      callbacks and its exit_latency is 0.
      
      For this reason, it is not necessary to use CPUIDLE_DRIVER_STATE_START
      in cpuidle_play_dead() (->enter_dead is NULL, so the "poll state"
      will be skipped by the loop).
      
      It also is arguably not useful to return states with exit_latency
      equal to 0 from find_deepest_state(), so the function can be
      modified to start the loop from index 0, and the "poll state"
      will be skipped by it as a result of the check against
      latency_req (sketched below).
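
      A userspace sketch of the resulting loop shape: the iteration
      starts at index 0 and the latency check skips the "poll" entry,
      whose exit_latency is 0.  The struct layout and the exact skip
      condition are illustrative assumptions based on the description
      above:

       #include <stdio.h>

       struct state { const char *name; unsigned int exit_latency; int disabled; };

       static int find_deepest(const struct state *s, int count)
       {
           unsigned int latency_req = 0;
           int i, ret = -1;

           for (i = 0; i < count; i++) {   /* index 0 included, no START offset */
               if (s[i].disabled || s[i].exit_latency <= latency_req)
                   continue;               /* the poll entry (latency 0) never passes */
               latency_req = s[i].exit_latency;
               ret = i;
           }
           return ret;
       }

       int main(void)
       {
           struct state table[] = {
               { "poll", 0, 0 },
               { "C1",   2, 0 },
               { "C6", 100, 0 },
           };

           printf("deepest: %d\n", find_deepest(table, 3));    /* prints 2 */
           return 0;
       }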
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
  21. 19 May 2015, 1 commit
  22. 15 May 2015, 3 commits
  23. 10 May 2015, 1 commit
  24. 05 May 2015, 1 commit
  25. 29 Apr 2015, 1 commit
  26. 03 Apr 2015, 1 commit
  27. 06 Mar 2015, 1 commit
  28. 01 Mar 2015, 2 commits
  29. 16 Feb 2015, 1 commit
    • PM / sleep: Make it possible to quiesce timers during suspend-to-idle · 124cf911
      Authored by Rafael J. Wysocki
      The efficiency of suspend-to-idle depends on being able to keep CPUs
      in the deepest available idle states for as much time as possible.
      Ideally, they should only be brought out of idle by system wakeup
      interrupts.
      
      However, timer interrupts occurring periodically prevent that from
      happening and it is not practical to chase all of the "misbehaving"
      timers in a whack-a-mole fashion.  A much more effective approach is
      to suspend the local ticks for all CPUs and the entire timekeeping
      along the lines of what is done during full suspend, which also
      helps to keep suspend-to-idle and full suspend reasonably similar.
      
      The idea is to suspend the local tick on each CPU executing
      cpuidle_enter_freeze() and to make the last of them suspend the
      entire timekeeping.  That should prevent timer interrupts from
      triggering until an IO interrupt wakes up one of the CPUs.  It
      needs to be done with interrupts disabled on all of the CPUs,
      though, because otherwise the suspended clocksource might be
      accessed by an interrupt handler which might lead to fatal
      consequences.
      
      Unfortunately, the existing ->enter callbacks provided by cpuidle
      drivers generally cannot be used for implementing that, because some
      of them re-enable interrupts temporarily and some idle entry methods
      cause interrupts to be re-enabled automatically on exit.  Also some
      of these callbacks manipulate local clock event devices of the CPUs
      which really shouldn't be done after suspending their ticks.
      
      To overcome that difficulty, introduce a new cpuidle state callback,
      ->enter_freeze, that will be guaranteed (1) to keep interrupts
      disabled all the time (and return with interrupts disabled) and (2)
      not to touch the CPU timer devices.  Modify cpuidle_enter_freeze() to
      look for the deepest available idle state with ->enter_freeze present
      and to make the CPU execute that callback with suspended tick (and the
      last of the online CPUs to execute it with suspended timekeeping).
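
      The selection rule described above can be pictured with a small
      userspace sketch: among the states that provide ->enter_freeze,
      pick the deepest one and invoke that callback.  The types, names
      and latency check below are stand-ins, not the kernel structures,
      and the tick/timekeeping suspension is only noted in comments:

       #include <stdio.h>

       struct freeze_state {
           const char *name;
           unsigned int exit_latency;
           void (*enter_freeze)(void);
       };

       static void c6_freeze(void) { puts("entering C6, IRQs stay disabled"); }

       static int deepest_freeze_state(const struct freeze_state *s, int count)
       {
           unsigned int latency = 0;
           int i, ret = -1;

           for (i = 0; i < count; i++) {
               if (!s[i].enter_freeze || s[i].exit_latency <= latency)
                   continue;           /* must provide ->enter_freeze */
               latency = s[i].exit_latency;
               ret = i;
           }
           return ret;
       }

       int main(void)
       {
           struct freeze_state table[] = {
               { "poll", 0, NULL },
               { "C1",   2, NULL },    /* no ->enter_freeze: not eligible */
               { "C6", 100, c6_freeze },
           };
           int idx = deepest_freeze_state(table, 3);

           /* In the kernel, the tick is suspended first and the last CPU
            * also suspends timekeeping before this callback runs. */
           if (idx >= 0)
               table[idx].enter_freeze();
           return 0;
       }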
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  30. 14 Feb 2015, 1 commit
    • PM / sleep: Re-implement suspend-to-idle handling · 38106313
      Authored by Rafael J. Wysocki
      In preparation for adding support for quiescing timers in the
      final stage of suspend-to-idle transitions, rework the
      freeze_enter() function that makes the system wait for a wakeup
      event, the freeze_wake() function that terminates the
      suspend-to-idle loop, and the mechanism by which deep idle states
      are entered during suspend-to-idle.
      
      First of all, introduce a simple state machine for suspend-to-idle
      and make the code in question use it.
      
      Second, prevent freeze_enter() from losing wakeup events due to race
      conditions and ensure that the number of online CPUs won't change
      while it is being executed.  In addition to that, make it force
      all of the CPUs re-enter the idle loop in case they are in idle
      states already (so they can enter deeper idle states if possible).
      
      Next, drop cpuidle_use_deepest_state() and replace use_deepest_state
      checks in cpuidle_select() and cpuidle_reflect() with a single
      suspend-to-idle state check in cpuidle_idle_call().
      
      Finally, introduce cpuidle_enter_freeze() that will simply find the
      deepest idle state available to the given CPU and enter it using
      cpuidle_enter().
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  31. 24 Sep 2014, 1 commit
    • sched: Let the scheduler see CPU idle states · 442bf3aa
      Authored by Daniel Lezcano
      When the CPU enters idle, it stores the cpuidle state pointer in
      its struct rq instance, which in turn could be used to make a
      better decision when balancing tasks.

      As soon as the CPU exits its idle state, the struct rq reference
      is cleared.
      
      There are a couple of situations where the idle state pointer could be changed
      while it is being consulted:
      
      1. For x86/acpi with dynamic C-states, when a laptop switches
         from battery to AC, that could result in the deeper idle state
         being removed.  The ACPI driver triggers:
      	'acpi_processor_cst_has_changed'
      		'cpuidle_pause_and_lock'
      			'cpuidle_uninstall_idle_handler'
      				'kick_all_cpus_sync'.

      All CPUs will exit their idle state and the pointed-to object
      will be set to NULL.
      
      2. The cpuidle driver is unloaded.  Logically that could happen,
      but not in practice, because the drivers are always compiled in
      and 95% of them are not coded to unregister themselves.  In any
      case, the unloading code must call 'cpuidle_unregister_device',
      which calls 'cpuidle_pause_and_lock', leading to
      'kick_all_cpus_sync' as mentioned above.
      
      A race can happen if we use the pointer and then one of these two scenarios
      occurs at the same moment.
      
      In order to be safe, the idle state pointer stored in the rq must
      be used inside an rcu_read_lock() section, where we are protected
      by the rcu_barrier() in the cpuidle_uninstall_idle_handler()
      function.  The idle_get_state() and idle_put_state() accessors
      should be used to that effect (see the sketch below).
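
      A kernel-context sketch of that access pattern (not a standalone
      program); the surrounding scheduler code is omitted and the use
      of exit_latency is only an example of a balancing decision:

       rcu_read_lock();
       idle = idle_get_state(rq);
       if (idle)
               /* e.g. prefer waking a CPU in a shallower idle state */
               exit_latency = idle->exit_latency;
       rcu_read_unlock();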
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linaro-kernel@lists.linaro.org
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  32. 19 Sep 2014, 1 commit
  33. 09 Jul 2014, 1 commit
  34. 07 May 2014, 1 commit
  35. 01 May 2014, 1 commit
  36. 12 Mar 2014, 1 commit
    • cpuidle: delay enabling interrupts until all coupled CPUs leave idle · 0b89e9aa
      Authored by Paul Burton
      As described by a comment at the end of cpuidle_enter_state_coupled(),
      it can be inefficient for coupled idle states to return with IRQs
      enabled, since they may proceed to service an interrupt instead
      of clearing the coupled idle state.  Until they have finished and
      cleared the idle state, all CPUs coupled with them will spin
      rather than being able to enter a safe idle state.
      
      Commits e1689795 "cpuidle: Add common time keeping and irq
      enabling" and 554c06ba "cpuidle: remove en_core_tk_irqen flag"
      led to cpuidle_enter_state() enabling interrupts for all idle
      states, including coupled ones, making this inefficiency
      unavoidable by drivers and the local_irq_enable() near the end of
      cpuidle_enter_state_coupled() redundant.  This patch avoids
      enabling interrupts in cpuidle_enter_state() after a coupled
      state has been entered, allowing them to remain disabled until
      all coupled CPUs have exited the idle state and
      cpuidle_enter_state_coupled() re-enables them (see the fragment
      below).
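
      The resulting shape of the generic exit path can be pictured as
      follows; the fragment matches the code quoted in the 2016 fix
      above, and the closing comment paraphrases this changelog rather
      than showing the coupled code itself:

       /* cpuidle_enter_state(), after this change */
       entered_state = target_state->enter(dev, drv, index);

       if (!cpuidle_state_is_coupled(drv, entered_state))
               local_irq_enable();

       /* for coupled states, cpuidle_enter_state_coupled() re-enables
        * IRQs itself once the ready-count has been decremented */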
      
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>