1. 15 May 2015, 1 commit
  2. 10 May 2015, 1 commit
  3. 05 May 2015, 1 commit
  4. 29 Apr 2015, 1 commit
  5. 17 Apr 2015, 1 commit
  6. 03 Apr 2015, 2 commits
  7. 24 Mar 2015, 6 commits
  8. 18 Mar 2015, 1 commit
  9. 14 Mar 2015, 2 commits
    • cpuidle: mvebu: Update cpuidle thresholds for Armada XP SOCs · ce6031c8
      Authored by Sebastien Rannou
      Originally, the thresholds used in the cpuidle driver for Armada SOCs
      were chosen provisionally, leaving room for improvement.

      This commit updates the thresholds for the Armada XP SOCs with values
      that positively impact performance:
      
                                      without patch  with patch   vendor kernel
       - iperf localhost (gbit/sec)   ~3.7           ~6.4         ~5.4
       - ioping tmpfs (iops)          ~163k          ~206k        ~179k
       - ioping tmpfs (mib/s)         ~636           ~805         ~699
      
      The idle power consumption is negatively impacted (proportionally less
      than the performance gain), and we are still performing better than
      the vendor kernel here:
      
                                      without patch   with patch  vendor kernel
       - power consumption idle (W)   ~2.4            ~3.2        ~4.4
       - power consumption busy (W)   ~8.6            ~8.3        ~8.6
      
      There is still room for improvement regarding the values of these
      thresholds; they were chosen to mimic the vendor kernel.
      
      This patch only impacts Armada XP SOCs and was tested on Online Labs
      C1 boards. A similar approach can be taken to improve the performance
      of the Armada 370 and Armada 38x SOCs.
      
      Thanks a lot to Thomas Petazzoni, Gregory Clement and Willy Tarreau
      for the discussions and tips around this topic.
      Signed-off-by: Sebastien Rannou <mxs@sbrk.org>
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      ce6031c8
    • cpuidle: mvebu: Fix the CPU PM notifier usage · 43b68879
      Authored by Gregory CLEMENT
      As stated in kernel/cpu_pm.c, "Platform is responsible for ensuring
      that cpu_pm_enter is not called twice on the same CPU before
      cpu_pm_exit is called.". In the current code, if the call to
      mvebu_v7_cpu_suspend() fails, cpu_pm_exit() is never called even
      though cpu_pm_enter() was called just before.
      
      This patch moves the cpu_pm_exit() in order to balance the
      cpu_pm_enter() calls.
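      The fix can be illustrated with a reduced user-space model of the
      suspend path (the stub names below are hypothetical stand-ins for the
      mvebu code, not the kernel API):

```c
#include <assert.h>

/* Counters stand in for the real CPU PM notifier chain. */
static int pm_depth;

static void cpu_pm_enter(void) { pm_depth++; }
static void cpu_pm_exit(void)  { pm_depth--; }

/* Hypothetical stand-in for mvebu_v7_cpu_suspend(); returns < 0 on failure. */
static int mvebu_v7_cpu_suspend_stub(int fail)
{
    return fail ? -1 : 0;
}

/* Before the patch: cpu_pm_exit() was skipped when the suspend call
 * failed, leaving the enter/exit calls unbalanced. */
static int enter_idle_buggy(int fail)
{
    int ret;

    cpu_pm_enter();
    ret = mvebu_v7_cpu_suspend_stub(fail);
    if (ret)
        return ret;           /* bug: exit never called on this path */
    cpu_pm_exit();
    return 0;
}

/* After the patch: cpu_pm_exit() runs on both paths, so every
 * cpu_pm_enter() is balanced before the next one. */
static int enter_idle_fixed(int fail)
{
    int ret;

    cpu_pm_enter();
    ret = mvebu_v7_cpu_suspend_stub(fail);
    cpu_pm_exit();
    return ret;
}
```

      With the fix, the enter/exit calls stay balanced even when the
      suspend call fails.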
      
      Cc: stable@vger.kernel.org
      Reported-by: Fulvio Benini <fbf@libero.it>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      43b68879
  10. 06 Mar 2015, 1 commit
  11. 05 Mar 2015, 1 commit
  12. 01 Mar 2015, 2 commits
  13. 20 Feb 2015, 1 commit
  14. 18 Feb 2015, 1 commit
  15. 16 Feb 2015, 1 commit
    • PM / sleep: Make it possible to quiesce timers during suspend-to-idle · 124cf911
      Authored by Rafael J. Wysocki
      The efficiency of suspend-to-idle depends on being able to keep CPUs
      in the deepest available idle states for as much time as possible.
      Ideally, they should only be brought out of idle by system wakeup
      interrupts.
      
      However, timer interrupts occurring periodically prevent that from
      happening and it is not practical to chase all of the "misbehaving"
      timers in a whack-a-mole fashion.  A much more effective approach is
      to suspend the local ticks for all CPUs and the entire timekeeping
      along the lines of what is done during full suspend, which also
      helps to keep suspend-to-idle and full suspend reasonably similar.
      
      The idea is to suspend the local tick on each CPU executing
      cpuidle_enter_freeze() and to make the last of them suspend the
      entire timekeeping.  That should prevent timer interrupts from
      triggering until an IO interrupt wakes up one of the CPUs.  It
      needs to be done with interrupts disabled on all of the CPUs,
      though, because otherwise the suspended clocksource might be
      accessed by an interrupt handler which might lead to fatal
      consequences.
      
      Unfortunately, the existing ->enter callbacks provided by cpuidle
      drivers generally cannot be used for implementing that, because some
      of them re-enable interrupts temporarily and some idle entry methods
      cause interrupts to be re-enabled automatically on exit.  Also some
      of these callbacks manipulate local clock event devices of the CPUs
      which really shouldn't be done after suspending their ticks.
      
      To overcome that difficulty, introduce a new cpuidle state callback,
      ->enter_freeze, that will be guaranteed (1) to keep interrupts
      disabled all the time (and return with interrupts disabled) and (2)
      not to touch the CPU timer devices.  Modify cpuidle_enter_freeze() to
      look for the deepest available idle state with ->enter_freeze present
      and to make the CPU execute that callback with suspended tick (and the
      last of the online CPUs to execute it with suspended timekeeping).
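      The "last CPU in suspends timekeeping" scheme can be sketched with a
      reduced user-space model (the counter and function names are
      illustrative, not the kernel implementation):

```c
#include <assert.h>

/* Model: each CPU entering the freeze path bumps a counter (with
 * interrupts conceptually disabled); only the last one to arrive
 * suspends timekeeping, and the first one out resumes it. */
static int cpus_in_freeze;
static int timekeeping_suspended;

static void enter_freeze_on_cpu(int num_online_cpus)
{
    cpus_in_freeze++;
    if (cpus_in_freeze == num_online_cpus)
        timekeeping_suspended = 1;   /* last CPU in */
}

static void exit_freeze_on_cpu(void)
{
    if (timekeeping_suspended)
        timekeeping_suspended = 0;   /* first CPU out resumes it */
    cpus_in_freeze--;
}
```

      In the real kernel the bookkeeping must be done with interrupts
      disabled, for the reason given above.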
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      124cf911
  16. 14 Feb 2015, 1 commit
    • PM / sleep: Re-implement suspend-to-idle handling · 38106313
      Authored by Rafael J. Wysocki
      In preparation for adding support for quiescing timers in the final
      stage of suspend-to-idle transitions, rework the freeze_enter()
      function making the system wait on a wakeup event, the freeze_wake()
      function terminating the suspend-to-idle loop and the mechanism by
      which deep idle states are entered during suspend-to-idle.
      
      First of all, introduce a simple state machine for suspend-to-idle
      and make the code in question use it.
      
      Second, prevent freeze_enter() from losing wakeup events due to race
      conditions and ensure that the number of online CPUs won't change
      while it is being executed.  In addition to that, make it force
      all of the CPUs re-enter the idle loop in case they are in idle
      states already (so they can enter deeper idle states if possible).
      
      Next, drop cpuidle_use_deepest_state() and replace use_deepest_state
      checks in cpuidle_select() and cpuidle_reflect() with a single
      suspend-to-idle state check in cpuidle_idle_call().
      
      Finally, introduce cpuidle_enter_freeze() that will simply find the
      deepest idle state available to the given CPU and enter it using
      cpuidle_enter().
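      The deepest-state lookup that cpuidle_enter_freeze() performs can be
      sketched as follows (a simplified model; the real code also checks
      per-state usability conditions):

```c
#include <assert.h>

struct idle_state {
    int disabled;       /* state unusable on this CPU */
    int exit_latency;   /* deeper states have higher latency */
};

/* Model of "find the deepest idle state available to the given CPU":
 * walk from the deepest state down and return the first usable one.
 * Index 0 is the shallowest state, as in the cpuidle state tables. */
static int find_deepest_state(const struct idle_state *states, int count)
{
    int i;

    for (i = count - 1; i >= 0; i--)
        if (!states[i].disabled)
            return i;
    return -1;  /* no usable state */
}
```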
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      38106313
  17. 30 Jan 2015, 1 commit
    • cpuidle: exynos: add coupled cpuidle support for exynos4210 · 712eddf7
      Authored by Bartlomiej Zolnierkiewicz
      The following patch adds coupled cpuidle support for Exynos4210 to
      an existing cpuidle-exynos driver.  As a result it enables AFTR mode
      to be used by default on Exynos4210 without the need to hot unplug
      CPU1 first.
      
      The patch is heavily based on earlier cpuidle-exynos4210 driver from
      Daniel Lezcano:
      
      http://www.spinics.net/lists/linux-samsung-soc/msg28134.html
      
      Changes from Daniel's code include:
      - porting code to current kernels
      - fixing it to work on my setup (by using S5P_INFORM register
        instead of S5P_VA_SYSRAM one on Revision 1.1 and retrying poking
        CPU1 out of the BOOT ROM if necessary)
      - fixing rare lockup caused by waiting for CPU1 to get stuck in
        the BOOT ROM (CPU hotplug code in arch/arm/mach-exynos/platsmp.c
        doesn't require this and works fine)
      - moving Exynos specific code to arch/arm/mach-exynos/pm.c
      - using cpu_boot_reg_base() helper instead of BOOT_VECTOR macro
      - using exynos_cpu_*() helpers instead of accessing registers
        directly
      - using arch_send_wakeup_ipi_mask() instead of dsb_sev()
        (this matches CPU hotplug code in arch/arm/mach-exynos/platsmp.c)
      - integrating separate exynos4210-cpuidle driver into existing
        exynos-cpuidle one
      
      Cc: Colin Cross <ccross@google.com>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Tomasz Figa <tomasz.figa@gmail.com>
      Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
      Signed-off-by: Kukjin Kim <kgene@kernel.org>
      712eddf7
  18. 27 Jan 2015, 1 commit
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Authored by Lorenzo Pieralisi
      The ARM64_CPU_SUSPEND config option was introduced to make the code
      providing context save/restore selectable only on platforms requiring
      power management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires that code configured
      by ARM64_CPU_SUSPEND (context save/restore) should be compiled in
      in order to enable the CPU idle driver to rely on CPU operations
      carrying out context save/restore.
      
      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
      therefore forced to select ARM64_CPU_SUSPEND even when its own
      dependencies (i.e. PM_SLEEP) are not satisfied, which is not a clean
      way of handling kernel configuration options.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
      in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
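      The resulting arrangement can be sketched as a Kconfig fragment (a
      simplified sketch of the dependency the commit describes, not the
      exact arm64 Kconfig text):

```
config CPU_PM
	def_bool y
	depends on SUSPEND || CPU_IDLE
```

      Context save/restore code then compiles in whenever CPU_PM is set,
      i.e. whenever SUSPEND or CPU_IDLE is configured.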
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      af3cfdbf
  19. 23 Jan 2015, 1 commit
    • drivers: cpuidle: Don't initialize big.LITTLE driver if MCPM is unavailable · 194fe6f2
      Authored by Sudeep Holla
      If the big.LITTLE driver is initialized even when MCPM is unavailable,
      we get the warning below the first time a CPU tries to enter deeper
      C-states.
      
      ------------[ cut here ]------------
      WARNING: CPU: 4 PID: 0 at kernel/arch/arm/common/mcpm_entry.c:130 mcpm_cpu_suspend+0x6d/0x74()
      Modules linked in:
      CPU: 4 PID: 0 Comm: swapper/4 Not tainted 3.19.0-rc3-00007-gaf5a2cb1ad5c-dirty #11
      Hardware name: ARM-Versatile Express
      [<c0013fa5>] (unwind_backtrace) from [<c001084d>] (show_stack+0x11/0x14)
      [<c001084d>] (show_stack) from [<c04fe7f1>] (dump_stack+0x6d/0x78)
      [<c04fe7f1>] (dump_stack) from [<c0020645>] (warn_slowpath_common+0x69/0x90)
      [<c0020645>] (warn_slowpath_common) from [<c00206db>] (warn_slowpath_null+0x17/0x1c)
      [<c00206db>] (warn_slowpath_null) from [<c001cbdd>] (mcpm_cpu_suspend+0x6d/0x74)
      [<c001cbdd>] (mcpm_cpu_suspend) from [<c03c6919>] (bl_powerdown_finisher+0x21/0x24)
      [<c03c6919>] (bl_powerdown_finisher) from [<c001218d>] (cpu_suspend_abort+0x1/0x14)
      [<c001218d>] (cpu_suspend_abort) from [<00000000>] (  (null))
      ---[ end trace d098e3fd00000008 ]---
      
      This patch fixes the issue by checking for the availability of MCPM
      before initializing the big.LITTLE cpuidle driver.
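      The shape of the fix can be modelled in plain C (the names below are
      illustrative stand-ins for mcpm_is_available() and the driver init
      path, not the actual driver code):

```c
#include <assert.h>
#include <errno.h>

static int mcpm_available;     /* stand-in for mcpm_is_available() */
static int driver_registered;

/* Model of the fix: bail out of driver init with -ENODEV when the
 * MCPM layer is unavailable, instead of registering a driver whose
 * deep C-states would warn at first use. */
static int bl_idle_init_model(void)
{
    if (!mcpm_available)
        return -ENODEV;        /* abort before registration */
    driver_registered = 1;     /* cpuidle registration would go here */
    return 0;
}
```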
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      194fe6f2
  20. 17 Dec 2014, 2 commits
    • cpuidle: ladder: Better idle duration measurement without using CPUIDLE_FLAG_TIME_INVALID · b73026b9
      Authored by Len Brown
      When the ladder governor sees the CPUIDLE_FLAG_TIME_INVALID flag,
      it unconditionally causes a state promotion by setting last_residency
      to a number higher than the state's promotion_time:
      
      last_residency = last_state->threshold.promotion_time + 1
      
      It does this for fear that cpuidle_get_last_residency()
      will be inaccurate, because cpuidle_enter_state() invoked
      a state with CPUIDLE_FLAG_TIME_INVALID.
      
      But the only state with CPUIDLE_FLAG_TIME_INVALID is
      acpi_safe_halt(), which may return well after its actual
      idle duration because it enables interrupts, so cpuidle_enter_state()
      also measures interrupt service time.
      
      So what?  In ladder, a huge invalid last_residency has exactly
      the same effect as the current code -- it unconditionally
      causes a state promotion.
      
      In the case where the idle residency plus measured interrupt
      handling time is less than the state's demotion_time -- we should
      use that timestamp to give ladder a chance to demote, rather than
      unconditionally promoting.
      
      This can be done by simply ignoring the CPUIDLE_FLAG_TIME_INVALID,
      and using the "invalid" time, as it is either equal to what we are
      doing today, or better.
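      The ladder promotion/demotion decision can be modelled as follows
      (a simplified sketch with illustrative names; the real governor keeps
      these thresholds per state):

```c
#include <assert.h>

enum ladder_decision { DEMOTE = -1, STAY = 0, PROMOTE = 1 };

/* Reduced model of the ladder decision discussed above: promote when
 * the last residency exceeds promotion_time, demote when it falls
 * below demotion_time, otherwise stay at the current state. */
static enum ladder_decision ladder_decide(int last_residency,
                                          int promotion_time,
                                          int demotion_time)
{
    if (last_residency > promotion_time)
        return PROMOTE;
    if (last_residency < demotion_time)
        return DEMOTE;
    return STAY;
}
```

      A residency forced to promotion_time + 1 (the old behaviour) always
      promotes, while a small measured residency gives ladder a chance to
      demote, which is the point of the patch.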
      Signed-off-by: Len Brown <len.brown@intel.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      b73026b9
    • cpuidle: menu: Better idle duration measurement without using CPUIDLE_FLAG_TIME_INVALID · 4108b3d9
      Authored by Len Brown
      When menu sees CPUIDLE_FLAG_TIME_INVALID, it ignores its timestamps,
      and assumes that idle lasted as long as the time till next predicted
      timer expiration.
      
      But if an interrupt was seen and serviced before that duration,
      it would actually be more accurate to use the measured time
      rather than rounding up to the next predicted timer expiration.
      
      And if an interrupt is seen and serviced such that the measured time
      exceeds the time till next predicted timer expiration, then
      truncating to that expiration is the right thing to do --
      since we can never stay idle past that timer expiration.
      
      So the code can do a better job without
      checking for CPUIDLE_FLAG_TIME_INVALID.
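      The resulting rule is a simple clamp, sketched here as a standalone
      helper (names are illustrative):

```c
#include <assert.h>

/* The rule described above: use the measured idle time, but never
 * report more than the time until the next predicted timer expiry,
 * since the CPU can never stay idle past that point. */
static int menu_residency(int measured_us, int next_timer_us)
{
    return measured_us < next_timer_us ? measured_us : next_timer_us;
}
```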
      Signed-off-by: Len Brown <len.brown@intel.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Reviewed-by: Tuukka Tikkanen <tuukka.tikkanen@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      4108b3d9
  21. 15 Dec 2014, 2 commits
    • powernv/cpuidle: Redesign idle states management · 7cba160a
      Authored by Shreyas B. Prabhu
      Deep idle states like sleep and winkle are per core idle states. A core
      enters these states only when all the threads enter either the
      particular idle state or a deeper one. There are tasks like fastsleep
      hardware bug workaround and hypervisor core state save which have to be
      done only by the last thread of the core entering deep idle state and
      similarly tasks like timebase resync and hypervisor core register
      restore that have to be done only by the first thread waking up from
      these states.
      
      The current idle state management does not have a way to distinguish the
      first/last thread of the core waking/entering idle states. Tasks like
      timebase resync are done for all the threads. This is not only
      suboptimal, but can also cause functional issues when subcores and
      KVM are involved.
      
      This patch adds the necessary infrastructure to track idle states of
      threads in a per-core structure. It uses this info to perform tasks like
      fastsleep workaround and timebase resync only once per core.
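      The per-core tracking can be modelled with a thread bitmask (a
      simplified user-space sketch; the real code keeps per-core state
      updated atomically):

```c
#include <assert.h>

#define THREADS_PER_CORE 8
#define CORE_IDLE_MASK ((1u << THREADS_PER_CORE) - 1)

static unsigned int core_idle_bits;   /* bit set = thread idle */

/* Returns 1 iff this thread is the last of the core to enter idle,
 * i.e. the one that must run the once-per-core entry tasks. */
static int thread_enters_idle(int thread)
{
    core_idle_bits |= 1u << thread;
    return core_idle_bits == CORE_IDLE_MASK;
}

/* Returns 1 iff this thread is the first of the core to wake,
 * i.e. the one that must run tasks like timebase resync. */
static int thread_wakes(int thread)
{
    int first = (core_idle_bits == CORE_IDLE_MASK);

    core_idle_bits &= ~(1u << thread);
    return first;
}
```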
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Originally-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      7cba160a
    • powerpc/powernv: Enable Offline CPUs to enter deep idle states · 8eb8ac89
      Authored by Shreyas B. Prabhu
      The secondary threads should enter deep idle states so as to gain maximum
      powersavings when the entire core is offline. To do so the offline path
      must be made aware of the available deepest idle state. Hence probe the
      device tree for the possible idle states in powernv core code and
      expose the deepest idle state through flags.
      
      Since the device tree is probed by the cpuidle driver as well, move
      the parameters required to discover the idle states into a common
      place shared by both the driver and the powernv core code.
      
      Another point is that the fastsleep idle state may require workarounds
      in the kernel to function properly. This workaround is introduced in
      the subsequent patches. However, neither the cpuidle driver nor the
      hotplug path needs to be bothered about this workaround; it will be
      taken care of by the core powernv code.
      Originally-by: Srivatsa S. Bhat <srivatsa@mit.edu>
      Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Reviewed-by: Paul Mackerras <paulus@samba.org>
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8eb8ac89
  22. 19 Nov 2014, 3 commits
  23. 13 Nov 2014, 1 commit
    • cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic · b82b6cca
      Authored by Daniel Lezcano
      The only place where the time is invalid is when the ACPI_CSTATE_FFH entry
      method is not set. Otherwise for all the drivers, the time can be correctly
      measured.
      
      Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the
      drivers for all the states, invert the logic: replace it with a
      CPUIDLE_FLAG_TIME_INVALID flag, set that flag only for the ACPI idle
      driver, remove the former flag from all the drivers, and invert the
      corresponding checks in the governors.
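      The inverted check used by the governors can be sketched as follows
      (the flag bit value here is illustrative, not the kernel's):

```c
#include <assert.h>

/* Before: every state had to carry CPUIDLE_FLAG_TIME_VALID for its
 * residency to be trusted. After: only the one untrusted state is
 * tagged CPUIDLE_FLAG_TIME_INVALID, and the governors invert the test. */
#define CPUIDLE_FLAG_TIME_INVALID (1u << 7)   /* illustrative bit */

static int residency_is_trusted(unsigned int state_flags)
{
    return !(state_flags & CPUIDLE_FLAG_TIME_INVALID);
}
```

      The default (no flag) now means the time is valid, so only the ACPI
      idle driver's state needs any annotation at all.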
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      b82b6cca
  24. 24 Oct 2014, 1 commit
  25. 21 Oct 2014, 1 commit
  26. 20 Oct 2014, 1 commit
  27. 25 Sep 2014, 2 commits