Commit 8b9951ed authored by Prashanth Prakash, committed by Catalin Marinas

ARM64 / cpuidle: Use new cpuidle macro for entering retention state

CPU_PM_CPU_IDLE_ENTER_RETENTION skips calling cpu_pm_enter() and
cpu_pm_exit(). By not calling cpu_pm functions in idle entry/exit
paths we can reduce the latency involved in entering and exiting
the low power idle state.
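
For reference, a rough sketch of how the two entry macros are expected to
differ (modeled on the __CPU_PM_CPU_IDLE_ENTER helper in
include/linux/cpuidle.h; a sketch, not a verbatim copy of the kernel source):

/*
 * Sketch: both variants funnel into the same helper; the retention
 * variant passes is_retention = 1 and therefore never invokes the
 * cpu_pm_enter()/cpu_pm_exit() notifier chains around the low-level
 * idle entry, which is where the latency saving comes from.
 */
#define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx) \
	__CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx, 0)

#define CPU_PM_CPU_IDLE_ENTER_RETENTION(low_level_idle_enter, idx) \
	__CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx, 1)

#define __CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx, is_retention)	\
({										\
	int __ret = 0;								\
										\
	if (!is_retention)							\
		__ret = cpu_pm_enter();		/* CPU_PM notifier chain */	\
	if (!__ret) {								\
		__ret = low_level_idle_enter(idx);				\
		if (!is_retention)						\
			cpu_pm_exit();		/* CPU_PM notifier chain */	\
	}									\
										\
	__ret ? -1 : idx;							\
})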

On an ARM64-based Qualcomm server platform we measured the following
overhead of calling cpu_pm_enter and cpu_pm_exit for retention states.

workload: stress --hdd #CPUs --hdd-bytes 32M  -t 30
	Average overhead of cpu_pm_enter - 1.2us
	Average overhead of cpu_pm_exit  - 3.1us
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Parent db50a74d
@@ -47,6 +47,8 @@ int arm_cpuidle_suspend(int index)
 
 #include <acpi/processor.h>
 
+#define ARM64_LPI_IS_RETENTION_STATE(arch_flags) (!(arch_flags))
+
 int acpi_processor_ffh_lpi_probe(unsigned int cpu)
 {
 	return arm_cpuidle_init(cpu);
@@ -54,6 +56,10 @@ int acpi_processor_ffh_lpi_probe(unsigned int cpu)
 
 int acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
 {
-	return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, lpi->index);
+	if (ARM64_LPI_IS_RETENTION_STATE(lpi->arch_flags))
+		return CPU_PM_CPU_IDLE_ENTER_RETENTION(arm_cpuidle_suspend,
+						       lpi->index);
+	else
+		return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, lpi->index);
 }
 #endif