Commit 8f1073ed authored by Rafael J. Wysocki

Merge branch 'pm-qos'

* pm-qos: (30 commits)
  PM: QoS: annotate data races in pm_qos_*_value()
  Documentation: power: fix pm_qos_interface.rst format warning
  PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE
  Documentation: PM: QoS: Update to reflect previous code changes
  PM: QoS: Update file information comments
  PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions
  sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: spi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: mmc: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: media: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: hsi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*()
  cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request()
  PM: QoS: Add CPU latency QoS API wrappers
  PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h
  PM: QoS: Simplify definitions of CPU latency QoS trace events
  ...
@@ -583,20 +583,17 @@ Power Management Quality of Service for CPUs
 The power management quality of service (PM QoS) framework in the Linux kernel
 allows kernel code and user space processes to set constraints on various
 energy-efficiency features of the kernel to prevent performance from dropping
-below a required level. The PM QoS constraints can be set globally, in
-predefined categories referred to as PM QoS classes, or against individual
-devices.
+below a required level.

 CPU idle time management can be affected by PM QoS in two ways, through the
-global constraint in the ``PM_QOS_CPU_DMA_LATENCY`` class and through the
-resume latency constraints for individual CPUs. Kernel code (e.g. device
-drivers) can set both of them with the help of special internal interfaces
-provided by the PM QoS framework. User space can modify the former by opening
-the :file:`cpu_dma_latency` special device file under :file:`/dev/` and writing
-a binary value (interpreted as a signed 32-bit integer) to it. In turn, the
-resume latency constraint for a CPU can be modified by user space by writing a
-string (representing a signed 32-bit integer) to the
-:file:`power/pm_qos_resume_latency_us` file under
+global CPU latency limit and through the resume latency constraints for
+individual CPUs. Kernel code (e.g. device drivers) can set both of them with
+the help of special internal interfaces provided by the PM QoS framework. User
+space can modify the former by opening the :file:`cpu_dma_latency` special
+device file under :file:`/dev/` and writing a binary value (interpreted as a
+signed 32-bit integer) to it. In turn, the resume latency constraint for a CPU
+can be modified from user space by writing a string (representing a signed
+32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
 :file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
 ``<N>`` is allocated at the system initialization time. Negative values
 will be rejected in both cases and, also in both cases, the written integer
@@ -605,32 +602,34 @@ number will be interpreted as a requested PM QoS constraint in microseconds.
 The requested value is not automatically applied as a new constraint, however,
 as it may be less restrictive (greater in this particular case) than another
 constraint previously requested by someone else. For this reason, the PM QoS
-framework maintains a list of requests that have been made so far in each
-global class and for each device, aggregates them and applies the effective
-(minimum in this particular case) value as the new constraint.
+framework maintains a list of requests that have been made so far for the
+global CPU latency limit and for each individual CPU, aggregates them and
+applies the effective (minimum in this particular case) value as the new
+constraint.

 In fact, opening the :file:`cpu_dma_latency` special device file causes a new
-PM QoS request to be created and added to the priority list of requests in the
-``PM_QOS_CPU_DMA_LATENCY`` class and the file descriptor coming from the
-"open" operation represents that request. If that file descriptor is then
-used for writing, the number written to it will be associated with the PM QoS
-request represented by it as a new requested constraint value. Next, the
-priority list mechanism will be used to determine the new effective value of
-the entire list of requests and that effective value will be set as a new
-constraint. Thus setting a new requested constraint value will only change the
-real constraint if the effective "list" value is affected by it. In particular,
-for the ``PM_QOS_CPU_DMA_LATENCY`` class it only affects the real constraint if
-it is the minimum of the requested constraints in the list. The process holding
-a file descriptor obtained by opening the :file:`cpu_dma_latency` special device
-file controls the PM QoS request associated with that file descriptor, but it
-controls this particular PM QoS request only.
+PM QoS request to be created and added to a global priority list of CPU latency
+limit requests and the file descriptor coming from the "open" operation
+represents that request. If that file descriptor is then used for writing, the
+number written to it will be associated with the PM QoS request represented by
+it as a new requested limit value. Next, the priority list mechanism will be
+used to determine the new effective value of the entire list of requests and
+that effective value will be set as a new CPU latency limit. Thus requesting a
+new limit value will only change the real limit if the effective "list" value is
+affected by it, which is the case if it is the minimum of the requested values
+in the list.
+
+The process holding a file descriptor obtained by opening the
+:file:`cpu_dma_latency` special device file controls the PM QoS request
+associated with that file descriptor, but it controls this particular PM QoS
+request only.

 Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
 file descriptor obtained while opening it, causes the PM QoS request associated
-with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
-class priority list and destroyed. If that happens, the priority list mechanism
-will be used, again, to determine the new effective value for the whole list
-and that value will become the new real constraint.
+with that file descriptor to be removed from the global priority list of CPU
+latency limit requests and destroyed. If that happens, the priority list
+mechanism will be used again, to determine the new effective value for the whole
+list and that value will become the new limit.

 In turn, for each CPU there is one resume latency PM QoS request associated with
 the :file:`power/pm_qos_resume_latency_us` file under
@@ -647,10 +646,10 @@ CPU in question every time the list of requests is updated this way or another
 (there may be other requests coming from kernel code in that list).

 CPU idle time governors are expected to regard the minimum of the global
-effective ``PM_QOS_CPU_DMA_LATENCY`` class constraint and the effective
-resume latency constraint for the given CPU as the upper limit for the exit
-latency of the idle states they can select for that CPU. They should never
-select any idle states with exit latency beyond that limit.
+(effective) CPU latency limit and the effective resume latency constraint for
+the given CPU as the upper limit for the exit latency of the idle states that
+they are allowed to select for that CPU. They should never select any idle
+states with exit latency beyond that limit.

 Idle States Control Via Kernel Command Line
...
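As a quick illustration of the sysfs interface described in the documentation
above, a sufficiently privileged process could request a 20 us resume latency
limit for CPU0 roughly as follows (a minimal sketch, not part of this commit;
the CPU number and the value are arbitrary examples)::

  /* Sketch: write a decimal string (microseconds) to the per-CPU
   * pm_qos_resume_latency_us attribute. Unlike /dev/cpu_dma_latency,
   * this request is not tied to an open file descriptor. */
  #include <stdio.h>

  int main(void)
  {
      const char *path =
          "/sys/devices/system/cpu/cpu0/power/pm_qos_resume_latency_us";
      FILE *f = fopen(path, "w");

      if (!f) {
          perror("fopen");
          return 1;
      }
      fprintf(f, "20\n");  /* requested resume latency, in microseconds */
      return fclose(f) ? 1 : 0;
  }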
@@ -7,86 +7,78 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.

 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency
-2. The per-device PM QoS framework provides the API to manage the
+ * CPU latency QoS.
+ * The per-device PM QoS framework provides the API to manage the
    per-device latency constraints and PM QoS flags.

-Each parameters have defined units:
-
- * latency: usec
- * timeout: usec
- * throughput: kbs (kilo bit / sec)
- * memory bandwidth: mbs (mega bit / sec)
+The latency unit used in the PM QoS framework is the microsecond (usec).

 1. PM QoS framework
 ===================

-The infrastructure exposes multiple misc device nodes one per implemented
-parameter. The set of parameters implement is defined by pm_qos_power_init()
-and pm_qos_params.h. This is done because having the available parameters
-being runtime configurable or changeable from a driver was seen as too easy to
-abuse.
-
-For each parameter a list of performance requests is maintained along with
-an aggregated target value. The aggregated target value is updated with
-changes to the request list or elements of the list. Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+A global list of CPU latency QoS requests is maintained along with an aggregated
+(effective) target value. The aggregated target value is updated with changes
+to the request list or elements of the list. For CPU latency QoS, the
+aggregated target value is simply the min of the request values held in the list
+elements.

 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.

-From kernel mode the use of this interface is simple:
+From kernel space the use of this interface is simple:

-void pm_qos_add_request(handle, param_class, target_value):
-Will insert an element into the list for that identified PM QoS class with the
-target value. Upon change to this list the new target is recomputed and any
-registered notifiers are called only if the target value is now different.
-Clients of pm_qos need to save the returned handle for future use in other
-pm_qos API functions.
+void cpu_latency_qos_add_request(handle, target_value):
+Will insert an element into the CPU latency QoS list with the target value.
+Upon change to this list the new target is recomputed and any registered
+notifiers are called only if the target value is now different.
+Clients of PM QoS need to save the returned handle for future use in other
+PM QoS API functions.

-void pm_qos_update_request(handle, new_target_value):
+void cpu_latency_qos_update_request(handle, new_target_value):
 Will update the list element pointed to by the handle with the new target
 value and recompute the new aggregated target, calling the notification tree
 if the target is changed.

-void pm_qos_remove_request(handle):
+void cpu_latency_qos_remove_request(handle):
 Will remove the element. After removal it will update the aggregate target
 and call the notification tree if the target was changed as a result of
 removing the request.

-int pm_qos_request(param_class):
-Returns the aggregated value for a given PM QoS class.
+int cpu_latency_qos_limit():
+Returns the aggregated value for the CPU latency QoS.

-int pm_qos_request_active(handle):
-Returns if the request is still active, i.e. it has not been removed from a
-PM QoS class constraints list.
+int cpu_latency_qos_request_active(handle):
+Returns if the request is still active, i.e. it has not been removed from the
+CPU latency QoS list.

-int pm_qos_add_notifier(param_class, notifier):
-Adds a notification callback function to the PM QoS class. The callback is
-called when the aggregated value for the PM QoS class is changed.
+int cpu_latency_qos_add_notifier(notifier):
+Adds a notification callback function to the CPU latency QoS. The callback is
+called when the aggregated value for the CPU latency QoS is changed.

-int pm_qos_remove_notifier(int param_class, notifier):
-Removes the notification callback function for the PM QoS class.
+int cpu_latency_qos_remove_notifier(notifier):
+Removes the notification callback function from the CPU latency QoS.
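For illustration, a driver could use this interface as in the following sketch
(hypothetical code, not part of this commit; my_qos_req and the 100 usec value
are made-up examples), mirroring the pattern used by the drivers converted
later in this merge::

  #include <linux/pm_qos.h>

  static struct pm_qos_request my_qos_req;  /* hypothetical driver state */

  static void my_driver_start_latency_critical_io(void)
  {
      /* Ask for CPU wakeup latencies no worse than 100 usec. */
      cpu_latency_qos_add_request(&my_qos_req, 100);
  }

  static void my_driver_stop_latency_critical_io(void)
  {
      /* Drop the request (checking first, as some converted drivers do). */
      if (cpu_latency_qos_request_active(&my_qos_req))
          cpu_latency_qos_remove_request(&my_qos_req);
  }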

-From user mode:
+From user space:
+
+The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU
+latency QoS.

-Only processes can register a pm_qos request. To provide for automatic
+Only processes can register a PM QoS request. To provide for automatic
 cleanup of a process, the interface requires the process to register its
-parameter requests in the following way:
+parameter requests as follows.

-To register the default pm_qos target for the specific parameter, the process
-must open /dev/cpu_dma_latency
+To register the default PM QoS target for the CPU latency QoS, the process must
+open /dev/cpu_dma_latency.

 As long as the device node is held open that process has a registered
 request on the parameter.

-To change the requested target value the process needs to write an s32 value to
-the open device node. Alternatively the user mode program could write a hex
-string for the value using 10 char long format e.g. "0x12345678". This
-translates to a pm_qos_update_request call.
+To change the requested target value, the process needs to write an s32 value to
+the open device node. Alternatively, it can write a hex string for the value
+using the 10 char long format e.g. "0x12345678". This translates to a
+cpu_latency_qos_update_request() call.

 To remove the user mode request for a target value simply close the device
 node.
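Putting the above together, a user space process could hold a CPU latency QoS
request like this (a minimal sketch; the 10 usec value and the one-second hold
are arbitrary, and opening /dev/cpu_dma_latency typically requires root)::

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
      int32_t us = 10;  /* requested limit as a raw s32, in microseconds */
      int fd = open("/dev/cpu_dma_latency", O_WRONLY);

      if (fd < 0)
          return 1;
      if (write(fd, &us, sizeof(us)) != sizeof(us)) {  /* update request */
          close(fd);
          return 1;
      }
      sleep(1);   /* the 10 usec limit is in effect while fd stays open */
      close(fd);  /* closing the fd removes the request */
      return 0;
  }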
...
@@ -73,16 +73,6 @@ The second parameter is the power domain target state.
 ================

 The PM QoS events are used for QoS add/update/remove request and for
 target/flags update.

-::
-
-  pm_qos_add_request                 "pm_qos_class=%s value=%d"
-  pm_qos_update_request              "pm_qos_class=%s value=%d"
-  pm_qos_remove_request              "pm_qos_class=%s value=%d"
-  pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
-
-The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
-The second parameter is value to be added/updated/removed.
-The third parameter is timeout value in usec.
-
 ::

   pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
@@ -92,7 +82,7 @@ The first parameter gives the QoS action name (e.g. "ADD_REQ").
 The second parameter is the previous QoS value.
 The third parameter is the current QoS value to update.

-And, there are also events used for device PM QoS add/update/remove request.
+There are also events used for device PM QoS add/update/remove request.

 ::

   dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
@@ -103,3 +93,12 @@ The first parameter gives the device name which tries to add/update/remove
 QoS requests.
 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+
+And, there are events used for CPU latency QoS add/update/remove request.
+
+::
+
+  pm_qos_add_request                 "value=%d"
+  pm_qos_update_request              "value=%d"
+  pm_qos_remove_request              "value=%d"
+
+The parameter is the value to be added/updated/removed.
...
@@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
                         iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
                 dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");

-        pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);

         blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
                                      MBI_PMIC_BUS_ACCESS_END, NULL);
@@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
  * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
  *    if this happens while the kernel itself is accessing the PMIC I2C bus
  *    the SoC hangs.
- *    As the third step we call pm_qos_update_request() to disallow the CPU
- *    to enter C6 or C7.
+ *    As the third step we call cpu_latency_qos_update_request() to disallow the
+ *    CPU to enter C6 or C7.
  *
  * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
  *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
@@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
          * requires the P-Unit to talk to the PMIC and if this happens while
          * we're holding the semaphore, the SoC hangs.
          */
-        pm_qos_update_request(&iosf_mbi_pm_qos, 0);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);

         /* host driver writes to side band semaphore register */
         ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
@@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
 {
         iosf_debugfs_init();

-        pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);

         return pci_register_driver(&iosf_mbi_pci_driver);
 }
@@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
         pci_dev_put(mbi_pdev);
         mbi_pdev = NULL;

-        pm_qos_remove_request(&iosf_mbi_pm_qos);
+        cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
 }

 module_init(iosf_mbi_init);
...
@@ -736,53 +736,15 @@ int cpuidle_register(struct cpuidle_driver *drv,
 }
 EXPORT_SYMBOL_GPL(cpuidle_register);

-#ifdef CONFIG_SMP
-
-/*
- * This function gets called when a part of the kernel has a new latency
- * requirement. This means we need to get all processors out of their C-state,
- * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
- * wakes them all right up.
- */
-static int cpuidle_latency_notify(struct notifier_block *b,
-                unsigned long l, void *v)
-{
-        wake_up_all_idle_cpus();
-        return NOTIFY_OK;
-}
-
-static struct notifier_block cpuidle_latency_notifier = {
-        .notifier_call = cpuidle_latency_notify,
-};
-
-static inline void latency_notifier_init(struct notifier_block *n)
-{
-        pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n);
-}
-
-#else /* CONFIG_SMP */
-
-#define latency_notifier_init(x) do { } while (0)
-
-#endif /* CONFIG_SMP */
-
 /**
  * cpuidle_init - core initializer
  */
 static int __init cpuidle_init(void)
 {
-        int ret;
-
         if (cpuidle_disabled())
                 return -ENODEV;

-        ret = cpuidle_add_interface(cpu_subsys.dev_root);
-        if (ret)
-                return ret;
-
-        latency_notifier_init(&cpuidle_latency_notifier);
-
-        return 0;
+        return cpuidle_add_interface(cpu_subsys.dev_root);
 }

 module_param(off, int, 0444);
...
@@ -109,9 +109,9 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
  */
 s64 cpuidle_governor_latency_req(unsigned int cpu)
 {
-        int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
         struct device *device = get_cpu_device(cpu);
         int device_req = dev_pm_qos_raw_resume_latency(device);
+        int global_req = cpu_latency_qos_limit();

         if (device_req > global_req)
                 device_req = global_req;
...
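For context, idle governors use the value computed above as an exit latency
cap when picking a state; schematically (a sketch only, assuming per-state
exit latencies are available in nanoseconds via exit_latency_ns, as in kernels
of this vintage), a governor might do::

  static int pick_deepest_allowed_state(struct cpuidle_driver *drv,
                                        unsigned int cpu)
  {
      s64 latency_req = cpuidle_governor_latency_req(cpu);  /* in ns */
      int i;

      /* Deepest state whose exit latency respects the QoS limit. */
      for (i = drv->state_count - 1; i > 0; i--) {
          if (drv->states[i].exit_latency_ns <= latency_req)
              return i;
      }
      return 0;  /* fall back to the shallowest state */
  }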
@@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
          * lowest possible wakeup latency and so prevent the cpu from going into
          * deep sleep states.
          */
-        pm_qos_update_request(&i915->pm_qos, 0);
+        cpu_latency_qos_update_request(&i915->pm_qos, 0);

         intel_dp_check_edp(intel_dp);
@@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
         ret = recv_bytes;
 out:
-        pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);

         if (vdd)
                 edp_panel_vdd_off(intel_dp, false);
...
@@ -505,8 +505,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
         mutex_init(&dev_priv->backlight_lock);

         mutex_init(&dev_priv->sb_lock);
-        pm_qos_add_request(&dev_priv->sb_qos,
-                           PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);

         mutex_init(&dev_priv->av_mutex);
         mutex_init(&dev_priv->wm.wm_mutex);
@@ -571,7 +570,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
         vlv_free_s0ix_state(dev_priv);
         i915_workqueues_cleanup(dev_priv);

-        pm_qos_remove_request(&dev_priv->sb_qos);
+        cpu_latency_qos_remove_request(&dev_priv->sb_qos);
         mutex_destroy(&dev_priv->sb_lock);
 }
@@ -1229,8 +1228,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
                 }
         }

-        pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);

         intel_gt_init_workarounds(dev_priv);
@@ -1276,7 +1274,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);

-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 err_mem_regions:
         intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
@@ -1299,7 +1297,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);

-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 }

 /**
...
@@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
          * to the Valleyview P-unit and not all sideband communications.
          */
         if (IS_VALLEYVIEW(i915)) {
-                pm_qos_update_request(&i915->sb_qos, 0);
+                cpu_latency_qos_update_request(&i915->sb_qos, 0);
                 on_each_cpu(ping, NULL, 1);
         }
 }
@@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 static void __vlv_punit_put(struct drm_i915_private *i915)
 {
         if (IS_VALLEYVIEW(i915))
-                pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&i915->sb_qos,
+                                               PM_QOS_DEFAULT_VALUE);

         iosf_mbi_punit_release();
 }
...
@@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,

         if (old_state != hi->iface_state) {
                 if (hi->iface_state == CS_STATE_CONFIGURED) {
-                        pm_qos_add_request(&hi->pm_qos_req,
-                                PM_QOS_CPU_DMA_LATENCY,
-                                CS_QOS_LATENCY_FOR_DATA_USEC);
+                        cpu_latency_qos_add_request(&hi->pm_qos_req,
+                                CS_QOS_LATENCY_FOR_DATA_USEC);
                         local_bh_disable();
                         cs_hsi_read_on_data(hi);
                         local_bh_enable();
                 } else if (old_state == CS_STATE_CONFIGURED) {
-                        pm_qos_remove_request(&hi->pm_qos_req);
+                        cpu_latency_qos_remove_request(&hi->pm_qos_req);
                 }
         }
         return r;
@@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
         WARN_ON(!cs_state_idle(hi->control_state));
         WARN_ON(!cs_state_idle(hi->data_state));

-        if (pm_qos_request_active(&hi->pm_qos_req))
-                pm_qos_remove_request(&hi->pm_qos_req);
+        if (cpu_latency_qos_request_active(&hi->pm_qos_req))
+                cpu_latency_qos_remove_request(&hi->pm_qos_req);

         spin_lock_bh(&hi->lock);
         cs_hsi_free_data(hi);
...
@@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          */
         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_add_request(&dev->qos_request,
-                                   PM_QOS_CPU_DMA_LATENCY, 20);
+                cpu_latency_qos_add_request(&dev->qos_request, 20);
         dmaq->seq_nr = 0;

         return 0;
@@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)

         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_remove_request(&dev->qos_request);
+                cpu_latency_qos_remove_request(&dev->qos_request);
 }

 static const struct vb2_ops vb2_qops = {
...
@@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          * requirement which will keep the CPU out of the deeper sleep
          * states.
          */
-        pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
+        cpu_latency_qos_add_request(&cam->qos_request, 50);
         viacam_start_engine(cam);
         return 0;
 out:
@@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
         struct via_camera *cam = vb2_get_drv_priv(vq);
         struct via_buffer *buf, *tmp;

-        pm_qos_remove_request(&cam->qos_request);
+        cpu_latency_qos_remove_request(&cam->qos_request);
         viacam_stop_engine(cam);

         list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {
...
@@ -1452,8 +1452,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
                 pdev->id_entry->driver_data;

         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);

         imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
         if (IS_ERR(imx_data->clk_ipg)) {
@@ -1572,7 +1571,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_per);
 free_sdhci:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         sdhci_pltfm_free(pdev);
         return err;
 }
@@ -1595,7 +1594,7 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_ahb);

         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);

         sdhci_pltfm_free(pdev);
@@ -1667,7 +1666,7 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);

         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);

         return ret;
 }
@@ -1680,8 +1679,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         int err;

         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);

         err = clk_prepare_enable(imx_data->clk_ahb);
         if (err)
@@ -1714,7 +1712,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);
 remove_pm_qos_request:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         return err;
 }
 #endif
...
@@ -3280,9 +3280,9 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
                 dev_info(&adapter->pdev->dev,
                          "Some CPU C-states have been disabled in order to enable jumbo frames\n");
-                pm_qos_update_request(&adapter->pm_qos_req, lat);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req, lat);
         } else {
-                pm_qos_update_request(&adapter->pm_qos_req,
-                                      PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req,
+                                               PM_QOS_DEFAULT_VALUE);
         }
@@ -4636,8 +4636,7 @@ int e1000e_open(struct net_device *netdev)
         e1000_update_mng_vlan(adapter);

         /* DMA latency requirement to workaround jumbo issue */
-        pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE);

         /* before we allocate an interrupt, we must be ready to handle it.
          * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt
@@ -4679,7 +4678,7 @@ int e1000e_open(struct net_device *netdev)
         return 0;

 err_req_irq:
-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);
         e1000e_release_hw_control(adapter);
         e1000_power_down_phy(adapter);
         e1000e_free_rx_resources(adapter->rx_ring);
@@ -4743,7 +4742,7 @@ int e1000e_close(struct net_device *netdev)
             !test_bit(__E1000_TESTING, &adapter->state))
                 e1000e_release_hw_control(adapter);

-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);

         pm_runtime_put_sync(&pdev->dev);
...
@@ -1052,11 +1052,11 @@ static int ath10k_download_fw(struct ath10k *ar)
         }

         memset(&latency_qos, 0, sizeof(latency_qos));
-        pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0);
+        cpu_latency_qos_add_request(&latency_qos, 0);

         ret = ath10k_bmi_fast_download(ar, address, data, data_len);

-        pm_qos_remove_request(&latency_qos);
+        cpu_latency_qos_remove_request(&latency_qos);

         return ret;
 }
...
@@ -1730,7 +1730,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
         /* the ipw2100 hardware really doesn't want power management delays
          * longer than 175usec
          */
-        pm_qos_update_request(&ipw2100_pm_qos_req, 175);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175);

         /* If the interrupt is enabled, turn it off... */
         spin_lock_irqsave(&priv->low_lock, flags);
@@ -1875,7 +1875,8 @@ static void ipw2100_down(struct ipw2100_priv *priv)
         ipw2100_disable_interrupts(priv);
         spin_unlock_irqrestore(&priv->low_lock, flags);

-        pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req,
+                                       PM_QOS_DEFAULT_VALUE);

         /* We have to signal any supplicant if we are disassociating */
         if (associated)
@@ -6566,8 +6567,7 @@ static int __init ipw2100_init(void)
         printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
         printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);

-        pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);

         ret = pci_register_driver(&ipw2100_pci_driver);
         if (ret)
@@ -6594,7 +6594,7 @@ static void __exit ipw2100_exit(void)
                            &driver_attr_debug_level);
 #endif
         pci_unregister_driver(&ipw2100_pci_driver);
-        pm_qos_remove_request(&ipw2100_pm_qos_req);
+        cpu_latency_qos_remove_request(&ipw2100_pm_qos_req);
 }

 module_init(ipw2100_init);
...
@@ -484,7 +484,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
         }

         if (needs_wakeup_wait_mode(q))
-                pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&q->pm_qos_req, 0);

         return 0;
 }
@@ -492,7 +492,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q)
 {
         if (needs_wakeup_wait_mode(q))
-                pm_qos_remove_request(&q->pm_qos_req);
+                cpu_latency_qos_remove_request(&q->pm_qos_req);

         clk_disable_unprepare(q->clk);
         clk_disable_unprepare(q->clk_en);
...
@@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
         struct omap8250_priv *priv;

         priv = container_of(work, struct omap8250_priv, qos_work);
-        pm_qos_update_request(&priv->pm_qos_request, priv->latency);
+        cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
 }

 #ifdef CONFIG_SERIAL_8250_DMA
@@ -1222,10 +1222,9 @@ static int omap8250_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }

-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
-                           priv->latency);
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
         INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);

         spin_lock_init(&priv->rx_dma_lock);
@@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
         serial8250_unregister_port(priv->line);
-        pm_qos_remove_request(&priv->pm_qos_request);
+        cpu_latency_qos_remove_request(&priv->pm_qos_request);
         device_init_wakeup(&pdev->dev, false);
         return 0;
 }
@@ -1445,7 +1444,7 @@ static int omap8250_runtime_suspend(struct device *dev)
         if (up->dma && up->dma->rxchan)
                 omap_8250_rx_dma_flush(up);

-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&priv->qos_work);

         return 0;
...
@@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
         struct uart_omap_port *up = container_of(work, struct uart_omap_port,
                         qos_work);

-        pm_qos_update_request(&up->pm_qos_request, up->latency);
+        cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
 }

 static void
@@ -1722,10 +1722,9 @@ static int serial_omap_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }

-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&up->pm_qos_request,
-                           PM_QOS_CPU_DMA_LATENCY, up->latency);
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
         INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);

         platform_set_drvdata(pdev, up);
@@ -1759,7 +1758,7 @@ static int serial_omap_probe(struct platform_device *pdev)
         pm_runtime_dont_use_autosuspend(&pdev->dev);
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(up->dev, false);
 err_rs485:
 err_port_line:
@@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
         pm_runtime_dont_use_autosuspend(up->dev);
         pm_runtime_put_sync(up->dev);
         pm_runtime_disable(up->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(&dev->dev, false);

         return 0;
@@ -1869,7 +1868,7 @@ static int serial_omap_runtime_suspend(struct device *dev)

         serial_omap_enable_wakeup(up, true);

-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&up->qos_work);

         return 0;
...
@@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
         }

         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);

         ret = imx_get_clks(dev);
         if (ret)
@@ -478,7 +477,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
                 /* don't overwrite original ret (cf. EPROBE_DEFER) */
                 regulator_disable(data->hsic_pad_regulator);
         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);
         data->ci_pdev = NULL;
         return ret;
 }
@@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
         if (data->ci_pdev) {
                 imx_disable_unprepare_clks(&pdev->dev);
                 if (data->plat_data->flags & CI_HDRC_PMQOS)
-                        pm_qos_remove_request(&data->pm_qos_req);
+                        cpu_latency_qos_remove_request(&data->pm_qos_req);
                 if (data->hsic_pad_regulator)
                         regulator_disable(data->hsic_pad_regulator);
         }
@@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
         imx_disable_unprepare_clks(dev);
         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);

         data->in_lpm = true;
@@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
         }

         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);

         ret = imx_prepare_enable_clks(dev);
         if (ret)
...
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_PM_QOS_H
-#define _LINUX_PM_QOS_H
 /*
- * interface for the pm_qos_power infrastructure of the linux kernel.
+ * Definitions related to Power Management Quality of Service (PM QoS).
  *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * Authors:
  *      Mark Gross <mgross@linux.intel.com>
+ *      Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  */
+
+#ifndef _LINUX_PM_QOS_H
+#define _LINUX_PM_QOS_H
+
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
-#include <linux/workqueue.h>

-enum {
-        PM_QOS_RESERVED = 0,
-        PM_QOS_CPU_DMA_LATENCY,
-
-        /* insert new class ID */
-        PM_QOS_NUM_CLASSES,
-};

 enum pm_qos_flags_status {
         PM_QOS_FLAGS_UNDEFINED = -1,
@@ -29,7 +27,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_ANY      S32_MAX
 #define PM_QOS_LATENCY_ANY_NS   ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)

-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
+#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS  PM_QOS_LATENCY_ANY_NS
@@ -40,22 +38,10 @@ enum pm_qos_flags_status {
 #define PM_QOS_FLAG_NO_POWER_OFF        (1 << 0)

-struct pm_qos_request {
-        struct plist_node node;
-        int pm_qos_class;
-        struct delayed_work work; /* for pm_qos_update_request_timeout */
-};
-
-struct pm_qos_flags_request {
-        struct list_head node;
-        s32 flags;      /* Do not change to 64 bit */
-};
-
 enum pm_qos_type {
         PM_QOS_UNITIALIZED,
         PM_QOS_MAX,             /* return the largest value */
         PM_QOS_MIN,             /* return the smallest value */
-        PM_QOS_SUM              /* return the sum */
 };

 /*
@@ -72,6 +58,16 @@ struct pm_qos_constraints {
         struct blocking_notifier_head *notifiers;
 };

+struct pm_qos_request {
+        struct plist_node node;
+        struct pm_qos_constraints *qos;
+};
+
+struct pm_qos_flags_request {
+        struct list_head node;
+        s32 flags;      /* Do not change to 64 bit */
+};
+
 struct pm_qos_flags {
         struct list_head list;
         s32 effective_flags;    /* Do not change to 64 bit */
@@ -140,24 +136,31 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
         return req->dev != NULL;
 }

+s32 pm_qos_read_value(struct pm_qos_constraints *c);
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                          enum pm_qos_req_action action, int value);
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
                          struct pm_qos_flags_request *req,
                          enum pm_qos_req_action action, s32 val);
-void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
-                        s32 value);
-void pm_qos_update_request(struct pm_qos_request *req,
-                           s32 new_value);
-void pm_qos_update_request_timeout(struct pm_qos_request *req,
-                                   s32 new_value, unsigned long timeout_us);
-void pm_qos_remove_request(struct pm_qos_request *req);
-int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_request_active(struct pm_qos_request *req);
-s32 pm_qos_read_value(struct pm_qos_constraints *c);
+
+#ifdef CONFIG_CPU_IDLE
+s32 cpu_latency_qos_limit(void);
+bool cpu_latency_qos_request_active(struct pm_qos_request *req);
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
+void cpu_latency_qos_remove_request(struct pm_qos_request *req);
+#else
+static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+        return false;
+}
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+                                               s32 value) {}
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+                                                  s32 new_value) {}
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
+#endif

 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
...
@@ -359,75 +359,50 @@ DEFINE_EVENT(power_domain, power_domain_target,
 );

 /*
- * The pm qos events are used for pm qos update
+ * CPU latency QoS events used for global CPU latency QoS list updates
  */
-DECLARE_EVENT_CLASS(pm_qos_request,
+DECLARE_EVENT_CLASS(cpu_latency_qos_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value),
+        TP_ARGS(value),

         TP_STRUCT__entry(
-                __field( int,                    pm_qos_class   )
                 __field( s32,                    value          )
         ),

         TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
                 __entry->value = value;
         ),

-        TP_printk("pm_qos_class=%s value=%d",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY,       "CPU_DMA_LATENCY" }),
+        TP_printk("CPU_DMA_LATENCY value=%d",
                   __entry->value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
-);
-
-TRACE_EVENT(pm_qos_update_request_timeout,
-
-        TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
-
-        TP_ARGS(pm_qos_class, value, timeout_us),
-
-        TP_STRUCT__entry(
-                __field( int,                    pm_qos_class   )
-                __field( s32,                    value          )
-                __field( unsigned long,          timeout_us     )
-        ),
-
-        TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
-                __entry->value = value;
-                __entry->timeout_us = timeout_us;
-        ),
-
-        TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY,       "CPU_DMA_LATENCY" }),
-                  __entry->value, __entry->timeout_us)
+        TP_ARGS(value)
 );

+/*
+ * General PM QoS events used for updates of PM QoS request lists
+ */
 DECLARE_EVENT_CLASS(pm_qos_update,

         TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
...
This diff is collapsed.
@@ -748,11 +748,11 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 	snd_pcm_timer_resolution_change(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if ((usecs = period_to_usecs(runtime)) >= 0)
-		pm_qos_add_request(&substream->latency_pm_qos_req,
-				   PM_QOS_CPU_DMA_LATENCY, usecs);
+		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+					    usecs);
 	return 0;
 _error:
 	/* hardware might be unusable from this time,
@@ -821,7 +821,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 		return -EBADFD;
 	result = do_hw_free(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-	pm_qos_remove_request(&substream->latency_pm_qos_req);
+	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	return result;
 }
@@ -2599,8 +2599,8 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream)
 		substream->ops->close(substream);
 		substream->hw_opened = 0;
 	}
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if (substream->pcm_release) {
 		substream->pcm_release(substream);
 		substream->pcm_release = NULL;
......
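The PCM conversion keeps the existing drop-then-re-add flow; only the function names change. The same idiom, pulled out into a self-contained sketch (rearm_latency_request is illustrative, and the usecs parameter stands in for the period_to_usecs() result):

	#include <linux/pm_qos.h>

	/* Re-arm a CPU latency request with a freshly computed budget,
	 * mirroring snd_pcm_hw_params(): drop any active request first,
	 * then add one only if the new latency value is valid.
	 */
	static void rearm_latency_request(struct pm_qos_request *req, int usecs)
	{
		if (cpu_latency_qos_request_active(req))
			cpu_latency_qos_remove_request(req);
		if (usecs >= 0)
			cpu_latency_qos_add_request(req, usecs);
	}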
@@ -325,8 +325,7 @@ int sst_context_init(struct intel_sst_drv *ctx)
 		ret = -ENOMEM;
 		goto do_free_mem;
 	}
-	pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);
 	dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
 	ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
@@ -364,7 +363,7 @@ void sst_context_cleanup(struct intel_sst_drv *ctx)
 	sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
 	flush_scheduled_work();
 	destroy_workqueue(ctx->post_msg_wq);
-	pm_qos_remove_request(ctx->qos);
+	cpu_latency_qos_remove_request(ctx->qos);
 	kfree(ctx->fw_sg_list.src);
 	kfree(ctx->fw_sg_list.dst);
 	ctx->fw_sg_list.list_len = 0;
......
@@ -412,7 +412,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 		return -ENOMEM;
 	/* Prevent C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, 0);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);
 	sst_drv_ctx->sst_state = SST_FW_LOADING;
@@ -442,7 +442,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 restore:
 	/* Re-enable Deeper C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
 	sst_free_block(sst_drv_ctx, block);
 	dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");
......
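Updating the request to 0 asks for zero tolerable wakeup latency, which keeps the CPUs out of deep C-states while the firmware loads; updating it back to PM_QOS_DEFAULT_VALUE lifts the restriction. The bracketing idiom as a sketch (run_latency_critical is illustrative, not from this merge):

	#include <linux/pm_qos.h>

	/* Bracket a latency-critical section the way sst_load_fw() does. */
	static int run_latency_critical(struct pm_qos_request *qos,
					int (*fn)(void *), void *arg)
	{
		int ret;

		/* 0 us of tolerable latency: block deep C-states. */
		cpu_latency_qos_update_request(qos, 0);
		ret = fn(arg);
		/* Restore the default, i.e. no effective constraint. */
		cpu_latency_qos_update_request(qos, PM_QOS_DEFAULT_VALUE);
		return ret;
	}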
@@ -112,7 +112,7 @@ static void omap_dmic_dai_shutdown(struct snd_pcm_substream *substream,
 	mutex_lock(&dmic->mutex);
-	pm_qos_remove_request(&dmic->pm_qos_req);
+	cpu_latency_qos_remove_request(&dmic->pm_qos_req);
 	if (!dai->active)
 		dmic->active = 0;
@@ -230,8 +230,9 @@ static int omap_dmic_dai_prepare(struct snd_pcm_substream *substream,
 	struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
 	u32 ctrl;
-	if (pm_qos_request_active(&dmic->pm_qos_req))
-		pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+	if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+		cpu_latency_qos_update_request(&dmic->pm_qos_req,
+					       dmic->latency);
 	/* Configure uplink threshold */
 	omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
......
@@ -836,10 +836,10 @@ static void omap_mcbsp_dai_shutdown(struct snd_pcm_substream *substream,
 	int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
 	if (mcbsp->latency[stream2])
-		pm_qos_update_request(&mcbsp->pm_qos_req,
-				      mcbsp->latency[stream2]);
+		cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
+					       mcbsp->latency[stream2]);
 	else if (mcbsp->latency[stream1])
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 	mcbsp->latency[stream1] = 0;
@@ -863,10 +863,10 @@ static int omap_mcbsp_dai_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcbsp->latency[stream1] < latency)
 		latency = mcbsp->latency[stream1];
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 	return 0;
 }
@@ -1434,8 +1434,8 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
 	if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
 		mcbsp->pdata->ops->free(mcbsp->id);
-	if (pm_qos_request_active(&mcbsp->pm_qos_req))
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 	if (mcbsp->pdata->buffer_size)
 		sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
......
@@ -281,10 +281,10 @@ static void omap_mcpdm_dai_shutdown(struct snd_pcm_substream *substream,
 	}
 	if (mcpdm->latency[stream2])
-		pm_qos_update_request(&mcpdm->pm_qos_req,
-				      mcpdm->latency[stream2]);
+		cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
+					       mcpdm->latency[stream2]);
 	else if (mcpdm->latency[stream1])
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 	mcpdm->latency[stream1] = 0;
@@ -386,10 +386,10 @@ static int omap_mcpdm_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcpdm->latency[stream1] < latency)
 		latency = mcpdm->latency[stream1];
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 	if (!omap_mcpdm_active(mcpdm)) {
 		omap_mcpdm_start(mcpdm);
@@ -451,8 +451,8 @@ static int omap_mcpdm_remove(struct snd_soc_dai *dai)
 	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);
-	if (pm_qos_request_active(&mcpdm->pm_qos_req))
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 	return 0;
 }
......
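All three OMAP DAI drivers share the same update-or-add idiom: take the tighter of the two per-stream latency budgets, update an active request in place, and only add a new one when a nonzero budget exists. Reduced to a sketch (apply_stream_latency and its parameter names are illustrative):

	#include <linux/pm_qos.h>

	static void apply_stream_latency(struct pm_qos_request *req,
					 unsigned int lat_other,
					 unsigned int lat_cur)
	{
		unsigned int latency = lat_other;

		/* Use the current stream's budget when none is set yet
		 * or when it is smaller, as the prepare callbacks do.
		 */
		if (!latency || lat_cur < latency)
			latency = lat_cur;

		if (cpu_latency_qos_request_active(req))
			cpu_latency_qos_update_request(req, latency);
		else if (latency)
			cpu_latency_qos_add_request(req, latency);
	}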