- 13 Feb 2012, 1 commit

Committed by Jean Pihet
The PM QoS feature originally didn't depend on CONFIG_PM, which was mistakenly changed by commit e8db0be1 ("PM QoS: Move and rename the implementation files"). Later, commit d020283d ("PM / QoS: CPU C-state breakage with PM Qos change") partially fixed that by introducing a static inline definition of pm_qos_request(), but that still didn't allow user space to use the PM QoS interface if CONFIG_PM was unset (which had been possible before). For this reason, remove the dependency of PM QoS on CONFIG_PM to make it work (as intended) with CONFIG_PM unset.

[rjw: Replaced the original changelog with a new one.]

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reported-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 05 Feb 2012, 1 commit

Committed by Venkatesh Pallipadi
It looks like the change "PM QoS: Move and rename the implementation files", merged during the 3.2 development cycle, made PM QoS depend on CONFIG_PM, which depends on (PM_SLEEP || PM_RUNTIME). That breaks CPU C-states with kernels not having these CONFIGs, causing CPUs to spend time in the polling idle loop instead of going into deep C-states and consuming far more power. This happens with either acpi_idle or intel_idle enabled.

Either CONFIG_PM should be enabled with any pm_qos users, or the !CONFIG_PM pm_qos_request() should return sane defaults so as not to break the existing users. Here is the patch for the latter option.

[rjw: Modified the changelog slightly.]

Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: stable@vger.kernel.org
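For context, a minimal sketch of the kind of !CONFIG_PM stub this changelog describes; the constant names follow the PM QoS header of that era and should be treated as illustrative rather than a verbatim copy of commit d020283d:

```c
/* Hedged sketch of a !CONFIG_PM fallback for pm_qos_request(): return the
 * per-class default instead of touching any PM QoS state. */
static inline int pm_qos_request(int pm_qos_class)
{
	switch (pm_qos_class) {
	case PM_QOS_CPU_DMA_LATENCY:
		return PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
	case PM_QOS_NETWORK_LATENCY:
		return PM_QOS_NETWORK_LAT_DEFAULT_VALUE;
	case PM_QOS_NETWORK_THROUGHPUT:
		return PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE;
	default:
		return PM_QOS_DEFAULT_VALUE;
	}
}
```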
-
- 30 Jan 2012, 1 commit

Committed by Alex Frid
- Replace class ID #define with enumeration
- Loop through PM QoS objects during initialization (rather than initializing them one-by-one)

Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-by: Antti Miettinen <amiettinen@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Acked-by: markgross <markgross@thegnar.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
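For illustration, a hedged sketch of the two changes inside kernel/power/qos.c; the enum values and the pm_qos_array/register_pm_qos_misc() names follow the core file of that era and are assumptions, not a quote of the commit:

```c
/* Class IDs as an enumeration rather than individual #defines. */
enum {
	PM_QOS_RESERVED = 0,
	PM_QOS_CPU_DMA_LATENCY,
	PM_QOS_NETWORK_LATENCY,
	PM_QOS_NETWORK_THROUGHPUT,

	PM_QOS_NUM_CLASSES,
};

/* Initialization as a loop over the per-class objects instead of one
 * register call per class. */
static int __init pm_qos_power_init(void)
{
	int ret = 0;
	int i;

	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
		ret = register_pm_qos_misc(pm_qos_array[i]);
		if (ret < 0) {
			pr_err("pm_qos_param: %s setup failed\n",
			       pm_qos_array[i]->name);
			return ret;
		}
	}

	return ret;
}
```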
-
- 26 Dec 2011, 1 commit

Committed by Rafael J. Wysocki
Some devices, like the I2C controller on SH7372, are not necessary for providing power to their children or forwarding wakeup signals (and generally interrupts) from them. They are only needed by their children when there's some data to transfer, so they may be suspended for the majority of time and resumed on demand, when the children have data to send or receive. For this purpose, however, their power.ignore_children flags have to be set, or the PM core wouldn't allow them to be suspended while their children were active.

Unfortunately, in some situations it may take too much time to resume such devices so that they can assist their children in transferring data. For example, if such a device belongs to a PM domain which goes to the "power off" state when that device is suspended, it may take too much time to restore power to the domain in response to the request from one of the device's children. In that case, if the parent's resume time is critical, the domain should stay in the "power on" state, although it still may be desirable to power manage the parent itself (e.g. by manipulating its clock).

In general, device PM QoS may be used to address this problem. Namely, if the device's children added PM QoS latency constraints for it, they would be able to prevent it from being put into an overly deep low-power state. However, in some cases the devices needing to be serviced are not the immediate children of a "children-ignoring" device, but its grandchildren or even less direct descendants. In those cases, the entity wanting to add a PM QoS request for a given device's ancestor that ignores its children will have to find it in the first place, so introduce a new helper function that may be used to achieve that.

This function, dev_pm_qos_add_ancestor_request(), will search for the first ancestor of the given device whose power.ignore_children flag is set and will add a device PM QoS latency request for that ancestor on behalf of the caller. The request added this way may be removed with the help of dev_pm_qos_remove_request() in the future, like any other device PM QoS latency request.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
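For illustration, a hedged usage sketch of the new helper as described above; the 100 microsecond value, the probe/remove placement and the error handling are assumptions (later kernels also added a request-type argument to this API):

```c
#include <linux/device.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request ancestor_req;

/* A descendant of an "ignore_children" device (e.g. a sensor behind an
 * I2C controller) keeps that ancestor responsive by adding a latency
 * request on its behalf. */
static int example_probe(struct device *dev)
{
	int ret;

	ret = dev_pm_qos_add_ancestor_request(dev, &ancestor_req, 100);
	if (ret < 0)
		return ret;	/* no suitable ancestor, or out of memory */

	return 0;
}

static void example_remove(struct device *dev)
{
	/* Drop the constraint like any other device PM QoS request. */
	dev_pm_qos_remove_request(&ancestor_req);
}
```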
-
- 02 Dec 2011, 1 commit

Committed by Rafael J. Wysocki
Make the runtime PM core use device PM QoS constraints to check if it is allowed to suspend a given device, so that an error code is returned if the device's own PM QoS constraint is negative or one of its children has already been suspended for too long.

If this is not the case, the maximum estimated time the device is allowed to be suspended, computed as the minimum of the device's PM QoS constraint and the PM QoS constraints of its children (reduced by the difference between the current time and their suspend times), is stored in a new device PM field, power.max_time_suspended_ns, that can be used by the device's subsystem or PM domain to decide whether or not to put the device into lower-power (and presumably higher-latency) states later (if the constraint is 0, which means "no constraint", power.max_time_suspended_ns is set to -1).

Additionally, the time of execution of the subsystem-level .runtime_suspend() callback for the device is recorded in the new power.suspend_time field for later use by the device's subsystem or PM domain along with power.max_time_suspended_ns (it also is used by the core code when the device's parent is suspended).

Introduce a new helper function, pm_runtime_update_max_time_suspended(), allowing subsystems and PM domains (or device drivers) to update the power.max_time_suspended_ns field, for example after changing the power state of a suspended device.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
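As a purely conceptual sketch of the arithmetic described above (not the kernel's implementation): the allowed suspend time is the minimum of the device's own constraint and each child's constraint reduced by the time that child has already spent suspended, with 0 meaning "no constraint":

```c
#include <linux/types.h>

/* Conceptual helper only; names and layout are invented for the sketch. */
static s64 example_max_time_suspended_ns(s64 own_constraint_ns,
					 const s64 *child_constraint_ns,
					 const s64 *child_suspend_time_ns,
					 int nr_children, s64 now_ns)
{
	/* A constraint of 0 means "no constraint"; report that as -1. */
	s64 limit = own_constraint_ns > 0 ? own_constraint_ns : -1;
	int i;

	for (i = 0; i < nr_children; i++) {
		s64 remaining;

		if (child_constraint_ns[i] <= 0)
			continue;	/* this child imposes no limit */

		/* Time the child may still spend suspended. */
		remaining = child_constraint_ns[i] -
			    (now_ns - child_suspend_time_ns[i]);
		if (limit < 0 || remaining < limit)
			limit = remaining;
	}

	return limit;
}
```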
-
- 05 Oct 2011, 1 commit

Committed by Rafael J. Wysocki
To read the current PM QoS value for a given device we need to make sure that the device's power.constraints object won't be removed while we're doing that. For this reason, put the operation under dev->power.lock and acquire the lock around the initialization and removal of power.constraints.

Moreover, since we're using the value of power.constraints to determine whether or not the object is present, the power.constraints_state field isn't necessary any more and may be removed.

However, dev_pm_qos_add_request() needs to check if the device is being removed from the system before allocating a new PM QoS constraints object for it, so make it use the power.power_state field of struct device for this purpose.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
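For illustration, a hedged sketch of the locking pattern the changelog describes (close to, but not a verbatim copy of, the resulting helper):

```c
#include <linux/device.h>
#include <linux/pm_qos.h>
#include <linux/spinlock.h>

/* Read the device's aggregated PM QoS value; dev->power.lock keeps
 * power.constraints from being freed while we look at it. */
s32 dev_pm_qos_read_value(struct device *dev)
{
	unsigned long flags;
	s32 ret = 0;

	spin_lock_irqsave(&dev->power.lock, flags);
	if (dev->power.constraints)
		ret = pm_qos_read_value(dev->power.constraints);
	spin_unlock_irqrestore(&dev->power.lock, flags);

	return ret;
}
```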
-
- 25 Aug 2011, 6 commits

Committed by Jean Pihet
Add a global notification chain that gets called upon changes to the aggregated constraint value for any device. The notification callbacks are passed the full constraint request data in order for the callees to have access to it. The current use is for the platform low-level code to access the target device of the constraint.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
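For illustration, a hedged sketch of how platform code might use the global chain; the assumption here is that the notifier receives the new aggregate value plus the request, whose dev field identifies the target device:

```c
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pm_qos.h>

/* Called for aggregate-constraint changes on any device. */
static int example_dev_qos_notify(struct notifier_block *nb,
				  unsigned long value, void *data)
{
	struct dev_pm_qos_request *req = data;

	pr_info("PM QoS for %s changed, new aggregate %lu\n",
		dev_name(req->dev), value);
	return NOTIFY_OK;
}

static struct notifier_block example_nb = {
	.notifier_call = example_dev_qos_notify,
};

static int __init example_init(void)
{
	return dev_pm_qos_add_global_notifier(&example_nb);
}
```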
-
Committed by Jean Pihet
Implement the per-device PM QoS constraints by creating a device PM QoS API, which calls the PM QoS constraints management core code.

The per-device latency constraints data structures are stored in the device dev_pm_info struct. The device PM code calls the init and destroy of the per-device constraints data struct in order to support the dynamic insertion and removal of the devices in the system.

To minimize the data usage by the per-device constraints, the data struct is only allocated at the first call to dev_pm_qos_add_request. The data is later freed when the device is removed from the system. A global mutex protects the constraints users from the data being allocated and freed.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
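For illustration, a hedged usage sketch of the per-device API described above; the 2000/10000 microsecond values are arbitrary, and later kernels extended dev_pm_qos_add_request() with a request-type argument:

```c
#include <linux/device.h>
#include <linux/pm_qos.h>

/* Must stay allocated for as long as the request is registered. */
static struct dev_pm_qos_request latency_req;

static int example_enable_low_latency(struct device *dev)
{
	/* Negative means error; 0/1 report whether the aggregate changed. */
	return dev_pm_qos_add_request(dev, &latency_req, 2000);
}

static void example_relax_and_drop(void)
{
	dev_pm_qos_update_request(&latency_req, 10000);
	/* ... */
	dev_pm_qos_remove_request(&latency_req);
}
```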
-
Committed by Jean Pihet
In preparation for the per-device constraints support:
- rename update_target to pm_qos_update_target,
- generalize and export pm_qos_update_target for use by the upcoming per-device latency constraints framework:
  * operate on struct pm_qos_constraints for constraints management,
  * introduce an 'action' parameter for constraints add/update/remove,
  * the return value indicates if the aggregated constraint value has changed,
- update the internal code to operate on struct pm_qos_constraints,
- add a NULL pointer check in the API functions.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
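For reference, a sketch of the generalized interface as the list above describes it (the exact prototype may differ in detail from the commit):

```c
/* Action requested on a constraints list. */
enum pm_qos_req_action {
	PM_QOS_ADD_REQ,		/* add a new request */
	PM_QOS_UPDATE_REQ,	/* update an existing request */
	PM_QOS_REMOVE_REQ,	/* remove an existing request */
};

/* Returns 1 if the aggregated constraint value changed, 0 otherwise. */
int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
			 enum pm_qos_req_action action, int value);
```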
-
Committed by Jean Pihet
In preparation for the per-device constraints support, reorganize the data structures:
- add a struct pm_qos_constraints which contains the constraints-related data,
- update struct pm_qos_object to hold only the PM QoS internal object data, and add a pointer to struct pm_qos_constraints,
- update the internal code to use the new data structs.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
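For reference, a hedged sketch of the resulting data-structure split; the field names follow the header of that era and should be treated as illustrative:

```c
#include <linux/miscdevice.h>
#include <linux/notifier.h>
#include <linux/plist.h>

enum pm_qos_type {
	PM_QOS_MAX,	/* aggregate = largest requested value */
	PM_QOS_MIN,	/* aggregate = smallest requested value */
};

/* Everything needed to manage one list of constraints. */
struct pm_qos_constraints {
	struct plist_head list;			/* the requests themselves */
	s32 target_value;			/* aggregated constraint value */
	s32 default_value;			/* value when the list is empty */
	enum pm_qos_type type;			/* aggregation rule */
	struct blocking_notifier_head *notifiers;
};

/* The per-class object keeps only the user-space plumbing and points at
 * its constraints. */
struct pm_qos_object {
	struct pm_qos_constraints *constraints;
	struct miscdevice pm_qos_power_miscdev;
	char *name;
};
```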
-
Committed by Jean Pihet
Miscellaneous fixes to improve code readability:
- rename struct pm_qos_request_list to struct pm_qos_request,
- rename the pm_qos_req parameter to req in internal code, and consistently use req in the API parameters,
- update the in-kernel API callers to the new parameter names,
- rename fields (requests, list, node, constraints).

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Acked-by: markgross <markgross@thegnar.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
The PM QoS implementation files are better named kernel/power/qos.c and include/linux/pm_qos.h. The PM QoS support is compiled under the CONFIG_PM option.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Acked-by: markgross <markgross@thegnar.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 29 May 2011, 1 commit

Committed by Tim Chen
Thanks to the reviews and comments by Rafael, James, Mark and Andi. Here's version 2 of the patch incorporating your comments, and also some updates to my previous patch comments.

I noticed that before entering an idle state, the menu idle governor will look up the current pm_qos target value according to the list of qos requests received. This lookup currently needs the acquisition of a lock to access the list of qos requests to find the qos target value, slowing down the entrance into the idle state due to contention by multiple cpus to access this list. The contention is severe when there are a lot of cpus waking and going into idle. For example, for a simple workload that has 32 pairs of processes ping-ponging messages to each other, where 64 cpu cores are active in the test system, I see the following profile with 37.82% of cpu cycles spent in contention for pm_qos_lock:

    - 37.82% swapper  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
       - _raw_spin_lock_irqsave
          - 95.65% pm_qos_request
               menu_select
               cpuidle_idle_call
             - cpu_idle
                  99.98% start_secondary

A better approach is to cache the updated pm_qos target value so reading it does not require lock acquisition, as in the patch below. With this patch the contention for pm_qos_lock is removed and I saw a 2.2X increase in throughput for my message passing workload.

Cc: stable@kernel.org
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: James Bottomley <James.Bottomley@suse.de>
Acked-by: mark gross <markgross@thegnar.org>
Signed-off-by: Len Brown <len.brown@intel.com>
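For illustration, a hedged sketch of the caching idea (not a verbatim copy of the patch), assuming the per-class object gains an atomic target_value field: writers update the cached aggregate while holding pm_qos_lock, and the idle hot path reads it without taking the lock:

```c
#include <linux/atomic.h>

/* Lock-free read used from the idle path (menu_select() and friends). */
static inline s32 pm_qos_read_value(struct pm_qos_object *o)
{
	return atomic_read(&o->target_value);
}

/* Called by the request add/update/remove paths with pm_qos_lock held. */
static inline void pm_qos_set_value(struct pm_qos_object *o, s32 value)
{
	atomic_set(&o->target_value, value);
}
```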
-
- 19 Jul 2010, 1 commit

Committed by James Bottomley
All current users of pm_qos_add_request() have the ability to supply the memory required by the pm_qos routines, so make them do this and eliminate the kmalloc() in pm_qos_add_request(). This has the double benefit of making the call never fail and allowing it to be called from atomic context.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: mark gross <markgross@thegnar.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
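For illustration, a hedged caller-side sketch after this change; the struct and header names shifted across later releases (the request type was subsequently renamed), so treat them as assumptions:

```c
#include <linux/pm_qos_params.h>	/* header name of that era */

/* The request object is caller-allocated (static or embedded in the
 * caller's own data), so registration needs no kmalloc() and can happen
 * in atomic context. */
static struct pm_qos_request_list example_req;

static void example_forbid_deep_c_states(void)
{
	pm_qos_add_request(&example_req, PM_QOS_CPU_DMA_LATENCY, 0);
}

static void example_allow_deep_c_states(void)
{
	pm_qos_remove_request(&example_req);
}
```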
-
- 11 May 2010, 1 commit

Committed by Mark Gross
This patch changes the string-based list management to a handle-based implementation to help with the hot-path use of pm_qos. It also renames much of the API to use "request" as opposed to "requirement", which was used in the initial implementation; I did this because "request" more accurately represents what it actually does.

Also, I added a string-based ABI for users wanting to use a string interface, so if the user writes 0xDDDDDDDD formatted hex it will be accepted by the interface. (Someone asked me for it and I don't think it hurts anything.)

This patch updates some documentation input I got from Randy.

Signed-off-by: markgross <mgross@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
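For illustration, a user-space sketch of the request interface described above: the request lives for as long as the file descriptor stays open, and per this changelog a hex string such as "0xDDDDDDDD" is accepted in addition to a raw s32 write:

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
	int32_t latency_us = 0;	/* request the lowest possible latency */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us))
		return 1;

	pause();	/* the request stays active while the fd is open */
	close(fd);	/* closing it drops the request */
	return 0;
}
```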
-
- 06 Aug 2008, 1 commit

Committed by Richard Hughes
A documentation cleanup patch, with a minor tweak to clarify units for kbs.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: mark gross <mgross@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Feb 2008, 1 commit

Committed by Mark Gross
The following patch is a generalization of the latency.c implementation done by Arjan last year. It provides infrastructure for more than one parameter, and exposes a user mode interface for processes to register their pm_qos expectations. This interface provides a kernel and user mode interface for registering performance expectations by drivers, subsystems and user space applications on one of the parameters.

Currently we have {cpu_dma_latency, network_latency, network_throughput} as the initial set of pm_qos parameters.

The infrastructure exposes multiple misc device nodes, one per implemented parameter. The set of parameters implemented is defined by pm_qos_power_init() and pm_qos_params.h. This is done because having the available parameters be runtime configurable or changeable from a driver was seen as too easy to abuse.

For each parameter a list of performance requirements is maintained along with an aggregated target value. The aggregated target value is updated with changes to the requirement list or elements of the list. Typically the aggregated target value is simply the max or min of the requirement values held in the parameter list elements.

From kernel mode the use of this interface is simple:

pm_qos_add_requirement(param_id, name, target_value):
  Will insert a named element in the list for that identified PM_QOS parameter with the target value. Upon change to this list the new target is recomputed and any registered notifiers are called only if the target value is now different.

pm_qos_update_requirement(param_id, name, new_target_value):
  Will search the list identified by the param_id for the named list element and then update its target value, calling the notification tree if the aggregated target is changed.

pm_qos_remove_requirement(param_id, name):
  Will search the identified list for the named element and remove it; after removal it will update the aggregate target and call the notification tree if the target was changed as a result of removing the named requirement.

From user mode:

Only processes can register a pm_qos requirement. To provide for automatic cleanup, the interface requires the process to register its parameter requirements in the following way:

To register the default pm_qos target for the specific parameter, the process must open one of /dev/[cpu_dma_latency, network_latency, network_throughput]. As long as the device node is held open that process has a registered requirement on the parameter. The name of the requirement is "process_<PID>", derived from current->pid from within the open system call.

To change the requested target value the process needs to write an s32 value to the open device node. This translates to a pm_qos_update_requirement call.

To remove the user mode request for a target value, simply close the device node.

[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: fix build again]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: mark gross <mgross@linux.intel.com>
Cc: "John W. Linville" <linville@tuxdriver.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Jaroslav Kysela <perex@suse.cz>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Adam Belay <abelay@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
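For illustration, a kernel-mode sketch of the original "requirement" interface exactly as described above (it was later replaced by the handle-based "request" API); the name string and the 55 microsecond value are arbitrary examples:

```c
#include <linux/pm_qos_params.h>	/* historical header for this interface */

static void example_start_streaming(void)
{
	/* Keep CPU DMA latency at or below 55 us while streaming. */
	pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, "example-audio", 55);
}

static void example_stop_streaming(void)
{
	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, "example-audio");
}
```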
-