- 25 Aug 2011, 5 commits
-
-
Committed by Jean Pihet
Implement the per-device PM QoS constraints by creating a device PM QoS API, which calls the PM QoS constraints management core code.

The per-device latency constraints data structures are stored in the device dev_pm_info struct.

The device PM code calls the init and destroy of the per-device constraints data struct in order to support the dynamic insertion and removal of devices in the system.

To minimize the data usage by the per-device constraints, the data struct is only allocated at the first call to dev_pm_qos_add_request. The data is later freed when the device is removed from the system.

A global mutex protects the constraints users from the data being allocated and freed.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
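A minimal kernel-side sketch of how a driver might use the per-device API described above. The function names follow the commit text, but the exact signatures have changed across kernel versions, so treat the arguments and the driver-side names as illustrative only.

    #include <linux/device.h>
    #include <linux/pm_qos.h>

    struct my_driver_data {
            struct dev_pm_qos_request latency_req;  /* caller-owned storage */
    };

    static int my_probe(struct device *dev, struct my_driver_data *data)
    {
            /* the per-device constraints data is allocated lazily by the
             * core on the first request for this device */
            return dev_pm_qos_add_request(dev, &data->latency_req, 100 /* usec */);
    }

    static void my_remove(struct device *dev, struct my_driver_data *data)
    {
            /* removing the request drops this driver's constraint */
            dev_pm_qos_remove_request(&data->latency_req);
    }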
-
Committed by Jean Pihet
In preparation for the per-device constraints support:
- rename update_target to pm_qos_update_target
- generalize and export pm_qos_update_target for usage by the upcoming per-device latency constraints framework:
  * operate on struct pm_qos_constraints for constraints management,
  * introduce an 'action' parameter for constraints add/update/remove,
  * the return value indicates if the aggregated constraint value has changed,
- update the internal code to operate on struct pm_qos_constraints
- add a NULL pointer check in the API functions

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
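A sketch of the call pattern the exported helper enables, assuming the 'action' values named above (add/update/remove); this is illustrative of the pattern rather than a definitive signature.

    #include <linux/plist.h>
    #include <linux/pm_qos.h>
    #include <linux/printk.h>

    /* 'action' selects add/update/remove; a non-zero return means the
     * aggregated constraint value changed */
    static void my_add_constraint(struct pm_qos_constraints *c,
                                  struct plist_node *node, s32 value)
    {
            if (pm_qos_update_target(c, node, PM_QOS_ADD_REQ, value))
                    pr_debug("aggregated PM QoS target changed\n");
    }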
-
Committed by Jean Pihet
In preparation for the per-device constraints support, re-organize the data structures:
- add a struct pm_qos_constraints which contains the constraints related data
- update struct pm_qos_object contents to the PM QoS internal object data, and add a pointer to struct pm_qos_constraints
- update the internal code to use the new data structs.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
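The rough shape of the split described above could look like the following; the field list is an assumption based on the commit text, not a verbatim copy of the patch.

    #include <linux/types.h>
    #include <linux/plist.h>
    #include <linux/notifier.h>
    #include <linux/miscdevice.h>

    /* constraints-related data, kept separate so it can later be reused
     * outside the global PM QoS objects (e.g. per device) */
    struct pm_qos_constraints {
            struct plist_head list;         /* active requests */
            s32 target_value;               /* aggregated constraint value */
            s32 default_value;              /* value used when the list is empty */
            int type;                       /* aggregation rule, e.g. min or max */
            struct blocking_notifier_head *notifiers;
    };

    /* PM QoS internal object data, now pointing at its constraints */
    struct pm_qos_object {
            struct pm_qos_constraints *constraints;
            struct miscdevice pm_qos_power_miscdev;
            char *name;
    };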
-
Committed by Jean Pihet
- Misc fixes to improve code readability:
  * rename struct pm_qos_request_list to struct pm_qos_request,
  * rename pm_qos_req parameter to req in internal code, consistently use req in the API parameters,
  * update the in-kernel API callers to the new parameter names,
  * rename of field names (requests, list, node, constraints)

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Acked-by: markgross <markgross@thegnar.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
The PM QoS implementation files are better named kernel/power/qos.c and include/linux/pm_qos.h.

The PM QoS support is compiled under the CONFIG_PM option.

Signed-off-by: Jean Pihet <j-pihet@ti.com>
Acked-by: markgross <markgross@thegnar.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 29 May 2011, 1 commit
-
-
Committed by Tim Chen
Thanks to the reviews and comments by Rafael, James, Mark and Andi. Here's version 2 of the patch incorporating your comments and also some updates to my previous patch comments.

I noticed that before entering idle state, the menu idle governor will look up the current pm_qos target value according to the list of qos requests received. This look up currently needs the acquisition of a lock to access the list of qos requests to find the qos target value, slowing down the entrance into idle state due to contention by multiple cpus to access this list. The contention is severe when there are a lot of cpus waking and going into idle.

For example, for a simple workload that has 32 pairs of processes ping ponging messages to each other, where 64 cpu cores are active in the test system, I see the following profile with 37.82% of cpu cycles spent in contention on pm_qos_lock:

    - 37.82% swapper  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
       - _raw_spin_lock_irqsave
          - 95.65% pm_qos_request
               menu_select
               cpuidle_idle_call
             - cpu_idle
                  99.98% start_secondary

A better approach would be to cache the updated pm_qos target value so reading it does not require lock acquisition, as in the patch below. With this patch the contention for pm_qos_lock is removed and I saw a 2.2X increase in throughput for my message passing workload.

cc: stable@kernel.org
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: James Bottomley <James.Bottomley@suse.de>
Acked-by: mark gross <markgross@thegnar.org>
Signed-off-by: Len Brown <len.brown@intel.com>
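The caching approach described above boils down to keeping the aggregated target in a field that can be read atomically; a minimal sketch of the idea follows, with illustrative helper names rather than the exact patch.

    #include <linux/types.h>
    #include <linux/atomic.h>

    /* the idle path only does an atomic read, so it never touches
     * pm_qos_lock; updates still happen under the lock whenever the
     * request list changes */
    static s32 pm_qos_read_cached_value(atomic_t *target_value)
    {
            return atomic_read(target_value);       /* lock-free fast path */
    }

    static void pm_qos_set_cached_value(atomic_t *target_value, s32 value)
    {
            atomic_set(target_value, value);        /* called with pm_qos_lock held */
    }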
-
- 19 Jul 2010, 1 commit
-
-
Committed by James Bottomley
All current users of pm_qos_add_request() have the ability to supply the memory required by the pm_qos routines, so make them do this and eliminate the kmalloc() within pm_qos_add_request(). This has the double benefit of making the call never fail and allowing it to be called from atomic context.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: mark gross <markgross@thegnar.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
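With the allocation removed, a caller embeds the request structure in its own data or declares it statically; a sketch of that pattern, assuming the interface as it looked at the time (the request struct was still named pm_qos_request_list, and the header was linux/pm_qos_params.h).

    #include <linux/pm_qos_params.h>

    /* caller-owned storage: no allocation inside pm_qos_add_request(),
     * so the call cannot fail and is safe from atomic context */
    static struct pm_qos_request_list my_dma_latency_req;

    static void my_driver_start(void)
    {
            pm_qos_add_request(&my_dma_latency_req, PM_QOS_CPU_DMA_LATENCY, 50);
    }

    static void my_driver_stop(void)
    {
            pm_qos_remove_request(&my_dma_latency_req);
    }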
-
- 11 May 2010, 1 commit
-
-
Committed by Mark Gross
This patch changes the string based list management to a handle-based implementation to help with the hot path use of pm-qos. It also renames much of the API to use "request" as opposed to "requirement", which was used in the initial implementation. I did this because request more accurately represents what it actually does.

Also, I added a string based ABI for users wanting to use a string interface. So if the user writes 0xDDDDDDDD formatted hex it will be accepted by the interface. (Someone asked me for it and I don't think it hurts anything.)

This patch updates some documentation input I got from Randy.

Signed-off-by: markgross <mgross@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
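From user space, the string ABI described above means a hex-formatted value can simply be written to the parameter's device node; a minimal sketch follows (the device path and the value are the standard cpu_dma_latency example, not taken from this commit).

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/dev/cpu_dma_latency", O_RDWR);
            if (fd < 0)
                    return 1;

            /* "0x..." hex strings are accepted in addition to a raw s32 */
            const char *hex_value = "0x00000032";   /* 50 usec */
            if (write(fd, hex_value, strlen(hex_value)) < 0)
                    return 1;

            pause();        /* the request stays active while the fd is open */
            return 0;
    }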
-
- 06 Aug 2008, 1 commit
-
-
Committed by Richard Hughes
A documentation cleanup patch. With a minor tweak to clarify units for kbs.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: mark gross <mgross@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Feb 2008, 1 commit
-
-
Committed by Mark Gross
The following patch is a generalization of the latency.c implementation done by Arjan last year. It provides infrastructure for more than one parameter, and exposes a user mode interface for processes to register pm_qos expectations.

This interface provides a kernel and user mode interface for registering performance expectations by drivers, subsystems and user space applications on one of the parameters.

Currently we have {cpu_dma_latency, network_latency, network_throughput} as the initial set of pm_qos parameters.

The infrastructure exposes multiple misc device nodes, one per implemented parameter. The set of parameters implemented is defined by pm_qos_power_init() and pm_qos_params.h. This is done because having the available parameters being runtime configurable or changeable from a driver was seen as too easy to abuse.

For each parameter a list of performance requirements is maintained along with an aggregated target value. The aggregated target value is updated with changes to the requirement list or elements of the list. Typically the aggregated target value is simply the max or min of the requirement values held in the parameter list elements.

From kernel mode the use of this interface is simple:

pm_qos_add_requirement(param_id, name, target_value):
  Will insert a named element in the list for that identified PM_QOS parameter with the target value. Upon change to this list the new target is recomputed and any registered notifiers are called only if the target value is now different.

pm_qos_update_requirement(param_id, name, new_target_value):
  Will search the list identified by the param_id for the named list element and then update its target value, calling the notification tree if the aggregated target is changed. It assumes that a requirement with that name is already registered.

pm_qos_remove_requirement(param_id, name):
  Will search the identified list for the named element and remove it; after removal it will update the aggregate target and call the notification tree if the target was changed as a result of removing the named requirement.

From user mode:

Only processes can register a pm_qos requirement. To provide for automatic cleanup, the interface requires the process to register its parameter requirements in the following way:

To register the default pm_qos target for the specific parameter, the process must open one of /dev/[cpu_dma_latency, network_latency, network_throughput].

As long as the device node is held open that process has a registered requirement on the parameter. The name of the requirement is "process_<PID>" derived from the current->pid from within the open system call.

To change the requested target value the process needs to write an s32 value to the open device node. This translates to a pm_qos_update_requirement call.

To remove the user mode request for a target value simply close the device node.

[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: fix build again]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: mark gross <mgross@linux.intel.com>
Cc: "John W. Linville" <linville@tuxdriver.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Jaroslav Kysela <perex@suse.cz>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Adam Belay <abelay@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
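A sketch of the kernel-mode calls described above; this is the original "requirement" API (later renamed to "request"), and the driver name, requirement name and latency values here are illustrative.

    #include <linux/pm_qos_params.h>

    static char my_req_name[] = "my_driver";

    static void my_driver_tighten_latency(void)
    {
            /* add a named cpu_dma_latency requirement of 55 usec */
            pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, my_req_name, 55);
    }

    static void my_driver_relax_latency(void)
    {
            /* update the named requirement; notifiers run only if the
             * aggregated target actually changes */
            pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, my_req_name, 200);
    }

    static void my_driver_drop_latency(void)
    {
            pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, my_req_name);
    }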
-