- 25 September 2014, 1 commit
-
-
Committed by Tomeu Vizoso
Also adds a class type PM_QOS_SUM that aggregates the values by summing them. It can be used by memory controllers to calculate the optimum clock frequency based on the bandwidth needs of the different memory clients. Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
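To illustrate how a summing class type differs from the existing MIN/MAX aggregation, here is a minimal sketch. It is not the in-tree implementation; the helper name and the exact field layout of struct pm_qos_constraints are assumptions, only the PM_QOS_SUM semantics come from the commit above.

```c
/* Hedged sketch: aggregate a PM QoS constraint list by class type.
 * pm_qos_aggregate() is a hypothetical helper for illustration only. */
static s32 pm_qos_aggregate(struct pm_qos_constraints *c)
{
	struct plist_node *node;
	s32 sum = 0;

	if (plist_head_empty(&c->list))
		return c->default_value;

	switch (c->type) {
	case PM_QOS_MIN:
		return plist_first(&c->list)->prio;	/* smallest request wins */
	case PM_QOS_MAX:
		return plist_last(&c->list)->prio;	/* largest request wins */
	case PM_QOS_SUM:
		plist_for_each(node, &c->list)		/* e.g. total bandwidth demand */
			sum += node->prio;
		return sum;
	default:
		return c->default_value;
	}
}
```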
-
- 11 February 2014, 2 commits
-
-
Committed by Rafael J. Wysocki
Add a new latency tolerance device PM QoS type to be used for specifying active state (RPM_ACTIVE) memory access (DMA) latency tolerance requirements for devices. It may be used to prevent hardware from choosing overly aggressive energy-saving operation modes (causing too much latency to appear) for the whole platform. This feature requires hardware support, so it will only be available for devices having a new .set_latency_tolerance() callback in struct dev_pm_info populated, in which case the routine pointed to by it should implement whatever is necessary to transfer the effective requirement value to the hardware. Whenever the effective latency tolerance changes for the device, its .set_latency_tolerance() callback will be executed and the effective value will be passed to it. If that value is negative, which means that the list of latency tolerance requirements for the device is empty, the callback is expected to switch the underlying hardware latency tolerance control mechanism to an autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and the hardware supports a special "no requirement" setting, the callback is expected to use it. That allows software to prevent the hardware from automatically updating the device's latency tolerance in response to its power state changes (e.g. during transitions from D3cold to D0), which generally may be done in the autonomous latency tolerance control mode. If .set_latency_tolerance() is present for the device, a new pm_qos_latency_tolerance_us attribute will be present in the device's power directory in sysfs. Then, user space can use that attribute to specify its latency tolerance requirement for the device, if any. Writing "any" to it means "no requirement, but do not let the hardware control latency tolerance" and writing "auto" to it allows the hardware to be switched to the autonomous mode if there are no other requirements from the kernel side in the device's list. This changeset includes a fix from Mika Westerberg. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
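A hedged sketch of how a driver might hook into this mechanism. Only the .set_latency_tolerance() callback in struct dev_pm_info, the "negative means no requirements" convention and PM_QOS_LATENCY_ANY come from the text above; the foo_* device, its register helper and the probe flow are made-up illustrations.

```c
/* Illustrative only: foo_write_ltr(), FOO_LTR_AUTO and FOO_LTR_NO_REQ
 * are hypothetical device specifics. */
static void foo_set_latency_tolerance(struct device *dev, s32 val)
{
	struct foo_device *foo = dev_get_drvdata(dev);

	if (val < 0) {
		/* No requirements: let the hardware manage LTR autonomously. */
		foo_write_ltr(foo, FOO_LTR_AUTO);
	} else if (val == PM_QOS_LATENCY_ANY) {
		/* "No requirement", but keep autonomous hardware updates off. */
		foo_write_ltr(foo, FOO_LTR_NO_REQ);
	} else {
		/* Transfer the effective tolerance (in microseconds) to hardware. */
		foo_write_ltr(foo, val);
	}
}

static int foo_probe(struct device *dev)
{
	/* Populating the callback is what makes pm_qos_latency_tolerance_us
	 * appear under the device's power/ directory in sysfs. */
	dev->power.set_latency_tolerance = foo_set_latency_tolerance;
	return 0;
}
```

-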
Committed by Rafael J. Wysocki
Add a new field, no_constraints_value, to struct pm_qos_constraints representing a list of PM QoS constraint requests, to be returned by pm_qos_get_value() when that list of requests is empty. That field will be equal to default_value for all of the existing global PM QoS classes and for the resume latency device PM QoS type, but it will be different from default_value for the new latency tolerance device PM QoS type introduced by the next changeset. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 18 October 2013, 1 commit
-
-
Committed by Andy Shevchenko
Let kstrtos32_from_user() do the necessary calls and checks. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
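For context, kstrtos32_from_user() combines copying the user buffer and converting the string to an s32 in one call. A minimal sketch of the pattern (the handler name and the surrounding file are hypothetical):

```c
/* Sketch: parse a decimal s32 written by user space in one call.
 * Earlier code typically copied to a local buffer and parsed by hand. */
static ssize_t example_write(struct file *filp, const char __user *buf,
			     size_t count, loff_t *f_pos)
{
	s32 value;
	int ret;

	ret = kstrtos32_from_user(buf, count, 10, &value);
	if (ret)
		return ret;	/* -EFAULT / -EINVAL / -ERANGE handled for us */

	/* ... use value, e.g. update a PM QoS request ... */
	return count;
}
```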
-
- 14 August 2013, 1 commit
-
-
Committed by Stephen Boyd
pm_qos_update_request_timeout() updates a qos and then schedules a delayed work item to bring the qos back down to the default after the timeout. When the work item runs, pm_qos_work_fn() will call pm_qos_update_request() and deadlock because it tries to cancel itself via cancel_delayed_work_sync(). Future callers of that qos will also hang waiting to cancel the work that is canceling itself. Let's extract the little bit of code that does the real work of pm_qos_update_request() and call it from the work function so that we don't deadlock. Before ed1ac6e9 (PM: don't use [delayed_]work_pending()) this didn't happen because the work function wouldn't try to cancel itself. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Reviewed-by: Tejun Heo <tj@kernel.org> Cc: 3.9+ <stable@vger.kernel.org> # 3.9+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
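The shape of the fix, sketched below under an assumed helper name: the locked update moves into a helper that never cancels the delayed work, and the work function calls that helper instead of the public API. This is an illustration of the pattern, not the exact patch.

```c
/* Sketch: the work function must not call pm_qos_update_request(),
 * which would cancel_delayed_work_sync() the very work item that is
 * currently running. __pm_qos_update_request() is an assumed name. */
static void __pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
{
	/* ... the real update of the constraint list happens here ... */
}

static void pm_qos_work_fn(struct work_struct *work)
{
	struct pm_qos_request *req = container_of(to_delayed_work(work),
						  struct pm_qos_request, work);

	__pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
}

void pm_qos_update_request(struct pm_qos_request *req, s32 new_value)
{
	cancel_delayed_work_sync(&req->work);	/* safe: never called from the work fn */
	__pm_qos_update_request(req, new_value);
}
```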
-
- 24 June 2013, 2 commits
-
-
Committed by Sahara
Adds tracepoints to pm_qos_add_request, pm_qos_update_request, pm_qos_remove_request, and pm_qos_update_request_timeout. They are useful for checking pm_qos_class, value, and timeout_us. Signed-off-by: Sahara <keun-o.park@windriver.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Sahara
This patch adds tracepoints to pm_qos_update_target and pm_qos_update_flags. They are useful for checking the PM QoS action, previous value and current value. Signed-off-by: Sahara <keun-o.park@windriver.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 21 June 2013, 1 commit
-
-
Committed by Sahara
The valid start index for pm_qos_array is not 0, but PM_QOS_CPU_DMA_LATENCY. There is a null_pm_qos at index 0 of pm_qos_array. However, null_pm_qos is not created as a misc device, so including index 0 when checking pm_qos_class, especially for file operations, is not appropriate here. [rjw: Changelog, a bit] Signed-off-by: Sahara <keun-o.park@windriver.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
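A minimal sketch of the kind of bounds check this implies; the helper name is made up, only the class constants come from the PM QoS headers and the text above.

```c
/* Illustrative: reject class 0 (null_pm_qos), which has no misc device. */
static int pm_qos_class_is_valid(int pm_qos_class)
{
	return pm_qos_class >= PM_QOS_CPU_DMA_LATENCY &&
	       pm_qos_class < PM_QOS_NUM_CLASSES;
}
```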
-
- 26 January 2013, 1 commit
-
-
Committed by Tejun Heo
There's no need to test whether a (delayed) work item is pending before queueing, flushing or cancelling it, so remove the work_pending() tests used in those cases. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 November 2012, 1 commit
-
-
Committed by Daniel Walter
Replace strict_strtoul() with kstrtoul() in pm_async_store() and pm_qos_power_write(). [rjw: Modified subject and changelog.] Signed-off-by: Daniel Walter <sahne@0x90.at> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 October 2012, 1 commit
-
-
Committed by Rafael J. Wysocki
Introduce struct pm_qos_flags_request and struct pm_qos_flags representing a PM QoS flags request type and a PM QoS flags constraint type, respectively. With these definitions the data structures will be arranged so that the list member of a struct pm_qos_flags object will contain the head of a list of struct pm_qos_flags_request objects representing all of the "flags" requests present for the given device. Then, the effective_flags member of a struct pm_qos_flags object will contain the bitwise OR of the flags members of all the struct pm_qos_flags_request objects in the list. Additionally, introduce the helper function pm_qos_update_flags() allowing the caller to manage the list of struct pm_qos_flags_request pointed to by the list member of struct pm_qos_flags. The flags are of type s32 so that the request's "value" field is always of the same type regardless of what kind of request it is (latency requests already have value fields of type s32). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Jean Pihet <j-pihet@ti.com> Acked-by: mark gross <markgross@thegnar.org>
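Based on the description above, the aggregation is a simple bitwise OR over all active flag requests. A hedged sketch: the recompute helper is hypothetical, and the struct members are taken from the commit text (list/effective_flags and node/flags).

```c
/* Sketch: recompute the effective flags as the OR of all requests.
 * Locking around list traversal is omitted for brevity. */
static void pm_qos_flags_recompute(struct pm_qos_flags *pqf)
{
	struct pm_qos_flags_request *req;
	s32 val = 0;

	list_for_each_entry(req, &pqf->list, node)
		val |= req->flags;

	pqf->effective_flags = val;
}
```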
-
- 08 September 2012, 1 commit
-
-
Committed by Luis Gonzalez Fernandez
pm_qos_get_value() doesn't return a value in all cases. Nothing interesting can happen after BUG(), but this prevents a compilation warning. [rjw: Changed the new return value to PM_QOS_DEFAULT_VALUE.] Signed-off-by: Luis Gonzalez Fernandez <luisgf@gmail.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 29 March 2012, 1 commit
-
-
Committed by MyungJoo Ham
The new API, pm_qos_update_request_timeout(), provides a timeout with pm_qos_update_request. For example, pm_qos_update_request_timeout(req, 100, 1000) means that the QoS request on req with value 100 will be active for 1000 microseconds. After 1000 microseconds, the QoS request through req is reset. If there is another pm_qos_update_request(req, x) during the 1000 us, this new request with value x will override it, as it is another request on the same req handle. A new request on the same req handle always overrides the previous request, whether it is a conventional request or a new timeout request. Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Mark Gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
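A usage sketch built around the example in the text; the request variable, the functions it is paired with and the chosen values are illustrative.

```c
/* Illustrative: hold a 100 us CPU DMA latency constraint for 1 ms.
 * After the timeout the request falls back to the default value,
 * unless another update on the same handle overrides it first. */
static struct pm_qos_request foo_qos_req;

static void foo_init(void)
{
	pm_qos_add_request(&foo_qos_req, PM_QOS_CPU_DMA_LATENCY,
			   PM_QOS_DEFAULT_VALUE);
}

static void foo_burst(void)
{
	/* value = 100 us, active for 1000 us */
	pm_qos_update_request_timeout(&foo_qos_req, 100, 1000);
}
```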
-
- 30 January 2012, 1 commit
-
-
Committed by Alex Frid
- Replace class ID #define with enumeration - Loop through PM QoS objects during initialization (rather than initializing them one-by-one) Signed-off-by: Alex Frid <afrid@nvidia.com> Reviewed-by: Antti Miettinen <amiettinen@nvidia.com> Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com> Reviewed-by: Scott Williams <scwilliams@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com> Acked-by: markgross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 08 November 2011, 1 commit
-
-
Committed by Dominik Brodowski
Since commit 4a31a334, the name of this misc device is not initialized, which leads to a funny device named /dev/(null) being created and /proc/misc containing an entry with just a number but no name. The latter leads to complaints by cryptsetup, which caused me to investigate this matter. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 05 November 2011, 1 commit
-
-
Committed by Guennadi Liakhovetski
Remove an "if" check that repeats an equivalent one 6 lines above. Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
These files were getting <linux/module.h> via an implicit non-obvious path, but we want to crush those out of existence since they cost time during compiles, processing thousands of lines of headers for no reason. Give them the lightweight header that just contains the EXPORT_SYMBOL infrastructure. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 25 August 2011, 6 commits
-
-
Committed by Jean Pihet
Add a global notification chain that gets called upon changes to the aggregated constraint value for any device. The notification callbacks pass the full constraint request data so that the callees have access to it. The current use is for the platform low-level code to access the target device of the constraint. Signed-off-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
In preparation for the per-device constraints support: - rename update_target to pm_qos_update_target - generalize and export pm_qos_update_target for usage by the upcoming per-device latency constraints framework: * operate on struct pm_qos_constraints for constraints management, * introduce an 'action' parameter for constraints add/update/remove, * the return value indicates if the aggregated constraint value has changed, - update the internal code to operate on struct pm_qos_constraints - add a NULL pointer check in the API functions Signed-off-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
In preparation for the per-device constraints support, re-organize the data structures: - add a struct pm_qos_constraints which contains the constraints-related data - update struct pm_qos_object contents to the PM QoS internal object data. Add a pointer to struct pm_qos_constraints - update the internal code to use the new data structs. Signed-off-by: Jean Pihet <j-pihet@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
Move around the PM QoS misc devices management code for better readability. Signed-off-by: Jean Pihet <j-pihet@ti.com> Acked-by: markgross <markgross@thegnar.org> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
- Misc fixes to improve code readability: * rename struct pm_qos_request_list to struct pm_qos_request, * rename the pm_qos_req parameter to req in internal code, consistently use req in the API parameters, * update the in-kernel API callers to the new parameter names, * rename fields (requests, list, node, constraints) Signed-off-by: Jean Pihet <j-pihet@ti.com> Acked-by: markgross <markgross@thegnar.org> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Pihet
The PM QoS implementation files are better named kernel/power/qos.c and include/linux/pm_qos.h. The PM QoS support is compiled under the CONFIG_PM option. Signed-off-by: Jean Pihet <j-pihet@ti.com> Acked-by: markgross <markgross@thegnar.org> Reviewed-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 08 July 2011, 1 commit
-
-
Committed by Dima Zavin
This was legacy code brought over from the RT tree and is no longer necessary. Signed-off-by: Dima Zavin <dima@android.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Daniel Walker <dwalker@codeaurora.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Link: http://lkml.kernel.org/r/1310084879-10351-2-git-send-email-dima@android.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 May 2011, 1 commit
-
-
Committed by Tim Chen
Thanks to the reviews and comments by Rafael, James, Mark and Andi. Here's version 2 of the patch incorporating your comments and also some update to my previous patch comments. I noticed that before entering idle state, the menu idle governor will look up the current pm_qos target value according to the list of qos requests received. This lookup currently needs the acquisition of a lock to access the list of qos requests to find the qos target value, slowing down the entrance into idle state due to contention by multiple cpus to access this list. The contention is severe when there are a lot of cpus waking and going into idle. For example, for a simple workload that has 32 pairs of processes ping ponging messages to each other, where 64 cpu cores are active in the test system, I see the following profile with 37.82% of cpu cycles spent in contention of pm_qos_lock: - 37.82% swapper [kernel.kallsyms] [k] _raw_spin_lock_irqsave - _raw_spin_lock_irqsave - 95.65% pm_qos_request menu_select cpuidle_idle_call - cpu_idle 99.98% start_secondary A better approach will be to cache the updated pm_qos target value so reading it does not require lock acquisition as in the patch below. With this patch the contention for pm_qos_lock is removed and I saw a 2.2X increase in throughput for my message passing workload. cc: stable@kernel.org Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Acked-by: Andi Kleen <ak@linux.intel.com> Acked-by: James Bottomley <James.Bottomley@suse.de> Acked-by: mark gross <markgross@thegnar.org> Signed-off-by: Len Brown <len.brown@intel.com>
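The essence of the optimization, sketched under assumed field and helper names: keep the aggregated target in a variable that readers can access without taking pm_qos_lock, and only update it (under the lock) when the request list changes.

```c
/* Sketch: readers use a cached, lock-free copy of the target value;
 * writers (add/update/remove of requests) take pm_qos_lock and refresh it.
 * Field and helper names here are assumptions for illustration. */
static s32 pm_qos_read_cached(struct pm_qos_object *o)
{
	return atomic_read(&o->target_value);	/* idle-entry hot path: no spinlock */
}

static void pm_qos_set_cached(struct pm_qos_object *o, s32 value)
{
	/* called with pm_qos_lock held, after the request list changed */
	atomic_set(&o->target_value, value);
}
```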
-
- 27 May 2011, 1 commit
-
-
Committed by Rafael J. Wysocki
Make pm_qos_power_write() accept values passed to it in the ASCII hex format either with or without an ending newline. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Mark Gross <markgross@thegnar.org>
-
- 10 April 2011, 1 commit
-
-
Committed by Justin P. Mattock
Signed-off-by: Justin P. Mattock <justinmattock@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 15 March 2011, 1 commit
-
-
Committed by Thomas Renninger
I have a machine where entering deep C-states broke. pm_qos was a hot candidate, but I couldn't find any way to double check without the need of recompiling. While in this case it was a driver bug (ath9k): https://bugzilla.kernel.org/show_bug.cgi?id=27532 powertop or others may want to read out cpu_dma_latency restrictions which could be the cause of preventing a machine entering deeper C-states. Output with this patch: # default value of 2000 * USEC_PER_SEC (0x77359400) cat /dev/network_latency |hexdump 0000000 9400 7735 0000004 # value of 55 us which is the reason for not entering C2 cat /dev/cpu_dma_latency |hexdump 0000000 0037 0000 0000004 There is no reason to hide this info -> make pm_qos files readable. Signed-off-by: Thomas Renninger <trenn@suse.de> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
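A small user-space sketch of the kind of check this enables, based on the hexdump example above (the device returns the current constraint as a raw 32-bit value); the program itself is illustrative.

```c
/* Illustrative user-space reader for /dev/cpu_dma_latency. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t latency_us;
	int fd = open("/dev/cpu_dma_latency", O_RDONLY);

	if (fd < 0 || read(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us)) {
		perror("cpu_dma_latency");
		return 1;
	}
	printf("current cpu_dma_latency constraint: %d us\n", latency_us);
	close(fd);
	return 0;
}
```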
-
- 16 November 2010, 1 commit
-
-
Committed by Colin Cross
pm_qos_get_value had min and max reversed, causing all pm_qos requests to have no effect. Signed-off-by: Colin Cross <ccross@android.com> Acked-by: mark <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Cc: stable@kernel.org
-
- 15 October 2010, 1 commit
-
-
Committed by Arnd Bergmann
All file_operations should get a .llseek operation so we can make nonseekable_open the default for future file operations without a .llseek pointer. The three cases that we can automatically detect are no_llseek, seq_lseek and default_llseek. For cases where we can we can automatically prove that the file offset is always ignored, we use noop_llseek, which maintains the current behavior of not returning an error from a seek. New drivers should normally not use noop_llseek but instead use no_llseek and call nonseekable_open at open time. Existing drivers can be converted to do the same when the maintainer knows for certain that no user code relies on calling seek on the device file. The generated code is often incorrectly indented and right now contains comments that clarify for each added line why a specific variant was chosen. In the version that gets submitted upstream, the comments will be gone and I will manually fix the indentation, because there does not seem to be a way to do that using coccinelle. Some amount of new code is currently sitting in linux-next that should get the same modifications, which I will do at the end of the merge window. Many thanks to Julia Lawall for helping me learn to write a semantic patch that does all this. ===== begin semantic patch ===== // This adds an llseek= method to all file operations, // as a preparation for making no_llseek the default. // // The rules are // - use no_llseek explicitly if we do nonseekable_open // - use seq_lseek for sequential files // - use default_llseek if we know we access f_pos // - use noop_llseek if we know we don't access f_pos, // but we still want to allow users to call lseek // @ open1 exists @ identifier nested_open; @@ nested_open(...) { <+... nonseekable_open(...) ...+> } @ open exists@ identifier open_f; identifier i, f; identifier open1.nested_open; @@ int open_f(struct inode *i, struct file *f) { <+... ( nonseekable_open(...) | nested_open(...) ) ...+> } @ read disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> } @ read_no_fpos disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { ... when != off } @ write @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> } @ write_no_fpos @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { ... when != off } @ fops0 @ identifier fops; @@ struct file_operations fops = { ... }; @ has_llseek depends on fops0 @ identifier fops0.fops; identifier llseek_f; @@ struct file_operations fops = { ... .llseek = llseek_f, ... }; @ has_read depends on fops0 @ identifier fops0.fops; identifier read_f; @@ struct file_operations fops = { ... .read = read_f, ... }; @ has_write depends on fops0 @ identifier fops0.fops; identifier write_f; @@ struct file_operations fops = { ... .write = write_f, ... }; @ has_open depends on fops0 @ identifier fops0.fops; identifier open_f; @@ struct file_operations fops = { ... .open = open_f, ... 
}; // use no_llseek if we call nonseekable_open //////////////////////////////////////////// @ nonseekable1 depends on !has_llseek && has_open @ identifier fops0.fops; identifier nso ~= "nonseekable_open"; @@ struct file_operations fops = { ... .open = nso, ... +.llseek = no_llseek, /* nonseekable */ }; @ nonseekable2 depends on !has_llseek @ identifier fops0.fops; identifier open.open_f; @@ struct file_operations fops = { ... .open = open_f, ... +.llseek = no_llseek, /* open uses nonseekable */ }; // use seq_lseek for sequential files ///////////////////////////////////// @ seq depends on !has_llseek @ identifier fops0.fops; identifier sr ~= "seq_read"; @@ struct file_operations fops = { ... .read = sr, ... +.llseek = seq_lseek, /* we have seq_read */ }; // use default_llseek if there is a readdir /////////////////////////////////////////// @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier readdir_e; @@ // any other fop is used that changes pos struct file_operations fops = { ... .readdir = readdir_e, ... +.llseek = default_llseek, /* readdir is present */ }; // use default_llseek if at least one of read/write touches f_pos ///////////////////////////////////////////////////////////////// @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read.read_f; @@ // read fops use offset struct file_operations fops = { ... .read = read_f, ... +.llseek = default_llseek, /* read accesses f_pos */ }; @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, ... + .llseek = default_llseek, /* write accesses f_pos */ }; // Use noop_llseek if neither read nor write accesses f_pos /////////////////////////////////////////////////////////// @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; identifier write_no_fpos.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, .read = read_f, ... +.llseek = noop_llseek, /* read and write both use no f_pos */ }; @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write_no_fpos.write_f; @@ struct file_operations fops = { ... .write = write_f, ... +.llseek = noop_llseek, /* write uses no f_pos */ }; @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; @@ struct file_operations fops = { ... .read = read_f, ... +.llseek = noop_llseek, /* read uses no f_pos */ }; @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; @@ struct file_operations fops = { ... +.llseek = noop_llseek, /* no read or write fn */ }; ===== End semantic patch ===== Signed-off-by: NArnd Bergmann <arnd@arndb.de> Cc: Julia Lawall <julia@diku.dk> Cc: Christoph Hellwig <hch@infradead.org>
-
- 23 September 2010, 1 commit
-
-
Committed by Joe Perches
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 11 September 2010, 1 commit
-
-
Committed by mark gross
Correct some pr_debug() misuse and add a stronger parameter check to pm_qos_write() for the ASCII hex value case. Thanks to Dan Carpenter for pointing out the problem! Signed-off-by: mark gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 27 August 2010, 1 commit
-
-
Committed by Saravana Kannan
Fix the pm_qos_add_request() kerneldoc comment that doesn't reflect the behavior of the function after the last PM QoS update. Signed-off-by: Saravana Kannan <skannan@codeaurora.org> Acked-by: mark gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 25 August 2010, 1 commit
-
-
Committed by David Alan Gilbert
sparse spotted that the kzalloc() in pm_qos_power_open() in the current Linus' git tree had its parameters swapped. Fix this. Signed-off-by: David Alan Gilbert <linux@treblig.org> Acked-by: mark gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 19 July 2010, 2 commits
-
-
Committed by James Bottomley
All current users of pm_qos_add_request() have the ability to supply the memory required by the pm_qos routines, so make them do this and eliminate the kmalloc() from pm_qos_add_request(). This has the double benefit of making the call never fail and allowing it to be called from atomic context. Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: mark gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
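After this change the caller owns the request object, so the typical pattern becomes embedding it in caller-owned storage. The sketch below is illustrative driver code using the struct name of this era, pm_qos_request_list (later renamed to pm_qos_request, see the August 2011 entries above); the foo_* functions are made up.

```c
/* Illustrative: the caller supplies the storage for the request, e.g.
 * embedded in a driver-private structure, so no kmalloc() is needed and
 * pm_qos_add_request() cannot fail. */
struct foo_device {
	struct pm_qos_request_list qos_req;	/* caller-owned request object */
	/* ... other driver state ... */
};

static void foo_open(struct foo_device *foo)
{
	pm_qos_add_request(&foo->qos_req, PM_QOS_CPU_DMA_LATENCY, 50 /* us */);
}

static void foo_close(struct foo_device *foo)
{
	pm_qos_remove_request(&foo->qos_req);
}
```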
-
Committed by James Bottomley
A lot of the pm_qos extremal value handling really duplicates what a priority ordered list does, just in a less efficient fashion. Simply redoing the implementation in terms of a plist gets rid of a lot of this junk (although there are several other strange things that could do with tidying up, like pm_qos_request_list having to carry the pm_qos_class with every node, simply because it doesn't get passed in to pm_qos_update_request even though every caller knows full well what parameter it's updating). I think this redo is a win independent of android, so we should do something like this now. Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: mark gross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
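For reference, the point of a priority-ordered list here is that updating a request is just a delete/re-insert with the new value as the priority, and the extremal value stays at one end of the list. A minimal sketch of that update step, using the kernel's plist primitives (the wrapper itself is hypothetical, and locking is omitted):

```c
#include <linux/plist.h>

/* Sketch: updating a request's value on a plist means re-inserting the
 * node with its new priority; the list stays sorted, so the min/max
 * target is always the first/last node. */
static void plist_update_value(struct plist_head *head,
			       struct plist_node *node, int new_value)
{
	plist_del(node, head);
	plist_node_init(node, new_value);	/* the value doubles as the priority */
	plist_add(node, head);
}
```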
-
- 17 May 2010, 1 commit
-
-
Committed by Mark Gross
This update handles a use case where pm_qos update requests need to silently fail if the update is being sent to a handle that is NULL. The problem was that the original pm_qos silently failed when a request update was passed for a parameter that had not yet been added to the list. This update restores that behavior. Signed-off-by: markgross <markgross@thegnar.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 11 May 2010, 1 commit
-
-
Committed by Mark Gross
This patch changes the string based list management to a handle based implementation to help with the hot path use of pm-qos; it also renames much of the API to use "request" as opposed to "requirement", which was used in the initial implementation. I did this because request more accurately represents what it actually does. Also, I added a string based ABI for users wanting to use a string interface. So if the user writes 0xDDDDDDDD formatted hex it will be accepted by the interface. (someone asked me for it and I don't think it hurts anything.) This patch updates some documentation input I got from Randy. Signed-off-by: markgross <mgross@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 14 October 2009, 2 commits
-
-
Committed by Jonathan Corbet
"name" is a poor name for a file-global variable. It was used in three different functions, with no mutual exclusion. But it's just a tiny, temporary string; let's just move it onto the stack in the functions that need it. Also use snprintf() just in case. Signed-off-by: Jonathan Corbet <corbet@lwn.net> LKML-Reference: <20091010153349.113570550@linutronix.de> Acked-by: Mark Gross <mgross@linux.intel.com> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jonathan Corbet
pm_qos_power_open got its lock_kernel() calls from the open() pushdown. A look at the code shows that the only global resources accessed are pm_qos_array and "name". pm_qos_array doesn't change (things pointed to therein do change, but they are atomics and/or are protected by pm_qos_lock). Accesses to "name" are totally unprotected with or without the BKL; that will be fixed shortly. The BKL is not helpful here; take it out. Signed-off-by: Jonathan Corbet <corbet@lwn.net> LKML-Reference: <20091010153349.071381158@linutronix.de> Acked-by: Mark Gross <mgross@linux.intel.com> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-