- 08 Dec 2016, 2 commits
-
-
Committed by xing wei
If there are any wakeup events being processed, a read of /sys/power/wakeup_count will block, so print the names of all active wakeup sources to help find out who is preventing system suspend from triggering. While at it, change pr_info() in pm_print_active_wakeup_sources() to pr_debug() to avoid excessive log noise.
Signed-off-by: xing wei <xing.wei@intel.com>
[ rjw: Subject & changelog ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Sahitya Tummala
If async_suspend is enabled for parent and child devices, the PM framework has to ensure that the parent's async suspend gets called only after the child's async suspend is done. If the child's async suspend fails with an error, the parent's async suspend must not be invoked. The current code uses async_error to ensure this, but there is a problem with it in __device_suspend(): the function notifies the completion of the child's async suspend before updating its error via the async_error variable. As a result, the parent's async suspend gets invoked even though its child's suspend has failed. Fix this bug by updating async_error before notifying the child's completion.
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
[ rjw: Rearranged whitespace ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
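The ordering fix described here can be pictured in a few lines of C. This is a simplified illustration, not the actual __device_suspend() body, and device_suspend_callback() is a hypothetical stand-in for the real callback dispatch.

```c
/*
 * Simplified sketch: record a failure in async_error *before* completing
 * dev->power.completion, so a parent waiting in dpm_wait() observes the
 * error and skips its own suspend.
 */
static int async_error;	/* file-scope in the real code */

static int __device_suspend_sketch(struct device *dev, pm_message_t state)
{
	int error = device_suspend_callback(dev, state);	/* hypothetical helper */

	if (error)
		async_error = error;		/* update the error first ... */

	complete_all(&dev->power.completion);	/* ... then wake up waiters */
	return error;
}
```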
-
- 07 Dec 2016, 2 commits
-
-
Committed by Tony Lindgren
I noticed some wakeirq flakiness with consumer drivers not using autosuspend. For drivers not using autosuspend, the wakeirq may never get unmasked in rpm_suspend() because of irq desc->depth. We configure dedicated wakeirqs to start with IRQ_NOAUTOEN, as we naturally don't want them running until rpm_suspend() is called. However, when a consumer driver initially calls pm_runtime_get(), we now wrongly start with a disable_irq_nosync() call on the dedicated wakeirq that is already disabled to start with. This causes desc->depth to toggle between 1 and 2 instead of the usual 0 and 1, which can prevent enable_irq() from unmasking the wakeirq, as that only happens at desc->depth 1. This does not necessarily show up with drivers using autosuspend, as there is time for disable_irq_nosync() before rpm_suspend() gets called after the autosuspend timeout. Let's fix the issue by adding wirq->status that lazily gets set on the first rpm_suspend(). We also need PM runtime core private functions dev_pm_enable_wake_irq_check() and dev_pm_disable_wake_irq_check() so we can enable the dedicated wakeirq on the first rpm_suspend(). While at it, let's also fix the comments for dev_pm_enable_wake_irq() and dev_pm_disable_wake_irq(). Those can still be used by consumer drivers as needed, because the IRQ core manages the interrupt usecount for us.
Fixes: 4990d4fe (PM / Wakeirq: Add automated device wake IRQ handling)
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Lina Iyer
Re-using the idle state definition provided by arm,idle-state for domain idle states creates a lot of confusion and limits further evolution of the domain idle definition. To keep things clear and simple, define idle states for a domain using a new compatible, "domain-idle-state". Fix the existing PM domains code to look for the newly defined compatible.
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 Dec 2016, 7 commits
-
-
Committed by Viresh Kumar
If a platform-specific OPP driver has called this routine first and set the regulators, then the second call from the cpufreq-dt driver will hit the WARN_ON(). Remove the WARN_ON(), but continue to return an error in such cases.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
The generic set_opp() handler isn't sufficient for platforms with complex DVFS. For example, some TI platforms have multiple regulators for a CPU device. The order in which the various supplies need to be programmed is known only to the platform code, and it is best to leave it to that code. This patch implements APIs to register a platform-specific set_opp() callback.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
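As a rough illustration of the API this adds, a platform driver might register its callback along the lines below. The helper name dev_pm_opp_register_set_opp_helper() and the dev_pm_set_opp_data argument are my recollection of this series and should be treated as assumptions, not verified signatures.

```c
/* Hedged sketch: hook a platform-specific set_opp() into the OPP core.
 * The helper name and data-structure name are assumptions. */
static int ti_custom_set_opp(struct dev_pm_set_opp_data *data)
{
	/* Program the CPU supplies in the order the platform requires,
	 * then change the clock rate (platform-specific sequencing). */
	return 0;
}

static int ti_opp_init(struct device *cpu_dev)
{
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_register_set_opp_helper(cpu_dev,
							ti_custom_set_opp);
	return PTR_ERR_OR_ZERO(opp_table);
}
```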
-
Committed by Viresh Kumar
Later patches will add support for custom set_opp() callbacks. This patch separates out the code into a _generic_set_opp() handler in order to prepare for that.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
This patch adds infrastructure to manage multiple regulators and updates the only user (cpufreq-dt) of dev_pm_opp_set{put}_regulator(). This is preparatory work for adding full support for devices with multiple regulators.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
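For illustration, a consumer such as cpufreq-dt might hand the OPP core its supply names along these lines. The plural dev_pm_opp_set_regulators() signature and the supply names are assumptions based on this changelog.

```c
/* Hedged sketch: register multiple supplies for a CPU device.
 * The supply names are purely illustrative. */
static const char * const cpu_supplies[] = { "vdd-mpu", "vbb-mpu" };

static int opp_regulators_init(struct device *cpu_dev)
{
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_regulators(cpu_dev, cpu_supplies,
					      ARRAY_SIZE(cpu_supplies));
	return PTR_ERR_OR_ZERO(opp_table);
}
```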
-
Committed by Viresh Kumar
Pass the entire supply structure instead of passing all of its fields individually.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Viresh Kumar
This is a preparatory step for supporting multiple regulators per device. Move the voltage/current variables to a new structure.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
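The new structure looks roughly like this; the field names are as I recall them from this series, so treat the exact layout as an assumption.

```c
/* Per-supply data for one OPP, grouping what used to be loose fields. */
struct dev_pm_opp_supply {
	unsigned long u_volt;		/* target voltage in uV */
	unsigned long u_volt_min;	/* minimum acceptable voltage in uV */
	unsigned long u_volt_max;	/* maximum acceptable voltage in uV */
	unsigned long u_amp;		/* maximum current drawn in uA */
};
```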
-
Committed by Viresh Kumar
The OPP structure must not be used outside of the RCU-protected section. Cache the values to be used in separate variables instead.
Cc: 4.6+ <stable@vger.kernel.org> # 4.6+
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
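The pattern the fix enforces is shown below as a minimal sketch using the RCU-era OPP helpers: copy the values you need before rcu_read_unlock().

```c
/* Copy what you need out of the OPP while still inside the RCU
 * read-side critical section; 'opp' is invalid after the unlock. */
static int get_target_voltage(struct device *dev, unsigned long *freq,
			      unsigned long *u_volt)
{
	struct dev_pm_opp *opp;

	rcu_read_lock();

	opp = dev_pm_opp_find_freq_ceil(dev, freq);
	if (IS_ERR(opp)) {
		rcu_read_unlock();
		return PTR_ERR(opp);
	}

	*u_volt = dev_pm_opp_get_voltage(opp);	/* cache before unlocking */

	rcu_read_unlock();
	return 0;
}
```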
-
- 01 Dec 2016, 5 commits
-
-
Committed by Stephen Boyd
Joonyoung Shim reported an interesting problem on his ARM octa-core Odroid-XU3 platform. During system suspend, dev_pm_opp_put_regulator() was failing for a struct device for which dev_pm_opp_set_regulator() had been called earlier. This happened because an earlier call to dev_pm_opp_of_cpumask_remove_table() (from cpufreq-dt.c) removed all the entries from opp_table->dev_list apart from the last CPU device in the cpumask of CPUs sharing the OPP, whereas both dev_pm_opp_set_regulator() and dev_pm_opp_put_regulator() get the CPU device for the first CPU in the cpumask. As a result, the OPP core failed to find the OPP table for the struct device. This patch fixes the problem by returning a pointer to the opp_table from dev_pm_opp_set_regulator() and using that as the parameter to dev_pm_opp_put_regulator(). This ensures that dev_pm_opp_put_regulator() doesn't fail to find the opp table. Note that a similar design problem also exists in the other dev_pm_opp_put_*() APIs, but those aren't currently used by anyone, so we don't need to update them for now.
Cc: 4.4+ <stable@vger.kernel.org> # 4.4+
Reported-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[ Viresh: Wrote commit log and tested on exynos 5250 ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
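The updated calling convention can be sketched as follows; the supply name "cpu" is illustrative and the exact signatures are inferred from this changelog rather than copied from the headers.

```c
/* Keep the opp_table pointer returned at set time and hand it back at
 * put time, so no per-device lookup is needed on the put path. */
static struct opp_table *cpu_opp_table;

static int cpu_regulator_init(struct device *cpu_dev)
{
	cpu_opp_table = dev_pm_opp_set_regulator(cpu_dev, "cpu");
	return PTR_ERR_OR_ZERO(cpu_opp_table);
}

static void cpu_regulator_exit(void)
{
	dev_pm_opp_put_regulator(cpu_opp_table);
}
```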
-
Committed by Geert Uytterhoeven
EPROBE_DEFER is not an error, hence printing an error message like "renesas_irqc e61c0000.interrupt-controller: failed to add to PM domain always-on: -517" may confuse the user. Suppress the error message in the EPROBE_DEFER case to fix this.
Reported-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
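The usual shape of this kind of fix is sketched below; attach_to_domain() and pd_name are hypothetical placeholders, and only the dev_err() suppression is the point.

```c
/* Only complain when the error is not -EPROBE_DEFER, since deferral is a
 * normal part of probing.  attach_to_domain() is a hypothetical helper. */
static int attach_dev_to_domain(struct device *dev, const char *pd_name)
{
	int ret = attach_to_domain(dev, pd_name);

	if (ret < 0 && ret != -EPROBE_DEFER)
		dev_err(dev, "failed to add to PM domain %s: %d\n",
			pd_name, ret);
	return ret;
}
```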
-
Committed by Andrew Lutomirski
nvme wants a module parameter that overrides the default latency tolerance. This makes it easy for nvme to reflect that default in sysfs.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andrew Lutomirski
If the value was already 'auto', then writing 'auto' again would incorrectly fail.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Andrew Lutomirski
Negative values are special. Don't let users write them directly.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 30 Nov 2016, 1 commit
-
-
Committed by Chen Yu
Power management suspend/resume tracing (ab)uses the RTC to store suspend/resume information persistently. As a consequence, the RTC value is clobbered when timekeeping is resumed and tries to inject the sleep time. Commit a4f8f666 ("timekeeping: Cap array access in timekeeping_debug") plugged an out-of-bounds array access in the timekeeping debug code which was caused by the clobbered RTC value, but we still use the clobbered RTC value for sleep time injection into kernel timekeeping, which results in random adjustments depending on the stored "hash" value. To prevent this, keep track of the RTC clobbering and ignore the invalid RTC timestamp at resume. If the system resumed successfully, clear the flag (which marks the RTC as unusable), warn the user about the RTC clobber and recommend adjusting the RTC with 'ntpdate' or 'rdate'.
[jstultz: Fixed up pr_warn formatting, and implemented suggestions from Ingo]
[ tglx: Rewrote changelog ]
Originally-from: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Xunlei Pang <xlpang@redhat.com>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1480372524-15181-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Nov 2016, 1 commit
-
-
Committed by Ulf Hansson
When the pm_runtime_force_suspend|resume() helpers were invented, we still had CONFIG_PM_RUNTIME and CONFIG_PM_SLEEP as separate Kconfig options. To make sure these helpers worked for all combinations without introducing too much complexity, the device was always resumed in pm_runtime_force_resume(). More precisely, when CONFIG_PM_SLEEP was set and CONFIG_PM_RUNTIME was unset, we needed to resume the device, as the subsystem/driver couldn't rely on using runtime PM to do it. Since the CONFIG_PM_RUNTIME option was merged into CONFIG_PM a while ago, the combination of CONFIG_PM_SLEEP without CONFIG_PM_RUNTIME no longer exists. For this reason we can now rely on the subsystem/driver to use runtime PM to resume the device, instead of forcing that to be done in all cases. In other words, let's defer the runtime resume to a later point when it's actually needed.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 11 Nov 2016, 2 commits
-
-
Committed by Dan Carpenter
The first argument of WARN() is the condition, followed by the message.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Brian Norris
Consider two devices, A and B, where B is a child of A and B uses asynchronous suspend (it does not matter whether A is sync or async). If B fails to suspend_noirq() or suspend_late(), or is interrupted by a wakeup (pm_wakeup_pending()), it aborts and sets the async_error variable. However, device A does not (immediately) check the async_error variable; it may continue to run its own suspend_noirq()/suspend_late() callback. This is bad. We can resolve this problem by doing our error and wakeup checking (particularly, for the async_error flag) after waiting for children to suspend, instead of before. This also helps align the logic for the noirq and late suspend cases with the logic in __device_suspend(). It's easy to observe this erroneous behavior by, for example, forcing a device to sleep a bit in its suspend_noirq() (to ensure the parent is waiting for the child to complete), then returning an error and watching the parent's suspend_noirq() still get called. (Or similarly, fake a wakeup event at the right (or is it wrong?) time.)
Fixes: de377b39 (PM / sleep: Asynchronous threads for suspend_late)
Fixes: 28b6fd6e (PM / sleep: Asynchronous threads for suspend_noirq)
Reported-by: Jeffy Chen <jeffy.chen@rock-chips.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
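A heavily trimmed sketch of the reordering: wait for the children first, then decide whether this device may proceed. run_noirq_callback() is a hypothetical placeholder and most of the real function is omitted.

```c
static int async_error;	/* file-scope in the real code */

static int __device_suspend_noirq_sketch(struct device *dev, pm_message_t state)
{
	dpm_wait_for_children(dev, true);	/* children finish (or fail) first */

	if (async_error)			/* a child already failed: abort */
		return async_error;

	if (pm_wakeup_pending()) {		/* a wakeup arrived: abort */
		async_error = -EBUSY;
		return -EBUSY;
	}

	return run_noirq_callback(dev, state);	/* hypothetical helper */
}
```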
-
- 09 Nov 2016, 1 commit
-
-
Committed by Ulf Hansson
When resuming a device in __pm_runtime_set_status(), the prerequisite is that its parent must already be active; otherwise an error code is returned and the device's status remains suspended. When suspending a device, no similar constraint is validated. Let's make the behaviour consistent by not allowing a device with an active child to be suspended, unless it has been explicitly set to ignore its children.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 01 Nov 2016, 4 commits
-
-
Committed by Rafael J. Wysocki
If the device has no links to suppliers that should be used for runtime PM (links with DEVICE_LINK_PM_RUNTIME set), there is no reason to walk the list of suppliers for that device during runtime suspend and resume. Add a simple mechanism to detect that case and possibly avoid that unnecessary overhead.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Rafael J. Wysocki
Modify the runtime PM framework to use device links to ensure that supplier devices will not be suspended if any of their consumer devices are active. The idea is to reference count suppliers on the consumer's resume and drop the references to them on its suspend. Whether or not the supplier has been reference counted by the consumer's (runtime) resume is stored in a new field (rpm_active) in the link object for each link. It may be necessary to clean up those references when the supplier is unbinding, which is why links whose status is DEVICE_LINK_SUPPLIER_UNBIND are skipped by the runtime suspend and resume code. The above means that if the consumer device is probed in the runtime-active state, the supplier has to be resumed and reference counted by device_link_add(), so that the code works as expected on its (runtime) suspend. There is a new flag, DEVICE_LINK_RPM_ACTIVE, to tell device_link_add() about that (in which case the caller is responsible for making sure that the consumer really will be runtime-active when runtime PM is enabled for it). The other new link flag, DEVICE_LINK_PM_RUNTIME, tells the core whether or not the link should be used for runtime PM at all.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
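Conceptually, the consumer's runtime resume and suspend paths do something like the sketch below. The flag and field names (DEVICE_LINK_PM_RUNTIME, rpm_active) come from this changelog, while the list iteration and locking details are simplified.

```c
/* Take a runtime PM reference on every supplier when the consumer
 * resumes, and drop it again when the consumer suspends. */
static void rpm_get_suppliers_sketch(struct device *dev)
{
	struct device_link *link;

	list_for_each_entry(link, &dev->links.suppliers, c_node)
		if (link->flags & DEVICE_LINK_PM_RUNTIME) {
			pm_runtime_get_sync(link->supplier);
			link->rpm_active = true;
		}
}

static void rpm_put_suppliers_sketch(struct device *dev)
{
	struct device_link *link;

	list_for_each_entry(link, &dev->links.suppliers, c_node)
		if (link->rpm_active) {
			pm_runtime_put(link->supplier);
			link->rpm_active = false;
		}
}
```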
-
Committed by Rafael J. Wysocki
Make the device suspend/resume part of the core system suspend/resume code use device links to ensure that supplier and consumer devices will be suspended and resumed in the right order in the case of async suspend/resume. The idea, roughly, is to use dpm_wait() to wait for all consumers before a supplier device's suspend and to wait for all suppliers before a consumer device's resume.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Rafael J. Wysocki
Currently, there is a problem with taking functional dependencies between devices into account. What I mean by a "functional dependency" is when the driver of device B needs device A to be functional and (generally) its driver to be present in order to work properly. This has certain consequences for power management (suspend/resume and runtime PM ordering) and shutdown ordering of these devices. In general, it also implies that the driver of A needs to be working for B to be probed successfully, and it cannot be unbound from the device before B's driver.

Support for representing those functional dependencies between devices is added here to allow the driver core to track them and act on them in certain cases where applicable. The argument for doing that in the driver core is that there are quite a few distinct use cases involving device dependencies, they are relatively hard to get right in a driver (if one wants to address all of them properly), and it only gets worse when multiplied by the number of drivers potentially needing to do it. Moreover, at least one case (asynchronous system suspend/resume) cannot be handled in a single driver at all, because it requires the driver of A to wait for B to suspend (during system suspend) and the driver of B to wait for A to resume (during system resume).

For this reason, represent dependencies between devices as "links", with the help of struct device_link objects, each containing pointers to the "linked" devices, a list node for each of them, status information, flags, and an RCU head for synchronization. Also add two new list heads, representing the lists of links to the devices that depend on the given one (consumers) and to the devices depended on by it (suppliers), and a "driver presence status" field (needed for figuring out initial states of device links) to struct device. The entire data structure consisting of all of the lists of link objects for all devices is protected by a mutex (for link object addition/removal and for list walks during device driver probing and removal) and by SRCU (for list walking in other cases that will be introduced by subsequent change sets). If CONFIG_SRCU is not selected, however, an rwsem is used for protecting the entire data structure. In addition, each link object has an internal status field whose value reflects whether or not drivers are bound to the devices pointed to by the link, whether probing/removal of their drivers is in progress, and so on. That field is only modified under the device links mutex, but it may be read outside of it in some cases (introduced by subsequent change sets), so modifications of it are annotated with WRITE_ONCE().

New links are added by calling device_link_add(), which takes three arguments: pointers to the devices in question and flags. In particular, if DL_FLAG_STATELESS is set in the flags, the link status is not to be taken into account for this link and the driver core will not manage it. In turn, if DL_FLAG_AUTOREMOVE is set in the flags, the driver core will remove the link automatically when the consumer device driver unbinds from it. One of the actions carried out by device_link_add() is to reorder the lists used for device shutdown and system suspend/resume to put the consumer device, along with all of its children and all of its consumers (and so on, recursively), at the ends of those lists in order to ensure the right ordering between all of the supplier and consumer devices. For this reason, it is not possible to create a link between two devices if the would-be supplier device already depends on the would-be consumer device, either as a direct descendant of it or as a consumer of one of its direct descendants or one of its consumers, and so on.

There are two types of link objects, persistent and non-persistent. The persistent ones stay around until one of the target devices is deleted, while the non-persistent ones are removed automatically when the consumer driver unbinds from its device (i.e. they are assumed to be valid only as long as the consumer device has a driver bound to it). Persistent links are created by default, and non-persistent links are created when the DL_FLAG_AUTOREMOVE flag is passed to device_link_add(). Both persistent and non-persistent device links can be deleted with an explicit call to device_link_del().

Links created without the DL_FLAG_STATELESS flag set are managed by the driver core using a simple state machine. There are 5 states each link can be in: DORMANT (unused), AVAILABLE (the supplier driver is present and functional), CONSUMER_PROBE (the consumer driver is probing), ACTIVE (both supplier and consumer drivers are present and functional), and SUPPLIER_UNBIND (the supplier driver is unbinding). The driver core updates the link state automatically depending on what happens to the linked devices, and for each link state specific actions are taken in addition to that. For example, if the supplier driver unbinds from its device, the driver core will also unbind the drivers of all of its consumers automatically, under the assumption that they cannot function properly without the supplier. Analogously, the driver core will only allow the consumer driver to bind to its device if the supplier driver is present and functional (i.e. the link is in the AVAILABLE state). If that's not the case, it will rely on the existing deferred probing mechanism to wait for the supplier driver to become available.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
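A minimal usage sketch of the API described above: device_link_add(), device_link_del() and the DL_FLAG_* flags are the ones named in the text, while the probe function and devices are placeholders.

```c
/* Create a managed link during consumer probe; with DL_FLAG_AUTOREMOVE
 * the driver core tears it down when the consumer driver unbinds. */
static int consumer_bind_supplier(struct device *consumer,
				  struct device *supplier)
{
	struct device_link *link;

	link = device_link_add(consumer, supplier, DL_FLAG_AUTOREMOVE);
	if (!link)
		return -EINVAL;
	return 0;
}

/* A stateless link, by contrast, is left entirely to the caller:
 *	link = device_link_add(consumer, supplier, DL_FLAG_STATELESS);
 *	...
 *	device_link_del(link);
 */
```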
-
- 27 Oct 2016, 1 commit
-
-
Committed by Colin Ian King
The return value of of_count_phandle_with_args() can be negative, so we should avoid a kcalloc() of a negative count of genpd_power_state structs by sanity checking whether count is zero or less.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kevin Hilman <khilman@baylibre.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
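A sketch of the check, under the assumption that the phandle list being counted is the domain-idle-states property mentioned elsewhere in this log:

```c
/* of_count_phandle_with_args() may return a negative errno, so bail out
 * before using the result to size the allocation. */
static int alloc_domain_states(struct device_node *np,
			       struct genpd_power_state **out, int *n)
{
	int count = of_count_phandle_with_args(np, "domain-idle-states", NULL);
	struct genpd_power_state *states;

	if (count <= 0)		/* error or no states: nothing to allocate */
		return -EINVAL;

	states = kcalloc(count, sizeof(*states), GFP_KERNEL);
	if (!states)
		return -ENOMEM;

	*out = states;
	*n = count;
	return 0;
}
```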
-
- 22 Oct 2016, 5 commits
-
-
Committed by Lina Iyer
Generic power domains currently support turning on/off only in process context. This prevents the use of PM domains for domains that could be powered on/off in a context where IRQs are disabled. Many such domains exist today and, because of this limitation, do not get powered off when the IRQ-safe devices in them are powered off. However, not all domains can operate in IRQ-safe contexts, so genpd has to support both cases, where the domain may or may not operate in IRQ-safe contexts. Configuring genpd to use an appropriate lock for a given domain allows domains that have IRQ-safe devices to runtime suspend and resume in atomic context. To achieve domain-specific locking, set the domain's ->flags to GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd should use a spinlock instead of a mutex for locking the domain. Locking is abstracted through genpd_lock() and genpd_unlock() functions that use the flag to determine the appropriate lock to be used for that domain. Domains that have lower suspend and resume latencies and can operate with IRQs disabled may now be able to save power when the component devices and sub-domains are idle at runtime. The restriction this imposes on the domain hierarchy is that non-IRQ-safe domains may not have IRQ-safe subdomains, but IRQ-safe domains may have both IRQ-safe and non-IRQ-safe subdomains and devices.
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
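The locking abstraction can be pictured as below. This is a simplified sketch that collapses the real per-domain lock handling into a single helper and assumes slock/mlock-style fields, so treat the details as illustrative.

```c
/* Pick the lock type based on GENPD_FLAG_IRQ_SAFE, so IRQ-safe domains
 * can be powered on/off with interrupts disabled. */
static void genpd_lock_sketch(struct generic_pm_domain *genpd)
{
	if (genpd->flags & GENPD_FLAG_IRQ_SAFE)
		spin_lock_irqsave(&genpd->slock, genpd->lock_flags);
	else
		mutex_lock(&genpd->mlock);
}

static void genpd_unlock_sketch(struct generic_pm_domain *genpd)
{
	if (genpd->flags & GENPD_FLAG_IRQ_SAFE)
		spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
	else
		mutex_unlock(&genpd->mlock);
}
```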
-
Committed by Lina Iyer
Abstract the genpd lock/unlock calls, in preparation for the domain-specific locks added in the following patches.
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Lina Iyer
Save the fwnode for the genpd state in the state node. PM domain clients may use the fwnode to read in the platform-specific domain state properties and associate them with the state.
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Lina Iyer
This patch allows domains to define idle states in the DT. SoCs can define domain idle states in DT using the "domain-idle-states" property of the domain provider. Add an API to read the idle states from DT so that they can be set in the genpd object. This patch is based on the original patch by Marc Titinger.
Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
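A provider might consume the new parsing API roughly as follows. The helper name of_genpd_parse_idle_states() is my recollection of what this series adds, so treat the exact name and signature as an assumption.

```c
/* Parse "domain-idle-states" from the provider node and attach the
 * resulting (dynamically allocated) states to the genpd. */
static int setup_domain_states(struct generic_pm_domain *genpd,
			       struct device_node *np)
{
	struct genpd_power_state *states;
	int n, ret;

	ret = of_genpd_parse_idle_states(np, &states, &n);
	if (ret)
		return ret;

	genpd->states = states;
	genpd->state_count = n;
	return 0;
}
```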
-
Committed by Lina Iyer
Allow PM domain states to be defined dynamically by the drivers. This removes the limitation on the maximum number of states possible for a domain.
Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 21 Oct 2016, 4 commits
-
-
Committed by Ulf Hansson
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Ulf Hansson
The exported function pm_children_suspended() has only one caller: the runtime PM internal function rpm_check_suspend_allowed(). Let's clean up this code by removing pm_children_suspended() altogether and instead doing the one-liner check directly in rpm_check_suspend_allowed().
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
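The "one-liner" check referred to above amounts to the following, written here as a tiny helper for illustration (in the real code it is open-coded inside rpm_check_suspend_allowed()):

```c
/* A device with active children may not be runtime suspended unless it
 * has been told to ignore its children. */
static bool children_prevent_suspend(struct device *dev)
{
	return !dev->power.ignore_children &&
	       atomic_read(&dev->power.child_count) > 0;
}
```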
-
Committed by Masahiro Yamada
These log messages are wrong because _of_get_opp_desc_node() returns an operating-points-v2 node. Commit a6eed752 ("PM / OPP: passing NULL to PTR_ERR()") fixed static checker warnings and reworded the messages at the same time (the latter was not mentioned in the git log). Restore the correct messages.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Masahiro Yamada
Since commit f47b72a1 ("PM / OPP: Move CONFIG_OF dependent code in a separate file"), this function is defined and called only in drivers/base/power/opp/of.c.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 26 Sep 2016, 1 commit
-
-
Committed by Dave Gerlach
The OPP framework allows each OPP to set an opp-supported-hw property which provides values that are matched against the supported_hw values provided by the platform, to limit support for certain OPPs on specific hardware. Currently, if the platform does not set supported_hw values, all OPPs are interpreted as supported, even if they have provided their own opp-supported-hw values. If an OPP has provided opp-supported-hw, it is indicating that there is some specific hardware configuration it is supported by. These constraints should be honored, and if no supported_hw has been provided by the platform, there is no way to determine whether that OPP is actually supported, so it should be marked as not supported.
Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
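The resulting logic is sketched below; the field and property names follow the changelog, but the body is a simplified illustration rather than the actual OPP core code.

```c
/* An OPP that carries its own opp-supported-hw constraint can only be
 * trusted if the platform told the OPP core which hardware it runs on. */
static bool opp_is_supported_sketch(struct opp_table *table,
				    struct device_node *np)
{
	if (!table->supported_hw)
		return !of_find_property(np, "opp-supported-hw", NULL);

	/* ... otherwise match the property against table->supported_hw ... */
	return true;
}
```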
-
- 24 Sep 2016, 4 commits
-
-
Committed by Ulf Hansson
These are internal static functions in genpd. Let's conform to the naming rules by dropping the "pm_" prefix from them.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Ulf Hansson
Measuring latency does by itself contribute to increased latency, so we should avoid it when it isn't needed. Currently genpd measures latencies in the system PM phase for the ->power_on|off() callbacks, except in the syscore case, when it's not allowed to use ktime_get() as timekeeping may be suspended. Since there should be plenty of occasions during runtime PM to perform these measurements, let's rely on that and drop them from system PM. This also makes the measurements consistent with how the runtime PM callbacks are measured (as those may be invoked during system PM).
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Ulf Hansson
In cases where the PM domain hasn't assigned a system PM callback, the PM core falls back to checking for the callback at the driver level instead. This makes it redundant to assign a pm_generic_* helper function to the corresponding system PM callback at the PM domain level. Therefore, let's remove these assignments in pm_genpd_init().
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Ulf Hansson
There's no need to validate the PM domain by using genpd_lookup_dev() when removing the device via genpd's genpd_dev_pm_detach() function. That's because this function can't be called unless there is a valid PM domain for the device. To simplify the behaviour, let's move code from pm_genpd_remove_device() into a new internal function, genpd_remove_device(), which is called from both pm_genpd_remove_device() and genpd_dev_pm_detach().
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Lina Iyer <lina.iyer@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-