Commit 68d0080f authored by Linus Torvalds

Merge branch 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6

* 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
  PCI / PM: Block races between runtime PM and system sleep
  PM / Domains: Update documentation
  PM / Runtime: Handle clocks correctly if CONFIG_PM_RUNTIME is unset
  PM: Fix async resume following suspend failure
  PM: Free memory bitmaps if opening /dev/snapshot fails
  PM: Rename dev_pm_info.in_suspend to is_prepared
  PM: Update documentation regarding sysdevs
  PM / Runtime: Update doc: usage count no longer incremented across system PM
......@@ -520,59 +520,20 @@ Support for power domains is provided through the pwr_domain field of struct
device. This field is a pointer to an object of type struct dev_power_domain,
defined in include/linux/pm.h, providing a set of power management callbacks
analogous to the subsystem-level and device driver callbacks that are executed
for the given device during all power transitions, in addition to the respective
subsystem-level callbacks. Specifically, the power domain "suspend" callbacks
(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
executed after the analogous subsystem-level callbacks, while the power domain
"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
etc.) are executed before the analogous subsystem-level callbacks. Error codes
returned by the "suspend" and "resume" power domain callbacks are ignored.
Power domain ->runtime_idle() callback is executed before the subsystem-level
->runtime_idle() callback and the result returned by it is not ignored. Namely,
if it returns error code, the subsystem-level ->runtime_idle() callback will not
be called and the helper function rpm_idle() executing it will return error
code. This mechanism is intended to help platforms where saving device state
is a time consuming operation and should only be carried out if all devices
in the power domain are idle, before turning off the shared power resource(s).
Namely, the power domain ->runtime_idle() callback may return error code until
the pm_runtime_idle() helper (or its asynchronous version) has been called for
all devices in the power domain (it is recommended that the returned error code
be -EBUSY in those cases), preventing the subsystem-level ->runtime_idle()
callback from being run prematurely.
The support for device power domains is only relevant to platforms needing to
use the same subsystem-level (e.g. platform bus type) and device driver power
management callbacks in many different power domain configurations and wanting
to avoid incorporating the support for power domains into the subsystem-level
callbacks. The other platforms need not implement it or take it into account
in any way.
System Devices
--------------
System devices (sysdevs) follow a slightly different API, which can be found in
include/linux/sysdev.h
drivers/base/sys.c
System devices will be suspended with interrupts disabled, and after all other
devices have been suspended. On resume, they will be resumed before any other
devices, and also with interrupts disabled. These things occur in special
"sysdev_driver" phases, which affect only system devices.
Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when
the non-boot CPUs are all offline and IRQs are disabled on the remaining online
CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a
sleep state (or a system image is created). During resume (or after the image
has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs
are enabled on the only online CPU, the non-boot CPUs are enabled, and the
resume_noirq (or thaw_noirq or restore_noirq) phase begins.
Code to actually enter and exit the system-wide low power state sometimes
involves hardware details that are only known to the boot firmware, and
may leave a CPU running software (from SRAM or flash memory) that monitors
the system and manages its wakeup sequence.
for the given device during all power transitions, instead of the respective
subsystem-level callbacks. Specifically, if a device's pm_domain pointer is
not NULL, the ->suspend() callback from the object pointed to by it will be
executed instead of its subsystem's (e.g. bus type's) ->suspend() callback and
analogously for all of the remaining callbacks. In other words, power management
domain callbacks, if defined for the given device, always take precedence over
the callbacks provided by the device's subsystem (e.g. bus type).
The support for device power management domains is only relevant to platforms
needing to use the same device driver power management callbacks in many
different power domain configurations and wanting to avoid incorporating the
support for power domains into subsystem-level callbacks, for example by
modifying the platform bus type. Other platforms need not implement it or take
it into account in any way.
Device Low Power (suspend) States
......
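To illustrate the semantics described in the updated text above, here is a minimal sketch of attaching a device to a power management domain using the structures this document names (struct dev_power_domain and the pwr_domain field of struct device). The foo_* names and the use of the pm_generic_* runtime helpers are illustrative assumptions, not part of this commit.

	#include <linux/pm.h>
	#include <linux/pm_runtime.h>
	#include <linux/platform_device.h>

	/* Hypothetical domain callbacks; a real domain would gate and ungate
	 * the shared power resources here. */
	static int foo_domain_runtime_suspend(struct device *dev)
	{
		/* Run the driver's callback, then power down shared resources. */
		return pm_generic_runtime_suspend(dev);
	}

	static int foo_domain_runtime_resume(struct device *dev)
	{
		/* Power up shared resources, then run the driver's callback. */
		return pm_generic_runtime_resume(dev);
	}

	static struct dev_power_domain foo_pwr_domain = {
		.ops = {
			.runtime_suspend = foo_domain_runtime_suspend,
			.runtime_resume  = foo_domain_runtime_resume,
		},
	};

	/* Once pwr_domain is set, the domain's callbacks run for this device
	 * instead of the platform bus type's callbacks, as described above. */
	static void foo_attach_domain(struct platform_device *pdev)
	{
		pdev->dev.pwr_domain = &foo_pwr_domain;
	}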
......@@ -566,11 +566,6 @@ to do this is:
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
The PM core always increments the run-time usage counter before calling the
->prepare() callback and decrements it after calling the ->complete() callback.
Hence disabling run-time PM temporarily like this will not cause any run-time
suspend callbacks to be lost.
7. Generic subsystem callbacks
Subsystems may wish to conserve code space by using the set of generic power
......
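The quoted runtime PM documentation shows a driver re-activating runtime PM with pm_runtime_set_active() followed by pm_runtime_enable(). A minimal sketch of that pattern in a driver's system sleep callbacks, assuming hypothetical foo_hw_off()/foo_hw_on() helpers:

	#include <linux/pm_runtime.h>

	static int foo_suspend(struct device *dev)
	{
		/* Prevent runtime PM callbacks from racing with system sleep. */
		pm_runtime_disable(dev);
		return foo_hw_off(dev);		/* hypothetical helper */
	}

	static int foo_resume(struct device *dev)
	{
		int ret = foo_hw_on(dev);	/* hypothetical helper */

		/* The device is fully operational again: record that before
		 * re-enabling runtime PM, as the text above recommends. */
		pm_runtime_set_active(dev);
		pm_runtime_enable(dev);
		return ret;
	}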
......@@ -387,7 +387,7 @@ static int pm_runtime_clk_notify(struct notifier_block *nb,
clknb = container_of(nb, struct pm_clk_notifier_block, nb);
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
case BUS_NOTIFY_BIND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
enable_clock(dev, *con_id);
......@@ -395,7 +395,7 @@ static int pm_runtime_clk_notify(struct notifier_block *nb,
enable_clock(dev, NULL);
}
break;
case BUS_NOTIFY_DEL_DEVICE:
case BUS_NOTIFY_UNBOUND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
disable_clock(dev, *con_id);
......
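With the change above, the listed clocks are handled on BUS_NOTIFY_BIND_DRIVER/BUS_NOTIFY_UNBOUND_DRIVER instead of device add/delete. For context, a sketch of how platform code might register such a notifier, assuming the pm_runtime_clk_add_notifier() helper and struct pm_clk_notifier_block that accompany this code; the connection IDs and foo_* names are examples only.

	#include <linux/init.h>
	#include <linux/pm_runtime.h>
	#include <linux/platform_device.h>

	/* Clocks to manage for devices on the platform bus; the connection
	 * IDs below are examples, not a requirement of the interface. */
	static struct pm_clk_notifier_block foo_platform_bus_clk_nb = {
		.con_ids = { "fck", "ick", NULL },
	};

	static int __init foo_pm_clk_init(void)
	{
		/* After this change the listed clocks are enabled when a driver
		 * binds and disabled when it is unbound, rather than at device
		 * add/delete time. */
		pm_runtime_clk_add_notifier(&platform_bus_type,
					    &foo_platform_bus_clk_nb);
		return 0;
	}
	core_initcall(foo_pm_clk_init);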
......@@ -57,7 +57,8 @@ static int async_error;
*/
void device_pm_init(struct device *dev)
{
dev->power.in_suspend = false;
dev->power.is_prepared = false;
dev->power.is_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
......@@ -91,7 +92,7 @@ void device_pm_add(struct device *dev)
pr_debug("PM: Adding info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
mutex_lock(&dpm_list_mtx);
if (dev->parent && dev->parent->power.in_suspend)
if (dev->parent && dev->parent->power.is_prepared)
dev_warn(dev, "parent %s should not be sleeping\n",
dev_name(dev->parent));
list_add_tail(&dev->power.entry, &dpm_list);
......@@ -511,7 +512,14 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
dpm_wait(dev->parent, async);
device_lock(dev);
dev->power.in_suspend = false;
/*
* This is a fib. But we'll allow new children to be added below
* a resumed device, even if the device hasn't been completed yet.
*/
dev->power.is_prepared = false;
if (!dev->power.is_suspended)
goto Unlock;
if (dev->pwr_domain) {
pm_dev_dbg(dev, state, "power domain ");
......@@ -548,6 +556,9 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
}
End:
dev->power.is_suspended = false;
Unlock:
device_unlock(dev);
complete_all(&dev->power.completion);
......@@ -670,7 +681,7 @@ void dpm_complete(pm_message_t state)
struct device *dev = to_device(dpm_prepared_list.prev);
get_device(dev);
dev->power.in_suspend = false;
dev->power.is_prepared = false;
list_move(&dev->power.entry, &list);
mutex_unlock(&dpm_list_mtx);
......@@ -835,11 +846,11 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
device_lock(dev);
if (async_error)
goto End;
goto Unlock;
if (pm_wakeup_pending()) {
async_error = -EBUSY;
goto End;
goto Unlock;
}
if (dev->pwr_domain) {
......@@ -877,6 +888,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
}
End:
dev->power.is_suspended = !error;
Unlock:
device_unlock(dev);
complete_all(&dev->power.completion);
......@@ -1042,7 +1056,7 @@ int dpm_prepare(pm_message_t state)
put_device(dev);
break;
}
dev->power.in_suspend = true;
dev->power.is_prepared = true;
if (!list_empty(&dev->power.entry))
list_move_tail(&dev->power.entry, &dpm_prepared_list);
put_device(dev);
......
......@@ -624,7 +624,7 @@ static int pci_pm_prepare(struct device *dev)
* system from the sleep state, we'll have to prevent it from signaling
* wake-up.
*/
pm_runtime_resume(dev);
pm_runtime_get_sync(dev);
if (drv && drv->pm && drv->pm->prepare)
error = drv->pm->prepare(dev);
......@@ -638,6 +638,8 @@ static void pci_pm_complete(struct device *dev)
if (drv && drv->pm && drv->pm->complete)
drv->pm->complete(dev);
pm_runtime_put_sync(dev);
}
#else /* !CONFIG_PM_SLEEP */
......
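The PCI change replaces pm_runtime_resume() with a pm_runtime_get_sync()/pm_runtime_put_sync() pair spanning the prepare/complete phases, so the device cannot runtime-suspend while a system sleep transition is in progress. A hedged sketch of the same pairing in a generic subsystem's callbacks (the foo_* names are illustrative):

	#include <linux/pm_runtime.h>

	static int foo_bus_pm_prepare(struct device *dev)
	{
		/* Resume the device and hold a runtime PM reference so it
		 * cannot runtime-suspend for the rest of the transition. */
		pm_runtime_get_sync(dev);
		return 0;
	}

	static void foo_bus_pm_complete(struct device *dev)
	{
		/* Drop the reference taken in ->prepare(); runtime suspend may
		 * happen again once the transition is over. */
		pm_runtime_put_sync(dev);
	}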
......@@ -375,7 +375,7 @@ static int usb_unbind_interface(struct device *dev)
* Just re-enable it without affecting the endpoint toggles.
*/
usb_enable_interface(udev, intf, false);
} else if (!error && !intf->dev.power.in_suspend) {
} else if (!error && !intf->dev.power.is_prepared) {
r = usb_set_interface(udev, intf->altsetting[0].
desc.bInterfaceNumber, 0);
if (r < 0)
......@@ -960,7 +960,7 @@ void usb_rebind_intf(struct usb_interface *intf)
}
/* Try to rebind the interface */
if (!intf->dev.power.in_suspend) {
if (!intf->dev.power.is_prepared) {
intf->needs_binding = 0;
rc = device_attach(&intf->dev);
if (rc < 0)
......@@ -1107,7 +1107,7 @@ static int usb_resume_interface(struct usb_device *udev,
if (intf->condition == USB_INTERFACE_UNBOUND) {
/* Carry out a deferred switch to altsetting 0 */
if (intf->needs_altsetting0 && !intf->dev.power.in_suspend) {
if (intf->needs_altsetting0 && !intf->dev.power.is_prepared) {
usb_set_interface(udev, intf->altsetting[0].
desc.bInterfaceNumber, 0);
intf->needs_altsetting0 = 0;
......
......@@ -654,13 +654,13 @@ static inline int device_is_registered(struct device *dev)
static inline void device_enable_async_suspend(struct device *dev)
{
if (!dev->power.in_suspend)
if (!dev->power.is_prepared)
dev->power.async_suspend = true;
}
static inline void device_disable_async_suspend(struct device *dev)
{
if (!dev->power.in_suspend)
if (!dev->power.is_prepared)
dev->power.async_suspend = false;
}
......
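These helpers only update the async_suspend flag while the device is not prepared for a system sleep transition. A short, hypothetical example of a driver opting into asynchronous suspend/resume at probe time, where is_prepared is guaranteed to be false:

	#include <linux/device.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		/* probe() runs outside a system sleep transition, so
		 * is_prepared is false here and the flag really gets set. */
		device_enable_async_suspend(&pdev->dev);
		return 0;
	}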
......@@ -425,7 +425,8 @@ struct dev_pm_info {
pm_message_t power_state;
unsigned int can_wakeup:1;
unsigned int async_suspend:1;
unsigned int in_suspend:1; /* Owned by the PM core */
bool is_prepared:1; /* Owned by the PM core */
bool is_suspended:1; /* Ditto */
spinlock_t lock;
#ifdef CONFIG_PM_SLEEP
struct list_head entry;
......
......@@ -113,8 +113,10 @@ static int snapshot_open(struct inode *inode, struct file *filp)
if (error)
pm_notifier_call_chain(PM_POST_RESTORE);
}
if (error)
if (error) {
free_basic_memory_bitmaps();
atomic_inc(&snapshot_device_available);
}
data->frozen = 0;
data->ready = 0;
data->platform_support = 0;
......