Commit 46ee9645 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
  PM: PM QOS update fix
  Freezer / cgroup freezer: Update stale locking comments
  PM / platform_bus: Allow runtime PM by default
  i2c: Fix bus-level power management callbacks
  PM QOS update
  PM / Hibernate: Fix block_io.c printk warning
  PM / Hibernate: Group swap ops
  PM / Hibernate: Move the first_sector out of swsusp_write
  PM / Hibernate: Separate block_io
  PM / Hibernate: Snapshot cleanup
  FS / libfs: Implement simple_write_to_buffer
  PM / Hibernate: document open(/dev/snapshot) side effects
  PM / Runtime: Add sysfs debug files
  PM: Improve device power management document
  PM: Update device power management document
  PM: Allow runtime_suspend methods to call pm_schedule_suspend()
  PM: pm_wakeup - switch to using bool
@@ -18,44 +18,46 @@
 and pm_qos_params.h.  This is done because having the available parameters
 being runtime configurable or changeable from a driver was seen as too easy to
 abuse.

-For each parameter a list of performance requirements is maintained along with
+For each parameter a list of performance requests is maintained along with
 an aggregated target value.  The aggregated target value is updated with
-changes to the requirement list or elements of the list.  Typically the
-aggregated target value is simply the max or min of the requirement values held
+changes to the request list or elements of the list.  Typically the
+aggregated target value is simply the max or min of the request values held
 in the parameter list elements.

 From kernel mode the use of this interface is simple:
-pm_qos_add_requirement(param_id, name, target_value):
-Will insert a named element in the list for that identified PM_QOS parameter
-with the target value.  Upon change to this list the new target is recomputed
-and any registered notifiers are called only if the target value is now
-different.

-pm_qos_update_requirement(param_id, name, new_target_value):
-Will search the list identified by the param_id for the named list element and
-then update its target value, calling the notification tree if the aggregated
-target is changed.  with that name is already registered.
+handle = pm_qos_add_request(param_class, target_value):
+Will insert an element into the list for that identified PM_QOS class with the
+target value.  Upon change to this list the new target is recomputed and any
+registered notifiers are called only if the target value is now different.
+Clients of pm_qos need to save the returned handle.

-pm_qos_remove_requirement(param_id, name):
-Will search the identified list for the named element and remove it, after
-removal it will update the aggregate target and call the notification tree if
-the target was changed as a result of removing the named requirement.
+void pm_qos_update_request(handle, new_target_value):
+Will update the list element pointed to by the handle with the new target value
+and recompute the new aggregated target, calling the notification tree if the
+target is changed.
+
+void pm_qos_remove_request(handle):
+Will remove the element.  After removal it will update the aggregate target and
+call the notification tree if the target was changed as a result of removing
+the request.

 From user mode:
-Only processes can register a pm_qos requirement.  To provide for automatic
-cleanup for process the interface requires the process to register its
-parameter requirements in the following way:
+Only processes can register a pm_qos request.  To provide for automatic
+cleanup of a process, the interface requires the process to register its
+parameter requests in the following way:

 To register the default pm_qos target for the specific parameter, the process
 must open one of /dev/[cpu_dma_latency, network_latency, network_throughput]

 As long as the device node is held open that process has a registered
-requirement on the parameter.  The name of the requirement is "process_<PID>"
-derived from the current->pid from within the open system call.
+request on the parameter.

-To change the requested target value the process needs to write a s32 value to
-the open device node.  This translates to a pm_qos_update_requirement call.
+To change the requested target value the process needs to write an s32 value to
+the open device node.  Alternatively the user mode program could write a hex
+string for the value using 10 char long format e.g. "0x12345678".  This
+translates to a pm_qos_update_request call.

 To remove the user mode request for a target value simply close the device
 node.
...
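From kernel mode the add/update/remove sequence documented above maps onto driver code directly. The following is a minimal sketch: the my_driver_* names and the latency figures are hypothetical illustrations, and only the pm_qos_* calls and PM_QOS_CPU_DMA_LATENCY come from the interface this commit introduces.

	#include <linux/errno.h>
	#include <linux/pm_qos_params.h>

	static struct pm_qos_request_list *my_driver_qos_req;

	static int my_driver_start(void)
	{
		/* Register a request; the returned handle must be saved. */
		my_driver_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, 100);
		if (!my_driver_qos_req)
			return -ENOMEM;
		return 0;
	}

	static void my_driver_busy(void)
	{
		/* Tighten the latency bound while the hardware is active. */
		pm_qos_update_request(my_driver_qos_req, 50);
	}

	static void my_driver_stop(void)
	{
		/* Drop the request; the aggregate target is recomputed. */
		pm_qos_remove_request(my_driver_qos_req);
		my_driver_qos_req = NULL;
	}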
@@ -24,6 +24,10 @@
 assumed to be in the resume mode.  The device cannot be open for simultaneous
 reading and writing.  It is also impossible to have the device open more than
 once at a time.

+Even opening the device has side effects.  Data structures are
+allocated, and PM_HIBERNATION_PREPARE / PM_RESTORE_PREPARE chains are
+called.
+
 The ioctl() commands recognized by the device are:

 SNAPSHOT_FREEZE - freeze user space processes (the current process is
...
@@ -698,7 +698,7 @@ static int acpi_processor_power_seq_show(struct seq_file *seq, void *offset)
 		   "max_cstate:              C%d\n"
 		   "maximum allowed latency: %d usec\n",
 		   pr->power.state ? pr->power.state - pr->power.states : 0,
-		   max_cstate, pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY));
+		   max_cstate, pm_qos_request(PM_QOS_CPU_DMA_LATENCY));

 	seq_puts(seq, "states:\n");
...
@@ -967,17 +967,17 @@ static int platform_pm_restore_noirq(struct device *dev)

 int __weak platform_pm_runtime_suspend(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_suspend(dev);
 };

 int __weak platform_pm_runtime_resume(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_resume(dev);
 };

 int __weak platform_pm_runtime_idle(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_idle(dev);
 };

 #else /* !CONFIG_PM_RUNTIME */
...
@@ -229,14 +229,16 @@ int __pm_runtime_suspend(struct device *dev, bool from_wq)

 	if (retval) {
 		dev->power.runtime_status = RPM_ACTIVE;
-		pm_runtime_cancel_pending(dev);
-
 		if (retval == -EAGAIN || retval == -EBUSY) {
+			if (dev->power.timer_expires == 0)
 				notify = true;
 			dev->power.runtime_error = 0;
+		} else {
+			pm_runtime_cancel_pending(dev);
 		}
 	} else {
 		dev->power.runtime_status = RPM_SUSPENDED;
+		pm_runtime_deactivate_timer(dev);

 		if (dev->parent) {
 			parent = dev->parent;
@@ -659,8 +661,6 @@ int pm_schedule_suspend(struct device *dev, unsigned int delay)

 	if (dev->power.runtime_status == RPM_SUSPENDED)
 		retval = 1;
-	else if (dev->power.runtime_status == RPM_SUSPENDING)
-		retval = -EINPROGRESS;
 	else if (atomic_read(&dev->power.usage_count) > 0
 	    || dev->power.disable_depth > 0)
 		retval = -EAGAIN;
...
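In driver terms, the runtime.c change above means a runtime_suspend callback can arm a new suspend timer itself and return -EAGAIN or -EBUSY without that timer being cancelled behind its back. A hedged sketch follows; the bar_* names, the stubbed hardware helpers, and the 5000 ms delay are hypothetical.

	#include <linux/device.h>
	#include <linux/pm_runtime.h>

	/* Hypothetical hardware helpers, stubbed out for the sketch. */
	static bool bar_hardware_busy(struct device *dev) { return false; }
	static void bar_power_down(struct device *dev) { }

	static int bar_runtime_suspend(struct device *dev)
	{
		if (bar_hardware_busy(dev)) {
			/*
			 * Reschedule a suspend attempt for later; with the
			 * change above, returning -EBUSY no longer cancels
			 * the timer we just armed.
			 */
			pm_schedule_suspend(dev, 5000);
			return -EBUSY;
		}
		bar_power_down(dev);
		return 0;
	}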
@@ -5,6 +5,7 @@
 #include <linux/device.h>
 #include <linux/string.h>
 #include <linux/pm_runtime.h>
+#include <asm/atomic.h>
 #include "power.h"

 /*
@@ -143,7 +144,59 @@ wake_store(struct device * dev, struct device_attribute *attr,

 static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);

-#ifdef CONFIG_PM_SLEEP_ADVANCED_DEBUG
+#ifdef CONFIG_PM_ADVANCED_DEBUG
+#ifdef CONFIG_PM_RUNTIME
+
+static ssize_t rtpm_usagecount_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
+}
+
+static ssize_t rtpm_children_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", dev->power.ignore_children ?
+		0 : atomic_read(&dev->power.child_count));
+}
+
+static ssize_t rtpm_enabled_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
+		return sprintf(buf, "disabled & forbidden\n");
+	else if (dev->power.disable_depth)
+		return sprintf(buf, "disabled\n");
+	else if (dev->power.runtime_auto == false)
+		return sprintf(buf, "forbidden\n");
+	return sprintf(buf, "enabled\n");
+}
+
+static ssize_t rtpm_status_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	if (dev->power.runtime_error)
+		return sprintf(buf, "error\n");
+	switch (dev->power.runtime_status) {
+	case RPM_SUSPENDED:
+		return sprintf(buf, "suspended\n");
+	case RPM_SUSPENDING:
+		return sprintf(buf, "suspending\n");
+	case RPM_RESUMING:
+		return sprintf(buf, "resuming\n");
+	case RPM_ACTIVE:
+		return sprintf(buf, "active\n");
+	}
+	return -EIO;
+}
+
+static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
+static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
+static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
+static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
+
+#endif

 static ssize_t async_show(struct device *dev, struct device_attribute *attr,
 			  char *buf)
 {
@@ -170,15 +223,21 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr,
 }

 static DEVICE_ATTR(async, 0644, async_show, async_store);
-#endif /* CONFIG_PM_SLEEP_ADVANCED_DEBUG */
+#endif /* CONFIG_PM_ADVANCED_DEBUG */

 static struct attribute * power_attrs[] = {
 #ifdef CONFIG_PM_RUNTIME
 	&dev_attr_control.attr,
 #endif
 	&dev_attr_wakeup.attr,
-#ifdef CONFIG_PM_SLEEP_ADVANCED_DEBUG
+#ifdef CONFIG_PM_ADVANCED_DEBUG
 	&dev_attr_async.attr,
+#ifdef CONFIG_PM_RUNTIME
+	&dev_attr_runtime_usage.attr,
+	&dev_attr_runtime_active_kids.attr,
+	&dev_attr_runtime_status.attr,
+	&dev_attr_runtime_enabled.attr,
+#endif
 #endif
 	NULL,
 };
...
@@ -67,7 +67,7 @@ static int ladder_select_state(struct cpuidle_device *dev)
 	struct ladder_device *ldev = &__get_cpu_var(ladder_devices);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
-	int latency_req = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);

 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0)) {
...
@@ -182,7 +182,7 @@ static u64 div_round64(u64 dividend, u32 divisor)
 static int menu_select(struct cpuidle_device *dev)
 {
 	struct menu_device *data = &__get_cpu_var(menu_devices);
-	int latency_req = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 	int i;
 	int multiplier;
...
@@ -159,106 +159,130 @@ static void i2c_device_shutdown(struct device *dev)
 	driver->shutdown(client);
 }

-#ifdef CONFIG_SUSPEND
-static int i2c_device_pm_suspend(struct device *dev)
+#ifdef CONFIG_PM_SLEEP
+static int i2c_legacy_suspend(struct device *dev, pm_message_t mesg)
 {
-	const struct dev_pm_ops *pm;
+	struct i2c_client *client = i2c_verify_client(dev);
+	struct i2c_driver *driver;

-	if (!dev->driver)
+	if (!client || !dev->driver)
 		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->suspend)
+	driver = to_i2c_driver(dev->driver);
+	if (!driver->suspend)
 		return 0;
-	return pm->suspend(dev);
+	return driver->suspend(client, mesg);
 }

-static int i2c_device_pm_resume(struct device *dev)
+static int i2c_legacy_resume(struct device *dev)
 {
-	const struct dev_pm_ops *pm;
+	struct i2c_client *client = i2c_verify_client(dev);
+	struct i2c_driver *driver;

-	if (!dev->driver)
+	if (!client || !dev->driver)
 		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->resume)
+	driver = to_i2c_driver(dev->driver);
+	if (!driver->resume)
 		return 0;
-	return pm->resume(dev);
+	return driver->resume(client);
 }
-#else
-#define i2c_device_pm_suspend	NULL
-#define i2c_device_pm_resume	NULL
-#endif

-#ifdef CONFIG_PM_RUNTIME
-static int i2c_device_runtime_suspend(struct device *dev)
+static int i2c_device_pm_suspend(struct device *dev)
 {
-	const struct dev_pm_ops *pm;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;

-	if (!dev->driver)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->runtime_suspend)
-		return 0;
-	return pm->runtime_suspend(dev);
-}

-static int i2c_device_runtime_resume(struct device *dev)
-{
-	const struct dev_pm_ops *pm;
+	if (pm)
+		return pm->suspend ? pm->suspend(dev) : 0;

-	if (!dev->driver)
-		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->runtime_resume)
-		return 0;
-	return pm->runtime_resume(dev);
+	return i2c_legacy_suspend(dev, PMSG_SUSPEND);
 }

-static int i2c_device_runtime_idle(struct device *dev)
+static int i2c_device_pm_resume(struct device *dev)
 {
-	const struct dev_pm_ops *pm = NULL;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int ret;

-	if (dev->driver)
-		pm = dev->driver->pm;
-	if (pm && pm->runtime_idle) {
-		ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
+	if (pm)
+		ret = pm->resume ? pm->resume(dev) : 0;
+	else
+		ret = i2c_legacy_resume(dev);
+
+	if (!ret) {
+		pm_runtime_disable(dev);
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
 	}

-	return pm_runtime_suspend(dev);
+	return ret;
 }
-#else
-#define i2c_device_runtime_suspend	NULL
-#define i2c_device_runtime_resume	NULL
-#define i2c_device_runtime_idle	NULL
-#endif

-static int i2c_device_suspend(struct device *dev, pm_message_t mesg)
+static int i2c_device_pm_freeze(struct device *dev)
 {
-	struct i2c_client *client = i2c_verify_client(dev);
-	struct i2c_driver *driver;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;

-	if (!client || !dev->driver)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	driver = to_i2c_driver(dev->driver);
-	if (!driver->suspend)
-		return 0;
-	return driver->suspend(client, mesg);
+
+	if (pm)
+		return pm->freeze ? pm->freeze(dev) : 0;
+
+	return i2c_legacy_suspend(dev, PMSG_FREEZE);
 }

-static int i2c_device_resume(struct device *dev)
+static int i2c_device_pm_thaw(struct device *dev)
 {
-	struct i2c_client *client = i2c_verify_client(dev);
-	struct i2c_driver *driver;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;

-	if (!client || !dev->driver)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	driver = to_i2c_driver(dev->driver);
-	if (!driver->resume)
+
+	if (pm)
+		return pm->thaw ? pm->thaw(dev) : 0;
+
+	return i2c_legacy_resume(dev);
+}
+
+static int i2c_device_pm_poweroff(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	if (pm_runtime_suspended(dev))
 		return 0;
-	return driver->resume(client);
+
+	if (pm)
+		return pm->poweroff ? pm->poweroff(dev) : 0;
+
+	return i2c_legacy_suspend(dev, PMSG_HIBERNATE);
+}
+
+static int i2c_device_pm_restore(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	int ret;
+
+	if (pm)
+		ret = pm->restore ? pm->restore(dev) : 0;
+	else
+		ret = i2c_legacy_resume(dev);
+
+	if (!ret) {
+		pm_runtime_disable(dev);
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
+	}
+
+	return ret;
 }
+#else /* !CONFIG_PM_SLEEP */
+#define i2c_device_pm_suspend	NULL
+#define i2c_device_pm_resume	NULL
+#define i2c_device_pm_freeze	NULL
+#define i2c_device_pm_thaw	NULL
+#define i2c_device_pm_poweroff	NULL
+#define i2c_device_pm_restore	NULL
+#endif /* !CONFIG_PM_SLEEP */

 static void i2c_client_dev_release(struct device *dev)
 {
@@ -301,9 +325,15 @@ static const struct attribute_group *i2c_dev_attr_groups[] = {
 static const struct dev_pm_ops i2c_device_pm_ops = {
 	.suspend = i2c_device_pm_suspend,
 	.resume = i2c_device_pm_resume,
-	.runtime_suspend = i2c_device_runtime_suspend,
-	.runtime_resume = i2c_device_runtime_resume,
-	.runtime_idle = i2c_device_runtime_idle,
+	.freeze = i2c_device_pm_freeze,
+	.thaw = i2c_device_pm_thaw,
+	.poweroff = i2c_device_pm_poweroff,
+	.restore = i2c_device_pm_restore,
+	SET_RUNTIME_PM_OPS(
+		pm_generic_runtime_suspend,
+		pm_generic_runtime_resume,
+		pm_generic_runtime_idle
+	)
 };

 struct bus_type i2c_bus_type = {
@@ -312,8 +342,6 @@ struct bus_type i2c_bus_type = {
 	.probe		= i2c_device_probe,
 	.remove		= i2c_device_remove,
 	.shutdown	= i2c_device_shutdown,
-	.suspend	= i2c_device_suspend,
-	.resume		= i2c_device_resume,
 	.pm		= &i2c_device_pm_ops,
 };
 EXPORT_SYMBOL_GPL(i2c_bus_type);
...
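For an i2c client driver, the practical effect of the bus rework above is that a struct dev_pm_ops attached to driver->pm is now honored for system sleep, with the legacy suspend/resume client methods kept only as a fallback. A minimal sketch, assuming a hypothetical "foo" chip driver (the foo_* names are not from this commit):

	#include <linux/i2c.h>
	#include <linux/pm.h>

	static int foo_suspend(struct device *dev)
	{
		/* Quiesce the chip, e.g. write its power-down register. */
		return 0;
	}

	static int foo_resume(struct device *dev)
	{
		/* Reinitialize the chip. */
		return 0;
	}

	static const struct dev_pm_ops foo_pm_ops = {
		.suspend = foo_suspend,
		.resume  = foo_resume,
		/* freeze/thaw/poweroff/restore left NULL: treated as no-ops
		 * by the bus callbacks above. */
	};

	static struct i2c_driver foo_driver = {
		.driver = {
			.name = "foo",
			.pm   = &foo_pm_ops,
		},
		/* .probe / .remove omitted in this sketch */
	};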
@@ -2524,11 +2524,11 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 		 * excessive C-state transition latencies result in
 		 * dropped transactions.
 		 */
-		pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name, 55);
+		pm_qos_update_request(
+				adapter->netdev->pm_qos_req, 55);
 	} else {
-		pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name,
+		pm_qos_update_request(
+				adapter->netdev->pm_qos_req,
 				PM_QOS_DEFAULT_VALUE);
 	}
 }
@@ -2824,8 +2824,8 @@ int e1000e_up(struct e1000_adapter *adapter)

 	/* DMA latency requirement to workaround early-receive/jumbo issue */
 	if (adapter->flags & FLAG_HAS_ERT)
-		pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name,
+		adapter->netdev->pm_qos_req =
+			pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 				PM_QOS_DEFAULT_VALUE);

 	/* hardware has been reset, we need to reload some things */
@@ -2887,9 +2887,11 @@ void e1000e_down(struct e1000_adapter *adapter)
 	e1000_clean_tx_ring(adapter);
 	e1000_clean_rx_ring(adapter);

-	if (adapter->flags & FLAG_HAS_ERT)
-		pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name);
+	if (adapter->flags & FLAG_HAS_ERT) {
+		pm_qos_remove_request(
+				adapter->netdev->pm_qos_req);
+		adapter->netdev->pm_qos_req = NULL;
+	}

 	/*
 	 * TODO: for power management, we could drop the link and
...
@@ -48,6 +48,7 @@
 #define DRV_VERSION "1.0.0-k0"
 char igbvf_driver_name[] = "igbvf";
 const char igbvf_driver_version[] = DRV_VERSION;
+struct pm_qos_request_list *igbvf_driver_pm_qos_req;
 static const char igbvf_driver_string[] =
 		  "Intel(R) Virtual Function Network Driver";
 static const char igbvf_copyright[] = "Copyright (c) 2009 Intel Corporation.";
@@ -2899,7 +2900,7 @@ static int __init igbvf_init_module(void)
 	printk(KERN_INFO "%s\n", igbvf_copyright);

 	ret = pci_register_driver(&igbvf_driver);
-	pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, igbvf_driver_name,
+	igbvf_driver_pm_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 			       PM_QOS_DEFAULT_VALUE);

 	return ret;
@@ -2915,7 +2916,8 @@ module_init(igbvf_init_module);
 static void __exit igbvf_exit_module(void)
 {
 	pci_unregister_driver(&igbvf_driver);
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, igbvf_driver_name);
+	pm_qos_remove_request(igbvf_driver_pm_qos_req);
+	igbvf_driver_pm_qos_req = NULL;
 }
 module_exit(igbvf_exit_module);
...
@@ -174,6 +174,8 @@ that only one external action is invoked at a time.
 #define DRV_DESCRIPTION	"Intel(R) PRO/Wireless 2100 Network Driver"
 #define DRV_COPYRIGHT	"Copyright(c) 2003-2006 Intel Corporation"

+struct pm_qos_request_list *ipw2100_pm_qos_req;
+
 /* Debugging stuff */
 #ifdef CONFIG_IPW2100_DEBUG
 #define IPW2100_RX_DEBUG	/* Reception debugging */
@@ -1739,7 +1741,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
 	/* the ipw2100 hardware really doesn't want power management delays
 	 * longer than 175usec
 	 */
-	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100", 175);
+	pm_qos_update_request(ipw2100_pm_qos_req, 175);

 	/* If the interrupt is enabled, turn it off... */
 	spin_lock_irqsave(&priv->low_lock, flags);
@@ -1887,8 +1889,7 @@ static void ipw2100_down(struct ipw2100_priv *priv)
 	ipw2100_disable_interrupts(priv);
 	spin_unlock_irqrestore(&priv->low_lock, flags);

-	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100",
-			PM_QOS_DEFAULT_VALUE);
+	pm_qos_update_request(ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);

 	/* We have to signal any supplicant if we are disassociating */
 	if (associated)
@@ -6669,7 +6670,7 @@ static int __init ipw2100_init(void)
 	if (ret)
 		goto out;

-	pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100",
+	ipw2100_pm_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 			PM_QOS_DEFAULT_VALUE);
 #ifdef CONFIG_IPW2100_DEBUG
 	ipw2100_debug_level = debug;
@@ -6692,7 +6693,7 @@ static void __exit ipw2100_exit(void)
 			&driver_attr_debug_level);
 #endif
 	pci_unregister_driver(&ipw2100_pci_driver);
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100");
+	pm_qos_remove_request(ipw2100_pm_qos_req);
 }

 module_init(ipw2100_init);
...
@@ -546,6 +546,40 @@ ssize_t simple_read_from_buffer(void __user *to, size_t count, loff_t *ppos,
 	return count;
 }

+/**
+ * simple_write_to_buffer - copy data from user space to the buffer
+ * @to: the buffer to write to
+ * @available: the size of the buffer
+ * @ppos: the current position in the buffer
+ * @from: the user space buffer to read from
+ * @count: the maximum number of bytes to read
+ *
+ * The simple_write_to_buffer() function reads up to @count bytes from the user
+ * space address starting at @from into the buffer @to at offset @ppos.
+ *
+ * On success, the number of bytes written is returned and the offset @ppos is
+ * advanced by this number, or negative value is returned on error.
+ **/
+ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
+		const void __user *from, size_t count)
+{
+	loff_t pos = *ppos;
+	size_t res;
+
+	if (pos < 0)
+		return -EINVAL;
+	if (pos >= available || !count)
+		return 0;
+	if (count > available - pos)
+		count = available - pos;
+	res = copy_from_user(to + pos, from, count);
+	if (res == count)
+		return -EFAULT;
+	count -= res;
+	*ppos = pos + count;
+	return count;
+}
+
 /**
  * memory_read_from_buffer - copy data from the buffer
  * @to: the kernel space buffer to read to
@@ -864,6 +898,7 @@ EXPORT_SYMBOL(simple_statfs);
 EXPORT_SYMBOL(simple_sync_file);
 EXPORT_SYMBOL(simple_unlink);
 EXPORT_SYMBOL(simple_read_from_buffer);
+EXPORT_SYMBOL(simple_write_to_buffer);
 EXPORT_SYMBOL(memory_read_from_buffer);
 EXPORT_SYMBOL(simple_transaction_set);
 EXPORT_SYMBOL(simple_transaction_get);
...
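For callers, simple_write_to_buffer() mirrors simple_read_from_buffer(): a pseudo-filesystem write handler can forward its arguments directly. A hedged sketch follows; the foo_write handler and the msg buffer are hypothetical, only the helper itself comes from this commit.

	#include <linux/fs.h>

	static char msg[64];

	static ssize_t foo_write(struct file *file, const char __user *buf,
				 size_t count, loff_t *ppos)
	{
		/* Copies at most sizeof(msg) - *ppos bytes, advances *ppos,
		 * and returns the number of bytes actually stored. */
		return simple_write_to_buffer(msg, sizeof(msg), ppos, buf, count);
	}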
@@ -2362,6 +2362,8 @@ extern void simple_release_fs(struct vfsmount **mount, int *count);

 extern ssize_t simple_read_from_buffer(void __user *to, size_t count,
 			loff_t *ppos, const void *from, size_t available);
+extern ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
+		const void __user *from, size_t count);

 extern int simple_fsync(struct file *, struct dentry *, int);
...
@@ -31,6 +31,7 @@
 #include <linux/if_link.h>

 #ifdef __KERNEL__
+#include <linux/pm_qos_params.h>
 #include <linux/timer.h>
 #include <linux/delay.h>
 #include <linux/mm.h>
@@ -711,6 +712,9 @@ struct net_device {
 	 * the interface.
 	 */
 	char			name[IFNAMSIZ];
+
+	struct pm_qos_request_list *pm_qos_req;
+
 	/* device name hash chain */
 	struct hlist_node	name_hlist;
 	/* snmp alias */
...
@@ -14,12 +14,14 @@
 #define PM_QOS_NUM_CLASSES 4
 #define PM_QOS_DEFAULT_VALUE -1

-int pm_qos_add_requirement(int qos, char *name, s32 value);
-int pm_qos_update_requirement(int qos, char *name, s32 new_value);
-void pm_qos_remove_requirement(int qos, char *name);
+struct pm_qos_request_list;

-int pm_qos_requirement(int qos);
+struct pm_qos_request_list *pm_qos_add_request(int pm_qos_class, s32 value);
+void pm_qos_update_request(struct pm_qos_request_list *pm_qos_req,
+		s32 new_value);
+void pm_qos_remove_request(struct pm_qos_request_list *pm_qos_req);

-int pm_qos_add_notifier(int qos, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int qos, struct notifier_block *notifier);
+int pm_qos_request(int pm_qos_class);
+int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
+int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
@@ -30,6 +30,9 @@ extern void pm_runtime_enable(struct device *dev);
 extern void __pm_runtime_disable(struct device *dev, bool check_resume);
 extern void pm_runtime_allow(struct device *dev);
 extern void pm_runtime_forbid(struct device *dev);
+extern int pm_generic_runtime_idle(struct device *dev);
+extern int pm_generic_runtime_suspend(struct device *dev);
+extern int pm_generic_runtime_resume(struct device *dev);

 static inline bool pm_children_suspended(struct device *dev)
 {
@@ -96,6 +99,10 @@ static inline bool device_run_wake(struct device *dev) { return false; }
 static inline void device_set_run_wake(struct device *dev, bool enable) {}
 static inline bool pm_runtime_suspended(struct device *dev) { return false; }

+static inline int pm_generic_runtime_idle(struct device *dev) { return 0; }
+static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
+static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
+
 #endif /* !CONFIG_PM_RUNTIME */

 static inline int pm_runtime_get(struct device *dev)
...
@@ -25,32 +25,34 @@
 # error "please don't include this file directly"
 #endif

+#include <linux/types.h>
+
 #ifdef CONFIG_PM

 /* changes to device_may_wakeup take effect on the next pm state change.
  * by default, devices should wakeup if they can.
  */

-static inline void device_init_wakeup(struct device *dev, int val)
+static inline void device_init_wakeup(struct device *dev, bool val)
 {
-	dev->power.can_wakeup = dev->power.should_wakeup = !!val;
+	dev->power.can_wakeup = dev->power.should_wakeup = val;
 }

-static inline void device_set_wakeup_capable(struct device *dev, int val)
+static inline void device_set_wakeup_capable(struct device *dev, bool capable)
 {
-	dev->power.can_wakeup = !!val;
+	dev->power.can_wakeup = capable;
 }

-static inline int device_can_wakeup(struct device *dev)
+static inline bool device_can_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup;
 }

-static inline void device_set_wakeup_enable(struct device *dev, int val)
+static inline void device_set_wakeup_enable(struct device *dev, bool enable)
 {
-	dev->power.should_wakeup = !!val;
+	dev->power.should_wakeup = enable;
 }

-static inline int device_may_wakeup(struct device *dev)
+static inline bool device_may_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup && dev->power.should_wakeup;
 }
@@ -58,20 +60,28 @@ static inline int device_may_wakeup(struct device *dev)
 #else /* !CONFIG_PM */

 /* For some reason the next two routines work even without CONFIG_PM */
-static inline void device_init_wakeup(struct device *dev, int val)
+static inline void device_init_wakeup(struct device *dev, bool val)
 {
-	dev->power.can_wakeup = !!val;
+	dev->power.can_wakeup = val;
 }

-static inline void device_set_wakeup_capable(struct device *dev, int val) { }
+static inline void device_set_wakeup_capable(struct device *dev, bool capable)
+{
+}

-static inline int device_can_wakeup(struct device *dev)
+static inline bool device_can_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup;
 }

-#define device_set_wakeup_enable(dev, val)	do {} while (0)
-#define device_may_wakeup(dev)			0
+static inline void device_set_wakeup_enable(struct device *dev, bool enable)
+{
+}
+
+static inline bool device_may_wakeup(struct device *dev)
+{
+	return false;
+}

 #endif /* !CONFIG_PM */
...
@@ -29,6 +29,7 @@
 #include <linux/poll.h>
 #include <linux/mm.h>
 #include <linux/bitops.h>
+#include <linux/pm_qos_params.h>

 #define snd_pcm_substream_chip(substream) ((substream)->private_data)
 #define snd_pcm_chip(pcm) ((pcm)->private_data)
@@ -365,7 +366,7 @@ struct snd_pcm_substream {
 	int number;
 	char name[32];			/* substream name */
 	int stream;			/* stream (direction) */
-	char latency_id[20];		/* latency identifier */
+	struct pm_qos_request_list *latency_pm_qos_req; /* pm_qos request */
 	size_t buffer_bytes_max;	/* limit ring buffer size */
 	struct snd_dma_buffer dma_buffer;
 	unsigned int dma_buf_id;
...
@@ -89,10 +89,10 @@ struct cgroup_subsys freezer_subsys;

 /* Locks taken and their ordering
  * ------------------------------
- * css_set_lock
  * cgroup_mutex (AKA cgroup_lock)
- * task->alloc_lock (AKA task_lock)
  * freezer->lock
+ *  css_set_lock
+ *   task->alloc_lock (AKA task_lock)
  * task->sighand->siglock
  *
  * cgroup code forces css_set_lock to be taken before task->alloc_lock
@@ -100,33 +100,38 @@ struct cgroup_subsys freezer_subsys;
  * freezer_create(), freezer_destroy():
  * cgroup_mutex [ by cgroup core ]
  *
- * can_attach():
- * cgroup_mutex
+ * freezer_can_attach():
+ * cgroup_mutex (held by caller of can_attach)
  *
- * cgroup_frozen():
+ * cgroup_freezing_or_frozen():
  * task->alloc_lock (to get task's cgroup)
  *
  * freezer_fork() (preserving fork() performance means can't take cgroup_mutex):
+ * task->alloc_lock (to get task's cgroup)
  * freezer->lock
  *  sighand->siglock (if the cgroup is freezing)
  *
  * freezer_read():
  * cgroup_mutex
  * freezer->lock
+ *  write_lock css_set_lock (cgroup iterator start)
+ *   task->alloc_lock
  *  read_lock css_set_lock (cgroup iterator start)
  *
  * freezer_write() (freeze):
  * cgroup_mutex
  * freezer->lock
+ *  write_lock css_set_lock (cgroup iterator start)
+ *   task->alloc_lock
  *  read_lock css_set_lock (cgroup iterator start)
- *   sighand->siglock
+ *   sighand->siglock (fake signal delivery inside freeze_task())
  *
  * freezer_write() (unfreeze):
  * cgroup_mutex
  * freezer->lock
+ *  write_lock css_set_lock (cgroup iterator start)
+ *   task->alloc_lock
  *  read_lock css_set_lock (cgroup iterator start)
- *   task->alloc_lock (to prevent races with freeze_task())
+ *   task->alloc_lock (inside thaw_process(), prevents race with refrigerator())
  *   sighand->siglock
  */
 static struct cgroup_subsys_state *freezer_create(struct cgroup_subsys *ss,
...
@@ -2,7 +2,7 @@
  * This module exposes the interface to kernel space for specifying
  * QoS dependencies.  It provides infrastructure for registration of:
  *
- * Dependents on a QoS value : register requirements
+ * Dependents on a QoS value : register requests
  * Watchers of QoS value : get notified when target QoS value changes
  *
  * This QoS design is best effort based.  Dependents register their QoS needs.
@@ -14,19 +14,21 @@
  * timeout: usec <-- currently not used.
  * throughput: kbs (kilo byte / sec)
  *
- * There are lists of pm_qos_objects each one wrapping requirements, notifiers
+ * There are lists of pm_qos_objects each one wrapping requests, notifiers
  *
- * User mode requirements on a QOS parameter register themselves to the
+ * User mode requests on a QOS parameter register themselves to the
  * subsystem by opening the device node /dev/... and writing there request to
  * the node.  As long as the process holds a file handle open to the node the
  * client continues to be accounted for.  Upon file release the usermode
- * requirement is removed and a new qos target is computed.  This way when the
- * requirement that the application has is cleaned up when closes the file
+ * request is removed and a new qos target is computed.  This way when the
+ * request that the application has is cleaned up when closes the file
  * pointer or exits the pm_qos_object will get an opportunity to clean up.
  *
  * Mark Gross <mgross@linux.intel.com>
  */

+/*#define DEBUG*/
+
 #include <linux/pm_qos_params.h>
 #include <linux/sched.h>
 #include <linux/spinlock.h>
@@ -42,25 +44,25 @@
 #include <linux/uaccess.h>

 /*
- * locking rule: all changes to requirements or notifiers lists
+ * locking rule: all changes to requests or notifiers lists
  * or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
  * held, taken with _irqsave.  One lock to rule them all
  */
-struct requirement_list {
+struct pm_qos_request_list {
 	struct list_head list;
 	union {
 		s32 value;
 		s32 usec;
 		s32 kbps;
 	};
-	char *name;
+	int pm_qos_class;
 };

 static s32 max_compare(s32 v1, s32 v2);
 static s32 min_compare(s32 v1, s32 v2);

 struct pm_qos_object {
-	struct requirement_list requirements;
+	struct pm_qos_request_list requests;
 	struct blocking_notifier_head *notifiers;
 	struct miscdevice pm_qos_power_miscdev;
 	char *name;
@@ -72,7 +74,7 @@ struct pm_qos_object {
 static struct pm_qos_object null_pm_qos;
 static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
 static struct pm_qos_object cpu_dma_pm_qos = {
-	.requirements = {LIST_HEAD_INIT(cpu_dma_pm_qos.requirements.list)},
+	.requests = {LIST_HEAD_INIT(cpu_dma_pm_qos.requests.list)},
 	.notifiers = &cpu_dma_lat_notifier,
 	.name = "cpu_dma_latency",
 	.default_value = 2000 * USEC_PER_SEC,
@@ -82,7 +84,7 @@ static struct pm_qos_object cpu_dma_pm_qos = {
 static BLOCKING_NOTIFIER_HEAD(network_lat_notifier);
 static struct pm_qos_object network_lat_pm_qos = {
-	.requirements = {LIST_HEAD_INIT(network_lat_pm_qos.requirements.list)},
+	.requests = {LIST_HEAD_INIT(network_lat_pm_qos.requests.list)},
 	.notifiers = &network_lat_notifier,
 	.name = "network_latency",
 	.default_value = 2000 * USEC_PER_SEC,
@@ -93,8 +95,7 @@ static struct pm_qos_object network_lat_pm_qos = {
 static BLOCKING_NOTIFIER_HEAD(network_throughput_notifier);
 static struct pm_qos_object network_throughput_pm_qos = {
-	.requirements =
-		{LIST_HEAD_INIT(network_throughput_pm_qos.requirements.list)},
+	.requests = {LIST_HEAD_INIT(network_throughput_pm_qos.requests.list)},
 	.notifiers = &network_throughput_notifier,
 	.name = "network_throughput",
 	.default_value = 0,
@@ -135,30 +136,33 @@ static s32 min_compare(s32 v1, s32 v2)
 }

-static void update_target(int target)
+static void update_target(int pm_qos_class)
 {
 	s32 extreme_value;
-	struct requirement_list *node;
+	struct pm_qos_request_list *node;
 	unsigned long flags;
 	int call_notifier = 0;

 	spin_lock_irqsave(&pm_qos_lock, flags);
-	extreme_value = pm_qos_array[target]->default_value;
+	extreme_value = pm_qos_array[pm_qos_class]->default_value;
 	list_for_each_entry(node,
-			&pm_qos_array[target]->requirements.list, list) {
-		extreme_value = pm_qos_array[target]->comparitor(
+			&pm_qos_array[pm_qos_class]->requests.list, list) {
+		extreme_value = pm_qos_array[pm_qos_class]->comparitor(
 				extreme_value, node->value);
 	}
-	if (atomic_read(&pm_qos_array[target]->target_value) != extreme_value) {
+	if (atomic_read(&pm_qos_array[pm_qos_class]->target_value) !=
+			extreme_value) {
 		call_notifier = 1;
-		atomic_set(&pm_qos_array[target]->target_value, extreme_value);
-		pr_debug(KERN_ERR "new target for qos %d is %d\n", target,
-			atomic_read(&pm_qos_array[target]->target_value));
+		atomic_set(&pm_qos_array[pm_qos_class]->target_value,
+				extreme_value);
+		pr_debug(KERN_ERR "new target for qos %d is %d\n", pm_qos_class,
+			atomic_read(&pm_qos_array[pm_qos_class]->target_value));
 	}
 	spin_unlock_irqrestore(&pm_qos_lock, flags);

 	if (call_notifier)
-		blocking_notifier_call_chain(pm_qos_array[target]->notifiers,
+		blocking_notifier_call_chain(
+				pm_qos_array[pm_qos_class]->notifiers,
 				(unsigned long) extreme_value, NULL);
 }
@@ -185,125 +189,112 @@ static int find_pm_qos_object_by_minor(int minor)
 }

 /**
- * pm_qos_requirement - returns current system wide qos expectation
+ * pm_qos_request - returns current system wide qos expectation
  * @pm_qos_class: identification of which qos value is requested
  *
  * This function returns the current target value in an atomic manner.
  */
-int pm_qos_requirement(int pm_qos_class)
+int pm_qos_request(int pm_qos_class)
 {
 	return atomic_read(&pm_qos_array[pm_qos_class]->target_value);
 }
-EXPORT_SYMBOL_GPL(pm_qos_requirement);
+EXPORT_SYMBOL_GPL(pm_qos_request);

 /**
- * pm_qos_add_requirement - inserts new qos request into the list
+ * pm_qos_add_request - inserts new qos request into the list
  * @pm_qos_class: identifies which list of qos request to us
- * @name: identifies the request
  * @value: defines the qos request
  *
  * This function inserts a new entry in the pm_qos_class list of requested qos
  * performance characteristics.  It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters.
+ * for the pm_qos_class of parameters, and returns the pm_qos_request list
+ * element as a handle for use in updating and removal.  Call needs to save
+ * this handle for later use.
  */
-int pm_qos_add_requirement(int pm_qos_class, char *name, s32 value)
+struct pm_qos_request_list *pm_qos_add_request(int pm_qos_class, s32 value)
 {
-	struct requirement_list *dep;
+	struct pm_qos_request_list *dep;
 	unsigned long flags;

-	dep = kzalloc(sizeof(struct requirement_list), GFP_KERNEL);
+	dep = kzalloc(sizeof(struct pm_qos_request_list), GFP_KERNEL);
 	if (dep) {
 		if (value == PM_QOS_DEFAULT_VALUE)
 			dep->value = pm_qos_array[pm_qos_class]->default_value;
 		else
 			dep->value = value;
-		dep->name = kstrdup(name, GFP_KERNEL);
-		if (!dep->name)
-			goto cleanup;
+		dep->pm_qos_class = pm_qos_class;

 		spin_lock_irqsave(&pm_qos_lock, flags);
 		list_add(&dep->list,
-				&pm_qos_array[pm_qos_class]->requirements.list);
+				&pm_qos_array[pm_qos_class]->requests.list);
 		spin_unlock_irqrestore(&pm_qos_lock, flags);
 		update_target(pm_qos_class);
-		return 0;
 	}

-cleanup:
-	kfree(dep);
-	return -ENOMEM;
+	return dep;
 }
-EXPORT_SYMBOL_GPL(pm_qos_add_requirement);
+EXPORT_SYMBOL_GPL(pm_qos_add_request);

 /**
- * pm_qos_update_requirement - modifies an existing qos request
- * @pm_qos_class: identifies which list of qos request to us
- * @name: identifies the request
+ * pm_qos_update_request - modifies an existing qos request
+ * @pm_qos_req : handle to list element holding a pm_qos request to use
  * @value: defines the qos request
  *
- * Updates an existing qos requirement for the pm_qos_class of parameters along
+ * Updates an existing qos request for the pm_qos_class of parameters along
  * with updating the target pm_qos_class value.
  *
- * If the named request isn't in the list then no change is made.
+ * Attempts are made to make this code callable on hot code paths.
  */
-int pm_qos_update_requirement(int pm_qos_class, char *name, s32 new_value)
+void pm_qos_update_request(struct pm_qos_request_list *pm_qos_req,
+		s32 new_value)
 {
 	unsigned long flags;
-	struct requirement_list *node;
 	int pending_update = 0;
+	s32 temp;

-	spin_lock_irqsave(&pm_qos_lock, flags);
-	list_for_each_entry(node,
-		&pm_qos_array[pm_qos_class]->requirements.list, list) {
-		if (strcmp(node->name, name) == 0) {
-			if (new_value == PM_QOS_DEFAULT_VALUE)
-				node->value =
-				pm_qos_array[pm_qos_class]->default_value;
-			else
-				node->value = new_value;
+	if (pm_qos_req) { /*guard against callers passing in null */
+		spin_lock_irqsave(&pm_qos_lock, flags);
+		if (new_value == PM_QOS_DEFAULT_VALUE)
+			temp = pm_qos_array[pm_qos_req->pm_qos_class]->default_value;
+		else
+			temp = new_value;
+
+		if (temp != pm_qos_req->value) {
 			pending_update = 1;
-			break;
+			pm_qos_req->value = temp;
 		}
+		spin_unlock_irqrestore(&pm_qos_lock, flags);
+		if (pending_update)
+			update_target(pm_qos_req->pm_qos_class);
 	}
-	spin_unlock_irqrestore(&pm_qos_lock, flags);
-	if (pending_update)
-		update_target(pm_qos_class);
-
-	return 0;
 }
-EXPORT_SYMBOL_GPL(pm_qos_update_requirement);
+EXPORT_SYMBOL_GPL(pm_qos_update_request);

 /**
- * pm_qos_remove_requirement - modifies an existing qos request
- * @pm_qos_class: identifies which list of qos request to us
- * @name: identifies the request
+ * pm_qos_remove_request - modifies an existing qos request
+ * @pm_qos_req: handle to request list element
  *
- * Will remove named qos request from pm_qos_class list of parameters and
- * recompute the current target value for the pm_qos_class.
+ * Will remove pm qos request from the list of requests and
+ * recompute the current target value for the pm_qos_class.  Call this
+ * on slow code paths.
  */
-void pm_qos_remove_requirement(int pm_qos_class, char *name)
+void pm_qos_remove_request(struct pm_qos_request_list *pm_qos_req)
 {
 	unsigned long flags;
-	struct requirement_list *node;
-	int pending_update = 0;
+	int qos_class;
+
+	if (pm_qos_req == NULL)
+		return;
+		/* silent return to keep pcm code cleaner */

+	qos_class = pm_qos_req->pm_qos_class;
 	spin_lock_irqsave(&pm_qos_lock, flags);
-	list_for_each_entry(node,
-		&pm_qos_array[pm_qos_class]->requirements.list, list) {
-		if (strcmp(node->name, name) == 0) {
-			kfree(node->name);
-			list_del(&node->list);
-			kfree(node);
-			pending_update = 1;
-			break;
-		}
-	}
+	list_del(&pm_qos_req->list);
+	kfree(pm_qos_req);
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
-	if (pending_update)
-		update_target(pm_qos_class);
+	update_target(qos_class);
 }
-EXPORT_SYMBOL_GPL(pm_qos_remove_requirement);
+EXPORT_SYMBOL_GPL(pm_qos_remove_request);

 /**
  * pm_qos_add_notifier - sets notification entry for changes to target value
@@ -313,7 +304,7 @@ EXPORT_SYMBOL_GPL(pm_qos_remove_requirement);
  * will register the notifier into a notification chain that gets called
  * upon changes to the pm_qos_class target value.
  */
 int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
 {
 	int retval;
@@ -343,21 +334,16 @@ int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
 }
 EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);

-#define PID_NAME_LEN 32
-
 static int pm_qos_power_open(struct inode *inode, struct file *filp)
 {
-	int ret;
 	long pm_qos_class;
-	char name[PID_NAME_LEN];

 	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
 	if (pm_qos_class >= 0) {
-		filp->private_data = (void *)pm_qos_class;
-		snprintf(name, PID_NAME_LEN, "process_%d", current->pid);
-		ret = pm_qos_add_requirement(pm_qos_class, name,
-					PM_QOS_DEFAULT_VALUE);
-		if (ret >= 0)
+		filp->private_data = (void *) pm_qos_add_request(pm_qos_class,
+					PM_QOS_DEFAULT_VALUE);
+
+		if (filp->private_data)
 			return 0;
 	}
 	return -EPERM;
@@ -365,32 +351,40 @@ static int pm_qos_power_open(struct inode *inode, struct file *filp)

 static int pm_qos_power_release(struct inode *inode, struct file *filp)
 {
-	int pm_qos_class;
-	char name[PID_NAME_LEN];
+	struct pm_qos_request_list *req;

-	pm_qos_class = (long)filp->private_data;
-	snprintf(name, PID_NAME_LEN, "process_%d", current->pid);
-	pm_qos_remove_requirement(pm_qos_class, name);
+	req = (struct pm_qos_request_list *)filp->private_data;
+	pm_qos_remove_request(req);

 	return 0;
 }

 static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 		size_t count, loff_t *f_pos)
 {
 	s32 value;
-	int pm_qos_class;
-	char name[PID_NAME_LEN];
+	int x;
+	char ascii_value[11];
+	struct pm_qos_request_list *pm_qos_req;

-	pm_qos_class = (long)filp->private_data;
-	if (count != sizeof(s32))
-		return -EINVAL;
-	if (copy_from_user(&value, buf, sizeof(s32)))
-		return -EFAULT;
-	snprintf(name, PID_NAME_LEN, "process_%d", current->pid);
-	pm_qos_update_requirement(pm_qos_class, name, value);
+	if (count == sizeof(s32)) {
+		if (copy_from_user(&value, buf, sizeof(s32)))
+			return -EFAULT;
+	} else if (count == 11) { /* len('0x12345678/0') */
+		if (copy_from_user(ascii_value, buf, 11))
+			return -EFAULT;
+		x = sscanf(ascii_value, "%x", &value);
+		if (x != 1)
+			return -EINVAL;
+		pr_debug(KERN_ERR "%s, %d, 0x%x\n", ascii_value, x, value);
+	} else
+		return -EINVAL;
+
+	pm_qos_req = (struct pm_qos_request_list *)filp->private_data;
+	pm_qos_update_request(pm_qos_req, value);

-	return sizeof(s32);
+	return count;
 }
...
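From user space, pm_qos_power_write() above accepts either a raw s32 or an 11-byte hex string, and the request lives exactly as long as the file descriptor stays open. A hedged sketch of a latency-sensitive program follows; the 50 usec bound is illustrative and error handling is trimmed for brevity.

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	int main(void)
	{
		int32_t latency_us = 50;	/* hypothetical bound */
		int fd = open("/dev/cpu_dma_latency", O_WRONLY);

		if (fd < 0)
			return 1;
		/* Binary form: exactly sizeof(s32) bytes. */
		write(fd, &latency_us, sizeof(latency_us));
		/* Hex form: 11 bytes, "0x12345678" plus the terminating NUL. */
		/* write(fd, "0x00000032", 11); */
		pause();	/* request is dropped when the fd is closed */
		return 0;
	}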
@@ -8,7 +8,8 @@ obj-$(CONFIG_PM_SLEEP)	+= console.o
 obj-$(CONFIG_FREEZER)		+= process.o
 obj-$(CONFIG_SUSPEND)		+= suspend.o
 obj-$(CONFIG_PM_TEST_SUSPEND)	+= suspend_test.o
-obj-$(CONFIG_HIBERNATION)	+= hibernate.o snapshot.o swap.o user.o
+obj-$(CONFIG_HIBERNATION)	+= hibernate.o snapshot.o swap.o user.o \
+				   block_io.o
 obj-$(CONFIG_HIBERNATION_NVS)	+= hibernate_nvs.o

 obj-$(CONFIG_MAGIC_SYSRQ)	+= poweroff.o
...
/*
* This file provides functions for block I/O operations on swap/file.
*
* Copyright (C) 1998,2001-2005 Pavel Machek <pavel@ucw.cz>
* Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
*
* This file is released under the GPLv2.
*/
#include <linux/bio.h>
#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include "power.h"
/**
* submit - submit BIO request.
* @rw: READ or WRITE.
* @bdev: block device the page is read from or written to.
* @sector: physical sector of the page.
* @page: page we're reading or writing.
* @bio_chain: list of pending bios (for async reading)
*
* Straight from the textbook - allocate and initialize the bio.
* If we're reading, make sure the page is marked as dirty.
* Then submit it and, if @bio_chain == NULL, wait.
*/
static int submit(int rw, struct block_device *bdev, sector_t sector,
struct page *page, struct bio **bio_chain)
{
const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);
struct bio *bio;
bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);
bio->bi_sector = sector;
bio->bi_bdev = bdev;
bio->bi_end_io = end_swap_bio_read;
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
printk(KERN_ERR "PM: Adding page to bio failed at %llu\n",
(unsigned long long)sector);
bio_put(bio);
return -EFAULT;
}
lock_page(page);
bio_get(bio);
if (bio_chain == NULL) {
submit_bio(bio_rw, bio);
wait_on_page_locked(page);
if (rw == READ)
bio_set_pages_dirty(bio);
bio_put(bio);
} else {
if (rw == READ)
get_page(page); /* These pages are freed later */
bio->bi_private = *bio_chain;
*bio_chain = bio;
submit_bio(bio_rw, bio);
}
return 0;
}
int hib_bio_read_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(READ, hib_resume_bdev, page_off * (PAGE_SIZE >> 9),
virt_to_page(addr), bio_chain);
}
int hib_bio_write_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(WRITE, hib_resume_bdev, page_off * (PAGE_SIZE >> 9),
virt_to_page(addr), bio_chain);
}
int hib_wait_on_bio_chain(struct bio **bio_chain)
{
struct bio *bio;
struct bio *next_bio;
int ret = 0;
if (bio_chain == NULL)
return 0;
bio = *bio_chain;
if (bio == NULL)
return 0;
while (bio) {
struct page *page;
next_bio = bio->bi_private;
page = bio->bi_io_vec[0].bv_page;
wait_on_page_locked(page);
if (!PageUptodate(page) || PageError(page))
ret = -EIO;
put_page(page);
bio_put(bio);
bio = next_bio;
}
*bio_chain = NULL;
return ret;
}
...@@ -97,24 +97,12 @@ extern int hibernate_preallocate_memory(void); ...@@ -97,24 +97,12 @@ extern int hibernate_preallocate_memory(void);
*/ */
struct snapshot_handle { struct snapshot_handle {
loff_t offset; /* number of the last byte ready for reading
* or writing in the sequence
*/
unsigned int cur; /* number of the block of PAGE_SIZE bytes the unsigned int cur; /* number of the block of PAGE_SIZE bytes the
* next operation will refer to (ie. current) * next operation will refer to (ie. current)
*/ */
unsigned int cur_offset; /* offset with respect to the current
* block (for the next operation)
*/
unsigned int prev; /* number of the block of PAGE_SIZE bytes that
* was the current one previously
*/
void *buffer; /* address of the block to read from void *buffer; /* address of the block to read from
* or write to * or write to
*/ */
unsigned int buf_offset; /* location to read from or write to,
* given as a displacement from 'buffer'
*/
int sync_read; /* Set to one to notify the caller of int sync_read; /* Set to one to notify the caller of
* snapshot_write_next() that it may * snapshot_write_next() that it may
* need to call wait_on_bio_chain() * need to call wait_on_bio_chain()
...@@ -125,12 +113,12 @@ struct snapshot_handle { ...@@ -125,12 +113,12 @@ struct snapshot_handle {
* snapshot_read_next()/snapshot_write_next() is allowed to * snapshot_read_next()/snapshot_write_next() is allowed to
* read/write data after the function returns * read/write data after the function returns
*/ */
#define data_of(handle) ((handle).buffer + (handle).buf_offset) #define data_of(handle) ((handle).buffer)
extern unsigned int snapshot_additional_pages(struct zone *zone); extern unsigned int snapshot_additional_pages(struct zone *zone);
extern unsigned long snapshot_get_image_size(void); extern unsigned long snapshot_get_image_size(void);
extern int snapshot_read_next(struct snapshot_handle *handle, size_t count); extern int snapshot_read_next(struct snapshot_handle *handle);
extern int snapshot_write_next(struct snapshot_handle *handle, size_t count); extern int snapshot_write_next(struct snapshot_handle *handle);
extern void snapshot_write_finalize(struct snapshot_handle *handle); extern void snapshot_write_finalize(struct snapshot_handle *handle);
extern int snapshot_image_loaded(struct snapshot_handle *handle); extern int snapshot_image_loaded(struct snapshot_handle *handle);
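With offset, cur_offset, prev and buf_offset gone, the handle only tracks whole pages: each successful call to snapshot_read_next()/snapshot_write_next() now hands the caller exactly one PAGE_SIZE block at data_of(handle). A hedged sketch of a consumer of the reworked read side (drain_image and its store callback are hypothetical; the real callers are save_image() in swap.c and snapshot_read() in user.c):

static int drain_image(struct snapshot_handle *handle,
                       int (*store)(void *buf))
{
        int ret;

        memset(handle, 0, sizeof(*handle));     /* start at block 0 */
        while ((ret = snapshot_read_next(handle)) > 0) {
                int error = store(data_of(*handle));    /* one full page */

                if (error)
                        return error;
        }
        return ret;     /* 0: end of stream, < 0: error */
}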
...@@ -154,6 +142,15 @@ extern int swsusp_read(unsigned int *flags_p); ...@@ -154,6 +142,15 @@ extern int swsusp_read(unsigned int *flags_p);
extern int swsusp_write(unsigned int flags); extern int swsusp_write(unsigned int flags);
extern void swsusp_close(fmode_t); extern void swsusp_close(fmode_t);
/* kernel/power/block_io.c */
extern struct block_device *hib_resume_bdev;
extern int hib_bio_read_page(pgoff_t page_off, void *addr,
struct bio **bio_chain);
extern int hib_bio_write_page(pgoff_t page_off, void *addr,
struct bio **bio_chain);
extern int hib_wait_on_bio_chain(struct bio **bio_chain);
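The hib_bio_* helpers keep the old bio_chain convention: a NULL chain pointer means submit-and-wait, while a non-NULL chain batches the bios for a later hib_wait_on_bio_chain(). A sketch under those assumptions (read_two_pages and the swap offsets 0 and 1 are illustrative; a and b must each point at a whole page, e.g. from get_zeroed_page()):

static int read_two_pages(void *a, void *b)
{
        struct bio *bio_chain = NULL;
        int error, err2;

        error = hib_bio_read_page(0, a, &bio_chain);    /* queued, not waited */
        if (!error)
                error = hib_bio_read_page(1, b, &bio_chain);
        err2 = hib_wait_on_bio_chain(&bio_chain);       /* completes the batch */
        return error ? error : err2;
}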
struct timeval; struct timeval;
/* kernel/power/swsusp.c */ /* kernel/power/swsusp.c */
extern void swsusp_show_speed(struct timeval *, struct timeval *, extern void swsusp_show_speed(struct timeval *, struct timeval *,
......
...@@ -1604,14 +1604,9 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm) ...@@ -1604,14 +1604,9 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
* snapshot_handle structure. The structure gets updated and a pointer * snapshot_handle structure. The structure gets updated and a pointer
* to it should be passed to this function every next time. * to it should be passed to this function every next time.
* *
* The @count parameter should contain the number of bytes the caller
* wants to read from the snapshot. It must not be zero.
*
* On success the function returns a positive number. Then, the caller * On success the function returns a positive number. Then, the caller
* is allowed to read up to the returned number of bytes from the memory * is allowed to read up to the returned number of bytes from the memory
* location computed by the data_of() macro. The number returned * location computed by the data_of() macro.
* may be smaller than @count, but this only happens if the read would
* cross a page boundary otherwise.
* *
* The function returns 0 to indicate the end of data stream condition, * The function returns 0 to indicate the end of data stream condition,
* and a negative number is returned on error. In such cases the * and a negative number is returned on error. In such cases the
...@@ -1619,7 +1614,7 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm) ...@@ -1619,7 +1614,7 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
* any more. * any more.
*/ */
int snapshot_read_next(struct snapshot_handle *handle, size_t count) int snapshot_read_next(struct snapshot_handle *handle)
{ {
if (handle->cur > nr_meta_pages + nr_copy_pages) if (handle->cur > nr_meta_pages + nr_copy_pages)
return 0; return 0;
...@@ -1630,7 +1625,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count) ...@@ -1630,7 +1625,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count)
if (!buffer) if (!buffer)
return -ENOMEM; return -ENOMEM;
} }
if (!handle->offset) { if (!handle->cur) {
int error; int error;
error = init_header((struct swsusp_info *)buffer); error = init_header((struct swsusp_info *)buffer);
...@@ -1639,9 +1634,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count) ...@@ -1639,9 +1634,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count)
handle->buffer = buffer; handle->buffer = buffer;
memory_bm_position_reset(&orig_bm); memory_bm_position_reset(&orig_bm);
memory_bm_position_reset(&copy_bm); memory_bm_position_reset(&copy_bm);
} } else if (handle->cur <= nr_meta_pages) {
if (handle->prev < handle->cur) {
if (handle->cur <= nr_meta_pages) {
memset(buffer, 0, PAGE_SIZE); memset(buffer, 0, PAGE_SIZE);
pack_pfns(buffer, &orig_bm); pack_pfns(buffer, &orig_bm);
} else { } else {
...@@ -1663,18 +1656,8 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count) ...@@ -1663,18 +1656,8 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count)
handle->buffer = page_address(page); handle->buffer = page_address(page);
} }
} }
handle->prev = handle->cur;
}
handle->buf_offset = handle->cur_offset;
if (handle->cur_offset + count >= PAGE_SIZE) {
count = PAGE_SIZE - handle->cur_offset;
handle->cur_offset = 0;
handle->cur++; handle->cur++;
} else { return PAGE_SIZE;
handle->cur_offset += count;
}
handle->offset += count;
return count;
} }
/** /**
...@@ -2133,14 +2116,9 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca) ...@@ -2133,14 +2116,9 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
* snapshot_handle structure. The structure gets updated and a pointer * snapshot_handle structure. The structure gets updated and a pointer
* to it should be passed to this function every next time. * to it should be passed to this function every next time.
* *
* The @count parameter should contain the number of bytes the caller
* wants to write to the image. It must not be zero.
*
* On success the function returns a positive number. Then, the caller * On success the function returns a positive number. Then, the caller
* is allowed to write up to the returned number of bytes to the memory * is allowed to write up to the returned number of bytes to the memory
* location computed by the data_of() macro. The number returned * location computed by the data_of() macro.
* may be smaller than @count, but this only happens if the write would
* cross a page boundary otherwise.
* *
* The function returns 0 to indicate the "end of file" condition, * The function returns 0 to indicate the "end of file" condition,
* and a negative number is returned on error. In such cases the * and a negative number is returned on error. In such cases the
...@@ -2148,16 +2126,18 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca) ...@@ -2148,16 +2126,18 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
* any more. * any more.
*/ */
int snapshot_write_next(struct snapshot_handle *handle, size_t count) int snapshot_write_next(struct snapshot_handle *handle)
{ {
static struct chain_allocator ca; static struct chain_allocator ca;
int error = 0; int error = 0;
/* Check if we have already loaded the entire image */ /* Check if we have already loaded the entire image */
if (handle->prev && handle->cur > nr_meta_pages + nr_copy_pages) if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages)
return 0; return 0;
if (handle->offset == 0) { handle->sync_read = 1;
if (!handle->cur) {
if (!buffer) if (!buffer)
/* This makes the buffer be freed by swsusp_free() */ /* This makes the buffer be freed by swsusp_free() */
buffer = get_image_page(GFP_ATOMIC, PG_ANY); buffer = get_image_page(GFP_ATOMIC, PG_ANY);
...@@ -2166,10 +2146,7 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count) ...@@ -2166,10 +2146,7 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count)
return -ENOMEM; return -ENOMEM;
handle->buffer = buffer; handle->buffer = buffer;
} } else if (handle->cur == 1) {
handle->sync_read = 1;
if (handle->prev < handle->cur) {
if (handle->prev == 0) {
error = load_header(buffer); error = load_header(buffer);
if (error) if (error)
return error; return error;
...@@ -2178,12 +2155,12 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count) ...@@ -2178,12 +2155,12 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count)
if (error) if (error)
return error; return error;
} else if (handle->prev <= nr_meta_pages) { } else if (handle->cur <= nr_meta_pages + 1) {
error = unpack_orig_pfns(buffer, &copy_bm); error = unpack_orig_pfns(buffer, &copy_bm);
if (error) if (error)
return error; return error;
if (handle->prev == nr_meta_pages) { if (handle->cur == nr_meta_pages + 1) {
error = prepare_image(&orig_bm, &copy_bm); error = prepare_image(&orig_bm, &copy_bm);
if (error) if (error)
return error; return error;
...@@ -2204,18 +2181,8 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count) ...@@ -2204,18 +2181,8 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count)
if (handle->buffer != buffer) if (handle->buffer != buffer)
handle->sync_read = 0; handle->sync_read = 0;
} }
handle->prev = handle->cur;
}
handle->buf_offset = handle->cur_offset;
if (handle->cur_offset + count >= PAGE_SIZE) {
count = PAGE_SIZE - handle->cur_offset;
handle->cur_offset = 0;
handle->cur++; handle->cur++;
} else { return PAGE_SIZE;
handle->cur_offset += count;
}
handle->offset += count;
return count;
} }
/** /**
...@@ -2230,7 +2197,7 @@ void snapshot_write_finalize(struct snapshot_handle *handle) ...@@ -2230,7 +2197,7 @@ void snapshot_write_finalize(struct snapshot_handle *handle)
{ {
copy_last_highmem_page(); copy_last_highmem_page();
/* Free only if we have loaded the image entirely */ /* Free only if we have loaded the image entirely */
if (handle->prev && handle->cur > nr_meta_pages + nr_copy_pages) { if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages) {
memory_bm_free(&orig_bm, PG_UNSAFE_CLEAR); memory_bm_free(&orig_bm, PG_UNSAFE_CLEAR);
free_highmem_data(); free_highmem_data();
} }
......
...@@ -29,6 +29,40 @@ ...@@ -29,6 +29,40 @@
#define SWSUSP_SIG "S1SUSPEND" #define SWSUSP_SIG "S1SUSPEND"
/*
* The swap map is a data structure used for keeping track of each page
* written to a swap partition. It consists of many swap_map_page
* structures, each containing an array of MAP_PAGE_ENTRIES swap entries.
* These structures are stored on the swap and linked together with the
* help of the .next_swap member.
*
* The swap map is created during suspend. The swap map pages are
* allocated and populated one at a time, so we only need one memory
* page to set up the entire structure.
*
* During resume we also only need to use one swap_map_page structure
* at a time.
*/
#define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1)
struct swap_map_page {
sector_t entries[MAP_PAGE_ENTRIES];
sector_t next_swap;
};
/**
* The swap_map_handle structure is used for handling swap in
* a file-like way
*/
struct swap_map_handle {
struct swap_map_page *cur;
sector_t cur_swap;
sector_t first_sector;
unsigned int k;
};
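On 4 KiB pages with an 8-byte sector_t this gives MAP_PAGE_ENTRIES = 511, so one map page indexes roughly 2 MiB of image data before chaining via .next_swap. A hedged sketch of the walk that the reader below performs (walk_swap_map and visit are illustrative, not part of the patch):

static int walk_swap_map(sector_t first, int (*visit)(sector_t entry))
{
        struct swap_map_page *map;
        unsigned int k;
        int error;

        map = (struct swap_map_page *)get_zeroed_page(GFP_KERNEL);
        if (!map)
                return -ENOMEM;
        error = hib_bio_read_page(first, map, NULL);    /* synchronous */
        while (!error) {
                for (k = 0; k < MAP_PAGE_ENTRIES; k++) {
                        if (!map->entries[k])   /* unused slots are zero */
                                goto out;
                        error = visit(map->entries[k]);
                        if (error)
                                goto out;
                }
                if (!map->next_swap)    /* end of the chain */
                        break;
                error = hib_bio_read_page(map->next_swap, map, NULL);
        }
out:
        free_page((unsigned long)map);
        return error;
}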
struct swsusp_header { struct swsusp_header {
char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int)]; char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int)];
sector_t image; sector_t image;
...@@ -145,110 +179,24 @@ int swsusp_swap_in_use(void) ...@@ -145,110 +179,24 @@ int swsusp_swap_in_use(void)
*/ */
static unsigned short root_swap = 0xffff; static unsigned short root_swap = 0xffff;
static struct block_device *resume_bdev; struct block_device *hib_resume_bdev;
/**
* submit - submit BIO request.
* @rw: READ or WRITE.
* @off physical offset of page.
* @page: page we're reading or writing.
* @bio_chain: list of pending biod (for async reading)
*
* Straight from the textbook - allocate and initialize the bio.
* If we're reading, make sure the page is marked as dirty.
* Then submit it and, if @bio_chain == NULL, wait.
*/
static int submit(int rw, pgoff_t page_off, struct page *page,
struct bio **bio_chain)
{
const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);
struct bio *bio;
bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);
bio->bi_sector = page_off * (PAGE_SIZE >> 9);
bio->bi_bdev = resume_bdev;
bio->bi_end_io = end_swap_bio_read;
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
printk(KERN_ERR "PM: Adding page to bio failed at %ld\n",
page_off);
bio_put(bio);
return -EFAULT;
}
lock_page(page);
bio_get(bio);
if (bio_chain == NULL) {
submit_bio(bio_rw, bio);
wait_on_page_locked(page);
if (rw == READ)
bio_set_pages_dirty(bio);
bio_put(bio);
} else {
if (rw == READ)
get_page(page); /* These pages are freed later */
bio->bi_private = *bio_chain;
*bio_chain = bio;
submit_bio(bio_rw, bio);
}
return 0;
}
static int bio_read_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(READ, page_off, virt_to_page(addr), bio_chain);
}
static int bio_write_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(WRITE, page_off, virt_to_page(addr), bio_chain);
}
static int wait_on_bio_chain(struct bio **bio_chain)
{
struct bio *bio;
struct bio *next_bio;
int ret = 0;
if (bio_chain == NULL)
return 0;
bio = *bio_chain;
if (bio == NULL)
return 0;
while (bio) {
struct page *page;
next_bio = bio->bi_private;
page = bio->bi_io_vec[0].bv_page;
wait_on_page_locked(page);
if (!PageUptodate(page) || PageError(page))
ret = -EIO;
put_page(page);
bio_put(bio);
bio = next_bio;
}
*bio_chain = NULL;
return ret;
}
/* /*
* Saving part * Saving part
*/ */
static int mark_swapfiles(sector_t start, unsigned int flags) static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
{ {
int error; int error;
bio_read_page(swsusp_resume_block, swsusp_header, NULL); hib_bio_read_page(swsusp_resume_block, swsusp_header, NULL);
if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) || if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) ||
!memcmp("SWAPSPACE2",swsusp_header->sig, 10)) { !memcmp("SWAPSPACE2",swsusp_header->sig, 10)) {
memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10); memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10);
memcpy(swsusp_header->sig,SWSUSP_SIG, 10); memcpy(swsusp_header->sig,SWSUSP_SIG, 10);
swsusp_header->image = start; swsusp_header->image = handle->first_sector;
swsusp_header->flags = flags; swsusp_header->flags = flags;
error = bio_write_page(swsusp_resume_block, error = hib_bio_write_page(swsusp_resume_block,
swsusp_header, NULL); swsusp_header, NULL);
} else { } else {
printk(KERN_ERR "PM: Swap header not found!\n"); printk(KERN_ERR "PM: Swap header not found!\n");
...@@ -260,25 +208,26 @@ static int mark_swapfiles(sector_t start, unsigned int flags) ...@@ -260,25 +208,26 @@ static int mark_swapfiles(sector_t start, unsigned int flags)
/** /**
* swsusp_swap_check - check if the resume device is a swap device * swsusp_swap_check - check if the resume device is a swap device
* and get its index (if so) * and get its index (if so)
*
* This is called before saving image
*/ */
static int swsusp_swap_check(void)
static int swsusp_swap_check(void) /* This is called before saving image */
{ {
int res; int res;
res = swap_type_of(swsusp_resume_device, swsusp_resume_block, res = swap_type_of(swsusp_resume_device, swsusp_resume_block,
&resume_bdev); &hib_resume_bdev);
if (res < 0) if (res < 0)
return res; return res;
root_swap = res; root_swap = res;
res = blkdev_get(resume_bdev, FMODE_WRITE); res = blkdev_get(hib_resume_bdev, FMODE_WRITE);
if (res) if (res)
return res; return res;
res = set_blocksize(resume_bdev, PAGE_SIZE); res = set_blocksize(hib_resume_bdev, PAGE_SIZE);
if (res < 0) if (res < 0)
blkdev_put(resume_bdev, FMODE_WRITE); blkdev_put(hib_resume_bdev, FMODE_WRITE);
return res; return res;
} }
...@@ -309,42 +258,9 @@ static int write_page(void *buf, sector_t offset, struct bio **bio_chain) ...@@ -309,42 +258,9 @@ static int write_page(void *buf, sector_t offset, struct bio **bio_chain)
} else { } else {
src = buf; src = buf;
} }
return bio_write_page(offset, src, bio_chain); return hib_bio_write_page(offset, src, bio_chain);
} }
/*
* The swap map is a data structure used for keeping track of each page
* written to a swap partition. It consists of many swap_map_page
* structures that contain each an array of MAP_PAGE_SIZE swap entries.
* These structures are stored on the swap and linked together with the
* help of the .next_swap member.
*
* The swap map is created during suspend. The swap map pages are
* allocated and populated one at a time, so we only need one memory
* page to set up the entire structure.
*
* During resume we also only need to use one swap_map_page structure
* at a time.
*/
#define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1)
struct swap_map_page {
sector_t entries[MAP_PAGE_ENTRIES];
sector_t next_swap;
};
/**
* The swap_map_handle structure is used for handling swap in
* a file-alike way
*/
struct swap_map_handle {
struct swap_map_page *cur;
sector_t cur_swap;
unsigned int k;
};
static void release_swap_writer(struct swap_map_handle *handle) static void release_swap_writer(struct swap_map_handle *handle)
{ {
if (handle->cur) if (handle->cur)
...@@ -354,16 +270,33 @@ static void release_swap_writer(struct swap_map_handle *handle) ...@@ -354,16 +270,33 @@ static void release_swap_writer(struct swap_map_handle *handle)
static int get_swap_writer(struct swap_map_handle *handle) static int get_swap_writer(struct swap_map_handle *handle)
{ {
int ret;
ret = swsusp_swap_check();
if (ret) {
if (ret != -ENOSPC)
printk(KERN_ERR "PM: Cannot find swap device, try "
"swapon -a.\n");
return ret;
}
handle->cur = (struct swap_map_page *)get_zeroed_page(GFP_KERNEL); handle->cur = (struct swap_map_page *)get_zeroed_page(GFP_KERNEL);
if (!handle->cur) if (!handle->cur) {
return -ENOMEM; ret = -ENOMEM;
goto err_close;
}
handle->cur_swap = alloc_swapdev_block(root_swap); handle->cur_swap = alloc_swapdev_block(root_swap);
if (!handle->cur_swap) { if (!handle->cur_swap) {
release_swap_writer(handle); ret = -ENOSPC;
return -ENOSPC; goto err_rel;
} }
handle->k = 0; handle->k = 0;
handle->first_sector = handle->cur_swap;
return 0; return 0;
err_rel:
release_swap_writer(handle);
err_close:
swsusp_close(FMODE_WRITE);
return ret;
} }
static int swap_write_page(struct swap_map_handle *handle, void *buf, static int swap_write_page(struct swap_map_handle *handle, void *buf,
...@@ -380,7 +313,7 @@ static int swap_write_page(struct swap_map_handle *handle, void *buf, ...@@ -380,7 +313,7 @@ static int swap_write_page(struct swap_map_handle *handle, void *buf,
return error; return error;
handle->cur->entries[handle->k++] = offset; handle->cur->entries[handle->k++] = offset;
if (handle->k >= MAP_PAGE_ENTRIES) { if (handle->k >= MAP_PAGE_ENTRIES) {
error = wait_on_bio_chain(bio_chain); error = hib_wait_on_bio_chain(bio_chain);
if (error) if (error)
goto out; goto out;
offset = alloc_swapdev_block(root_swap); offset = alloc_swapdev_block(root_swap);
...@@ -406,6 +339,24 @@ static int flush_swap_writer(struct swap_map_handle *handle) ...@@ -406,6 +339,24 @@ static int flush_swap_writer(struct swap_map_handle *handle)
return -EINVAL; return -EINVAL;
} }
static int swap_writer_finish(struct swap_map_handle *handle,
unsigned int flags, int error)
{
if (!error) {
flush_swap_writer(handle);
printk(KERN_INFO "PM: S");
error = mark_swapfiles(handle, flags);
printk("|\n");
}
if (error)
free_all_swap_pages(root_swap);
release_swap_writer(handle);
swsusp_close(FMODE_WRITE);
return error;
}
/** /**
* save_image - save the suspend image data * save_image - save the suspend image data
*/ */
...@@ -431,7 +382,7 @@ static int save_image(struct swap_map_handle *handle, ...@@ -431,7 +382,7 @@ static int save_image(struct swap_map_handle *handle,
bio = NULL; bio = NULL;
do_gettimeofday(&start); do_gettimeofday(&start);
while (1) { while (1) {
ret = snapshot_read_next(snapshot, PAGE_SIZE); ret = snapshot_read_next(snapshot);
if (ret <= 0) if (ret <= 0)
break; break;
ret = swap_write_page(handle, data_of(*snapshot), &bio); ret = swap_write_page(handle, data_of(*snapshot), &bio);
...@@ -441,7 +392,7 @@ static int save_image(struct swap_map_handle *handle, ...@@ -441,7 +392,7 @@ static int save_image(struct swap_map_handle *handle,
printk(KERN_CONT "\b\b\b\b%3d%%", nr_pages / m); printk(KERN_CONT "\b\b\b\b%3d%%", nr_pages / m);
nr_pages++; nr_pages++;
} }
err2 = wait_on_bio_chain(&bio); err2 = hib_wait_on_bio_chain(&bio);
do_gettimeofday(&stop); do_gettimeofday(&stop);
if (!ret) if (!ret)
ret = err2; ret = err2;
...@@ -483,50 +434,34 @@ int swsusp_write(unsigned int flags) ...@@ -483,50 +434,34 @@ int swsusp_write(unsigned int flags)
struct swap_map_handle handle; struct swap_map_handle handle;
struct snapshot_handle snapshot; struct snapshot_handle snapshot;
struct swsusp_info *header; struct swsusp_info *header;
unsigned long pages;
int error; int error;
error = swsusp_swap_check(); pages = snapshot_get_image_size();
error = get_swap_writer(&handle);
if (error) { if (error) {
printk(KERN_ERR "PM: Cannot find swap device, try " printk(KERN_ERR "PM: Cannot get swap writer\n");
"swapon -a.\n");
return error; return error;
} }
if (!enough_swap(pages)) {
printk(KERN_ERR "PM: Not enough free swap\n");
error = -ENOSPC;
goto out_finish;
}
memset(&snapshot, 0, sizeof(struct snapshot_handle)); memset(&snapshot, 0, sizeof(struct snapshot_handle));
error = snapshot_read_next(&snapshot, PAGE_SIZE); error = snapshot_read_next(&snapshot);
if (error < PAGE_SIZE) { if (error < PAGE_SIZE) {
if (error >= 0) if (error >= 0)
error = -EFAULT; error = -EFAULT;
goto out; goto out_finish;
} }
header = (struct swsusp_info *)data_of(snapshot); header = (struct swsusp_info *)data_of(snapshot);
if (!enough_swap(header->pages)) {
printk(KERN_ERR "PM: Not enough free swap\n");
error = -ENOSPC;
goto out;
}
error = get_swap_writer(&handle);
if (!error) {
sector_t start = handle.cur_swap;
error = swap_write_page(&handle, header, NULL); error = swap_write_page(&handle, header, NULL);
if (!error) if (!error)
error = save_image(&handle, &snapshot, error = save_image(&handle, &snapshot, pages - 1);
header->pages - 1); out_finish:
error = swap_writer_finish(&handle, flags, error);
if (!error) {
flush_swap_writer(&handle);
printk(KERN_INFO "PM: S");
error = mark_swapfiles(start, flags);
printk("|\n");
}
}
if (error)
free_all_swap_pages(root_swap);
release_swap_writer(&handle);
out:
swsusp_close(FMODE_WRITE);
return error; return error;
} }
...@@ -542,18 +477,21 @@ static void release_swap_reader(struct swap_map_handle *handle) ...@@ -542,18 +477,21 @@ static void release_swap_reader(struct swap_map_handle *handle)
handle->cur = NULL; handle->cur = NULL;
} }
static int get_swap_reader(struct swap_map_handle *handle, sector_t start) static int get_swap_reader(struct swap_map_handle *handle,
unsigned int *flags_p)
{ {
int error; int error;
if (!start) *flags_p = swsusp_header->flags;
if (!swsusp_header->image) /* how can this happen? */
return -EINVAL; return -EINVAL;
handle->cur = (struct swap_map_page *)get_zeroed_page(__GFP_WAIT | __GFP_HIGH); handle->cur = (struct swap_map_page *)get_zeroed_page(__GFP_WAIT | __GFP_HIGH);
if (!handle->cur) if (!handle->cur)
return -ENOMEM; return -ENOMEM;
error = bio_read_page(start, handle->cur, NULL); error = hib_bio_read_page(swsusp_header->image, handle->cur, NULL);
if (error) { if (error) {
release_swap_reader(handle); release_swap_reader(handle);
return error; return error;
...@@ -573,21 +511,28 @@ static int swap_read_page(struct swap_map_handle *handle, void *buf, ...@@ -573,21 +511,28 @@ static int swap_read_page(struct swap_map_handle *handle, void *buf,
offset = handle->cur->entries[handle->k]; offset = handle->cur->entries[handle->k];
if (!offset) if (!offset)
return -EFAULT; return -EFAULT;
error = bio_read_page(offset, buf, bio_chain); error = hib_bio_read_page(offset, buf, bio_chain);
if (error) if (error)
return error; return error;
if (++handle->k >= MAP_PAGE_ENTRIES) { if (++handle->k >= MAP_PAGE_ENTRIES) {
error = wait_on_bio_chain(bio_chain); error = hib_wait_on_bio_chain(bio_chain);
handle->k = 0; handle->k = 0;
offset = handle->cur->next_swap; offset = handle->cur->next_swap;
if (!offset) if (!offset)
release_swap_reader(handle); release_swap_reader(handle);
else if (!error) else if (!error)
error = bio_read_page(offset, handle->cur, NULL); error = hib_bio_read_page(offset, handle->cur, NULL);
} }
return error; return error;
} }
static int swap_reader_finish(struct swap_map_handle *handle)
{
release_swap_reader(handle);
return 0;
}
/** /**
* load_image - load the image using the swap map handle * load_image - load the image using the swap map handle
* @handle and the snapshot handle @snapshot * @handle and the snapshot handle @snapshot
...@@ -615,21 +560,21 @@ static int load_image(struct swap_map_handle *handle, ...@@ -615,21 +560,21 @@ static int load_image(struct swap_map_handle *handle,
bio = NULL; bio = NULL;
do_gettimeofday(&start); do_gettimeofday(&start);
for ( ; ; ) { for ( ; ; ) {
error = snapshot_write_next(snapshot, PAGE_SIZE); error = snapshot_write_next(snapshot);
if (error <= 0) if (error <= 0)
break; break;
error = swap_read_page(handle, data_of(*snapshot), &bio); error = swap_read_page(handle, data_of(*snapshot), &bio);
if (error) if (error)
break; break;
if (snapshot->sync_read) if (snapshot->sync_read)
error = wait_on_bio_chain(&bio); error = hib_wait_on_bio_chain(&bio);
if (error) if (error)
break; break;
if (!(nr_pages % m)) if (!(nr_pages % m))
printk("\b\b\b\b%3d%%", nr_pages / m); printk("\b\b\b\b%3d%%", nr_pages / m);
nr_pages++; nr_pages++;
} }
err2 = wait_on_bio_chain(&bio); err2 = hib_wait_on_bio_chain(&bio);
do_gettimeofday(&stop); do_gettimeofday(&stop);
if (!error) if (!error)
error = err2; error = err2;
...@@ -657,20 +602,20 @@ int swsusp_read(unsigned int *flags_p) ...@@ -657,20 +602,20 @@ int swsusp_read(unsigned int *flags_p)
struct snapshot_handle snapshot; struct snapshot_handle snapshot;
struct swsusp_info *header; struct swsusp_info *header;
*flags_p = swsusp_header->flags;
memset(&snapshot, 0, sizeof(struct snapshot_handle)); memset(&snapshot, 0, sizeof(struct snapshot_handle));
error = snapshot_write_next(&snapshot, PAGE_SIZE); error = snapshot_write_next(&snapshot);
if (error < PAGE_SIZE) if (error < PAGE_SIZE)
return error < 0 ? error : -EFAULT; return error < 0 ? error : -EFAULT;
header = (struct swsusp_info *)data_of(snapshot); header = (struct swsusp_info *)data_of(snapshot);
error = get_swap_reader(&handle, swsusp_header->image); error = get_swap_reader(&handle, flags_p);
if (error)
goto end;
if (!error) if (!error)
error = swap_read_page(&handle, header, NULL); error = swap_read_page(&handle, header, NULL);
if (!error) if (!error)
error = load_image(&handle, &snapshot, header->pages - 1); error = load_image(&handle, &snapshot, header->pages - 1);
release_swap_reader(&handle); swap_reader_finish(&handle);
end:
if (!error) if (!error)
pr_debug("PM: Image successfully loaded\n"); pr_debug("PM: Image successfully loaded\n");
else else
...@@ -686,11 +631,11 @@ int swsusp_check(void) ...@@ -686,11 +631,11 @@ int swsusp_check(void)
{ {
int error; int error;
resume_bdev = open_by_devnum(swsusp_resume_device, FMODE_READ); hib_resume_bdev = open_by_devnum(swsusp_resume_device, FMODE_READ);
if (!IS_ERR(resume_bdev)) { if (!IS_ERR(hib_resume_bdev)) {
set_blocksize(resume_bdev, PAGE_SIZE); set_blocksize(hib_resume_bdev, PAGE_SIZE);
memset(swsusp_header, 0, PAGE_SIZE); memset(swsusp_header, 0, PAGE_SIZE);
error = bio_read_page(swsusp_resume_block, error = hib_bio_read_page(swsusp_resume_block,
swsusp_header, NULL); swsusp_header, NULL);
if (error) if (error)
goto put; goto put;
...@@ -698,7 +643,7 @@ int swsusp_check(void) ...@@ -698,7 +643,7 @@ int swsusp_check(void)
if (!memcmp(SWSUSP_SIG, swsusp_header->sig, 10)) { if (!memcmp(SWSUSP_SIG, swsusp_header->sig, 10)) {
memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10); memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
/* Reset swap signature now */ /* Reset swap signature now */
error = bio_write_page(swsusp_resume_block, error = hib_bio_write_page(swsusp_resume_block,
swsusp_header, NULL); swsusp_header, NULL);
} else { } else {
error = -EINVAL; error = -EINVAL;
...@@ -706,11 +651,11 @@ int swsusp_check(void) ...@@ -706,11 +651,11 @@ int swsusp_check(void)
put: put:
if (error) if (error)
blkdev_put(resume_bdev, FMODE_READ); blkdev_put(hib_resume_bdev, FMODE_READ);
else else
pr_debug("PM: Signature found, resuming\n"); pr_debug("PM: Signature found, resuming\n");
} else { } else {
error = PTR_ERR(resume_bdev); error = PTR_ERR(hib_resume_bdev);
} }
if (error) if (error)
...@@ -725,12 +670,12 @@ int swsusp_check(void) ...@@ -725,12 +670,12 @@ int swsusp_check(void)
void swsusp_close(fmode_t mode) void swsusp_close(fmode_t mode)
{ {
if (IS_ERR(resume_bdev)) { if (IS_ERR(hib_resume_bdev)) {
pr_debug("PM: Image device not initialised\n"); pr_debug("PM: Image device not initialised\n");
return; return;
} }
blkdev_put(resume_bdev, mode); blkdev_put(hib_resume_bdev, mode);
} }
static int swsusp_header_init(void) static int swsusp_header_init(void)
......
...@@ -151,6 +151,7 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf, ...@@ -151,6 +151,7 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
{ {
struct snapshot_data *data; struct snapshot_data *data;
ssize_t res; ssize_t res;
loff_t pg_offp = *offp & ~PAGE_MASK;
mutex_lock(&pm_mutex); mutex_lock(&pm_mutex);
...@@ -159,14 +160,19 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf, ...@@ -159,14 +160,19 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
res = -ENODATA; res = -ENODATA;
goto Unlock; goto Unlock;
} }
res = snapshot_read_next(&data->handle, count); if (!pg_offp) { /* on page boundary? */
if (res > 0) { res = snapshot_read_next(&data->handle);
if (copy_to_user(buf, data_of(data->handle), res)) if (res <= 0)
res = -EFAULT; goto Unlock;
else } else {
*offp = data->handle.offset; res = PAGE_SIZE - pg_offp;
} }
res = simple_read_from_buffer(buf, count, &pg_offp,
data_of(data->handle), res);
if (res > 0)
*offp += res;
Unlock: Unlock:
mutex_unlock(&pm_mutex); mutex_unlock(&pm_mutex);
...@@ -178,18 +184,25 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf, ...@@ -178,18 +184,25 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
{ {
struct snapshot_data *data; struct snapshot_data *data;
ssize_t res; ssize_t res;
loff_t pg_offp = *offp & ~PAGE_MASK;
mutex_lock(&pm_mutex); mutex_lock(&pm_mutex);
data = filp->private_data; data = filp->private_data;
res = snapshot_write_next(&data->handle, count);
if (res > 0) { if (!pg_offp) {
if (copy_from_user(data_of(data->handle), buf, res)) res = snapshot_write_next(&data->handle);
res = -EFAULT; if (res <= 0)
else goto unlock;
*offp = data->handle.offset; } else {
res = PAGE_SIZE - pg_offp;
} }
res = simple_write_to_buffer(data_of(data->handle), res, &pg_offp,
buf, count);
if (res > 0)
*offp += res;
unlock:
mutex_unlock(&pm_mutex); mutex_unlock(&pm_mutex);
return res; return res;
......
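The pg_offp bookkeeping above means a new snapshot page is fetched only when the file offset sits exactly on a page boundary; otherwise the remainder of the page already held in handle->buffer is served via simple_read_from_buffer()/simple_write_to_buffer(). The mask arithmetic, as a standalone userspace check (assuming 4 KiB pages):

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
        unsigned long offp = 4096 + 100;        /* 100 bytes into page 1 */
        unsigned long pg_offp = offp & ~PAGE_MASK;

        assert(pg_offp == 100);
        /* Not on a boundary, so at most PAGE_SIZE - pg_offp bytes can be
         * served from the page currently in handle->buffer. */
        printf("%lu bytes left in the current page\n", PAGE_SIZE - pg_offp);
        return 0;
}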
...@@ -495,7 +495,7 @@ void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency) ...@@ -495,7 +495,7 @@ void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency)
s32 beaconint_us; s32 beaconint_us;
if (latency < 0) if (latency < 0)
latency = pm_qos_requirement(PM_QOS_NETWORK_LATENCY); latency = pm_qos_request(PM_QOS_NETWORK_LATENCY);
beaconint_us = ieee80211_tu_to_usec( beaconint_us = ieee80211_tu_to_usec(
found->vif.bss_conf.beacon_int); found->vif.bss_conf.beacon_int);
......
...@@ -648,9 +648,6 @@ int snd_pcm_new_stream(struct snd_pcm *pcm, int stream, int substream_count) ...@@ -648,9 +648,6 @@ int snd_pcm_new_stream(struct snd_pcm *pcm, int stream, int substream_count)
substream->number = idx; substream->number = idx;
substream->stream = stream; substream->stream = stream;
sprintf(substream->name, "subdevice #%i", idx); sprintf(substream->name, "subdevice #%i", idx);
snprintf(substream->latency_id, sizeof(substream->latency_id),
"ALSA-PCM%d-%d%c%d", pcm->card->number, pcm->device,
(stream ? 'c' : 'p'), idx);
substream->buffer_bytes_max = UINT_MAX; substream->buffer_bytes_max = UINT_MAX;
if (prev == NULL) if (prev == NULL)
pstr->substream = substream; pstr->substream = substream;
......
...@@ -484,11 +484,13 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream, ...@@ -484,11 +484,13 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
snd_pcm_timer_resolution_change(substream); snd_pcm_timer_resolution_change(substream);
runtime->status->state = SNDRV_PCM_STATE_SETUP; runtime->status->state = SNDRV_PCM_STATE_SETUP;
pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, if (substream->latency_pm_qos_req) {
substream->latency_id); pm_qos_remove_request(substream->latency_pm_qos_req);
substream->latency_pm_qos_req = NULL;
}
if ((usecs = period_to_usecs(runtime)) >= 0) if ((usecs = period_to_usecs(runtime)) >= 0)
pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, substream->latency_pm_qos_req = pm_qos_add_request(
substream->latency_id, usecs); PM_QOS_CPU_DMA_LATENCY, usecs);
return 0; return 0;
_error: _error:
/* hardware might be unuseable from this time, /* hardware might be unuseable from this time,
...@@ -543,8 +545,8 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream) ...@@ -543,8 +545,8 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
if (substream->ops->hw_free) if (substream->ops->hw_free)
result = substream->ops->hw_free(substream); result = substream->ops->hw_free(substream);
runtime->status->state = SNDRV_PCM_STATE_OPEN; runtime->status->state = SNDRV_PCM_STATE_OPEN;
pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, pm_qos_remove_request(substream->latency_pm_qos_req);
substream->latency_id); substream->latency_pm_qos_req = NULL;
return result; return result;
} }
......
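Both sound/core hunks show the same migration pattern: the per-name (class, string, value) triple becomes an opaque handle that the driver must store and later free. A condensed sketch of the kernel API as this patch defines it (the my_dev_* names are illustrative):

#include <linux/pm_qos_params.h>

static struct pm_qos_request_list *my_dev_qos_req;      /* saved handle */

static int my_dev_start(void)
{
        /* request <= 50 us CPU/DMA latency while the device is active;
         * pm_qos_add_request() returns NULL if allocation fails */
        my_dev_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY, 50);
        return my_dev_qos_req ? 0 : -ENOMEM;
}

static void my_dev_adjust(s32 usecs)
{
        pm_qos_update_request(my_dev_qos_req, usecs);
}

static void my_dev_stop(void)
{
        pm_qos_remove_request(my_dev_qos_req);
        my_dev_qos_req = NULL;
}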