openeuler / raspberrypi-kernel
Commit a5950f26
Authored 7 years ago by Rafael J. Wysocki

Merge back suspend/resume/hibernate material for v4.15.

Parents: 87cbde8d, eb672c02

Changes: 6 changed files with 101 additions and 117 deletions (+101 / -117)
Documentation/driver-api/pm/devices.rst    +24   -1
arch/arm/common/locomo.c                    +0  -24
arch/arm/include/asm/hardware/locomo.h      +0   -2
kernel/power/qos.c                          +2   -2
kernel/power/snapshot.c                    +18  -17
kernel/power/swap.c                        +57  -71
Documentation/driver-api/pm/devices.rst

@@ -328,7 +328,10 @@ the phases are: ``prepare``, ``suspend``, ``suspend_late``, ``suspend_noirq``.
         After the ``->prepare`` callback method returns, no new children may be
         registered below the device.  The method may also prepare the device or
         driver in some way for the upcoming system power transition, but it
-        should not put the device into a low-power state.
+        should not put the device into a low-power state.  Moreover, if the
+        device supports runtime power management, the ``->prepare`` callback
+        method must not update its state in case it is necessary to resume it
+        from runtime suspend later on.
 
         For devices supporting runtime power management, the return value of the
         prepare callback can be used to indicate to the PM core that it may
@@ -356,6 +359,16 @@ the phases are: ``prepare``, ``suspend``, ``suspend_late``, ``suspend_noirq``.
         the appropriate low-power state, depending on the bus type the device is
         on, and they may enable wakeup events.
 
+        However, for devices supporting runtime power management, the
+        ``->suspend`` methods provided by subsystems (bus types and PM domains
+        in particular) must follow an additional rule regarding what can be done
+        to the devices before their drivers' ``->suspend`` methods are called.
+        Namely, they can only resume the devices from runtime suspend by
+        calling :c:func:`pm_runtime_resume` for them, if that is necessary, and
+        they must not update the state of the devices in any other way at that
+        time (in case the drivers need to resume the devices from runtime
+        suspend in their ``->suspend`` methods).
+
 3.      For a number of devices it is convenient to split suspend into the
         "quiesce device" and "save device state" phases, in which cases
         ``suspend_late`` is meant to do the latter.  It is always executed after
@@ -729,6 +742,16 @@ state temporarily, for example so that its system wakeup capability can be
 disabled.  This all depends on the hardware and the design of the subsystem and
 device driver in question.
 
+If it is necessary to resume a device from runtime suspend during a system-wide
+transition into a sleep state, that can be done by calling
+:c:func:`pm_runtime_resume` for it from the ``->suspend`` callback (or its
+couterpart for transitions related to hibernation) of either the device's driver
+or a subsystem responsible for it (for example, a bus type or a PM domain).
+That is guaranteed to work by the requirement that subsystems must not change
+the state of devices (possibly except for resuming them from runtime suspend)
+from their ``->prepare`` and ``->suspend`` callbacks (or equivalent) *before*
+invoking device drivers' ``->suspend`` callbacks (or equivalent).
+
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
 Refer to that document for more information regarding this particular issue as
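The documentation text added above says that a driver (or its subsystem) may bring a runtime-suspended device back to full power during a system-wide suspend by calling pm_runtime_resume(), and that nothing else may touch the device's state before the driver's own ->suspend callback runs. A minimal sketch of that pattern follows. It is not part of this commit: the "foo" driver, its private data and register accesses are hypothetical, and only pm_runtime_resume(), dev_get_drvdata(), struct dev_pm_ops and SET_SYSTEM_SLEEP_PM_OPS are real kernel interfaces.

/*
 * Illustration only (not from this commit): a hypothetical driver whose
 * system-sleep ->suspend callback follows the documented rule -- the
 * device may still be runtime-suspended when this runs, so resume it
 * first if its state has to be read before the transition.
 */
#include <linux/device.h>
#include <linux/io.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

struct foo_priv {
	void __iomem *regs;	/* mapped in probe() (not shown) */
	u32 saved_ctrl;
};

static int __maybe_unused foo_suspend(struct device *dev)
{
	struct foo_priv *priv = dev_get_drvdata(dev);
	int ret;

	/* Allowed here (and in the subsystem before us): runtime resume only. */
	ret = pm_runtime_resume(dev);
	if (ret < 0)
		return ret;

	/* The device is now at full power, so its state can be saved. */
	priv->saved_ctrl = readl(priv->regs);
	return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
	struct foo_priv *priv = dev_get_drvdata(dev);

	writel(priv->saved_ctrl, priv->regs);
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
};

With this shape, the subsystem sitting above the driver may also call pm_runtime_resume() for the device beforehand, but must not change its state in any other way, which is exactly the constraint the new documentation spells out.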
arch/arm/common/locomo.c

@@ -826,28 +826,6 @@ static int locomo_match(struct device *_dev, struct device_driver *_drv)
         return dev->devid == drv->devid;
 }
 
-static int locomo_bus_suspend(struct device *dev, pm_message_t state)
-{
-        struct locomo_dev *ldev = LOCOMO_DEV(dev);
-        struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
-        int ret = 0;
-
-        if (drv && drv->suspend)
-                ret = drv->suspend(ldev, state);
-        return ret;
-}
-
-static int locomo_bus_resume(struct device *dev)
-{
-        struct locomo_dev *ldev = LOCOMO_DEV(dev);
-        struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
-        int ret = 0;
-
-        if (drv && drv->resume)
-                ret = drv->resume(ldev);
-        return ret;
-}
-
 static int locomo_bus_probe(struct device *dev)
 {
         struct locomo_dev *ldev = LOCOMO_DEV(dev);
@@ -875,8 +853,6 @@ struct bus_type locomo_bus_type = {
         .match          = locomo_match,
         .probe          = locomo_bus_probe,
         .remove         = locomo_bus_remove,
-        .suspend        = locomo_bus_suspend,
-        .resume         = locomo_bus_resume,
 };
 
 int locomo_driver_register(struct locomo_driver *driver)
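The two functions deleted above were only reachable through the legacy struct bus_type .suspend/.resume hooks, which are also dropped from locomo_bus_type. For orientation only (this sketch is not part of the commit, and "examplebus" is a made-up name), the modern way for a bus to take part in system sleep is a dev_pm_ops table referenced from the real bus_type.pm field:

/* Hypothetical bus, shown only to illustrate dev_pm_ops as the
 * replacement for the legacy bus_type .suspend/.resume hooks. */
#include <linux/device.h>
#include <linux/pm.h>

static int __maybe_unused examplebus_suspend(struct device *dev)
{
	/* Per-device system-sleep handling for this bus would go here. */
	return 0;
}

static int __maybe_unused examplebus_resume(struct device *dev)
{
	return 0;
}

static const struct dev_pm_ops examplebus_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(examplebus_suspend, examplebus_resume)
};

struct bus_type examplebus_bus_type = {
	.name	= "examplebus",
	.pm	= &examplebus_pm_ops,
};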
arch/arm/include/asm/hardware/locomo.h

@@ -189,8 +189,6 @@ struct locomo_driver {
         unsigned int            devid;
         int (*probe)(struct locomo_dev *);
         int (*remove)(struct locomo_dev *);
-        int (*suspend)(struct locomo_dev *, pm_message_t);
-        int (*resume)(struct locomo_dev *);
 };
 
 #define LOCOMO_DRV(_d)  container_of((_d), struct locomo_driver, drv)
kernel/power/qos.c

@@ -701,8 +701,8 @@ static int __init pm_qos_power_init(void)
         for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
                 ret = register_pm_qos_misc(pm_qos_array[i], d);
                 if (ret < 0) {
-                        printk(KERN_ERR "pm_qos_param: %s setup failed\n",
-                               pm_qos_array[i]->name);
+                        pr_err("%s: %s setup failed\n",
+                               __func__, pm_qos_array[i]->name);
                         return ret;
                 }
         }
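The only change here is the logging call: the hand-written "pm_qos_param" prefix is replaced by __func__, so the message still identifies where it came from but cannot go stale if the function is renamed. A trivial, hypothetical sketch of the resulting style (demo_init and the "foo" argument are made up):

#include <linux/init.h>
#include <linux/printk.h>

static int __init demo_init(void)
{
	/* Prints e.g. "demo_init: foo setup failed"; the function name is
	 * supplied by the compiler through the C99 __func__ identifier
	 * rather than being hard-coded in the format string. */
	pr_err("%s: %s setup failed\n", __func__, "foo");
	return 0;
}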
kernel/power/snapshot.c

@@ -10,6 +10,8 @@
  *
  */
 
+#define pr_fmt(fmt) "PM: " fmt
+
 #include <linux/version.h>
 #include <linux/module.h>
 #include <linux/mm.h>
@@ -967,7 +969,7 @@ void __init __register_nosave_region(unsigned long start_pfn,
         region->end_pfn = end_pfn;
         list_add_tail(&region->list, &nosave_regions);
  Report:
-        printk(KERN_INFO "PM: Registered nosave memory: [mem %#010llx-%#010llx]\n",
+        pr_info("Registered nosave memory: [mem %#010llx-%#010llx]\n",
                 (unsigned long long) start_pfn << PAGE_SHIFT,
                 ((unsigned long long) end_pfn << PAGE_SHIFT) - 1);
 }
@@ -1039,7 +1041,7 @@ static void mark_nosave_pages(struct memory_bitmap *bm)
         list_for_each_entry(region, &nosave_regions, list) {
                 unsigned long pfn;
 
-                pr_debug("PM: Marking nosave pages: [mem %#010llx-%#010llx]\n",
+                pr_debug("Marking nosave pages: [mem %#010llx-%#010llx]\n",
                          (unsigned long long) region->start_pfn << PAGE_SHIFT,
                          ((unsigned long long) region->end_pfn << PAGE_SHIFT)
                                 - 1);
@@ -1095,7 +1097,7 @@ int create_basic_memory_bitmaps(void)
         free_pages_map = bm2;
         mark_nosave_pages(forbidden_pages_map);
 
-        pr_debug("PM: Basic memory bitmaps created\n");
+        pr_debug("Basic memory bitmaps created\n");
 
         return 0;
@@ -1131,7 +1133,7 @@ void free_basic_memory_bitmaps(void)
         memory_bm_free(bm2, PG_UNSAFE_CLEAR);
         kfree(bm2);
 
-        pr_debug("PM: Basic memory bitmaps freed\n");
+        pr_debug("Basic memory bitmaps freed\n");
 }
 
 void clear_free_pages(void)
@@ -1152,7 +1154,7 @@ void clear_free_pages(void)
                 pfn = memory_bm_next_pfn(bm);
         }
         memory_bm_position_reset(bm);
-        pr_info("PM: free pages cleared after restore\n");
+        pr_info("free pages cleared after restore\n");
 #endif /* PAGE_POISONING_ZERO */
 }
@@ -1690,7 +1692,7 @@ int hibernate_preallocate_memory(void)
         ktime_t start, stop;
         int error;
 
-        printk(KERN_INFO "PM: Preallocating image memory... ");
+        pr_info("Preallocating image memory... ");
         start = ktime_get();
 
         error = memory_bm_create(&orig_bm, GFP_IMAGE, PG_ANY);
@@ -1821,13 +1823,13 @@ int hibernate_preallocate_memory(void)
  out:
         stop = ktime_get();
-        printk(KERN_CONT "done (allocated %lu pages)\n", pages);
+        pr_cont("done (allocated %lu pages)\n", pages);
         swsusp_show_speed(start, stop, pages, "Allocated");
 
         return 0;
 
  err_out:
-        printk(KERN_CONT "\n");
+        pr_cont("\n");
         swsusp_free();
         return -ENOMEM;
 }
@@ -1867,8 +1869,8 @@ static int enough_free_mem(unsigned int nr_pages, unsigned int nr_highmem)
                 free += zone_page_state(zone, NR_FREE_PAGES);
 
         nr_pages += count_pages_for_highmem(nr_highmem);
-        pr_debug("PM: Normal pages needed: %u + %u, available pages: %u\n",
-                nr_pages, PAGES_FOR_IO, free);
+        pr_debug("Normal pages needed: %u + %u, available pages: %u\n",
+                 nr_pages, PAGES_FOR_IO, free);
 
         return free > nr_pages + PAGES_FOR_IO;
 }
@@ -1961,20 +1963,20 @@ asmlinkage __visible int swsusp_save(void)
 {
         unsigned int nr_pages, nr_highmem;
 
-        printk(KERN_INFO "PM: Creating hibernation image:\n");
+        pr_info("Creating hibernation image:\n");
 
         drain_local_pages(NULL);
         nr_pages = count_data_pages();
         nr_highmem = count_highmem_pages();
-        printk(KERN_INFO "PM: Need to copy %u pages\n", nr_pages + nr_highmem);
+        pr_info("Need to copy %u pages\n", nr_pages + nr_highmem);
 
         if (!enough_free_mem(nr_pages, nr_highmem)) {
-                printk(KERN_ERR "PM: Not enough free memory\n");
+                pr_err("Not enough free memory\n");
                 return -ENOMEM;
         }
 
         if (swsusp_alloc(&copy_bm, nr_pages, nr_highmem)) {
-                printk(KERN_ERR "PM: Memory allocation failed\n");
+                pr_err("Memory allocation failed\n");
                 return -ENOMEM;
         }
@@ -1995,8 +1997,7 @@ asmlinkage __visible int swsusp_save(void)
         nr_copy_pages = nr_pages;
         nr_meta_pages = DIV_ROUND_UP(nr_pages * sizeof(long), PAGE_SIZE);
 
-        printk(KERN_INFO "PM: Hibernation image created (%d pages copied)\n",
-                nr_pages);
+        pr_info("Hibernation image created (%d pages copied)\n", nr_pages);
 
         return 0;
 }
@@ -2170,7 +2171,7 @@ static int check_header(struct swsusp_info *info)
         if (!reason && info->num_physpages != get_num_physpages())
                 reason = "memory size";
         if (reason) {
-                printk(KERN_ERR "PM: Image mismatch: %s\n", reason);
+                pr_err("Image mismatch: %s\n", reason);
                 return -EPERM;
         }
         return 0;
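The change that makes all of the shorter call sites above possible is the pr_fmt definition added at the top of the file: the pr_info()/pr_err()/pr_debug() macros in <linux/printk.h> pass their format string through pr_fmt(), so defining it before the headers are pulled in prepends "PM: " to every message in this file and the literal prefix can be dropped from each call. A minimal sketch of the mechanism (the demo function is hypothetical; the message is taken from this diff):

/* Must appear before <linux/printk.h> is included, directly or via
 * another header; otherwise the default no-op pr_fmt() is used. */
#define pr_fmt(fmt) "PM: " fmt

#include <linux/printk.h>

static void demo(void)
{
	/* Logged as "PM: Creating hibernation image:" -- the prefix comes
	 * from pr_fmt(), not from the format string written here. */
	pr_info("Creating hibernation image:\n");
}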
kernel/power/swap.c

@@ -12,6 +12,8 @@
  *
  */
 
+#define pr_fmt(fmt) "PM: " fmt
+
 #include <linux/module.h>
 #include <linux/file.h>
 #include <linux/delay.h>
@@ -241,9 +243,9 @@ static void hib_end_io(struct bio *bio)
         struct page *page = bio->bi_io_vec[0].bv_page;
 
         if (bio->bi_status) {
-                printk(KERN_ALERT "Read-error on swap-device (%u:%u:%Lu)\n",
-                                MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)),
-                                (unsigned long long)bio->bi_iter.bi_sector);
+                pr_alert("Read-error on swap-device (%u:%u:%Lu)\n",
+                         MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)),
+                         (unsigned long long)bio->bi_iter.bi_sector);
         }
 
         if (bio_data_dir(bio) == WRITE)
@@ -273,8 +275,8 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
         bio_set_op_attrs(bio, op, op_flags);
 
         if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
-                printk(KERN_ERR "PM: Adding page to bio failed at %llu\n",
-                        (unsigned long long)bio->bi_iter.bi_sector);
+                pr_err("Adding page to bio failed at %llu\n",
+                       (unsigned long long)bio->bi_iter.bi_sector);
                 bio_put(bio);
                 return -EFAULT;
         }
@@ -319,7 +321,7 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
                 error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
                                       swsusp_resume_block, swsusp_header, NULL);
         } else {
-                printk(KERN_ERR "PM: Swap header not found!\n");
+                pr_err("Swap header not found!\n");
                 error = -ENODEV;
         }
         return error;
@@ -413,8 +415,7 @@ static int get_swap_writer(struct swap_map_handle *handle)
         ret = swsusp_swap_check();
         if (ret) {
                 if (ret != -ENOSPC)
-                        printk(KERN_ERR "PM: Cannot find swap device, try "
-                                        "swapon -a.\n");
+                        pr_err("Cannot find swap device, try swapon -a\n");
                 return ret;
         }
         handle->cur = (struct swap_map_page *)get_zeroed_page(GFP_KERNEL);
@@ -491,9 +492,9 @@ static int swap_writer_finish(struct swap_map_handle *handle,
 {
         if (!error) {
                 flush_swap_writer(handle);
-                printk(KERN_INFO "PM: S");
+                pr_info("S");
                 error = mark_swapfiles(handle, flags);
-                printk("|\n");
+                pr_cont("|\n");
         }
 
         if (error)
@@ -542,7 +543,7 @@ static int save_image(struct swap_map_handle *handle,
         hib_init_batch(&hb);
 
-        printk(KERN_INFO "PM: Saving image data pages (%u pages)...\n",
+        pr_info("Saving image data pages (%u pages)...\n",
                 nr_to_write);
         m = nr_to_write / 10;
         if (!m)
@@ -557,8 +558,8 @@ static int save_image(struct swap_map_handle *handle,
                 if (ret)
                         break;
                 if (!(nr_pages % m))
-                        printk(KERN_INFO "PM: Image saving progress: %3d%%\n",
-                               nr_pages / m * 10);
+                        pr_info("Image saving progress: %3d%%\n",
+                                nr_pages / m * 10);
                 nr_pages++;
         }
         err2 = hib_wait_io(&hb);
@@ -566,7 +567,7 @@ static int save_image(struct swap_map_handle *handle,
         if (!ret)
                 ret = err2;
         if (!ret)
-                printk(KERN_INFO "PM: Image saving done.\n");
+                pr_info("Image saving done\n");
         swsusp_show_speed(start, stop, nr_to_write, "Wrote");
         return ret;
 }
@@ -692,14 +693,14 @@ static int save_image_lzo(struct swap_map_handle *handle,
         page = (void *)__get_free_page(__GFP_RECLAIM | __GFP_HIGH);
         if (!page) {
-                printk(KERN_ERR "PM: Failed to allocate LZO page\n");
+                pr_err("Failed to allocate LZO page\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
 
         data = vmalloc(sizeof(*data) * nr_threads);
         if (!data) {
-                printk(KERN_ERR "PM: Failed to allocate LZO data\n");
+                pr_err("Failed to allocate LZO data\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -708,7 +709,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
         crc = kmalloc(sizeof(*crc), GFP_KERNEL);
         if (!crc) {
-                printk(KERN_ERR "PM: Failed to allocate crc\n");
+                pr_err("Failed to allocate crc\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -726,8 +727,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
                                             "image_compress/%u", thr);
                 if (IS_ERR(data[thr].thr)) {
                         data[thr].thr = NULL;
-                        printk(KERN_ERR
-                               "PM: Cannot start compression threads\n");
+                        pr_err("Cannot start compression threads\n");
                         ret = -ENOMEM;
                         goto out_clean;
                 }
@@ -749,7 +749,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
         crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32");
         if (IS_ERR(crc->thr)) {
                 crc->thr = NULL;
-                printk(KERN_ERR "PM: Cannot start CRC32 thread\n");
+                pr_err("Cannot start CRC32 thread\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -760,10 +760,9 @@ static int save_image_lzo(struct swap_map_handle *handle,
          */
         handle->reqd_free_pages = reqd_free_pages();
 
-        printk(KERN_INFO
-                "PM: Using %u thread(s) for compression.\n"
-                "PM: Compressing and saving image data (%u pages)...\n",
-                nr_threads, nr_to_write);
+        pr_info("Using %u thread(s) for compression\n", nr_threads);
+        pr_info("Compressing and saving image data (%u pages)...\n",
+                nr_to_write);
         m = nr_to_write / 10;
         if (!m)
                 m = 1;
@@ -783,10 +782,8 @@ static int save_image_lzo(struct swap_map_handle *handle,
                                data_of(*snapshot), PAGE_SIZE);
 
                         if (!(nr_pages % m))
-                                printk(KERN_INFO
-                                       "PM: Image saving progress: "
-                                       "%3d%%\n",
-                                       nr_pages / m * 10);
+                                pr_info("Image saving progress: %3d%%\n",
+                                        nr_pages / m * 10);
                         nr_pages++;
                 }
                 if (!off)
@@ -813,15 +810,14 @@ static int save_image_lzo(struct swap_map_handle *handle,
                         ret = data[thr].ret;
 
                         if (ret < 0) {
-                                printk(KERN_ERR "PM: LZO compression failed\n");
+                                pr_err("LZO compression failed\n");
                                 goto out_finish;
                         }
 
                         if (unlikely(!data[thr].cmp_len ||
                                      data[thr].cmp_len >
                                      lzo1x_worst_compress(data[thr].unc_len))) {
-                                printk(KERN_ERR
-                                       "PM: Invalid LZO compressed length\n");
+                                pr_err("Invalid LZO compressed length\n");
                                 ret = -1;
                                 goto out_finish;
                         }
@@ -857,7 +853,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
         if (!ret)
                 ret = err2;
         if (!ret)
-                printk(KERN_INFO "PM: Image saving done.\n");
+                pr_info("Image saving done\n");
         swsusp_show_speed(start, stop, nr_to_write, "Wrote");
 out_clean:
         if (crc) {
@@ -888,7 +884,7 @@ static int enough_swap(unsigned int nr_pages, unsigned int flags)
         unsigned int free_swap = count_swap_pages(root_swap, 1);
         unsigned int required;
 
-        pr_debug("PM: Free swap pages: %u\n", free_swap);
+        pr_debug("Free swap pages: %u\n", free_swap);
 
         required = PAGES_FOR_IO + nr_pages;
         return free_swap > required;
@@ -915,12 +911,12 @@ int swsusp_write(unsigned int flags)
         pages = snapshot_get_image_size();
         error = get_swap_writer(&handle);
         if (error) {
-                printk(KERN_ERR "PM: Cannot get swap writer\n");
+                pr_err("Cannot get swap writer\n");
                 return error;
         }
         if (flags & SF_NOCOMPRESS_MODE) {
                 if (!enough_swap(pages, flags)) {
-                        printk(KERN_ERR "PM: Not enough free swap\n");
+                        pr_err("Not enough free swap\n");
                         error = -ENOSPC;
                         goto out_finish;
                 }
@@ -1068,8 +1064,7 @@ static int load_image(struct swap_map_handle *handle,
         hib_init_batch(&hb);
 
         clean_pages_on_read = true;
-        printk(KERN_INFO "PM: Loading image data pages (%u pages)...\n",
-                nr_to_read);
+        pr_info("Loading image data pages (%u pages)...\n", nr_to_read);
         m = nr_to_read / 10;
         if (!m)
                 m = 1;
@@ -1087,8 +1082,8 @@ static int load_image(struct swap_map_handle *handle,
                 if (ret)
                         break;
                 if (!(nr_pages % m))
-                        printk(KERN_INFO "PM: Image loading progress: %3d%%\n",
-                               nr_pages / m * 10);
+                        pr_info("Image loading progress: %3d%%\n",
+                                nr_pages / m * 10);
                 nr_pages++;
         }
         err2 = hib_wait_io(&hb);
@@ -1096,7 +1091,7 @@ static int load_image(struct swap_map_handle *handle,
         if (!ret)
                 ret = err2;
         if (!ret) {
-                printk(KERN_INFO "PM: Image loading done.\n");
+                pr_info("Image loading done\n");
                 snapshot_write_finalize(snapshot);
                 if (!snapshot_image_loaded(snapshot))
                         ret = -ENODATA;
@@ -1190,14 +1185,14 @@ static int load_image_lzo(struct swap_map_handle *handle,
         page = vmalloc(sizeof(*page) * LZO_MAX_RD_PAGES);
         if (!page) {
-                printk(KERN_ERR "PM: Failed to allocate LZO page\n");
+                pr_err("Failed to allocate LZO page\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
 
         data = vmalloc(sizeof(*data) * nr_threads);
         if (!data) {
-                printk(KERN_ERR "PM: Failed to allocate LZO data\n");
+                pr_err("Failed to allocate LZO data\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -1206,7 +1201,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
         crc = kmalloc(sizeof(*crc), GFP_KERNEL);
         if (!crc) {
-                printk(KERN_ERR "PM: Failed to allocate crc\n");
+                pr_err("Failed to allocate crc\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -1226,8 +1221,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
                                             "image_decompress/%u", thr);
                 if (IS_ERR(data[thr].thr)) {
                         data[thr].thr = NULL;
-                        printk(KERN_ERR
-                               "PM: Cannot start decompression threads\n");
+                        pr_err("Cannot start decompression threads\n");
                         ret = -ENOMEM;
                         goto out_clean;
                 }
@@ -1249,7 +1243,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
         crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32");
         if (IS_ERR(crc->thr)) {
                 crc->thr = NULL;
-                printk(KERN_ERR "PM: Cannot start CRC32 thread\n");
+                pr_err("Cannot start CRC32 thread\n");
                 ret = -ENOMEM;
                 goto out_clean;
         }
@@ -1274,8 +1268,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
                 if (!page[i]) {
                         if (i < LZO_CMP_PAGES) {
                                 ring_size = i;
-                                printk(KERN_ERR
-                                       "PM: Failed to allocate LZO pages\n");
+                                pr_err("Failed to allocate LZO pages\n");
                                 ret = -ENOMEM;
                                 goto out_clean;
                         } else {
@@ -1285,10 +1278,9 @@ static int load_image_lzo(struct swap_map_handle *handle,
         }
         want = ring_size = i;
 
-        printk(KERN_INFO
-                "PM: Using %u thread(s) for decompression.\n"
-                "PM: Loading and decompressing image data (%u pages)...\n",
-                nr_threads, nr_to_read);
+        pr_info("Using %u thread(s) for decompression\n", nr_threads);
+        pr_info("Loading and decompressing image data (%u pages)...\n",
+                nr_to_read);
         m = nr_to_read / 10;
         if (!m)
                 m = 1;
@@ -1348,8 +1340,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
                         if (unlikely(!data[thr].cmp_len ||
                                      data[thr].cmp_len >
                                      lzo1x_worst_compress(LZO_UNC_SIZE))) {
-                                printk(KERN_ERR
-                                       "PM: Invalid LZO compressed length\n");
+                                pr_err("Invalid LZO compressed length\n");
                                 ret = -1;
                                 goto out_finish;
                         }
@@ -1400,16 +1391,14 @@ static int load_image_lzo(struct swap_map_handle *handle,
                         ret = data[thr].ret;
 
                         if (ret < 0) {
-                                printk(KERN_ERR
-                                       "PM: LZO decompression failed\n");
+                                pr_err("LZO decompression failed\n");
                                 goto out_finish;
                         }
 
                         if (unlikely(!data[thr].unc_len ||
                                      data[thr].unc_len > LZO_UNC_SIZE ||
                                      data[thr].unc_len & (PAGE_SIZE - 1))) {
-                                printk(KERN_ERR
-                                       "PM: Invalid LZO uncompressed length\n");
+                                pr_err("Invalid LZO uncompressed length\n");
                                 ret = -1;
                                 goto out_finish;
                         }
@@ -1420,10 +1409,8 @@ static int load_image_lzo(struct swap_map_handle *handle,
                                        data[thr].unc + off, PAGE_SIZE);
 
                                 if (!(nr_pages % m))
-                                        printk(KERN_INFO
-                                               "PM: Image loading progress: "
-                                               "%3d%%\n",
-                                               nr_pages / m * 10);
+                                        pr_info("Image loading progress: %3d%%\n",
+                                                nr_pages / m * 10);
                                 nr_pages++;
 
                                 ret = snapshot_write_next(snapshot);
@@ -1448,15 +1435,14 @@ static int load_image_lzo(struct swap_map_handle *handle,
         }
         stop = ktime_get();
         if (!ret) {
-                printk(KERN_INFO "PM: Image loading done.\n");
+                pr_info("Image loading done\n");
                 snapshot_write_finalize(snapshot);
                 if (!snapshot_image_loaded(snapshot))
                         ret = -ENODATA;
                 if (!ret) {
                         if (swsusp_header->flags & SF_CRC32_MODE) {
                                 if (handle->crc32 != swsusp_header->crc32) {
-                                        printk(KERN_ERR
-                                               "PM: Invalid image CRC32!\n");
+                                        pr_err("Invalid image CRC32!\n");
                                         ret = -ENODATA;
                                 }
                         }
@@ -1513,9 +1499,9 @@ int swsusp_read(unsigned int *flags_p)
         swap_reader_finish(&handle);
 end:
         if (!error)
-                pr_debug("PM: Image successfully loaded\n");
+                pr_debug("Image successfully loaded\n");
         else
-                pr_debug("PM: Error %d resuming\n", error);
+                pr_debug("Error %d resuming\n", error);
         return error;
 }
@@ -1552,13 +1538,13 @@ int swsusp_check(void)
                 if (error)
                         blkdev_put(hib_resume_bdev, FMODE_READ);
                 else
-                        pr_debug("PM: Image signature found, resuming\n");
+                        pr_debug("Image signature found, resuming\n");
         } else {
                 error = PTR_ERR(hib_resume_bdev);
         }
 
         if (error)
-                pr_debug("PM: Image not found (code %d)\n", error);
+                pr_debug("Image not found (code %d)\n", error);
 
         return error;
 }
@@ -1570,7 +1556,7 @@ int swsusp_check(void)
 void swsusp_close(fmode_t mode)
 {
         if (IS_ERR(hib_resume_bdev)) {
-                pr_debug("PM: Image device not initialised\n");
+                pr_debug("Image device not initialised\n");
                 return;
         }
@@ -1594,7 +1580,7 @@ int swsusp_unmark(void)
                                         swsusp_resume_block,
                                         swsusp_header, NULL);
         } else {
-                printk(KERN_ERR "PM: Cannot find swsusp signature!\n");
+                pr_err("Cannot find swsusp signature!\n");
                 error = -ENODEV;
         }
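One detail worth noting in the swap_writer_finish() hunk above is the handling of continuation output: the opening "S" becomes pr_info() (and so gets the "PM: " prefix from pr_fmt()), while the closing "|\n" becomes pr_cont(), which expands to printk(KERN_CONT ...) and appends to the pending line without re-applying pr_fmt(), so the prefix is not repeated mid-line. A minimal, hypothetical sketch of the same pattern (demo_progress is made up; the strings come from this diff):

#define pr_fmt(fmt) "PM: " fmt

#include <linux/printk.h>

static void demo_progress(void)
{
	/* Starts a line: emitted as "PM: S" thanks to pr_fmt(). */
	pr_info("S");

	/* ... the swap signature would be written here in the real code ... */

	/* Appends to the same pending line via KERN_CONT; pr_cont() is
	 * deliberately not routed through pr_fmt(). */
	pr_cont("|\n");
}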