Unverified commit 511981fb (PaddlePaddle / Paddle)
Authored by Zeng Jinle on Sep 30, 2019; committed via GitHub on Sep 30, 2019.
add deprecated memory optimize doc, test=release/1.6, test=document_fix (#20112)
Parent: 4ad66c77

Showing 2 changed files with 6 additions and 75 deletions (+6, -75):
paddle/fluid/API.spec  (+2, -2)
python/paddle/fluid/transpiler/memory_optimization_transpiler.py  (+4, -73)
paddle/fluid/API.spec @ 511981fb

@@ -42,8 +42,8 @@ paddle.fluid.DistributeTranspiler.get_pserver_programs (ArgSpec(args=['self', 'e
 paddle.fluid.DistributeTranspiler.get_startup_program (ArgSpec(args=['self', 'endpoint', 'pserver_program', 'startup_program'], varargs=None, keywords=None, defaults=(None, None)), ('document', '90a40b80e0106f69262cc08b861c3e39'))
 paddle.fluid.DistributeTranspiler.get_trainer_program (ArgSpec(args=['self', 'wait_port'], varargs=None, keywords=None, defaults=(True,)), ('document', '0e47f020304e2b824e87ff03475c17cd'))
 paddle.fluid.DistributeTranspiler.transpile (ArgSpec(args=['self', 'trainer_id', 'program', 'pservers', 'trainers', 'sync_mode', 'startup_program', 'current_endpoint'], varargs=None, keywords=None, defaults=(None, '127.0.0.1:6174', 1, True, None, '127.0.0.1:6174')), ('document', '418c7e8b268e9be4104f2809e654c2f7'))
-paddle.fluid.memory_optimize (ArgSpec(args=['input_program', 'skip_opt_set', 'print_log', 'level', 'skip_grads'], varargs=None, keywords=None, defaults=(None, False, 0, True)), ('document', '2348247f684bfd5bb9466470f35be064'))
-paddle.fluid.release_memory (ArgSpec(args=['input_program', 'skip_opt_set'], varargs=None, keywords=None, defaults=(None,)), ('document', 'd38c5b8b2b2e0bb19bcf1b581a80a7e4'))
+paddle.fluid.memory_optimize (ArgSpec(args=['input_program', 'skip_opt_set', 'print_log', 'level', 'skip_grads'], varargs=None, keywords=None, defaults=(None, False, 0, True)), ('document', '2be29dc8ecdec9baa7728fb0c7f80e24'))
+paddle.fluid.release_memory (ArgSpec(args=['input_program', 'skip_opt_set'], varargs=None, keywords=None, defaults=(None,)), ('document', '2be29dc8ecdec9baa7728fb0c7f80e24'))
 paddle.fluid.DistributeTranspilerConfig ('paddle.fluid.transpiler.distribute_transpiler.DistributeTranspilerConfig', ('document', 'beac6f89fe97eb8c66a25de5a09c56d2'))
 paddle.fluid.DistributeTranspilerConfig.__init__ (ArgSpec(args=['self'], varargs=None, keywords=None, defaults=None), ('document', '6adf97f83acf6453d4a6a4b1070f3754'))
 paddle.fluid.ParallelExecutor ('paddle.fluid.parallel_executor.ParallelExecutor', ('document', '2b4d2e859f2e0c6161f4fed995f7956d'))
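Each API.spec entry records an ArgSpec plus a 'document' fingerprint of the API's docstring, which is why a docstring-only change such as this one surfaces as a pair of hash updates. Below is a minimal sketch of how such a fingerprint could be checked, assuming the field is an MD5 digest of the docstring; the exact normalization used by Paddle's spec generator is not shown in this page, so treat the helper name and the digest recipe as assumptions.

    import hashlib
    import inspect

    def doc_fingerprint(api):
        """Hypothetical helper: MD5 digest of an API's docstring
        (assumed normalization: raw text, empty string if missing)."""
        doc = inspect.getdoc(api) or ""
        return hashlib.md5(doc.encode("utf-8")).hexdigest()

    # Hypothetical usage: compare against the hash recorded in API.spec above.
    # import paddle.fluid as fluid
    # assert doc_fingerprint(fluid.memory_optimize) == "2be29dc8ecdec9baa7728fb0c7f80e24"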
python/paddle/fluid/transpiler/memory_optimization_transpiler.py @ 511981fb

@@ -21,56 +21,8 @@ def memory_optimize(input_program,
                     level=0,
                     skip_grads=True):
     """
-    | Legacy memory optimization strategy, reduce total memory consumption by reuse variable memory between different operators.
-    | Simple sample to explain the algorithm:
-
-    .. code-block:: python
-
-        c = a + b  # assume this is the last time a is used
-        d = b * c
-
-    | since **a** will not be used anymore after **"c = a + b"**, and the size of **a** and **d** are the same,
-      we can use variable **a** to replace variable **d**, so actually we can optimize the above code to below:
-
-    .. code-block:: python
-
-        c = a + b
-        a = b * c
-
-    | Please notice that, in this legacy design, we are using variable **a** to replace **d** directly, which means
-      after you call this API, some variables may disappear, and some variables may hold unexpected values, like
-      the above case, actually **a** holds the value of **d** after execution.
-    | So to protect important variables from being reused/removed in the optimization, we provide skip_opt_set
-      to allow you specify a variable whitelist.
-      The variables in the skip_opt_set will not be affected by memory_optimize API.
-
-    Note:
-        | **This API is deprecated, please avoid to use it in your new code.**
-        | Does not support operators which will create sub-block like While, IfElse etc.
-
-    Args:
-        input_program(str): Input Program
-        skip_opt_set(set): vars wil be skipped in memory optimze
-        print_log(bool): whether to print debug log.
-        level(int): 0 or 1, 0 means we replace a with b only when a.size == b.size, 1 means we can replace a with b if a.size <= b.size
-
-    Returns:
-        None
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-            main_prog = fluid.Program()
-            startup_prog = fluid.Program()
-            place = fluid.CPUPlace()
-            exe = fluid.Executor(place)
-            exe.run(startup_prog)
-            fluid.memory_optimize(main_prog)
+    This API is deprecated since 1.6. Please do not use it. The better
+    memory optimization strategies are enabled by default.
     """
     logging.warn(
         'Caution! paddle.fluid.memory_optimize() is deprecated '

@@ -89,29 +41,8 @@ def memory_optimize(input_program,
 def release_memory(input_program, skip_opt_set=None):
     """
-    Modify the input program and insert :code:`delete_op` to early drop not used
-    variables. The modification will be performed inplace.
-
-    Notes: This is an experimental API and could be removed in next few
-    releases. Users should not use this API.
-
-    Args:
-        input_program(Program): The program will be inserted :code:`delete_op`.
-        skip_opt_set(set): vars wil be skipped in memory optimze
-
-    Returns:
-        None
-
-    Examples:
-        .. code-block:: python
-
-            import paddle.fluid as fluid
-            # build network
-            # ...
-            # deprecated API
-            fluid.release_memory(fluid.default_main_program())
+    This API is deprecated since 1.6. Please do not use it. The better
+    memory optimization strategies are enabled by default.
     """
     logging.warn('paddle.fluid.release_memory() is deprecated, it would not'
                  ' take any memory release on your program')
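For reference, the removed docstring describes a greedy variable-reuse strategy: once a variable has been read for the last time, its buffer can back a later variable of compatible size (exact match at level 0, larger-or-equal at level 1), which is why variables outside skip_opt_set could end up holding unexpected values. The following is a minimal, framework-free sketch of that idea under those assumptions; the function and parameter names are hypothetical and this is not Paddle's actual optimization pass.

    def plan_reuse(ops, var_size, level=0):
        """Greedy variable-reuse plan over a linear op list.

        ops: list of (output_var, input_vars) tuples in execution order.
        var_size: dict mapping variable name -> size in elements.
        level: 0 reuses only on exact size match; 1 also lets a larger dead
               variable back a smaller new one (mirrors the old `level` arg).
        Returns a dict mapping each output variable to the variable it reuses.
        """
        # Last position at which each variable is read.
        last_use = {}
        for idx, (_, inputs) in enumerate(ops):
            for v in inputs:
                last_use[v] = idx

        free, reuse = [], {}
        for idx, (out, inputs) in enumerate(ops):
            # Try to back `out` with a dead variable of compatible size.
            for dead in free:
                if (level == 0 and var_size[dead] == var_size[out]) or \
                   (level == 1 and var_size[dead] >= var_size[out]):
                    reuse[out] = dead
                    free.remove(dead)
                    break
            # Inputs read here for the last time become reusable afterwards.
            for v in inputs:
                if last_use.get(v) == idx:
                    free.append(v)
        return reuse

    # The docstring's example: c = a + b (last use of a), then d = b * c.
    ops = [("c", ["a", "b"]), ("d", ["b", "c"])]
    sizes = {"a": 4, "b": 4, "c": 4, "d": 4}
    print(plan_reuse(ops, sizes))  # {'d': 'a'} -> d can live in a's buffer

The new docstrings point users away from this pass entirely: since 1.6 the default strategies are meant to cover this, so neither memory_optimize nor release_memory needs to be called by user code.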