Commit 8746725a
Authored June 19, 2018 by fengjiayi

fix errors

Parent: 8ea54e2f
1 changed file with 16 additions and 15 deletions
python/paddle/fluid/io.py (+16, -15)
@@ -407,7 +407,7 @@ def load_vars(executor,
 def load_params(executor, dirname, main_program=None, filename=None):
     """
     This function filters out all parameters from the give `main_program`
-    and then try to load these parameters from the folder `dirname` or
+    and then tries to load these parameters from the folder `dirname` or
     the file `filename`.

     Use the `dirname` to specify the folder where parameters were saved. If
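The hunk above touches the `load_params` docstring. For context, here is a minimal save/restore sketch using the 2018-era `paddle.fluid` API this file belongs to; the toy fc network and the `./my_paddle_model` path are illustrative, not part of the commit.

import paddle.fluid as fluid

# Build a tiny program; the fc layer's weight and bias are the
# "parameters" that load_params() filters out of `main_program`.
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)  # initialize the parameters

param_path = "./my_paddle_model"  # illustrative folder (`dirname`)
# Save all parameters of main_prog, then restore them later.
fluid.io.save_params(executor=exe, dirname=param_path, main_program=main_prog)
fluid.io.load_params(executor=exe, dirname=param_path, main_program=main_prog)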
@@ -586,6 +586,7 @@ def save_inference_model(dirname,

     Examples:
         .. code-block:: python
+
             exe = fluid.Executor(fluid.CPUPlace())
             path = "./infer_model"
             fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
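The docstring example is only partially visible in this hunk. A self-contained sketch of a typical `save_inference_model` call, assuming the fluid API of this era; the toy network, the `prediction` variable, and the output path beyond what the hunk names are illustrative.

import paddle.fluid as fluid

# Toy classifier: 'img' is the feed variable, `prediction` the target var.
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    img = fluid.layers.data(name='img', shape=[784], dtype='float32')
    prediction = fluid.layers.fc(input=img, size=10, act='softmax')

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

path = "./infer_model"
# Prune main_prog down to what is needed to compute `prediction` from
# 'img', then write the program and its parameters under `path`.
fluid.io.save_inference_model(dirname=path,
                              feeded_var_names=['img'],
                              target_vars=[prediction],
                              executor=exe,
                              main_program=main_prog)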
@@ -693,7 +694,7 @@ def load_inference_model(dirname,
                               feed={feed_target_names[0]: tensor_img},
                               fetch_list=fetch_targets)

-            # In this exsample, the inference program is saved in the
+            # In this exsample, the inference program was saved in the
             # "./infer_model/__model__" and parameters were saved in
             # separate files in ""./infer_model".
             # After getting inference program, feed target names and
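For the comment corrected above, this is roughly the full load-and-run flow the docstring example describes, assuming the 2018-era fluid API; `tensor_img` here is random placeholder data.

import numpy as np
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
path = "./infer_model"  # directory written earlier by save_inference_model

# Returns the pruned inference program plus the feed names and fetch
# targets that were recorded when the model was saved.
[inference_program, feed_target_names, fetch_targets] = (
    fluid.io.load_inference_model(dirname=path, executor=exe))

# Random placeholder matching the saved feed variable 'img' (shape [784]).
tensor_img = np.random.rand(1, 784).astype('float32')

results = exe.run(inference_program,
                  feed={feed_target_names[0]: tensor_img},
                  fetch_list=fetch_targets)
print(results[0].shape)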
@@ -804,20 +805,20 @@ def save_checkpoint(executor,
                     trainer_args=None,
                     main_program=None,
                     max_num_checkpoints=3):
-    """"
+    """
     This function filters out all checkpoint variables from the give
-    main_program and then saves these variables to the 'checkpoint_dir'
+    main_program and then saves these variables to the `checkpoint_dir`
     directory.

     In the training precess, we generally save a checkpoint in each
-    iteration. So there might be a lot of checkpoints in the 'checkpoint_dir'
-    . To avoid them taking too much disk space, the
+    iteration. So there might be a lot of checkpoints in the
+    `checkpoint_dir`. To avoid them taking too much disk space, the
     `max_num_checkpoints` are introduced to limit the total number of
     checkpoints. If the number of existing checkpints is greater than
-    the `max_num_checkpoints`, the oldest ones will be scroll deleted.
+    the `max_num_checkpoints`, oldest ones will be scroll deleted.

-    A variable is a checkpoint variable and will be loaded if it meets all
-    the following conditions:
+    A variable is a checkpoint variable and will be saved if it meets
+    all following conditions:
         1. It's persistable.
         2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
         3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
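A sketch of how the `save_checkpoint` described above might be called. Only `executor`, `trainer_args`, `main_program` and `max_num_checkpoints` are visible in this hunk; the `checkpoint_dir` keyword mirrors the `load_checkpoint` signature in the next hunk, `trainer_id=0` is an assumption for the single-trainer case, and the toy network is illustrative.

import paddle.fluid as fluid

# Minimal program whose persistable variables (the fc weight and bias)
# are the checkpoint variables described in the docstring above.
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

ckpt_dir = "./checkpoints"  # illustrative `checkpoint_dir`
# Keep at most 3 checkpoints; older ones are rolled out.
# trainer_id=0 is an assumption, not shown in this hunk.
fluid.io.save_checkpoint(executor=exe,
                         checkpoint_dir=ckpt_dir,
                         trainer_id=0,
                         main_program=main_prog,
                         max_num_checkpoints=3)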
@@ -882,16 +883,16 @@ def load_checkpoint(executor, checkpoint_dir, serial, main_program):
     """
     This function filters out all checkpoint variables from the give
-    main_program and then try to load these variables from the 'checkpoint_dir'
-    directory.
+    main_program and then try to load these variables from the
+    `checkpoint_dir` directory.

     In the training precess, we generally save a checkpoint in each
-    iteration. So there are more than one checkpoint in the 'checkpoint_dir'
-    (each checkpoint has its own sub folder), use 'serial'
-    to specify which serial of checkpoint you would like to
+    iteration. So there are more than one checkpoint in the
+    `checkpoint_dir` (each checkpoint has its own sub folder), use
+    `serial` to specify which serial of checkpoint you would like to
     load.

-    A variable is a checkpoint variable and will be loaded if it meets all
-    the following conditions:
+    A variable is a checkpoint variable and will be loaded if it meets
+    all following conditions:
         1. It's persistable.
         2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
         3. It's name contains no "@GRAD" nor ".trainer_" nor ".block".
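A matching restore sketch, following the signature shown in the hunk header (`executor, checkpoint_dir, serial, main_program`); the serial value and the rebuilt toy program are illustrative.

import paddle.fluid as fluid

# Rebuild the same program that was used when the checkpoint was saved.
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

# `serial` picks which checkpoint sub-folder to restore; 0 is illustrative.
fluid.io.load_checkpoint(executor=exe,
                         checkpoint_dir="./checkpoints",
                         serial=0,
                         main_program=main_prog)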
@@ -962,9 +963,9 @@ def load_persist_vars_without_grad(executor,
                                    has_model_dir=False):
     """
     This function filters out all checkpoint variables from the give
-    program and then try to load these variables from the given directory.
+    program and then tries to load these variables from the given directory.

-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all following
     conditions:
         1. It's persistable.
         2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
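A sketch of calling `load_persist_vars_without_grad` as described above. Only `executor` and `has_model_dir` appear in this hunk; the `dirname` and `program` keyword names are assumptions based on the docstring wording, and the toy network and directory are illustrative.

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

# Load every checkpoint variable of main_prog (persistable, not
# FEED_MINIBATCH/FETCH_LIST/RAW, no "@GRAD"/".trainer_"/".block" in its
# name) from the given directory; keyword names here are assumed.
fluid.io.load_persist_vars_without_grad(executor=exe,
                                        dirname="./my_model_dir",
                                        program=main_prog,
                                        has_model_dir=False)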
@@ -1014,7 +1015,7 @@ def save_persist_vars_without_grad(executor, dirname, program):
     program and then save these variables to a sub-folder '__model__' of
     the given directory.

-    A variable is a checkpoint variable if it meets all the following
+    A variable is a checkpoint variable if it meets all following
     conditions:
         1. It's persistable.
         2. It's type is not FEED_MINIBATCH nor FETCH_LIST nor RAW.
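Finally, a sketch of the saving counterpart, using the signature visible in the hunk header (`executor, dirname, program`); the toy network and directory are illustrative.

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)

# Write the checkpoint variables of main_prog into the '__model__'
# sub-folder of the given directory, as the docstring above describes.
fluid.io.save_persist_vars_without_grad(executor=exe,
                                        dirname="./my_model_dir",
                                        program=main_prog)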