PaddlePaddle / PaddleHub
Commit 77ea5ef8
Authored Dec 31, 2020 by wuzewu

Fix save_inference_model bug

Parent: f67ad5be
Showing 1 changed file with 33 additions and 1 deletion

paddlehub/compat/module/module_v1.py  +33  −1
@@ -98,6 +98,9 @@ class ModuleV1(object):
        log.logger.info('{} pretrained paramaters loaded by PaddleHub'.format(num_param_loaded))

    def _load_extra_info(self):
        if not 'extra_info' in self.desc:
            return
        for key, value in self.desc.extra_info.items():
            self.__dict__['get_{}'.format(key)] = value
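The `_load_extra_info` helper above injects each `extra_info` entry into the instance `__dict__` as a `get_<key>` attribute. A minimal, self-contained sketch of that pattern, with a hypothetical `Desc` stand-in for the module descriptor (not a PaddleHub class):

```python
class Desc:
    """Hypothetical stand-in for a ModuleV1 description object."""

    def __init__(self, extra_info):
        self.extra_info = extra_info

    def __contains__(self, key):
        # Supports the `'extra_info' in self.desc` membership test.
        return key == 'extra_info' and bool(self.extra_info)


class Module:
    def __init__(self, desc):
        self.desc = desc
        self._load_extra_info()

    def _load_extra_info(self):
        # Same pattern as ModuleV1: bail out when no extra info exists,
        # otherwise expose each entry as a get_<key> attribute.
        if not 'extra_info' in self.desc:
            return
        for key, value in self.desc.extra_info.items():
            self.__dict__['get_{}'.format(key)] = value


m = Module(Desc({'signature': 'classification', 'version': 1}))
print(m.get_signature)  # -> classification
print(m.get_version)    # -> 1
```

Because the values land directly in `__dict__`, they behave like ordinary attributes and shadow nothing defined on the class.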
@@ -108,7 +111,7 @@ class ModuleV1(object):
    def _load_model(self):
        model_path = os.path.join(self.directory, 'model')
        exe = paddle.static.Executor(paddle.CPUPlace())
-       self.program, _, _ = paddle.static.load_inference_model(model_path, executor=exe)
+       self.program, _, _ = paddle.fluid.io.load_inference_model(model_path, executor=exe)
        # Clear the callstack since it may leak the privacy of the creator.
        for block in self.program.blocks:
@@ -240,6 +243,9 @@ class ModuleV1(object):
    def assets_path(self):
        return os.path.join(self.directory, 'assets')

    def get_name_prefix(self):
        return self.desc.name_prefix

    @property
    def is_runnable(self):
        '''
...
...
@@ -247,3 +253,29 @@ class ModuleV1(object):
        `hub run` command.
        '''
        return self.default_signature != None

    def save_inference_model(self,
                             dirname: str,
                             model_filename: str = None,
                             params_filename: str = None,
                             combined: bool = False):
        if hasattr(self, 'processor'):
            if hasattr(self.processor, 'save_inference_model'):
                return self.processor.save_inference_model(dirname, model_filename, params_filename, combined)

        if combined:
            model_filename = '__model__' if not model_filename else model_filename
            params_filename = '__params__' if not params_filename else params_filename

        place = paddle.CPUPlace()
        exe = paddle.static.Executor(place)

        feed_dict, fetch_dict, program = self.context(for_test=True, trainable=False)
        paddle.fluid.io.save_inference_model(
            dirname=dirname,
            main_program=program,
            executor=exe,
            feeded_var_names=[var.name for var in list(feed_dict.values())],
            target_vars=list(fetch_dict.values()),
            model_filename=model_filename,
            params_filename=params_filename)
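The new `save_inference_model` does two things before touching Paddle: it delegates to an attached `processor` when one provides its own export, and in `combined` mode it falls back to Paddle's conventional `__model__`/`__params__` filenames unless the caller supplied explicit ones. A minimal sketch of that control flow with the Paddle call stubbed out (the `FakeProcessor` class and the returned tuples are illustrative stand-ins, not PaddleHub APIs):

```python
class FakeProcessor:
    """Illustrative stand-in for a module's custom processor."""

    def save_inference_model(self, dirname, model_filename, params_filename, combined):
        return ('processor', dirname)


class Module:
    def __init__(self, processor=None):
        if processor is not None:
            self.processor = processor

    def save_inference_model(self, dirname, model_filename=None,
                             params_filename=None, combined=False):
        # Delegate when an attached processor implements its own export.
        if hasattr(self, 'processor'):
            if hasattr(self.processor, 'save_inference_model'):
                return self.processor.save_inference_model(
                    dirname, model_filename, params_filename, combined)

        # In combined mode, default to Paddle's conventional filenames
        # unless the caller supplied explicit ones.
        if combined:
            model_filename = '__model__' if not model_filename else model_filename
            params_filename = '__params__' if not params_filename else params_filename

        # The real method calls paddle.fluid.io.save_inference_model here;
        # return the resolved names instead so the flow is observable.
        return ('paddle', model_filename, params_filename)


print(Module(FakeProcessor()).save_inference_model('out'))  # -> ('processor', 'out')
print(Module().save_inference_model('out', combined=True))  # -> ('paddle', '__model__', '__params__')
print(Module().save_inference_model('out'))                 # -> ('paddle', None, None)
```

Note that when `combined` is False both filenames stay `None`, which tells Paddle to write each parameter to its own file inside `dirname`.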