机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Commit 6ae7cbe2
Authored Jun 04, 2018 by tensor-tang

follow comments

Parent: 99d00cce
Showing 2 changed files with 13 additions and 11 deletions (+13, −11)

paddle/fluid/inference/tests/book/CMakeLists.txt         +2  −1
paddle/fluid/inference/tests/book/test_inference_nlp.cc  +11 −10
paddle/fluid/inference/tests/book/CMakeLists.txt

@@ -40,8 +40,9 @@ inference_test(recommender_system)
 inference_test(word2vec)
 
 # This is an ugly workaround to make this test run
 # TODO(TJ): clean me up
 cc_test(test_inference_nlp
         SRCS test_inference_nlp.cc
         DEPS paddle_fluid
-        ARGS --modelpath=${PADDLE_BINARY_DIR}/python/paddle/fluid/tests/book/recognize_digits_mlp.inference.model)
+        ARGS
+        --model_path=${PADDLE_BINARY_DIR}/python/paddle/fluid/tests/book/recognize_digits_mlp.inference.model)
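After this change, invoking the built test binary by hand mirrors the ARGS line above, e.g. test_inference_nlp --model_path=<PADDLE_BINARY_DIR>/python/paddle/fluid/tests/book/recognize_digits_mlp.inference.model (path illustrative). Since gflags only registers the new spelling, the old --modelpath form would no longer be recognized after this commit.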
paddle/fluid/inference/tests/book/test_inference_nlp.cc

@@ -24,8 +24,8 @@ limitations under the License. */
 #include <omp.h>
 #endif
 
-DEFINE_string(modelpath, "", "Directory of the inference model.");
-DEFINE_string(datafile, "", "File of input index data.");
+DEFINE_string(model_path, "", "Directory of the inference model.");
+DEFINE_string(data_file, "", "File of input index data.");
 DEFINE_int32(repeat, 100, "Running the inference program repeat times");
 DEFINE_bool(use_mkldnn, false, "Use MKLDNN to run inference");
 DEFINE_bool(prepare_vars, true, "Prepare variables before executor");
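For context on why the rename fans out across the file: gflags' DEFINE_string(name, default, help) both registers --name on the command line and generates a global std::string called FLAGS_name, so renaming modelpath to model_path changes the flag spelling and every FLAGS_ reference. A minimal standalone sketch of that pattern (illustrative, not part of this commit; assumes gflags is installed):

#include <iostream>
#include <string>
#include "gflags/gflags.h"

// Each DEFINE_string(name, default, help) registers --name on the command
// line and generates a global std::string named FLAGS_name.
DEFINE_string(model_path, "", "Directory of the inference model.");
DEFINE_string(data_file, "", "File of input index data.");

int main(int argc, char* argv[]) {
  // Parse argv in place; known flags are consumed and FLAGS_* are filled in.
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  std::cout << "Model Path: " << FLAGS_model_path << "\n"
            << "Data File:  " << FLAGS_data_file << "\n";
  return 0;
}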
@@ -65,6 +65,7 @@ size_t LoadData(std::vector<paddle::framework::LoDTensor>* out,
       ids.push_back(stoi(field));
     }
     if (ids.size() >= 1024) {
+      // Synced with NLP guys; they will ignore inputs larger than 1024
       continue;
     }
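The hunk above adds a comment documenting the existing filter: samples of 1024 or more word ids are skipped outright. A hedged sketch of that parsing-and-filtering step in isolation (ParseIds is a hypothetical helper; it assumes whitespace-separated ids, while the real LoadData also packs ids into LoDTensors):

#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Parse one line of index data into word ids; return an empty vector for
// sequences of 1024 or more ids, mirroring the `continue` above that
// drops the sample entirely.
std::vector<int64_t> ParseIds(const std::string& line) {
  std::vector<int64_t> ids;
  std::istringstream iss(line);
  std::string field;
  while (iss >> field) {
    ids.push_back(std::stoi(field));
  }
  if (ids.size() >= 1024) {
    ids.clear();
  }
  return ids;
}

int main() {
  const std::vector<int64_t> ids = ParseIds("12 7 4051 9");
  return ids.empty() ? 1 : 0;  // non-empty: this sample would be kept
}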
@@ -142,18 +143,18 @@ void ThreadRunInfer(
 }
 
 TEST(inference, nlp) {
-  if (FLAGS_modelpath.empty()) {
-    LOG(FATAL) << "Usage: ./example --modelpath=path/to/your/model";
+  if (FLAGS_model_path.empty()) {
+    LOG(FATAL) << "Usage: ./example --model_path=path/to/your/model";
   }
-  if (FLAGS_datafile.empty()) {
-    LOG(WARNING) << "Not data file provided, will use dummy data!"
+  if (FLAGS_data_file.empty()) {
+    LOG(WARNING) << "No data file provided, will use dummy data!"
                  << "Note: if you use nlp model, please provide data file.";
   }
-  LOG(INFO) << "Model Path: " << FLAGS_modelpath;
-  LOG(INFO) << "Data File: " << FLAGS_datafile;
+  LOG(INFO) << "Model Path: " << FLAGS_model_path;
+  LOG(INFO) << "Data File: " << FLAGS_data_file;
 
   std::vector<paddle::framework::LoDTensor> datasets;
-  size_t num_total_words = LoadData(&datasets, FLAGS_datafile);
+  size_t num_total_words = LoadData(&datasets, FLAGS_data_file);
   LOG(INFO) << "Number of samples (seq_len<1024): " << datasets.size();
   LOG(INFO) << "Total number of words: " << num_total_words;
@@ -168,7 +169,7 @@ TEST(inference, nlp) {
   // 2. Initialize the inference_program and load parameters
   std::unique_ptr<paddle::framework::ProgramDesc> inference_program;
   inference_program =
-      InitProgram(&executor, scope.get(), FLAGS_modelpath, model_combined);
+      InitProgram(&executor, scope.get(), FLAGS_model_path, model_combined);
   if (FLAGS_use_mkldnn) {
     EnableMKLDNN(inference_program);
   }