PaddlePaddle / PaddleX
Commit 56acdf95 (unverified)
Authored Jul 02, 2020 by Zeyu Chen; committed via GitHub on Jul 02, 2020

Remove batch_size function parameter of create_predictor

Parents: 8b937b4d, c1d794bd
Showing 6 changed files with 10 additions and 17 deletions (+10 −17)
deploy/cpp/cmake/yaml-cpp.cmake      +0 −2
deploy/cpp/demo/classifier.cpp       +1 −2
deploy/cpp/demo/detector.cpp         +1 −2
deploy/cpp/demo/segmenter.cpp        +1 −2
deploy/cpp/include/paddlex/paddlex.h +3 −6
deploy/cpp/src/paddlex.cpp           +4 −3
deploy/cpp/cmake/yaml-cpp.cmake (+0 −2; the surviving context from this capture — the two deleted lines are not shown)

```cmake
find_package(Git REQUIRED)
include(ExternalProject)

message("${CMAKE_BUILD_TYPE}")
```
deploy/cpp/demo/classifier.cpp (+1 −2)

```diff
@@ -57,8 +57,7 @@ int main(int argc, char** argv) {
              FLAGS_use_gpu,
              FLAGS_use_trt,
              FLAGS_gpu_id,
-             FLAGS_key,
-             FLAGS_batch_size);
+             FLAGS_key);
   // run prediction
   double total_running_time_s = 0.0;
```
deploy/cpp/demo/detector.cpp (+1 −2)

```diff
@@ -62,8 +62,7 @@ int main(int argc, char** argv) {
              FLAGS_use_gpu,
              FLAGS_use_trt,
              FLAGS_gpu_id,
-             FLAGS_key,
-             FLAGS_batch_size);
+             FLAGS_key);
   double total_running_time_s = 0.0;
   double total_imread_time_s = 0.0;
```
deploy/cpp/demo/segmenter.cpp (+1 −2)

```diff
@@ -59,8 +59,7 @@ int main(int argc, char** argv) {
              FLAGS_use_gpu,
              FLAGS_use_trt,
              FLAGS_gpu_id,
-             FLAGS_key,
-             FLAGS_batch_size);
+             FLAGS_key);
   double total_running_time_s = 0.0;
   double total_imread_time_s = 0.0;
```
deploy/cpp/include/paddlex/paddlex.h (+3 −6)

```diff
@@ -72,23 +72,20 @@ class Model {
    * @param use_trt: use Tensor RT or not when infering
    * @param gpu_id: the id of gpu when infering with using gpu
    * @param key: the key of encryption when using encrypted model
-   * @param batch_size: batch size of infering
    * */
   void Init(const std::string& model_dir,
             bool use_gpu = false,
             bool use_trt = false,
             int gpu_id = 0,
-            std::string key = "",
-            int batch_size = 1) {
-    create_predictor(model_dir, use_gpu, use_trt, gpu_id, key, batch_size);
+            std::string key = "") {
+    create_predictor(model_dir, use_gpu, use_trt, gpu_id, key);
   }

   void create_predictor(const std::string& model_dir,
                         bool use_gpu = false,
                         bool use_trt = false,
                         int gpu_id = 0,
-                        std::string key = "",
-                        int batch_size = 1);
+                        std::string key = "");
   /*
    * @brief
```
deploy/cpp/src/paddlex.cpp (+4 −3)

```diff
@@ -22,8 +22,7 @@ void Model::create_predictor(const std::string& model_dir,
                              bool use_gpu,
                              bool use_trt,
                              int gpu_id,
-                             std::string key,
-                             int batch_size) {
+                             std::string key) {
   paddle::AnalysisConfig config;
   std::string model_file = model_dir + OS_PATH_SEP + "__model__";
   std::string params_file = model_dir + OS_PATH_SEP + "__params__";
@@ -76,7 +75,6 @@ void Model::create_predictor(const std::string& model_dir,
                             false /* use_calib_mode*/);
   }
   predictor_ = std::move(CreatePaddlePredictor(config));
-  inputs_batch_.assign(batch_size, ImageBlob());
 }

 bool Model::load_config(const std::string& yaml_input) {
@@ -192,6 +190,7 @@ bool Model::predict(const std::vector<cv::Mat>& im_batch,
                  "to function predict()!" << std::endl;
     return false;
   }
+  inputs_batch_.assign(im_batch.size(), ImageBlob());
   // preprocess input images
   if (!preprocess(im_batch, &inputs_batch_, thread_num)) {
     std::cerr << "Preprocess failed!" << std::endl;
@@ -356,6 +355,7 @@ bool Model::predict(const std::vector<cv::Mat>& im_batch,
     return false;
   }
+  inputs_batch_.assign(im_batch.size(), ImageBlob());
   int batch_size = im_batch.size();
   // preprocess input images
   if (!preprocess(im_batch, &inputs_batch_, thread_num)) {
@@ -637,6 +637,7 @@ bool Model::predict(const std::vector<cv::Mat>& im_batch,
   }
   // preprocess input images
+  inputs_batch_.assign(im_batch.size(), ImageBlob());
   if (!preprocess(im_batch, &inputs_batch_, thread_num)) {
     std::cerr << "Preprocess failed!" << std::endl;
     return false;
```