PaddlePaddle / PaddleX
Commit d8fe85c4, authored June 18, 2020 by jack

add comment in paddlex.h
Parent: bcbff526
Showing 2 changed files with 129 additions and 8 deletions:

    deploy/cpp/include/paddlex/paddlex.h   +129  -4
    deploy/cpp/src/paddlex.cpp               +0  -4
deploy/cpp/include/paddlex/paddlex.h

@@ -39,14 +39,44 @@

 namespace PaddleX {

+/*
+ * @brief
+ * This class encapsulates all necessary processing steps of model inference:
+ * image matrix preprocessing, model prediction, and postprocessing of the
+ * results. The entire inference process can be summarized as:
+ * 1. preprocess the image matrix (resize, padding, ...)
+ * 2. run model inference
+ * 3. postprocess the results generated by the inference step
+ *
+ * @example
+ * PaddleX::Model cls_model;
+ * // initialize model configuration
+ * cls_model.Init(cls_model_dir, use_gpu, use_trt, gpu_id, encryption_key);
+ * // define a classification result object
+ * PaddleX::ClsResult cls_result;
+ * // get image matrix from image file
+ * cv::Mat im = cv::imread(image_file_path, 1);
+ * cls_model.predict(im, &cls_result);
+ * */
 class Model {
  public:
+  /*
+   * @brief
+   * This method initializes the model configuration.
+   *
+   * @param model_dir: the directory which contains model.yml
+   * @param use_gpu: whether to use the GPU for inference
+   * @param use_trt: whether to use TensorRT for inference
+   * @param gpu_id: the id of the GPU to use when inferring on GPU
+   * @param key: the encryption key when using an encrypted model
+   * @param batch_size: the batch size used for inference
+   * */
   void Init(const std::string& model_dir,
             bool use_gpu = false,
             bool use_trt = false,
             int gpu_id = 0,
             std::string key = "",
             int batch_size = 1) {
     create_predictor(model_dir, use_gpu, use_trt, gpu_id, key, batch_size);
   }
@@ -55,33 +85,128 @@ class Model {

             bool use_trt = false,
             int gpu_id = 0,
             std::string key = "",
             int batch_size = 1);
+  /*
+   * @brief
+   * This method loads the model configuration, which includes the
+   * transform steps and the label list.
+   *
+   * @param model_dir: the directory which contains model.yml
+   * @return true if the configuration is loaded successfully
+   * */
   bool load_config(const std::string& model_dir);
+  /*
+   * @brief
+   * This method transforms a single image matrix; the result is
+   * returned through the second parameter.
+   *
+   * @param input_im: the single image matrix to be transformed
+   * @param blob: the raw data of the image matrix after transformation
+   * @return true if the image matrix is preprocessed successfully
+   * */
   bool preprocess(const cv::Mat& input_im, ImageBlob* blob);
+  /*
+   * @brief
+   * This method transforms multiple image matrices; the results are
+   * returned through the second parameter.
+   *
+   * @param input_im_batch: a batch of image matrices to be transformed
+   * @param blob_batch: raw data of the batch of image matrices after
+   *                    transformation
+   * @param thread_num: the number of preprocessing threads;
+   *                    each thread preprocesses a single image matrix
+   * @return true if the batch of image matrices is preprocessed successfully
+   * */
   bool preprocess(const std::vector<cv::Mat>& input_im_batch,
                   std::vector<ImageBlob>& blob_batch,
                   int thread_num = 1);
+  /*
+   * @brief
+   * This method runs classification model prediction on a single image
+   * matrix; the result is returned through the second parameter.
+   *
+   * @param im: the single image matrix to be predicted
+   * @param result: the postprocessed classification prediction result
+   * @return true if the prediction succeeds
+   * */
   bool predict(const cv::Mat& im, ClsResult* result);
+  /*
+   * @brief
+   * This method runs classification model prediction on a batch of image
+   * matrices; the results are returned through the second parameter.
+   *
+   * @param im_batch: a batch of image matrices to be predicted
+   * @param results: the postprocessed classification prediction results
+   * @param thread_num: the number of predicting threads; each thread runs
+   *                    prediction on a single image matrix
+   * @return true if the prediction succeeds
+   * */
   bool predict(const std::vector<cv::Mat>& im_batch,
                std::vector<ClsResult>& results,
                int thread_num = 1);
+  /*
+   * @brief
+   * This method runs detection or instance segmentation model prediction
+   * on a single image matrix; the result is returned through the second
+   * parameter.
+   *
+   * @param im: the single image matrix to be predicted
+   * @param result: the postprocessed detection or instance segmentation
+   *                prediction result
+   * @return true if the prediction succeeds
+   * */
   bool predict(const cv::Mat& im, DetResult* result);
+  /*
+   * @brief
+   * This method runs detection or instance segmentation model prediction
+   * on a batch of image matrices; the results are returned through the
+   * second parameter.
+   *
+   * @param im_batch: a batch of image matrices to be predicted
+   * @param result: the postprocessed detection or instance segmentation
+   *                prediction results
+   * @param thread_num: the number of predicting threads; each thread runs
+   *                    prediction on a single image matrix
+   * @return true if the prediction succeeds
+   * */
   bool predict(const std::vector<cv::Mat>& im_batch,
                std::vector<DetResult>& result,
                int thread_num = 1);
+  /*
+   * @brief
+   * This method runs segmentation model prediction on a single image
+   * matrix; the result is returned through the second parameter.
+   *
+   * @param im: the single image matrix to be predicted
+   * @param result: the postprocessed segmentation prediction result
+   * @return true if the prediction succeeds
+   * */
   bool predict(const cv::Mat& im, SegResult* result);
+  /*
+   * @brief
+   * This method runs segmentation model prediction on a batch of image
+   * matrices; the results are returned through the second parameter.
+   *
+   * @param im_batch: a batch of image matrices to be predicted
+   * @param result: the postprocessed segmentation prediction results
+   * @param thread_num: the number of predicting threads; each thread runs
+   *                    prediction on a single image matrix
+   * @return true if the prediction succeeds
+   * */
   bool predict(const std::vector<cv::Mat>& im_batch,
                std::vector<SegResult>& result,
                int thread_num = 1);
+  // model type; one of three: classifier, detector, segmenter
   std::string type;
+  // model name, such as FasterRCNN, YOLOv3, and so on
   std::string name;
   std::map<int, std::string> labels;
+  // transform (preprocessing) pipeline manager
   Transforms transforms_;
+  // preprocessed data of a single input
   ImageBlob inputs_;
+  // preprocessed data of a batch of inputs
   std::vector<ImageBlob> inputs_batch_;
+  // raw data of the prediction results
   std::vector<float> outputs_;
+  // the predictor that runs model inference
   std::unique_ptr<paddle::PaddlePredictor> predictor_;
 };
 }  // namespace PaddleX
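
The new class comment documents a three-step flow (preprocess, infer, postprocess) behind each predict call. The sketch below expands the @example into a small self-contained program. It is a minimal sketch, assuming the PaddleX C++ deployment library is built and linked; the model directory, image path, include path, and the ClsResult field names (category, score) are placeholders or assumptions rather than values taken from this commit.

#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>
#include "paddlex/paddlex.h"  // this header; include path is an assumption

int main() {
  PaddleX::Model model;
  // directory containing model.yml; CPU inference, no TensorRT,
  // no model encryption, batch size 4 (every argument after model_dir
  // has a default, so model.Init("./inference_model") also works)
  model.Init("./inference_model", false, false, 0, "", 4);

  // single-image classification, mirroring the @example in paddlex.h
  cv::Mat im = cv::imread("./images/test.jpg", 1);
  PaddleX::ClsResult cls_result;
  if (model.predict(im, &cls_result)) {
    // field names assumed from the deployment's results header
    std::cout << cls_result.category << " " << cls_result.score << std::endl;
  }

  // batch prediction with two worker threads (the thread_num parameter)
  std::vector<cv::Mat> im_batch = {im, im.clone()};
  std::vector<PaddleX::ClsResult> results;
  model.predict(im_batch, results, 2);
  std::cout << "batch results: " << results.size() << std::endl;
  return 0;
}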
deploy/cpp/src/paddlex.cpp

@@ -101,8 +101,6 @@ bool Model::load_config(const std::string& model_dir) {

 bool Model::preprocess(const cv::Mat& input_im, ImageBlob* blob) {
   cv::Mat im = input_im.clone();
-  int max_h = im.rows;
-  int max_w = im.cols;
   if (!transforms_.Run(&im, blob)) {
     return false;
   }

@@ -113,8 +111,6 @@ bool Model::preprocess(const cv::Mat& input_im, ImageBlob* blob) {

 bool Model::preprocess(const std::vector<cv::Mat>& input_im_batch,
                        std::vector<ImageBlob>& blob_batch,
                        int thread_num) {
   int batch_size = input_im_batch.size();
   bool success = true;
-  int max_h = -1;
-  int max_w = -1;
   thread_num = std::min(thread_num, batch_size);
   #pragma omp parallel for num_threads(thread_num)
   for (int i = 0; i < input_im_batch.size(); ++i) {