Unverified commit 4deef845, authored by wangguanzhong, committed by GitHub

[MOT]add doc & move code to cpp (#4573)

* add doc & move code to cpp

* add check for track_model and det_model
Parent f03948d8
# C++ Inference Deployment

In PaddlePaddle, the inference engine and the training engine use different underlying optimizations. The inference engine is based on AnalysisPredictor, which is tailored specificallyerences for inference: it applies a series of graph-level optimizations to the model and removes unnecessary memory copies. If you have high performance requirements when deploying a trained model, we provide inference code that is independent of PaddleDetection and easy to integrate. The current C++ deployment supports single-camera, single-class tracking based on FairMOT, together with pedestrian flow statistics and entrance/exit counting.

The deployment consists of three steps:
- Prepare the environment
- Export the inference model
- Run C++ inference
## I. Prepare the Environment

Requirements:
- GCC 8.2
- CUDA 10.1/10.2/11.1; cuDNN 7.6/8.1
- CMake 3.0+
- TensorRT 6/7

NVIDIA Jetson users: follow the [Jetson build guide](../../cpp/Jetson_build.md#jetson环境搭建) to install JetPack first.
### 1. Download the Code

```
git clone https://github.com/PaddlePaddle/PaddleDetection.git
# The C++ deployment code is independent of the rest of the repository
cd PaddleDetection/deploy/pptracking/cpp
```
### 2. Download or Build the PaddlePaddle C++ Inference Library

Choose the inference library that matches your environment from the [C++ inference library download list](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html).

After downloading and extracting it, the `./paddle_inference` directory contains:

```
paddle_inference
├── paddle          # Paddle core libraries and headers
|
├── third_party     # third-party dependencies and headers
|
└── version.txt     # version and build information
```

**Note:** If your environment differs from the officially supported ones (e.g., different CUDA, cuDNN, or TensorRT versions), or if you need to modify the Paddle source code or build a customized library, see the [documentation](https://paddleinference.paddlepaddle.org.cn/user_guides/source_compile.html) to compile the inference library from source.
### 3. Build

The `cmake` command is wrapped in `scripts/build.sh`. Adjust its main parameters to your environment; they are described below:

```
# Whether to use GPU (i.e., CUDA)
WITH_GPU=ON
# Whether to use MKL (otherwise OpenBLAS); must be OFF on TX2
WITH_MKL=OFF
# Whether to integrate TensorRT (only effective when WITH_GPU=ON)
WITH_TENSORRT=ON
# TensorRT include path
TENSORRT_INC_DIR=/path/to/TensorRT/include
# TensorRT lib path
TENSORRT_LIB_DIR=/path/to/TensorRT/lib
# Paddle inference library path
PADDLE_DIR=/path/to/paddle_inference/
# Paddle inference library name
PADDLE_LIB_NAME=libpaddle_inference
# CUDA lib path
CUDA_LIB=/path/to/cuda/lib
# cuDNN lib path
CUDNN_LIB=/path/to/cudnn/lib
# OpenCV path
OPENCV_DIR=/path/to/opencv
```

After setting the parameters, run the `build.sh` script:

```
sh ./scripts/build.sh
```

**Notes:**
1. On the `TX2` platform, `CUDA` and `CUDNN` must be installed via `JetPack`.
2. Download scripts for OpenCV are provided for the Linux and TX2 platforms; on other platforms, please install [OpenCV](https://opencv.org/) yourself.
## II. Export the Inference Model

Export the trained weights to the model format required by the inference library with `tools/export_model.py` in PaddleDetection:

```
python tools/export_model.py -c configs/mot/fairmot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/fairmot_hrnetv2_w18_dlafpn_30e_576x320.pdparams
```

The inference model is exported to `output_inference/fairmot_hrnetv2_w18_dlafpn_30e_576x320` by default and includes `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.

Exported models can also be downloaded directly from the [model list]().
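For reference, the `model.pdmodel`/`model.pdiparams` pair is what the Paddle Inference C++ API loads under the hood of `build/main`. A minimal hedged sketch of loading it directly with the `paddle_infer` API (the model path is the default export location above; error handling omitted):

```cpp
#include <iostream>
#include <string>

#include "paddle_inference_api.h"  // from ./paddle_inference/paddle/include

int main() {
  const std::string model_dir =
      "output_inference/fairmot_hrnetv2_w18_dlafpn_30e_576x320";

  paddle_infer::Config config;
  // Point the config at the exported program and weight files.
  config.SetModel(model_dir + "/model.pdmodel",
                  model_dir + "/model.pdiparams");
  // Optional: run on GPU 0 with an initial 200 MB memory pool.
  config.EnableUseGpu(200, 0);

  // CreatePredictor builds the AnalysisPredictor, applying the graph-level
  // optimizations mentioned at the top of this document.
  auto predictor = paddle_infer::CreatePredictor(config);
  std::cout << (predictor ? "model loaded" : "load failed") << std::endl;
  return predictor ? 0 : 1;
}
```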
## III. C++ Inference

After completing the steps above, run inference with `build/main`. The parameters are listed below:

| Parameter | Description |
| ---- | ---- |
| --track_model_dir | Path to the exported tracking model |
| --video_file | Path to the video file to run on |
| --device | Device to run on, one of `CPU`/`GPU`/`XPU`; defaults to `CPU` |
| --gpu_id | GPU device id to use for inference (default: 0) |
| --run_mode | Inference mode when using GPU, one of fluid/trt_fp32/trt_fp16/trt_int8 (default: fluid) |
| --output_dir | Directory for output images; defaults to output |
| --use_mkldnn | Whether to enable MKL-DNN acceleration for CPU inference |
| --cpu_threads | Number of CPU threads (default: 1) |
| --do_entrance_counting | Whether to count entrance/exit flow (default: false) |
| --save_result | Whether to save the tracking results |

Example 1:
```shell
# Run video `test.mp4` on CPU; the model and the test video have been moved into the `build` directory
./main --track_model_dir=./fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video_file=test.mp4
# The visualized prediction video is saved to output/test.mp4 under the current directory by default
```

Example 2:
```shell
# Run video `test.mp4` on GPU; the model and the test video have been moved into the `build` directory; enable entrance/exit counting and save the tracking results
./main --video_file=test.mp4 --track_model_dir=./fairmot_dla34_30e_1088x608/ --device=gpu --do_entrance_counting=True --save_result=True
# The visualized prediction video is saved to `output/test.mp4` under the current directory by default
# The tracking results are saved to `output/mot_output.txt`
# The counting results are saved to `output/flow_statistic.txt`
```
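As a pointer for downstream processing: the saved tracking file follows the common MOT text convention of one box per line. A hedged parsing sketch, assuming comma-separated `frame,id,x,y,w,h,score,...` rows (a hypothetical layout; confirm it against the file your build actually produces):

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// One row of a MOT-format result file: frame index, track id, box, score.
struct MOTRecord {
  int frame;
  int id;
  float x, y, w, h, score;
};

// Parse a MOT-format text file, assuming "frame,id,x,y,w,h,score,..." rows.
std::vector<MOTRecord> LoadMOTResults(const std::string& path) {
  std::vector<MOTRecord> records;
  std::ifstream in(path);
  std::string line;
  while (std::getline(in, line)) {
    std::istringstream ss(line);
    MOTRecord r;
    char comma;  // consumes the ',' separators between fields
    if (ss >> r.frame >> comma >> r.id >> comma >> r.x >> comma >> r.y >>
        comma >> r.w >> comma >> r.h >> comma >> r.score) {
      records.push_back(r);
    }
  }
  return records;
}

int main() {
  for (const auto& r : LoadMOTResults("output/mot_output.txt")) {
    std::cout << "frame " << r.frame << " id " << r.id << " score " << r.score
              << "\n";
  }
  return 0;
}
```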
```diff
@@ -17,8 +17,8 @@
 // Ths copyright of gatagat/lap is as follows:
 // MIT License
 
-#ifndef DEPLOY_PPTRACKING_INCLUDE_LAPJV_H_
-#define DEPLOY_PPTRACKING_INCLUDE_LAPJV_H_
+#ifndef DEPLOY_PPTRACKING_CPP_INCLUDE_LAPJV_H_
+#define DEPLOY_PPTRACKING_CPP_INCLUDE_LAPJV_H_
 #define LARGE 1000000
 
 #if !defined TRUE
@@ -61,4 +61,4 @@ int lapjv_internal(const cv::Mat &cost,
 }  // namespace PaddleDetection
 
-#endif  // DEPLOY_PPTRACKING_INCLUDE_LAPJV_H_
+#endif  // DEPLOY_PPTRACKING_CPP_INCLUDE_LAPJV_H_
```
```diff
@@ -12,8 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-#ifndef DEPLOY_PPTRACKING_INCLUDE_PIPELINE_H_
-#define DEPLOY_PPTRACKING_INCLUDE_PIPELINE_H_
+#ifndef DEPLOY_PPTRACKING_CPP_INCLUDE_PIPELINE_H_
+#define DEPLOY_PPTRACKING_CPP_INCLUDE_PIPELINE_H_
 
 #include <glog/logging.h>
@@ -139,4 +139,4 @@ class Pipeline {
 }  // namespace PaddleDetection
 
-#endif  // DEPLOY_PPTRACKING_INCLUDE_PIPELINE_H_
+#endif  // DEPLOY_PPTRACKING_CPP_INCLUDE_PIPELINE_H_
```
```diff
@@ -18,10 +18,10 @@
 #include <ctime>
 #include <memory>
+#include <set>
 #include <string>
 #include <utility>
 #include <vector>
-#include <set>
 
 #include <opencv2/core/core.hpp>
 #include <opencv2/highgui/highgui.hpp>
@@ -54,7 +54,6 @@ void FlowStatistic(const MOTResult& results,
                    std::map<int, std::vector<float>>* prev_center,
                    std::vector<std::string>* records);
 
-
 // Save Tracking Results
 void SaveMOTResult(const MOTResult& results,
                    const int frame_id,
...
```
```diff
@@ -14,61 +14,81 @@
 #pragma once
 
-#include <string>
-#include <vector>
+#include <ctime>
 #include <memory>
+#include <string>
 #include <utility>
-#include <ctime>
+#include <vector>
 
 #include <opencv2/core/core.hpp>
-#include <opencv2/imgproc/imgproc.hpp>
 #include <opencv2/highgui/highgui.hpp>
+#include <opencv2/imgproc/imgproc.hpp>
 
 #include "paddle_inference_api.h"  // NOLINT
 
-#include "include/preprocess_op.h"
 #include "include/config_parser.h"
 #include "include/jde_predictor.h"
+#include "include/preprocess_op.h"
 #include "include/sde_predictor.h"
 
-using namespace paddle_infer;
+using namespace paddle_infer;  // NOLINT
 
 namespace PaddleDetection {
 
 class Predictor {
  public:
-  explicit Predictor(const std::string& device="CPU",
-                     const std::string& track_model_dir="",
-                     const std::string& det_model_dir="",
-                     const std::string& reid_model_dir="",
-                     const double threshold=-1.,
-                     const std::string& run_mode="fluid",
-                     const int gpu_id=0,
-                     const bool use_mkldnn=false,
-                     const int cpu_threads=1,
-                     bool trt_calib_mode=false,
-                     const int min_box_area=200) {
+  explicit Predictor(const std::string& device = "CPU",
+                     const std::string& track_model_dir = "",
+                     const std::string& det_model_dir = "",
+                     const std::string& reid_model_dir = "",
+                     const double threshold = -1.,
+                     const std::string& run_mode = "fluid",
+                     const int gpu_id = 0,
+                     const bool use_mkldnn = false,
+                     const int cpu_threads = 1,
+                     bool trt_calib_mode = false,
+                     const int min_box_area = 200) {
     if (track_model_dir.empty() && det_model_dir.empty()) {
       throw "Predictor must receive track_model or det_model!";
+    }
+    if (!track_model_dir.empty() && !det_model_dir.empty()) {
+      throw "Predictor only receive one of track_model or det_model!";
     }
     if (!track_model_dir.empty()) {
-      jde_sct_ = std::make_shared<PaddleDetection::JDEPredictor>(device, track_model_dir, threshold, run_mode, gpu_id, use_mkldnn, cpu_threads, trt_calib_mode, min_box_area);
+      jde_sct_ =
+          std::make_shared<PaddleDetection::JDEPredictor>(device,
+                                                          track_model_dir,
+                                                          threshold,
+                                                          run_mode,
+                                                          gpu_id,
+                                                          use_mkldnn,
+                                                          cpu_threads,
+                                                          trt_calib_mode,
+                                                          min_box_area);
       use_jde_ = true;
     }
     if (!det_model_dir.empty()) {
-      sde_sct_ = std::make_shared<PaddleDetection::SDEPredictor>(device, det_model_dir, reid_model_dir, threshold, run_mode, gpu_id, use_mkldnn, cpu_threads, trt_calib_mode, min_box_area);
+      sde_sct_ = std::make_shared<PaddleDetection::SDEPredictor>(device,
+                                                                 det_model_dir,
+                                                                 reid_model_dir,
+                                                                 threshold,
+                                                                 run_mode,
+                                                                 gpu_id,
+                                                                 use_mkldnn,
+                                                                 cpu_threads,
+                                                                 trt_calib_mode,
+                                                                 min_box_area);
       use_jde_ = false;
     }
   }
 
   // Run predictor
   void Predict(const std::vector<cv::Mat> imgs,
                const double threshold = 0.5,
                MOTResult* result = nullptr,
                std::vector<double>* times = nullptr);
 
  private:
   std::shared_ptr<PaddleDetection::JDEPredictor> jde_sct_;
...
```
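This header is the single entry point for both tracking modes: exactly one of `track_model_dir` (the JDE/FairMOT path) or `det_model_dir` plus `reid_model_dir` (the SDE path) must be set, which is what the newly added check enforces. A hedged usage sketch against this interface (model path and video name are placeholders; `MOTResult` is assumed to come from the project's other headers):

```cpp
#include <vector>

#include <opencv2/opencv.hpp>

#include "include/predictor.h"

int main() {
  // JDE/FairMOT mode: set track_model_dir only; det/reid dirs stay empty.
  PaddleDetection::Predictor tracker(
      "GPU", "./fairmot_hrnetv2_w18_dlafpn_30e_576x320", "", "");

  cv::VideoCapture capture("test.mp4");
  cv::Mat frame;
  while (capture.read(frame)) {
    PaddleDetection::MOTResult result;
    std::vector<double> times;  // per-stage timings, if reported
    // Fills `result` with the boxes and track ids for this frame.
    tracker.Predict({frame}, 0.5, &result, &times);
  }
  return 0;
}
```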
```diff
@@ -105,34 +105,34 @@ void FlowStatistic(const MOTResult& results,
                    const int secs_interval,
                    const bool do_entrance_counting,
                    const int video_fps,
                    const Rect entrance,
                    std::set<int>* id_set,
                    std::set<int>* interval_id_set,
                    std::vector<int>* in_id_list,
                    std::vector<int>* out_id_list,
                    std::map<int, std::vector<float>>* prev_center,
                    std::vector<std::string>* records) {
-  if (frame_id == 0)
-    interval_id_set->clear();
+  if (frame_id == 0) interval_id_set->clear();
 
   if (do_entrance_counting) {
     // Count in and out number:
     // Use horizontal center line as the entrance just for simplification.
     // If a person located in the above the horizontal center line
     // at the previous frame and is in the below the line at the current frame,
     // the in number is increased by one.
     // If a person was in the below the horizontal center line
-    // at the previous frame and locates in the below the line at the current frame,
+    // at the previous frame and locates in the below the line at the current
+    // frame,
     // the out number is increased by one.
-    // TODO: if the entrance is not the horizontal center line,
+    // TODO(qianhui): if the entrance is not the horizontal center line,
     // the counting method should be optimized.
     float entrance_y = entrance.top;
     for (const auto& result : results) {
       float center_x = (result.rects.left + result.rects.right) / 2;
       float center_y = (result.rects.top + result.rects.bottom) / 2;
       int ids = result.ids;
       std::map<int, std::vector<float>>::iterator iter;
       iter = prev_center->find(ids);
       if (iter != prev_center->end()) {
         if (iter->second[1] <= entrance_y && center_y > entrance_y) {
@@ -145,7 +145,7 @@ void FlowStatistic(const MOTResult& results,
           (*prev_center)[ids][1] = center_y;
         } else {
           prev_center->insert(
-            std::pair<int, std::vector<float>>(ids, {center_x, center_y}));
+              std::pair<int, std::vector<float>>(ids, {center_x, center_y}));
         }
       }
     }
@@ -157,8 +157,7 @@ void FlowStatistic(const MOTResult& results,
   }
 
   std::ostringstream os;
-  os << "Frame id: " << frame_id
-     << ", Total count: " << id_set->size();
+  os << "Frame id: " << frame_id << ", Total count: " << id_set->size();
   if (do_entrance_counting) {
     os << ", In count: " << in_id_list->size()
        << ", Out count: " << out_id_list->size();
...
```
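The counting rule described in the comments above boils down to comparing each track's center `y` against the entrance line across consecutive frames. A standalone hedged sketch of just that rule (names are illustrative, not the project's API; the "in" direction follows the visible branch, and the reverse transition is assumed to count as "out"):

```cpp
#include <iostream>
#include <map>

// Illustrative center-line crossing test: a track moving from above the
// entrance line (y <= entrance_y) to below it (y > entrance_y) counts as
// "in"; the reverse transition counts as "out".
enum class Crossing { kNone, kIn, kOut };

Crossing Classify(float prev_y, float cur_y, float entrance_y) {
  if (prev_y <= entrance_y && cur_y > entrance_y) return Crossing::kIn;
  if (prev_y > entrance_y && cur_y <= entrance_y) return Crossing::kOut;
  return Crossing::kNone;
}

int main() {
  const float entrance_y = 240.0f;  // horizontal center line of a 480px frame
  // track id -> previous center y, updated each frame (like prev_center).
  std::map<int, float> prev_center_y = {{7, 230.0f}};
  const int id = 7;
  const float cur_y = 250.0f;  // this frame's center y for track 7
  if (Classify(prev_center_y[id], cur_y, entrance_y) == Crossing::kIn) {
    std::cout << "track " << id << " entered\n";
  }
  prev_center_y[id] = cur_y;
  return 0;
}
```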
```diff
@@ -13,19 +13,18 @@
 // limitations under the License.
 
 #include <sstream>
 // for setprecision
-#include <iomanip>
 #include <chrono>
+#include <iomanip>
 
 #include "include/predictor.h"
 
-using namespace paddle_infer;
+using namespace paddle_infer;  // NOLINT
 
 namespace PaddleDetection {
 
 void Predictor::Predict(const std::vector<cv::Mat> imgs,
                         const double threshold,
                         MOTResult* result,
                         std::vector<double>* times) {
   if (use_jde_) {
     jde_sct_->Predict(imgs, threshold, result, times);
   } else {
...
```