Unverified · Commit cd08a245 authored by: Z zhiboniu, committed by: GitHub

fix some problem in lite deploy (#5488)

Parent d859c0e8
@@ -35,16 +35,19 @@ PP-TinyPose is a real-time keypoint detection model optimized by PaddleDetection for mobile devices
## Model Zoo
### Keypoint Detection Model
| Model | Input Size | AP (COCO Val) | Single-Person Inference Time (FP32) | Single-Person Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.tar) |
| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.tar) |
### Pedestrian Detection Model
| Model | Input Size | mAP (COCO Val) | Average Inference Time (FP32) | Average Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.tar) |
| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.tar) |
**Notes**
......
@@ -37,14 +37,14 @@ If you want to deploy it on the mobile devices, you also need:
### Keypoint Detection Model
| Model | Input Size | AP (COCO Val) | Inference Time for Single Person (FP32) | Inference Time for Single Person (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PP-TinyPose | 128*96 | 58.1 | 4.57ms | 3.27ms | [Config](./tinypose_128x96.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_128x96_fp16.tar) |
| PP-TinyPose | 256*192 | 68.8 | 14.07ms | 8.33ms | [Config](./tinypose_256x192.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/tinypose_256x192_fp16.tar) |
### Pedestrian Detection Model
| Model | Input Size | mAP (COCO Val) | Average Inference Time (FP32) | Average Inference Time (FP16) | Config | Model Weights | Deployment Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30ms | 2.37ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_192_pedestrian_fp16.tar) |
| PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26ms | 6.30ms | [Config](../../picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml) | [Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.pdparams) | [Deployment Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian.tar) | [Lite Model (FP16)](https://bj.bcebos.com/v1/paddledet/models/keypoint/picodet_s_320_pedestrian_fp16.tar) |
**Tips**
......
@@ -12,7 +12,12 @@ Paddle Lite is PaddlePaddle's lightweight inference engine, providing efficient inference for mobile and IoT devices
### 1.1 Prepare the Cross-Compilation Environment
The cross-compilation environment is used to compile the C++ demos of Paddle Lite and PaddleDetection.
Multiple development environments are supported; see the corresponding documentation for each environment's build process. Make sure the Java JDK and the Android NDK (R17 < version < R21; other versions are untested) are installed.
Command to set NDK_ROOT:
```shell
export NDK_ROOT=[YOUR_NDK_PATH]/android-ndk-r17c
```
1. [Docker](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#docker)
2. [Linux](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#linux)
@@ -21,7 +26,7 @@ Paddle Lite is PaddlePaddle's lightweight inference engine, providing efficient inference for mobile and IoT devices
### 1.2 Prepare the Inference Library
There are two ways to obtain the inference library:
1. [**Recommended**] Download it directly from the links below (note: the FP32/FP16 model version must match the library):
| Platform | Architecture | Inference Library Download Link |
|-|-|-|
| Android | arm7 | [inference_lite_lib](https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.10-rc/inference_lite_lib.android.armv7.clang.c++_static.with_extra.with_cv.tar.gz) |
@@ -31,7 +36,7 @@ Paddle Lite is PaddlePaddle's lightweight inference engine, providing efficient inference for mobile and IoT devices
**Note**: 1. If you download the inference library from the Paddle-Lite [official documentation](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html#android-toolchain-gcc), be sure to choose a download link built with `with_extra=ON,with_cv=ON`. 2. Only an Android demo is currently provided; for iOS, refer to the [Paddle-Lite IOS demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/master/PaddleLite-ios-demo).
2. Compile Paddle-Lite to obtain the inference library, as follows (the Lite library is updated frequently; if the commands below fail, follow the official Lite repo):
```shell
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
......
@@ -29,7 +29,7 @@
namespace PaddleDetection {

void load_jsonf(std::string jsonfile, const Json::Value& jsondata);

// Inference model configuration parser
class ConfigPaser {
@@ -43,13 +43,14 @@ class ConfigPaser {
    Json::Value config;
    load_jsonf(model_dir + OS_PATH_SEP + cfg + ".json", config);
    // Get model arch : YOLO, SSD, RetinaNet, RCNN, Face, PicoDet, HRNet
    if (config.isMember("arch")) {
      arch_ = config["arch"].as<std::string>();
    } else {
      std::cerr
          << "Please set model arch,"
          << "support value : YOLO, SSD, RetinaNet, RCNN, Face, PicoDet, HRNet."
          << std::endl;
      return false;
    }
......
@@ -16,7 +16,7 @@
namespace PaddleDetection {

void load_jsonf(std::string jsonfile, const Json::Value &jsondata) {
  std::ifstream ifs;
  ifs.open(jsonfile);
......
@@ -43,10 +43,8 @@ void PrintBenchmarkLog(std::vector<double> det_time, int img_num) {
            << std::endl;
  RT_Config["model_dir_det"].as<std::string>().erase(
      RT_Config["model_dir_det"].as<std::string>().find_last_not_of("/") + 1);
  std::cout << "detection model_name: "
            << RT_Config["model_dir_det"].as<std::string>() << std::endl;
  std::cout << "----------------------- Perf info ------------------------"
            << std::endl;
  std::cout << "Total number of predicted data: " << img_num
@@ -59,7 +57,7 @@ void PrintBenchmarkLog(std::vector<double> det_time, int img_num) {
            << ", postprocess_time(ms): " << det_time[2] / img_num << std::endl;
}

void PrintKptsBenchmarkLog(std::vector<double> det_time, int img_num) {
  std::cout << "----------------------- Data info -----------------------"
            << std::endl;
  std::cout << "batch_size_keypoint: "
@@ -69,16 +67,16 @@ void PrintKptsBenchmarkLog(std::vector<double> det_time, int img_num) {
  RT_Config["model_dir_keypoint"].as<std::string>().erase(
      RT_Config["model_dir_keypoint"].as<std::string>().find_last_not_of("/") +
      1);
  std::cout << "keypoint model_name: "
            << RT_Config["model_dir_keypoint"].as<std::string>() << std::endl;
  std::cout << "----------------------- Perf info ------------------------"
            << std::endl;
  std::cout << "Total number of predicted data: " << img_num
            << " and total time spent(ms): "
            << std::accumulate(det_time.begin(), det_time.end(), 0.)
            << std::endl;
  img_num = std::max(1, img_num);
  std::cout << "Average time cost per person:" << std::endl
            << "preproce_time(ms): " << det_time[0] / img_num
            << ", inference_time(ms): " << det_time[1] / img_num
            << ", postprocess_time(ms): " << det_time[2] / img_num << std::endl;
@@ -136,7 +134,7 @@ void PredictImage(const std::vector<std::string> all_img_paths,
                  PaddleDetection::KeyPointDetector* keypoint,
                  const std::string& output_dir = "output") {
  std::vector<double> det_t = {0, 0, 0};
  int steps = ceil(static_cast<float>(all_img_paths.size()) / batch_size_det);
  int kpts_imgs = 0;
  std::vector<double> keypoint_t = {0, 0, 0};
  double midtimecost = 0;
@@ -243,7 +241,7 @@ void PredictImage(const std::vector<std::string> all_img_paths,
      std::chrono::duration<float> midtimediff =
          keypoint_crop_time - keypoint_start_time;
      midtimecost += static_cast<double>(midtimediff.count() * 1000);
      if (imgs_kpts.size() == RT_Config["batch_size_keypoint"].as<int>() ||
          ((i == imsize - 1) && !imgs_kpts.empty())) {
@@ -275,8 +273,8 @@ void PredictImage(const std::vector<std::string> all_img_paths,
        std::string kpts_savepath =
            output_path + "keypoint_" +
            image_file_path.substr(image_file_path.find_last_of('/') + 1);
        cv::Mat kpts_vis_img = VisualizeKptsResult(
            im, result_kpts, colormap_kpts, keypoint->get_threshold());
        cv::imwrite(kpts_savepath, kpts_vis_img, compression_params);
        printf("Visualized output saved as %s\n", kpts_savepath.c_str());
      } else {
@@ -298,23 +296,22 @@ void PredictImage(const std::vector<std::string> all_img_paths,
  PrintBenchmarkLog(det_t, all_img_paths.size());
  if (keypoint) {
    PrintKptsBenchmarkLog(keypoint_t, kpts_imgs);
    PrintTotalIimeLog(
        (det_t[0] + det_t[1] + det_t[2]) / all_img_paths.size(),
        (keypoint_t[0] + keypoint_t[1] + keypoint_t[2]) / all_img_paths.size(),
        midtimecost / all_img_paths.size());
  }
}

int main(int argc, char** argv) {
  std::cout << "Usage: " << argv[0] << " [config_path] [image_dir](option)\n";
  if (argc < 2) {
    std::cout << "Usage: ./main det_runtime_config.json" << std::endl;
    return -1;
  }
  std::string config_path = argv[1];
  std::string img_path = "";
  if (argc >= 3) {
    img_path = argv[2];
  }
......