diff --git a/.github/ISSUE_TEMPLATE/1_data.md b/.github/ISSUE_TEMPLATE/1_data.md new file mode 100644 index 0000000000000000000000000000000000000000..05627aa353d1cf06074445d2bb5344d94727fedf --- /dev/null +++ b/.github/ISSUE_TEMPLATE/1_data.md @@ -0,0 +1,6 @@ +--- +name: 1. 数据类问题 +about: 数据标注、格式转换等问题 +--- + +说明数据类型(图像分类、目标检测、实例分割或语义分割) diff --git a/.github/ISSUE_TEMPLATE/2_train.md b/.github/ISSUE_TEMPLATE/2_train.md new file mode 100644 index 0000000000000000000000000000000000000000..489159731bfef42773dffa15cd30582d5c53f992 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/2_train.md @@ -0,0 +1,6 @@ +--- +name: 2. 模型训练 +about: 模型训练中的问题 +--- + +如模型训练出错,建议贴上模型训练代码,以便开发人员分析,并快速响应 diff --git a/.github/ISSUE_TEMPLATE/3_deploy.md b/.github/ISSUE_TEMPLATE/3_deploy.md new file mode 100644 index 0000000000000000000000000000000000000000..d012d10125c957e702f3877dc087b7331baceb0a --- /dev/null +++ b/.github/ISSUE_TEMPLATE/3_deploy.md @@ -0,0 +1,6 @@ +--- +name: 3. 模型部署 +about: 模型部署相关问题,包括C++、Python、Paddle Lite等 +--- + +说明您的部署环境,部署需求,模型类型和应用场景等,便于开发人员快速响应。 diff --git a/.github/ISSUE_TEMPLATE/4_gui.md b/.github/ISSUE_TEMPLATE/4_gui.md new file mode 100644 index 0000000000000000000000000000000000000000..780c8b903b9137f72037e311213443c8678f61d9 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/4_gui.md @@ -0,0 +1,6 @@ +--- +name: 4. PaddleX GUI使用问题 +about: Paddle GUI客户端使用问题 +--- + +PaddleX GUI: https://www.paddlepaddle.org.cn/paddle/paddleX (请在ISSUE内容中保留此行内容) diff --git a/.github/ISSUE_TEMPLATE/5_other.md b/.github/ISSUE_TEMPLATE/5_other.md new file mode 100644 index 0000000000000000000000000000000000000000..8ddfe49b544621918355f5c114c1124bdecc8ef3 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/5_other.md @@ -0,0 +1,4 @@ +--- +name: 5. 其它类型问题 +about: 所有问题都可以在这里提 +--- diff --git a/README.md b/README.md new file mode 100644 index 0000000000000000000000000000000000000000..add63566f2632a0e535504a94da0605ce0618bc7 --- /dev/null +++ b/README.md @@ -0,0 +1,125 @@ + + + +

+ PaddleX +

+ + +

PaddleX -- PaddlePaddle End-to-End Development Toolkit, helping developers bring real-world industrial projects to production quickly with a low-code workflow

+ +[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE) +[![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases) +![python version](https://img.shields.io/badge/python-3.6+-orange.svg) +![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg) +![QQGroup](https://img.shields.io/badge/QQ_Group-1045148026-52B6EF?style=social&logo=tencent-qq&logoColor=000&logoWidth=20) + +集成飞桨智能视觉领域**图像分类**、**目标检测**、**语义分割**、**实例分割**任务能力,将深度学习开发全流程从**数据准备**、**模型训练与优化**到**多端部署**端到端打通,并提供**统一任务API接口**及**图形化开发界面Demo**。开发者无需分别安装不同套件,以**低代码**的形式即可快速完成飞桨全流程开发。 + +**PaddleX** 经过**质检**、**安防**、**巡检**、**遥感**、**零售**、**医疗**等十多个行业实际应用场景验证,沉淀产业实际经验,**并提供丰富的案例实践教程**,全程助力开发者产业实践落地。 + + + +## 安装 + +**PaddleX提供两种开发模式,满足用户的不同需求:** + +1. **Python开发模式:** + + 通过简洁易懂的Python API,在兼顾功能全面性、开发灵活性、集成方便性的基础上,给开发者最流畅的深度学习开发体验。
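   To make the Python development mode concrete, here is a minimal sketch of an image-classification run with the unified task API (the prerequisites and the `pip install` command follow below). It loosely mirrors the 10-minute quick-start tutorial linked in the documentation section; the dataset paths, model choice, and hyperparameters are illustrative only, not part of this change:

   ```
   import paddlex as pdx
   from paddlex.cls import transforms

   # Preprocessing / augmentation pipelines (Transforms)
   train_transforms = transforms.Compose([
       transforms.RandomCrop(crop_size=224),
       transforms.RandomHorizontalFlip(),
       transforms.Normalize()
   ])
   eval_transforms = transforms.Compose([
       transforms.ResizeByShort(short_size=256),
       transforms.CenterCrop(crop_size=224),
       transforms.Normalize()
   ])

   # ImageNet-style file lists; paths here are placeholders
   train_dataset = pdx.datasets.ImageNet(
       data_dir='my_dataset',
       file_list='my_dataset/train_list.txt',
       label_list='my_dataset/labels.txt',
       transforms=train_transforms,
       shuffle=True)
   eval_dataset = pdx.datasets.ImageNet(
       data_dir='my_dataset',
       file_list='my_dataset/val_list.txt',
       label_list='my_dataset/labels.txt',
       transforms=eval_transforms)

   # Pick a pretrained model and train; VisualDL logs are written to save_dir
   model = pdx.cls.MobileNetV3_small_ssld(num_classes=len(train_dataset.labels))
   model.train(
       num_epochs=10,
       train_dataset=train_dataset,
       train_batch_size=32,
       eval_dataset=eval_dataset,
       lr_decay_epochs=[4, 6, 8],
       save_dir='output/mobilenetv3_small_ssld',
       use_vdl=True)

   # Load the best checkpoint and predict on a single image
   model = pdx.load_model('output/mobilenetv3_small_ssld/best_model')
   print(model.predict('my_dataset/some_image.jpg'))
   ```

   Detection and segmentation follow the same pattern, swapping in the corresponding transforms, dataset, and model classes (for example `pdx.det.YOLOv3` or `pdx.seg.DeepLabv3p`).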
+ + **前置依赖** +> - paddlepaddle >= 1.8.0 +> - python >= 3.6 +> - cython +> - pycocotools + +``` +pip install paddlex -i https://mirror.baidu.com/pypi/simple +``` +详细安装方法请参考[PaddleX安装](https://paddlex.readthedocs.io/zh_CN/develop/install.html) + + +2. **Padlde GUI模式:** + + 无代码开发的可视化客户端,应用Paddle API实现,使开发者快速进行产业项目验证,并为用户开发自有深度学习软件/应用提供参照。 + +- 前往[PaddleX官网](https://www.paddlepaddle.org.cn/paddle/paddlex),申请下载Paddle X GUI一键绿色安装包。 + +- 前往[PaddleX GUI使用教程](./docs/gui/how_to_use.md)了解PaddleX GUI使用详情。 + + + +## 产品模块说明 + +- **数据准备**:兼容ImageNet、VOC、COCO等常用数据协议,同时与Labelme、精灵标注助手、[EasyData智能数据服务平台](https://ai.baidu.com/easydata/)等无缝衔接,全方位助力开发者更快完成数据准备工作。 + +- **数据预处理及增强**:提供极简的图像预处理和增强方法--Transforms,适配imgaug图像增强库,支持**上百种数据增强策略**,是开发者快速缓解小样本数据训练的问题。 + +- **模型训练**:集成[PaddleClas](https://github.com/PaddlePaddle/PaddleClas), [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)视觉开发套件,提供大量精选的、经过产业实践的高质量预训练模型,使开发者更快实现工业级模型效果。 + +- **模型调优**:内置模型可解释性模块、[VisualDL](https://github.com/PaddlePaddle/VisualDL)可视化分析工具。使开发者可以更直观的理解模型的特征提取区域、训练过程参数变化,从而快速优化模型。 + +- **多端安全部署**:内置[PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)模型压缩工具和**模型加密部署模块**,与飞桨原生预测库Paddle Inference及高性能端侧推理引擎[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite) 无缝打通,使开发者快速实现模型的多端、高性能、安全部署。 + + + +## 完整使用文档及API说明 + +- [完整PaddleX在线使用文档目录](https://paddlex.readthedocs.io/zh_CN/develop/index.html) + +- [10分钟快速上手系列教程](https://paddlex.readthedocs.io/zh_CN/develop/quick_start.html) +- [PaddleX模型训练教程集合](https://paddlex.readthedocs.io/zh_CN/develop/train/index.html) +- [PaddleX API接口说明](https://paddlex.readthedocs.io/zh_CN/develop/apis/index.html) + +### 在线项目示例 + +为了使开发者更快掌握PaddleX API,我们创建了一系列完整的示例教程,您可通过AIStudio一站式开发平台,快速在线运行PaddleX的项目。 + +- [PaddleX快速上手CV模型训练](https://aistudio.baidu.com/aistudio/projectdetail/450925) +- [PaddleX快速上手——MobileNetV3-ssld 化妆品分类](https://aistudio.baidu.com/aistudio/projectdetail/450220) +- [PaddleX快速上手——Faster-RCNN AI识虫](https://aistudio.baidu.com/aistudio/projectdetail/439888) +- [PaddleX快速上手——DeepLabv3+ 视盘分割](https://aistudio.baidu.com/aistudio/projectdetail/440197) + + + +## 全流程产业应用案例 + +(continue to be updated) + +* 工业巡检: + * [工业表计读数](https://paddlex.readthedocs.io/zh_CN/develop/examples/meter_reader.html) + +* 工业质检: + * 电池隔膜缺陷检测(Coming Soon) + +* [人像分割](https://paddlex.readthedocs.io/zh_CN/develop/examples/human_segmentation.html) + + + +## [FAQ](./docs/gui/faq.md) + + + +## 交流与反馈 + +- 项目官网:https://www.paddlepaddle.org.cn/paddle/paddlex +- PaddleX用户交流群:1045148026 (手机QQ扫描如下二维码快速加入) + ![](./docs/gui/images/QR.jpg) + + + +## 更新日志 + +> [历史版本及更新内容](https://paddlex.readthedocs.io/zh_CN/develop/change_log.html) + +- 2020.07.13 v1.1.0 +- 2020.07.12 v1.0.8 +- 2020.05.20 v1.0.0 +- 2020.05.17 v0.1.8 + + + +## 贡献代码 + +我们非常欢迎您为PaddleX贡献代码或者提供使用建议。如果您可以修复某个issue或者增加一个新功能,欢迎给我们提交Pull Requests。 diff --git a/deploy/README.md b/deploy/README.md index 515b1a16878efe8b1d18622aa811a335a285cdac..7fe3219882c3c8d863824829baf6742b74759d2f 100644 --- a/deploy/README.md +++ b/deploy/README.md @@ -1,7 +1,16 @@ -# 多端安全部署 +# 模型部署 本目录为PaddleX模型部署代码,编译和使用教程参考: -- [服务端部署(支持Python部署、C++部署、模型加密部署)](../docs/tutorials/deploy/deploy_server/) -- [OpenVINO部署](../docs/tutorials/deploy/deploy_openvino.md) -- [移动端部署](../docs/tutorials/deploy/deploy_lite.md) +- [服务端部署](../docs/deploy/server/) + - [Python部署](../docs/deploy/server/python.md) + - [C++部署](../docs/deploy/server/cpp/) + - [Windows平台部署](../docs/deploy/server/cpp/windows.md) + - [Linux平台部署](../docs/deploy/server/cpp/linux.md) 
+ - [模型加密部署](../docs/deploy/server/encryption.md) +- [Nvidia Jetson开发板部署](../docs/deploy/nvidia-jetson.md) +- [移动端部署](../docs/deploy/paddlelite/) + - [模型压缩](../docs/deploy/paddlelite/slim) + - [模型量化](../docs/deploy/paddlelite/slim/quant.md) + - [模型裁剪](../docs/deploy/paddlelite/slim/prune.md) + - [Android平台](../docs/deploy/paddlelite/android.md) diff --git a/deploy/cpp/CMakeLists.txt b/deploy/cpp/CMakeLists.txt index ceaa448253f18bb8ea55423ed323aeb3cb459fdc..349afa2cae5bf40721cafdf38bbf28ddd621beeb 100644 --- a/deploy/cpp/CMakeLists.txt +++ b/deploy/cpp/CMakeLists.txt @@ -3,7 +3,11 @@ project(PaddleX CXX C) option(WITH_MKL "Compile demo with MKL/OpenBlas support,defaultuseMKL." ON) option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." ON) -option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." OFF) +if (NOT WIN32) + option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." OFF) +else() + option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." ON) +endif() option(WITH_TENSORRT "Compile demo with TensorRT." OFF) option(WITH_ENCRYPTION "Compile demo with encryption tool." OFF) @@ -46,7 +50,9 @@ endmacro() if (WITH_ENCRYPTION) -add_definitions( -DWITH_ENCRYPTION=${WITH_ENCRYPTION}) + if (NOT (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "aarch64")) + add_definitions( -DWITH_ENCRYPTION=${WITH_ENCRYPTION}) + endif() endif() if (WITH_MKL) @@ -57,8 +63,10 @@ if (NOT DEFINED PADDLE_DIR OR ${PADDLE_DIR} STREQUAL "") message(FATAL_ERROR "please set PADDLE_DIR with -DPADDLE_DIR=/path/paddle_influence_dir") endif() -if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "") +if (NOT (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "aarch64")) + if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "") message(FATAL_ERROR "please set OPENCV_DIR with -DOPENCV_DIR=/path/opencv") + endif() endif() include_directories("${CMAKE_SOURCE_DIR}/") @@ -106,10 +114,17 @@ if (WIN32) find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/build/ NO_DEFAULT_PATH) unset(OpenCV_DIR CACHE) else () - find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/share/OpenCV NO_DEFAULT_PATH) + if (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "aarch64") # x86_64 aarch64 + set(OpenCV_INCLUDE_DIRS "/usr/include/opencv4") + file(GLOB OpenCV_LIBS /usr/lib/aarch64-linux-gnu/libopencv_*${CMAKE_SHARED_LIBRARY_SUFFIX}) + message("OpenCV libs: ${OpenCV_LIBS}") + else() + find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/share/OpenCV NO_DEFAULT_PATH) + endif() include_directories("${PADDLE_DIR}/paddle/include") link_directories("${PADDLE_DIR}/paddle/lib") endif () + include_directories(${OpenCV_INCLUDE_DIRS}) if (WIN32) @@ -255,9 +270,11 @@ endif() if(WITH_ENCRYPTION) if(NOT WIN32) + if (NOT (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "aarch64")) include_directories("${ENCRYPTION_DIR}/include") link_directories("${ENCRYPTION_DIR}/lib") set(DEPS ${DEPS} ${ENCRYPTION_DIR}/lib/libpmodel-decrypt${CMAKE_SHARED_LIBRARY_SUFFIX}) + endif() else() include_directories("${ENCRYPTION_DIR}/include") link_directories("${ENCRYPTION_DIR}/lib") @@ -271,6 +288,7 @@ if (NOT WIN32) endif() set(DEPS ${DEPS} ${OpenCV_LIBS}) + add_library(paddlex_inference SHARED src/visualize src/transforms.cpp src/paddlex.cpp) ADD_DEPENDENCIES(paddlex_inference ext-yaml-cpp) target_link_libraries(paddlex_inference ${DEPS}) @@ -287,29 +305,61 @@ add_executable(segmenter demo/segmenter.cpp src/transforms.cpp src/paddlex.cpp s ADD_DEPENDENCIES(segmenter ext-yaml-cpp) target_link_libraries(segmenter ${DEPS}) +add_executable(video_classifier 
demo/video_classifier.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp) +ADD_DEPENDENCIES(video_classifier ext-yaml-cpp) +target_link_libraries(video_classifier ${DEPS}) + +add_executable(video_detector demo/video_detector.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp) +ADD_DEPENDENCIES(video_detector ext-yaml-cpp) +target_link_libraries(video_detector ${DEPS}) + +add_executable(video_segmenter demo/video_segmenter.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp) +ADD_DEPENDENCIES(video_segmenter ext-yaml-cpp) +target_link_libraries(video_segmenter ${DEPS}) + + if (WIN32 AND WITH_MKL) add_custom_command(TARGET classifier POST_BUILD - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll ) add_custom_command(TARGET detector POST_BUILD - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll ) add_custom_command(TARGET segmenter POST_BUILD - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./mklml.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./libiomp5md.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different 
${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll + ) + add_custom_command(TARGET video_classifier POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll + ) + add_custom_command(TARGET video_detector POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll + ) + add_custom_command(TARGET video_segmenter POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./paddlex_inference/Release/mklml.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./paddlex_inference/Release/libiomp5md.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./paddlex_inference/Release/mkldnn.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/mklml.dll ./release/mklml.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mklml/lib/libiomp5md.dll ./release/libiomp5md.dll - COMMAND ${CMAKE_COMMAND} -E copy_if_different ${PADDLE_DIR}/third_party/install/mkldnn/lib/mkldnn.dll ./release/mkldnn.dll ) # for encryption if (EXISTS "${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll") @@ -325,6 +375,18 @@ if (WIN32 AND WITH_MKL) COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./pmodel-decrypt.dll COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./release/pmodel-decrypt.dll ) + add_custom_command(TARGET video_classifier POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./pmodel-decrypt.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./release/pmodel-decrypt.dll + ) + add_custom_command(TARGET video_detector POST_BUILD + COMMAND 
${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./pmodel-decrypt.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./release/pmodel-decrypt.dll + ) + add_custom_command(TARGET video_segmenter POST_BUILD + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./pmodel-decrypt.dll + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ENCRYPTION_DIR}/lib/pmodel-decrypt.dll ./release/pmodel-decrypt.dll + ) endif() endif() diff --git a/deploy/cpp/demo/classifier.cpp b/deploy/cpp/demo/classifier.cpp index 6fd354d3f9cb6a366f0efb0b31e7ae073a90b4ad..cf3bb5ccf64c43ec42d59a9b73fdced6b50b8dc5 100644 --- a/deploy/cpp/demo/classifier.cpp +++ b/deploy/cpp/demo/classifier.cpp @@ -37,7 +37,6 @@ DEFINE_int32(batch_size, 1, "Batch size of infering"); DEFINE_int32(thread_num, omp_get_num_procs(), "Number of preprocessing threads"); -DEFINE_bool(use_ir_optim, true, "use ir optimization"); int main(int argc, char** argv) { // Parsing command-line @@ -52,18 +51,15 @@ int main(int argc, char** argv) { return -1; } - // 加载模型 + // Load model PaddleX::Model model; model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, - FLAGS_key, - FLAGS_use_ir_optim); + FLAGS_key); - // 进行预测 - double total_running_time_s = 0.0; - double total_imread_time_s = 0.0; + // Predict int imgs = 1; if (FLAGS_image_list != "") { std::ifstream inf(FLAGS_image_list); @@ -71,7 +67,7 @@ int main(int argc, char** argv) { std::cerr << "Fail to open file " << FLAGS_image_list << std::endl; return -1; } - // 多batch预测 + // Mini-batch predict std::string image_path; std::vector image_paths; while (getline(inf, image_path)) { @@ -79,8 +75,7 @@ int main(int argc, char** argv) { } imgs = image_paths.size(); for (int i = 0; i < image_paths.size(); i += FLAGS_batch_size) { - auto start = system_clock::now(); - // 读图像 + // Read image int im_vec_size = std::min(static_cast(image_paths.size()), i + FLAGS_batch_size); std::vector im_vec(im_vec_size - i); @@ -91,19 +86,7 @@ int main(int argc, char** argv) { for (int j = i; j < im_vec_size; ++j) { im_vec[j - i] = std::move(cv::imread(image_paths[j], 1)); } - auto imread_end = system_clock::now(); model.predict(im_vec, &results, thread_num); - - auto imread_duration = duration_cast(imread_end - start); - total_imread_time_s += static_cast(imread_duration.count()) * - microseconds::period::num / - microseconds::period::den; - - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - microseconds::period::den; for (int j = i; j < im_vec_size; ++j) { std::cout << "Path:" << image_paths[j] << ", predict label: " << results[j - i].category @@ -112,23 +95,12 @@ int main(int argc, char** argv) { } } } else { - auto start = system_clock::now(); PaddleX::ClsResult result; cv::Mat im = cv::imread(FLAGS_image, 1); model.predict(im, &result); - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - microseconds::period::den; std::cout << "Predict label: " << result.category << ", label_id:" << result.category_id << ", score: " << result.score << std::endl; } - std::cout << "Total running time: " << total_running_time_s - << " s, average running time: " << total_running_time_s / imgs - << " s/img, total read img time: " << total_imread_time_s - << " s, average read time: " << 
total_imread_time_s / imgs - << " s/img, batch_size = " << FLAGS_batch_size << std::endl; return 0; } diff --git a/deploy/cpp/demo/detector.cpp b/deploy/cpp/demo/detector.cpp index 54f93d2995fa24af73bba2855b6b26466129fa20..ef7fd782715bef5d9cc1dae43c87ceaa123e914f 100644 --- a/deploy/cpp/demo/detector.cpp +++ b/deploy/cpp/demo/detector.cpp @@ -43,10 +43,9 @@ DEFINE_double(threshold, DEFINE_int32(thread_num, omp_get_num_procs(), "Number of preprocessing threads"); -DEFINE_bool(use_ir_optim, true, "use ir optimization"); int main(int argc, char** argv) { - // 解析命令行参数 + // Parsing command-line google::ParseCommandLineFlags(&argc, &argv, true); if (FLAGS_model_dir == "") { @@ -57,21 +56,16 @@ int main(int argc, char** argv) { std::cerr << "--image or --image_list need to be defined" << std::endl; return -1; } - // 加载模型 + // Load model PaddleX::Model model; model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, - FLAGS_key, - FLAGS_use_ir_optim); - - double total_running_time_s = 0.0; - double total_imread_time_s = 0.0; + FLAGS_key); int imgs = 1; - auto colormap = PaddleX::GenerateColorMap(model.labels.size()); std::string save_dir = "output"; - // 进行预测 + // Predict if (FLAGS_image_list != "") { std::ifstream inf(FLAGS_image_list); if (!inf) { @@ -85,7 +79,6 @@ int main(int argc, char** argv) { } imgs = image_paths.size(); for (int i = 0; i < image_paths.size(); i += FLAGS_batch_size) { - auto start = system_clock::now(); int im_vec_size = std::min(static_cast(image_paths.size()), i + FLAGS_batch_size); std::vector im_vec(im_vec_size - i); @@ -96,18 +89,8 @@ int main(int argc, char** argv) { for (int j = i; j < im_vec_size; ++j) { im_vec[j - i] = std::move(cv::imread(image_paths[j], 1)); } - auto imread_end = system_clock::now(); model.predict(im_vec, &results, thread_num); - auto imread_duration = duration_cast(imread_end - start); - total_imread_time_s += static_cast(imread_duration.count()) * - microseconds::period::num / - microseconds::period::den; - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - microseconds::period::den; - // 输出结果目标框 + // Output predicted bounding boxes for (int j = 0; j < im_vec_size - i; ++j) { for (int k = 0; k < results[j].boxes.size(); ++k) { std::cout << "image file: " << image_paths[i + j] << ", "; @@ -121,10 +104,10 @@ int main(int argc, char** argv) { << results[j].boxes[k].coordinate[3] << ")" << std::endl; } } - // 可视化 + // Visualize results for (int j = 0; j < im_vec_size - i; ++j) { cv::Mat vis_img = PaddleX::Visualize( - im_vec[j], results[j], model.labels, colormap, FLAGS_threshold); + im_vec[j], results[j], model.labels, FLAGS_threshold); std::string save_path = PaddleX::generate_save_path(FLAGS_save_dir, image_paths[i + j]); cv::imwrite(save_path, vis_img); @@ -132,16 +115,10 @@ int main(int argc, char** argv) { } } } else { - auto start = system_clock::now(); PaddleX::DetResult result; cv::Mat im = cv::imread(FLAGS_image, 1); model.predict(im, &result); - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - microseconds::period::den; - // 输出结果目标框 + // Output predicted bounding boxes for (int i = 0; i < result.boxes.size(); ++i) { std::cout << "image file: " << FLAGS_image << std::endl; std::cout << ", predict label: " << result.boxes[i].category @@ -153,9 +130,9 @@ int main(int argc, char** argv) { << 
result.boxes[i].coordinate[3] << ")" << std::endl; } - // 可视化 + // Visualize results cv::Mat vis_img = - PaddleX::Visualize(im, result, model.labels, colormap, FLAGS_threshold); + PaddleX::Visualize(im, result, model.labels, FLAGS_threshold); std::string save_path = PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_image); cv::imwrite(save_path, vis_img); @@ -163,11 +140,5 @@ int main(int argc, char** argv) { std::cout << "Visualized output saved as " << save_path << std::endl; } - std::cout << "Total running time: " << total_running_time_s - << " s, average running time: " << total_running_time_s / imgs - << " s/img, total read img time: " << total_imread_time_s - << " s, average read img time: " << total_imread_time_s / imgs - << " s, batch_size = " << FLAGS_batch_size << std::endl; - return 0; } diff --git a/deploy/cpp/demo/segmenter.cpp b/deploy/cpp/demo/segmenter.cpp index 90adb5aea860bf5ad9f6cb9019990a188c37f871..d13a328f5beecc90fe9257a4f32ee63a8fe609a5 100644 --- a/deploy/cpp/demo/segmenter.cpp +++ b/deploy/cpp/demo/segmenter.cpp @@ -39,10 +39,9 @@ DEFINE_int32(batch_size, 1, "Batch size of infering"); DEFINE_int32(thread_num, omp_get_num_procs(), "Number of preprocessing threads"); -DEFINE_bool(use_ir_optim, false, "use ir optimization"); int main(int argc, char** argv) { - // 解析命令行参数 + // Parsing command-line google::ParseCommandLineFlags(&argc, &argv, true); if (FLAGS_model_dir == "") { @@ -54,20 +53,15 @@ int main(int argc, char** argv) { return -1; } - // 加载模型 + // Load model PaddleX::Model model; model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, - FLAGS_key, - FLAGS_use_ir_optim); - - double total_running_time_s = 0.0; - double total_imread_time_s = 0.0; + FLAGS_key); int imgs = 1; - auto colormap = PaddleX::GenerateColorMap(model.labels.size()); - // 进行预测 + // Predict if (FLAGS_image_list != "") { std::ifstream inf(FLAGS_image_list); if (!inf) { @@ -81,7 +75,6 @@ int main(int argc, char** argv) { } imgs = image_paths.size(); for (int i = 0; i < image_paths.size(); i += FLAGS_batch_size) { - auto start = system_clock::now(); int im_vec_size = std::min(static_cast(image_paths.size()), i + FLAGS_batch_size); std::vector im_vec(im_vec_size - i); @@ -92,21 +85,11 @@ int main(int argc, char** argv) { for (int j = i; j < im_vec_size; ++j) { im_vec[j - i] = std::move(cv::imread(image_paths[j], 1)); } - auto imread_end = system_clock::now(); model.predict(im_vec, &results, thread_num); - auto imread_duration = duration_cast(imread_end - start); - total_imread_time_s += static_cast(imread_duration.count()) * - microseconds::period::num / - microseconds::period::den; - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - microseconds::period::den; - // 可视化 + // Visualize results for (int j = 0; j < im_vec_size - i; ++j) { cv::Mat vis_img = - PaddleX::Visualize(im_vec[j], results[j], model.labels, colormap); + PaddleX::Visualize(im_vec[j], results[j], model.labels); std::string save_path = PaddleX::generate_save_path(FLAGS_save_dir, image_paths[i + j]); cv::imwrite(save_path, vis_img); @@ -114,28 +97,16 @@ int main(int argc, char** argv) { } } } else { - auto start = system_clock::now(); PaddleX::SegResult result; cv::Mat im = cv::imread(FLAGS_image, 1); model.predict(im, &result); - auto end = system_clock::now(); - auto duration = duration_cast(end - start); - total_running_time_s += static_cast(duration.count()) * - microseconds::period::num / - 
microseconds::period::den; - // 可视化 - cv::Mat vis_img = PaddleX::Visualize(im, result, model.labels, colormap); + // Visualize results + cv::Mat vis_img = PaddleX::Visualize(im, result, model.labels); std::string save_path = PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_image); cv::imwrite(save_path, vis_img); result.clear(); std::cout << "Visualized output saved as " << save_path << std::endl; } - std::cout << "Total running time: " << total_running_time_s - << " s, average running time: " << total_running_time_s / imgs - << " s/img, total read img time: " << total_imread_time_s - << " s, average read img time: " << total_imread_time_s / imgs - << " s, batch_size = " << FLAGS_batch_size << std::endl; - return 0; } diff --git a/deploy/cpp/demo/video_classifier.cpp b/deploy/cpp/demo/video_classifier.cpp new file mode 100644 index 0000000000000000000000000000000000000000..96be867d40800455184b7938dc829e8a0b8f8390 --- /dev/null +++ b/deploy/cpp/demo/video_classifier.cpp @@ -0,0 +1,186 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include +#include + +#include +#include // NOLINT +#include +#include +#include +#include +#include + +#include "include/paddlex/paddlex.h" +#include "include/paddlex/visualize.h" + +#if defined(__arm__) || defined(__aarch64__) +#include +#endif + +using namespace std::chrono; // NOLINT + +DEFINE_string(model_dir, "", "Path of inference model"); +DEFINE_bool(use_gpu, false, "Infering with GPU or CPU"); +DEFINE_bool(use_trt, false, "Infering with TensorRT"); +DEFINE_int32(gpu_id, 0, "GPU card id"); +DEFINE_string(key, "", "key of encryption"); +DEFINE_bool(use_camera, false, "Infering with Camera"); +DEFINE_int32(camera_id, 0, "Camera id"); +DEFINE_string(video_path, "", "Path of input video"); +DEFINE_bool(show_result, false, "show the result of each frame with a window"); +DEFINE_bool(save_result, true, "save the result of each frame to a video"); +DEFINE_string(save_dir, "output", "Path to save visualized image"); + +int main(int argc, char** argv) { + // Parsing command-line + google::ParseCommandLineFlags(&argc, &argv, true); + + if (FLAGS_model_dir == "") { + std::cerr << "--model_dir need to be defined" << std::endl; + return -1; + } + if (FLAGS_video_path == "" & FLAGS_use_camera == false) { + std::cerr << "--video_path or --use_camera need to be defined" << std::endl; + return -1; + } + + // Load model + PaddleX::Model model; + model.Init(FLAGS_model_dir, + FLAGS_use_gpu, + FLAGS_use_trt, + FLAGS_gpu_id, + FLAGS_key); + + // Open video + cv::VideoCapture capture; + if (FLAGS_use_camera) { + capture.open(FLAGS_camera_id); + if (!capture.isOpened()) { + std::cout << "Can not open the camera " + << FLAGS_camera_id << "." + << std::endl; + return -1; + } + } else { + capture.open(FLAGS_video_path); + if (!capture.isOpened()) { + std::cout << "Can not open the video " + << FLAGS_video_path << "." 
+ << std::endl; + return -1; + } + } + + // Create a VideoWriter + cv::VideoWriter video_out; + std::string video_out_path; + if (FLAGS_save_result) { + // Get video information: resolution, fps + int video_width = static_cast(capture.get(CV_CAP_PROP_FRAME_WIDTH)); + int video_height = static_cast(capture.get(CV_CAP_PROP_FRAME_HEIGHT)); + int video_fps = static_cast(capture.get(CV_CAP_PROP_FPS)); + int video_fourcc; + if (FLAGS_use_camera) { + video_fourcc = 828601953; + } else { + video_fourcc = static_cast(capture.get(CV_CAP_PROP_FOURCC)); + } + + if (FLAGS_use_camera) { + time_t now = time(0); + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, + std::to_string(now) + ".mp4"); + } else { + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_video_path); + } + video_out.open(video_out_path.c_str(), + video_fourcc, + video_fps, + cv::Size(video_width, video_height), + true); + if (!video_out.isOpened()) { + std::cout << "Create video writer failed!" << std::endl; + return -1; + } + } + + PaddleX::ClsResult result; + cv::Mat frame; + int key; + while (capture.read(frame)) { + if (FLAGS_show_result || FLAGS_use_camera) { + key = cv::waitKey(1); + // When pressing `ESC`, then exit program and result video is saved + if (key == 27) { + break; + } + } else if (frame.empty()) { + break; + } + // Begin to predict + model.predict(frame, &result); + // Visualize results + cv::Mat vis_img = frame.clone(); + auto colormap = PaddleX::GenerateColorMap(model.labels.size()); + int c1 = colormap[3 * result.category_id + 0]; + int c2 = colormap[3 * result.category_id + 1]; + int c3 = colormap[3 * result.category_id + 2]; + cv::Scalar text_color = cv::Scalar(c1, c2, c3); + std::string text = result.category; + text += std::to_string(static_cast(result.score * 100)) + "%"; + int font_face = cv::FONT_HERSHEY_SIMPLEX; + double font_scale = 0.5f; + float thickness = 0.5; + cv::Size text_size = + cv::getTextSize(text, font_face, font_scale, thickness, nullptr); + cv::Point origin; + origin.x = frame.cols / 2; + origin.y = frame.rows / 2; + cv::Rect text_back = cv::Rect(origin.x, + origin.y - text_size.height, + text_size.width, + text_size.height); + cv::rectangle(vis_img, text_back, text_color, -1); + cv::putText(vis_img, + text, + origin, + font_face, + font_scale, + cv::Scalar(255, 255, 255), + thickness); + if (FLAGS_show_result || FLAGS_use_camera) { + cv::imshow("video_classifier", vis_img); + } + if (FLAGS_save_result) { + video_out.write(vis_img); + } + std::cout << "Predict label: " << result.category + << ", label_id:" << result.category_id + << ", score: " << result.score << std::endl; + } + capture.release(); + if (FLAGS_save_result) { + video_out.release(); + std::cout << "Visualized output saved as " << video_out_path << std::endl; + } + if (FLAGS_show_result || FLAGS_use_camera) { + cv::destroyAllWindows(); + } + return 0; +} diff --git a/deploy/cpp/demo/video_detector.cpp b/deploy/cpp/demo/video_detector.cpp new file mode 100644 index 0000000000000000000000000000000000000000..ee4d5bdb138d03020042e60d41ded0ca1efde46d --- /dev/null +++ b/deploy/cpp/demo/video_detector.cpp @@ -0,0 +1,159 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include +#include + +#include +#include // NOLINT +#include +#include +#include +#include +#include + +#include "include/paddlex/paddlex.h" +#include "include/paddlex/visualize.h" + +#if defined(__arm__) || defined(__aarch64__) +#include +#endif + +using namespace std::chrono; // NOLINT + +DEFINE_string(model_dir, "", "Path of inference model"); +DEFINE_bool(use_gpu, false, "Infering with GPU or CPU"); +DEFINE_bool(use_trt, false, "Infering with TensorRT"); +DEFINE_int32(gpu_id, 0, "GPU card id"); +DEFINE_bool(use_camera, false, "Infering with Camera"); +DEFINE_int32(camera_id, 0, "Camera id"); +DEFINE_string(video_path, "", "Path of input video"); +DEFINE_bool(show_result, false, "show the result of each frame with a window"); +DEFINE_bool(save_result, true, "save the result of each frame to a video"); +DEFINE_string(key, "", "key of encryption"); +DEFINE_string(save_dir, "output", "Path to save visualized image"); +DEFINE_double(threshold, + 0.5, + "The minimum scores of target boxes which are shown"); + +int main(int argc, char** argv) { + // Parsing command-line + google::ParseCommandLineFlags(&argc, &argv, true); + + if (FLAGS_model_dir == "") { + std::cerr << "--model_dir need to be defined" << std::endl; + return -1; + } + if (FLAGS_video_path == "" & FLAGS_use_camera == false) { + std::cerr << "--video_path or --use_camera need to be defined" << std::endl; + return -1; + } + // Load model + PaddleX::Model model; + model.Init(FLAGS_model_dir, + FLAGS_use_gpu, + FLAGS_use_trt, + FLAGS_gpu_id, + FLAGS_key); + // Open video + cv::VideoCapture capture; + if (FLAGS_use_camera) { + capture.open(FLAGS_camera_id); + if (!capture.isOpened()) { + std::cout << "Can not open the camera " + << FLAGS_camera_id << "." + << std::endl; + return -1; + } + } else { + capture.open(FLAGS_video_path); + if (!capture.isOpened()) { + std::cout << "Can not open the video " + << FLAGS_video_path << "." + << std::endl; + return -1; + } + } + + // Create a VideoWriter + cv::VideoWriter video_out; + std::string video_out_path; + if (FLAGS_save_result) { + // Get video information: resolution, fps + int video_width = static_cast(capture.get(CV_CAP_PROP_FRAME_WIDTH)); + int video_height = static_cast(capture.get(CV_CAP_PROP_FRAME_HEIGHT)); + int video_fps = static_cast(capture.get(CV_CAP_PROP_FPS)); + int video_fourcc; + if (FLAGS_use_camera) { + video_fourcc = 828601953; + } else { + video_fourcc = static_cast(capture.get(CV_CAP_PROP_FOURCC)); + } + + if (FLAGS_use_camera) { + time_t now = time(0); + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, + std::to_string(now) + ".mp4"); + } else { + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_video_path); + } + video_out.open(video_out_path.c_str(), + video_fourcc, + video_fps, + cv::Size(video_width, video_height), + true); + if (!video_out.isOpened()) { + std::cout << "Create video writer failed!" 
<< std::endl; + return -1; + } + } + + PaddleX::DetResult result; + cv::Mat frame; + int key; + while (capture.read(frame)) { + if (FLAGS_show_result || FLAGS_use_camera) { + key = cv::waitKey(1); + // When pressing `ESC`, then exit program and result video is saved + if (key == 27) { + break; + } + } else if (frame.empty()) { + break; + } + // Begin to predict + model.predict(frame, &result); + // Visualize results + cv::Mat vis_img = + PaddleX::Visualize(frame, result, model.labels, FLAGS_threshold); + if (FLAGS_show_result || FLAGS_use_camera) { + cv::imshow("video_detector", vis_img); + } + if (FLAGS_save_result) { + video_out.write(vis_img); + } + result.clear(); + } + capture.release(); + if (FLAGS_save_result) { + std::cout << "Visualized output saved as " << video_out_path << std::endl; + video_out.release(); + } + if (FLAGS_show_result || FLAGS_use_camera) { + cv::destroyAllWindows(); + } + return 0; +} diff --git a/deploy/cpp/demo/video_segmenter.cpp b/deploy/cpp/demo/video_segmenter.cpp new file mode 100644 index 0000000000000000000000000000000000000000..6a835117cd1434b5f26e0fb660e6fe07ef56e607 --- /dev/null +++ b/deploy/cpp/demo/video_segmenter.cpp @@ -0,0 +1,157 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +#include +#include + +#include +#include // NOLINT +#include +#include +#include +#include +#include +#include +#include "include/paddlex/paddlex.h" +#include "include/paddlex/visualize.h" + +#if defined(__arm__) || defined(__aarch64__) +#include +#endif + +using namespace std::chrono; // NOLINT + +DEFINE_string(model_dir, "", "Path of inference model"); +DEFINE_bool(use_gpu, false, "Infering with GPU or CPU"); +DEFINE_bool(use_trt, false, "Infering with TensorRT"); +DEFINE_int32(gpu_id, 0, "GPU card id"); +DEFINE_string(key, "", "key of encryption"); +DEFINE_bool(use_camera, false, "Infering with Camera"); +DEFINE_int32(camera_id, 0, "Camera id"); +DEFINE_string(video_path, "", "Path of input video"); +DEFINE_bool(show_result, false, "show the result of each frame with a window"); +DEFINE_bool(save_result, true, "save the result of each frame to a video"); +DEFINE_string(save_dir, "output", "Path to save visualized image"); + +int main(int argc, char** argv) { + // Parsing command-line + google::ParseCommandLineFlags(&argc, &argv, true); + + if (FLAGS_model_dir == "") { + std::cerr << "--model_dir need to be defined" << std::endl; + return -1; + } + if (FLAGS_video_path == "" & FLAGS_use_camera == false) { + std::cerr << "--video_path or --use_camera need to be defined" << std::endl; + return -1; + } + + // Load model + PaddleX::Model model; + model.Init(FLAGS_model_dir, + FLAGS_use_gpu, + FLAGS_use_trt, + FLAGS_gpu_id, + FLAGS_key); + // Open video + cv::VideoCapture capture; + if (FLAGS_use_camera) { + capture.open(FLAGS_camera_id); + if (!capture.isOpened()) { + std::cout << "Can not open the camera " + << FLAGS_camera_id << "." 
+ << std::endl; + return -1; + } + } else { + capture.open(FLAGS_video_path); + if (!capture.isOpened()) { + std::cout << "Can not open the video " + << FLAGS_video_path << "." + << std::endl; + return -1; + } + } + + + // Create a VideoWriter + cv::VideoWriter video_out; + std::string video_out_path; + if (FLAGS_save_result) { + // Get video information: resolution, fps + int video_width = static_cast(capture.get(CV_CAP_PROP_FRAME_WIDTH)); + int video_height = static_cast(capture.get(CV_CAP_PROP_FRAME_HEIGHT)); + int video_fps = static_cast(capture.get(CV_CAP_PROP_FPS)); + int video_fourcc; + if (FLAGS_use_camera) { + video_fourcc = 828601953; + } else { + video_fourcc = static_cast(capture.get(CV_CAP_PROP_FOURCC)); + } + + if (FLAGS_use_camera) { + time_t now = time(0); + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, + std::to_string(now) + ".mp4"); + } else { + video_out_path = + PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_video_path); + } + video_out.open(video_out_path.c_str(), + video_fourcc, + video_fps, + cv::Size(video_width, video_height), + true); + if (!video_out.isOpened()) { + std::cout << "Create video writer failed!" << std::endl; + return -1; + } + } + + PaddleX::SegResult result; + cv::Mat frame; + int key; + while (capture.read(frame)) { + if (FLAGS_show_result || FLAGS_use_camera) { + key = cv::waitKey(1); + // When pressing `ESC`, then exit program and result video is saved + if (key == 27) { + break; + } + } else if (frame.empty()) { + break; + } + // Begin to predict + model.predict(frame, &result); + // Visualize results + cv::Mat vis_img = PaddleX::Visualize(frame, result, model.labels); + if (FLAGS_show_result || FLAGS_use_camera) { + cv::imshow("video_segmenter", vis_img); + } + if (FLAGS_save_result) { + video_out.write(vis_img); + } + result.clear(); + } + capture.release(); + if (FLAGS_save_result) { + video_out.release(); + std::cout << "Visualized output saved as " << video_out_path << std::endl; + } + if (FLAGS_show_result || FLAGS_use_camera) { + cv::destroyAllWindows(); + } + return 0; +} diff --git a/deploy/cpp/include/paddlex/paddlex.h b/deploy/cpp/include/paddlex/paddlex.h index e0d0569341198d0a0b2a8c6d0637c3f5a61e1f3f..00b1a05ac8127d403dd7325f3357ece75ec23a58 100644 --- a/deploy/cpp/include/paddlex/paddlex.h +++ b/deploy/cpp/include/paddlex/paddlex.h @@ -175,7 +175,7 @@ class Model { * @return true if predict successfully * */ bool predict(const std::vector &im_batch, - std::vector *result, + std::vector *results, int thread_num = 1); /* @@ -201,7 +201,7 @@ class Model { * @return true if predict successfully * */ bool predict(const std::vector &im_batch, - std::vector *result, + std::vector *results, int thread_num = 1); // model type, include 3 type: classifier, detector, segmenter diff --git a/deploy/cpp/include/paddlex/transforms.h b/deploy/cpp/include/paddlex/transforms.h index c1ffd7e1de8a28f88a571e7b9d029585806cf59d..7e936dc17f4b6e58cdb8cdc36639173ccc24177c 100644 --- a/deploy/cpp/include/paddlex/transforms.h +++ b/deploy/cpp/include/paddlex/transforms.h @@ -214,6 +214,12 @@ class Padding : public Transform { height_ = item["target_size"].as>()[1]; } } + if (item["im_padding_value"].IsDefined()) { + im_value_ = item["im_padding_value"].as>(); + } + else { + im_value_ = {0, 0, 0}; + } } virtual bool Run(cv::Mat* im, ImageBlob* data); @@ -221,6 +227,7 @@ class Padding : public Transform { int coarsest_stride_ = -1; int width_ = 0; int height_ = 0; + std::vector im_value_; }; /* * @brief diff --git 
a/deploy/cpp/include/paddlex/visualize.h b/deploy/cpp/include/paddlex/visualize.h index 9ddba5387b427c60645db7c96a54bcba76fa9898..873cea10ad5f725a4a4c477559de0b659f94a7b5 100644 --- a/deploy/cpp/include/paddlex/visualize.h +++ b/deploy/cpp/include/paddlex/visualize.h @@ -23,9 +23,9 @@ #else // Linux/Unix #include // #include -#ifdef __arm__ // for arm -#include -#include +#if defined(__arm__) || defined(__aarch64__) // for arm +#include +#include #else #include #include @@ -65,13 +65,12 @@ std::vector GenerateColorMap(int num_class); * @param img: initial image matrix * @param results: the detection result * @param labels: label map - * @param colormap: visualization color map + * @param threshold: minimum confidence to display * @return visualized image matrix * */ cv::Mat Visualize(const cv::Mat& img, const DetResult& results, const std::map& labels, - const std::vector& colormap, float threshold = 0.5); /* @@ -81,13 +80,11 @@ cv::Mat Visualize(const cv::Mat& img, * @param img: initial image matrix * @param results: the detection result * @param labels: label map - * @param colormap: visualization color map * @return visualized image matrix * */ cv::Mat Visualize(const cv::Mat& img, const SegResult& result, - const std::map& labels, - const std::vector& colormap); + const std::map& labels); /* * @brief diff --git a/deploy/cpp/scripts/bootstrap.sh b/deploy/cpp/scripts/bootstrap.sh index 283d75928a68a507d852ec61eb89e115e581146f..bb9756204e9e610365f67aa37dc78d1b5eaf80b8 100644 --- a/deploy/cpp/scripts/bootstrap.sh +++ b/deploy/cpp/scripts/bootstrap.sh @@ -7,12 +7,12 @@ if [ ! -d "./paddlex-encryption" ]; then fi # download pre-compiled opencv lib -OPENCV_URL=https://paddleseg.bj.bcebos.com/deploy/docker/opencv3gcc4.8.tar.bz2 -if [ ! -d "./deps/opencv3gcc4.8" ]; then +OPENCV_URL=https://bj.bcebos.com/paddleseg/deploy/opencv3.4.6gcc4.8ffmpeg.tar.gz2 +if [ ! -d "./deps/opencv3.4.6gcc4.8ffmpeg/" ]; then mkdir -p deps cd deps wget -c ${OPENCV_URL} - tar xvfj opencv3gcc4.8.tar.bz2 - rm -rf opencv3gcc4.8.tar.bz2 + tar xvfj opencv3.4.6gcc4.8ffmpeg.tar.gz2 + rm -rf opencv3.4.6gcc4.8ffmpeg.tar.gz2 cd .. fi diff --git a/deploy/cpp/scripts/build.sh b/deploy/cpp/scripts/build.sh index e87d7bf4797f1833d88379df0587733958639b06..6d6ad25b24170a27639f9b1d651888c4027dbeed 100644 --- a/deploy/cpp/scripts/build.sh +++ b/deploy/cpp/scripts/build.sh @@ -24,7 +24,7 @@ ENCRYPTION_DIR=$(pwd)/paddlex-encryption # OPENCV 路径, 如果使用自带预编译版本可不修改 sh $(pwd)/scripts/bootstrap.sh # 下载预编译版本的opencv -OPENCV_DIR=$(pwd)/deps/opencv3gcc4.8/ +OPENCV_DIR=$(pwd)/deps/opencv3.4.6gcc4.8ffmpeg/ # 以下无需改动 rm -rf build @@ -42,4 +42,4 @@ cmake .. \ -DCUDNN_LIB=${CUDNN_LIB} \ -DENCRYPTION_DIR=${ENCRYPTION_DIR} \ -DOPENCV_DIR=${OPENCV_DIR} -make +make -j16 diff --git a/deploy/cpp/scripts/jetson_build.sh b/deploy/cpp/scripts/jetson_build.sh new file mode 100644 index 0000000000000000000000000000000000000000..bb2957e351900872189773eeaa41a75d36ec3471 --- /dev/null +++ b/deploy/cpp/scripts/jetson_build.sh @@ -0,0 +1,32 @@ +# 是否使用GPU(即是否使用 CUDA) +WITH_GPU=OFF +# 使用MKL or openblas +WITH_MKL=OFF +# 是否集成 TensorRT(仅WITH_GPU=ON 有效) +WITH_TENSORRT=OFF +# TensorRT 的路径,如果需要集成TensorRT,需修改为您实际安装的TensorRT路径 +TENSORRT_DIR=/root/projects/TensorRT/ +# Paddle 预测库路径, 请修改为您实际安装的预测库路径 +PADDLE_DIR=/root/projects/fluid_inference +# Paddle 的预测库是否使用静态库来编译 +# 使用TensorRT时,Paddle的预测库通常为动态库 +WITH_STATIC_LIB=OFF +# CUDA 的 lib 路径 +CUDA_LIB=/usr/local/cuda/lib64 +# CUDNN 的 lib 路径 +CUDNN_LIB=/usr/lib/aarch64-linux-gnu + +# 以下无需改动 +rm -rf build +mkdir -p build +cd build +cmake .. 
\ + -DWITH_GPU=${WITH_GPU} \ + -DWITH_MKL=${WITH_MKL} \ + -DWITH_TENSORRT=${WITH_TENSORRT} \ + -DTENSORRT_DIR=${TENSORRT_DIR} \ + -DPADDLE_DIR=${PADDLE_DIR} \ + -DWITH_STATIC_LIB=${WITH_STATIC_LIB} \ + -DCUDA_LIB=${CUDA_LIB} \ + -DCUDNN_LIB=${CUDNN_LIB} +make diff --git a/deploy/cpp/src/paddlex.cpp b/deploy/cpp/src/paddlex.cpp index cf1dfc955c43f9a61539e93a34c77c6ab4b198a9..47dc5b9e9e9104e2d4983a8ac077e5a0810610cf 100644 --- a/deploy/cpp/src/paddlex.cpp +++ b/deploy/cpp/src/paddlex.cpp @@ -65,7 +65,11 @@ void Model::create_predictor(const std::string& model_dir, config.SwitchUseFeedFetchOps(false); config.SwitchSpecifyInputNames(true); // 开启图优化 +#if defined(__arm__) || defined(__aarch64__) + config.SwitchIrOptim(false); +#else config.SwitchIrOptim(use_ir_optim); +#endif // 开启内存优化 config.EnableMemoryOptim(); if (use_trt) { @@ -225,6 +229,8 @@ bool Model::predict(const std::vector& im_batch, outputs_.resize(size); output_tensor->copy_to_cpu(outputs_.data()); // 对模型输出结果进行后处理 + (*results).clear(); + (*results).resize(batch_size); int single_batch_size = size / batch_size; for (int i = 0; i < batch_size; ++i) { auto start_ptr = std::begin(outputs_); @@ -343,7 +349,7 @@ bool Model::predict(const cv::Mat& im, DetResult* result) { } bool Model::predict(const std::vector& im_batch, - std::vector* result, + std::vector* results, int thread_num) { for (auto& inputs : inputs_batch_) { inputs.clear(); @@ -467,6 +473,8 @@ bool Model::predict(const std::vector& im_batch, auto lod_vector = output_box_tensor->lod(); int num_boxes = size / 6; // 解析预测框box + (*results).clear(); + (*results).resize(batch_size); for (int i = 0; i < lod_vector[0].size() - 1; ++i) { for (int j = lod_vector[0][i]; j < lod_vector[0][i + 1]; ++j) { Box box; @@ -480,7 +488,7 @@ bool Model::predict(const std::vector& im_batch, float w = xmax - xmin + 1; float h = ymax - ymin + 1; box.coordinate = {xmin, ymin, w, h}; - (*result)[i].boxes.push_back(std::move(box)); + (*results)[i].boxes.push_back(std::move(box)); } } @@ -499,9 +507,9 @@ bool Model::predict(const std::vector& im_batch, output_mask_tensor->copy_to_cpu(output_mask.data()); int mask_idx = 0; for (int i = 0; i < lod_vector[0].size() - 1; ++i) { - (*result)[i].mask_resolution = output_mask_shape[2]; - for (int j = 0; j < (*result)[i].boxes.size(); ++j) { - Box* box = &(*result)[i].boxes[j]; + (*results)[i].mask_resolution = output_mask_shape[2]; + for (int j = 0; j < (*results)[i].boxes.size(); ++j) { + Box* box = &(*results)[i].boxes[j]; int category_id = box->category_id; auto begin_mask = output_mask.begin() + (mask_idx * classes + category_id) * mask_pixels; @@ -624,7 +632,7 @@ bool Model::predict(const cv::Mat& im, SegResult* result) { } bool Model::predict(const std::vector& im_batch, - std::vector* result, + std::vector* results, int thread_num) { for (auto& inputs : inputs_batch_) { inputs.clear(); @@ -647,8 +655,8 @@ bool Model::predict(const std::vector& im_batch, } int batch_size = im_batch.size(); - (*result).clear(); - (*result).resize(batch_size); + (*results).clear(); + (*results).resize(batch_size); int h = inputs_batch_[0].new_im_size_[0]; int w = inputs_batch_[0].new_im_size_[1]; auto im_tensor = predictor_->GetInputTensor("image"); @@ -680,14 +688,14 @@ bool Model::predict(const std::vector& im_batch, int single_batch_size = size / batch_size; for (int i = 0; i < batch_size; ++i) { - (*result)[i].label_map.data.resize(single_batch_size); - (*result)[i].label_map.shape.push_back(1); + (*results)[i].label_map.data.resize(single_batch_size); + 
(*results)[i].label_map.shape.push_back(1); for (int j = 1; j < output_label_shape.size(); ++j) { - (*result)[i].label_map.shape.push_back(output_label_shape[j]); + (*results)[i].label_map.shape.push_back(output_label_shape[j]); } std::copy(output_labels_iter + i * single_batch_size, output_labels_iter + (i + 1) * single_batch_size, - (*result)[i].label_map.data.data()); + (*results)[i].label_map.data.data()); } // 获取预测置信度scoremap @@ -704,29 +712,29 @@ bool Model::predict(const std::vector& im_batch, int single_batch_score_size = size / batch_size; for (int i = 0; i < batch_size; ++i) { - (*result)[i].score_map.data.resize(single_batch_score_size); - (*result)[i].score_map.shape.push_back(1); + (*results)[i].score_map.data.resize(single_batch_score_size); + (*results)[i].score_map.shape.push_back(1); for (int j = 1; j < output_score_shape.size(); ++j) { - (*result)[i].score_map.shape.push_back(output_score_shape[j]); + (*results)[i].score_map.shape.push_back(output_score_shape[j]); } std::copy(output_scores_iter + i * single_batch_score_size, output_scores_iter + (i + 1) * single_batch_score_size, - (*result)[i].score_map.data.data()); + (*results)[i].score_map.data.data()); } // 解析输出结果到原图大小 for (int i = 0; i < batch_size; ++i) { - std::vector label_map((*result)[i].label_map.data.begin(), - (*result)[i].label_map.data.end()); - cv::Mat mask_label((*result)[i].label_map.shape[1], - (*result)[i].label_map.shape[2], + std::vector label_map((*results)[i].label_map.data.begin(), + (*results)[i].label_map.data.end()); + cv::Mat mask_label((*results)[i].label_map.shape[1], + (*results)[i].label_map.shape[2], CV_8UC1, label_map.data()); - cv::Mat mask_score((*result)[i].score_map.shape[2], - (*result)[i].score_map.shape[3], + cv::Mat mask_score((*results)[i].score_map.shape[2], + (*results)[i].score_map.shape[3], CV_32FC1, - (*result)[i].score_map.data.data()); + (*results)[i].score_map.data.data()); int idx = 1; int len_postprocess = inputs_batch_[i].im_size_before_resize_.size(); for (std::vector::reverse_iterator iter = @@ -762,12 +770,12 @@ bool Model::predict(const std::vector& im_batch, } ++idx; } - (*result)[i].label_map.data.assign(mask_label.begin(), + (*results)[i].label_map.data.assign(mask_label.begin(), mask_label.end()); - (*result)[i].label_map.shape = {mask_label.rows, mask_label.cols}; - (*result)[i].score_map.data.assign(mask_score.begin(), + (*results)[i].label_map.shape = {mask_label.rows, mask_label.cols}; + (*results)[i].score_map.data.assign(mask_score.begin(), mask_score.end()); - (*result)[i].score_map.shape = {mask_score.rows, mask_score.cols}; + (*results)[i].score_map.shape = {mask_score.rows, mask_score.cols}; } return true; } diff --git a/deploy/cpp/src/transforms.cpp b/deploy/cpp/src/transforms.cpp index 99a73ee7345bbc8cc672d1c42627a9326ded0cf7..f623fc664e9d66002e0eb0065d034d90965eddf7 100644 --- a/deploy/cpp/src/transforms.cpp +++ b/deploy/cpp/src/transforms.cpp @@ -15,6 +15,7 @@ #include #include #include +#include #include "include/paddlex/transforms.h" @@ -60,8 +61,8 @@ bool ResizeByShort::Run(cv::Mat* im, ImageBlob* data) { data->reshape_order_.push_back("resize"); float scale = GenerateScale(*im); - int width = static_cast(scale * im->cols); - int height = static_cast(scale * im->rows); + int width = static_cast(round(scale * im->cols)); + int height = static_cast(round(scale * im->rows)); cv::resize(*im, *im, cv::Size(width, height), 0, 0, cv::INTER_LINEAR); data->new_im_size_[0] = im->rows; @@ -110,8 +111,9 @@ bool Padding::Run(cv::Mat* im, ImageBlob* data) 
{ << ", but they should be greater than 0." << std::endl; return false; } + cv::Scalar value = cv::Scalar(im_value_[0], im_value_[1], im_value_[2]); cv::copyMakeBorder( - *im, *im, 0, padding_h, 0, padding_w, cv::BORDER_CONSTANT, cv::Scalar(0)); + *im, *im, 0, padding_h, 0, padding_w, cv::BORDER_CONSTANT, value); data->new_im_size_[0] = im->rows; data->new_im_size_[1] = im->cols; return true; diff --git a/deploy/cpp/src/visualize.cpp b/deploy/cpp/src/visualize.cpp index 1511887f097e20826f13c8c1f098ceea4efc0b5b..afc1733b497269b706bf4e07d82f3a7aa43087f5 100644 --- a/deploy/cpp/src/visualize.cpp +++ b/deploy/cpp/src/visualize.cpp @@ -34,8 +34,8 @@ std::vector GenerateColorMap(int num_class) { cv::Mat Visualize(const cv::Mat& img, const DetResult& result, const std::map& labels, - const std::vector& colormap, float threshold) { + auto colormap = GenerateColorMap(labels.size()); cv::Mat vis_img = img.clone(); auto boxes = result.boxes; for (int i = 0; i < boxes.size(); ++i) { @@ -107,8 +107,8 @@ cv::Mat Visualize(const cv::Mat& img, cv::Mat Visualize(const cv::Mat& img, const SegResult& result, - const std::map& labels, - const std::vector& colormap) { + const std::map& labels) { + auto colormap = GenerateColorMap(labels.size()); std::vector label_map(result.label_map.data.begin(), result.label_map.data.end()); cv::Mat mask(result.label_map.shape[0], diff --git a/deploy/lite/android/demo/.gitignore b/deploy/lite/android/demo/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..2b75303ac58f551de0a327638a60b909c6d33ece --- /dev/null +++ b/deploy/lite/android/demo/.gitignore @@ -0,0 +1,13 @@ +*.iml +.gradle +/local.properties +/.idea/caches +/.idea/libraries +/.idea/modules.xml +/.idea/workspace.xml +/.idea/navEditor.xml +/.idea/assetWizardSettings.xml +.DS_Store +/build +/captures +.externalNativeBuild diff --git a/deploy/lite/android/demo/app/.gitignore b/deploy/lite/android/demo/app/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..796b96d1c402326528b4ba3c12ee9d92d0e212e9 --- /dev/null +++ b/deploy/lite/android/demo/app/.gitignore @@ -0,0 +1 @@ +/build diff --git a/deploy/lite/android/demo/app/build.gradle b/deploy/lite/android/demo/app/build.gradle new file mode 100644 index 0000000000000000000000000000000000000000..f743f1d23905566772c4e572e9700df5ad779ca0 --- /dev/null +++ b/deploy/lite/android/demo/app/build.gradle @@ -0,0 +1,119 @@ +import java.security.MessageDigest + +apply plugin: 'com.android.application' + +android { + compileSdkVersion 28 + defaultConfig { + applicationId "com.baidu.paddlex.lite.demo" + minSdkVersion 15 + targetSdkVersion 28 + versionCode 1 + versionName "1.0" + testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" + } + buildTypes { + release { + minifyEnabled false + proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' + } + } +} + +dependencies { + implementation fileTree(include: ['*.aar'], dir: 'libs') + implementation 'com.android.support:appcompat-v7:28.0.0' + implementation 'com.android.support.constraint:constraint-layout:1.1.3' + implementation 'com.android.support:design:28.0.0' + testImplementation 'junit:junit:4.12' + androidTestImplementation 'com.android.support.test:runner:1.0.2' + androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2' +} + + +def paddlexAndroidSdk = 'https://bj.bcebos.com/paddlex/deploy/lite/paddlex_lite_11cbd50e.tar.gz' + +task downloadAndExtractPaddleXAndroidSdk(type: DefaultTask) { + 
doFirst { + println "Downloading and extracting PaddleX Android SDK"} + doLast { + // Prepare cache folder for sdk + if (!file("cache").exists()) { + mkdir "cache" + } + // Generate cache name for sdk + MessageDigest messageDigest = MessageDigest.getInstance('MD5') + messageDigest.update(paddlexAndroidSdk.bytes) + String cacheName = new BigInteger(1, messageDigest.digest()).toString(32) + // Download sdk + if (!file("cache/${cacheName}.tar.gz").exists()) { + ant.get(src: paddlexAndroidSdk, dest: file("cache/${cacheName}.tar.gz")) + } + // Unpack sdk + copy { + from tarTree("cache/${cacheName}.tar.gz") + into "cache/${cacheName}" + } + // Copy sdk + if (!file("libs/paddlex.aar").exists()) { + copy { + from "cache/${cacheName}/paddlex.aar" + into "libs" + } + } + } +} + +preBuild.dependsOn downloadAndExtractPaddleXAndroidSdk + +def paddleXLiteModel = 'https://bj.bcebos.com/paddlex/deploy/lite/mobilenetv2_imagenet_lite2.6.1.tar.gz' +task downloadAndExtractPaddleXLiteModel(type: DefaultTask) { + doFirst { + println "Downloading and extracting PaddleX Lite model"} + + doLast { + // Prepare cache folder for model + if (!file("cache").exists()) { + mkdir "cache" + } + // Generate cache name for model + MessageDigest messageDigest = MessageDigest.getInstance('MD5') + messageDigest.update(paddleXLiteModel.bytes) + String cacheName = new BigInteger(1, messageDigest.digest()).toString(32) + // Download model + if (!file("cache/${cacheName}.tar.gz").exists()) { + ant.get(src: paddleXLiteModel, dest: file("cache/${cacheName}.tar.gz")) + } + + // Unpack model + copy { + from tarTree("cache/${cacheName}.tar.gz") + into "cache/${cacheName}" + } + + // Copy model.nb + if (!file("src/main/assets/model/model.nb").exists()) { + copy { + from "cache/${cacheName}/model.nb" + into "src/main/assets/model/" + } + } + // Copy config file model.yml + if (!file("src/main/assets/config/model.yml").exists()) { + copy { + from "cache/${cacheName}/model.yml" + into "src/main/assets/config/" + } + } + // Copy test image test.jpg + if (!file("src/main/assets/images/test.jpg").exists()) { + copy { + from "cache/${cacheName}/test.jpg" + into "src/main/assets/images/" + } + } + } + +} + +preBuild.dependsOn downloadAndExtractPaddleXLiteModel diff --git a/deploy/lite/android/demo/app/proguard-rules.pro b/deploy/lite/android/demo/app/proguard-rules.pro new file mode 100644 index 0000000000000000000000000000000000000000..f1b424510da51fd82143bc74a0a801ae5a1e2fcd --- /dev/null +++ b/deploy/lite/android/demo/app/proguard-rules.pro @@ -0,0 +1,21 @@ +# Add project specific ProGuard rules here. +# You can control the set of applied configuration files using the +# proguardFiles setting in build.gradle. +# +# For more details, see +# http://developer.android.com/guide/developing/tools/proguard.html + +# If your project uses WebView with JS, uncomment the following +# and specify the fully qualified class name to the JavaScript interface +# class: +#-keepclassmembers class fqcn.of.javascript.interface.for.webview { +# public *; +#} + +# Uncomment this to preserve the line number information for +# debugging stack traces. +#-keepattributes SourceFile,LineNumberTable + +# If you keep the line number information, uncomment this to +# hide the original source file name.
+#-renamesourcefileattribute SourceFile diff --git a/deploy/lite/android/demo/app/src/androidTest/java/com/baidu/paddlex/lite/demo/ExampleInstrumentedTest.java b/deploy/lite/android/demo/app/src/androidTest/java/com/baidu/paddlex/lite/demo/ExampleInstrumentedTest.java new file mode 100644 index 0000000000000000000000000000000000000000..4b58dec6f5dd8bfa083ec951d659dd0690f67221 --- /dev/null +++ b/deploy/lite/android/demo/app/src/androidTest/java/com/baidu/paddlex/lite/demo/ExampleInstrumentedTest.java @@ -0,0 +1,32 @@ +package com.baidu.paddlex.lite.demo; + +import android.content.Context; +import android.content.res.AssetManager; +import android.support.test.InstrumentationRegistry; +import android.support.test.runner.AndroidJUnit4; + +import com.baidu.paddlex.config.ConfigParser; + +import org.junit.Test; +import org.junit.runner.RunWith; + +import java.io.IOException; +import java.io.InputStream; + +import static org.junit.Assert.assertEquals; + +/** + * Instrumented test, which will execute on an Android device. + * + * @see Testing documentation + */ +@RunWith(AndroidJUnit4.class) +public class ExampleInstrumentedTest { + @Test + public void useAppContext() throws IOException { + // Context of the app under test. + Context appContext = InstrumentationRegistry.getTargetContext(); + AssetManager ass = appContext.getAssets(); + assertEquals("com.baidu.paddlex.lite.demo", appContext.getPackageName()); + } +} diff --git a/deploy/lite/android/demo/app/src/main/AndroidManifest.xml b/deploy/lite/android/demo/app/src/main/AndroidManifest.xml new file mode 100644 index 0000000000000000000000000000000000000000..940c9692fcf6fdfe6b07e8f4641fe7e9a9e5ff5f --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/AndroidManifest.xml @@ -0,0 +1,28 @@ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/AppCompatPreferenceActivity.java b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/AppCompatPreferenceActivity.java new file mode 100644 index 0000000000000000000000000000000000000000..c6f4eff8e736278c71ef2c34783dd3e1b3659495 --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/AppCompatPreferenceActivity.java @@ -0,0 +1,126 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package com.baidu.paddlex.lite.demo; + +import android.content.res.Configuration; +import android.os.Bundle; +import android.preference.PreferenceActivity; +import android.support.annotation.LayoutRes; +import android.support.annotation.Nullable; +import android.support.v7.app.ActionBar; +import android.support.v7.app.AppCompatDelegate; +import android.support.v7.widget.Toolbar; +import android.view.MenuInflater; +import android.view.View; +import android.view.ViewGroup; + +/** + * A {@link android.preference.PreferenceActivity} which implements and proxies the necessary calls + * to be used with AppCompat. + *

+ * This technique can be used with an {@link android.app.Activity} class, not just + * {@link android.preference.PreferenceActivity}. + */ + +public abstract class AppCompatPreferenceActivity extends PreferenceActivity { + private AppCompatDelegate mDelegate; + + @Override + protected void onCreate(Bundle savedInstanceState) { + getDelegate().installViewFactory(); + getDelegate().onCreate(savedInstanceState); + super.onCreate(savedInstanceState); + } + + @Override + protected void onPostCreate(Bundle savedInstanceState) { + super.onPostCreate(savedInstanceState); + getDelegate().onPostCreate(savedInstanceState); + } + + public ActionBar getSupportActionBar() { + return getDelegate().getSupportActionBar(); + } + + public void setSupportActionBar(@Nullable Toolbar toolbar) { + getDelegate().setSupportActionBar(toolbar); + } + + @Override + public MenuInflater getMenuInflater() { + return getDelegate().getMenuInflater(); + } + + @Override + public void setContentView(@LayoutRes int layoutResID) { + getDelegate().setContentView(layoutResID); + } + + @Override + public void setContentView(View view) { + getDelegate().setContentView(view); + } + + @Override + public void setContentView(View view, ViewGroup.LayoutParams params) { + getDelegate().setContentView(view, params); + } + + @Override + public void addContentView(View view, ViewGroup.LayoutParams params) { + getDelegate().addContentView(view, params); + } + + @Override + protected void onPostResume() { + super.onPostResume(); + getDelegate().onPostResume(); + } + + @Override + protected void onTitleChanged(CharSequence title, int color) { + super.onTitleChanged(title, color); + getDelegate().setTitle(title); + } + + @Override + public void onConfigurationChanged(Configuration newConfig) { + super.onConfigurationChanged(newConfig); + getDelegate().onConfigurationChanged(newConfig); + } + + @Override + protected void onStop() { + super.onStop(); + getDelegate().onStop(); + } + + @Override + protected void onDestroy() { + super.onDestroy(); + getDelegate().onDestroy(); + } + + public void invalidateOptionsMenu() { + getDelegate().invalidateOptionsMenu(); + } + + private AppCompatDelegate getDelegate() { + if (mDelegate == null) { + mDelegate = AppCompatDelegate.create(this, null); + } + return mDelegate; + } +} diff --git a/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/MainActivity.java b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/MainActivity.java new file mode 100644 index 0000000000000000000000000000000000000000..62e47214fc80a40fbfa173967f61e490eab92e47 --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/MainActivity.java @@ -0,0 +1,466 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package com.baidu.paddlex.lite.demo; + +import android.Manifest; +import android.app.ProgressDialog; +import android.content.ContentResolver; +import android.content.Intent; +import android.content.SharedPreferences; +import android.content.pm.PackageManager; +import android.database.Cursor; +import android.graphics.Bitmap; +import android.graphics.BitmapFactory; +import android.net.Uri; +import android.os.Bundle; +import android.os.Handler; +import android.os.HandlerThread; +import android.os.Message; +import android.preference.PreferenceManager; +import android.provider.MediaStore; +import android.support.annotation.NonNull; +import android.support.v4.app.ActivityCompat; +import android.support.v4.content.ContextCompat; +import android.support.v7.app.AppCompatActivity; +import android.text.method.ScrollingMovementMethod; +import android.util.Log; +import android.view.Menu; +import android.view.MenuInflater; +import android.view.MenuItem; +import android.view.View; +import android.widget.Button; +import android.widget.ImageView; +import android.widget.TextView; +import android.widget.Toast; +import com.baidu.paddlex.Predictor; +import com.baidu.paddlex.Utils; +import com.baidu.paddlex.config.ConfigParser; +import com.baidu.paddlex.postprocess.ClsResult; +import com.baidu.paddlex.postprocess.DetResult; +import com.baidu.paddlex.postprocess.SegResult; +import com.baidu.paddlex.visual.Visualize; +import org.opencv.core.Mat; +import org.opencv.imgcodecs.Imgcodecs; +import org.opencv.imgproc.Imgproc; + +import java.io.File; +import java.io.IOException; +import java.io.InputStream; + +public class MainActivity extends AppCompatActivity { + public static final int OPEN_GALLERY_REQUEST_CODE = 0; + public static final int TAKE_PHOTO_REQUEST_CODE = 1; + public static final int REQUEST_LOAD_MODEL = 0; + public static final int REQUEST_RUN_MODEL = 1; + public static final int RESPONSE_LOAD_MODEL_SUCCESSED = 0; + public static final int RESPONSE_LOAD_MODEL_FAILED = 1; + public static final int RESPONSE_RUN_MODEL_SUCCESSED = 2; + public static final int RESPONSE_RUN_MODEL_FAILED = 3; + private static final String TAG = MainActivity.class.getSimpleName(); + protected ProgressDialog pbLoadModel = null; + protected ProgressDialog pbRunModel = null; + + protected Handler receiver = null; // receive messages from worker thread + protected Handler sender = null; // send command to worker thread + protected HandlerThread worker = null; // worker thread to load&run model + + protected TextView tvInputSetting; + protected ImageView ivInputImage; + protected TextView tvOutputResult; + protected TextView tvInferenceTime; + private Button predictButton; + protected String testImagePathFromAsset; + protected String testYamlPathFromAsset; + protected String testModelPathFromAsset; + + // Predictor + protected Predictor predictor = new Predictor(); + // model config + protected ConfigParser configParser = new ConfigParser(); + // Visualize + protected Visualize visualize = new Visualize(); + // Predict Mat of Opencv + protected Mat predictMat; + + + + + @Override + protected void onCreate(Bundle savedInstanceState) { + super.onCreate(savedInstanceState); + setContentView(R.layout.activity_main); + receiver = new Handler() { + @Override + public void handleMessage(Message msg) { + switch (msg.what) { + case RESPONSE_LOAD_MODEL_SUCCESSED: + pbLoadModel.dismiss(); + Toast.makeText(MainActivity.this, "Load model successfully!", Toast.LENGTH_SHORT).show(); + break; + case RESPONSE_LOAD_MODEL_FAILED: + 
pbLoadModel.dismiss(); + Toast.makeText(MainActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show(); + break; + case RESPONSE_RUN_MODEL_SUCCESSED: + pbRunModel.dismiss(); + onRunModelSuccessed(); + break; + case RESPONSE_RUN_MODEL_FAILED: + pbRunModel.dismiss(); + Toast.makeText(MainActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show(); + onRunModelFailed(); + break; + default: + break; + } + } + }; + worker = new HandlerThread("Predictor Worker"); + worker.start(); + sender = new Handler(worker.getLooper()) { + public void handleMessage(Message msg) { + switch (msg.what) { + case REQUEST_LOAD_MODEL: + // load model and reload test image + if (onLoadModel()) { + receiver.sendEmptyMessage(RESPONSE_LOAD_MODEL_SUCCESSED); + } else { + receiver.sendEmptyMessage(RESPONSE_LOAD_MODEL_FAILED); + } + break; + case REQUEST_RUN_MODEL: + // run model if model is loaded + if (onRunModel()) { + receiver.sendEmptyMessage(RESPONSE_RUN_MODEL_SUCCESSED); + } else { + receiver.sendEmptyMessage(RESPONSE_RUN_MODEL_FAILED); + } + break; + default: + break; + } + } + }; + + tvInputSetting = findViewById(R.id.tv_input_setting); + ivInputImage = findViewById(R.id.iv_input_image); + predictButton = findViewById(R.id.iv_predict_button); + tvInferenceTime = findViewById(R.id.tv_inference_time); + tvOutputResult = findViewById(R.id.tv_output_result); + tvInputSetting.setMovementMethod(ScrollingMovementMethod.getInstance()); + tvOutputResult.setMovementMethod(ScrollingMovementMethod.getInstance()); + SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this); + String image_path = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY), + getString(R.string.IMAGE_PATH_DEFAULT)); + Utils.initialOpencv(); + loadTestImageFromAsset(image_path); + predictButton.setOnClickListener(new View.OnClickListener() { + @Override + public void onClick(View v) { + if(predictor.isLoaded()){ + onLoadModelSuccessed(); + } + } + }); + + } + + public boolean onLoadModel() { + return predictor.init(configParser); + } + + public boolean onRunModel() { + return predictor.isLoaded() && predictor.predict(); + } + + public void onRunModelFailed() { + } + + public void loadModel() { + pbLoadModel = ProgressDialog.show(this, "", "Loading model...", false, false); + sender.sendEmptyMessage(REQUEST_LOAD_MODEL); + } + + public void runModel() { + pbRunModel = ProgressDialog.show(this, "", "Running model...", false, false); + sender.sendEmptyMessage(REQUEST_RUN_MODEL); + } + + public void onLoadModelSuccessed() { + if (predictMat != null && predictor.isLoaded()) { + int w = predictMat.width(); + int h = predictMat.height(); + int c = predictMat.channels(); + predictor.setInputMat(predictMat); + runModel(); + } + } + + public void onRunModelSuccessed() { + // obtain results and update UI + tvInferenceTime.setText("Inference time: " + predictor.getInferenceTime() + " ms"); + + if (configParser.getModelType().equalsIgnoreCase("segmenter")) { + SegResult segResult = predictor.getSegResult(); + Mat maskMat = visualize.draw(segResult, predictMat.clone(), predictor.getImageBlob(), 1); + Imgproc.cvtColor(maskMat, maskMat, Imgproc.COLOR_BGRA2RGBA); + Bitmap outputImage = Bitmap.createBitmap(maskMat.width(), maskMat.height(), Bitmap.Config.ARGB_8888); + org.opencv.android.Utils.matToBitmap(maskMat, outputImage); + if (outputImage != null) { + ivInputImage.setImageBitmap(outputImage); + } + } else if (configParser.getModelType().equalsIgnoreCase("detector")) { + DetResult detResult = 
predictor.getDetResult(); + Mat roiMat = visualize.draw(detResult, predictMat.clone()); + Imgproc.cvtColor(roiMat, roiMat, Imgproc.COLOR_BGR2RGB); + Bitmap outputImage = Bitmap.createBitmap(roiMat.width(),roiMat.height(), Bitmap.Config.ARGB_8888); + org.opencv.android.Utils.matToBitmap(roiMat,outputImage); + if (outputImage != null) { + ivInputImage.setImageBitmap(outputImage); + } + } else if (configParser.getModelType().equalsIgnoreCase("classifier")) { + ClsResult clsResult = predictor.getClsResult(); + if (configParser.getLabeList().size() > 0) { + String outputResult = "Top1: " + clsResult.getCategory() + " - " + String.format("%.3f", clsResult.getScore()); + tvOutputResult.setText(outputResult); + tvOutputResult.scrollTo(0, 0); + } + } + } + + public void onMatChanged(Mat mat) { + this.predictMat = mat.clone(); + } + + public void onImageChanged(Bitmap image) { + ivInputImage.setImageBitmap(image); + tvOutputResult.setText(""); + tvInferenceTime.setText("Inference time: -- ms"); + } + + public void onSettingsClicked() { + startActivity(new Intent(MainActivity.this, SettingsActivity.class)); + } + + @Override + public boolean onCreateOptionsMenu(Menu menu) { + MenuInflater inflater = getMenuInflater(); + inflater.inflate(R.menu.menu_action_options, menu); + return true; + } + + @Override + public boolean onOptionsItemSelected(MenuItem item) { + switch (item.getItemId()) { + case android.R.id.home: + finish(); + break; + case R.id.open_gallery: + if (requestAllPermissions()) { + openGallery(); + } + break; + case R.id.take_photo: + if (requestAllPermissions()) { + takePhoto(); + } + break; + case R.id.settings: + if (requestAllPermissions()) { + // make sure we have SDCard r&w permissions to load model from SDCard + onSettingsClicked(); + } + break; + } + return super.onOptionsItemSelected(item); + } + + @Override + public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, + @NonNull int[] grantResults) { + super.onRequestPermissionsResult(requestCode, permissions, grantResults); + if (grantResults[0] != PackageManager.PERMISSION_GRANTED || grantResults[1] != PackageManager.PERMISSION_GRANTED) { + Toast.makeText(this, "Permission Denied", Toast.LENGTH_SHORT).show(); + } + } + + @Override + protected void onActivityResult(int requestCode, int resultCode, Intent data) { + super.onActivityResult(requestCode, resultCode, data); + if (resultCode == RESULT_OK && data != null) { + switch (requestCode) { + case OPEN_GALLERY_REQUEST_CODE: + try { + ContentResolver resolver = getContentResolver(); + Uri uri = data.getData(); + Bitmap image = MediaStore.Images.Media.getBitmap(resolver, uri); + String[] proj = {MediaStore.Images.Media.DATA}; + Cursor cursor = managedQuery(uri, proj, null, null, null); + cursor.moveToFirst(); + int columnIndex = cursor.getColumnIndex(proj[0]); + String imgDecodableString = cursor.getString(columnIndex); + File file = new File(imgDecodableString); + Mat mat = Imgcodecs.imread(file.getAbsolutePath(),Imgcodecs.IMREAD_COLOR); + onImageChanged(image); + onMatChanged(mat); + } catch (IOException e) { + Log.e(TAG, e.toString()); + } + break; + case TAKE_PHOTO_REQUEST_CODE: + Bitmap image = (Bitmap) data.getParcelableExtra("data"); + Mat mat = new Mat(); + org.opencv.android.Utils.bitmapToMat(image, mat); + Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2BGR); + onImageChanged(image); + onMatChanged(mat); + break; + default: + break; + } + } + } + + private boolean requestAllPermissions() { + if (ContextCompat.checkSelfPermission(this, 
Manifest.permission.WRITE_EXTERNAL_STORAGE) + != PackageManager.PERMISSION_GRANTED || ContextCompat.checkSelfPermission(this, + Manifest.permission.CAMERA) + != PackageManager.PERMISSION_GRANTED) { + ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE, + Manifest.permission.CAMERA}, + 0); + return false; + } + return true; + } + + private void openGallery() { + Intent intent = new Intent(Intent.ACTION_PICK, null); + intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*"); + startActivityForResult(intent, OPEN_GALLERY_REQUEST_CODE); + } + + private void takePhoto() { + Intent takePhotoIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); + if (takePhotoIntent.resolveActivity(getPackageManager()) != null) { + startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE); + } + } + + @Override + public boolean onPrepareOptionsMenu(Menu menu) { + boolean isLoaded = predictor.isLoaded(); + menu.findItem(R.id.open_gallery).setEnabled(isLoaded); + menu.findItem(R.id.take_photo).setEnabled(isLoaded); + return super.onPrepareOptionsMenu(menu); + } + + @Override + protected void onResume() { + Log.i(TAG, "begin onResume"); + super.onResume(); + SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this); + + boolean settingsChanged = false; + boolean testImageChanged = false; + String modelPath = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY), + getString(R.string.MODEL_PATH_DEFAULT)); + settingsChanged |= !modelPath.equalsIgnoreCase(testModelPathFromAsset); + String yamlPath = sharedPreferences.getString(getString(R.string.YAML_PATH_KEY), + getString(R.string.YAML_PATH_DEFAULT)); + settingsChanged |= !yamlPath.equalsIgnoreCase(testYamlPathFromAsset); + int cpuThreadNum = Integer.parseInt(sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY), + getString(R.string.CPU_THREAD_NUM_DEFAULT))); + settingsChanged |= cpuThreadNum != configParser.getCpuThreadNum(); + String cpuPowerMode = sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY), + getString(R.string.CPU_POWER_MODE_DEFAULT)); + settingsChanged |= !cpuPowerMode.equalsIgnoreCase(configParser.getCpuPowerMode()); + String imagePath = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY), + getString(R.string.IMAGE_PATH_DEFAULT)); + testImageChanged |= !imagePath.equalsIgnoreCase(testImagePathFromAsset); + + testYamlPathFromAsset = yamlPath; + testModelPathFromAsset = modelPath; + if (settingsChanged) { + try { + String realModelPath = modelPath; + if (!modelPath.substring(0, 1).equals("/")) { + String modelFileName = Utils.getFileNameFromString(modelPath); + realModelPath = this.getCacheDir() + File.separator + modelFileName; + Utils.copyFileFromAssets(this, modelPath, realModelPath); + } + String realYamlPath = yamlPath; + if (!yamlPath.substring(0, 1).equals("/")) { + String yamlFileName = Utils.getFileNameFromString(yamlPath); + realYamlPath = this.getCacheDir() + File.separator + yamlFileName; + Utils.copyFileFromAssets(this, yamlPath, realYamlPath); + } + configParser.init(realModelPath, realYamlPath, cpuThreadNum, cpuPowerMode); + visualize.init(configParser.getNumClasses()); + } catch (IOException e) { + e.printStackTrace(); + Toast.makeText(MainActivity.this, "Load config failed!", Toast.LENGTH_SHORT).show(); + } + // update UI + tvInputSetting.setText("Model: " + configParser.getModel()+ "\n" + "CPU" + + " Thread Num: " + Integer.toString(configParser.getCpuThreadNum()) + "\n" + "CPU Power 
Mode: " + configParser.getCpuPowerMode()); + tvInputSetting.scrollTo(0, 0); + // reload model if configure has been changed + loadModel(); + } + + if (testImageChanged){ + loadTestImageFromAsset(imagePath); + } + } + + public void loadTestImageFromAsset(String imagePath){ + if (imagePath.isEmpty()) { + return; + } + // read test image file from custom file_paths if the first character of mode file_paths is '/', otherwise read test + // image file from assets + testImagePathFromAsset = imagePath; + if (!imagePath.substring(0, 1).equals("/")) { + InputStream imageStream = null; + try { + imageStream = getAssets().open(imagePath); + } catch (IOException e) { + e.printStackTrace(); + } + onImageChanged(BitmapFactory.decodeStream(imageStream)); + String realPath; + String imageFileName = Utils.getFileNameFromString(imagePath); + realPath = this.getCacheDir() + File.separator + imageFileName; + Utils.copyFileFromAssets(this, imagePath, realPath); + onMatChanged(Imgcodecs.imread(realPath, Imgcodecs.IMREAD_COLOR)); + } else { + if (!new File(imagePath).exists()) { + return; + } + onMatChanged(Imgcodecs.imread(imagePath, Imgcodecs.IMREAD_COLOR)); + onImageChanged( BitmapFactory.decodeFile(imagePath)); + } + } + + @Override + protected void onDestroy() { + if (predictor != null) { + predictor.releaseModel(); + } + worker.quit(); + super.onDestroy(); + } +} \ No newline at end of file diff --git a/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/SettingsActivity.java b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/SettingsActivity.java new file mode 100644 index 0000000000000000000000000000000000000000..271343ff5a626ba5d8a224dfe832738ae4ede123 --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/java/com/baidu/paddlex/lite/demo/SettingsActivity.java @@ -0,0 +1,158 @@ +// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package com.baidu.paddlex.lite.demo; + +import com.baidu.paddlex.Utils; + +import android.content.SharedPreferences; +import android.os.Bundle; +import android.preference.CheckBoxPreference; +import android.preference.EditTextPreference; +import android.preference.ListPreference; +import android.support.v7.app.ActionBar; + +import java.util.ArrayList; +import java.util.List; + +public class SettingsActivity extends AppCompatPreferenceActivity implements SharedPreferences.OnSharedPreferenceChangeListener { + ListPreference lpChoosePreInstalledModel = null; + CheckBoxPreference cbEnableCustomSettings = null; + EditTextPreference etModelPath = null; + EditTextPreference etYamlPath = null; + EditTextPreference etImagePath = null; + ListPreference lpCPUThreadNum = null; + ListPreference lpCPUPowerMode = null; + + List<String> preInstalledModelPaths = null; + List<String> preInstalledYamlPaths = null; + List<String> preInstalledImagePaths = null; + List<String> preInstalledCPUThreadNums = null; + List<String> preInstalledCPUPowerModes = null; + + @Override + public void onCreate(Bundle savedInstanceState) { + super.onCreate(savedInstanceState); + addPreferencesFromResource(R.xml.settings); + ActionBar supportActionBar = getSupportActionBar(); + if (supportActionBar != null) { + supportActionBar.setDisplayHomeAsUpEnabled(true); + } + + // initialize pre-installed models + preInstalledModelPaths = new ArrayList<String>(); + preInstalledYamlPaths = new ArrayList<String>(); + preInstalledImagePaths = new ArrayList<String>(); + preInstalledCPUThreadNums = new ArrayList<String>(); + preInstalledCPUPowerModes = new ArrayList<String>(); + preInstalledModelPaths.add(getString(R.string.MODEL_PATH_DEFAULT)); + preInstalledYamlPaths.add(getString(R.string.YAML_PATH_DEFAULT)); + preInstalledImagePaths.add(getString(R.string.IMAGE_PATH_DEFAULT)); + preInstalledCPUThreadNums.add(getString(R.string.CPU_THREAD_NUM_DEFAULT)); + preInstalledCPUPowerModes.add(getString(R.string.CPU_POWER_MODE_DEFAULT)); + // initialize UI components + lpChoosePreInstalledModel = + (ListPreference) findPreference(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY)); + String[] preInstalledModelNames = new String[preInstalledModelPaths.size()]; + for (int i = 0; i < preInstalledModelPaths.size(); i++) { + preInstalledModelNames[i] = + preInstalledModelPaths.get(i).substring(preInstalledModelPaths.get(i).lastIndexOf("/") + 1); + } + lpChoosePreInstalledModel.setEntries(preInstalledModelNames); + lpChoosePreInstalledModel.setEntryValues(preInstalledModelPaths.toArray(new String[preInstalledModelPaths.size()])); + cbEnableCustomSettings = + (CheckBoxPreference) findPreference(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY)); + etModelPath = (EditTextPreference) findPreference(getString(R.string.MODEL_PATH_KEY)); + etModelPath.setTitle("Model Path (SDCard: " + Utils.getSDCardDirectory() + ")"); + etYamlPath = (EditTextPreference) findPreference(getString(R.string.YAML_PATH_KEY)); + etImagePath = (EditTextPreference) findPreference(getString(R.string.IMAGE_PATH_KEY)); + lpCPUThreadNum = + (ListPreference) findPreference(getString(R.string.CPU_THREAD_NUM_KEY)); + lpCPUPowerMode = + (ListPreference) findPreference(getString(R.string.CPU_POWER_MODE_KEY)); + } + + private void reloadPreferenceAndUpdateUI() { + SharedPreferences sharedPreferences = getPreferenceScreen().getSharedPreferences(); + boolean enableCustomSettings = + sharedPreferences.getBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false); + String modelPath = sharedPreferences.getString(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY), +
getString(R.string.MODEL_PATH_DEFAULT)); + int modelIdx = lpChoosePreInstalledModel.findIndexOfValue(modelPath); + if (modelIdx >= 0 && modelIdx < preInstalledModelPaths.size()) { + if (!enableCustomSettings) { + SharedPreferences.Editor editor = sharedPreferences.edit(); + editor.putString(getString(R.string.MODEL_PATH_KEY), preInstalledModelPaths.get(modelIdx)); + editor.putString(getString(R.string.YAML_PATH_KEY), preInstalledYamlPaths.get(modelIdx)); + editor.putString(getString(R.string.IMAGE_PATH_KEY), preInstalledImagePaths.get(modelIdx)); + editor.putString(getString(R.string.CPU_THREAD_NUM_KEY), preInstalledCPUThreadNums.get(modelIdx)); + editor.putString(getString(R.string.CPU_POWER_MODE_KEY), preInstalledCPUPowerModes.get(modelIdx)); + editor.commit(); + } + lpChoosePreInstalledModel.setSummary(modelPath); + } + + cbEnableCustomSettings.setChecked(enableCustomSettings); + etModelPath.setEnabled(enableCustomSettings); + etYamlPath.setEnabled(enableCustomSettings); + etImagePath.setEnabled(enableCustomSettings); + lpCPUThreadNum.setEnabled(enableCustomSettings); + lpCPUPowerMode.setEnabled(enableCustomSettings); + modelPath = sharedPreferences.getString(getString(R.string.MODEL_PATH_KEY), + getString(R.string.MODEL_PATH_DEFAULT)); + String YamlPath = sharedPreferences.getString(getString(R.string.YAML_PATH_KEY), + getString(R.string.YAML_PATH_DEFAULT)); + String imagePath = sharedPreferences.getString(getString(R.string.IMAGE_PATH_KEY), + getString(R.string.IMAGE_PATH_DEFAULT)); + String cpuThreadNum = sharedPreferences.getString(getString(R.string.CPU_THREAD_NUM_KEY), + getString(R.string.CPU_THREAD_NUM_DEFAULT)); + String cpuPowerMode = sharedPreferences.getString(getString(R.string.CPU_POWER_MODE_KEY), + getString(R.string.CPU_POWER_MODE_DEFAULT)); + + etModelPath.setSummary(modelPath); + etModelPath.setText(modelPath); + etYamlPath.setSummary(YamlPath); + etYamlPath.setText(YamlPath); + etImagePath.setSummary(imagePath); + etImagePath.setText(imagePath); + lpCPUThreadNum.setValue(cpuThreadNum); + lpCPUThreadNum.setSummary(cpuThreadNum); + lpCPUPowerMode.setValue(cpuPowerMode); + lpCPUPowerMode.setSummary(cpuPowerMode); + + } + + @Override + protected void onResume() { + super.onResume(); + getPreferenceScreen().getSharedPreferences().registerOnSharedPreferenceChangeListener(this); + reloadPreferenceAndUpdateUI(); + } + + @Override + protected void onPause() { + super.onPause(); + getPreferenceScreen().getSharedPreferences().unregisterOnSharedPreferenceChangeListener(this); + } + + @Override + public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) { + if (key.equals(getString(R.string.CHOOSE_PRE_INSTALLED_MODEL_KEY))) { + SharedPreferences.Editor editor = sharedPreferences.edit(); + editor.putBoolean(getString(R.string.ENABLE_CUSTOM_SETTINGS_KEY), false); + editor.commit(); + } + reloadPreferenceAndUpdateUI(); + } +} diff --git a/deploy/lite/android/demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml b/deploy/lite/android/demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml new file mode 100644 index 0000000000000000000000000000000000000000..1f6bb290603d7caa16c5fb6f61bbfdc750622f5c --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/res/drawable-v24/ic_launcher_foreground.xml @@ -0,0 +1,34 @@ + + + + + + + + + + + diff --git a/deploy/lite/android/demo/app/src/main/res/drawable/face.jpg b/deploy/lite/android/demo/app/src/main/res/drawable/face.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..8963ae3db05894cd4bf3ea17957297363db73171 Binary files /dev/null and b/deploy/lite/android/demo/app/src/main/res/drawable/face.jpg differ diff --git a/deploy/lite/android/demo/app/src/main/res/drawable/ic_launcher_background.xml b/deploy/lite/android/demo/app/src/main/res/drawable/ic_launcher_background.xml new file mode 100644 index 0000000000000000000000000000000000000000..0d025f9bf6b67c63044a36a9ff44fbc69e5c5822 --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/res/drawable/ic_launcher_background.xml @@ -0,0 +1,170 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/deploy/lite/android/demo/app/src/main/res/layout/activity_main.xml b/deploy/lite/android/demo/app/src/main/res/layout/activity_main.xml new file mode 100644 index 0000000000000000000000000000000000000000..97c79f86dbedee3b71ef4b787b05352f70a428fd --- /dev/null +++ b/deploy/lite/android/demo/app/src/main/res/layout/activity_main.xml @@ -0,0 +1,112 @@ + + + + + + + + + + + + + + + + + + + + + + + +