diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 21bab1f7baea919e7548df5adbf4f312c7dacc75..7aed9143a1ac7f5fb7c8768588e7354593a55a62 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,12 +1,8 @@
-- repo: local
+- repo: https://github.com/PaddlePaddle/mirrors-yapf.git
+ sha: 0d79c0c469bab64f7229c9aca2b1186ef47f0e37
hooks:
- id: yapf
- name: yapf
- entry: yapf
- language: system
- args: [-i, --style .style.yapf]
files: \.py$
-
- repo: https://github.com/pre-commit/pre-commit-hooks
sha: a11d9314b22d8f8c7556443875b731ef05965464
hooks:
diff --git a/README.md b/README.md
index 1c5e93d11f427c81eb193be9dbe747ef36b50dda..a27e3ae5e61667618daf9d83b51ce6968fd02e51 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,45 @@
-
+
+
+
[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE)
[![Version](https://img.shields.io/github/release/PaddlePaddle/PaddleX.svg)](https://github.com/PaddlePaddle/PaddleX/releases)
![python version](https://img.shields.io/badge/python-3.6+-orange.svg)
![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
-PaddleX is a full-pipeline deep learning development tool built on PaddlePaddle development kits and tool components, featuring easy integration, ease of use, and end-to-end workflow coverage. As a deep learning development tool, PaddleX not only provides open-source core code that users can flexibly use or integrate, but also offers a companion visual front-end client suite that lets users develop models visually, without writing code.
+PaddleX is a full-pipeline deep learning development tool built on the PaddlePaddle core framework, development kits, and tool components. It has three key strengths: **end-to-end workflow coverage**, **integrated industry practice**, and **ease of use and integration**.
-Visit the [PaddleX website](https://www.paddlepaddle.org.cn/paddle/paddlex) for more details.
+## Features
-## Quick Installation
+- **End-to-end workflow coverage**
+  - Data preparation: supports the protocols of mainstream annotation tools such as LabelMe and 精灵标注 (Colabeler), and integrates seamlessly with the [EasyData intelligent data service platform](https://ai.baidu.com/easydata/), helping developers efficiently obtain the high-quality data needed for AI development.
+  - Model training: integrates the [PaddleClas](https://github.com/PaddlePaddle/PaddleClas), [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection), and [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) vision development kits and the [VisualDL](https://github.com/PaddlePaddle/VisualDL) visualization component on top of the PaddlePaddle core framework for efficient model training.
+  - Multi-platform deployment: bundles the [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim) model compression tool and an AES model encryption SDK; combined with Paddle Inference and [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite), this enables convenient, high-performance, and reliable deployment across platforms.
-PaddleX offers two usage modes to meet different scenarios and user needs:
-- **Development mode:** after installing via pip, developers can call the Python API to train models and integrate them into software more flexibly.
-- **Visual mode:** a portable, cross-platform software package that works out of the box, letting users quickly experience the full PaddlePaddle deep learning workflow visually.
+- **Integrated industry practice**
+  - Curates mature model architectures proven in PaddlePaddle industry practice and publishes case-study tutorials to accelerate industrial adoption by developers.
+  - Ships a rich set of high-quality PaddlePaddle pretrained models via [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), helping developers efficiently apply the PaddlePaddle Master mode.
-### Development Mode
+- **Easy to use and integrate**
+  - PaddleX provides a concise, easy-to-use full-pipeline API: a few lines of code enable features such as over a hundred data augmentations, model interpretability, and C++ model deployment.
+  - Offers a cross-platform GUI built around the PaddleX API, lowering the barrier to applying the full deep learning pipeline.
+
+
+## Installation
+
+PaddleX offers two development modes to meet different scenarios and user needs:
+
+- **Python development mode:** complete or integrate the full pipeline through the Python API. This mode exposes comprehensive, flexible, and open deep learning capabilities, with greater room for customization.
+
+- **GUI development mode:** a cross-platform GUI client built around the PaddleX API. It supports the common features of the `Python development mode` and lowers the barrier to quickly training industry-validated models.
+
+Developers can choose and install whichever mode suits their needs.
+
+
+### Installing the Python Development Mode
**Prerequisites**
-* paddlepaddle >= 1.7.0
+* paddlepaddle >= 1.8.0
* python >= 3.5
* cython
* pycocotools
@@ -27,36 +48,39 @@ PaddleX offers two usage modes to meet different scenarios and user needs:
pip install paddlex -i https://mirror.baidu.com/pypi/simple
```
-### Visual Mode
+### Installing the GUI Development Mode
Go to the PaddleX website's [download page](https://www.paddlepaddle.org.cn/paddle/paddlex), request the portable installer package, and use it out of the box.
+For a GUI-mode tutorial, see the [PaddleX GUI mode guide](https://paddlex.readthedocs.io/zh_CN/latest/paddlex_gui/index.html).
-## Documentation
+## User Documentation
We recommend the [PaddleX online documentation](https://paddlex.readthedocs.io/zh_CN/latest/index.html) for quick access to tutorials and the API reference.
-- [Train a PaddleX model in 10 minutes](docs/quick_start.md)
-- [PaddleX tutorials](docs/tutorials)
-- [PaddleX model zoo](docs/model_zoo.md)
-- [Multi-platform model deployment](docs/deploy.md)
-- [Model training with the PaddleX visual mode](docs/client_use.md)
+- [10-minute quick start](https://paddlex.readthedocs.io/zh_CN/latest/quick_start.html)
+- [PaddleX model training](https://paddlex.readthedocs.io/zh_CN/latest/tutorials/train/index.html#id1)
+- [PaddleX model compression](https://paddlex.readthedocs.io/zh_CN/latest/slim/index.html#id1)
+- [PaddleX model zoo](https://paddlex.readthedocs.io/zh_CN/latest/model_zoo.html#id1)
+- [PaddleX multi-platform deployment](docs/deploy.md)
+
+## Online Tutorials
+
+Quickly try PaddleX's Python development mode online with tutorials on the AIStudio platform.
+- [PaddleX quick start: cosmetics classification with MobileNetV3-ssld](https://aistudio.baidu.com/aistudio/projectdetail/450220)
+- [PaddleX quick start: insect detection with Faster-RCNN](https://aistudio.baidu.com/aistudio/projectdetail/439888)
+- [PaddleX quick start: optic disc segmentation with DeepLabv3+](https://aistudio.baidu.com/aistudio/projectdetail/440197)
-## Feedback
+## Communication and Feedback
- Project website: https://www.paddlepaddle.org.cn/paddle/paddlex
- PaddleX user QQ group: 1045148026 (scan the QR code below with mobile QQ to join)
-
+
+## FAQ
-## PaddlePaddle Ecosystem
+## Changelog
-The PaddleX full-pipeline development tool depends on the following PaddlePaddle development kits and tool components:
+## Contributing
-- [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)
-- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
-- [PaddleClas](https://github.com/PaddlePaddle/PaddleClas)
-- [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
-- [PaddleHub](https://github.com/PaddlePaddle/PaddleHub)
-- [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite)
-- [VisualDL](https://github.com/PaddlePaddle/VisualDL)
+We warmly welcome code contributions and usage suggestions for PaddleX. If you can fix an issue or add a new feature, feel free to submit a pull request.
diff --git a/deploy/README.md b/deploy/README.md
index 37007ac7387c5ec808adfb1f3ed200ea41f9b8f8..1a9efe9d8ef9810592bd05e2e9fbd06842303c49 100644
--- a/deploy/README.md
+++ b/deploy/README.md
@@ -2,4 +2,5 @@
This directory contains the PaddleX model deployment code. For build and usage tutorials, see:
-- [C++ deployment guide](../docs/deploy/deploy.md#C部署)
+- [C++ deployment guide](../docs/tutorials/deploy/deploy.md#C部署)
+- [OpenVINO deployment guide](../docs/tutorials/deploy/deploy.md#openvino部署)
diff --git a/deploy/cpp/CMakeLists.txt b/deploy/cpp/CMakeLists.txt
index d04bf85183e7aaf64950cf56ab9ffe7a78ddb3fe..bd13a46713e1239380891e25c3ee7cb68f0f8d1e 100644
--- a/deploy/cpp/CMakeLists.txt
+++ b/deploy/cpp/CMakeLists.txt
@@ -5,12 +5,29 @@ option(WITH_MKL "Compile demo with MKL/OpenBlas support, default use MKL."
option(WITH_GPU "Compile demo with GPU/CPU, default use CPU." ON)
option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." OFF)
option(WITH_TENSORRT "Compile demo with TensorRT." OFF)
+option(WITH_ENCRYPTION "Compile demo with encryption tool." OFF)
-SET(TENSORRT_DIR "" CACHE PATH "Compile demo with TensorRT")
+SET(TENSORRT_DIR "" CACHE PATH "Location of libraries")
SET(PADDLE_DIR "" CACHE PATH "Location of libraries")
SET(OPENCV_DIR "" CACHE PATH "Location of libraries")
+SET(ENCRYPTION_DIR "" CACHE PATH "Location of libraries")
SET(CUDA_LIB "" CACHE PATH "Location of libraries")
+if (NOT WIN32)
+ set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
+ set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
+ set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/demo)
+else()
+ set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/paddlex_inference)
+ set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/paddlex_inference)
+ set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/paddlex_inference)
+endif()
+
+if (NOT WIN32)
+  SET(YAML_BUILD_TYPE ON CACHE BOOL "Build yaml-cpp as a shared library")
+else()
+  SET(YAML_BUILD_TYPE OFF CACHE BOOL "Build yaml-cpp as a static library")
+endif()
include(cmake/yaml-cpp.cmake)
include_directories("${CMAKE_SOURCE_DIR}/")
@@ -27,6 +44,11 @@ macro(safe_set_static_flag)
endforeach(flag_var)
endmacro()
+
+if (WITH_ENCRYPTION)
+add_definitions( -DWITH_ENCRYPTION=${WITH_ENCRYPTION})
+endif()
+
if (WITH_MKL)
ADD_DEFINITIONS(-DUSE_MKL)
endif()
@@ -183,6 +205,7 @@ else()
set(DEPS ${DEPS}
${MATH_LIB} ${MKLDNN_LIB}
glog gflags_static libprotobuf zlibstatic xxhash libyaml-cppmt)
+
set(DEPS ${DEPS} libcmt shlwapi)
if (EXISTS "${PADDLE_DIR}/third_party/install/snappy/lib")
set(DEPS ${DEPS} snappy)
@@ -207,21 +230,35 @@ if(WITH_GPU)
endif()
endif()
+if(WITH_ENCRYPTION)
+ if(NOT WIN32)
+ include_directories("${ENCRYPTION_DIR}/include")
+ link_directories("${ENCRYPTION_DIR}/lib")
+ set(DEPS ${DEPS} ${ENCRYPTION_DIR}/lib/libpmodel-decrypt${CMAKE_SHARED_LIBRARY_SUFFIX})
+ else()
+    message(FATAL_ERROR "The encryption tool is not supported on Windows")
+ endif()
+endif()
+
if (NOT WIN32)
set(EXTERNAL_LIB "-ldl -lrt -lgomp -lz -lm -lpthread")
set(DEPS ${DEPS} ${EXTERNAL_LIB})
endif()
set(DEPS ${DEPS} ${OpenCV_LIBS})
-add_executable(classifier src/classifier.cpp src/transforms.cpp src/paddlex.cpp)
+add_library(paddlex_inference SHARED src/visualize.cpp src/transforms.cpp src/paddlex.cpp)
+ADD_DEPENDENCIES(paddlex_inference ext-yaml-cpp)
+target_link_libraries(paddlex_inference ${DEPS})
+
+add_executable(classifier demo/classifier.cpp src/transforms.cpp src/paddlex.cpp)
ADD_DEPENDENCIES(classifier ext-yaml-cpp)
target_link_libraries(classifier ${DEPS})
-add_executable(detector src/detector.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp)
+add_executable(detector demo/detector.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp)
ADD_DEPENDENCIES(detector ext-yaml-cpp)
target_link_libraries(detector ${DEPS})
-add_executable(segmenter src/segmenter.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp)
+add_executable(segmenter demo/segmenter.cpp src/transforms.cpp src/paddlex.cpp src/visualize.cpp)
ADD_DEPENDENCIES(segmenter ext-yaml-cpp)
target_link_libraries(segmenter ${DEPS})
@@ -252,3 +289,14 @@ if (WIN32 AND WITH_MKL)
)
endif()
+
+file(COPY "${CMAKE_SOURCE_DIR}/include/paddlex/visualize.h"
+DESTINATION "${CMAKE_BINARY_DIR}/include/" )
+file(COPY "${CMAKE_SOURCE_DIR}/include/paddlex/config_parser.h"
+DESTINATION "${CMAKE_BINARY_DIR}/include/" )
+file(COPY "${CMAKE_SOURCE_DIR}/include/paddlex/transforms.h"
+DESTINATION "${CMAKE_BINARY_DIR}/include/" )
+file(COPY "${CMAKE_SOURCE_DIR}/include/paddlex/results.h"
+DESTINATION "${CMAKE_BINARY_DIR}/include/" )
+file(COPY "${CMAKE_SOURCE_DIR}/include/paddlex/paddlex.h"
+DESTINATION "${CMAKE_BINARY_DIR}/include/" )
diff --git a/deploy/cpp/cmake/yaml-cpp.cmake b/deploy/cpp/cmake/yaml-cpp.cmake
index 30d904dc76196cf106abccb47c003eed485691f1..caa8be513bcaaf7ff73c12c268b8137e5582672c 100644
--- a/deploy/cpp/cmake/yaml-cpp.cmake
+++ b/deploy/cpp/cmake/yaml-cpp.cmake
@@ -14,7 +14,7 @@ ExternalProject_Add(
-DYAML_CPP_INSTALL=OFF
-DYAML_CPP_BUILD_CONTRIB=OFF
-DMSVC_SHARED_RT=OFF
- -DBUILD_SHARED_LIBS=OFF
+ -DBUILD_SHARED_LIBS=${YAML_BUILD_TYPE}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS}
-DCMAKE_CXX_FLAGS_DEBUG=${CMAKE_CXX_FLAGS_DEBUG}
diff --git a/deploy/cpp/src/classifier.cpp b/deploy/cpp/demo/classifier.cpp
similarity index 94%
rename from deploy/cpp/src/classifier.cpp
rename to deploy/cpp/demo/classifier.cpp
index a885ab6a09aa3996e7fc2c661b955754e813c92a..badb835132418098d332014a590d2dbb7a1e43fd 100644
--- a/deploy/cpp/src/classifier.cpp
+++ b/deploy/cpp/demo/classifier.cpp
@@ -25,6 +25,7 @@ DEFINE_string(model_dir, "", "Path of inference model");
DEFINE_bool(use_gpu, false, "Inferring with GPU or CPU");
DEFINE_bool(use_trt, false, "Inferring with TensorRT");
DEFINE_int32(gpu_id, 0, "GPU card id");
+DEFINE_string(key, "", "key of encryption");
DEFINE_string(image, "", "Path of test image file");
DEFINE_string(image_list, "", "Path of test image list file");
@@ -43,7 +44,7 @@ int main(int argc, char** argv) {
  // Load the model
PaddleX::Model model;
- model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id);
+ model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, FLAGS_key);
  // Run prediction
if (FLAGS_image_list != "") {
diff --git a/deploy/cpp/src/detector.cpp b/deploy/cpp/demo/detector.cpp
similarity index 92%
rename from deploy/cpp/src/detector.cpp
rename to deploy/cpp/demo/detector.cpp
index f31178b26f644eb6cb8c22de403f0c5758655ab7..e5fc2800e2678aa26a15c9fa78d2de9b2e6e58ea 100644
--- a/deploy/cpp/src/detector.cpp
+++ b/deploy/cpp/demo/detector.cpp
@@ -26,6 +26,7 @@ DEFINE_string(model_dir, "", "Path of inference model");
DEFINE_bool(use_gpu, false, "Inferring with GPU or CPU");
DEFINE_bool(use_trt, false, "Inferring with TensorRT");
DEFINE_int32(gpu_id, 0, "GPU card id");
+DEFINE_string(key, "", "key of encryption");
DEFINE_string(image, "", "Path of test image file");
DEFINE_string(image_list, "", "Path of test image list file");
DEFINE_string(save_dir, "output", "Path to save visualized image");
@@ -45,7 +46,7 @@ int main(int argc, char** argv) {
  // Load the model
PaddleX::Model model;
- model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id);
+ model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, FLAGS_key);
auto colormap = PaddleX::GenerateColorMap(model.labels.size());
std::string save_dir = "output";
@@ -74,7 +75,7 @@ int main(int argc, char** argv) {
    // Visualize
cv::Mat vis_img =
- PaddleX::VisualizeDet(im, result, model.labels, colormap, 0.5);
+ PaddleX::Visualize(im, result, model.labels, colormap, 0.5);
std::string save_path =
PaddleX::generate_save_path(FLAGS_save_dir, image_path);
cv::imwrite(save_path, vis_img);
@@ -97,7 +98,7 @@ int main(int argc, char** argv) {
    // Visualize
cv::Mat vis_img =
- PaddleX::VisualizeDet(im, result, model.labels, colormap, 0.5);
+ PaddleX::Visualize(im, result, model.labels, colormap, 0.5);
std::string save_path =
PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_image);
cv::imwrite(save_path, vis_img);
diff --git a/deploy/cpp/src/segmenter.cpp b/deploy/cpp/demo/segmenter.cpp
similarity index 90%
rename from deploy/cpp/src/segmenter.cpp
rename to deploy/cpp/demo/segmenter.cpp
index d4b7aae37675ef96bae27e9fd0eba9a91d88c38b..0492ef803e15268022d869eb8b8e84969b1c8fad 100644
--- a/deploy/cpp/src/segmenter.cpp
+++ b/deploy/cpp/demo/segmenter.cpp
@@ -26,6 +26,7 @@ DEFINE_string(model_dir, "", "Path of inference model");
DEFINE_bool(use_gpu, false, "Inferring with GPU or CPU");
DEFINE_bool(use_trt, false, "Inferring with TensorRT");
DEFINE_int32(gpu_id, 0, "GPU card id");
+DEFINE_string(key, "", "key of encryption");
DEFINE_string(image, "", "Path of test image file");
DEFINE_string(image_list, "", "Path of test image list file");
DEFINE_string(save_dir, "output", "Path to save visualized image");
@@ -45,7 +46,7 @@ int main(int argc, char** argv) {
  // Load the model
PaddleX::Model model;
- model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id);
+ model.Init(FLAGS_model_dir, FLAGS_use_gpu, FLAGS_use_trt, FLAGS_gpu_id, FLAGS_key);
auto colormap = PaddleX::GenerateColorMap(model.labels.size());
  // Run prediction
@@ -62,7 +63,7 @@ int main(int argc, char** argv) {
model.predict(im, &result);
      // Visualize
cv::Mat vis_img =
- PaddleX::VisualizeSeg(im, result, model.labels, colormap);
+ PaddleX::Visualize(im, result, model.labels, colormap);
std::string save_path =
PaddleX::generate_save_path(FLAGS_save_dir, image_path);
cv::imwrite(save_path, vis_img);
@@ -74,7 +75,7 @@ int main(int argc, char** argv) {
cv::Mat im = cv::imread(FLAGS_image, 1);
model.predict(im, &result);
    // Visualize
- cv::Mat vis_img = PaddleX::VisualizeSeg(im, result, model.labels, colormap);
+ cv::Mat vis_img = PaddleX::Visualize(im, result, model.labels, colormap);
std::string save_path =
PaddleX::generate_save_path(FLAGS_save_dir, FLAGS_image);
cv::imwrite(save_path, vis_img);
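Taken together, the three demo diffs add the same `--key` gflag to classifier, detector, and segmenter. The sketch below builds a hypothetical detector invocation; the binary path follows the new `CMAKE_RUNTIME_OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}/demo` setting on Linux, and the model directory, image, and key are all placeholders, not real values:

```shell
# All values below are placeholders, not real paths or keys.
MODEL_DIR=./inference_model
IMAGE=./test.jpg
KEY=your_generated_key
CMD="./build/demo/detector --model_dir=${MODEL_DIR} --image=${IMAGE} --use_gpu=false --key=${KEY}"
echo "${CMD}"
```

Leaving `--key` empty keeps the old plain-model behavior, since the flag defaults to `""`.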
diff --git a/deploy/cpp/include/paddlex/paddlex.h b/deploy/cpp/include/paddlex/paddlex.h
index 3e951f46c4ac295e35c3c2a211d09b2eead3fc46..d000728c763666e46271d4602b0e42c41dc130f1 100644
--- a/deploy/cpp/include/paddlex/paddlex.h
+++ b/deploy/cpp/include/paddlex/paddlex.h
@@ -28,9 +28,14 @@
#include "paddle_inference_api.h" // NOLINT
-#include "include/paddlex/config_parser.h"
-#include "include/paddlex/results.h"
-#include "include/paddlex/transforms.h"
+#include "config_parser.h"
+#include "results.h"
+#include "transforms.h"
+
+#ifdef WITH_ENCRYPTION
+#include "paddle_model_decrypt.h"
+#include "model_code.h"
+#endif
namespace PaddleX {
@@ -39,14 +44,16 @@ class Model {
void Init(const std::string& model_dir,
bool use_gpu = false,
bool use_trt = false,
- int gpu_id = 0) {
- create_predictor(model_dir, use_gpu, use_trt, gpu_id);
+ int gpu_id = 0,
+ std::string key = "") {
+ create_predictor(model_dir, use_gpu, use_trt, gpu_id, key);
}
void create_predictor(const std::string& model_dir,
bool use_gpu = false,
bool use_trt = false,
- int gpu_id = 0);
+ int gpu_id = 0,
+ std::string key = "");
bool load_config(const std::string& model_dir);
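The signature change above stays backward-compatible because the new `key` parameter is trailing and defaulted. A minimal sketch of why existing call sites keep compiling, using a simplified stand-in type rather than the real PaddleX class:

```cpp
#include <cassert>
#include <string>

// Simplified stand-in for PaddleX::Model: adding a trailing parameter with a
// default value keeps every pre-existing call site source-compatible.
struct ModelSketch {
  std::string last_key;
  void Init(const std::string& model_dir,
            bool use_gpu = false,
            bool use_trt = false,
            int gpu_id = 0,
            std::string key = "") {
    // The real Model::Init forwards all arguments to create_predictor().
    (void)model_dir; (void)use_gpu; (void)use_trt; (void)gpu_id;
    last_key = key;
  }
};
```

Old four-argument calls such as `Init(dir, use_gpu, use_trt, gpu_id)` still compile unchanged and behave as before, because the omitted key defaults to the empty string.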
diff --git a/deploy/cpp/include/paddlex/visualize.h b/deploy/cpp/include/paddlex/visualize.h
index ba928b705ddd004277971cea4350a0610cf1af2c..7a71f474d028795aa1dec3cd993f5480c0906ced 100644
--- a/deploy/cpp/include/paddlex/visualize.h
+++ b/deploy/cpp/include/paddlex/visualize.h
@@ -46,13 +46,13 @@ namespace PaddleX {
// Generate visualization colormap for each class
std::vector<int> GenerateColorMap(int num_class);
-cv::Mat VisualizeDet(const cv::Mat& img,
+cv::Mat Visualize(const cv::Mat& img,
const DetResult& results,
const std::map<int, std::string>& labels,
const std::vector<int>& colormap,
float threshold = 0.5);
-cv::Mat VisualizeSeg(const cv::Mat& img,
+cv::Mat Visualize(const cv::Mat& img,
const SegResult& result,
const std::map<int, std::string>& labels,
const std::vector<int>& colormap);
diff --git a/deploy/cpp/scripts/build.sh b/deploy/cpp/scripts/build.sh
index b6b0b5ca14106951571112df92abef38e9d497f4..52019280e5ec1b2bdb1cc84094a2e63a4f98393d 100644
--- a/deploy/cpp/scripts/build.sh
+++ b/deploy/cpp/scripts/build.sh
@@ -16,6 +16,11 @@ CUDA_LIB=/path/to/cuda/lib/
# Path to the CUDNN lib
CUDNN_LIB=/path/to/cudnn/lib/
+# Whether to load an encrypted model
+WITH_ENCRYPTION=OFF
+# Path to the encryption tool
+ENCRYPTION_DIR=/path/to/encryption_tool/
+
# OpenCV path; no change needed if using the bundled prebuilt version
OPENCV_DIR=$(pwd)/deps/opencv3gcc4.8/
sh $(pwd)/scripts/bootstrap.sh
@@ -28,10 +33,12 @@ cmake .. \
-DWITH_GPU=${WITH_GPU} \
-DWITH_MKL=${WITH_MKL} \
-DWITH_TENSORRT=${WITH_TENSORRT} \
+ -DWITH_ENCRYPTION=${WITH_ENCRYPTION} \
-DTENSORRT_DIR=${TENSORRT_DIR} \
-DPADDLE_DIR=${PADDLE_DIR} \
-DWITH_STATIC_LIB=${WITH_STATIC_LIB} \
-DCUDA_LIB=${CUDA_LIB} \
-DCUDNN_LIB=${CUDNN_LIB} \
+ -DENCRYPTION_DIR=${ENCRYPTION_DIR} \
-DOPENCV_DIR=${OPENCV_DIR}
make
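The two new `build.sh` variables feed straight into the added `-DWITH_ENCRYPTION` and `-DENCRYPTION_DIR` cmake flags. A minimal sketch of just the added flags, with placeholder paths:

```shell
# Placeholders: point ENCRYPTION_DIR at your local copy of the encryption SDK.
WITH_ENCRYPTION=ON
ENCRYPTION_DIR=/path/to/encryption_tool/
EXTRA_CMAKE_FLAGS="-DWITH_ENCRYPTION=${WITH_ENCRYPTION} -DENCRYPTION_DIR=${ENCRYPTION_DIR}"
echo "${EXTRA_CMAKE_FLAGS}"
```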
diff --git a/deploy/cpp/src/paddlex.cpp b/deploy/cpp/src/paddlex.cpp
index ddaecaa3f19788ceb3c57bf133f7f6a32f894f50..d10c3fd429202c57f074ffacb248fa7eb243cd3f 100644
--- a/deploy/cpp/src/paddlex.cpp
+++ b/deploy/cpp/src/paddlex.cpp
@@ -19,7 +19,8 @@ namespace PaddleX {
void Model::create_predictor(const std::string& model_dir,
bool use_gpu,
bool use_trt,
- int gpu_id) {
+ int gpu_id,
+ std::string key) {
  // Read the config file
if (!load_config(model_dir)) {
std::cerr << "Parse file 'model.yml' failed!" << std::endl;
@@ -28,7 +29,14 @@ void Model::create_predictor(const std::string& model_dir,
paddle::AnalysisConfig config;
std::string model_file = model_dir + OS_PATH_SEP + "__model__";
std::string params_file = model_dir + OS_PATH_SEP + "__params__";
- config.SetModel(model_file, params_file);
+#ifdef WITH_ENCRYPTION
+ if (key != ""){
+ paddle_security_load_model(&config, key.c_str(), model_file.c_str(), params_file.c_str());
+ }
+#endif
+ if (key == ""){
+ config.SetModel(model_file, params_file);
+ }
if (use_gpu) {
config.EnableUseGpu(100, gpu_id);
} else {
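The control flow added to `create_predictor` can be summarized as: decrypt-and-load when a key is supplied in an encryption-enabled build, otherwise fall back to `config.SetModel`. A simplified, illustrative sketch (the enum and function names below are not from the real code):

```cpp
#include <cassert>
#include <string>

// Illustrative model-loading dispatch: an empty key selects the plain
// config.SetModel() path; a non-empty key in a WITH_ENCRYPTION build selects
// the paddle_security_load_model() path.
enum class LoadPath { Plain, Decrypt };

inline LoadPath select_load_path(const std::string& key, bool with_encryption) {
  if (with_encryption && !key.empty()) {
    return LoadPath::Decrypt;
  }
  return LoadPath::Plain;
}
```

Note that in the patch as written, a non-empty key in a build compiled without `WITH_ENCRYPTION` calls neither loader; callers should pass an empty key in that configuration.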
diff --git a/deploy/cpp/src/visualize.cpp b/deploy/cpp/src/visualize.cpp
index 28c7c3bcce51a3f26dfd6ab3ea377b644e259622..6ec09fd1c2b7a342ea3d31e784a80033d80f1014 100644
--- a/deploy/cpp/src/visualize.cpp
+++ b/deploy/cpp/src/visualize.cpp
@@ -31,7 +31,7 @@ std::vector<int> GenerateColorMap(int num_class) {
return colormap;
}
-cv::Mat VisualizeDet(const cv::Mat& img,
+cv::Mat Visualize(const cv::Mat& img,
const DetResult& result,
const std::map<int, std::string>& labels,
const std::vector<int>& colormap,
@@ -105,7 +105,7 @@ cv::Mat VisualizeDet(const cv::Mat& img,
return vis_img;
}
-cv::Mat VisualizeSeg(const cv::Mat& img,
+cv::Mat Visualize(const cv::Mat& img,
const SegResult& result,
const std::map<int, std::string>& labels,
const std::vector<int>& colormap) {
diff --git a/deploy/openvino/CMakeLists.txt b/deploy/openvino/CMakeLists.txt
new file mode 100644
index 0000000000000000000000000000000000000000..8e32a9592fce38918e46ad9ab9e4b2d1fc97cd6e
--- /dev/null
+++ b/deploy/openvino/CMakeLists.txt
@@ -0,0 +1,111 @@
+cmake_minimum_required(VERSION 3.0)
+project(PaddleX CXX C)
+
+
+option(WITH_STATIC_LIB "Compile demo with static/shared library, default use static." OFF)
+
+SET(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
+SET(OPENVINO_DIR "" CACHE PATH "Location of libraries")
+SET(OPENCV_DIR "" CACHE PATH "Location of libraries")
+SET(GFLAGS_DIR "" CACHE PATH "Location of libraries")
+SET(NGRAPH_LIB "" CACHE PATH "Location of libraries")
+
+include(cmake/yaml-cpp.cmake)
+
+include_directories("${CMAKE_SOURCE_DIR}/")
+link_directories("${CMAKE_CURRENT_BINARY_DIR}")
+include_directories("${CMAKE_CURRENT_BINARY_DIR}/ext/yaml-cpp/src/ext-yaml-cpp/include")
+link_directories("${CMAKE_CURRENT_BINARY_DIR}/ext/yaml-cpp/lib")
+
+macro(safe_set_static_flag)
+ foreach(flag_var
+ CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
+ CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO)
+ if(${flag_var} MATCHES "/MD")
+ string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
+ endif(${flag_var} MATCHES "/MD")
+ endforeach(flag_var)
+endmacro()
+
+if (NOT DEFINED OPENVINO_DIR OR ${OPENVINO_DIR} STREQUAL "")
+    message(FATAL_ERROR "please set OPENVINO_DIR with -DOPENVINO_DIR=/path/inference_engine")
+endif()
+
+if (NOT DEFINED OPENCV_DIR OR ${OPENCV_DIR} STREQUAL "")
+ message(FATAL_ERROR "please set OPENCV_DIR with -DOPENCV_DIR=/path/opencv")
+endif()
+
+if (NOT DEFINED GFLAGS_DIR OR ${GFLAGS_DIR} STREQUAL "")
+ message(FATAL_ERROR "please set GFLAGS_DIR with -DGFLAGS_DIR=/path/gflags")
+endif()
+
+if (NOT DEFINED NGRAPH_LIB OR ${NGRAPH_LIB} STREQUAL "")
+    message(FATAL_ERROR "please set NGRAPH_LIB with -DNGRAPH_LIB=/path/ngraph")
+endif()
+
+include_directories("${OPENVINO_DIR}")
+link_directories("${OPENVINO_DIR}/lib")
+include_directories("${OPENVINO_DIR}/include")
+link_directories("${OPENVINO_DIR}/external/tbb/lib")
+include_directories("${OPENVINO_DIR}/external/tbb/include/tbb")
+
+link_directories("${GFLAGS_DIR}/lib")
+include_directories("${GFLAGS_DIR}/include")
+
+link_directories("${NGRAPH_LIB}")
+link_directories("${NGRAPH_LIB}/lib")
+
+if (WIN32)
+ find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/build/ NO_DEFAULT_PATH)
+ unset(OpenCV_DIR CACHE)
+else ()
+ find_package(OpenCV REQUIRED PATHS ${OPENCV_DIR}/share/OpenCV NO_DEFAULT_PATH)
+endif ()
+
+include_directories(${OpenCV_INCLUDE_DIRS})
+
+if (WIN32)
+ add_definitions("/DGOOGLE_GLOG_DLL_DECL=")
+ set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /bigobj /MTd")
+ set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /bigobj /MT")
+ set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /bigobj /MTd")
+ set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /bigobj /MT")
+ if (WITH_STATIC_LIB)
+ safe_set_static_flag()
+ add_definitions(-DSTATIC_LIB)
+ endif()
+else()
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O2 -fopenmp -std=c++11")
+ set(CMAKE_STATIC_LIBRARY_PREFIX "")
+endif()
+
+
+if(WITH_STATIC_LIB)
+ set(DEPS ${OPENVINO_DIR}/lib/intel64/libinference_engine${CMAKE_STATIC_LIBRARY_SUFFIX})
+ set(DEPS ${DEPS} ${OPENVINO_DIR}/lib/intel64/libinference_engine_legacy${CMAKE_STATIC_LIBRARY_SUFFIX})
+else()
+ set(DEPS ${OPENVINO_DIR}/lib/intel64/libinference_engine${CMAKE_SHARED_LIBRARY_SUFFIX})
+ set(DEPS ${DEPS} ${OPENVINO_DIR}/lib/intel64/libinference_engine_legacy${CMAKE_SHARED_LIBRARY_SUFFIX})
+endif()
+
+if (NOT WIN32)
+ set(DEPS ${DEPS}
+ glog gflags z yaml-cpp
+ )
+else()
+ set(DEPS ${DEPS}
+ glog gflags_static libprotobuf zlibstatic xxhash libyaml-cppmt)
+ set(DEPS ${DEPS} libcmt shlwapi)
+endif(NOT WIN32)
+
+
+if (NOT WIN32)
+ set(EXTERNAL_LIB "-ldl -lrt -lgomp -lz -lm -lpthread")
+ set(DEPS ${DEPS} ${EXTERNAL_LIB})
+endif()
+
+set(DEPS ${DEPS} ${OpenCV_LIBS})
+add_executable(classifier src/classifier.cpp src/transforms.cpp src/paddlex.cpp)
+ADD_DEPENDENCIES(classifier ext-yaml-cpp)
+target_link_libraries(classifier ${DEPS})
+
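The new OpenVINO CMakeLists fails fast unless all four paths are provided. A hypothetical configure invocation, where every path below is a placeholder to substitute with your local OpenVINO, OpenCV, gflags, and ngraph locations:

```shell
# Placeholders only; substitute your local library paths before running.
CMD="cmake .. -DOPENVINO_DIR=/path/to/inference_engine -DOPENCV_DIR=/path/to/opencv -DGFLAGS_DIR=/path/to/gflags -DNGRAPH_LIB=/path/to/ngraph/lib"
echo "${CMD}"
```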
diff --git a/deploy/openvino/CMakeSettings.json b/deploy/openvino/CMakeSettings.json
new file mode 100644
index 0000000000000000000000000000000000000000..861839dbc67816aeb96ca1ab174d95ca7dd292ef
--- /dev/null
+++ b/deploy/openvino/CMakeSettings.json
@@ -0,0 +1,27 @@
+{
+ "configurations": [
+ {
+ "name": "x64-Release",
+ "generator": "Ninja",
+ "configurationType": "RelWithDebInfo",
+ "inheritEnvironments": [ "msvc_x64_x64" ],
+ "buildRoot": "${projectDir}\\out\\build\\${name}",
+ "installRoot": "${projectDir}\\out\\install\\${name}",
+ "cmakeCommandArgs": "",
+ "buildCommandArgs": "-v",
+ "ctestCommandArgs": "",
+ "variables": [
+ {
+ "name": "OPENCV_DIR",
+ "value": "C:/projects/opencv",
+ "type": "PATH"
+ },
+ {
+          "name": "OPENVINO_DIR",
+          "value": "C:/projects/inference_engine",
+ "type": "PATH"
+ }
+ ]
+ }
+ ]
+}
diff --git a/deploy/openvino/cmake/yaml-cpp.cmake b/deploy/openvino/cmake/yaml-cpp.cmake
new file mode 100644
index 0000000000000000000000000000000000000000..30d904dc76196cf106abccb47c003eed485691f1
--- /dev/null
+++ b/deploy/openvino/cmake/yaml-cpp.cmake
@@ -0,0 +1,30 @@
+find_package(Git REQUIRED)
+
+include(ExternalProject)
+
+message("${CMAKE_BUILD_TYPE}")
+
+ExternalProject_Add(
+ ext-yaml-cpp
+ URL https://bj.bcebos.com/paddlex/deploy/deps/yaml-cpp.zip
+ URL_MD5 9542d6de397d1fbd649ed468cb5850e6
+ CMAKE_ARGS
+ -DYAML_CPP_BUILD_TESTS=OFF
+ -DYAML_CPP_BUILD_TOOLS=OFF
+ -DYAML_CPP_INSTALL=OFF
+ -DYAML_CPP_BUILD_CONTRIB=OFF
+ -DMSVC_SHARED_RT=OFF
+ -DBUILD_SHARED_LIBS=OFF
+ -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
+ -DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS}
+ -DCMAKE_CXX_FLAGS_DEBUG=${CMAKE_CXX_FLAGS_DEBUG}
+ -DCMAKE_CXX_FLAGS_RELEASE=${CMAKE_CXX_FLAGS_RELEASE}
+ -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}/ext/yaml-cpp/lib
+ -DCMAKE_ARCHIVE_OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}/ext/yaml-cpp/lib
+ PREFIX "${CMAKE_BINARY_DIR}/ext/yaml-cpp"
+ # Disable install step
+ INSTALL_COMMAND ""
+ LOG_DOWNLOAD ON
+ LOG_BUILD 1
+)
+
diff --git a/deploy/openvino/include/paddlex/config_parser.h b/deploy/openvino/include/paddlex/config_parser.h
new file mode 100644
index 0000000000000000000000000000000000000000..5303e4da7ac0eb3de73bc57059617d361065f136
--- /dev/null
+++ b/deploy/openvino/include/paddlex/config_parser.h
@@ -0,0 +1,57 @@
+// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+
+#include <iostream>
+#include <map>