@@ -263,3 +249,4 @@ A1: If the steps above already work, switching models only requires replacing the .nb mod
Q2: How do I test a different image?
A2: Replace the test image under the debug directory with the image you want to test, then push it to the phone again with ADB.
+
diff --git a/deploy/lite_shitu/Makefile b/deploy/lite_shitu/Makefile
index 64ee6aed81e09f9fc21f48aa1ab3ed4e81dd9b1c..df6685b6517d073d4c08bb5ec6dd34397fdbd072 100644
--- a/deploy/lite_shitu/Makefile
+++ b/deploy/lite_shitu/Makefile
@@ -17,7 +17,6 @@ ${info LITE_ROOT: $(abspath ${LITE_ROOT})}
THIRD_PARTY_DIR=third_party
${info THIRD_PARTY_DIR: $(abspath ${THIRD_PARTY_DIR})}
-
OPENCV_VERSION=opencv4.1.0
OPENCV_LIBS = ${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/libs/libopencv_imgcodecs.a \
${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/libs/libopencv_imgproc.a \
@@ -32,6 +31,8 @@ OPENCV_LIBS = ${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/libs/libopencv_im
${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/3rdparty/libs/libtbb.a \
${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/3rdparty/libs/libcpufeatures.a
+FAISS_VERSION=faiss1.5.3
+FAISS_LIBS = ${THIRD_PARTY_DIR}/${FAISS_VERSION}/libs/${ARM_PLAT}/libfaiss.a
LITE_LIBS = -L${LITE_ROOT}/cxx/lib/ -lpaddle_light_api_shared
###############################################################
@@ -45,7 +46,7 @@ LITE_LIBS = -L${LITE_ROOT}/cxx/lib/ -lpaddle_light_api_shared
# 2. Uncomment the line below to use `libpaddle_api_light_bundled.a`
# LITE_LIBS = ${LITE_ROOT}/cxx/lib/libpaddle_api_light_bundled.a
-CXX_LIBS = $(LITE_LIBS) ${OPENCV_LIBS} $(SYSTEM_LIBS)
+CXX_LIBS = $(LITE_LIBS) ${OPENCV_LIBS} ${FAISS_LIBS} $(SYSTEM_LIBS)
LOCAL_DIRSRCS=$(wildcard src/*.cc)
LOCAL_SRCS=$(notdir $(LOCAL_DIRSRCS))
@@ -53,9 +54,17 @@ LOCAL_OBJS=$(patsubst %.cpp, %.o, $(patsubst %.cc, %.o, $(LOCAL_SRCS)))
JSON_OBJS = json_reader.o json_value.o json_writer.o
-pp_shitu: $(LOCAL_OBJS) $(JSON_OBJS) fetch_opencv
+pp_shitu: $(LOCAL_OBJS) $(JSON_OBJS) fetch_opencv fetch_faiss
$(CC) $(SYSROOT_LINK) $(CXXFLAGS_LINK) $(LOCAL_OBJS) $(JSON_OBJS) -o pp_shitu $(CXX_LIBS) $(LDFLAGS)
+fetch_faiss:
+ @ test -d ${THIRD_PARTY_DIR} || mkdir ${THIRD_PARTY_DIR}
+ @ test -e ${THIRD_PARTY_DIR}/${FAISS_VERSION}.tar.gz || \
+ (echo "fetch faiss libs" && \
+ wget -P ${THIRD_PARTY_DIR} https://paddle-inference-dist.bj.bcebos.com/${FAISS_VERSION}.tar.gz)
+ @ test -d ${THIRD_PARTY_DIR}/${FAISS_VERSION} || \
+ tar -xf ${THIRD_PARTY_DIR}/${FAISS_VERSION}.tar.gz -C ${THIRD_PARTY_DIR}
+
fetch_opencv:
@ test -d ${THIRD_PARTY_DIR} || mkdir ${THIRD_PARTY_DIR}
@ test -e ${THIRD_PARTY_DIR}/${OPENCV_VERSION}.tar.gz || \
@@ -74,11 +83,12 @@ fetch_json_code:
LOCAL_INCLUDES = -I./ -Iinclude
OPENCV_INCLUDE = -I${THIRD_PARTY_DIR}/${OPENCV_VERSION}/${ARM_PLAT}/include
+FAISS_INCLUDE = -I${THIRD_PARTY_DIR}/${FAISS_VERSION}/include
JSON_INCLUDE = -I${THIRD_PARTY_DIR}/jsoncpp_code/include
-CXX_INCLUDES = ${LOCAL_INCLUDES} ${INCLUDES} ${OPENCV_INCLUDE} ${JSON_INCLUDE} -I$(LITE_ROOT)/cxx/include
+CXX_INCLUDES = ${LOCAL_INCLUDES} ${INCLUDES} ${OPENCV_INCLUDE} ${FAISS_INCLUDE} ${JSON_INCLUDE} -I$(LITE_ROOT)/cxx/include
-$(LOCAL_OBJS): %.o: src/%.cc fetch_opencv fetch_json_code
+$(LOCAL_OBJS): %.o: src/%.cc fetch_opencv fetch_json_code fetch_faiss
$(CC) $(SYSROOT_COMPLILE) $(CXX_DEFINES) $(CXX_INCLUDES) $(CXX_FLAGS) -c $< -o $@
$(JSON_OBJS): %.o: ${THIRD_PARTY_DIR}/jsoncpp_code/%.cpp fetch_json_code
diff --git a/deploy/lite_shitu/README.md b/deploy/lite_shitu/README.md
index 8f5462f69f87a0ac810fe7c7c7892b17520693c5..52871c3c16dc9990f9cf23de24b24cb54067cac6 100644
--- a/deploy/lite_shitu/README.md
+++ b/deploy/lite_shitu/README.md
@@ -2,7 +2,7 @@
This tutorial describes in detail how to deploy the PaddleClas PP-ShiTu model on mobile devices with [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite).
-Paddle Lite is PaddlePaddle's lightweight inference engine. It provides efficient inference on mobile and IOT devices, integrates a wide range of cross-platform hardware, and offers a lightweight deployment solution for on-device applications.
+Paddle Lite is PaddlePaddle's lightweight inference engine. It provides efficient inference on mobile and IoT devices, integrates a wide range of cross-platform hardware, and offers a lightweight deployment solution for on-device applications.
## 1. Environment Preparation
@@ -81,35 +81,134 @@ inference_lite_lib.android.armv8/
| `-- java  Java prediction library demo
```
-## 2 Getting Started
+## 2 Model Preparation
### 2.1 Model Preparation
+PaddleClas provides converted and optimized inference models that can be downloaded directly as described in Section 2.1.1 below. To use a different model, convert and optimize it yourself following Section 2.1.2.
-#### 2.1.1 Model Preparation
+#### 2.1.1 Using the inference models provided by PaddleClas
```shell
# Enter the lite_shitu directory
cd $PaddleClas/deploy/lite_shitu
-wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.0.tar
-tar -xf ppshitu_lite_models_v1.0.tar
-rm -f ppshitu_lite_models_v1.0.tar
+wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/lite/ppshitu_lite_models_v1.1.tar
+tar -xf ppshitu_lite_models_v1.1.tar
+rm -f ppshitu_lite_models_v1.1.tar
```
-#### 2.1.2 Convert the yaml file into a json file
+#### 2.1.2 Using other models
+
+Paddle-Lite provides a variety of strategies to automatically optimize the original model, including quantization, subgraph fusion, hybrid scheduling, and kernel selection. Its `opt` tool applies these optimizations to an inference model automatically; two ways of running the optimization are currently supported, and the optimized model is lighter and runs faster.
+
+**Note**: If you already have a model file ending in `.nb`, you can skip this step.
+
+##### 2.1.2.1 Install the paddle_lite_opt tool
+
+There are two ways to install the `paddle_lite_opt` tool:
+
+1. [**Recommended**] Install paddlelite via pip and use it for the conversion
+ ```shell
+ pip install paddlelite==2.10rc
+ ```
+
+2. Build Paddle-Lite from source to generate the `paddle_lite_opt` tool
+
+   Model optimization requires Paddle-Lite's `opt` executable, which can be obtained by building the Paddle-Lite source code as follows:
+ ```shell
+   # If Paddle-Lite was already cloned when preparing the environment, there is no need to clone it again
+ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
+ cd Paddle-Lite
+ git checkout develop
+   # Start the build
+ ./lite/tools/build.sh build_optimize_tool
+ ```
+
+   After the build finishes, the `opt` binary is located under `build.opt/lite/api/`. Its options and usage can be checked as follows:
+ ```shell
+ cd build.opt/lite/api/
+ ./opt
+ ```
+
+   The usage and parameters of `opt` are exactly the same as those of the `paddle_lite_opt` tool.
+
+The `paddle_lite_opt` tool can then be used to convert the inference model. Some of its parameters are listed below:
+
+|Option|Description|
+|-|-|
+|--model_file|Path to the network structure file of the PaddlePaddle model (combined format) to optimize|
+|--param_file|Path to the weights file of the PaddlePaddle model (combined format) to optimize|
+|--optimize_out_type|Output model type; protobuf and naive_buffer are currently supported, where naive_buffer is a lighter-weight serialization/deserialization format. Default: naive_buffer|
+|--optimize_out|Output path of the optimized model|
+|--valid_targets|Backends the model can run on; default arm. x86, arm, opencl, npu and xpu are currently supported, and several backends can be specified at once (separated by spaces), in which case the Model Optimize Tool automatically picks the best one. To support the Huawei NPU (the Da Vinci architecture NPU in Kirin 810/990 SoCs), set it to npu, arm|
+
+For more details on the `paddle_lite_opt` tool, see the [opt model conversion documentation](https://paddle-lite.readthedocs.io/zh/latest/user_guides/opt/opt_bin.html).
+
+`--model_file` is the path to the model file of the inference model and `--param_file` is the path to its params file; `--optimize_out` specifies the name of the output file (the `.nb` suffix must not be added). Running `paddle_lite_opt` directly on the command line also prints all parameters and their descriptions.
+
+
+##### 2.1.2.2 Conversion examples
+
+The following uses `paddle_lite_opt` to turn the pretrained mainbody detection and recognition models into inference models and finally into optimized Paddle-Lite models.
+
+1. Convert the mainbody detection model
+
+```shell
+# The current directory is $PaddleClas/deploy/lite_shitu
+# Replace code_path with your working directory; set it to whatever directory you need
+export code_path=~
+cd $code_path
+git clone https://github.com/PaddlePaddle/PaddleDetection.git
+# Enter the PaddleDetection root directory
+cd PaddleDetection
+# Export the pretrained model as an inference model
+python tools/export_model.py -c configs/picodet/application/mainbody_detection/picodet_lcnet_x2_5_640_mainbody.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_lcnet_x2_5_640_mainbody.pdparams --output_dir=inference
+# Convert the inference model into an optimized Paddle-Lite model
+paddle_lite_opt --model_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdmodel --param_file=inference/picodet_lcnet_x2_5_640_mainbody/model.pdiparams --optimize_out=inference/picodet_lcnet_x2_5_640_mainbody/mainbody_det
+# Copy the converted model into the lite_shitu directory
+cd $PaddleClas/deploy/lite_shitu
+mkdir models
+cp $code_path/PaddleDetection/inference/picodet_lcnet_x2_5_640_mainbody/mainbody_det.nb $PaddleClas/deploy/lite_shitu/models
+```
+
+2. Convert the recognition model
+
+```shell
+# Convert to a Paddle-Lite model
+paddle_lite_opt --model_file=inference/inference.pdmodel --param_file=inference/inference.pdiparams --optimize_out=inference/rec
+# Copy the model file into lite_shitu
+cp inference/rec.nb deploy/lite_shitu/models/
+cd deploy/lite_shitu
+```
+
+**Note**: `--optimize_out` is the save path of the optimized model and must not include the `.nb` suffix; `--model_file` is the path of the model structure file and `--param_file` is the path of the model weights file. Mind the file names.
+
+### 2.2 Convert the yaml file into a json file
```shell
# To test a single image
-python generate_json_config.py --det_model_path ppshitu_lite_models_v1.0/mainbody_PPLCNet_x2_5_640_quant_v1.0_lite.nb --rec_model_path ppshitu_lite_models_v1.0/general_PPLCNet_x2_5_quant_v1.0_lite.nb --rec_label_path ppshitu_lite_models_v1.0/label.txt --img_path images/demo.jpg
+python generate_json_config.py --det_model_path ppshitu_lite_models_v1.1/mainbody_PPLCNet_x2_5_640_quant_v1.1_lite.nb --rec_model_path ppshitu_lite_models_v1.1/general_PPLCNet_x2_5_lite_v1.1_infer.nb --img_path images/demo.jpg
# or
# To test multiple images
-python generate_json_config.py --det_model_path ppshitu_lite_models_v1.0/mainbody_PPLCNet_x2_5_640_quant_v1.0_lite.nb --rec_model_path ppshitu_lite_models_v1.0/general_PPLCNet_x2_5_quant_v1.0_lite.nb --rec_label_path ppshitu_lite_models_v1.0/label.txt --img_dir images
-
+python generate_json_config.py --det_model_path ppshitu_lite_models_v1.1/mainbody_PPLCNet_x2_5_640_quant_v1.1_lite.nb --rec_model_path ppshitu_lite_models_v1.1/general_PPLCNet_x2_5_lite_v1.1_infer.nb --img_dir images
# After the script finishes, a shitu_config.json configuration file is generated under lite_shitu
+```
+
+### 2.3 Converting the index dictionary
+The Python retrieval index dictionary is serialized with `pickle`, which makes it inconvenient to read from C++, so it needs to be converted:
+
+```shell
+# Download the bottled drinks dataset
+wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar && tar -xf drink_dataset_v1.0.tar
+rm -rf drink_dataset_v1.0.tar
+# Convert id_map.pkl to id_map.txt
+python transform_id_map.py -c ../configs/inference_drink.yaml
```
+After a successful conversion, `id_map.txt` is generated under the `IndexProcess.index_dir` directory.
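Conceptually, the conversion only re-serializes the `pickle` dictionary into a plain-text format that C++ can parse line by line. A minimal sketch (the real `transform_id_map.py` reads its paths from the yaml config, and the flat `{id: label}` layout of `id_map.pkl` is an assumption here):

```python
import pickle

# Load the pickled id->label dict and write it back as plain "id label"
# lines that the C++ side can parse with ordinary text IO.
def transform_id_map(pkl_path, txt_path):
    with open(pkl_path, "rb") as f:
        id_map = pickle.load(f)
    with open(txt_path, "w", encoding="utf-8") as f:
        for idx in sorted(id_map):
            f.write("{} {}\n".format(idx, id_map[idx]))
```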
+
-### 2.2 Running on the phone
+### 2.4 Running on the phone
Some preparation work is needed first.
1. Prepare an arm8 (armv8) Android phone. If the prediction library was built for armv7, an armv7 phone is required instead, and `ARM_ABI=arm7` in the Makefile must be changed accordingly.
@@ -160,7 +259,8 @@ make ARM_ABI=arm8
```shell
mkdir deploy
-mv ppshitu_lite_models_v1.0 deploy/
+mv ppshitu_lite_models_v1.1 deploy/
+mv drink_dataset_v1.0 deploy/
mv images deploy/
mv shitu_config.json deploy/
cp pp_shitu deploy/
@@ -173,13 +273,13 @@ cp ../../../cxx/lib/libpaddle_light_api_shared.so deploy/
```shell
deploy/
-|-- ppshitu_lite_models_v1.0/
-| |--mainbody_PPLCNet_x2_5_640_v1.0_lite.nb  optimized mainbody detection model file
-| |--general_PPLCNet_x2_5_quant_v1.0_lite.nb  optimized recognition model file
-| |--label.txt  label file of the recognition model
+|-- ppshitu_lite_models_v1.1/
+| |--mainbody_PPLCNet_x2_5_640_quant_v1.1_lite.nb  optimized mainbody detection model file
+| |--general_PPLCNet_x2_5_lite_v1.1_infer.nb  optimized recognition model file
|-- images/
| |--demo.jpg  image file
-| ...  image files
+|-- drink_dataset_v1.0/  bottled drinks demo data
+| |--index  retrieval index directory
|-- pp_shitu  generated mobile executable
|-- shitu_config.json  runtime parameter configuration file
|-- libpaddle_light_api_shared.so  Paddle-Lite library file
@@ -207,8 +307,10 @@ chmod 777 pp_shitu
If you modify the code, rebuild it and push it to the phone again.
The output looks like the following:
-
-
+```
+images/demo.jpg:
+ result0: bbox[253, 275, 1146, 872], score: 0.974196, label: 伊藤园_果蔬汁
+```
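For scripted checks it can be handy to parse such result lines; a small sketch, assuming only the output format shown above (field names in the returned dict are my own):

```python
import re

# Parse a pp_shitu result line of the form:
#   result0: bbox[253, 275, 1146, 872], score: 0.974196, label: XXX
LINE_RE = re.compile(
    r"result(\d+): bbox\[(\d+), (\d+), (\d+), (\d+)\], score: ([\d.]+), label: (.+)")

def parse_result(line):
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    idx, x1, y1, x2, y2, score, label = m.groups()
    return {
        "index": int(idx),
        "bbox": [int(x1), int(y1), int(x2), int(y2)],
        "score": float(score),
        "label": label,
    }
```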
## FAQ
Q1: What if I want to switch to another model? Do I have to go through the whole process again?
diff --git a/deploy/lite_shitu/generate_json_config.py b/deploy/lite_shitu/generate_json_config.py
index 1525cdab9e223eb480dec2103328a22c91551f65..642dfcd9d6a46e2894ec0f01f0914a5347bc8d72 100644
--- a/deploy/lite_shitu/generate_json_config.py
+++ b/deploy/lite_shitu/generate_json_config.py
@@ -95,7 +95,7 @@ def main():
config_json["Global"]["det_model_path"] = args.det_model_path
config_json["Global"]["rec_model_path"] = args.rec_model_path
config_json["Global"]["rec_label_path"] = args.rec_label_path
- config_json["Global"]["label_list"] = config_yaml["Global"]["labe_list"]
+ config_json["Global"]["label_list"] = config_yaml["Global"]["label_list"]
config_json["Global"]["rec_nms_thresold"] = config_yaml["Global"][
"rec_nms_thresold"]
config_json["Global"]["max_det_results"] = config_yaml["Global"][
@@ -130,6 +130,8 @@ def main():
y["type"] = k
config_json["RecPreProcess"]["transform_ops"].append(y)
+ # set IndexProcess
+ config_json["IndexProcess"] = config_yaml["IndexProcess"]
with open('shitu_config.json', 'w') as fd:
json.dump(config_json, fd, indent=4)
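The effect of the added lines can be illustrated in isolation. The yaml values below are placeholders standing in for what `generate_json_config.py` actually reads; the point is that the `IndexProcess` section is copied over verbatim so the C++ vector-search code can locate the index:

```python
import json

# Placeholder values for what the script reads from the yaml config.
config_yaml = {
    "Global": {"label_list": ["foreground"], "max_det_results": 5},
    "IndexProcess": {"index_dir": "drink_dataset_v1.0/index", "return_k": 5},
}

config_json = {"Global": {}}
config_json["Global"]["label_list"] = config_yaml["Global"]["label_list"]
config_json["Global"]["max_det_results"] = config_yaml["Global"]["max_det_results"]
# set IndexProcess: copied verbatim into the json config
config_json["IndexProcess"] = config_yaml["IndexProcess"]

with open("shitu_config.json", "w") as fd:
    json.dump(config_json, fd, indent=4)
```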
diff --git a/deploy/lite_shitu/include/recognition.h b/deploy/lite_shitu/include/feature_extractor.h
similarity index 72%
rename from deploy/lite_shitu/include/recognition.h
rename to deploy/lite_shitu/include/feature_extractor.h
index 0c45e946e8b88d161fcbe470ccefc6e370cf8b1e..1961459ecfab149695890df60cef550ed5177b52 100644
--- a/deploy/lite_shitu/include/recognition.h
+++ b/deploy/lite_shitu/include/feature_extractor.h
@@ -36,10 +36,9 @@ struct RESULT {
float score;
};
-class Recognition {
-
+class FeatureExtract {
public:
- explicit Recognition(const Json::Value &config_file) {
+ explicit FeatureExtract(const Json::Value &config_file) {
MobileConfig config;
    if (config_file["Global"]["rec_model_path"].as<std::string>().empty()) {
std::cout << "Please set [rec_model_path] in config file" << std::endl;
@@ -53,29 +52,8 @@ public:
std::cout << "Please set [rec_label_path] in config file" << std::endl;
exit(-1);
}
-    LoadLabel(config_file["Global"]["rec_label_path"].as<std::string>());
SetPreProcessParam(config_file["RecPreProcess"]["transform_ops"]);
- if (!config_file["Global"].isMember("return_k")){
-      this->topk = config_file["Global"]["return_k"].as<int>();
- }
- printf("rec model create!\n");
- }
-
- void LoadLabel(std::string path) {
- std::ifstream file;
-    std::vector<std::string> label_list;
- file.open(path);
- while (file) {
- std::string line;
- std::getline(file, line);
- std::string::size_type pos = line.find(" ");
- if (pos != std::string::npos) {
- line = line.substr(pos);
- }
- this->label_list.push_back(line);
- }
- file.clear();
- file.close();
+ printf("feature extract model create!\n");
}
void SetPreProcessParam(const Json::Value &config_file) {
@@ -97,19 +75,17 @@ public:
}
}
-  std::vector<RESULT> RunRecModel(const cv::Mat &img, double &cost_time);
-  std::vector<RESULT> PostProcess(const float *output_data, int output_size,
-                                  cv::Mat &output_image);
+  void RunRecModel(const cv::Mat &img, double &cost_time, std::vector<float> &feature);
+  //void PostProcess(std::vector<float> &feature);
cv::Mat ResizeImage(const cv::Mat &img);
void NeonMeanScale(const float *din, float *dout, int size);
private:
  std::shared_ptr<PaddlePredictor> predictor;
-  std::vector<std::string> label_list;
+  //std::vector<std::string> label_list;
  std::vector<float> mean = {0.485f, 0.456f, 0.406f};
  std::vector<float> std = {1 / 0.229f, 1 / 0.224f, 1 / 0.225f};
double scale = 0.00392157;
float size = 224;
- int topk = 5;
};
} // namespace PPShiTu
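The `mean`, `std`, and `scale` members above encode the standard ImageNet normalization applied before feature extraction. A numeric sketch of the implied per-pixel transform (where exactly the real code applies the 1/255 scale may differ slightly):

```python
# Sketch of the normalization implied by the constants in feature_extractor.h:
# out = (pixel * scale - mean[c]) * (1 / std[c]), per channel c.
MEAN = [0.485, 0.456, 0.406]
STD_INV = [1 / 0.229, 1 / 0.224, 1 / 0.225]
SCALE = 0.00392157  # ~1/255, as in the header

def normalize_pixel(value, channel):
    """Normalize one pixel value for the given channel (0=R, 1=G, 2=B)."""
    return (value * SCALE - MEAN[channel]) * STD_INV[channel]
```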
diff --git a/deploy/lite_shitu/include/utils.h b/deploy/lite_shitu/include/utils.h
index 18a04cf344327c2a9c363ac73c8aff5d26f15eef..a3b57c882561577defff97e384fb775b78204f36 100644
--- a/deploy/lite_shitu/include/utils.h
+++ b/deploy/lite_shitu/include/utils.h
@@ -16,7 +16,7 @@
#include
#include
-#include
+#include
#include
#include
#include
diff --git a/deploy/lite_shitu/include/vector_search.h b/deploy/lite_shitu/include/vector_search.h
new file mode 100644
index 0000000000000000000000000000000000000000..89ef7733ab86c534a5c507cb4f87c9d4597dba15
--- /dev/null
+++ b/deploy/lite_shitu/include/vector_search.h
@@ -0,0 +1,73 @@
+// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+#ifdef WIN32
+#define OS_PATH_SEP "\\"
+#else
+#define OS_PATH_SEP "/"
+#endif
+
+#include "json/json.h"
+#include
+#include
+#include
+#include
-
-> If you think this document is helpful to you, welcome to give a star to our project:[https://github.com/PaddlePaddle/PaddleClas](https://github.com/PaddlePaddle/PaddleClas)
-
-
-## Pretrained model list and download address
-- ResNet and ResNet_vd series
- - ResNet series