Unverified commit 43bf70df, authored by cnn, committed by GitHub

[Deploy] update deploy serving (#2203) (#2220)

* fix crash bug when using cpp infer, test=dygraph

* update PaddleServing deploy, test=dygraph
Parent a4b6d5cb
@@ -16,10 +16,15 @@
Use the `tools/export_model.py` script to export the model along with the configuration file used at deployment time, named `infer_cfg.yml`. The model export command is as follows:
```bash
# Export the YOLOv3 model
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --output_dir=./inference_model \
-o weights=weights/yolov3_darknet53_270e_coco.pdparams
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=weights/yolov3_darknet53_270e_coco.pdparams
```
The inference model will be exported to the `inference_model/yolov3_darknet53_270e_coco` directory, containing `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.
The inference model will be exported to the `output_inference/yolov3_darknet53_270e_coco` directory, containing `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`.
To export a model in `PaddleServing` format, set `export_serving_model=True`:
```bash
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=weights/yolov3_darknet53_270e_coco.pdparams --export_serving_model=True
```
The inference model will be exported to the `output_inference/yolov3_darknet53_270e_coco` directory, containing `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, `model.pdmodel`, and the `serving_client/` and `serving_server/` folders.
For details on model export, please refer to the [PaddleDetection model export tutorial](EXPORT_MODEL.md).
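After export, it is worth sanity-checking that the inference model actually loads before deploying it. Below is a minimal sketch (assuming Paddle >= 2.0 and the default `output_inference` path above; this helper script is not part of the repo):
```python
# Minimal sketch: load the exported inference model with paddle.inference
# and list its input names. Assumes Paddle >= 2.0 is installed.
import paddle.inference as paddle_infer

model_dir = "output_inference/yolov3_darknet53_270e_coco"
config = paddle_infer.Config(model_dir + "/model.pdmodel",
                             model_dir + "/model.pdiparams")
predictor = paddle_infer.create_predictor(config)
print(predictor.get_input_names())  # expected: im_shape, image, scale_factor
```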
@@ -34,10 +39,12 @@ python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml --
(1) The Paddle website provides inference libraries prebuilt for different platforms and environments, which you can use directly; choose one at [Paddle inference libraries](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/inference/build_and_install_lib_cn.html).
(2) If the Paddle website does not provide a prebuilt inference library for your target platform, you can build one yourself; for the build process see [Compile Paddle from source](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/compile/linux-compile.html).
**Note:** the Paddle inference library version must be >= 2.0.
- For Python deployment, install the Paddle Python package on the target platform. If the Paddle website does not provide a package for that platform, you can build it yourself; for the build process see [Compile Paddle from source](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/compile/linux-compile.html).
- PaddleServing deployment
PaddleServing 0.4.0 was developed on Paddle 1.8.4, and PaddleServing 0.4.1 on Paddle 2.0.
PaddleServing 0.4.0 was developed on Paddle 1.8.4, and PaddleServing 0.5.0 on Paddle 2.0.
- Paddle-Lite deployment
For the list of OPs supported by Paddle-Lite, see [OPs supported by Paddle-Lite](https://paddle-lite.readthedocs.io/zh/latest/source_compile/library.html); choose the Paddle-Lite version according to the OPs used by the model being deployed.
......
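Since the PaddleServing release must match the Paddle it was built against, a quick version check like the following sketch can save debugging time (note that source-compiled develop builds of Paddle may report version `0.0.0`):
```python
# Sanity check: PaddleServing 0.5.0 is built on Paddle 2.0, so require Paddle >= 2.0.
import paddle

major = int(paddle.__version__.split(".")[0])
assert major >= 2, "PaddleServing 0.5.0 needs Paddle 2.0, found " + paddle.__version__
print("Paddle", paddle.__version__, "OK")
```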
@@ -22,6 +22,7 @@
#include <utility>
#include <memory>
#include <unordered_map>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
@@ -138,9 +139,10 @@ class Preprocessor {
return std::make_shared<Permute>();
} else if (name == "NormalizeImageOp") {
return std::make_shared<Normalize>();
} else if (name == "PadBatchOp") {
} else if (name == "PadBatchOp" || name == "PadStride") {
return std::make_shared<PadStride>();
}
std::cerr << "can not find function of OP: " << name << " and return: nullptr" << std::endl;
return nullptr;
}
......
# Server-side Inference Deployment
Models trained with `PaddleDetection` can be deployed on the server side with [Serving](https://github.com/PaddlePaddle/Serving).
This tutorial deploys a model trained with the `configs/yolov3_mobilenet_v1_roadsign.yml` configuration on the road-sign dataset [roadsign_voc](https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar).
The pretrained weights file is [yolov3_mobilenet_v1_roadsign.pdparams](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams).
This tutorial deploys a model trained with the `configs/yolov3/yolov3_darknet53_270e_coco.yml` configuration on the COCO dataset.
The pretrained weights file is [yolov3_darknet53_270e_coco.pdparams](https://paddlemodels.bj.bcebos.com/object_detection/dygraph/yolov3_darknet53_270e_coco.pdparams).
## 1. Verify the Model First
```
python tools/infer.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams --infer_img=demo/road554.png
python tools/infer.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o use_gpu=True weights=https://paddlemodels.bj.bcebos.com/object_detection/dygraph/yolov3_darknet53_270e_coco.pdparams --infer_img=demo/000000014439.jpg
```
## 2. Install Paddle Serving
```
# Install paddle-serving-client
pip install paddle-serving-client -i https://mirror.baidu.com/pypi/simple
# Install paddle-serving-server
pip install paddle-serving-server -i https://mirror.baidu.com/pypi/simple
# Install paddle-serving-server-gpu
pip install paddle-serving-server-gpu -i https://mirror.baidu.com/pypi/simple
```
For installation, please follow the installation tutorial in [PaddleServing](https://github.com/PaddlePaddle/Serving/tree/v0.5.0).
## 3. Export the Model
During training, PaddleDetection keeps both the network's forward-pass parameters and the optimizer-related parameters, while deployment only needs the forward parameters. For details see: [Export model](https://github.com/PaddlePaddle/PaddleDetection/blob/master/docs/advanced_tutorials/deploy/EXPORT_MODEL.md)
```
python tools/export_serving_model.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams --output_dir=./inference_model
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=weights/yolov3_darknet53_270e_coco.pdparams --export_serving_model=True
```
The above command generates a `yolov3_mobilenet_v1_roadsign` folder under the `./inference_model` folder:
The above command generates a `yolov3_darknet53_270e_coco` folder under the `output_inference/` folder:
```
inference_model
│ ├── yolov3_mobilenet_v1_roadsign
output_inference
│ ├── yolov3_darknet53_270e_coco
│ │ ├── infer_cfg.yml
│ │ ├── model.pdiparams
│ │ ├── model.pdiparams.info
│ │ ├── model.pdmodel
│ │ ├── serving_client
│ │ │ ├── serving_client_conf.prototxt
│ │ │ ├── serving_client_conf.stream.prototxt
│ │ ├── serving_server
│ │ │ ├── conv1_bn_mean
│ │ │ ├── conv1_bn_offset
│ │ │ ├── conv1_bn_scale
│ │ │ ├── __model__
│ │ │ ├── __params__
│ │ │ ├── serving_server_conf.prototxt
│ │ │ ├── serving_server_conf.stream.prototxt
│ │ │ ├── ...
```
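If a model was already exported without `--export_serving_model=True`, the same conversion can be run on its own with `paddle_serving_client.io.inference_model_to_serving`; the sketch below mirrors the call this commit adds to `tools/export_model.py`:
```python
# Convert an already-exported inference model into PaddleServing format.
# This mirrors the inference_model_to_serving call added to tools/export_model.py.
from paddle_serving_client.io import inference_model_to_serving

model_dir = "output_inference/yolov3_darknet53_270e_coco"
inference_model_to_serving(
    dirname=model_dir,
    serving_server=model_dir + "/serving_server",
    serving_client=model_dir + "/serving_client",
    model_filename="model.pdmodel",
    params_filename="model.pdiparams")
```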
`serving_client_conf.prototxt` in the `serving_client` folder describes the model's inputs and outputs in detail.
The content of `serving_client_conf.prototxt` is:
```
feed_var {
name: "im_shape"
alias_name: "im_shape"
is_lod_tensor: false
feed_type: 1
shape: 2
}
feed_var {
name: "image"
alias_name: "image"
@@ -56,25 +59,32 @@ feed_var {
shape: 608
}
feed_var {
name: "im_size"
alias_name: "im_size"
name: "scale_factor"
alias_name: "scale_factor"
is_lod_tensor: false
feed_type: 2
feed_type: 1
shape: 2
}
fetch_var {
name: "multiclass_nms_0.tmp_0"
alias_name: "multiclass_nms_0.tmp_0"
name: "save_infer_model/scale_0.tmp_1"
alias_name: "save_infer_model/scale_0.tmp_1"
is_lod_tensor: true
fetch_type: 1
shape: -1
}
fetch_var {
name: "save_infer_model/scale_1.tmp_1"
alias_name: "save_infer_model/scale_1.tmp_1"
is_lod_tensor: true
fetch_type: 2
shape: -1
}
```
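The feed/fetch names in this prototxt are exactly the names a client passes to the service. The following is a minimal sketch (assuming the service started in step 4 below is listening on `127.0.0.1:9393`; a random tensor stands in for a properly preprocessed image, which `deploy/serving/test_client.py` handles for real):
```python
# Minimal client sketch: feed/fetch names come from serving_client_conf.prototxt.
# A real client must resize the image to 608x608, normalize it, and convert to CHW;
# a random tensor is used here only to show the call structure.
import numpy as np
from paddle_serving_client import Client

client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

fetch_map = client.predict(
    feed={
        "im_shape": np.array([608.0, 608.0], dtype=np.float32),
        "image": np.random.rand(3, 608, 608).astype(np.float32),
        "scale_factor": np.array([1.0, 1.0], dtype=np.float32),
    },
    fetch=["save_infer_model/scale_0.tmp_1"])
print(fetch_map)
```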
## 4. Start the PaddleServing Service
```
cd inference_model/yolov3_mobilenet_v1_roadsign/
cd output_inference/yolov3_darknet53_270e_coco/
# GPU
python -m paddle_serving_server_gpu.serve --model serving_server --port 9393 --gpu_ids 0
@@ -87,20 +97,19 @@ python -m paddle_serving_server.serve --model serving_server --port 9393
Prepare the `label_list.txt` file
```
# Enter the exported model folder
cd inference_model/yolov3_mobilenet_v1_roadsign/
cd output_inference/yolov3_darknet53_270e_coco/
# Copy the dataset's label_list.txt file into the current folder
cp ../../dataset/roadsign_voc/label_list.txt .
# Put the dataset's label_list.txt file into the current folder
```
Set the `prototxt` file path to `serving_client/serving_client_conf.prototxt`
Set `fetch` to `fetch=["multiclass_nms_0.tmp_0"]`
Set `fetch` to `fetch=["save_infer_model/scale_0.tmp_1"]`
Run the test
```
# Enter the directory
cd inference_model/yolov3_mobilenet_v1_roadsign/
cd output_inference/yolov3_darknet53_270e_coco/
# The test script test_client.py automatically creates the output folder and generates two files under output: `bbox.json` and `road554.png`
python ../../deploy/serving/test_client.py ../../demo/road554.png
# The test script test_client.py automatically creates the output folder and generates two files under output: `bbox.json` and `000000014439.jpg`
python ../../deploy/serving/test_client.py ../../demo/000000014439.jpg
```
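After the test run, `output/bbox.json` holds the raw detection results. A quick way to peek at them (the exact JSON schema is whatever `test_client.py` writes, so treat the layout as an assumption):
```python
# Pretty-print the detections written by test_client.py.
import json

with open("output/bbox.json") as f:
    detections = json.load(f)
print(json.dumps(detections, indent=2)[:2000])  # preview the first 2000 characters
```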
@@ -43,6 +43,11 @@ def parse_args():
type=str,
default="output_inference",
help="Directory for storing the output model files.")
parser.add_argument(
"--export_serving_model",
type=bool,
default=False,
help="Whether to export serving model or not.")
parser.add_argument(
"--slim_config",
default=None,
@@ -62,6 +67,19 @@ def run(FLAGS, cfg):
# export model
trainer.export(FLAGS.output_dir)
if FLAGS.export_serving_model:
from paddle_serving_client.io import inference_model_to_serving
model_name = os.path.splitext(os.path.split(cfg.filename)[-1])[0]
inference_model_to_serving(
dirname="{}/{}".format(FLAGS.output_dir, model_name),
serving_server="{}/{}/serving_server".format(FLAGS.output_dir,
model_name),
serving_client="{}/{}/serving_client".format(FLAGS.output_dir,
model_name),
model_filename="model.pdmodel",
params_filename="model.pdiparams")
def main():
paddle.set_device("cpu")
......