Commit 1cd05b21 authored by cnn, committed by GitHub

Add PaddleServing demo. (#1482)

Parent 1cb5eed0
# PaddleDetection Inference Deployment
`PaddleDetection` currently supports:
- Running inference with `Python` and `C++` on `Windows` and `Linux`
- [Online serving deployment](./serving/README.md)
- [Mobile deployment](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
## Model Export
After training a model that meets your requirements, export it with `tools/export_model.py` if you want to use it with the server-side C++ inference library or a mobile inference library.
@@ -20,4 +23,5 @@ yolov3_darknet # model directory
## Inference Deployment
- [1. Python inference (Linux and Windows)](https://github.com/PaddlePaddle/PaddleDetection/blob/master/deploy/python)
- [2. C++ inference (Linux and Windows)](https://github.com/PaddlePaddle/PaddleDetection/blob/master/deploy/cpp)
- [3. Online serving deployment](./serving/README.md)
- [4. Mobile deployment](https://github.com/PaddlePaddle/Paddle-Lite-Demo)
# Server-Side Inference Deployment
Models trained with `PaddleDetection` can be deployed on the server side with [Serving](https://github.com/PaddlePaddle/Serving).
This tutorial deploys a model trained with `configs/yolov3_mobilenet_v1_roadsign.yml` on the road-sign dataset [roadsign_voc](https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar).
The pretrained weights file is [yolov3_mobilenet_v1_roadsign.pdparams](https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams).
## 1. Verify the Model
```
python tools/infer.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams --infer_img=demo/road554.png
```
## 2. Install Paddle Serving
```
# Install paddle-serving-client
pip install paddle-serving-client -i https://mirror.baidu.com/pypi/simple
# Install paddle-serving-server (CPU)
pip install paddle-serving-server -i https://mirror.baidu.com/pypi/simple
# Install paddle-serving-server-gpu (GPU)
pip install paddle-serving-server-gpu -i https://mirror.baidu.com/pypi/simple
```
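If the installation succeeded, the client package should be importable. A minimal sanity check (pure Python; note that the reader utilities used later by `test_client.py` come from `paddle_serving_app`, which may need to be installed separately with `pip install paddle-serving-app` if the import below fails):
```
# Sanity check that the Serving Python packages installed above can be imported.
import paddle_serving_client   # client API used by test_client.py
import paddle_serving_app      # reader/preprocessing utilities used by test_client.py

print("Paddle Serving client packages imported successfully")
```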
## 3. Export the Model
During training, PaddleDetection keeps both the forward network and the optimizer-related parameters, but deployment only needs the forward parameters. For details see [Export Model](https://github.com/PaddlePaddle/PaddleDetection/blob/master/docs/advanced_tutorials/deploy/EXPORT_MODEL.md).
```
python tools/export_serving_model.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_roadsign.pdparams --output_dir=./inference_model
```
The command above creates a `yolov3_mobilenet_v1_roadsign` folder under `./inference_model`:
```
inference_model
├── yolov3_mobilenet_v1_roadsign
│   ├── infer_cfg.yml
│   ├── serving_client
│   │   ├── serving_client_conf.prototxt
│   │   ├── serving_client_conf.stream.prototxt
│   ├── serving_server
│   │   ├── conv1_bn_mean
│   │   ├── conv1_bn_offset
│   │   ├── conv1_bn_scale
│   │   ├── ...
```
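If you want to double-check the export before continuing, here is a minimal sketch using only the Python standard library (the paths are taken from the tree above; the script itself is not part of the repository):
```
import os

# Path produced by the export command above
export_dir = "./inference_model/yolov3_mobilenet_v1_roadsign"

# The client side needs serving_client_conf.prototxt; the server side
# needs the serving_server directory with the model parameters.
for sub in ("infer_cfg.yml", "serving_client", "serving_server"):
    path = os.path.join(export_dir, sub)
    print(path, "exists" if os.path.exists(path) else "MISSING")
```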
The `serving_client_conf.prototxt` file in the `serving_client` folder describes the model's input and output tensors in detail.
The contents of `serving_client_conf.prototxt` are:
```
feed_var {
  name: "image"
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 608
  shape: 608
}
feed_var {
  name: "im_size"
  alias_name: "im_size"
  is_lod_tensor: false
  feed_type: 2
  shape: 2
}
fetch_var {
  name: "multiclass_nms_0.tmp_0"
  alias_name: "multiclass_nms_0.tmp_0"
  is_lod_tensor: true
  fetch_type: 1
  shape: -1
}
```
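For reference, the `feed_var` entries above correspond to the NumPy arrays the client must send: `image` is a 3x608x608 float tensor and `im_size` holds the image height and width. A minimal sketch with dummy data, assuming the usual Paddle Serving convention that `feed_type` 1 means float32 and 2 means int32:
```
import numpy as np

# Dummy feeds that match serving_client_conf.prototxt:
# feed_var "image":   feed_type 1 (assumed float32), shape [3, 608, 608]
# feed_var "im_size": feed_type 2 (assumed int32),   shape [2]
image = np.random.rand(3, 608, 608).astype("float32")  # CHW, values in [0, 1] after Div(255)
im_size = np.array([608, 608], dtype="int32")           # image height, width

print(image.shape, image.dtype)      # (3, 608, 608) float32
print(im_size.shape, im_size.dtype)  # (2,) int32
```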
## 4. Start the PaddleServing Service
```
cd inference_model/yolov3_mobilenet_v1_roadsign/
# GPU
python -m paddle_serving_server_gpu.serve --model serving_server --port 9393 --gpu_ids 0
# CPU
python -m paddle_serving_server.serve --model serving_server --port 9393
```
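Before running the client, it can help to confirm that the service is actually listening on port 9393. A small sketch using only the standard library (host and port taken from the command above); this only checks that the port accepts connections, not that the model loaded correctly:
```
import socket

# Try to open a TCP connection to the Serving endpoint started above.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect(("127.0.0.1", 9393))
        print("Serving endpoint is reachable on 127.0.0.1:9393")
    except OSError as e:
        print("Cannot reach the Serving endpoint:", e)
```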
## 5. Test the Deployed Service
Prepare the `label_list.txt` file:
```
# Enter the exported model directory
cd inference_model/yolov3_mobilenet_v1_roadsign/
# Copy the dataset's label_list.txt file into the current directory
cp ../../dataset/roadsign_voc/label_list.txt .
```
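`RCNNPostprocess` in the test script maps predicted class ids to the names in `label_list.txt`, one name per line. A quick sanity check (pure Python, for illustration only; the file path matches the copy command above):
```
# Print the class names that the postprocessing step will use.
with open("label_list.txt") as f:
    labels = [line.strip() for line in f if line.strip()]
print(len(labels), "classes:", labels)
```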
Set the `prototxt` file path to `serving_client/serving_client_conf.prototxt`.
Set `fetch` to `fetch=["multiclass_nms_0.tmp_0"]`.
Run the test:
```
# Enter the directory
cd inference_model/yolov3_mobilenet_v1_roadsign/
# The test script test_client.py automatically creates an output folder and writes `bbox.json` and `road554.png` into it
python ../../deploy/serving/test_client.py ../../demo/road554.png
```
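`test_client.py` (the script shown next) uses `RCNNPostprocess` to draw the results, but the raw `fetch_map` returned by `client.predict` can also be inspected directly. A hedged sketch, assuming the usual PaddleDetection convention that each row of the `multiclass_nms` output is `[class_id, score, xmin, ymin, xmax, ymax]`; the helper name is made up for illustration:
```
# Hypothetical helper (not part of the repository) for printing the raw
# detections returned by client.predict(...) in test_client.py.
def print_detections(fetch_map, score_threshold=0.5):
    # Assumption: each row is [class_id, score, xmin, ymin, xmax, ymax].
    for class_id, score, xmin, ymin, xmax, ymax in fetch_map["multiclass_nms_0.tmp_0"]:
        if score >= score_threshold:
            print("class=%d score=%.3f box=(%.1f, %.1f, %.1f, %.1f)"
                  % (int(class_id), score, xmin, ymin, xmax, ymax))
```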
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
from paddle_serving_client import Client
from paddle_serving_app.reader import *
import cv2
# Preprocess: read the image file, convert BGR -> RGB, resize to 608x608,
# scale pixel values to [0, 1] and transpose HWC -> CHW.
preprocess = Sequential([
    File2Image(), BGR2RGB(), Resize(
        (608, 608), interpolation=cv2.INTER_LINEAR), Div(255.0), Transpose(
            (2, 0, 1))
])

# Postprocess: draw boxes using label_list.txt and save results under ./output
postprocess = RCNNPostprocess("label_list.txt", "output", [608, 608])
# Connect to the Paddle Serving endpoint started in step 4.
client = Client()
client.load_client_config("serving_client/serving_client_conf.prototxt")
client.connect(['127.0.0.1:9393'])

# Preprocess the image path passed on the command line.
im = preprocess(sys.argv[1])

# Send the image and its size to the server and fetch the NMS output.
fetch_map = client.predict(
    feed={
        "image": im,
        "im_size": np.array(list(im.shape[1:])),
    },
    fetch=["multiclass_nms_0.tmp_0"])

# Draw and save the detection results.
fetch_map["image"] = sys.argv[1]
postprocess(fetch_map)