Commit 960a868a
Authored Sep 24, 2020 by MRXLT
Parent: bc5e0d7a

add paddle serving doc
Showing 3 changed files with 223 additions and 0 deletions:

- deploy/README.md (+1, -0)
- deploy/paddle-serving/README.md (+87, -0)
- pdseg/export_serving_model.py (+135, -0)
deploy/README.md

@@ -10,3 +10,4 @@
 [4. Mobile deployment (Android only)](./lite)
+[5. Deploy with PaddleServing](./paddle-serving)
deploy/paddle-serving/README.md (new file, mode 100644)

# Deploying a Service with PaddleServing

## 1. Introduction

PaddleServing is Paddle's online prediction serving framework; it lets you quickly deploy a trained model as an online prediction service. For more information, see the [PaddleServing homepage](https://github.com/PaddlePaddle/Serving). This document walks through deploying the prediction service and running predictions, using the unet model as an example.
## 2. Install Paddle Serving

The current official release of PaddleServing is version 0.3.2, but the example in this document requires the develop version of paddle_serving_app. Please download and install it from [this link](https://github.com/PaddlePaddle/Serving/blob/develop/doc/LATEST_PACKAGES.md#app).
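A minimal sketch of that step (the wheel file name below is a placeholder; use whichever paddle_serving_app develop wheel the page above lists for your environment):

```shell
# Placeholder file name: substitute the actual paddle_serving_app develop wheel
# (URL or file) listed in LATEST_PACKAGES.md linked above.
pip install paddle_serving_app-<develop-version>-py3-none-any.whl
```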
Server-side installation:

```shell
pip install paddle_serving_server==0.3.2            # CPU
pip install paddle_serving_server_gpu==0.3.2.post9  # GPU with CUDA 9.0
pip install paddle_serving_server_gpu==0.3.2.post10 # GPU with CUDA 10.0
```
Client-side installation:

```shell
pip install paddle_serving_client==0.3.2
```
## 3. Export the Inference Model

Once training has produced a model that meets your requirements, it must be exported with [`pdseg/export_serving_model.py`](../../pdseg/export_serving_model.py) before it can be served by PaddleServing.

The script is used in exactly the same way as `train.py`, `eval.py`, and `vis.py`.
### FLAGS

|FLAG|Purpose|Default|Notes|
|-|-|-|-|
|--cfg|Path to the configuration file|None||
### Usage Example

We use the model trained in the [Training/Evaluation/Visualization](./usage.md) section; the command is:

```shell
python pdseg/export_serving_model.py --cfg configs/unet_optic.yaml TEST.TEST_MODEL ./saved_model/unet_optic/final
```
The inference model is exported to the `freeze_model` directory, which contains two subdirectories: `serving_server` and `serving_client`. `freeze_model/serving_server` holds the model files and the serving server configuration; `freeze_model/serving_client` holds the serving client configuration.
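A sketch of the expected layout (assuming the default `FREEZE.SAVE_DIR` of `freeze_model`; the exact model file names depend on the model):

```
freeze_model/
├── serving_server/   # model program/parameters + serving server config (serving_server_conf.prototxt)
└── serving_client/   # serving client config (serving_client_conf.prototxt)
```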
## 4. Deploy the Prediction Service

```shell
python -m paddle_serving_server.serve --model unet_model/ --port 9494                 # CPU
python -m paddle_serving_server_gpu.serve --model unet_model --port 9494 --gpu_ids 0  # GPU
```
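The `unet_model` directory name above follows PaddleServing's unet example. If you exported the model with the command in section 3, point `--model` at the exported server directory instead; a minimal sketch, assuming the default `freeze_model` output path:

```shell
python -m paddle_serving_server.serve --model freeze_model/serving_server --port 9494
```

Likewise, the client in the next section would then load `freeze_model/serving_client/serving_client_conf.prototxt` rather than `unet_client/serving_client_conf.prototxt`.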
## 5. Run Prediction

```python
# seg_client.py
from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, File2Image, Resize, Transpose, \
    BGR2RGB, SegPostprocess, Normalize, Div
import sys
import cv2

# Connect to the serving instance started in section 4.
client = Client()
client.load_client_config("unet_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9494"])

# Preprocessing: read the image file, resize to 512x512, scale to [0, 1],
# normalize with mean/std of 0.5, and convert HWC to CHW.
preprocess = Sequential([
    File2Image(),
    Resize((512, 512), interpolation=cv2.INTER_LINEAR),
    Div(255.0),
    Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], False),
    Transpose((2, 0, 1))
])

# Postprocessing for a 2-class segmentation output.
postprocess = SegPostprocess(2)

filename = "N0060.jpg"
im = preprocess(filename)
fetch_map = client.predict(feed={"image": im}, fetch=["transpose_1.tmp_0"])
fetch_map["filename"] = filename
postprocess(fetch_map)
```
After the script runs, the processed image is written to the current directory.

For a complete deployment example, see PaddleServing's [unet example](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/unet_for_image_seg).
pdseg/export_serving_model.py (new file, mode 100644)
```python
# coding: utf8
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys
import time
import pprint
import cv2
import argparse
import numpy as np
import paddle.fluid as fluid

from utils.config import cfg
from models.model_builder import build_model
from models.model_builder import ModelPhase


def parse_args():
    parser = argparse.ArgumentParser(
        description='PaddleSeg Inference Model Exporter')
    parser.add_argument(
        '--cfg',
        dest='cfg_file',
        help='Config file for training (and optionally testing)',
        default=None,
        type=str)
    parser.add_argument(
        'opts',
        help='See utils/config.py for all options',
        default=None,
        nargs=argparse.REMAINDER)
    if len(sys.argv) == 1:
        parser.print_help()
        sys.exit(1)
    return parser.parse_args()


def export_inference_config():
    # Write the deploy.yaml consumed by PaddleSeg's inference deployment.
    deploy_cfg = '''DEPLOY:
        USE_GPU : 1
        USE_PR : 0
        MODEL_PATH : "%s"
        MODEL_FILENAME : "%s"
        PARAMS_FILENAME : "%s"
        EVAL_CROP_SIZE : %s
        MEAN : %s
        STD : %s
        IMAGE_TYPE : "%s"
        NUM_CLASSES : %d
        CHANNELS : %d
        PRE_PROCESSOR : "SegPreProcessor"
        PREDICTOR_MODE : "ANALYSIS"
        BATCH_SIZE : 1
    ''' % (cfg.FREEZE.SAVE_DIR, cfg.FREEZE.MODEL_FILENAME,
           cfg.FREEZE.PARAMS_FILENAME, cfg.EVAL_CROP_SIZE, cfg.MEAN, cfg.STD,
           cfg.DATASET.IMAGE_TYPE, cfg.DATASET.NUM_CLASSES, len(cfg.STD))
    if not os.path.exists(cfg.FREEZE.SAVE_DIR):
        os.mkdir(cfg.FREEZE.SAVE_DIR)
    yaml_path = os.path.join(cfg.FREEZE.SAVE_DIR, 'deploy.yaml')
    with open(yaml_path, "w") as fp:
        fp.write(deploy_cfg)
    return yaml_path


def export_serving_model(args):
    """
    Export PaddlePaddle inference model for prediction deployment and serving.
    """
    print("Exporting serving model...")
    startup_prog = fluid.Program()
    infer_prog = fluid.Program()
    image, logit_out = build_model(
        infer_prog, startup_prog, phase=ModelPhase.PREDICT)

    # Use CPU for exporting inference model instead of GPU
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(startup_prog)
    infer_prog = infer_prog.clone(for_test=True)

    if os.path.exists(cfg.TEST.TEST_MODEL):
        print('load test model:', cfg.TEST.TEST_MODEL)
        try:
            fluid.load(infer_prog, os.path.join(cfg.TEST.TEST_MODEL, 'model'),
                       exe)
        except:
            fluid.io.load_params(
                exe, cfg.TEST.TEST_MODEL, main_program=infer_prog)
    else:
        print("TEST.TEST_MODEL directory is empty!")
        exit(-1)

    # Save in PaddleServing format: serving_server holds the model and
    # server-side config, serving_client holds the client-side config.
    from paddle_serving_client.io import save_model
    save_model(
        cfg.FREEZE.SAVE_DIR + "/serving_server",
        cfg.FREEZE.SAVE_DIR + "/serving_client",
        {image.name: image},
        {logit_out.name: logit_out},
        infer_prog,
    )
    print("Serving model exported!")
    print("Exporting serving model config...")
    deploy_cfg_path = export_inference_config()
    print("Serving model saved : [%s]" % (deploy_cfg_path))


def main():
    args = parse_args()
    if args.cfg_file is not None:
        cfg.update_from_file(args.cfg_file)
    if args.opts:
        cfg.update_from_list(args.opts)
    cfg.check_and_infer()
    print(pprint.pformat(cfg))
    export_serving_model(args)


if __name__ == '__main__':
    main()
```