Commit 0a11e56b authored by wangguanzhong, committed by GitHub

add export_model doc (#3709)

* add export_model doc
Parent a7a825b4
@@ -75,6 +75,7 @@ PaddleDetection aims to provide industry and academia with rich and easy-to-use object
## Inference and Deployment
- [Model export tutorial](docs/EXPORT_MODEL.md)
- [C++ inference and deployment](inference/README.md)
## Benchmark
......
# Model Export

After training a model that meets your requirements, you need to export it with `tools/export_model.py` before it can be hooked up to the C++ prediction library or a Serving service.

## Launch Arguments

| FLAG | Purpose | Default | Remarks |
|:--------------:|:--------------:|:------------:|:-----------------------------------------:|
| -c | Path to the config file | None | |
| --output_dir | Directory in which to save the model | `./output` | By default the model is saved under `output/<config file name>/` |
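
Options in the config file can additionally be overridden from the command line with `-o`; the examples below use it to set `weights` and the test feed's `image_shape`.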

## Usage Example

Use a model trained as described in [Training/Evaluation/Inference](GETTING_STARTED_cn.md); the scripts are as follows
```bash
# export model for RCNN
python tools/export_model.py -c configs/faster_rcnn_r50_1x.yml \
--output_dir=./inference_model \
-o weights=output/faster_rcnn_r50_1x/model_final \
FasterRCNNTestFeed.image_shape=[3,800,1333]
# export model for YOLOv3
python tools/export_model.py -c configs/yolov3_darknet.yml \
--output_dir=./inference_model \
-o weights=output/yolov3_darknet/model_final \
YoloTestFeed.image_shape=[3,800,1333]
# export model for SSD
python tools/export_model.py -c configs/ssd/ssd_mobilenet_v1_voc.yml \
--output_dir=./inference_model \
-o weights=output/ssd_mobilenet_v1_voc/model_final \
SSDTestFeed.image_shape=[3,300,300]
```
- The inference model is exported to the `inference_model/faster_rcnn_r50_1x` directory (that is, `<output_dir>/<config file name>`); the model file and the parameters file are named `__model__` and `__params__` respectively.
- `image_shape` sets the input image size stored in the saved model. When predicting with Fluid-TensorRT, the actual input size must match `image_shape`, since TensorRT only supports fixed-size inputs.
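
To sanity-check an export before wiring it into the C++ prediction library or a Serving service, the model can be loaded back in Python. Below is a minimal sketch, assuming the Faster R-CNN export from the example above (the directory is the one that command produces):

```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())

# Directory written by tools/export_model.py: <output_dir>/<config file name>
save_dir = 'inference_model/faster_rcnn_r50_1x'

# __model__ is the default program filename; __params__ matches the
# params_filename used by the export script
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    save_dir, exe, params_filename='__params__')

print('feed variables :', feed_names)
print('fetch variables:', [v.name for v in fetch_targets])
```

The printed feed list reflects the pruning done at export time: feed variables that are only needed for post processing (such as `im_id`) are dropped from the saved program.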
@@ -40,7 +40,6 @@ list below can be viewed by `--help`
| --json_eval | eval | Whether to evaluate with an existing bbox.json or mask.json | False | json path is set in `--output_eval` |
| --output_dir | infer | Directory for storing the output visualization files | `./output` | `--output_dir output` |
| --draw_threshold | infer | Threshold to reserve the result for visualization | 0.5 | `--draw_threshold 0.7` |
| --save\_inference_model | infer | Whether to save the inference model in output_dir | False | the inference model is saved under `--output_dir` |
| --infer_dir | infer | Directory for images to perform inference on | None | |
| --infer_img | infer | Image path | None | takes priority over `--infer_dir` |
| --use_tb | train/infer | Whether to record the data with [tb-paddle](https://github.com/linshuliang/tb-paddle), so as to display in TensorBoard | False | |
@@ -149,17 +148,16 @@ moment, but it is a planned feature
Different thresholds will produce different results depending on the calculation of [NMS](https://ieeexplore.ieee.org/document/1699659).
- Save inference model
- Export model
```bash
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/faster_rcnn_r50_1x.yml \
--infer_img=demo/000000570688.jpg \
--save_inference_model
python tools/export_model.py -c configs/faster_rcnn_r50_1x.yml \
--output_dir=inference_model \
-o weights=output/faster_rcnn_r50_1x/model_final \
FasterRCNNTestFeed.image_shape=[3,800,1333]
```
Save the inference model by setting `--save_inference_model`; the saved model can be loaded by the PaddlePaddle prediction library.
Save the inference model with `tools/export_model.py`; the exported model can be loaded by the PaddlePaddle prediction library.
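
The exported files land in `<output_dir>/<config file name>` (here `inference_model/faster_rcnn_r50_1x`): the program is saved as `__model__` and all weights are merged into a single `__params__` file.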
## FAQ
......
@@ -37,7 +37,6 @@ python tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_img=demo/0000005
| --json_eval | eval | Whether to evaluate with an existing bbox.json or mask.json | False | the json path is set via `--output_eval` |
| --output_dir | infer | Directory for the visualized inference output | `./output` | `--output_dir output` |
| --draw_threshold | infer | Score threshold for visualization | 0.5 | `--draw_threshold 0.7` |
| --save\_inference_model | infer | Whether to save the inference model | False | the inference model is saved under the path set by `--output_dir` |
| --infer_dir | infer | Directory of images to run inference on | None | |
| --infer_img | infer | Path of a single image to run inference on | None | takes priority over `--infer_dir` |
| --use_tb | train/infer | Whether to record data with [tb-paddle](https://github.com/linshuliang/tb-paddle) for display in TensorBoard | False | |
@@ -145,18 +144,6 @@ python -m paddle.distributed.launch --selected_gpus 0,1,2,3,4,5,6,7 tools/train.
`--draw_threshold` is an optional argument. Per the [NMS](https://ieeexplore.ieee.org/document/1699659) computation,
different thresholds produce different results. To run inference on a model stored at a custom path, set `-o weights` to that path.
- Save the inference model
```bash
# inference on GPU
export CUDA_VISIBLE_DEVICES=0
python -u tools/infer.py -c configs/faster_rcnn_r50_1x.yml --infer_img=demo/000000570688.jpg \
--save_inference_model
```
Setting `--save_inference_model` saves an inference model that can be loaded by the PaddlePaddle prediction library.
## FAQ
**Q:** Why does the loss become `NaN` when I train with a single GPU? </br>
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

from paddle import fluid

from ppdet.core.workspace import load_config, merge_config, create
from ppdet.modeling.model_input import create_feed
from ppdet.utils.cli import ArgsParser
from ppdet.utils.check import check_gpu
import ppdet.utils.checkpoint as checkpoint

import logging
FORMAT = '%(asctime)s-%(levelname)s: %(message)s'
logging.basicConfig(level=logging.INFO, format=FORMAT)
logger = logging.getLogger(__name__)


def prune_feed_vars(feeded_var_names, target_vars, prog):
    """
    Filter out feed variables which are not in program,
    pruned feed variables are only used in post processing
    on model output, which are not used in program, such
    as im_id to identify image order, im_shape to clip bbox
    in image.
    """
    exist_var_names = []
    prog = prog.clone()
    prog = prog._prune(targets=target_vars)
    global_block = prog.global_block()
    for name in feeded_var_names:
        try:
            v = global_block.var(name)
            exist_var_names.append(str(v.name))
        except Exception:
            logger.info('save_inference_model pruned unused feed '
                        'variables {}'.format(name))
            pass
    return exist_var_names


def save_infer_model(FLAGS, exe, feed_vars, test_fetches, infer_prog):
    cfg_name = os.path.basename(FLAGS.config).split('.')[0]
    save_dir = os.path.join(FLAGS.output_dir, cfg_name)
    feed_var_names = [var.name for var in feed_vars.values()]
    target_vars = list(test_fetches.values())
    feed_var_names = prune_feed_vars(feed_var_names, target_vars, infer_prog)
    logger.info("Export inference model to {}, input: {}, output: "
                "{}...".format(save_dir, feed_var_names,
                               [str(var.name) for var in target_vars]))
    fluid.io.save_inference_model(
        save_dir,
        feeded_var_names=feed_var_names,
        target_vars=target_vars,
        executor=exe,
        main_program=infer_prog,
        params_filename="__params__")


def main():
    cfg = load_config(FLAGS.config)

    if 'architecture' in cfg:
        main_arch = cfg.architecture
    else:
        raise ValueError("'architecture' not specified in config file.")

    merge_config(FLAGS.opt)

    if 'test_feed' not in cfg:
        test_feed = create(main_arch + 'TestFeed')
    else:
        test_feed = create(cfg.test_feed)

    # Use CPU for exporting inference model instead of GPU
    place = fluid.CPUPlace()
    exe = fluid.Executor(place)

    model = create(main_arch)

    startup_prog = fluid.Program()
    infer_prog = fluid.Program()
    with fluid.program_guard(infer_prog, startup_prog):
        with fluid.unique_name.guard():
            _, feed_vars = create_feed(test_feed, use_pyreader=False)
            test_fetches = model.test(feed_vars)
    infer_prog = infer_prog.clone(True)

    exe.run(startup_prog)
    checkpoint.load_params(exe, infer_prog, cfg.weights)
    save_infer_model(FLAGS, exe, feed_vars, test_fetches, infer_prog)


if __name__ == '__main__':
    parser = ArgsParser()
    parser.add_argument(
        "--output_dir",
        type=str,
        default="output",
        help="Directory for storing the output model files.")
    FLAGS = parser.parse_args()
    main()
@@ -96,48 +96,6 @@ def get_test_images(infer_dir, infer_img):
    return images


def prune_feed_vars(feeded_var_names, target_vars, prog):
    """
    Filter out feed variables which are not in program,
    pruned feed variables are only used in post processing
    on model output, which are not used in program, such
    as im_id to identify image order, im_shape to clip bbox
    in image.
    """
    exist_var_names = []
    prog = prog.clone()
    prog = prog._prune(targets=target_vars)
    global_block = prog.global_block()
    for name in feeded_var_names:
        try:
            v = global_block.var(name)
            exist_var_names.append(str(v.name))
        except Exception:
            logger.info('save_inference_model pruned unused feed '
                        'variables {}'.format(name))
            pass
    return exist_var_names


def save_infer_model(FLAGS, exe, feed_vars, test_fetches, infer_prog):
    cfg_name = os.path.basename(FLAGS.config).split('.')[0]
    save_dir = os.path.join(FLAGS.output_dir, cfg_name)
    feeded_var_names = [var.name for var in feed_vars.values()]
    target_vars = list(test_fetches.values())
    feeded_var_names = prune_feed_vars(feeded_var_names, target_vars,
                                       infer_prog)
    logger.info("Save inference model to {}, input: {}, output: "
                "{}...".format(save_dir, feeded_var_names,
                               [str(var.name) for var in target_vars]))
    fluid.io.save_inference_model(
        save_dir,
        feeded_var_names=feeded_var_names,
        target_vars=target_vars,
        executor=exe,
        main_program=infer_prog,
        params_filename="__params__")
def main():
    cfg = load_config(FLAGS.config)
@@ -180,9 +138,6 @@ def main():
    if cfg.weights:
        checkpoint.load_params(exe, infer_prog, cfg.weights)

    if FLAGS.save_inference_model:
        save_infer_model(FLAGS, exe, feed_vars, test_fetches, infer_prog)

    # parse infer fetches
    assert cfg.metric in ['COCO', 'VOC', 'WIDERFACE'], \
            "unknown metric type {}".format(cfg.metric)
@@ -300,11 +255,6 @@ if __name__ == '__main__':
        type=float,
        default=0.5,
        help="Threshold to reserve the result for visualization.")
    parser.add_argument(
        "--save_inference_model",
        action='store_true',
        default=False,
        help="Save inference model in output_dir if True.")
    parser.add_argument(
        "--use_tb",
        type=bool,
......