Unverified commit 8cc41a33 authored by: K Kaipeng Deng committed by: GitHub

add yolov3 save_inference_model example and refine link (#2369)

* add yolov3 save_inference_model example

* add save_inference_model in infer.py

* refine link
Parent c4b8ca5a
......@@ -74,7 +74,7 @@ dataset/coco/
**Custom dataset:**
You can use your own dataset. We recommend annotating custom datasets in COCO format; the dataset path can be specified by setting `--data_dir` or by modifying [reader.py](https://github.com/PaddlePaddle/models/blob/623698ef30cc2f7879e47621678292254d6af51e/PaddleCV/yolov3/reader.py#L39). When using COCO-format annotations, the directory structure can follow the COCO dataset structure above.
You can use your own dataset. We recommend annotating custom datasets in COCO format; the dataset path can be specified by setting `--data_dir` or by modifying [reader.py](./reader.py#L39). When using COCO-format annotations, the directory structure can follow the COCO dataset structure above.
### Model Training
......@@ -198,7 +198,7 @@ YOLOv3 预测可视化
### Service Deployment
For YOLOv3 service deployment, you can save a deployable inference model in [eval.py](https://github.com/PaddlePaddle/models/blob/623698ef30cc2f7879e47621678292254d6af51e/PaddleCV/yolov3/eval.py#L58); the model can be loaded and deployed with the Paddle prediction library. See [Paddle Inference Library](http://paddlepaddle.org/documentation/docs/zh/1.4/advanced_usage/deploy/index_cn.html).
For YOLOv3 service deployment, you can save a deployable inference model in [eval.py](./eval.py#L54) or [infer.py](./infer.py#L47); the model can be loaded and deployed with the Paddle prediction library. See [Paddle Inference Library](http://paddlepaddle.org/documentation/docs/zh/1.4/advanced_usage/deploy/index_cn.html).
## Advanced Usage
......@@ -236,7 +236,7 @@ YOLOv3 的网络结构由基础特征提取网络、multi-scale特征融合层
To fine-tune YOLOv3, set `--pretrain` to the downloaded YOLOv3 [model](https://paddlemodels.bj.bcebos.com/yolo/yolov3.tar.gz) released by Paddle, and set `--class_num` to the number of classes in your dataset.
When fine-tuning, if the number of classes in your custom dataset differs from the 80 classes of the COCO dataset, the weights of the `yolo_output` layers should not be loaded. You can load all weights except those of the `yolo_output` layers in [train.py](https://github.com/heavengate/models/blob/3fa6035550ebd4a425a2e354489967a829174155/PaddleCV/yolov3/train.py#L76) as follows:
When fine-tuning, if the number of classes in your custom dataset differs from the 80 classes of the COCO dataset, the weights of the `yolo_output` layers should not be loaded. You can load all weights except those of the `yolo_output` layers in [train.py](./train.py#L76) as follows:
```python
if cfg.pretrain:
```
......@@ -295,7 +295,7 @@ if cfg.pretrain:
**A:** The `learning_rate=0.001` setting in YOLOv3 targets a total batch size of 64; if your batch size is smaller than that, it is recommended to decrease the learning rate.
**Q:** My YOLOv3 training is slow; how can I speed it up?
**A:** Data augmentation in YOLOv3 is complex and slow; you can speed it up by increasing the number of data-reading processes in [reader.py](https://github.com/PaddlePaddle/models/blob/66e135ccc4f35880d1cd625e9ec96c041835e37d/PaddleCV/yolov3/reader.py#L284). If you are fine-tuning, you can also set `--no_mixup_iter` greater than `--max_iter` to disable mixup for more speed.
**A:** Data augmentation in YOLOv3 is complex and slow; you can speed it up by increasing the number of data-reading processes in [reader.py](./reader.py#L284). If you are fine-tuning, you can also set `--no_mixup_iter` greater than `--max_iter` to disable mixup for more speed.
## References
......
......@@ -74,7 +74,7 @@ The data catalog structure is as follows:
**User-defined dataset:**
You can define your own dataset. We recommend using annotations in COCO format; you can set the dataset directory with `--data_dir` or in [reader.py](https://github.com/PaddlePaddle/models/blob/623698ef30cc2f7879e47621678292254d6af51e/PaddleCV/yolov3/reader.py#L39). When using COCO-format annotations, you can follow the directory structure of the COCO dataset above.
You can define your own dataset. We recommend using annotations in COCO format; you can set the dataset directory with `--data_dir` or in [reader.py](./reader.py#L39). When using COCO-format annotations, you can follow the directory structure of the COCO dataset above.
### Training
......@@ -199,7 +199,7 @@ Inference speed on single GPU:
### Inference deployment
For YOLOv3 inference deployment, you can save the YOLOv3 inference model in [eval.py](https://github.com/PaddlePaddle/models/blob/623698ef30cc2f7879e47621678292254d6af51e/PaddleCV/yolov3/eval.py#L58); the inference model can then be loaded and deployed with the Paddle prediction library. See [Paddle Inference Lib](http://www.paddlepaddle.org/documentation/docs/en/1.4/advanced_usage/deploy/index_en.html).
For YOLOv3 inference deployment, you can save the YOLOv3 inference model in [eval.py](./eval.py#L54) or [infer.py](./infer.py#L47); the inference model can then be loaded and deployed with the Paddle prediction library. See [Paddle Inference Lib](http://www.paddlepaddle.org/documentation/docs/en/1.4/advanced_usage/deploy/index_en.html).
## Advanced Usage
......@@ -236,7 +236,7 @@ YOLOv3 networks are composed of base feature extraction network, multi-scale fea
For YOLOv3 fine-tuning, set `--pretrain` to the YOLOv3 [model](https://paddlemodels.bj.bcebos.com/yolo/yolov3.tar.gz) you downloaded, and set `--class_num` to the number of categories in your dataset.
When fine-tuning, the weights of the `yolo_output` layers should not be loaded if your `--class_num` is not equal to the 80 classes of the COCO dataset. You can load pre-trained weights in [train.py](https://github.com/heavengate/models/blob/3fa6035550ebd4a425a2e354489967a829174155/PaddleCV/yolov3/train.py#L76) without the `yolo_output` layers as follows:
When fine-tuning, the weights of the `yolo_output` layers should not be loaded if your `--class_num` is not equal to the 80 classes of the COCO dataset. You can load pre-trained weights in [train.py](./train.py#L76) without the `yolo_output` layers as follows:
```python
if cfg.pretrain:
```
......@@ -295,7 +295,7 @@ if cfg.pretrain:
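The weight-loading code above is truncated by the diff view. A minimal sketch of the filtering idea, assuming the predicate is checked against each variable's name as a string (in Paddle's `fluid.io.load_vars` the predicate actually receives a `Variable` and you would test `var.name`):

```python
import os


def make_if_exist(pretrain_dir):
    """Build a predicate that skips yolo_output layers, so fine-tuning
    with a different --class_num never loads shape-mismatched weights.
    Sketch only: names and signature are illustrative."""
    def if_exist(var_name):
        # Never load the class-dependent output layers.
        if "yolo_output" in var_name:
            return False
        # Load every other weight that exists in the pretrain directory.
        return os.path.exists(os.path.join(pretrain_dir, var_name))
    return if_exist
```

With such a predicate, a call like `fluid.io.load_vars(exe, cfg.pretrain, predicate=if_exist)` would load all persistable weights found on disk except the `yolo_output` layers.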
**A:** The `learning_rate=0.001` configuration targets training on 8 GPUs with a total batch size of 64; if you train with a smaller batch size, please decrease the learning rate accordingly.
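One common way to pick the smaller value is the linear-scaling heuristic: keep the ratio of learning rate to total batch size constant relative to the released configuration. This heuristic is an assumption here, not something the training script applies automatically:

```python
def scaled_lr(total_batch_size, base_lr=0.001, base_batch_size=64):
    # Linear scaling: lr / batch_size stays constant relative to the
    # released setting (lr=0.001 at a total batch size of 64).
    return base_lr * total_batch_size / base_batch_size
```

For example, training with a total batch size of 16 would suggest a learning rate of 0.00025.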
**Q:** YOLOv3 training on my machine is very slow; how can I speed it up?
**A:** Image augmentation in YOLOv3 is complex and time-consuming; you can set more reader workers in [reader.py](https://github.com/PaddlePaddle/models/blob/66e135ccc4f35880d1cd625e9ec96c041835e37d/PaddleCV/yolov3/reader.py#L284) to speed it up. If you are fine-tuning, you can also set `--no_mixup_iter` greater than `--max_iter` to disable image mixup.
**A:** Image augmentation in YOLOv3 is complex and time-consuming; you can set more reader workers in [reader.py](./reader.py#L284) to speed it up. If you are fine-tuning, you can also set `--no_mixup_iter` greater than `--max_iter` to disable image mixup.
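The idea behind adding reader workers can be sketched independently of Paddle: run the per-sample decode/augment step in a pool of workers while the consumer still receives samples in order. The function and parameter names below are illustrative, not the actual reader.py API:

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_map_reader(reader, mapper, num_workers=8):
    """Wrap a sample generator so that `mapper` (e.g. decode + augment)
    runs in `num_workers` threads while preserving sample order.
    Sketch of the worker idea only, assuming `reader` is a callable
    returning an iterable of samples."""
    def wrapped():
        with ThreadPoolExecutor(max_workers=num_workers) as pool:
            # pool.map keeps output order while overlapping the mapper calls.
            for sample in pool.map(mapper, reader()):
                yield sample
    return wrapped
```

This only helps when the mapper releases the GIL (image decoding usually does); reader.py uses multiple processes instead, which sidesteps the GIL entirely.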
## Reference
......
......@@ -50,6 +50,13 @@ def eval():
return os.path.exists(os.path.join(cfg.weights, var.name))
fluid.io.load_vars(exe, cfg.weights, predicate=if_exist)
# yapf: enable
    # you can save the inference model with the following code
# fluid.io.save_inference_model("./output/yolov3",
# feeded_var_names=['image', 'im_shape'],
# target_vars=outputs,
# executor=exe)
input_size = cfg.input_size
test_reader = reader.test(input_size, 1)
label_names, label_ids = reader.get_label_infos()
......
......@@ -43,6 +43,13 @@ def infer():
return os.path.exists(os.path.join(cfg.weights, var.name))
fluid.io.load_vars(exe, cfg.weights, predicate=if_exist)
# yapf: enable
    # you can save the inference model with the following code
# fluid.io.save_inference_model("./output/yolov3",
# feeded_var_names=['image', 'im_shape'],
# target_vars=outputs,
# executor=exe)
feeder = fluid.DataFeeder(place=place, feed_list=model.feeds())
fetch_list = [outputs]
image_names = []
......