Unverified commit 6c79e88c, authored by wangguanzhong, committed by GitHub

fix ttfhead & doc link (#2654)

Parent 2bf965f0
...@@ -21,7 +21,6 @@
| ResNet50-vd-SSLDv2-FPN | Faster | 1 | 1x | ---- | 41.4 | [Download](https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_fpn_ssld_1x_coco.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/faster_rcnn/faster_rcnn_r50_vd_fpn_ssld_1x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Faster | 1 | 2x | ---- | 42.3 | [Download](https://paddledet.bj.bcebos.com/models/faster_rcnn_r50_vd_ssld_fpn_2x_coco.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/faster_rcnn/faster_rcnn_r50_vd_ssld_fpn_2x_coco.yml) |
**Note:** Faster R-CNN accuracy depends on changes in the Paddle develop branch. To reproduce it, use a [daily build](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev) or version 2.0.1 (to be released in 2021.03); Paddle 2.0.0 gives a small accuracy loss.
## Citations
```
...
...@@ -19,7 +19,6 @@ FCOS (Fully Convolutional One-Stage Object Detection) is a fast anchor-free obje
**Notes:**
- FCOS is trained on the COCO train2017 dataset and evaluated on val2017 with `mAP(IoU=0.5:0.95)`.
- FCOS training performance depends on the Paddle develop branch. To reproduce it, use a [Paddle daily version](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev) or Paddle 2.0.1 (to be published in 2021.03); training on Paddle 2.0.0 loses a little performance.
## Citations
```
...
...@@ -17,7 +17,6 @@
| ResNet50-vd-SSLDv2-FPN | Mask | 1 | 1x | ---- | 42.0 | 38.2 | [Download](https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_ssld_1x_coco.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mask_rcnn/mask_rcnn_r50_vd_fpn_ssld_1x_coco.yml) |
| ResNet50-vd-SSLDv2-FPN | Mask | 1 | 2x | ---- | 42.7 | 38.9 | [Download](https://paddledet.bj.bcebos.com/models/mask_rcnn_r50_vd_fpn_ssld_2x_coco.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mask_rcnn/mask_rcnn_r50_vd_fpn_ssld_2x_coco.yml) |
**Note:** Mask R-CNN accuracy depends on changes in the Paddle develop branch. To reproduce it, use a [daily build](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev) or version 2.0.1 (to be released in 2021.03); Paddle 2.0.0 gives a small accuracy loss.
## Citations
```
...
...@@ -5,7 +5,7 @@ We provide some models implemented by PaddlePaddle to detect objects in specific
| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :-------------------------------------------------------------------------------------: |:------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [model](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/pedestrian/pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
...@@ -17,7 +17,7 @@ The network for detecting pedestrians is YOLOv3, the backbone of which is Darknet53
### 2. Configuration for training
PaddleDetection provides users with the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modify the following parameters for pedestrian detection (a config sketch follows the list):
* num_classes: 1
* dataset_dir: dataset/pedestrian
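The snippet below is a minimal sketch of how a derived config could carry these overrides, assuming the `_BASE_` inheritance mechanism and `TrainDataset` layout used by PaddleDetection 2.0 configs; it is an illustration, not an excerpt from `configs/pedestrian/pedestrian_yolov3_darknet.yml`.

```yaml
# Sketch only: the documented overrides on top of the COCO YOLOv3 config.
# Field layout is assumed; consult the shipped pedestrian_yolov3_darknet.yml.
_BASE_: [
  '../yolov3/yolov3_darknet53_270e_coco.yml',
]

num_classes: 1                       # pedestrian is the only category

TrainDataset:
  !COCODataSet
    dataset_dir: dataset/pedestrian  # dataset root replacing the COCO path
```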
...@@ -45,6 +45,6 @@ python -u tools/infer.py -c configs/pedestrian/pedestrian_yolov3_darknet.yml \
Some inference results are visualized below:
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/PedestrianDetection_001.png)
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/PedestrianDetection_004.png)
...@@ -5,7 +5,7 @@
| Task | Algorithm | Box AP | Download | Config |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| Pedestrian Detection | YOLOv3 | 51.8 | [Download](https://paddledet.bj.bcebos.com/models/pedestrian_yolov3_darknet.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/pedestrian/pedestrian_yolov3_darknet.yml) |
## Pedestrian Detection
...@@ -18,7 +18,7 @@ YOLOv3 with a Darknet53 backbone.
### 2. Training parameter configuration
PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modified the following parameters when training the pedestrian detection model:
* num_classes: 1
* dataset_dir: dataset/pedestrian
...@@ -46,6 +46,6 @@ python -u tools/infer.py -c configs/pedestrian/pedestrian_yolov3_darknet.yml \
Example inference results:
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/PedestrianDetection_001.png)
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/PedestrianDetection_004.png)
...@@ -56,7 +56,7 @@ PP-YOLO improved the performance and speed of YOLOv3 with the following methods:
**Notes:**
- PP-YOLO is trained on the COCO train2017 dataset and evaluated on the val2017 & test-dev2017 datasets. Box AP<sup>test</sup> is the evaluation result of `mAP(IoU=0.5:0.95)`.
- PP-YOLO is trained on 8 GPUs with a mini-batch size of 24 per GPU. If the number of GPUs or the mini-batch size changes, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
- PP-YOLO inference speed is tested on a single Tesla V100 with batch size 1, CUDA 10.2, cuDNN 7.5.1, and TensorRT 5.1.2.2 in TensorRT mode.
- PP-YOLO FP32 inference speed testing uses the inference model exported by `tools/export_model.py` and benchmarks it by running `deploy/python/infer.py` with `--run_benchmark`. All results exclude the time cost of data reading and post-processing (NMS), the same test method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet).
- TensorRT FP16 inference speed testing additionally excludes the time cost of bounding-box decoding (`yolo_box`) compared with the FP32 testing above, i.e. data reading, bounding-box decoding, and post-processing (NMS) are all excluded (again the same test method as [YOLOv4(AlexyAB)](https://github.com/AlexeyAB/darknet)).
...@@ -71,7 +71,7 @@ PP-YOLO improved the performance and speed of YOLOv3 with the following methods:
**Notes:**
- PP-YOLO_MobileNetV3 is trained on the COCO train2017 dataset and evaluated on the val2017 dataset. Box AP<sup>val</sup> is the evaluation result of `mAP(IoU=0.5:0.95)`, Box AP<sup>50</sup> is the evaluation result of `mAP(IoU=0.5)`.
- PP-YOLO_MobileNetV3 is trained on 4 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size changes, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
- PP-YOLO_MobileNetV3 inference speed is tested on Kirin 990 with 1 thread.
### PP-YOLO tiny
...@@ -84,7 +84,7 @@ PP-YOLO improved the performance and speed of YOLOv3 with the following methods:
**Notes:**
- PP-YOLO-tiny is trained on the COCO train2017 dataset and evaluated on the val2017 dataset. Box AP<sup>val</sup> is the evaluation result of `mAP(IoU=0.5:0.95)`, Box AP<sup>50</sup> is the evaluation result of `mAP(IoU=0.5)`.
- PP-YOLO-tiny is trained on 8 GPUs with a mini-batch size of 32 per GPU. If the number of GPUs or the mini-batch size changes, the learning rate and number of iterations should be adjusted according to the [FAQ](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/docs/FAQ.md).
- PP-YOLO-tiny inference speed is tested on Kirin 990 with 4 threads on ARMv8.
- We also provide a post-quantization PP-YOLO-tiny inference model, which compresses the model to **1.3MB** with nearly no impact on inference speed or accuracy.
...@@ -187,7 +187,7 @@ Optimization methods and ablation experiments of PP-YOLO compared with YOLOv3.
- Performance and inference speed are measured with an input shape of 608.
- All models are trained on the COCO train2017 dataset and evaluated on the val2017 & test-dev2017 datasets. `Box AP` is the evaluation result of `mAP(IoU=0.5:0.95)`.
- Inference speed is tested on a single Tesla V100 with batch size 1, following the test method and environment configuration in the benchmark above.
- [YOLOv3-DarkNet53](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) with 39.0 mAP is the optimized YOLOv3 model in PaddleDetection; see the [Model Zoo](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/MODEL_ZOO.md) for details.
## Citation
...
...@@ -12,12 +12,11 @@
## Experiment environment
- Python 3.7+
- PaddlePaddle >= 2.0.1
- PaddleSlim >= 2.0.0
- CUDA 9.0+
- cuDNN >= 7.5
**Note:** Quantization-aware training depends on the Paddle develop branch; download and install a suitable PaddlePaddle build from the [PaddlePaddle daily versions](https://www.paddlepaddle.org.cn/documentation/docs/zh/install/Tables.html#whl-dev).
## Quick start
...
English | [简体中文](README_cn.md)
# PaddleDetection applied for specific scenarios
We provide some models implemented by PaddlePaddle to detect objects in specific scenarios; users can download the models and use them in these scenarios.
| Task | Algorithm | Box AP | Download | Configs |
|:---------------------|:---------:|:------:| :-------------------------------------------------------------------------------------: |:------:|
| Vehicle Detection | YOLOv3 | 54.5 | [model](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/vehicle/vehicle_yolov3_darknet.yml) |
## Vehicle Detection
...@@ -17,7 +17,7 @@ The network for detecting vehicles is YOLOv3, the backbone of which is Darknet53
### 2. Configuration for training
PaddleDetection provides users with the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modify the following parameters for vehicle detection (a config sketch follows the list):
* num_classes: 6
* anchors: [[8, 9], [10, 23], [19, 15], [23, 33], [40, 25], [54, 50], [101, 80], [139, 145], [253, 224]]
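As with the pedestrian model above, a minimal sketch of a derived config carrying these overrides might look as follows; placing `anchors` on `YOLOv3Head` mirrors the stock YOLOv3 configs but is an assumption here, so treat this as illustration rather than the shipped `configs/vehicle/vehicle_yolov3_darknet.yml`.

```yaml
# Sketch only: the documented overrides for vehicle detection (layout assumed).
_BASE_: [
  '../yolov3/yolov3_darknet53_270e_coco.yml',
]

num_classes: 6       # six vehicle categories

YOLOv3Head:
  anchors: [[8, 9], [10, 23], [19, 15], [23, 33], [40, 25],
            [54, 50], [101, 80], [139, 145], [253, 224]]
```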
...@@ -48,6 +48,6 @@ python -u tools/infer.py -c configs/vehicle/vehicle_yolov3_darknet.yml \
Some inference results are visualized below:
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/VehicleDetection_001.jpeg)
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/VehicleDetection_005.png)
[English](README.md) | 简体中文
# Featured vertical-domain detection models
We provide PaddlePaddle-based detection models for different scenarios; users can download the models and use them directly.
| Task | Algorithm | Box AP | Download | Config |
|:---------------------|:---------:|:------:| :---------------------------------------------------------------------------------: | :------:|
| Vehicle Detection | YOLOv3 | 54.5 | [Download](https://paddledet.bj.bcebos.com/models/vehicle_yolov3_darknet.pdparams) | [Config](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/vehicle/vehicle_yolov3_darknet.yml) |
## Vehicle Detection
...@@ -18,7 +18,7 @@ YOLOv3 with a Darknet53 backbone.
### 2. Training parameter configuration
PaddleDetection provides the configuration file [yolov3_darknet53_270e_coco.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/yolov3/yolov3_darknet53_270e_coco.yml) for training YOLOv3 on the COCO dataset. Compared with that file, we modified the following parameters when training the vehicle detection model:
* num_classes: 6
* anchors: [[8, 9], [10, 23], [19, 15], [23, 33], [40, 25], [54, 50], [101, 80], [139, 145], [253, 224]]
...@@ -49,6 +49,6 @@ python -u tools/infer.py -c configs/vehicle/vehicle_yolov3_darknet.yml \
Example inference results:
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/VehicleDetection_001.jpeg)
![](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/static/docs/images/VehicleDetection_005.png)
...@@ -13,7 +13,7 @@ python tools/infer.py -c --infer_img=demo/000000014439.jpg -o use_gpu=True weig
Please follow the installation tutorial in [PaddleServing](https://github.com/PaddlePaddle/Serving/tree/v0.5.0) to install it.
## 3. Export the model
During training, PaddleDetection keeps both the forward parameters of the network and the optimizer-related parameters; for deployment, only the forward parameters are needed. See [Export model](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/advanced_tutorials/deploy/EXPORT_MODEL.md) for details.
```
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=weights/yolov3_darknet53_270e_coco.pdparams --export_serving_model=True
...
...@@ -30,36 +30,36 @@ Paddle provides backbone models pretrained on ImageNet. All pretrained models
### Faster R-CNN
Please refer to [Faster R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/faster_rcnn/)
### Mask R-CNN
Please refer to [Mask R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/mask_rcnn/)
### Cascade R-CNN
Please refer to [Cascade R-CNN](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/cascade_rcnn/)
### YOLOv3
Please refer to [YOLOv3](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/yolov3/)
### SSD
Please refer to [SSD](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ssd/)
### FCOS
Please refer to [FCOS](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/fcos/)
### SOLOv2
Please refer to [SOLOv2](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/solov2/)
### PP-YOLO
Please refer to [PP-YOLO](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyolo/)
### TTFNet
Please refer to [TTFNet](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ttfnet/)
...@@ -77,6 +77,11 @@ class DetDataset(Dataset):
copy.deepcopy(self.roidbs[np.random.randint(n)])
for _ in range(3)
]
if isinstance(roidb, Sequence):
for r in roidb:
r['curr_iter'] = self._curr_iter
else:
roidb['curr_iter'] = self._curr_iter
self._curr_iter += 1
...
...@@ -72,8 +72,7 @@ class HMHead(nn.Layer):
in_channels=ch_in if i == 0 else ch_out,
out_channels=ch_out,
kernel_size=3,
weight_attr=ParamAttr(initializer=Normal(0, 0.01))))
else:
head_conv.add_sublayer(
name,
...@@ -151,8 +150,7 @@ class WHHead(nn.Layer):
in_channels=ch_in if i == 0 else ch_out,
out_channels=ch_out,
kernel_size=3,
weight_attr=ParamAttr(initializer=Normal(0, 0.01))))
else:
head_conv.add_sublayer(
name,
...