Unverified commit a8282705, authored by: W wangguanzhong, committed by: GitHub

move path to PaddleDetection repo in docs (#35)

* move path to PaddleDetection repo in docs

* refine README
Parent 492b4e91
English | [简体中文](README.md)
# PaddleDetection
The goal of PaddleDetection is to provide easy access to a wide range of object
detection models in both industry and research settings. We design
PaddleDetection to be not only performant, production-ready but also highly
flexible, catering to research needs.
**All models in PaddleDetection now require PaddlePaddle version 1.6 or higher, or a suitable develop version.**
<div align="center">
<img src="demo/output/000000570688.jpg" />
</div>
## Introduction
Features:
- Production Ready:
Key operations are implemented in C++ and CUDA; together with PaddlePaddle's
highly efficient inference engine, this enables easy deployment in server environments.
- Highly Flexible:
Components are designed to be modular. Model architectures, as well as data
preprocess pipelines, can be easily customized with simple configuration
changes.
- Performance Optimized:
With the help of the underlying PaddlePaddle framework, faster training and
a reduced GPU memory footprint are achieved. Notably, YOLOv3 training is
much faster than in other frameworks. Another example is Mask R-CNN
(ResNet50): we managed to fit up to 4 images per GPU (Tesla V100 16GB) during
multi-GPU training.
Supported Architectures:
| | ResNet | ResNet-vd <sup>[1](#vd)</sup> | ResNeXt-vd | SENet | MobileNet | DarkNet | VGG |
| ------------------- | :----: | :---------------------------: | :--------: | :---: | :-------: | :-----: | :--: |
| Faster R-CNN | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Faster R-CNN + FPN | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Mask R-CNN | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Mask R-CNN + FPN | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Cascade Faster-RCNN | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
| Cascade Mask-RCNN | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| RetinaNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| YOLOv3 | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| SSD | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
<a name="vd">[1]</a> [ResNet-vd](https://arxiv.org/pdf/1812.01187) models offer much improved accuracy with negligible performance cost.
Advanced Features:
- [x] **Synchronized Batch Norm**: currently used by YOLOv3.
- [x] **Group Norm**
- [x] **Modulated Deformable Convolution**
- [x] **Deformable PSRoI Pooling**
**NOTE:** Synchronized batch normalization can only be used with multiple GPUs; it cannot be used on CPU or with a single GPU.
## Get Started
- [Installation guide](docs/INSTALL.md)
- [Quick start on small dataset](docs/QUICK_STARTED.md)
- For the detailed training and evaluation workflow, please refer to [GETTING_STARTED](docs/GETTING_STARTED.md); a short command sketch follows this list
- [Guide to preprocess pipeline and custom dataset](docs/DATA.md)
- [Introduction to the configuration workflow](docs/CONFIG.md)
- [Examples for detailed configuration explanation](docs/config_example/)
- [IPython Notebook demo](demo/mask_rcnn_demo.ipynb)
- [Transfer learning document](docs/TRANSFER_LEARNING.md)
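As a minimal illustration of the workflow covered by the guides above, the commands below sketch a single-GPU train/evaluate/infer loop. The script names and flags are assumptions based on common PaddleDetection usage (`tools/train.py`, `tools/eval.py`, and `tools/infer.py`, each taking a `-c` config path); [GETTING_STARTED](docs/GETTING_STARTED.md) remains the authoritative reference.

```bash
# Assumed workflow sketch; exact paths and flags may differ from the official docs.
export CUDA_VISIBLE_DEVICES=0                      # select the GPU to train on

# train a detector from one of the bundled configuration files
python tools/train.py -c configs/yolov3_darknet.yml

# evaluate the saved checkpoints on the validation set
python tools/eval.py -c configs/yolov3_darknet.yml

# run inference on a single image with the trained weights
python tools/infer.py -c configs/yolov3_darknet.yml --infer_img=demo/000000570688.jpg
```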
## Model Zoo
- Pretrained models are available in the [PaddleDetection model zoo](docs/MODEL_ZOO.md).
- [Face detection models](configs/face_detection/README.md)
- [Pretrained models for pedestrian and vehicle detection](contrib/README.md): models for object detection in specific scenarios.
- [Enhanced YOLOv3 model](docs/YOLOv3_ENHANCEMENT.md): compared with the 33.0% mAP reported in the original paper, the enhanced YOLOv3 reaches 41.4% mAP, and inference speed is improved as well.
- [Objects365 2019 Challenge champion model](docs/CACascadeRCNN.md): one of the best single models in the Objects365 Full Track, reaching 31.7% mAP.
## Model compression
- [Quantization-aware training example](slim/quantization)
- [Model pruning example](slim/prune)
## Deployment
- [Export model for inference](docs/EXPORT_MODEL.md)
- [C++ inference](inference/README.md)
## Benchmark
- [Inference benchmark](docs/BENCHMARK_INFER_cn.md)
## Updates
#### 10/2019
- Add enhanced YOLOv3 models, box mAP up to 41.4%.
- Face detection models included: BlazeFace, Faceboxes.
- Enrich COCO models, box mAP up to 51.9%.
- Add CACascade RCNN, one of the best single models in the champion solution of the Objects365 2019 Challenge Full Track.
- Add pretrained models for pedestrian and vehicle detection.
- Support mixed-precision training.
- Add C++ inference deployment.
- Add model compression examples.
#### 2/9/2019
- Add retrained models for GroupNorm.
- Add Cascade-Mask-RCNN+FPN.
#### 5/8/2019
- Add a series of models related to Modulated Deformable Convolution.
#### 29/7/2019
- Update Chinese docs for PaddleDetection
- Fix a bug in R-CNN models when training and evaluating at the same time
- Add ResNeXt101-vd + Mask R-CNN + FPN models
- Add YOLOv3 models trained on the VOC dataset
#### 3/7/2019
- Initial release of PaddleDetection and detection model zoo
- Models included: Faster R-CNN, Mask R-CNN, Faster R-CNN+FPN, Mask
R-CNN+FPN, Cascade-Faster-RCNN+FPN, RetinaNet, YOLOv3, and SSD.
## Contributing
Contributions are highly welcomed and we would really appreciate your feedback!
@@ -17,7 +17,7 @@ The network for detecting vehicles is YOLOv3, the backbone of which is DarkNet53
 ### 2. Configuration for training
-PaddleDetection provides users with a configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/configs/yolov3_darknet.yml) to train YOLOv3 on the COCO dataset; compared with this file, we modify some parameters as follows to conduct the training for vehicle detection:
+PaddleDetection provides users with a configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/master/configs/yolov3_darknet.yml) to train YOLOv3 on the COCO dataset; compared with this file, we modify some parameters as follows to conduct the training for vehicle detection:
 * max_iters: 120000
 * num_classes: 6
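For readers following along, here is a hedged sketch of how the overrides above might be applied: copy the COCO config, adjust the two fields, and launch training. The copied filename is hypothetical, and the only assumed tool is `tools/train.py` with its `-c` config option.

```bash
# Hedged sketch: adapt the COCO YOLOv3 config for vehicle detection.
cp configs/yolov3_darknet.yml configs/yolov3_darknet_vehicle.yml   # hypothetical name
# edit configs/yolov3_darknet_vehicle.yml: set max_iters: 120000 and num_classes: 6
python tools/train.py -c configs/yolov3_darknet_vehicle.yml
```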
@@ -67,7 +67,7 @@ The network for detecting vehicles is YOLOv3, the backbone of which is DarkNet53
 ### 2. Configuration for training
-PaddleDetection provides users with a configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/configs/yolov3_darknet.yml) to train YOLOv3 on the COCO dataset; compared with this file, we modify some parameters as follows to conduct the training for pedestrian detection:
+PaddleDetection provides users with a configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/master/configs/yolov3_darknet.yml) to train YOLOv3 on the COCO dataset; compared with this file, we modify some parameters as follows to conduct the training for pedestrian detection:
 * max_iters: 200000
 * num_classes: 1
...
@@ -18,7 +18,7 @@ YOLOv3 with a DarkNet53 backbone.
 ### 2. Training configuration
-PaddleDetection provides the configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/configs/yolov3_darknet.yml) for training YOLOv3 on the COCO dataset; compared with that file, we modified the following parameters when training the vehicle detection model:
+PaddleDetection provides the configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/master/configs/yolov3_darknet.yml) for training YOLOv3 on the COCO dataset; compared with that file, we modified the following parameters when training the vehicle detection model:
 * max_iters: 120000
 * num_classes: 6

@@ -69,7 +69,7 @@ YOLOv3 with a DarkNet53 backbone.
 ### 2. Training configuration
-PaddleDetection provides the configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/configs/yolov3_darknet.yml) for training YOLOv3 on the COCO dataset; compared with that file, we modified the following parameters when training the pedestrian detection model:
+PaddleDetection provides the configuration file [yolov3_darknet.yml](https://github.com/PaddlePaddle/PaddleDetection/blob/master/configs/yolov3_darknet.yml) for training YOLOv3 on the COCO dataset; compared with that file, we modified the following parameters when training the pedestrian detection model:
 * max_iters: 200000
 * num_classes: 1
...
@@ -28,7 +28,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
- "/home/yang/models/PaddleCV/PaddleDetection\n"
+ "/home/yang/PaddleDetection\n"
 ]
 }
 ],
@@ -71,13 +71,11 @@ COCO-API is needed for running. Installation is as follows:
 **Clone Paddle models repository:**
-You can clone Paddle models and change working directory to PaddleDetection
-with the following commands:
+You can clone PaddleDetection with the following commands:
 ```
-cd <path/to/clone/models>
-git clone https://github.com/PaddlePaddle/models
-cd models/PaddleCV/PaddleDetection
+cd <path/to/clone/PaddleDetection>
+git clone https://github.com/PaddlePaddle/PaddleDetection.git
 ```
 **Install Python dependencies:**
...
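The dependency step just referenced (truncated in the hunk above) usually amounts to installing the repository's Python requirements. A minimal sketch, assuming a requirements.txt at the repository root as in current PaddleDetection, with docs/INSTALL.md as the authoritative guide:

```bash
# Assumed dependency installation; see docs/INSTALL.md for the exact steps.
cd PaddleDetection
pip install -r requirements.txt
```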
@@ -67,12 +67,11 @@ python -c "import paddle; print(paddle.__version__)"
 **Clone the Paddle models repository:**
-You can clone the Paddle models repository and switch the working directory to PaddleDetection with the following commands:
+You can clone PaddleDetection with the following commands:
 ```
-cd <path/to/clone/models>
-git clone https://github.com/PaddlePaddle/models
-cd models/PaddleCV/PaddleDetection
+cd <path/to/clone/PaddleDetection>
+git clone https://github.com/PaddlePaddle/PaddleDetection.git
 ```
 **Install Python dependencies:**
...
@@ -90,7 +90,7 @@ The backbone models pretrained on ImageNet are available. All backbone models ar…
 #### Notes:
 - Deformable ConvNets v2 (dcn_v2) reference from [Deformable ConvNets v2](https://arxiv.org/abs/1811.11168).
 - `c3-c5` means adding `dcn` in ResNet stages 3 to 5.
-- Detailed configuration files are in [configs/dcn](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection/configs/dcn)
+- Detailed configuration files are in [configs/dcn](https://github.com/PaddlePaddle/PaddleDetection/tree/master/configs/dcn)
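For orientation, training one of these dcn variants only means pointing the trainer at a config from the linked directory; the config filename below is hypothetical and shown purely for illustration.

```bash
# Hypothetical config name; check configs/dcn in the repository for the real files.
# The dcn variants differ from the base configs mainly in enabling deformable
# convolutions (dcn_v2) in backbone stages c3-c5.
python tools/train.py -c configs/dcn/faster_rcnn_dcn_r50_fpn_1x.yml
```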
### Group Normalization
 | Backbone | Type | Image/gpu | Lr schd | Box AP | Mask AP | Download |

@@ -100,7 +100,7 @@ The backbone models pretrained on ImageNet are available. All backbone models ar…
 #### Notes:
 - Group Normalization reference from [Group Normalization](https://arxiv.org/abs/1803.08494).
-- Detailed configuration files are in [configs/gn](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection/configs/gn)
+- Detailed configuration files are in [configs/gn](https://github.com/PaddlePaddle/PaddleDetection/tree/master/configs/gn)

### YOLO v3
...
@@ -86,7 +86,7 @@ Paddle provides backbone models pretrained on ImageNet. All pretrained models…
 #### Notes:
 - Deformable ConvNets v2 (dcn_v2) is based on the paper [Deformable ConvNets v2](https://arxiv.org/abs/1811.11168).
 - `c3-c5` means adding `dcn` in ResNet stages 3 to 5.
-- Detailed configuration files are in [configs/dcn](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection/configs/dcn)
+- Detailed configuration files are in [configs/dcn](https://github.com/PaddlePaddle/PaddleDetection/tree/master/configs/dcn)

### Group Normalization
 | Backbone | Type | Images/GPU | LR schedule | Box AP | Mask AP | Download |

@@ -96,7 +96,7 @@ Paddle provides backbone models pretrained on ImageNet. All pretrained models…
 #### Notes:
 - Group Normalization is based on the paper [Group Normalization](https://arxiv.org/abs/1803.08494).
-- Detailed configuration files are in [configs/gn](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection/configs/gn)
+- Detailed configuration files are in [configs/gn](https://github.com/PaddlePaddle/PaddleDetection/tree/master/configs/gn)

### YOLO v3
...
@@ -7,7 +7,7 @@
 This example uses the [distillation strategy](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/tutorial.md#3-蒸馏) provided by PaddleSlim to distill models from the detection library.
 Before reading this example, it is recommended that you first read the following:
-- [Standard training workflow of the detection library](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection)
+- [Standard training workflow of the detection library](https://github.com/PaddlePaddle/PaddleDetection)
 - [PaddleSlim usage documentation](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/usage.md)

@@ -61,7 +61,7 @@ strategies:
 ## Training
-Write the compression script compress.py based on [PaddleDetection/tools/train.py](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/tools/train.py).
+Write the compression script compress.py based on [PaddleDetection/tools/train.py](https://github.com/PaddlePaddle/PaddleDetection/tree/master/tools/train.py).
 The script defines a Compressor object that executes the compression task.
...
@@ -8,7 +8,7 @@
 Before reading this example, it is recommended that you first read the following:
 - <a href="../../README_cn.md">Standard training workflow of the detection library</a>
-- [Data preparation for detection models](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/docs/INSTALL_cn.md#%E6%95%B0%E6%8D%AE%E9%9B%86)
+- [Data preparation for detection models](https://github.com/PaddlePaddle/PaddleDetection/blob/master/docs/INSTALL_cn.md#%E6%95%B0%E6%8D%AE%E9%9B%86)
 - [PaddleSlim usage documentation](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/usage.md)
...
@@ -7,7 +7,7 @@
 This example uses the [quantization-aware training strategy](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/tutorial.md#1-quantization-aware-training%E9%87%8F%E5%8C%96%E4%BB%8B%E7%BB%8D) provided by PaddleSlim to compress detection models.
 Before reading this example, it is recommended that you first read the following:
-- [Standard training workflow of detection models](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/PaddleDetection)
+- [Standard training workflow of detection models](https://github.com/PaddlePaddle/PaddleDetection)
 - [PaddleSlim usage documentation](https://github.com/PaddlePaddle/models/blob/develop/PaddleSlim/docs/usage.md)

@@ -29,7 +29,7 @@
 From the run output you can see that the Variable is named `multiclass_nms_0.tmp_0`.
 ## Training
-Write the compression script compress.py based on [PaddleCV/PaddleDetection/tools/train.py](https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/PaddleDetection/tools/train.py).
+Write the compression script compress.py based on [tools/train.py](https://github.com/PaddlePaddle/PaddleDetection/tree/master/tools/train.py).
 The script defines a Compressor object that executes the compression task.
 Run `python compress.py --help` to see the configurable parameters, briefly described as follows:
...
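To make the compress.py step concrete, here is a hedged sketch of how the quantization example is typically launched. Only `python compress.py --help` comes from the text above; the working directory, flag names, and config filenames are assumptions, so check slim/quantization in the repository for the real options.

```bash
# From the text above: list the configurable parameters of the compression script.
cd slim/quantization
python compress.py --help

# Hypothetical invocation: pair a detector config with a PaddleSlim compression
# config; the flag names and file names here are illustrative only.
python compress.py -c ../../configs/yolov3_mobilenet_v1.yml -s compress.yaml
```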